Please complete this form for your free AI risk assessment.

How Agents Exfiltrate Data & How to Defend Them

Published on Sep 15, 2025

Heading 1

Ashish Rajan spoke with Ankur Shah & Vinay Pidathala (VP of AI Security Research) about a fully autonomous attack where an AI agent was manipulated to exfiltrate sensitive enterprise data all without a single user confirmation .

Ashish Rajan hosts Ankur Shah (CEO, Straiker AI) and Vinay Pidathala (VP of AI Security Research) for a deep dive into one of the first real-world demonstrations of a fully autonomous AI attack. The conversation explores how Indirect Prompt Injection, where agents and tools are manipulated without user confirmation, enabled the exfiltration of sensitive enterprise data.

Key insights include why the risk surface of AI agents grows with their utility (“the more useful it is, the more prone to risk it is”), and why traditional “shift-left” security models fall short when applied to AI systems. Instead, the bulk of vulnerabilities emerge during inference time, making runtime guardrails essential.

The episode introduces Straiker’s six-layer framework for securing agents and emphasizes why the future of defense will be built on “securing agents with agents.” For security leaders, AppSec teams, and AI engineers, this session provides a forward-looking blueprint for defending the next wave of agentic AI applications.

Secure your agentic AI and AI-native application journey with Straiker