What I’m Paying Attention to at RSAC 2026 (If You Care About AI Security)
A practitioner's take on the most important agentic AI security sessions at RSAC 2026, from OWASP AIVSS to AIKatz and the rise of AI-powered threats.

RSAC is noisy this year. Everyone is talking about AI. But most of that conversation is still stuck in last year's problems: prompt injection, data leakage, "what if my model says something weird."
That’s not really the story anymore. The real shift in AI security is toward agentic AI systems because once AI moves beyond generating content and starts interacting with real systems, the security model changes with it. The risks now look like workflow compromise, tool misuse, privilege abuse, and multi-step attack chains. That is where agentic AI security matters most, and where defenders need to focus.
If you are heading to RSAC and want signal, not marketing, here are the sessions worth your time and why they matter for where AI security is going next.
1. Defining, Measuring, and Managing Agentic AI Risk (OWASP AIVSS)
Monday, March 23 | 9:40 AM | Moscone West 2001
This is one of the most important conversations happening right now. We have spent years building scoring systems for traditional vulnerabilities: CVSS, EPSS, and their relatives. But agentic AI breaks a lot of those assumptions. The vulnerability is not always a bug, it is behavior. The blast radius is not a system, it is a workflow. The exploit chain is not linear, it is compositional across agents, tools, and data.
That is where AIVSS, the AI Vulnerability Scoring System, comes in. What I am looking for here is how they model agent autonomy and actionability, whether they account for multi-step attack chains, and whether scoring reflects real-world impact rather than just theoretical risk. Because in agentic systems, the question is not only whether something can be exploited. It is what the agent can do once it is.
This is foundational for anyone trying to build agentic security programs, governance frameworks, or risk management practices that actually hold up.
2. AIKatz: Attacking AI Desktop Apps for Fun & Profit
Monday, March 23 | 9:40 AM | Moscone West 2006
This one is just fun, and also deeply relevant. We are seeing a new class of targets emerge: AI desktop apps, local copilots, coding agents like Cursor and Claude Code, and anything with local context plus system access plus AI reasoning baked in. That combination is incredibly powerful and incredibly fragile.
Attacks here start to look like prompt injection leading to local file access, model manipulation enabling command execution, and context poisoning enabling persistent compromise. In other words, AI apps are becoming the new endpoint attack surface.
What I care about in this session is how attackers are chaining model behavior into OS-level actions, because that is where things move from interesting to dangerous very quickly. I also want to see where trust boundaries are actually breaking in practice, which is usually more places than people expect, and how those failures translate into real attack paths. Most importantly, I am interested in what post-exploitation looks like in an AI-native application, since that is still poorly understood and likely very different from what we are used to in traditional systems.
This is exactly why AI runtime security and visibility into agent behavior matters, not just scanning prompts or inputs.
3. My Talk: AiPTs: When Agents Become the New APT
Wednesday, March 25 | 1:00 PM | Briefing Center #1 (with Dan Regalado)
I will be presenting this one with Dan Regalado on Wednesday, and it is the session I am most excited about because it is the conversation I think the industry needs to be having.
We are starting to see the emergence of what I am calling AiPTs: AI-powered Persistent Threats. Not just attackers using AI to move faster, but attackers abusing AI agents themselves as the attack infrastructure. Think about agents that can browse, retrieve, execute, and act. Agents that persist across sessions and workflows. Agents that can be subtly manipulated rather than overtly exploited. Now layer on long-lived context, tool integrations across MCP servers, APIs, and SaaS applications, and autonomous decision-making, and the picture changes significantly.
You do not need traditional malware anymore in these environments. What emerges instead is persistent manipulation, workflow hijacking, and outcome manipulation, where attackers focus less on stealing data and more on influencing what the system does and the decisions it makes on behalf of real users.
That is what autonomous chaos looks like in practice, and it is exactly what Dan and I will be breaking down at the Briefing Center on Wednesday.
4. More Sessions Worth Your Time, By Day
If you are trying to understand where agentic AI security is actually heading, these are a few additional sessions worth your time across the week. I picked them because they each add a different angle to the same core problem: the security model built for static systems does not hold up when your infrastructure can think and act.
Tuesday, March 24
Crashing Comets: How AI Agents Break the Browser Threat Model
Browsers have been hardened over decades of threat modeling. Agentic AI browsers break all of that, suddenly making well-established mitigations ineffective. This talk shows how prompt injection becomes a path to stealing local files, making fraudulent purchases, or full account takeover once a capable agent is sitting in between. This is one of the most concrete examples of how agentic AI collapses existing security assumptions rather than just adding new ones on top.
When AI Agents Become Backdoors: The New Era of Client-Side Threats
This one is uncomfortable in the best way. The session covers critical vulnerabilities found across Cursor, Claude Code, Codex CLI, and Gemini CLI that transform trusted AI developer tools into persistent backdoors. If your security model assumes coding agents are safe by default because they were built by reputable vendors, this session will update that assumption with live demos and responsible disclosure timelines.
Wednesday, March 25
When Agents Fail: What 194,000 Attacks Reveal About LLM Security
Most LLM security conversations are still running on intuition and red team anecdotes. This session changes that. 194,000 attacks worth of real production telemetry on how LLM-powered systems actually fail — where agents break consistently, which techniques show up repeatedly, and what the gap looks like between your assumed attack surface and your real one. That gap is usually where defenders get caught. If you want to stop guessing and start working from evidence, be in this room.
Thursday, March 26
MCPwned: MCP RCE Vulnerability Leads to Azure Takeover
If MCP is on your radar at all, you should be in this room. The talk demonstrates a remote code execution vulnerability in the official Azure MCP server and walks through how an attacker can harvest credentials and compromise an entire Azure tenant from there. MCP is rapidly becoming standard infrastructure for LLM data access, and this session is a clear signal that we are not securing it fast enough. It directly connects to what I described earlier about tool integrations becoming the new attack surface in agentic environments.
The Gap Most Teams Still Have
Most organizations still do not have a clear understanding of their AI environment. They lack a complete inventory of agents, visibility into how those agents connect to tools and data, and the ability to observe what those agents are doing at runtime.
Because of that, detecting multi-step agentic attack chains is nearly impossible today. Attackers are not thinking in terms of architecture diagrams. They are focused on what your agents can actually do and how those capabilities can be turned against you.
What This All Points To
If you zoom out across the sessions listed above, there is a clear pattern. We are shifting from static models to dynamic agents, from input validation to behavior monitoring, from point-in-time testing to continuous adversarial pressure. And most importantly, security is moving from "what was generated" to "what was done." That is the core of agentic security.
What I Would Focus on After RSAC
If you are walking away from the week with action items, make it these:
• Get visibility into your agent ecosystem: agents, tools, MCP connections, and what each can actually reach.
• Start thinking in terms of attack chains, not single vulnerabilities.
• Invest in runtime guardrails, not just pre-deployment testing. Guardrails without runtime enforcement are suggestions.
• Assume agents will be manipulated, not just exploited in the traditional sense.
Final Thought
We are not securing chatbots anymore. We are securing autonomous systems that can take real-world actions across tools, data, and workflows that matter, where trust boundaries are loose and identity management remains undefined. That is a very different problem, and honestly, a much more interesting one.
If you are around RSAC, come check out the Straiker booth! I am always up for talking through real attack scenarios, weird edge cases, or what we are seeing break in production. That is usually where the truth is.
RSAC is noisy this year. Everyone is talking about AI. But most of that conversation is still stuck in last year's problems: prompt injection, data leakage, "what if my model says something weird."
That’s not really the story anymore. The real shift in AI security is toward agentic AI systems because once AI moves beyond generating content and starts interacting with real systems, the security model changes with it. The risks now look like workflow compromise, tool misuse, privilege abuse, and multi-step attack chains. That is where agentic AI security matters most, and where defenders need to focus.
If you are heading to RSAC and want signal, not marketing, here are the sessions worth your time and why they matter for where AI security is going next.
1. Defining, Measuring, and Managing Agentic AI Risk (OWASP AIVSS)
Monday, March 23 | 9:40 AM | Moscone West 2001
This is one of the most important conversations happening right now. We have spent years building scoring systems for traditional vulnerabilities: CVSS, EPSS, and their relatives. But agentic AI breaks a lot of those assumptions. The vulnerability is not always a bug, it is behavior. The blast radius is not a system, it is a workflow. The exploit chain is not linear, it is compositional across agents, tools, and data.
That is where AIVSS, the AI Vulnerability Scoring System, comes in. What I am looking for here is how they model agent autonomy and actionability, whether they account for multi-step attack chains, and whether scoring reflects real-world impact rather than just theoretical risk. Because in agentic systems, the question is not only whether something can be exploited. It is what the agent can do once it is.
This is foundational for anyone trying to build agentic security programs, governance frameworks, or risk management practices that actually hold up.
2. AIKatz: Attacking AI Desktop Apps for Fun & Profit
Monday, March 23 | 9:40 AM | Moscone West 2006
This one is just fun, and also deeply relevant. We are seeing a new class of targets emerge: AI desktop apps, local copilots, coding agents like Cursor and Claude Code, and anything with local context plus system access plus AI reasoning baked in. That combination is incredibly powerful and incredibly fragile.
Attacks here start to look like prompt injection leading to local file access, model manipulation enabling command execution, and context poisoning enabling persistent compromise. In other words, AI apps are becoming the new endpoint attack surface.
What I care about in this session is how attackers are chaining model behavior into OS-level actions, because that is where things move from interesting to dangerous very quickly. I also want to see where trust boundaries are actually breaking in practice, which is usually more places than people expect, and how those failures translate into real attack paths. Most importantly, I am interested in what post-exploitation looks like in an AI-native application, since that is still poorly understood and likely very different from what we are used to in traditional systems.
This is exactly why AI runtime security and visibility into agent behavior matters, not just scanning prompts or inputs.
3. My Talk: AiPTs: When Agents Become the New APT
Wednesday, March 25 | 1:00 PM | Briefing Center #1 (with Dan Regalado)
I will be presenting this one with Dan Regalado on Wednesday, and it is the session I am most excited about because it is the conversation I think the industry needs to be having.
We are starting to see the emergence of what I am calling AiPTs: AI-powered Persistent Threats. Not just attackers using AI to move faster, but attackers abusing AI agents themselves as the attack infrastructure. Think about agents that can browse, retrieve, execute, and act. Agents that persist across sessions and workflows. Agents that can be subtly manipulated rather than overtly exploited. Now layer on long-lived context, tool integrations across MCP servers, APIs, and SaaS applications, and autonomous decision-making, and the picture changes significantly.
You do not need traditional malware anymore in these environments. What emerges instead is persistent manipulation, workflow hijacking, and outcome manipulation, where attackers focus less on stealing data and more on influencing what the system does and the decisions it makes on behalf of real users.
That is what autonomous chaos looks like in practice, and it is exactly what Dan and I will be breaking down at the Briefing Center on Wednesday.
4. More Sessions Worth Your Time, By Day
If you are trying to understand where agentic AI security is actually heading, these are a few additional sessions worth your time across the week. I picked them because they each add a different angle to the same core problem: the security model built for static systems does not hold up when your infrastructure can think and act.
Tuesday, March 24
Crashing Comets: How AI Agents Break the Browser Threat Model
Browsers have been hardened over decades of threat modeling. Agentic AI browsers break all of that, suddenly making well-established mitigations ineffective. This talk shows how prompt injection becomes a path to stealing local files, making fraudulent purchases, or full account takeover once a capable agent is sitting in between. This is one of the most concrete examples of how agentic AI collapses existing security assumptions rather than just adding new ones on top.
When AI Agents Become Backdoors: The New Era of Client-Side Threats
This one is uncomfortable in the best way. The session covers critical vulnerabilities found across Cursor, Claude Code, Codex CLI, and Gemini CLI that transform trusted AI developer tools into persistent backdoors. If your security model assumes coding agents are safe by default because they were built by reputable vendors, this session will update that assumption with live demos and responsible disclosure timelines.
Wednesday, March 25
When Agents Fail: What 194,000 Attacks Reveal About LLM Security
Most LLM security conversations are still running on intuition and red team anecdotes. This session changes that. 194,000 attacks worth of real production telemetry on how LLM-powered systems actually fail — where agents break consistently, which techniques show up repeatedly, and what the gap looks like between your assumed attack surface and your real one. That gap is usually where defenders get caught. If you want to stop guessing and start working from evidence, be in this room.
Thursday, March 26
MCPwned: MCP RCE Vulnerability Leads to Azure Takeover
If MCP is on your radar at all, you should be in this room. The talk demonstrates a remote code execution vulnerability in the official Azure MCP server and walks through how an attacker can harvest credentials and compromise an entire Azure tenant from there. MCP is rapidly becoming standard infrastructure for LLM data access, and this session is a clear signal that we are not securing it fast enough. It directly connects to what I described earlier about tool integrations becoming the new attack surface in agentic environments.
The Gap Most Teams Still Have
Most organizations still do not have a clear understanding of their AI environment. They lack a complete inventory of agents, visibility into how those agents connect to tools and data, and the ability to observe what those agents are doing at runtime.
Because of that, detecting multi-step agentic attack chains is nearly impossible today. Attackers are not thinking in terms of architecture diagrams. They are focused on what your agents can actually do and how those capabilities can be turned against you.
What This All Points To
If you zoom out across the sessions listed above, there is a clear pattern. We are shifting from static models to dynamic agents, from input validation to behavior monitoring, from point-in-time testing to continuous adversarial pressure. And most importantly, security is moving from "what was generated" to "what was done." That is the core of agentic security.
What I Would Focus on After RSAC
If you are walking away from the week with action items, make it these:
• Get visibility into your agent ecosystem: agents, tools, MCP connections, and what each can actually reach.
• Start thinking in terms of attack chains, not single vulnerabilities.
• Invest in runtime guardrails, not just pre-deployment testing. Guardrails without runtime enforcement are suggestions.
• Assume agents will be manipulated, not just exploited in the traditional sense.
Final Thought
We are not securing chatbots anymore. We are securing autonomous systems that can take real-world actions across tools, data, and workflows that matter, where trust boundaries are loose and identity management remains undefined. That is a very different problem, and honestly, a much more interesting one.
If you are around RSAC, come check out the Straiker booth! I am always up for talking through real attack scenarios, weird edge cases, or what we are seeing break in production. That is usually where the truth is.
Related Resources
Click to Open File
similar resources
Secure your agentic AI and AI-native application journey with Straiker
.avif)







