Straiker’s STAR team leads research in AI security where they are uncovering vulnerabilities in LLMs and agentic AI, sharing discoveries, contributing to open-source, and building the guardrails that protect tomorrow’s autonomous systems and applications.
Straiker AI & Security Research
Agentic AI Security Research & Insights
Explore Straiker STAR’s latest findings on LLM vulnerabilities, agentic AI threats, and runtime guardrails. Access blogs, frameworks, and downloadable reports that define the future of AI-native security.
Cyberspike Villager – Cobalt Strike’s AI-native Successor
Straiker uncovers Villager, a China-based, AI-powered pentesting framework in the style of Cobalt Strike that automates hacking workflows and lowers the barrier to entry for attackers worldwide.


The Silent Exfiltration: Zero-Click Agentic AI Hack That Can Leak Your Google Drive with One Email
Straiker reveals how zero-click exploits can hijack AI agents to exfiltrate Google Drive data with no user interaction. See how attack chains form, why agent autonomy is dangerous, and how runtime guardrails catch what others miss.


Agentic Danger: DNS Rebinding Exposes Internal MCP Servers
The Straiker AI Research (STAR) team discovered a new attack we're calling the MCP rebinding attack, which combines DNS rebinding with MCP over the Server-Sent Events (SSE) protocol.


Rethinking Security in the AI Age
An AI Security Researcher’s Perspective


Securing Agentic AI in a Multi-Agent World
This post introduces the unique security challenges posed by agentic architectures and explains why traditional security measures aren't equipped to handle them.


Meet Straiker’s STAR Team
STAR is more than a research lab—it’s a team of innovators dedicated to securing agentic AI. Meet the experts publishing breakthroughs, contributing to open-source, and collaborating with the wider AI security community.

Our Approach to Agentic AI Security Research
Straiker’s STAR team advances AI security with a focus on impact and openness. We investigate vulnerabilities in LLMs and autonomous agents, publish frameworks that guide the industry, and contribute to open-source guardrails. Our mission: make agentic AI safer for enterprises and the world.
AI-native Thinking
Our researchers embrace both the promise and the pitfalls of AI. Securing non-deterministic systems demands a fresh approach—one that fuses proven security principles with new methods built from the ground up for agentic AI.
Proactive Defense
We don’t just study AI vulnerabilities—we weaponize our findings to build defenses. Straiker’s red teaming and runtime guardrails are informed by STAR research, ensuring every safeguard is tested against the latest real-world threats.
Practical Impact
Our research is grounded in reality. We turn complex discoveries into actionable guardrails, benchmarks, and frameworks that security and engineering teams can apply today.
Securing the future, so you can focus on imagining it