Straiker’s STAR team leads AI security research, uncovering vulnerabilities in LLMs and agentic AI, sharing its discoveries, contributing to open source, and building the guardrails that protect tomorrow’s autonomous systems and applications.

Straiker AI & Security Research

Research in Action

Agentic AI Security Research & Insights

Explore Straiker STAR’s latest findings on LLM vulnerabilities, agentic AI threats, and runtime guardrails. Access blogs, frameworks, and downloadable reports that define the future of AI-native security.

Blog
September 11, 2025

Cyberspike Villager – Cobalt Strike’s AI-native Successor

Straiker uncovers Villager, a China-based pentesting framework that acts as an AI-powered successor to Cobalt Strike, automating hacking workflows and lowering the barrier for attackers worldwide.

Read More
Blog
August 5, 2025

The Silent Exfiltration: Zero‑Click Agentic AI Hack That Can Leak Your Google Drive with One Email

Straiker reveals how zero-click exploits can hijack AI agents to exfiltrate Google Drive data with no user interaction. See how attack chains form, why agent autonomy is dangerous, and how runtime guardrails catch what others miss.

Read More
Blog
May 22, 2025

Agentic Danger: DNS Rebinding Exposes Internal MCP Servers

The Straiker AI Research (STAR) team found a new attack we’re calling the MCP rebinding attack, which combines DNS rebinding with the MCP over Server-Sent Events (SSE) transport to reach internal MCP servers.

Read More
Blog
March 26, 2025

Rethinking Security in the AI Age

An AI Security Researcher’s Perspective

Read More
Blog
March 26, 2025

Securing Agentic AI in a Multi-Agent World

This post introduces the unique security challenges posed by agentic architectures and explains why traditional security measures aren’t equipped to handle them.

Read More
Guardians Behind the Research

Meet Straiker’s STAR Team

STAR is more than a research lab: it’s a team of innovators dedicated to securing agentic AI. Meet the experts publishing breakthroughs, contributing to open source, and collaborating with the wider AI security community.

Research with Purpose

Our Approach to Agentic AI Security Research

Straiker’s STAR team advances AI security with a focus on impact and openness. We investigate vulnerabilities in LLMs and autonomous agents, publish frameworks that guide the industry, and contribute to open-source guardrails. Our mission: make agentic AI safer for enterprises and the world.

AI-native Thinking

Our researchers embrace both the promise and the pitfalls of AI. Securing non-deterministic systems demands a fresh approach—one that fuses proven security principles with new methods built from the ground up for agentic AI.

Proactive Defense

We don’t just study AI vulnerabilities; we turn our findings into defenses. Straiker’s red teaming and runtime guardrails are informed by STAR research, ensuring every safeguard is tested against the latest real-world threats.

Practical Impact

Our research is grounded in reality. We turn complex discoveries into actionable guardrails, benchmarks, and frameworks that security and engineering teams can apply today.

Securing the future, so you can focus on imagining it

Get FREE AI Risk Assessment
