Inside Shadow Escape: The Zero-Click AI attack that changes everything

Operant AI’s discovery of Shadow Escape reveals how zero-click exploits can weaponise AI agents like ChatGPT and Gemini via MCP, creating invisible attack chains that bypass traditional enterprise cybersecurity.

author-image
DQC Bureau
New Update
Inside Shadow Escape The Zero-Click AI attack that changes everything

Inside Shadow Escape: The Zero-Click AI attack that changes everything

In a discovery that shakes the AI security landscape, Operant AI, creator of the world’s only Runtime AI Defence Platform, has uncovered “Shadow Escape,” a powerful zero-click exploit targeting the Model Context Protocol (MCP) and connected AI agents. The attack enables data exfiltration through legitimate AI assistants, including ChatGPT, Claude, Gemini, and open-source LLM-based agents, all without user interaction or detection.

Advertisment

This isn’t a traditional cyberattack. It’s a new class of threat that operates entirely within authorised identity boundaries and inside enterprise firewalls, making it invisible to conventional security tools.

When Trusted AI Becomes the Attack Vector

According to Operant AI’s threat research, the exploit leverages MCP, a widely adopted protocol that allows AI agents to connect with enterprise systems, APIs, and databases. As organisations rush to integrate agentic AI for automation and productivity, they’re unknowingly opening invisible pathways for data theft.

“Securing MCP and agentic identities has become mission-critical,” said Donna Dodson, former Chief of Cybersecurity at NIST. “Operant AI’s real-time detection and redaction capabilities are pivotal to operationalising MCP safely in high-security industries.”

Advertisment

Inside the Shadow Escape Attack Chain

Shadow Escape doesn’t rely on phishing, social engineering, or user error. Instead, it weaponises trust within authenticated systems through three invisible stages:

  1. Infiltration: Malicious code hides inside otherwise legitimate documents uploaded to AI assistants, evading all standard security scans.

  2. Discovery: The AI agent autonomously searches across connected databases, surfacing sensitive data using authorised MCP access.

  3. Exfiltration: Hidden instructions command the agent to send entire datasets to external servers, disguised as routine analytics or performance uploads.

The outcome: silent extraction of PII, medical, or financial data, including Social Security and patient records, to the dark web, all while remaining undetected by IT teams or SIEM tools.

Advertisment

Beyond Any Single AI Platform

Shadow Escape is not vendor-specific. It affects any AI assistant or enterprise agent utilising MCP, from ChatGPT and Claude to Google Gemini, Llama-based copilots, and industry-specific assistants in healthcare, finance, and customer service.

“The risk lies not in the model, but in the protocol,” explained Vrajesh Bhavsar, CEO and co-founder of Operant AI. “Traditional tools can’t see what happens within trusted AI sessions. The result is a blind spot that’s both invisible and immediate.”

The Scale of the Threat

According to McKinsey’s 2025 Technology Trends Outlook, nearly 80% of enterprises now use agentic AI assistants for mission-critical operations. Operant AI’s analysis suggests that trillions of private records could be exposed through such zero-click, MCP-based exploit chains.

Advertisment

The implications are profound, especially for healthcare, banking, and insurance sectors, where AI copilots handle regulated data under HIPAA, PCI-DSS, and other compliance frameworks.

Operant AI’s Response and Recommendations

Operant AI has reported the issue to OpenAI and initiated the CVE designation process, framing Shadow Escape as a protocol-level vulnerability rather than a product flaw. The company recommends immediate steps for MCP users:

  • Audit all MCP-enabled AI agents for access scope, permissions, and exposed endpoints.

  • Deploy runtime AI defence guardrails capable of detecting and blocking zero-click data exfiltration.

  • Implement MCP trust zones with allow-listing for verified servers and real-time connection blocking.

  • Use inline auto-redaction tools for sensitive data (PII, PHI, financial) before external transmission.

  • Apply least-privilege access policies to all MCP-enabled AI systems.

Advertisment

These measures aim to restore visibility into what AI agents can access and where that data travels.

A Wake-Up Call for AI Security

Shadow Escape is more than an exploit; it’s a turning point for enterprise AI governance. As AI systems grow more autonomous, organisations must rethink security not just around endpoints and APIs but around the identities and privileges of AI agents themselves.

The message is clear: AI can now attack from within. Enterprises that fail to secure their runtime AI environments risk silent breaches that unfold at machine speed.

Advertisment

Read More:

How enterprise cloud is transforming in India with AI-native innovation?

Inside Veeam’s ProPartner strategy: What’s next for data protection