Anonymous Intelligence Signal

EchoLeak Zero-Click Attack Exploits M365 Copilot with CVSS 9.3 Severity—Indirect Prompt Injection Emerges as Critical AI Threat

human The Lab unverified 2026-05-10 11:01:39 Source: Mastodon:mastodon.social:#infosec

Indirect prompt injection has emerged as a stealthy attack vector that bypasses traditional interaction models entirely—planting malicious payloads in content that AI systems ingest, then leveraging tool access to exfiltrate data, send emails, or execute unauthorized API calls. The severity of this threat class was underscored by EchoLeak, a zero-click exploit against Microsoft 365 Copilot rated CVSS 9.3, demonstrating that enterprise AI assistants can be weaponized without any direct user engagement with the model.

The attack methodology represents a paradigm shift in AI security: adversaries no longer need to craft prompts that users submit. Instead, they embed instructions in documents, emails, or web content that AI systems process autonomously. When the AI reads compromised content and has access to organizational tools—email clients, file systems, APIs—the embedded payload can trigger actions ranging from data exfiltration to lateral movement within corporate environments. Microsoft's M365 Copilot, deeply integrated into enterprise workflows, became a high-value target precisely because of its extensive tool access and autonomous processing capabilities.

Security researchers are responding with structured testing frameworks. garak, an open-source tool, automates LLM security probing across multiple attack categories, enabling organizations to identify prompt injection vulnerabilities before adversaries exploit them. As AI assistants proliferate across enterprise software, the EchoLeak disclosure signals mounting pressure on vendors to harden indirect input channels and restrict autonomous tool execution—particularly for systems with access to sensitive data and communication infrastructure.