EchoLeak exposes zero-click LLM exfiltration risk
Attacks
EchoLeak, disclosed as CVE-2025-32711, demonstrates a practical, high-severity zero-click prompt injection against a production assistant. Large Language Model (LLM) features in Microsoft 365 Copilot are shown to bridge internal data and external networks without any user interaction, allowing remote, unauthenticated data exfiltration and privilege escalation.
The stakes are straightforward for security teams and decision makers. An assistant that mixes internal context, external content and automatic fetching becomes a new attack surface. Defences that worked for traditional apps, such as simple redaction or single-stage input filters, can be bypassed when the LLM itself forms part of the execution chain.
How EchoLeak worked
The researchers map an attack chain built from public information. A crafted email carries hidden instructions that manipulate the assistant's prompt context. Microsoft’s Cross Prompt Injection Attempt (XPIA) classifier is evaded. Reference-style Markdown links defeat link redaction. Auto-fetched images trigger network requests, and a Microsoft Teams proxy permitted by the content security policy (CSP) relays exfiltrated data out of the tenant. The result is crossing of trust boundaries and escalation of privileges inside the LLM environment.
This is not a novel bug class so much as an old pattern amplified. EchoLeak echoes earlier web-era problems where composition and automatic fetching turned innocuous content into an attack vehicle. New platforms revive familiar risk models but with higher scale and automation.
What teams should do now
The paper recommends engineering controls that map clearly to operational steps. Apply least privilege to model access, partition prompts so powerful models never see unrestricted internal context, and add provenance-based access control to gate sensitive sources. Strengthen input and output filtering, restrict or disable automatic content fetching, and tighten CSPs and proxy allowances. Defence in depth and continuous adversarial testing against realistic prompt injections are essential.
Caveats matter: the work is a case study built from public data and does not claim a live reproduction in Microsoft’s production environment. Implementations will differ across vendors and configurations.
For practitioners the practical takeaway is plain: treat copilots as active, networked components of your stack and harden them accordingly. History shows that new platforms reveal old failure modes; teams that assume otherwise get surprised. Continuous red teaming and compartmentalisation will buy time while the industry converges on safer defaults.
Additional analysis of the original ArXiv paper
📋 Original Paper Title and Abstract
EchoLeak: The First Real-World Zero-Click Prompt Injection Exploit in a Production LLM System
🔍 ShortSpan Analysis of the Paper
Problem
EchoLeak presents a real world zero click prompt injection vulnerability in a production LLM system, Microsoft 365 Copilot, that enabled remote unauthenticated data exfiltration via a crafted email. The work shows how prompt manipulation can bridge internal data sources and external networks across trust boundaries without user interaction, highlighting AI security risks in enterprise deployments.
Approach
The paper offers an in depth case study of CVE-2025-32711, analysing the attack chain and the ways the attacker bypassed multiple safeguards. It describes how hidden instructions in email coerced Copilot to expose sensitive data in its output, how reference style Markdown links circumvented link redaction, how auto fetched images and a Microsoft Teams proxy allowed by the content security policy enabled data exfiltration, and why existing protections failed. It then outlines engineering mitigations including prompt scope isolation, enhanced input and output filtering, provenance based access control, and stricter content security policies, underpinned by principles of least privilege, defence in depth and continuous adversarial testing.
Key Findings
- Zero click exfiltration from a crafted external email allowed by bypassing LLM trust boundaries without user interaction.
- The attack chain circumvented a cross prompt injection classifier, external link redaction and a CSP by using reference style links and a corporate proxy through an auto fetched image.
- EchoLeak demonstrates prompt injection as a practical high severity threat in production AI systems and provides a blueprint for defending against future AI native threats.
- Defences proposed include prompt partitioning, stronger input output filtering, provenance based access control and tightened content security policies, along with defence in depth and ongoing adversarial testing.
Limitations
The work is a case study based on publicly available information and did not involve reproducing the attack in a live Copilot environment. Some conclusions reflect a conceptual analogue rather than direct replication in Microsoft production Copilot, and not all mitigations were evaluated exhaustively. The authors acknowledge potential differences across systems and the evolving security landscape.
Why It Matters
The incident emphasises enterprise privacy and governance risks from AI enabled data leakage and cross domain access as AI copilots become ubiquitous in critical workflows. It argues for robust safeguards including least privilege, defence in depth, provenance based access controls and continuous adversarial testing, to harden AI systems against future AI native threats and to guide secure AI design and operations in organisations.