Invisible Heist: How ShadowLeak Used AI to Steal Gmail Data in Plain Sight
A new zero-click exploit let attackers siphon Gmail data through an AI agent - without a single click or hint to the user.
Fast Facts
- Researchers at Radware discovered the ShadowLeak exploit in OpenAI’s ChatGPT Deep Research agent.
- The attack worked without any user interaction, using hidden commands in emails.
- Private Gmail data could be exfiltrated directly from OpenAI’s servers, bypassing traditional security tools.
- The vulnerability was reported in June 2025 and fixed by OpenAI by September 3, 2025.
- Similar tactics could target other connected services, like Google Drive and Microsoft Teams.
A Ghost in the Machine
Imagine opening your inbox, trusting your AI assistant to sift through the clutter - and unknowingly handing your secrets to a thief. That’s the chilling scenario exposed by Radware’s security team, who unearthed a flaw in OpenAI’s ChatGPT Deep Research agent that allowed attackers to pilfer Gmail data without so much as a click or warning.
ShadowLeak: The Exploit You Never Saw
Dubbed “ShadowLeak,” this exploit was a masterclass in digital sleight of hand. Unlike previous browser-based vulnerabilities, ShadowLeak operated entirely on OpenAI’s cloud servers, making it invisible to the user and undetectable by most security tools. At its core was a technique called “indirect prompt injection” - think of it as a secret message hidden inside a seemingly harmless email. When the AI agent was asked to scan emails, it would unwittingly read these invisible instructions, extracting sensitive data and sending it straight to the attacker’s server.
The attack was “zero-click,” meaning victims didn’t have to open, click, or interact with the message. The malicious code, cleverly disguised using social engineering tricks, manipulated the AI agent into believing it was acting on legitimate requests. By encoding stolen data in Base64, the attack looked innocuous and bypassed safety checks, achieving a 100% success rate in the researchers’ tests.
From AgentFlayer to ShadowLeak: The AI Attack Evolution
ShadowLeak isn’t the first time AI tools have been manipulated to leak data. Previous exploits like AgentFlayer and EchoLeak relied on browser vulnerabilities, but ShadowLeak’s server-side approach marks a dangerous evolution. Because the attack never touched a user’s device, traditional antivirus and endpoint detection tools were powerless. The exploit’s success also highlights how AI’s very strength - its ability to autonomously process vast amounts of data - can become a liability when attackers inject invisible commands into the information it consumes.
According to cybersecurity analysts, the rush to integrate AI agents into business workflows has created a new attack surface. As more sensitive data flows through cloud-connected AI tools, the potential for service-side exploits grows. A 2024 Gartner report warned that “AI-driven business automation will become the next major frontier for cybercriminals,” and ShadowLeak proves that prediction prescient.
What’s Next? Lessons from the Shadow
After Radware responsibly disclosed the flaw in June 2025, OpenAI patched the vulnerability within weeks. But the underlying lesson lingers: as AI agents become trusted gatekeepers to our most private information, attackers will find ever more inventive ways to turn that trust against us. Companies are now urged to sanitize emails before AI processing and monitor agent activity closely - because in the world of AI, what you don’t see can hurt you.
WIKICROOK
- Zero: A zero-day vulnerability is a hidden security flaw unknown to the software maker, with no fix available, making it highly valuable and dangerous to attackers.
- Prompt injection: Prompt injection is when attackers feed harmful input to an AI, causing it to act in unintended or dangerous ways, often bypassing normal safeguards.
- Service: A service is a network-accessible application or process, like email or file storage, that provides functions to users or systems and may be targeted by cyberattacks.
- Base64 encoding: Base64 encoding converts data into a readable text string, making it easier to embed or transfer files and code within text-based systems.
- Social engineering: Social engineering is the use of deception by hackers to trick people into revealing confidential information or providing unauthorized system access.