- Researchers found a security flaw in OpenAI ChatGPT’s Deep Research tool that leaks Gmail inbox data through a single crafted email.
- The attack, called ShadowLeak, requires no user interaction and was fixed by OpenAI in August 2025.
- The method uses hidden commands in email formatting to trick the AI agent into exfiltrating data from cloud services.
- This vulnerability bypasses standard security and works with several connectors, including Gmail, Dropbox, and Microsoft Outlook.
- Researchers also showed how attackers can trick ChatGPT agents into solving CAPTCHAs using context manipulation.
Researchers have reported a major security vulnerability in OpenAI’s ChatGPT Deep Research agent that allowed attackers to steal Gmail inbox data using a specially crafted email. The flaw, named ShadowLeak by Cybersecurity firm Radware, involved no user action and was resolved by OpenAI in August 2025 after its disclosure in June.
The attack works through an indirect prompt injection, where malicious instructions are concealed within the email’s HTML content using methods like white-on-white text or layout tricks. These instructions remain invisible to the user but are still processed and followed by the AI agent when reading emails. Radware researchers explained, “The attack utilizes an indirect prompt injection that can be hidden in email HTML…so the user never notices the commands, but the agent still reads and obeys them.”
Unlike early methods that used images to carry out data theft, ShadowLeak enables data to be leaked directly from OpenAI’s cloud infrastructure. As described by researchers Zvika Babo, Gabi Nakibly, and Maor Uziel, this makes the breach hard to detect with typical local or enterprise security systems. The malicious email prompts the agent to scan the user’s email for sensitive information, encode it in Base64, and then send it to an external server using a browser tool.
The proof-of-concept required users to have the Gmail integration enabled in ChatGPT. However, Radware stated that the same technique can target other supported connectors such as Box, Dropbox, GitHub, Google Drive, HubSpot, Microsoft Outlook, Notion, or SharePoint, increasing the potential risk. The main difference between ShadowLeak and previous attacks is that this one operates in the cloud environment, making it less visible to conventional defenses.
In a separate demonstration, AI security platform SPLX showed that prompt manipulation can also make ChatGPT agents solve image-based CAPTCHAs, which are designed to block automated access. By framing CAPTCHAs as “fake” and continuing a conversation that established context, researchers found, “Attackers could reframe real controls as ‘fake’ to bypass them, underscoring the need for context integrity, memory hygiene, and continuous red teaming.”
✅ Follow BITNEWSBOT on Telegram, Facebook, LinkedIn, X.com, and Google News for instant updates.
Previous Articles:
- Crypto Market May Underestimate Aggressive Fed Rate Cut Path
- Kevin Durant Recovers Lost Bitcoin After 10 Years Locked Out
- FTX Bankruptcy Estate to Distribute $1.6B to Creditors Sept. 30
- Trump to Impose $100,000 Fee on H-1B Visas, Overhaul Process
- Giant Gold Trump Bitcoin Statue Unveiled Outside U.S. Capitol