- A vulnerability has been found in OpenAI‘s ChatGPT Atlas browser that allows attackers to insert harmful commands into the AI’s memory.
- The flaw exploits a cross-site request forgery (CSRF) attack to corrupt persistent AI memory that carries across devices and sessions.
- This memory feature was introduced in February 2024 to personalize ChatGPT responses based on stored user details.
- Malicious instructions persist until manually deleted, posing risks of code execution, privilege escalation, and data theft.
- ChatGPT Atlas has weaker anti-phishing protections compared to browsers like Google Chrome and Microsoft Edge, increasing user exposure.
Researchers at LayerX Security disclosed a new security weakness in OpenAI‘s ChatGPT Atlas web browser on October 27, 2025. The flaw allows attackers to inject malicious instructions into ChatGPT’s persistent memory and execute arbitrary code. This vulnerability could let Hackers infect systems, gain unauthorized access, or spread Malware.
Or Eshed, CEO of LayerX, explained in a report that the exploit relies on a cross-site request forgery (CSRF) attack. CSRF tricks a logged-in user into executing unwanted actions by sending unauthorized commands from an attacker’s site. In this case, attackers inject harmful data into ChatGPT’s memory, which remains across devices and browsing sessions.
ChatGPT’s memory feature, introduced by OpenAI in February 2024 and described here, allows the AI to remember personal user information like names, interests, or preferences to tailor responses. Michelle Levy, head of security research at LayerX, noted, “By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers.” She added that normal user prompts might trigger malicious actions without detection.
The attack sequence involves a user logging in to ChatGPT, being tricked into opening a harmful link, which sends a CSRF request. This request silently injects rogue instructions into the AI’s memory. When the user later interacts with ChatGPT, these tainted memories execute unauthorized code. Affected users must manually delete corrupted memories by navigating to ChatGPT’s settings, as the harmful data persists indefinitely.
LayerX highlighted that ChatGPT Atlas lacks strong anti-phishing measures, making it about 90% more vulnerable than common browsers like Google Chrome or Microsoft Edge. Testing showed Chrome and Edge blocked nearly half of phishing attempts, while ChatGPT Atlas blocked less than 6%. This weak protection widens risks, including scenarios where malicious coding requests could plant hidden instructions in the AI.
Additional research by NeuralTrust revealed a similar prompt injection attack where ChatGPT Atlas could be jailbroken through disguised URLs. According to LayerX, AI-based browsers combine apps, identity, and AI features, increasing their security risk. Eshed stated, “Vulnerabilities like ‘Tainted Memories’ are the new supply chain: they travel with the user, contaminate future work, and blur the line between helpful AI automation and covert control.” He emphasized the need for enterprises to treat browsers as critical infrastructure due to their growing AI integration.
For detailed technical information and mitigations, see the official LayerX report here.
✅ Follow BITNEWSBOT on Telegram, Facebook, LinkedIn, X.com, and Google News for instant updates.
Previous Articles:
- Figure’s sudden crash triggers $13 billion surge in blockchain loan activity – DL News
- Rapper Razzlekhan Thanks Trump for Early Prison Release
- Crypto Soars 3-5% on U.S.-China Trade Deal Hopes, BTC $115.5K
- Has Shiba Inu Awakened from Its 1-Cent Ambition? Was It Achievable?
- UK Targets Crypto Fraud with Advanced Blockchain Intelligence in 2025
