- Three security flaws affecting Google‘s Gemini AI assistant were recently fixed after being disclosed by Cybersecurity researchers.
- The vulnerabilities threatened users’ privacy by enabling data theft through prompt injections and search manipulations.
- Each flaw targeted a different element of the Gemini suite: Cloud Assist, Search Personalization model, and Browsing Tool.
- Google has since strengthened defenses by stopping hyperlink rendering in logs and enhancing protections against prompt injection attacks.
- The findings emphasize that AI tools themselves can be exploited as attack platforms, highlighting the need for strict security measures.
Google has patched three security vulnerabilities found in its Gemini Artificial Intelligence assistant that could have exposed users to privacy risks and data theft. The flaws were revealed by cybersecurity researchers on September 30, 2025, who identified methods attackers might have used to access sensitive information.
The security issues, called the Gemini Trifecta, affected three components of the Gemini suite. They included a prompt injection vulnerability in Gemini Cloud Assist, a search-injection flaw in the Gemini Search Personalization model, and a prompt injection risk in the Gemini Browsing Tool.
Tenable researcher Liv Matan detailed that the Cloud Assist defect allowed threats actors to embed malicious prompts in HTTP requests, targeting various cloud services like Cloud Run and App Engine. The Search Personalization flaw let attackers manipulate Chrome search history via JavaScript to control the AI’s responses and leak saved data. The Browsing Tool vulnerability enabled exfiltration of user information by exploiting its webpage summarization function.
One possible attack involved prompting Gemini to query all public assets or misconfigurations in cloud settings and sending the sensitive data to a malicious server. According to Matan, “This should be possible since Gemini has the permission to query assets through the Cloud Asset API.”
Following responsible disclosure, Google disabled hyperlink rendering in log summaries and implemented additional safeguards to prevent prompt injection. Matan commented, “The Gemini Trifecta shows that AI itself can be turned into the attack vehicle, not just the target.” She stressed the importance of visibility and strict policy enforcement to secure AI tools.
The vulnerabilities highlight the increasing attack surface as AI software integrates more deeply into systems. In a related case, security firm CodeIntegrity described a data exfiltration method using prompt instructions hidden in PDF files for Notion’s AI agent, demonstrating ongoing risks when AI tools have broad workspace access.
This collection of security issues serves as a reminder that advancing AI capabilities requires parallel investments in protecting these technologies from abuse.
✅ Follow BITNEWSBOT on Telegram, Facebook, LinkedIn, X.com, and Google News for instant updates.
Previous Articles:
- BRICS May Announce New Common Currency at 2026 Delhi Summit
- Visa Launches Stablecoin Pre-Funding Pilot for Business Payouts
- Bitcoin Surges as Fed Shift Sparks Market Panic and Gold Rally
- UGM Pilots Blockchain Credentials, Digital Wallets for 60,000 Students
- UK Seizes £5.5B Bitcoin in Largest Crypto Fraud Bust