- Google used its AI-powered framework to spot a major security flaw in the open-source SQLite database before it was widely exploited.
- The flaw, tracked as CVE-2025-6965, is a memory corruption vulnerability affecting all SQLite versions prior to 3.50.2.
- The AI agent, named “Big Sleep,” identified the threat, potentially stopping active attempts to exploit it.
- Google is promoting a hybrid security approach for AI agents to help reduce risks from vulnerabilities and malicious actions.
- This marks the first documented case of an AI agent stopping a vulnerability before real-world exploitation.
On July 16, 2025, Google announced that its AI-based vulnerability detection system had identified a critical flaw in the SQLite database engine before attackers could exploit it. The flaw, tracked as CVE-2025-6965, was found by “Big Sleep,” an AI agent developed in a collaboration between Google DeepMind and Google Project Zero.
The vulnerability received a CVSS score of 7.2, placing it in the high-severity range. According to SQLite project maintainers, an attacker able to inject arbitrary SQL statements could trigger an integer overflow and read beyond the end of an array, leading to unpredictable behavior or data leaks. All SQLite versions prior to 3.50.2 are affected.
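To make the bug class concrete, here is a minimal, hypothetical C sketch of how an integer overflow in a bounds computation can let a read slip past an array. It illustrates the general pattern only; it is not SQLite's actual code or the CVE-2025-6965 code path, and the function names are invented for this example.

```c
#include <stdio.h>
#include <limits.h>

#define BUF_LEN 16
static int buf[BUF_LEN];

/* Naive bounds check: start + offset is computed in a signed int, which can
 * overflow. Signed overflow is undefined behavior in C, but in practice it
 * commonly wraps to a negative value, which is how this bug class shows up. */
static int read_elem(int start, int offset) {
    int end = start + offset;   /* can wrap negative for large inputs */
    if (end < BUF_LEN)          /* the wrapped value passes the check */
        return buf[end];        /* out-of-bounds read when end < 0 */
    return -1;                  /* rejected */
}

int main(void) {
    printf("%d\n", read_elem(3, 4));        /* in bounds: fine */
    printf("%d\n", read_elem(INT_MAX, 2));  /* wraps, check passes, OOB read */
    return 0;
}
```

The missing lower-bound check (`end >= start` or `end >= 0`) is what turns an arithmetic wraparound into a read outside the array.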
Google described this security flaw as critical, noting that threat actors were aware of it and could have exploited it. “Through the combination of threat intelligence and Big Sleep, Google was able to actually predict that a vulnerability was imminently going to be used and we were able to cut it off beforehand,” said Kent Walker, President of Global Affairs at Google and Alphabet, in an official statement. He also said, “We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild.”
Last year, Big Sleep also detected a separate SQLite vulnerability, a stack buffer underflow, that could have caused crashes or allowed attackers to run arbitrary code. In response to these incidents, Google published a white paper recommending clear human oversight and strict operational boundaries for AI agents.
Google argues that traditional software security controls are insufficient on their own, since they lack the context an AI agent needs. At the same time, defenses based purely on the AI's own judgment offer no strong guarantees because of weaknesses such as prompt injection. Google's answer is a multi-layered, “defense-in-depth” approach that combines traditional, deterministic safeguards with AI-driven defenses, so that risk stays bounded even if the agent's internal reasoning is manipulated by hostile or unexpected input.
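One way to picture a deterministic layer in such a design is a hard-coded policy gate that sits outside the model and checks every proposed action before it runs. The C sketch below is an assumption-laden illustration of that idea, not Google's implementation; the names (`policy_allows`, `run_agent_action`) and the allowlist contents are invented for this example.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical allowlist: the only actions the agent may ever execute. */
static const char *ALLOWED_ACTIONS[] = {"read_log", "open_ticket"};

static bool policy_allows(const char *action) {
    for (size_t i = 0; i < sizeof(ALLOWED_ACTIONS) / sizeof(*ALLOWED_ACTIONS); i++)
        if (strcmp(action, ALLOWED_ACTIONS[i]) == 0)
            return true;
    return false;  /* default-deny: unknown actions never execute */
}

/* The gate is deterministic code, so it holds even if the agent's
 * reasoning has been steered by a prompt injection. */
static void run_agent_action(const char *action) {
    if (!policy_allows(action)) {
        fprintf(stderr, "blocked: %s\n", action);
        return;
    }
    printf("executing: %s\n", action);
}

int main(void) {
    run_agent_action("read_log");         /* allowed */
    run_agent_action("delete_database");  /* blocked by the policy layer */
    return 0;
}
```

The design choice is default-deny: an action absent from the allowlist never executes, no matter how the agent arrived at it, which is the kind of guarantee a purely model-based defense cannot provide.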