- Google’s DeepMind introduced CodeMender, an AI tool that detects and fixes software vulnerabilities automatically.
- CodeMender uses Google’s Gemini Deep Think models to identify and repair security flaws in code.
- The AI agent can both address new vulnerabilities and proactively secure existing codebases.
- Since development began, CodeMender has contributed 72 security patches to open-source projects.
- Google is launching an AI Vulnerability Reward Program to encourage reporting of AI security issues in its products.
Google’s DeepMind division announced the launch of CodeMender, an artificial intelligence agent designed to detect, patch, and rewrite vulnerable software code automatically. The goal is to prevent future security exploits by fixing existing code and addressing new vulnerabilities swiftly.
Since its creation, CodeMender has contributed 72 security fixes to various open-source projects, including some with codebases as large as 4.5 million lines. DeepMind stated that the AI tool helps developers focus on software creation by automating the generation of high-quality security patches.
CodeMender operates by leveraging Google’s Gemini Deep Think models to flag, debug, and resolve security weaknesses at their root cause. It also uses a large language model (LLM)-based critique system that compares the original and modified code, ensuring patches do not introduce errors or regressions and enabling self-correction when they do.
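DeepMind has not published CodeMender’s internals, so the sketch below is only a hypothetical illustration of the propose-and-critique cycle described above: the model calls are stand-in stubs rather than a real Gemini API, and the loop structure is an assumption about how such self-correction could be wired together.

```python
# Hypothetical sketch of a propose-critique-retry loop (not Google's code).
# The "model" calls are hard-coded stubs standing in for LLM invocations.

from dataclasses import dataclass


@dataclass
class Critique:
    approved: bool
    reason: str


def propose_patch(original: str, vulnerability: str) -> str:
    """Stand-in for a model call that drafts a fix for the reported flaw."""
    # A real agent would return model-generated code; here we hard-code a fix.
    return original.replace(
        "strcpy(buf, input)", "strncpy(buf, input, sizeof(buf) - 1)"
    )


def critique_patch(original: str, patched: str) -> Critique:
    """Stand-in for an LLM critique comparing the two versions for regressions."""
    if patched == original:
        return Critique(False, "patch made no change")
    if "strcpy(" in patched:
        return Critique(False, "unbounded copy still present")
    return Critique(True, "bounded copy preserves behaviour for in-range inputs")


def mend(original: str, vulnerability: str, max_attempts: int = 3) -> str:
    """Only return a patch the critic accepts; otherwise retry with feedback."""
    candidate = original
    for _ in range(max_attempts):
        candidate = propose_patch(candidate, vulnerability)
        verdict = critique_patch(original, candidate)
        if verdict.approved:
            return candidate
        # Self-correction: feed the critique back into the next proposal.
        vulnerability = f"{vulnerability}; previous attempt rejected: {verdict.reason}"
    raise RuntimeError("no acceptable patch found")


if __name__ == "__main__":
    snippet = "strcpy(buf, input);"
    print(mend(snippet, "possible buffer overflow"))
```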
DeepMind researchers Raluca Ada Popa and Four Flynn explained that CodeMender functions both reactively, addressing freshly discovered vulnerabilities, and proactively, rewriting existing code to eliminate entire categories of risk. The company plans to engage maintainers of critical open-source projects to review and provide feedback on CodeMender’s patches to improve code security.
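To make the proactive mode concrete: rewriting code so that a whole class of bugs becomes impossible is different from patching one instance. The short Python example below uses SQL injection to illustrate that distinction; it is an assumption-laden analogy, not output from CodeMender.

```python
# Illustration of removing a bug class (SQL injection), not a CodeMender patch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"

# Before: string-built query; every call site is a potential injection point.
unsafe_query = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe_query).fetchall())   # leaks a row it should not

# After: a parameterised query removes the entire bug class, not one instance.
safe_query = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe_query, (user_input,)).fetchall())  # returns nothing
```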
In addition, Google has launched an AI Vulnerability Reward Program (AI VRP), offering rewards up to $30,000 for reporting AI-related security problems like prompt injections, jailbreaks, and misalignment in its products. Some issues, including policy violations and hallucinations, are excluded from this program.
Google also maintains an AI Red Team as part of its Secure AI Framework (SAIF), which focuses on emerging AI threats. The latest iteration of SAIF emphasizes managing agentic security risks, such as unintended actions and data disclosure, through proper controls.
This suite of measures underscores Google’s commitment to using AI to bolster security defenses against evolving cyber threats.
For more information, see DeepMind’s CodeMender announcement and Google’s AI Vulnerability Reward Program.