- Google confirmed cybercriminals used an AI model to develop a zero-day exploit targeting a popular open-source web administration tool, a first-of-its-kind discovery.
- The exploit bypassed two-factor authentication by identifying a logic flaw in how the software was intended to work, not by breaking its code.
- While Google warns AI is accelerating sophisticated attacks, other research suggests its current role in advanced cybercrime may be overstated.
- State-linked threat actors from China and North Korea are using AI for vulnerability research, while Russian groups use it to create obfuscated malware.
- Earlier this year, Anthropic restricted access to its Claude Mythos model after tests showed it could identify thousands of previously unknown software flaws.
Cybercriminals have weaponized artificial intelligence to develop and exploit a zero-day vulnerability for the first time, according to a report published Monday by Google’s Threat Intelligence Group. The AI-assisted attack targeted a popular open-source web administration tool, allowing the perpetrators to bypass its two-factor authentication protections.
Google researchers stated, “As the coding capabilities of AI models advance, we continue to observe adversaries increasingly leverage these tools as expert-level force multipliers for vulnerability research and exploit development.” Google worked with the affected vendor to patch the flaw before the attackers could launch a mass exploitation campaign.
The AI model identified a contradiction in the software’s intended logic, a flaw traditional scanners would likely miss. “This capability can allow models to surface dormant logic errors that appear functionally correct to traditional scanners but are strategically broken from a security perspective,” the report explained.
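To make the idea of a “strategically broken” logic flaw concrete, here is a minimal hypothetical sketch (not the actual vulnerability, which Google has not detailed): a two-factor check that compiles and passes casual review, yet silently grants access when an attacker simply omits the one-time code. The function names (`verify_login`, `check_totp`) are illustrative placeholders.

```python
def check_totp(code: str) -> bool:
    """Placeholder TOTP validator for illustration only."""
    return code == "123456"


def verify_login(password_ok: bool, totp_enabled: bool, totp_code) -> bool:
    """Hypothetical login check containing a dormant logic flaw."""
    if not password_ok:
        return False
    if totp_enabled and totp_code is not None:
        # Intended path: validate the submitted one-time code.
        return check_totp(totp_code)
    # Logic flaw: when 2FA is enabled but no code is submitted at all,
    # execution falls through to this line and access is granted anyway.
    # A syntax-focused scanner sees nothing wrong; the *intent* is broken.
    return True
```

In this sketch, an attacker with a stolen password bypasses 2FA by sending no code at all, the kind of contradiction between intended and actual behavior that requires reasoning about the logic, not just the syntax.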
Meanwhile, actors linked to China and North Korea are actively using AI for vulnerability discovery. Simultaneously, suspected Russian-nexus groups are employing it to generate polymorphic malware and sophisticated obfuscation networks for defense evasion.
However, a separate study led by Cambridge University suggests the immediate threat may be overblown. Its analysis found most cybercriminals currently use AI for spam and phishing, not for coding sophisticated exploits.
These developments follow other major AI security concerns, including a patched flaw in Google’s own Antigravity AI coding platform in April. The findings underscore how AI is reshaping cybersecurity for both defenders and attackers, accelerating the discovery of critical vulnerabilities.
