
Cybercriminals used an AI model to discover and weaponize a zero-day vulnerability in a popular open-source web administration tool, according to Google's Threat Intelligence Group. In a report published Monday, Google said the flaw let attackers bypass two-factor authentication and warned that the attackers were preparing a mass exploitation campaign before the company intervened. It is the first time Google has confirmed AI-assisted zero-day development in the wild.

"As the coding capabilities of AI models advance, we continue to observe adversaries increasingly leverage these tools as expert-level force multipliers for vulnerability research and exploit development, including for zero-day vulnerabilities," Google wrote. While these tools empower defensive research, they also lower the barrier for adversaries to reverse-engineer applications and develop sophisticated, AI-generated exploits.

The report comes as researchers and governments warn that AI models are accelerating cyberattacks by helping hackers find vulnerabilities, generate malware, and automate exploit development.

"Though frontier LLMs struggle to navigate complex enterprise authorization logic, they have an increasing ability to perform contextual reasoning, effectively reading the developer's intent to correlate the 2FA enforcement logic with the contradictions of its hard-coded exceptions," the report said.
"This capability can allow models to surface dormant logic errors that appear functionally correct to traditional scanners but are strategically broken from a security perspective."

According to Google, the unnamed attackers used AI to identify a logic flaw in which the software trusted a condition that bypassed its two-factor authentication protections. Unlike traditional scanners, which search for broken code or crashes, the AI analyzed how the software was intended to work and detected the contradiction, allowing attackers to bypass the security check without breaking the encryption itself.

"AI-driven coding has accelerated the development of infrastructure suites and polymorphic malware by adversaries," Google wrote. "These AI-enabled development cycles facilitate defense evasion by enabling the creation of obfuscation networks and the integration of AI-generated decoy logic in malware that we have linked to suspected Russia-nexus threat actors."

The report says that threat actors from China and North Korea are using AI to find software weaknesses, while Russian groups are using it to hide their malware. "These actors have leveraged sophisticated approaches toward AI-augmented vulnerability discovery and exploitation, beginning with persona-driven jailbreaking attempts and the integration of specialized and high-fidelity security datasets to augment their vulnerability discovery and exploitation workflows," Google wrote.

While Google's report aimed to warn about the growing risk of AI-powered cyberattacks, some researchers argue that the fear is overblown.
A separate study led by Cambridge University, covering more than 90,000 cybercrime forum threads, found that most criminals were using AI for spam and phishing rather than vibe-coding sophisticated cyberattacks. "The role of jailbroken LLMs (Dark AI) as instructors is also overstated, given the prominence of subculture and social learning in initiation – new users value the social connections and community identity involved in learning hacking and cybercrime skills as much as the knowledge itself," the study said. "Our initial results, therefore, suggest that even bemoaning the rise of the vibe criminal may be overstating the level of disruption to date."

The Threat Intelligence Group's report also comes as Google has faced security concerns tied to its own AI-powered tools. In April, the company patched a prompt injection flaw in its Antigravity AI coding platform that researchers said could let attackers execute commands on a developer's machine through manipulated prompts.

"Although we do not believe Gemini was used based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability," Google researchers wrote.

Earlier this year, Anthropic restricted access to its Claude Mythos model after tests showed it could identify thousands of previously unknown software flaws. The findings add to growing concerns that AI models are reshaping cybersecurity by helping both defenders and attackers find vulnerabilities faster.

"As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus," Mozilla wrote in a blog post in April. "For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it's even possible to keep up."

AI-assisted zero-day exploit discovered, targeting a web administration tool and bypassing two-factor authentication.

  • Google confirmed cybercriminals used an AI model to develop a zero-day exploit targeting a popular open-source web administration tool, a first-of-its-kind discovery.
  • The exploit bypassed two-factor authentication by identifying a logic flaw in how the software was intended to work, not by breaking its code.
  • While Google warns AI is accelerating sophisticated attacks, other research suggests its current role in advanced cybercrime may be overstated.
  • State-linked threat actors from China and North Korea are using AI for vulnerability research, while Russian groups use it to create obfuscated malware.
  • Earlier this year, Anthropic restricted access to its Claude Mythos model after tests showed it could identify thousands of previously unknown software flaws.

Cybercriminals have weaponized Artificial Intelligence to develop and exploit a zero-day vulnerability for the first time, according to a report published Monday by Google’s Threat Intelligence Group. The AI-assisted attack targeted a popular open-source web administration tool, allowing the perpetrators to bypass its two-factor authentication protections.


Google researchers stated, “As the coding capabilities of AI models advance, we continue to observe adversaries increasingly leverage these tools as expert-level force multipliers for vulnerability research and exploit development.” Consequently, the company worked with the affected vendor to patch the flaw before the attackers could launch a mass exploitation campaign.

The AI model identified a contradiction in the software’s intended logic, a flaw traditional scanners would likely miss. “This capability can allow models to surface dormant logic errors that appear functionally correct to traditional scanners but are strategically broken from a security perspective,” the report explained.
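Google did not publish the vulnerable code, so the following is a hypothetical Python sketch of the *class* of bug described: a two-factor enforcement check with a hard-coded exception that contradicts its own policy. Every name here (`requires_2fa`, the `legacy-admin-cli` carve-out) is invented for illustration. A crash-oriented scanner sees nothing wrong, because the code runs cleanly; the flaw only appears when the carve-out is compared against the developer's stated intent.

```python
# Hypothetical illustration only; not the actual flaw Google described.

def requires_2fa(user: dict, request: dict) -> bool:
    """Decide whether this login must complete a second factor.

    Intended policy: every interactive login by an enrolled user
    requires a second factor.
    """
    if not user.get("totp_enrolled"):
        return False  # user has no second factor to verify against

    # Dormant logic error: a legacy carve-out trusts a value the client
    # controls. The code is functionally correct to a scanner (no crash,
    # no tainted input reaching a dangerous sink), but the condition
    # contradicts the enforcement intent stated above.
    if request.get("client") == "legacy-admin-cli":
        return False

    return True


# An attacker simply claims to be the exempt client:
enrolled_user = {"totp_enrolled": True}
print(requires_2fa(enrolled_user, {"client": "web"}))              # True
print(requires_2fa(enrolled_user, {"client": "legacy-admin-cli"}))  # False: 2FA silently skipped
```

The contradiction between the docstring's policy and the client-controlled exemption is exactly the kind of intent-versus-implementation gap the report says models can now surface.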

Meanwhile, actors linked to China and North Korea are actively using AI for vulnerability discovery. Simultaneously, suspected Russian-nexus groups are employing it to generate polymorphic malware and sophisticated obfuscation networks for defense evasion.

However, a separate study led by Cambridge University suggests the immediate threat may be overblown. Its analysis found most cybercriminals currently use AI for spam and phishing, not for coding sophisticated exploits.


These developments follow other major AI security concerns, including a patched flaw in Google's own Antigravity AI coding platform in April. The findings underscore how AI is reshaping cybersecurity for both defenders and attackers, accelerating the discovery of critical vulnerabilities.

✅ Follow BITNEWSBOT on Telegram, Facebook, LinkedIn, X.com, and Google News for instant updates.

