AI Browsers Tricked Into Phishing Scams via “Blabbering”

Vulnerable AI browsers let scammers train offline to engineer phishing attacks that succeed on first contact.

- AI browsers that “blabber” their reasoning to AI servers can be intercepted and used to train scam pages.
- Researchers tricked Perplexity’s Comet browser into a phishing attack in under four minutes.
- The attack shifts the target from human users to the AI agent millions rely on, enabling trained scams to work on first contact.


Security researchers at Guardio revealed on March 11, 2026, that AI-powered agentic web browsers, designed to act autonomously across websites, can be manipulated into bypassing their own security. Attackers achieve this by exploiting a vulnerability the researchers call “Agentic Blabbering.” According to a report shared with The Hacker News ahead of publication, the method intercepts the AI’s internal reasoning traffic and uses it to iteratively train phishing pages.

Consequently, attackers can feed this intercepted data into an adversarial AI until the browser stops flagging a malicious page as suspicious. In a demonstration, Guardio’s researchers made Perplexity’s Comet AI browser fall for a phishing scam in under four minutes using a Generative Adversarial Network (GAN). Researcher Shaked Chen explained, “The scam evolves until the AI Browser reliably walks into the trap.”
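The loop described above can be illustrated with a toy sketch. This is not Guardio’s actual pipeline: the keyword-based detector and the synonym table below are invented stand-ins for the AI browser’s scam classifier and the adversarial model’s rewrites, and serve only to show how leaked “reasoning” (here, which phrase tripped the detector) lets an attacker converge on a page that passes undetected.

```python
def ai_browser_flags(page: str) -> bool:
    """Toy stand-in for an AI browser's scam detector:
    flags pages containing known-suspicious phrases."""
    suspicious = ["verify your account", "urgent", "password reset"]
    return any(term in page.lower() for term in suspicious)

# Hypothetical rewrites an adversarial model might propose.
SYNONYMS = {
    "verify your account": "confirm your details",
    "urgent": "time-sensitive",
    "password reset": "credential update",
}

def adversarial_refine(page: str, max_rounds: int = 10) -> tuple[str, int]:
    """Iteratively rewrite the page until the detector stops flagging it,
    mimicking the offline train-against-the-model loop in the report."""
    for round_no in range(1, max_rounds + 1):
        if not ai_browser_flags(page):
            return page, round_no - 1
        # Exploit the detector's leaked "reasoning" -- the phrase that
        # tripped it -- to choose the next rewrite.
        for bad, substitute in SYNONYMS.items():
            if bad in page.lower():
                page = page.lower().replace(bad, substitute)
                break
    return page, max_rounds

scam = "URGENT: verify your account now"
evolved, rounds = adversarial_refine(scam)
# After a couple of rounds, ai_browser_flags(evolved) is False.
```

In the real attack, each iteration queries the actual AI model (or a locally trained surrogate) rather than a keyword list, but the feedback structure is the same: the defender’s explanation becomes the attacker’s training signal.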

This marks a dangerous shift: scams can now be trained offline against the specific AI model before deployment, so they work flawlessly on first contact. “Because when your AI Browser explains why it stopped, it teaches attackers how to bypass it,” Guardio stated. The finding builds on prior risks like VibeScamming and “Scamlexity,” where prompts could coerce AI into malicious actions.

Meanwhile, the disclosure follows similar security findings for AI browsers. Trail of Bits recently demonstrated prompt injection attacks against Comet to extract private data from services like Gmail. Last week, Zenity Labs also detailed zero-click attacks, codenamed “PerplexedBrowser,” against Perplexity’s Comet.


These issues highlight the persistent threat of prompt injection in large language models. OpenAI noted in December 2025 that such flaws are “unlikely to ever” be fully resolved in agentic browsers. However, risks could potentially be reduced through automated attack discovery and new system-level safeguards.

