AI Chatbots Hijacked as Stealthy Attack Proxies

Hackers covertly weaponize popular AI chatbots as hidden command-and-control attack channels.

  • Major AI platforms like Microsoft Copilot and xAI Grok can be exploited as stealthy command-and-control proxies.
  • The technique, dubbed "AI as a C2 proxy," abuses web-browsing features for bidirectional attacker communication without needing API keys.
  • This marks a significant evolution where AI can generate attack code dynamically and evade detection by blending into legitimate traffic.
  • Attackers must first compromise a target, then use the AI channel to relay commands and orchestrate the next stages of an attack.

Cybersecurity researchers disclosed in late February 2026 that popular AI assistants with web access can be weaponized into covert attack channels, a finding detailed by Check Point. The technique turns tools from Microsoft and xAI into hidden communication relays whose traffic blends into legitimate enterprise activity.


Consequently, this method bypasses traditional security measures because it doesn’t require an API key or registered account. According to Check Point researchers, anonymous web access combined with browsing and summarization prompts enables the exploit, which they call "AI as a C2 proxy."

The attack leverages the AI’s URL-fetching capability to contact attacker-controlled infrastructure. The chatbot’s response then carries the next command for malware already installed on the victim’s machine.
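Because the malicious traffic terminates at a trusted AI platform rather than an attacker domain, one plausible defensive angle is process-level anomaly detection. The sketch below is a hypothetical illustration, assuming an illustrative domain list and browser allowlist (these are not indicators published by Check Point): it flags non-browser processes that contact AI chatbot endpoints, a pattern consistent with the relay behavior described above.

```python
# Hypothetical detection sketch: flag non-browser processes contacting
# AI chatbot endpoints, a pattern consistent with "AI as a C2 proxy".
# Domain and browser lists are illustrative assumptions only.

AI_CHAT_DOMAINS = {
    "copilot.microsoft.com",  # assumed endpoint for illustration
    "grok.com",               # assumed endpoint for illustration
}

KNOWN_BROWSERS = {"chrome.exe", "msedge.exe", "firefox.exe", "safari"}

def flag_suspicious(connections):
    """connections: iterable of (process_name, destination_host) pairs.

    Returns the pairs where a non-browser process talks to an AI chat
    endpoint -- worth triaging, since such C2 traffic otherwise blends
    into legitimate enterprise use of the same platforms.
    """
    return [
        (proc, host)
        for proc, host in connections
        if host in AI_CHAT_DOMAINS and proc.lower() not in KNOWN_BROWSERS
    ]

sample = [
    ("chrome.exe", "copilot.microsoft.com"),  # normal user browsing
    ("updater.exe", "grok.com"),              # unexpected process: flag it
]
print(flag_suspicious(sample))  # → [('updater.exe', 'grok.com')]
```

In a real deployment the connection pairs would come from EDR or network telemetry, and the allowlist would need to reflect the organization’s sanctioned AI clients.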

This development signals a critical shift in how threat actors can abuse AI, as noted by cybersecurity firms. AI already acts as a force multiplier for adversaries in various attack phases.

However, the new technique goes further by automating operational decisions in real time. “The same interface can also carry prompts and model outputs that act as an external decision engine”, Check Point said regarding the potential for fully AI-driven implants.


The disclosure follows a similar recent finding in which AI was used to generate malicious code dynamically in a victim’s browser. That method, detailed by Palo Alto Networks Unit 42 researchers, can assemble a phishing page in real time by smuggling code through client-side AI API calls.

Unit 42 experts warned that carefully engineered prompts can bypass AI safety guardrails. “These snippets are returned via the LLM service API, then assembled and executed in the victim’s browser at runtime”, they said, resulting in a functional phishing page.
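One generic hardening measure against this kind of client-side code assembly is a strict Content-Security-Policy. If `connect-src` is limited to the site’s own origin, scripts on the page cannot call out to a third-party LLM API, and `script-src 'self'` without `'unsafe-eval'` blocks execution of dynamically assembled code strings. A minimal illustrative header follows; it is a general browser-security control, not a mitigation prescribed by Unit 42:

```
Content-Security-Policy: default-src 'self'; script-src 'self'; connect-src 'self'
```

Sites that legitimately embed AI features would instead enumerate the specific API origins they trust in `connect-src` rather than allowing arbitrary endpoints.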

