AI Chatbots Hijacked as Stealthy Attack Proxies

Hackers covertly weaponize popular AI chatbots as hidden command-and-control attack channels.

  • Major AI platforms like Microsoft Copilot and xAI Grok can be exploited as stealthy command-and-control proxies.
  • The technique, dubbed "AI as a C2 proxy," uses web-browsing features for bidirectional attacker communication without requiring API keys.
  • This marks a significant evolution where AI can generate attack code dynamically and evade detection by blending into legitimate traffic.
  • Attackers must first compromise a target, then use the AI channel to relay commands and orchestrate the next stages of an attack.

Cybersecurity researchers disclosed in late February 2026 that popular AI assistants with web access can be weaponized into covert attack channels, a finding detailed by Check Point. This technique turns tools from Microsoft and xAI into hidden communication relays whose traffic blends into legitimate enterprise activity.


The method bypasses traditional security controls because it requires neither an API key nor a registered account. According to Check Point researchers, anonymous web access combined with browsing and summarization prompts enables the exploit, which they call AI as a C2 proxy.

The technique leverages the AI's URL-fetching capability to contact attacker-controlled infrastructure, which then returns a response containing the next command for malware already installed on the victim's machine.
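Because this relay traffic rides on an allowed AI-assistant domain, destination blocking alone offers little. One defensive angle suggested by the description above is timing analysis of egress logs. The sketch below is a simplified illustration, not Check Point's tooling: the host names, log format, and 0.1 jitter threshold are all assumptions made for the example.

```python
import statistics
from datetime import datetime, timedelta

# Hypothetical egress-log model: (timestamp, destination host) pairs.
# The host names below are illustrative stand-ins for AI-assistant
# endpoints, not confirmed indicators of compromise.
AI_ASSISTANT_HOSTS = {"copilot.microsoft.com", "grok.x.ai"}

def beaconing_score(timestamps):
    """Coefficient of variation of inter-request gaps: near-zero jitter
    suggests automated (implant) polling rather than interactive use."""
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or statistics.mean(gaps) == 0:
        return None
    return statistics.pstdev(gaps) / statistics.mean(gaps)

def flag_hosts(log, threshold=0.1):
    """Group entries by destination and flag AI-assistant hosts whose
    request timing is suspiciously regular."""
    by_host = {}
    for ts, host in log:
        if host in AI_ASSISTANT_HOSTS:
            by_host.setdefault(host, []).append(ts)
    return [
        host for host, times in by_host.items()
        if (score := beaconing_score(sorted(times))) is not None
        and score < threshold
    ]

if __name__ == "__main__":
    base = datetime(2026, 2, 25, 9, 0, 0)
    # A machine polling grok.x.ai every 60 seconds, on the dot:
    log = [(base + timedelta(seconds=60 * i), "grok.x.ai") for i in range(10)]
    print(flag_hosts(log))  # -> ['grok.x.ai']
```

Interactive human use of an assistant produces irregular gaps and would score well above the threshold, while a polling implant stands out by its regularity.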

This development signals a critical shift in how threat actors can abuse AI, as noted by cybersecurity firms. AI already acts as a force multiplier for adversaries in various attack phases.

However, the new technique goes further by automating operational decisions in real time. “The same interface can also carry prompts and model outputs that act as an external decision engine”, Check Point said regarding the potential for fully AI-driven implants.


The disclosure follows a similar recent finding where AI was used to generate malicious code dynamically in a victim's browser. That method, detailed by Palo Alto Networks Unit 42 researchers, can assemble a phishing page in real time by smuggling code through client-side AI API calls.

Unit 42 experts warned that carefully engineered prompts can bypass AI safety guardrails. “These snippets are returned via the LLM service API, then assembled and executed in the victim’s browser at runtime”, they said, resulting in a functional phishing page.
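One mitigation this style of client-side smuggling invites is a strict Content-Security-Policy, so that attacker-injected page script cannot reach third-party LLM APIs at runtime. The following is a minimal sketch under stated assumptions: the internal API origin is hypothetical, and a real policy would be tuned to the site's actual dependencies.

```python
# Minimal sketch: build a Content-Security-Policy header whose
# connect-src allow-list keeps page script from calling third-party
# LLM APIs at runtime. The internal origin below is hypothetical.
ALLOWED_CONNECT = ["'self'", "https://api.internal.example.com"]

def csp_header(allowed_connect):
    # script-src 'self' blocks inline and attacker-injected scripts;
    # connect-src limits where fetch()/XHR/WebSocket requests may go.
    return (
        "default-src 'self'; "
        "script-src 'self'; "
        f"connect-src {' '.join(allowed_connect)}"
    )

print(csp_header(ALLOWED_CONNECT))
```

Any origin absent from `connect-src`, including an LLM service API, is refused by the browser before the request leaves the page, which blunts the runtime-assembly step Unit 42 describes.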
