
Claude Maker Catches AI Firms in Major Distillation Attacks

Chinese AI Firms Target Claude in Massive, Illegal Model Extraction Attack

  • Anthropic identified three Chinese AI firms using over 24,000 fraudulent accounts for industrial-scale “distillation attacks” on its Claude model.
  • The illicit campaigns generated over 16 million exchanges to extract advanced capabilities like reasoning and coding, violating terms of service and regional bans.
  • The company warns such distillation strips crucial safety safeguards, creating significant national security risks that authoritarian governments could weaponize.
  • Google recently disclosed similar attacks on its Gemini model, indicating a growing trend of AI model extraction.

On February 24, 2026, AI company Anthropic publicly accused three Chinese competitors of massive, illegal campaigns to steal its technology. According to its report, DeepSeek, Moonshot AI, and MiniMax orchestrated “industrial-scale campaigns” using fraudulent accounts to extract Claude’s advanced capabilities.


These distillation attacks generated over 16 million exchanges through about 24,000 accounts, violating strict regional restrictions on its services in China. The technique involves training a weaker model on a stronger AI system’s outputs to cheaply acquire its skills.
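The core idea of distillation, training a cheap "student" model to mimic an expensive "teacher" using only the teacher's outputs, can be illustrated with a toy sketch. This is not Anthropic's or the attackers' actual tooling; the teacher here is a stand-in function observed only through query/response pairs, mirroring how an API-only attacker works:

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Stand-in for the expensive proprietary model: the attacker
    # never sees its internals, only its responses.
    return 3.0 * x + 1.0

# Step 1: collect many query/response pairs from the teacher.
queries = rng.uniform(-1, 1, size=1000)
responses = teacher(queries)

# Step 2: fit a cheap student model (here, a least-squares linear fit)
# purely on the observed outputs -- the essence of distillation.
A = np.stack([queries, np.ones_like(queries)], axis=1)
(w, b), *_ = np.linalg.lstsq(A, responses, rcond=None)

# Step 3: the student now reproduces the teacher's behavior
# without ever paying the teacher's development cost.
print(round(float(w), 3), round(float(b), 3))  # close to 3.0 and 1.0
```

Real distillation attacks apply the same pattern at vastly larger scale, with language-model transcripts in place of numeric pairs.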

Done illicitly, distillation lets rivals bypass the massive development costs they would otherwise face. Anthropic stated: “Illicitly distilled models lack necessary safeguards, creating significant national security risks.”

Consequently, foreign entities could weaponize these unprotected capabilities for malicious cyber activities or surveillance systems. The campaigns specifically targeted Claude’s most advanced features like agentic reasoning and coding across millions of prompts.

Anthropic attributed each attack using metadata and infrastructure clues, noting that the prompts’ volume and structure were distinct from normal use. The attacks relied on proxy services with “hydra cluster” architectures to evade detection.
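One signal mentioned above, request volume far beyond normal interactive use, lends itself to a simple heuristic. The sketch below is purely illustrative (the log fields and threshold are invented, not Anthropic's detection system): it flags accounts whose request counts are consistent with scripted extraction rather than human use.

```python
from collections import Counter

def flag_accounts(request_log, volume_threshold=500):
    """Return accounts whose request count exceeds a plausible
    interactive-use ceiling. Threshold is a made-up example value."""
    counts = Counter(entry["account"] for entry in request_log)
    return sorted(acct for acct, n in counts.items() if n > volume_threshold)

log = (
    [{"account": "bulk-001"}] * 700   # scripted extraction traffic
    + [{"account": "user-42"}] * 30   # normal interactive use
)
print(flag_accounts(log))  # ['bulk-001']
```

Production systems would combine many such signals (prompt structure, infrastructure fingerprints) rather than volume alone.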

To counter this, the company has built classifiers and strengthened verification for certain account types. This disclosure follows similar findings from Google, which recently disrupted attacks on its Gemini model.


