
Claude Maker Catches AI Firms in Major Distillation Attacks

  • Anthropic identified three Chinese AI firms using over 24,000 fraudulent accounts for industrial-scale “distillation attacks” on its Claude model.
  • The illicit campaigns generated over 16 million exchanges to extract advanced capabilities like reasoning and coding, violating terms of service and regional bans.
  • The company warns such distillation strips crucial safety safeguards, creating significant national security risks that authoritarian governments could weaponize.
  • Google recently disclosed similar attacks on its Gemini model, indicating a growing trend of AI model extraction.

On February 24, 2026, AI company Anthropic publicly accused three Chinese competitors of massive, illegal campaigns to steal its technology. According to the company’s report, DeepSeek, Moonshot AI, and MiniMax orchestrated “industrial-scale campaigns” that used fraudulent accounts to extract Claude’s advanced capabilities.

These distillation attacks generated over 16 million exchanges through roughly 24,000 accounts, violating Anthropic’s strict regional restrictions on its services in China. The technique involves training a weaker model on a stronger AI system’s outputs to cheaply acquire its skills.
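The core mechanic of distillation can be sketched in a few lines: a “student” model is trained to match the temperature-softened output distribution of a “teacher.” The toy example below uses made-up logits over three classes rather than any real model, purely to illustrate the training signal involved:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

# Hypothetical "teacher" output: fixed logits over 3 classes.
teacher_logits = np.array([4.0, 1.0, 0.5])

# "Student": learnable logits, trained to match the teacher's
# softened distribution -- the essence of knowledge distillation.
T = 2.0
target = softmax(teacher_logits, T)
student_logits = np.zeros(3)

for _ in range(500):
    pred = softmax(student_logits, T)
    grad = pred - target          # cross-entropy gradient w.r.t. logits
    student_logits -= 0.5 * grad  # plain gradient-descent step

# After training, the student reproduces the teacher's ranking
# without ever seeing the teacher's weights or training data.
```

In a real extraction campaign the “teacher outputs” would be API responses harvested at scale, which is why the 16 million exchanges matter: the outputs themselves are the training data.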

Illicit distillation lets rivals bypass the massive development costs they would normally face. Anthropic stated: “Illicitly distilled models lack necessary safeguards, creating significant national security risks.”

Consequently, foreign entities could weaponize these unprotected capabilities for malicious cyber activities or surveillance systems. The campaigns specifically targeted Claude’s most advanced features like agentic reasoning and coding across millions of prompts.

Anthropic attributed each attack to its source using metadata and infrastructure clues, noting that the prompts’ volume and structure were distinct from normal use. To evade detection, the attackers relied on proxy services with “hydra cluster” architectures.
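The “volume and structure” signal can be illustrated with a crude heuristic: flag accounts that combine unusually high request volume with highly repetitive prompt templates. The logs, thresholds, and template test below are all hypothetical; Anthropic’s actual classifiers are not public:

```python
from collections import defaultdict

# Hypothetical request logs: (account_id, prompt) pairs.
logs = [
    ("bulk_007", "Solve step by step: integral 1"),
    ("bulk_007", "Solve step by step: integral 2"),
    ("bulk_007", "Solve step by step: integral 3"),
    ("bulk_007", "Solve step by step: integral 4"),
    ("casual_42", "What's a good pasta recipe?"),
    ("casual_42", "Explain how DNS works"),
]

def flag_accounts(logs, min_volume=4, min_template_share=0.75):
    """Flag accounts with high volume AND near-identical prompt templates."""
    by_acct = defaultdict(list)
    for acct, prompt in logs:
        by_acct[acct].append(prompt)

    flagged = []
    for acct, prompts in by_acct.items():
        if len(prompts) < min_volume:
            continue  # low volume: ignore
        # Crude template key: the first four words of each prompt.
        templates = defaultdict(int)
        for p in prompts:
            templates[" ".join(p.split()[:4])] += 1
        # Flag if one template dominates the account's traffic.
        if max(templates.values()) / len(prompts) >= min_template_share:
            flagged.append(acct)
    return flagged
```

Here `flag_accounts(logs)` returns only `bulk_007`, whose traffic is both voluminous and templated, while the ordinary mixed-topic account passes. Production systems would use far richer features, but the intuition, distillation traffic looks statistically unlike human use, is the same.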

To counter this, the company has built classifiers and strengthened verification for certain account types. This disclosure follows similar findings from Google, which recently disrupted attacks on its Gemini model.
