- Anthropic blocked a large cyberattack in July 2025 that used its AI tool Claude for data theft and extortion.
- The operation targeted at least 17 organizations across healthcare, emergency services, government, and religious sectors.
- The attackers used Claude Code to automate hacking tasks, gather data, and generate tailored ransom demands of between $75,000 and $500,000 in Bitcoin.
- Other misuses of Claude include aiding fraudulent job schemes, developing ransomware, and supporting cyber operations against critical infrastructure.
- Anthropic has introduced new safeguards, including custom classifiers and technical sharing with partners, to counter advanced AI-driven cyber threats.
Anthropic reported that it stopped a sophisticated cyberattack in July 2025 in which unknown hackers used its AI chatbot, Claude, to steal large amounts of personal data and issue extortion demands. The attackers targeted at least 17 organizations, including healthcare providers, emergency services, government bodies, and religious groups.
The attackers used Claude Code, an agentic coding tool, as their main platform, running it on Kali Linux. According to Anthropic, instructions saved in a “CLAUDE.md” file gave context for each step of the attack. Instead of encrypting stolen data, the hackers threatened to publish it publicly, pressing victims to pay ransoms ranging from $75,000 to $500,000 in Bitcoin.
The operation, which Anthropic designated GTG-2002, relied on AI across multiple stages of the attack. Using Claude Code, the hackers scanned thousands of VPN endpoints to find weak points, gained initial access to networks, and harvested user credentials. They deployed AI-generated versions of tools such as Chisel to evade detection and disguised malicious files as legitimate Microsoft software, showing how AI can help produce malware that bypasses standard defenses.
“Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators,” Anthropic stated. The company explained that these tools can adjust in real time to security measures like malware detectors, making defense harder.
Anthropic developed custom classifiers to detect similar attacks and shared technical indicators with major partners to prevent future threats. The company also documented further abuse of Claude, including by North Korean and Chinese threat actors, ransomware developers, and fraudsters using the tool to build fake identities, automate credit card fraud, and enhance romance scams—sometimes advertising the chatbot as a “high EQ model” for these efforts.
The firm blocked North Korean hackers tied to the Contagious Interview campaign from creating accounts on its service to produce malware, phishing content, and harmful software packages. The report highlights the rising use of AI in complex cybercrimes, with Anthropic experts stating: “Criminals with few technical skills are using AI to conduct complex operations, such as developing ransomware, that would previously have required years of training.”
For complete details on specific cases, visit Anthropic’s official update.