- AI models from Anthropic and OpenAI created smart contract exploits totaling $4.6 million in recent tests.
- The models identified new zero-day vulnerabilities in recently deployed contracts worth nearly $3,700.
- The Smart Contracts Exploitation (SCONE) benchmark revealed AI-generated exploits could simulate losses exceeding $550 million.
- The AI output (measured in tokens) required to create an exploit has fallen by over 70% across model generations, cutting operational costs.
- Within one year, AI agents' exploitation rate on benchmark vulnerabilities rose from 2% to nearly 56%, shrinking the window between contract deployment and attack.
Recent experiments by Anthropic, a leading artificial intelligence company, and the AI safety research program Machine Learning Alignment & Theory Scholars (MATS) demonstrated that AI models can autonomously generate valuable exploits targeting smart contracts. Tests conducted in late 2025 showed that models including Anthropic's Claude Opus 4.5 and Claude Sonnet 4.5, as well as OpenAI's GPT-5, identified vulnerabilities amounting to $4.6 million in potential exploit revenue.
The research team also evaluated Sonnet 4.5 and GPT-5 against 2,849 recently deployed smart contracts with no known flaws. The models uncovered two novel zero-day vulnerabilities and created exploits valued at $3,694, while the GPT-5 API cost for the task was $3,476, so the exploit value slightly exceeded the cost of finding it. According to the team, “This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense” (source).
The study introduced the Smart Contracts Exploitation benchmark, or SCONE, which includes 405 contracts with confirmed exploits from 2020 to 2025. Ten different AI models were tested, collectively producing exploits for 207 contracts. These attacks corresponded to simulated losses exceeding $550 million.
Researchers observed a significant decrease in the AI output required to develop exploits, measured in tokens, the basic unit of text that models process. Analysis across four generations of Claude models showed a 70.2% reduction in tokens needed per successful exploit, indicating falling operational costs for AI-powered contract hacking (source).
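If that 70.2% overall reduction accrued evenly across the three steps between four model generations (an assumption for illustration; the study reports only the aggregate figure), the implied per-generation decline can be sketched as:

```python
# Aggregate figure from the study: 70.2% fewer tokens per successful
# exploit across four Claude generations. Assuming an equal proportional
# reduction at each of the three generation-to-generation steps:
overall_fraction_remaining = 1 - 0.702   # 29.8% of tokens remain overall
steps = 3                                # four generations -> three transitions
per_step = overall_fraction_remaining ** (1 / steps)

print(f"Tokens remaining per generation: {per_step:.1%}")
print(f"Per-generation reduction:        {1 - per_step:.1%}")
```

Under this even-decline assumption, each new generation needs roughly a third fewer tokens than its predecessor, a compounding drop in exploit cost.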
The study highlighted rapid improvements in AI smart contract exploitation capabilities. Within 12 months, AI agents improved the exploitation rate of benchmark vulnerabilities from 2% to 55.88%, raising total simulated exploit revenue from $5,000 to $4.6 million. Most vulnerabilities found in 2025 could be exploited autonomously by available AI systems.
The average scanning cost for finding contract vulnerabilities was calculated at $1.22 per contract. Researchers warn that these decreasing costs and rising AI efficiency will shorten the interval between contract deployment and exploitation, reducing the available time for developers to identify and fix security issues before attacks occur.
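The per-contract figure follows directly from the numbers reported earlier in the piece; a quick sketch of the arithmetic (dollar amounts are the article's, the calculation itself is purely illustrative):

```python
# Back-of-the-envelope economics using the figures reported above.
contracts_scanned = 2_849   # recently deployed contracts evaluated
api_cost_total = 3_476      # reported GPT-5 API cost, USD
exploit_value = 3_694       # value of the two zero-day exploits, USD

cost_per_contract = api_cost_total / contracts_scanned
net_profit = exploit_value - api_cost_total

print(f"Scanning cost per contract: ${cost_per_contract:.2f}")
print(f"Net profit from the run:    ${net_profit}")
```

Running the numbers gives the $1.22-per-contract scanning cost the researchers cite, and a small positive margin on the run as a whole, which is what makes the proof-of-concept economically notable.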
