AI Agents Can Now Hack Smart Contracts for Profit, Anthropic Warns


According to TheRegister.com, Anthropic researchers have published a warning that AI agents are now capable of autonomously finding and exploiting vulnerabilities in blockchain smart contracts for substantial profit. They created a benchmark called SCONE-bench, using 405 real exploited contracts from 2020 to 2025, and found that models like Claude Opus 4.5 and OpenAI’s GPT-5 could have generated exploit code worth $4.6 million. In a separate simulation against 2,849 recently deployed contracts, the AI agents identified two zero-day flaws and created exploits worth $3,694. The research shows the average cost to run the AI agent per contract was just $1.22, with the cost to find a single vulnerable contract falling to about $1,738. Most alarmingly, the company states that exploit revenue from stolen simulated funds has roughly doubled every 1.3 months over the last year.


The New Economics of Hacking

Here’s the thing that should keep security professionals up at night: this isn’t just about capability; it’s about economics. The barrier to entry for sophisticated financial attacks is collapsing. Anthropic’s numbers tell a stark story. When other researchers did similar work last year, the cost to find a vulnerable contract was around $3,000. Now it’s down to $1,738. And the AI’s average net profit per exploit in their test was $109.

That might not sound like a fortune, but think about scale. An automated agent doesn’t get tired. It can run 24/7, scanning thousands of contracts. What happens when the cost drops to $500 per find? Or $100? We’re looking at the industrialization of smart contract exploitation. The research basically proves that “profitable, real-world autonomous exploitation is technically feasible.” That’s a dry, academic way of saying the genie is out of the bottle.
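The scale argument falls straight out of the article’s own figures. Here is a back-of-envelope sketch (a hypothetical extrapolation for illustration, not Anthropic’s model; the budget figure is an assumption):

```python
# Back-of-envelope economics from the figures quoted in the article.
cost_per_contract = 1.22   # avg agent cost to analyze one contract (USD)
cost_per_find = 1738.0     # avg cost to surface one vulnerable contract (USD)

# Implied number of contracts scanned per vulnerable contract found.
contracts_per_find = cost_per_find / cost_per_contract
print(round(contracts_per_find))  # ~1425 contracts per find

# Vulnerable contracts an always-on agent could afford to surface
# on a modest, purely illustrative budget.
budget = 100_000.0  # USD (assumed, not from the article)
print(int(budget // cost_per_find))  # ~57 finds
```

At roughly 1,400 contracts scanned per hit, the bottleneck isn’t human attention at all; it’s an API bill.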

AI-on-AI Crime and Defense

So what’s the solution? Anthropic’s conclusion is, perhaps unsurprisingly, more AI. They argue we need “proactive adoption of AI for defense.” It’s the digital equivalent of an arms race. As offensive AI agents get cheaper and smarter, defensive AI systems will have to be deployed to monitor and harden smart contracts in real time. The alternative is an ecosystem where deploying code on a blockchain becomes an increasingly high-stakes gamble.

But let’s be real for a second. Doesn’t this also underscore a fundamental problem with the smart contract model itself? If a piece of immutable financial code can be picked apart by a $1.22 API call, how secure is it really? The promise of “code is law” starts to look shaky when the law can be gamed by an autonomous agent running on credits. This research is a massive warning shot to the entire DeFi and blockchain space. Security audits can’t be a one-time event anymore. They need to be continuous, automated, and arguably AI-powered.

A Call to Arms and a Wake-Up Call

The trajectory here is undeniable and scary. Exploit revenue doubling every 1.3 months is an exponential curve. We’re not talking about a gradual improvement. We’re talking about a phase change. This moves the threat from theoretical to operational. And while Anthropic framed this as a noble act, since they could have stolen the $4.6 million but didn’t, the cat is now publicly out of the bag. Other actors, with fewer scruples, won’t be so restrained.
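To make the exponential concrete: if simulated exploit revenue keeps doubling every 1.3 months, the compounding over a year is dramatic. A quick sketch (a naive extrapolation of the reported rate, nothing more):

```python
# Naive projection from the reported doubling period.
doubling_months = 1.3          # from the article
horizon_months = 12

# Growth factor over the horizon if the trend holds.
growth = 2 ** (horizon_months / doubling_months)
print(f"~{growth:.0f}x in a year")  # ~601x in a year
```

No exponential survives contact with reality forever, but even a few more doublings moves this from research curiosity to industrial-scale threat.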


Ultimately, Anthropic’s blog post and the accompanying SCONE-bench GitHub repository (built on DeFiHackLabs data) amount to a crucial piece of research. It’s a quantified, peer-into-the-future look at a new class of automated threat. The age of the AI hacker isn’t coming. According to this data, it’s already here, and it’s starting to turn a profit.
