OpenAI, led by Sam Altman, has launched EVMbench, a new testing framework that gauges whether artificial intelligence can understand and help secure smart contracts running on Ethereum and other EVM-compatible chains.

Smart contracts are immutable programs that power decentralized exchanges, lending platforms and much of DeFi. Because deployed contracts can't easily be changed, bugs and vulnerabilities put real money at risk. With billions of dollars in open-source crypto assets on the line (OpenAI puts the figure at "$100B+"), the need for robust security tools is urgent.

EVMbench, developed with crypto investment firm Paradigm, evaluates AI systems on real-world vulnerabilities drawn from past audits and security competitions. It measures three core capabilities:

- Detecting security bugs in smart contract code,
- Exploiting those bugs in a controlled environment to demonstrate impact, and
- Patching the vulnerable code without breaking intended functionality.

OpenAI frames EVMbench as an attempt to create a clear, economically meaningful standard for assessing AI in blockchain security. As AI agents grow better at reading, writing and executing code, the company says it is critical both to measure their capabilities and to encourage defensive use: using AI to audit and harden contracts before attackers can exploit them.

For DeFi builders, auditors and security teams, EVMbench could serve as a benchmark for how reliably AI-assisted tooling spots and fixes the kinds of flaws that have previously led to costly exploits.
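
To make the three measured capabilities concrete, here is a minimal sketch of how a detect/exploit/patch benchmark could aggregate per-contract outcomes into pass rates. The class names, fields and scoring scheme are hypothetical illustrations under assumptions about how such a harness might be organized; they are not EVMbench's actual interface, which the announcement does not detail.

```python
# Hypothetical sketch: scoring an AI agent across detect / exploit / patch
# tasks on a set of known-vulnerable contracts. Names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class CaseResult:
    """Outcome of one benchmark case (one vulnerable contract)."""
    detected: bool   # agent flagged the planted vulnerability
    exploited: bool  # agent's proof-of-concept succeeded in the sandbox
    patched: bool    # agent's fix removed the bug without breaking intended behavior


@dataclass
class BenchmarkReport:
    results: list = field(default_factory=list)

    def score(self) -> dict:
        """Return the pass rate for each of the three capabilities."""
        n = len(self.results) or 1  # avoid division by zero on an empty run
        return {
            "detect": sum(r.detected for r in self.results) / n,
            "exploit": sum(r.exploited for r in self.results) / n,
            "patch": sum(r.patched for r in self.results) / n,
        }


# Example: two cases, one fully solved and one only detected.
report = BenchmarkReport([
    CaseResult(detected=True, exploited=True, patched=True),
    CaseResult(detected=True, exploited=False, patched=False),
])
print(report.score())  # {'detect': 1.0, 'exploit': 0.5, 'patch': 0.5}
```

Splitting the score by capability, rather than reporting a single pass/fail number, mirrors the article's framing: an agent that can find a bug but not safely patch it tells defenders something different from one that can do all three.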