- OpenAI reports tripling its user base to 300 million weekly active users as it advances toward artificial general intelligence (AGI).
- Ethereum co-creator Vitalik Buterin proposes blockchain-based safety controls for advanced AI systems.
- Buterin’s plan includes a “soft pause” mechanism requiring weekly approval from three international groups.
- OpenAI CEO Sam Altman expects AI agents to join the workforce and materially affect company output by 2025.
- The contrasting approaches highlight growing tensions between rapid AI advancement and safety measures.
OpenAI and Ethereum leaders have presented opposing visions for artificial intelligence development: Sam Altman announced major user growth while Vitalik Buterin proposed new safety measures built on blockchain technology.
OpenAI’s Growth and AGI Aspirations
In a recent blog post, OpenAI CEO Sam Altman reported that the company’s weekly active users increased from 100 million to 300 million over the past two years. Altman stated, "We are now confident we know how to build AGI as we have traditionally understood it," and suggested AI agents could begin joining the workforce in 2025.
The company has expanded beyond its research lab origins, developing commercial applications while pursuing artificial general intelligence (AGI) – AI systems capable of performing any intellectual task that humans can do.
Blockchain-Based Safety Protocols
Ethereum co-creator Vitalik Buterin introduced a defensive approach called "d/acc" (decentralized/defensive acceleration), contrasting with Silicon Valley’s aggressive "e/acc" (effective accelerationism) philosophy. The proposal would use blockchain technology and zero-knowledge proofs to create global safety controls for advanced AI systems.
Under Buterin’s framework, major AI computing systems would require weekly authorization from three international bodies to keep operating. The pause would be all-or-nothing: either every covered system halts or none does, preventing the mechanism from being enforced selectively against particular actors.
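Buterin has not published an implementation, so the sketch below is only an illustration of how such an all-or-nothing weekly check might work. The body names, shared keys, and HMAC-based approvals are hypothetical stand-ins for the digital signatures a real design, likely verified on-chain, would use.

```python
import hmac
import hashlib
from datetime import date

# Hypothetical oversight bodies and keys, purely for illustration.
OVERSIGHT_KEYS = {
    "body_a": b"demo-key-a",
    "body_b": b"demo-key-b",
    "body_c": b"demo-key-c",
}

def current_week() -> str:
    """Identify the approval period, e.g. '2025-W01'."""
    iso = date.today().isocalendar()
    return f"{iso.year}-W{iso.week:02d}"

def sign_week(key: bytes, week: str) -> str:
    """Stand-in for a body publishing a signed weekly approval."""
    return hmac.new(key, week.encode(), hashlib.sha256).hexdigest()

def may_operate(approvals: dict[str, str]) -> bool:
    """All-or-nothing check: every body must have signed this week."""
    week = current_week()
    return all(
        hmac.compare_digest(approvals.get(body, ""), sign_week(key, week))
        for body, key in OVERSIGHT_KEYS.items()
    )

# With all three weekly approvals present, hardware keeps running.
week = current_week()
approvals = {body: sign_week(key, week) for body, key in OVERSIGHT_KEYS.items()}
assert may_operate(approvals)

# Withhold any single approval and the check fails everywhere at once.
del approvals["body_c"]
assert not may_operate(approvals)
```

Because every device evaluates the same three approvals against the same week string, withholding one approval pauses all participating systems simultaneously, which is exactly the selective-enforcement problem the all-or-nothing design is meant to rule out.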
Industry Impact and Implementation Challenges
The implementation of global AI safety measures faces significant hurdles, requiring unprecedented cooperation between AI developers, government regulators, and blockchain technology experts. Buterin emphasized, "A year of ‘wartime mode’ can easily be worth a hundred years of work under conditions of complacency."
Zero-knowledge proofs, a cryptographic method that lets one party prove a statement is true without revealing the underlying information, would serve as the technical foundation for the proposed safety system. This approach aims to balance technological progress with human agency and safety considerations.
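To make the cryptography concrete, here is a minimal sketch of one classic zero-knowledge proof, the Schnorr identification protocol. The tiny parameters are chosen for readability and are not drawn from Buterin’s proposal; a real deployment would use large prime groups or elliptic curves.

```python
import secrets

# Toy parameters: p = 2q + 1 with p, q prime; g generates the order-q subgroup.
p = 2039
q = 1019
g = 4

x = secrets.randbelow(q - 1) + 1   # prover's secret
y = pow(g, x, p)                   # public value everyone can see

# 1. Prover commits to a fresh random nonce.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Verifier issues a random challenge.
c = secrets.randbelow(q)

# 3. Prover responds; the response reveals nothing about x on its own.
s = (r + c * x) % q

# 4. Verifier accepts iff g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted: the prover knows x without ever disclosing it")
```

The verifier learns only that the prover knows the secret behind the public value, never the secret itself, which is the kind of property the proposed safety system would build on.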
The divergent perspectives from these industry leaders reflect broader debates about managing AI development responsibly while maintaining innovation momentum. Neither OpenAI nor other major AI developers have publicly responded to Buterin’s proposal.