OpenAI's Altman and Ethereum's Buterin outline competing visions for the future of AI


This week, two of the most influential voices in technology offered contrasting visions for the development of artificial intelligence, highlighting the growing tension between innovation and safety.

OpenAI CEO Sam Altman revealed Sunday evening, in a blog post about his company's trajectory, that OpenAI has tripled its user base to more than 300 million weekly active users as it moves toward artificial general intelligence (AGI).

“We are now confident that we know how to build artificial general intelligence as we have traditionally understood it,” Altman said, claiming that in 2025, AI agents could “join the workforce” and materially change the output of companies.

Altman said OpenAI is setting its sights beyond AI agents and AGI, with the company beginning work on “superintelligence in the true sense of the word.”

The timeframe for the delivery of artificial general intelligence or superintelligence is unclear. OpenAI did not immediately respond to a request for comment.

But hours earlier on Sunday, Ethereum co-founder Vitalik Buterin suggested using blockchain technology to create global fail-safe mechanisms for advanced AI systems, including a “soft pause” capability that could temporarily restrict industrial-scale AI operations if warning signs appear.

Cryptography-based security for AI safety

Buterin is talking here about “d/acc,” or decentralized/defensive acceleration. In its simplest sense, d/acc is a variation on e/acc, or effective accelerationism, a philosophical movement espoused by Silicon Valley luminaries like a16z's Marc Andreessen.

Buterin's d/acc program also supports technological progress but prioritizes developments that enhance human safety and agency. Unlike effective accelerationism (e/acc), which takes a “growth at all costs” approach, d/acc focuses on building defensive capabilities first.

“D/acc is an extension of the core values of crypto (decentralization, censorship resistance, open global economy, community) to other areas of technology,” Buterin wrote.

Looking at how d/acc has progressed over the past year, Buterin wrote about how a more cautious approach to artificial general intelligence and superintelligent systems could be implemented using existing cryptographic mechanisms such as zero-knowledge proofs.

Under Buterin's proposal, the hardware running major AI systems would need weekly approval from three international bodies to continue operating.

“The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing all other devices.”

The system would act as a master switch: either all approved computers run, or none of them do, preventing anyone from carrying out selective enforcement.
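The all-or-nothing logic of that master switch can be sketched in a few lines. This is a minimal, hypothetical illustration, not Buterin's design: it uses symmetric HMAC secrets as a stand-in for the public-key signatures and zero-knowledge proofs he describes, and the body names, message format, and week numbering are all invented for the example.

```python
# Hypothetical sketch of the "soft pause" check: a device may operate for a
# given week only if ALL three authorizing bodies have signed off for that
# week. HMAC stands in for real asymmetric signatures to keep this runnable.
import hmac
import hashlib

# Illustrative only: real bodies would publish signatures verifiable with
# public keys, not shared secrets.
BODY_KEYS = {
    "body_a": b"secret-key-a",
    "body_b": b"secret-key-b",
    "body_c": b"secret-key-c",
}

def weekly_message(week: int) -> bytes:
    """The statement each body signs for a given week (invented format)."""
    return f"continue-operation:week={week}".encode()

def sign(body: str, week: int) -> str:
    """What an authorizing body would publish for the week."""
    return hmac.new(BODY_KEYS[body], weekly_message(week), hashlib.sha256).hexdigest()

def may_operate(week: int, signatures: dict) -> bool:
    """All-or-nothing: every body's valid signature for this week is required."""
    msg = weekly_message(week)
    return all(
        hmac.compare_digest(
            signatures.get(body, ""),
            hmac.new(key, msg, hashlib.sha256).hexdigest(),
        )
        for body, key in BODY_KEYS.items()
    )

sigs = {body: sign(body, week=1) for body in BODY_KEYS}
print(may_operate(1, sigs))   # all three signed: device may run
del sigs["body_c"]
print(may_operate(1, sigs))   # one signature missing: soft pause
```

Because verification depends only on the published signatures and the week number, the same check runs identically on every device, which is what makes selectively exempting a single machine impractical in this scheme.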

“Until such a critical moment occurs, merely having the ability to soft-pause would do little harm to developers,” Buterin noted, describing the system as a form of insurance against catastrophic scenarios.

Meanwhile, OpenAI's explosive growth since 2023, from 100 million to 300 million weekly users in just two years, shows how rapidly AI adoption is progressing.

As OpenAI has evolved from an independent research lab into a major technology company, Altman acknowledged the challenges of building “an entire company, almost from scratch, around this new technology.”

The proposals reflect broader industry discussions about managing AI development. Proponents have previously argued that implementing any global control system would require unprecedented cooperation between major AI developers, governments, and the cryptocurrency sector.

Buterin wrote: “A year of ‘wartime mode’ can easily be worth a hundred years of work under conditions of complacency. If we have to restrict people, it seems better to restrict everyone equally and do the hard work of actually trying to cooperate to organize that, rather than one party seeking to dominate everyone.”

Edited by Sebastian Sinclair
