Vitalik Calls for "Soft Pause" on AI Development to Mitigate Risks

The Ethereum co-founder proposes a temporary limit on industrial-scale AI computing, emphasizing the need for oversight and ethical safeguards.

Ethereum co-founder Vitalik Buterin is proposing, as a last resort, a "soft pause" on industrial-scale computing power to address the potential risks associated with rapidly advancing artificial intelligence (AI).

In a blog post published on Jan. 5, Buterin suggested that reducing global computing resources by roughly 90–99% for one to two years could give humanity crucial time to prepare for the emergence of superintelligent AI systems. He stressed that such measures should be taken only if less restrictive options, such as holding AI developers liable for damages, fail.

“The goal would be to have the capability to reduce worldwide available compute by ~90-99% for 1-2 years at a critical period, to buy more time for humanity to prepare,” he writes. “The value of 1-2 years should not be overstated: a year of ‘wartime mode’ can easily be worth a hundred years of work under conditions of complacency.”

Buterin's proposal comes amid growing concerns within the tech community about the accelerated development of AI. In May 2023, tech experts and researchers signed an open letter published in the journal Science calling for a halt in AI development, citing risks that include “large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems.”

“Vitalik's concerns about super-intelligent AI are based on real risks, in particular, those that revolve around the potential saturation of power and decision-making in AI systems,” Todd Ruoff, the CEO of Autonomys, told The Defiant. “These are challenges that require oversight and governance frameworks to address.”

Ruoff explained that decentralization offers a path forward by sharing control and fostering a collaborative approach to development. “This would ensure that AI systems remain accountable and, even more importantly, transparent and more prepared to mitigate potential risks of misuse or further consequences,” he said.

He also noted that Buterin’s proposal has sparked a critical discussion, urging the tech industry to prioritize ethical development and robust safeguards as AI technology continues its inevitable progress.
Buterin’s Proposal

To implement the proposed pause, Buterin suggested that industrial-scale AI hardware could be equipped with a trusted chip requiring weekly authorization from major international bodies, at least one of which would be non-military-affiliated. This mechanism would ensure that AI systems operate only under proper oversight, mitigating potential risks.

“The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing,” Buterin wrote. “There would be no practical way to authorize one device to keep running without authorizing all other devices.”
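The all-or-nothing property Buterin describes can be illustrated with a minimal sketch: every device checks the same device-independent weekly message against signatures from all required bodies, so no single device can be authorized separately. The body names are illustrative, and HMAC stands in here for the real public-key signatures (and optional zero-knowledge proofs) the proposal envisions.

```python
import hmac
import hashlib
from datetime import date

# Illustrative names for the overseeing bodies -- not from the proposal.
REQUIRED_BODIES = ["body_a", "body_b", "non_military_body"]

def week_message(today: date) -> bytes:
    # The message covers only the current ISO week, never a device ID,
    # so one set of signatures authorizes all devices or none.
    year, week, _ = today.isocalendar()
    return f"authorize-compute:{year}-W{week:02d}".encode()

def sign(secret: bytes, message: bytes) -> str:
    # HMAC-SHA256 stands in for a real public-key signature scheme.
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def device_may_run(signatures: dict, secrets: dict, today: date) -> bool:
    # A device runs only if ALL required bodies signed this week's message.
    msg = week_message(today)
    return all(
        body in signatures
        and hmac.compare_digest(signatures[body], sign(secrets[body], msg))
        for body in REQUIRED_BODIES
    )
```

With all three signatures present the check passes; withhold any one (for example, the non-military body's) and every device fails the same check, capturing the "no practical way to authorize one device without authorizing all" design.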

He emphasized in the post that "artificial superintelligence" may take five years to emerge, or it may take fifty. "Either way, it's not clear that the default outcome is automatically positive, and as described in this post and the previous one, there are multiple traps to avoid," he warned.
