The AI ‘Pause’ Proposal Is Deceptive and Alarmingly Hazardous

As originally published in CoinDesk

Last month, several tech giants signed a letter calling for a six-month pause on training artificial intelligence (AI) models more powerful than GPT-4.

This letter is dangerous and should alarm thoughtful citizens. In it, the signatories claim that a pause will allow humanity more time to understand and respond to the potential risks of AI. In reality, the letter serves to rally public support for OpenAI and its allies as they consolidate their dominance, build an extended innovation lead and secure their advantage over a technology of fundamental importance to the future. If this occurs, it will irreparably harm Americans – our economy and our people.


GPT-4 and similar foundation models promise to increase human capacity 1,000-fold, driving social change across many arenas of life. The industry as currently structured is likely to solidify into a cabal that decides who benefits from this technology.

Imagine if, in 1997, Microsoft and Dell had issued a similar “pause” letter, urging a halt to browser innovation and a ban on new e-commerce sites for six months, citing their own research that the internet would destroy brick-and-mortar stores and aid terrorist finance. Today we’d recognize this as self-serving alarmism and a regulatory capture attempt.

The “pause” letter is no different. A few outspoken, charismatic leaders are making a power grab under the guise of protecting us from the dangers of AI. They have positioned themselves as the sole arbiters of what technology the world gets to see and use, and the deciders of what makes an AI “accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal.”

It’s wrong for a handful of billionaires to decide what’s good and safe for the world. Even well-intentioned AI leaders should not hold such power. Absolute power corrupts absolutely.

The world is in a race toward next-generation foundation models. Nobody racing will halt or even slow research and development. Independent AI labs and foreign rivals, eager to integrate advanced AI into their systems, won’t pause; they will continue relentlessly.

How can we ensure all humans benefit from the 1,000x improvement AI offers us? The only way is through free and open development, including the sharing of capabilities, methodologies and network checkpoints. For years, EleutherAI has led the way, resisting alarmist warnings that GPT-2 and GPT-3 were “too dangerous” for the public and fearlessly releasing its research and models. We need efforts 100 to 1,000 times the scale of EleutherAI, worldwide. (Note: This author is not affiliated with EleutherAI.)

We should not pause. Right now we all must prioritize, invest in, contribute to and broadly publish genuinely open AI models. We must remain on guard against those who seek to control humanity’s destiny.

Eleven years ago, early pioneers in the blockchain space came together to form the Bitcoin Foundation, a model of crypto-industry organization that balanced the needs and goals of private individuals with those of the companies that would make crypto thrive. The Foundation encompassed many kinds of people and many goals, but we were united in wanting Bitcoin to thrive outside the control of any single group or cabal. I believe something similar is needed for AI technology: an effort that takes the best lessons from more than a decade of decentralized technology and matches them with a fierce commitment to independence, humanity and openness.

The proposed pause would consolidate control of AI development among the wealthy and powerful. Instead, let us race forward together and in the open. I invite the like-minded, the skeptical and the curious to discuss how we can achieve a better future. Connect with others who are committed to keeping AI free and open for all at freeaimovement.com.