
Tech leaders and AI experts demand a six-month pause on 'out-of-control' AI experiments

The open letter warns of risks to humans if safety isn't given greater consideration.


An open letter signed by tech leaders and prominent AI researchers calls for AI labs and companies to "immediately pause" their work. Signatories including Steve Wozniak and Elon Musk agree the risks warrant a pause of at least six months on development of AI more powerful than GPT-4, so society can enjoy existing AI systems, adjust to them and ensure they benefit everyone. The letter adds that the care and forethought needed to make AI systems safe are being ignored.

The reference to GPT-4, an OpenAI model that responds with text to written or image prompts, comes as companies race to build complex chat systems around the technology. Microsoft, for example, recently confirmed that its revamped Bing search engine has been powered by GPT-4 for over seven weeks, while Google recently debuted Bard, its own generative AI system built on LaMDA. Unease about AI has circulated for years, but the apparent race to deploy the most advanced technology first has made the concerns more urgent.

"Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control," the letter states.

The letter was published by the Future of Life Institute (FLI), an organization dedicated to minimizing the risks and misuse of new technology. Musk previously donated $10 million to FLI to fund research into AI safety. In addition to him and Wozniak, signatories include a slew of global AI leaders, such as Center for AI and Digital Policy president Marc Rotenberg, MIT physicist and FLI president Max Tegmark, and author Yuval Noah Harari. Harari also co-wrote a New York Times op-ed last week warning about AI risks, along with Center for Humane Technology founders and fellow signatories Tristan Harris and Aza Raskin.

The call feels like a next step of sorts from a 2022 survey of more than 700 machine learning researchers, in which nearly half of participants said there's a 10 percent chance of an "extremely bad outcome" from AI, including human extinction. When asked about safety in AI research, 68 percent of respondents said more, or much more, should be done.

Anyone who shares concerns about the speed and safety of AI development can add their name to the letter. However, new signatures aren't necessarily verified, so any notable names added after the initial publication may be fake.
