Elon Musk, Steve Wozniak and over 1,000 others are calling for a pause on giant AI experiments

AI systems with human-competitive intelligence can pose profound risks to society and humanity, warns the Future of Life Institute, which has published an open letter calling for a pause on large-scale artificial intelligence experiments. Its signatories include Elon Musk, Steve Wozniak and more than 1,000 others.

According to the Asilomar AI Principles, formulated in 2017, advanced artificial intelligence could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects,” the authors of the letter note.

OpenAI’s recent statement on artificial general intelligence notes that “at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” That point has come, the letter’s authors argue, and they call on all AI laboratories to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”.

The letter’s authors advise AI laboratories and independent experts to use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts.

These protocols must ensure that systems adhering to them are safe beyond a reasonable doubt.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

At the same time, AI developers must work with policymakers to significantly accelerate the development of robust AI governance systems. They should, at a minimum, include:

  • new and capable regulatory authorities dedicated to AI;
  • oversight and tracking of highly capable AI systems and large pools of computational capability;
  • provenance and watermarking systems to help distinguish real from synthetic and to track model leaks;
  • a robust auditing and certification ecosystem;
  • liability for AI-caused harm;
  • robust public funding for technical AI safety research;
  • and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

“Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.”