A group of industry leaders is warning that the artificial intelligence technology they are building could one day threaten the existence of humanity and should be treated as a societal risk on par with pandemics and nuclear war, The New York Times reports.
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a statement released by the Center for AI Safety, a nonprofit organization.
The open letter was signed by more than 350 executives, researchers, and engineers working in artificial intelligence. The signatories include the chief executives of three leading AI companies: OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei.
Dan Hendrycks, the executive director of the Center for AI Safety, said the open letter was a "coming out" for some industry leaders who had previously expressed concerns about the risks of the technology they were developing, but only in private.
The brevity of the statement, he said, was intended to bring together AI experts who may disagree about the nature of specific risks or steps to prevent those risks, but who share a common concern about powerful AI systems.
Sam Altman recently testified before the Senate Judiciary Committee, answering lawmakers' questions about the potential unintended consequences of generative artificial intelligence. He urged Congress to pass new AI laws and regulations as soon as possible to set standards for how technology companies train and release new AI systems.