OpenAI and Google DeepMind employees warn about the risks of “human extinction” due to AI
A group of current and former employees of companies working in the field of artificial intelligence has expressed concern about the risks associated with the technology, Reuters reports.
An open letter from 11 current and former OpenAI employees, as well as one current and one former Google DeepMind employee, states that companies’ financial motives prevent effective AI oversight.
“We do not believe bespoke structures of corporate governance are sufficient to change this,” they said in their letter.
Experts also warn about the risks of unregulated AI. These include the spread of misinformation, the deepening of existing inequalities, and even the loss of control over autonomous AI systems, which could ultimately lead to "human extinction."
Experts have found examples of image generators from companies including OpenAI and Microsoft producing images containing voting-related disinformation, despite policies prohibiting such content.
The letter says that AI companies have only "weak obligations" to share information with governments about the capabilities and limitations of their systems, and that these firms cannot be relied upon to share such information voluntarily.
The open letter is the latest to raise concerns about the safety of generative AI technology, which can quickly and cheaply produce human-like text, images, and audio.
Notably, in February of this year, Google, Apple, Meta, Microsoft, and a number of other large tech companies joined the Artificial Intelligence Safety Institute Consortium (AISIC), established in the United States. In total, more than 200 companies and organizations have joined the initiative.