OpenAI has announced a new team that will study artificial intelligence models to protect against what it calls "catastrophic risks", TechCrunch reports.
The team, called Preparedness, will be responsible for tracking, predicting, and protecting against the dangers of future AI systems, ranging from their ability to persuade and deceive people to their capacity to generate malicious code.
In its blog post, OpenAI cites “chemical, biological, radiological, and nuclear” threats as the ones that are of greatest concern when it comes to AI models.
The company is also open to exploring "less obvious" areas of AI risk. To that end, OpenAI is soliciting ideas for risk research, offering a $25,000 prize and a job at Preparedness for each of the top ten entries.
Earlier this summer, the European Parliament adopted a bill known as the Artificial Intelligence Act, which imposes new restrictions on uses of the technology deemed most risky. The final version of the document is not expected to be adopted before the end of this year.