OpenAI has unveiled a plan to ensure the safety of its most advanced artificial intelligence models, Reuters reports. The plan includes several initiatives.

For example, OpenAI will deploy its latest technology only after it is deemed safe in specific areas, such as cybersecurity and nuclear threats.

The company will also establish an advisory group to review safety reports and forward them to executives and the board of directors. While executives will make the decisions, the board will have the power to overturn them.

Since the launch of ChatGPT, the potential risks of AI have drawn close attention from researchers and the public alike. Generative AI has impressed users with its capabilities, but it has also raised safety concerns over its potential to spread misinformation and manipulate people.

Meanwhile, Bill Gates believes that generative AI has hit a plateau. In his view, the leap from GPT-2 to GPT-4 was incredible, but GPT technology has now leveled off, and GPT-5 is unlikely to be significantly better than its predecessor, GPT-4. Even so, Gates sees significant potential for AI in the short term.