OpenAI’s GPT-4 poses “at most” a minor risk of helping people create biological threats, according to the company’s preliminary tests, Bloomberg reports.

OpenAI conducted the tests to understand and prevent potential “catastrophic” harm from its technology. Many experts, including some technology company executives, have raised concerns that AI could help attackers develop biological weapons.

For the study, OpenAI assembled 50 biology experts and 50 college-level biology students. Half of the participants were asked to complete tasks related to creating a biological threat using both the internet and GPT-4; the other half were given access to the internet only.

Comparing the two groups’ results, the study’s authors found a slight increase in “accuracy and completeness for those who had access to the language model.” The researchers therefore concluded that access to GPT-4 “provides at most a mild uplift in information acquisition for biological threat creation.”

OpenAI called this conclusion a starting point for further research and public discussion.

Last fall, OpenAI announced the creation of a new team to study its artificial intelligence models and guard against what it calls “catastrophic risks.”