Researchers have investigated how AI models behave in simulated international conflicts. The paper, titled "Escalation Risks from Language Models in Military and Diplomatic Decision-Making," was presented at the NeurIPS 2023 conference, according to The Register.

A team from the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative worked on the project.

As part of the study, the researchers tested five major language models: GPT-4, GPT-3.5, Claude 2, Llama-2 (70B) Chat, and GPT-4-Base. Each model was used to power autonomous agents representing nations, which interacted with one another in a turn-based wargame built around conflict scenarios.

On each turn, the agents interacted and chose from a set of predefined actions. These included waiting, messaging other countries, nuclear disarmament, high-level visits, defense and trade agreements, sharing threat intelligence, international arbitration, forming alliances, blockades, invasions, and "execute full nuclear attack."
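To make the setup concrete, here is a minimal sketch of such a turn-based, agent-based loop. This is not the authors' code: the action list is paraphrased from the article, the escalation weights are purely illustrative, and `query_model` is a hypothetical placeholder for whatever LLM API would actually be prompted.

```python
from dataclasses import dataclass, field

# Paraphrased action menu from the study; ordered roughly from de-escalatory to escalatory.
ACTIONS = [
    "wait", "message another country", "nuclear disarmament", "high-level visit",
    "defense agreement", "trade agreement", "share threat intelligence",
    "international arbitration", "form alliance", "blockade",
    "invasion", "execute full nuclear attack",
]

# Illustrative escalation weights only; the paper defines its own scoring framework.
ESCALATION_SCORE = {action: rank for rank, action in enumerate(ACTIONS)}


def query_model(nation: str, history: list[str]) -> str:
    """Hypothetical stand-in for an LLM call.

    A real implementation would prompt GPT-4, Claude 2, etc. with the shared
    history and parse the chosen action; here it returns a fixed placeholder
    so the loop runs end to end.
    """
    return "wait"


@dataclass
class Simulation:
    nations: list[str]
    history: list[str] = field(default_factory=list)

    def run(self, turns: int) -> dict[str, int]:
        """Run the wargame and return a cumulative escalation score per nation."""
        scores = {nation: 0 for nation in self.nations}
        for turn in range(turns):
            for nation in self.nations:
                action = query_model(nation, self.history)
                if action not in ACTIONS:  # reject anything outside the fixed menu
                    action = "wait"
                scores[nation] += ESCALATION_SCORE[action]
                self.history.append(f"turn {turn}: {nation} -> {action}")
        return scores


if __name__ == "__main__":
    sim = Simulation(nations=["Purple", "Orange", "Teal"])
    print(sim.run(turns=3))
```

The key design point the study relies on is that every agent must pick from the same fixed action menu, which makes runs comparable and lets escalation be scored consistently across models.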

All of the models studied exhibited forms of escalation when placed in these conflict scenarios.

"We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons," the researchers wrote.

Across the scenarios tested, the researchers found that Llama-2-Chat and GPT-3.5 tended to be "the most violent and escalating." GPT-4-Base, however, was the most unpredictable: this model resorted to nuclear strikes more readily than the others.

The authors emphasized that further research is needed before large language models are used in high-stakes military or diplomatic decision making.