The US military is testing generative artificial intelligence for military missions, US Air Force Colonel Matthew Strohmeyer told Bloomberg.

Large language models (LLMs) are trained on massive amounts of internet data, which lets them predict and generate human-like responses to user prompts. They power AI tools such as OpenAI’s ChatGPT and Google’s Bard.

Five such models are being tested as part of a broader series of experiments by the US Department of Defense aimed at advancing the integration of data and digital platforms across the armed forces. The exercises are run by the Pentagon’s digital and AI office together with military leadership, with participation from US allies. The Pentagon has not said which language models are being tested.

According to Strohmeyer, an initial test of a military task carried out with a language model was a success.

“It was highly successful. It was very fast,” he said. “We are learning that this is possible for us to do.”

Using LLMs would mark a significant shift for the military, where little is digitized or connected to the internet. Today, a request for information from a particular military unit can take staff hours or even days to fulfill, Strohmeyer said. In one of the tests, an AI tool completed such a request in 10 minutes.

The colonel added that the language models were given classified operational information so that they could answer sensitive questions. The long-term goal of the exercises is to modernize the US military so that it can use AI-derived data for decision-making, sensor deployment, and targeting.

The exercises will also test whether the military can use LLMs to generate entirely new courses of action that planners have never considered.

Col. Tucker “Cinco” Hamilton, commander of the 96th Test Wing’s Operations Group and the US Air Force’s chief of AI test and operations, had previously claimed that an AI-controlled drone in a simulated suppression of enemy air defenses (SEAD) mission went rogue and “killed” its human operator because the operator was preventing it from completing its mission. He later said he had misspoken and that no such experiment was actually conducted.