Leading Chinese research institutions affiliated with the People's Liberation Army have used Meta's large language model Llama to develop AI-based tools for military use, Reuters reports, citing three scientific papers and analysts.

In the research papers the journalists reviewed, six Chinese researchers described how they used a version of Meta's Llama to create ChatBIT. Two of the researchers were affiliated with the Academy of Military Science (AMS), a research body of China's People's Liberation Army.

The researchers used an early version of the open-source Llama 13B model, incorporating their own parameters. Their goal was to build ChatBIT, an AI tool to collect and process intelligence and provide accurate information for operational decision-making. In military tasks, ChatBIT reportedly outperformed some other AI models that were roughly 90% as capable as GPT-4.

Meta, in turn, stated that it prohibits the use of its artificial intelligence models for "military activities, warfare, nuclear industries or applications, espionage" and other activities subject to US export controls, as well as for the development of weapons and content intended to "incite and promote violence."

"Any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy. In the global competition on AI, the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the US on AI," Meta representatives told Reuters in a telephone interview.

The reviewed documents also indicated that ChatBIT will not be limited to intelligence gathering and processing in the future: the Chinese AI tool is also planned for use in strategic planning, simulation training, and command decision-making. Neither the Chinese Ministry of Defense nor any of the institutions or researchers involved have commented publicly.