China is using Meta's Llama language model to build artificial intelligence for military purposes. The resulting tool, ChatBIT, is intended to collect and process intelligence data and support strategic decision-making. Meta, the American company that owns Facebook, is unhappy with this development: Chinese scientists are using its Llama language model to create military AI, which violates the terms of the license prohibiting use of the technology for military purposes.
ChatBIT: Chinese AI for military purposes
China is trying to catch up with the West in artificial intelligence, investing billions of dollars in its development. Because that effort will take time, the local authorities have also turned to a much easier and cheaper route: the freely available Llama language model, developed by engineers at Meta, the owner of Facebook.

As Reuters reports, Chinese scientists associated with the People's Liberation Army (PLA) have built a tool on top of Meta's AI. Its job is to collect and analyze data and to provide accurate, reliable information for operational decision-making. The tool, called ChatBIT, is based on Llama 13B, an older version of Meta's large language model.

In a June research paper, six Chinese researchers from three institutions, including two under the Academy of Military Sciences (AMS), the PLA's leading research body, detailed how they used an early version of Meta's Llama as the basis for what they call "ChatBIT". According to the paper, ChatBIT was refined and "optimized for dialogue and question answering tasks in the military domain". It was reported to outperform some other AI models and to reach roughly 90% of the capability of OpenAI's ChatGPT-4. The researchers did not explain how they defined performance, nor whether the model has been put into use.
The technology lags behind Western models
Reuters was unable to confirm ChatBIT's capabilities and computing power. The researchers noted, however, that its training data included only 100,000 records of military dialogues, a relatively small dataset compared with other large language models, which are trained on billions of tokens.
Using Meta's AI this way is a clear violation of the model's license, which prohibits use in "military, war, nuclear applications, espionage, or for activities subject to the International Traffic in Arms Regulations (ITAR)." Meta says it has already taken steps to prevent misuse of its technology.
Source: unsplash/boliviainteligente