The United States, the United Kingdom, and a number of other countries have signed an agreement on securing artificial intelligence technologies, Reuters reports.

A total of 18 countries have signed the agreement. In addition to the United States and the United Kingdom, the list includes Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.

The 20-page document was described by a senior American official as the first detailed international agreement on protecting artificial intelligence from malicious actors.

The signatories agreed that companies should develop and deploy AI in a way that protects customers and the general public from abuse. They also urge developers to make such systems secure by design.

The agreement is non-binding and contains mostly general recommendations, such as monitoring AI systems for misuse, protecting data from tampering, and vetting software providers.

Still, Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, said it was significant that so many countries endorsed the idea that AI systems should put security first.

The document addresses how to keep AI technology from being hijacked by hackers and includes recommendations such as releasing models only after appropriate security testing. However, it does not tackle the thorny questions of appropriate uses of AI or how the data on which models are trained should be collected.

The development of artificial intelligence raises many concerns, including fears that it could be used to undermine democracy, commit fraud, or lead to significant job losses.

Europe remains ahead of the United States in regulating artificial intelligence: lawmakers there are drafting dedicated AI rules. France, Germany, and Italy also recently reached an agreement on how artificial intelligence should be regulated.