On Sunday, the United States, Britain, and more than a dozen other countries unveiled the world’s first detailed international agreement on keeping artificial intelligence (AI) safe from rogue actors.

According to a senior U.S. official, the agreement pushes companies to build AI systems that are “secure by design,” treating security as a foundational requirement rather than an add-on.

In a 20-page document unveiled on Sunday, the 18 countries agreed that companies designing and using AI must develop and deploy it in a way that keeps customers and the wider public safe from misuse.

The agreement is not legally binding, however, so companies that fail to follow it face no legal repercussions. It consists mostly of general recommendations, such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers.

Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, said it was significant that so many countries had endorsed the idea that AI systems need to put safety first.

Speaking to Reuters, she said: “This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs. An agreement that the most important thing that needs to be done at the design phase is security.”

The agreement is only the latest in a series of initiatives, few of them enforceable, by governments around the world to shape the development of AI, whose influence is increasingly felt in industry and society at large.

In addition to the United States and Britain, the 18 signatories to the new guidelines include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.

The agreement also addresses how to protect AI systems from hackers and other malicious actors, with recommendations such as releasing models only after thorough security testing.

It does not, however, tackle thornier questions such as the appropriate uses of AI or how the data that feeds these models is gathered.