The G7 governments have agreed on a voluntary 11-point code of conduct aimed at addressing the risks associated with AI while promoting its safe use. The group of seven advanced economies, comprising Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States, together with the European Union, launched what is known as the Hiroshima AI Process in May with the objective of formulating protective measures for the use of AI. The code is an outcome of this process.

In a statement, the G7 said the code aims to “promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.”

The code, which comprises eleven points, outlines a comprehensive approach to the development, deployment, and governance of advanced AI systems. The key points include the necessity of continuous risk assessment and mitigation during AI development, post-deployment vulnerability monitoring, and transparent reporting of system capabilities and limitations.

The document emphasizes responsible information sharing, robust security controls, and content authentication mechanisms, such as watermarking, that help users identify AI-generated content.
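As a purely illustrative example of what such a mechanism might look like, here is a minimal Python sketch of one simplified watermarking idea: encoding a hidden tag in zero-width characters appended to generated text. The scheme, function names, and tag are assumptions made for brevity; the G7 code does not prescribe any particular technique.

```python
# Toy illustration of content watermarking: hide a short tag in
# invisible zero-width characters appended to generated text.
# This encoding scheme is an assumption for demonstration only,
# not a standard named in the G7 code.

ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}  # zero-width space / non-joiner
REVERSE = {v: k for k, v in ZERO_WIDTH.items()}

def embed_watermark(text: str, tag: str = "AI") -> str:
    """Append the tag, encoded as invisible zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    return text + "".join(ZERO_WIDTH[b] for b in bits)

def extract_watermark(text: str) -> str:
    """Recover the tag, or return an empty string if none is present."""
    bits = "".join(REVERSE[c] for c in text if c in REVERSE)
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
    return "".join(chars)

marked = embed_watermark("This paragraph was machine-generated.")
print(extract_watermark(marked))  # -> "AI"
```

Real provenance systems, such as statistical watermarks applied during a model's token sampling or cryptographic metadata standards like C2PA, are far more robust than this toy scheme, which a simple copy-paste through a plain-text filter would strip.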

Furthermore, it encourages research and investment in AI safety and prioritizes addressing global challenges. Collaboration on international technical standards is recommended, and organizations are urged to implement data protection measures and comply with legal frameworks to safeguard personal data and intellectual property throughout the AI lifecycle.

Here are the eleven points as outlined in the code:

  1. Take appropriate measures throughout the development of advanced AI systems to identify, evaluate, and mitigate risks.
  2. Identify and mitigate vulnerabilities, incidents, and patterns of misuse after deployment.
  3. Publicly report advanced AI systems’ capabilities, limitations, and domains of appropriate and inappropriate use.
  4. Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems.
  5. Develop, implement, and disclose AI governance and risk management policies grounded in a risk-based approach.
  6. Invest in and implement robust security controls, including physical security, cybersecurity, and insider threat safeguards across the AI lifecycle.
  7. Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.
  8. Prioritize research to mitigate societal, safety, and security risks and prioritize investment in effective mitigation measures.
  9. Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health, and education.
  10. Advance the development of and, where appropriate, adoption of international technical standards.
  11. Implement appropriate data input measures and protections for personal data and intellectual property.

The code is expected to help mitigate many of the risks involved with AI. While the technology is helping millions of people across the globe, it has equal potential to cause serious harm in the wrong hands. Bad actors have already built a variety of AI tools that can breach privacy and steal information in clever ways. Hopefully, this code will help AI companies curb these dangers.