G7 AI Regulation – a new international code of conduct on regulating AI

Published on 13 December 2023

The question

What is the impact of the G7’s new voluntary AI code of conduct?

The key takeaway

Voluntary guidance published by the G7 encourages the responsible development of generative AI. Further regulation at a national level should be expected in response.

The background

At the G7 Summit on 19 May 2023, the G7 countries established the G7 Hiroshima Artificial Intelligence Process to promote controls for advanced AI systems at an international level. This is one of several multinational initiatives on regulating AI; others include the OECD Global Partnership on AI and the Bletchley Declaration agreed by 28 countries at the UK's AI Safety Summit (discussed in a separate Winter 2023 Snapshot).

The development

On 30 October 2023, the G7 countries announced the International Guiding Principles on Artificial Intelligence and the International Code of Conduct for Advanced AI Systems (the Code), aimed at companies developing advanced AI.

The Code's purpose is to encourage a collective response to the development of trustworthy AI by setting out a non-exhaustive list of voluntary commitments for companies, including:

  • taking a preventative approach to risks, in particular by developing recognised processes to test for and record them
  • considering impact beyond the initial development phase by analysing potential harm to end-users and society
  • creating an open dialogue between developers and society, with developers sharing testing reports, concerns and their own codes of conduct with government bodies or relevant academics
  • being alive to the surrounding social context and the need to use AI to address global challenges, for example improving public education; the Code also gives the specific example of improving knowledge around climate change
  • acknowledging privacy and IP rights by implementing training measures and privacy policies.

The Code is likely to be developed further following additional stakeholder discussion.

Why is this important?

The development of the Code, and the international cooperation behind it, shows that the international community is responding to growing calls to regulate AI. The Code comes in addition to the other responses referenced above, including the EU's more stringent AI Act (discussed in previous Snapshots). The Code is another example of regulatory momentum and a sign of more to come, albeit in different forms.

Any practical tips?

Whilst the Code is currently voluntary, companies developing advanced AI systems should consider ensuring their models comply with it, as there are likely to be reputational pressures – not least from the public – to demonstrate that AI is safe and trustworthy. Developers should also be alive to the prospect of further regulation at a national level, and to the challenges of responding to differing obligations across countries when working cross-jurisdictionally.

Winter 2023