AI Safety Summit and the Bletchley Declaration

Published on 13 December 2023

The question

What is the impact of the UK’s recent AI Safety Summit on the governance of AI systems around the world?

The key takeaway

Representatives of 28 countries, together with leaders from tech companies, academia and civil society, signed the Bletchley Declaration to establish shared agreement and responsibility for the risks and opportunities presented by frontier AI.

The background

As various governments seek to harness the great economic and social potential of AI-driven technology, there has been a corresponding push to put in place the legislative frameworks necessary to protect the interests of countries, businesses and individuals. In the UK, the government set out its pro-innovation approach to AI regulation in the White Paper it published in March 2023 (covered in our previous Snapshots). This was followed by the UK hosting the AI Safety Summit at Bletchley Park on 1 and 2 November 2023. The summit coincided with the US government's publication of an executive order on AI safety and the G7's International Code of Conduct on AI.

The development

As part of the summit, the 28 participating countries signed the Bletchley Declaration on AI Safety. The declaration recognises the positive impact that AI can have on the world and calls for the alignment of AI development with values that prioritise safety, human-centric design, trustworthiness, and responsibility. In particular, the declaration notes the specific risks presented by frontier AI models - highly capable general-purpose AI models that can perform a wide variety of tasks and whose potential for intentional misuse and unforeseen consequences is not fully understood.

Significantly, as part of the Bletchley Declaration, leading AI companies such as OpenAI, Google DeepMind, Anthropic, Microsoft, and Meta agreed to allow governments to test their latest models before they are released to the public. 

Governments have also agreed to share the results of their evaluations with other nations and to collaboratively develop AI standards over time, thus establishing a foundation for future advancements in international AI safety efforts. As part of the UK's input into the development of the global understanding of AI, the Prime Minister announced that the existing UK Frontier AI Taskforce is to be renamed the AI Safety Institute, with its focus shifting to advanced AI safety for the public. The newly named institute will be responsible for conducting evaluations of AI systems, carrying out research into AI safety, and sharing developments in the AI sphere with the Government and other players in the field.

Why is this important?

The summit reflects the global ambition to regulate AI in a way that promotes its use for economic and societal benefit while balancing safety concerns. Governments have differing views on where to position their respective AI frameworks, with the UK currently taking a less aggressive approach to regulation. However, the summit highlights the importance of a unified and cooperative approach to monitoring the use of AI between countries and the biggest actors in the AI sphere. The summit also marks the commencement of a sequence of discussions among the 28 participating countries. The Republic of Korea has committed to jointly organising a virtual summit on AI within the next six months, and France is set to host the next in-person AI safety summit in the autumn of 2024.

Any practical tips?

The UK Government has come under some criticism for its light-touch approach to AI regulation thus far. While there are no indications that it is likely to alter its "pro-innovation" approach, it is evident that AI is an area of particular interest for the Government, and we are likely to see further initiatives and legislation as AI's influence continues to grow. Businesses developing AI technologies (particularly those engaged in frontier AI) should be alive to the fact that further laws governing AI are inevitable, and that governments have expressed their desire to be involved in the testing of such products.

Companies operating across multiple jurisdictions will have to be cognisant of the patchwork of legislative frameworks springing up across the UK, the EU and the US, and of the differing approaches to regulation.

Winter 2023