
Part 5 – AI Regulation Globally

Published on 11 March 2024

On 30 October 2023 the G7 published its international guiding principles on AI, together with a voluntary code of conduct for AI developers. The principles are a non-exhaustive list aimed at promoting safe, secure and trustworthy AI, and are intended to build on the OECD's AI Principles, adopted in May 2019.

On 1 and 2 November 2023 the UK Government hosted the AI Safety Summit. The Summit brought together representatives from governments, AI companies, research experts and civil society groups from across the globe, with the stated aims of considering the risks of AI and discussing how they can be mitigated through internationally co-ordinated action.

One output from the UK's AI Safety Summit was the 'Bletchley Declaration', made by the countries attending the summit, which, in addition to the UK, included the USA, China, Brazil, India, France, Germany, Japan, Italy and Canada. A central theme of the declaration was the importance of international collaboration on identifying AI safety risks and creating risk-based policies to ensure safety in light of those risks. Another output was an agreement between senior government representatives from leading AI nations and major AI developers and organisations (including Meta, Google DeepMind and OpenAI) on a plan for safety testing of frontier AI models. The plan involves testing models both pre- and post-deployment, and a role for governments in testing, particularly in relation to critical national security, safety and societal harms. For example, the UK's AI Safety Institute would be able to evaluate the safety of AI models such as ChatGPT before they are released to businesses and consumers.

Also in November 2023, the UK's National Cyber Security Centre released its Guidelines for Secure AI System Development, aimed at developers and produced with the US's Cybersecurity and Infrastructure Security Agency. The guidelines are endorsed not only by the US but also by 17 other countries. They help developers ensure that cyber security is both an essential pre-condition of AI system safety and integral to the development process from the outset and throughout, an approach known as 'secure by design'.

On AI standards, early in 2023 a new standard for AI risk management, ISO/IEC 23894, was published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). ISO/IEC 23894 offers strategic guidance to organisations across all sectors for managing risks connected to the development and use of AI, including guidance on how organisations can integrate risk management into their AI-driven activities and business functions. As an International Standard, ISO/IEC 23894:2023 provides a common framework that can be adopted by organisations globally. It followed ISO/IEC TR 24028:2020, which analyses the factors that can impact the trustworthiness of systems providing or using AI, and discusses possible approaches to mitigating AI system vulnerabilities and ways to improve their trustworthiness. In its response to the AI white paper, the UK Government specifically mentions the importance of engaging with global standards development organisations such as the ISO and IEC.

At the end of 2023, global collaboration on AI was further bolstered by IBM and Meta's launch of the AI Alliance, a collaboration with more than 50 other organisations across academia, civil society, public bodies like NASA and major companies like Oracle to "advance open, safe and responsible AI". The Alliance aims to develop and deploy benchmarks, evaluation standards, tools and other resources that enable the responsible development and use of AI systems at a global scale, including the creation of a catalogue of vetted safety, security and trust tools.
