
Part 1 - UK AI regulation

Published on 11 March 2024

This is Part 1 of 'Regulation of AI – raising the trillion dollar bAIby'

There has been consistent messaging from the UK government that the UK will adopt a light-touch approach to regulating AI. This was evident in the AI white paper published in March 2023, which outlined a principles-based framework (The Ethics of AI – the Digital Dilemma). The UK government held a consultation on the AI white paper last year and published a response on 6 February 2024 that adds slightly more flesh to the bones of the UK framework.

[Figure: UK AI regulation landscape in Spring 2024²]

Existing regulators will take the lead in guiding the development and use of AI, applying the white paper's five principles through context-specific approaches. Such an approach could lead to inconsistent enforcement, or to some larger regulators dominating by interpreting their remit or role too broadly. The government (via DSIT) therefore intends to provide regulators with central support and monitoring, without appointing a new AI regulator, including via a new steering committee of government representatives and key regulators to support knowledge exchange and coordination on AI governance.

A principles-based, somewhat wait-and-see approach means that, in practice, it is not straightforward for providers and users to put in place plans for compliance and to navigate a potential AI minefield. For example, while developers of AI systems may undertake their own safety research, there is no common standard of quality or consistency. To deal with this, in its AI white paper and in its follow-up Initial Guidance for Regulators, the government gave "phase one" guidance to regulators on implementation considerations. For example, on transparency and explainability, regulators are expected to set explainability requirements and expectations on information sharing by AI lifecycle actors, and to consider the role of available technical standards addressing AI transparency and explainability (such as IEEE 7001, ISO/IEC TS 6254 and ISO/IEC 12792) to clarify regulatory guidance and support the implementation of risk treatment measures. The government has also released guidance on assurance mechanisms and global technical standards, aimed at both industry and regulators, to enable the building and deployment of responsible AI systems.

Phase one guidance also suggested that regulators put information on their AI plans into the public domain, collaborate with each other through cooperation forums, align their approaches on guidance and definitions, issue joint tools or guidance where possible, and map which technical standards could help AI developers and deployers understand the principles in practice, citing those standards in tools and guidance. The government intends to release phase two guidance by summer 2024.

Further policy, tools and guidance on risk and governance will come from bodies such as the AI Safety Institute (AISI), the AI Policy Directorate (into which the Office for Artificial Intelligence has now been subsumed) and the Responsible Technology Adoption Unit (formerly the Centre for Data Ethics and Innovation). The AISI is evaluating and testing new frontier AI systems (on a voluntary basis), aiming to characterise safety-relevant capabilities, understand the safety and security of systems and assess their societal impacts, while developing its own technical expertise (see the section on the 'Bletchley Declaration' below). If it identifies potentially dangerous capabilities through its evaluation of advanced AI systems, it will address the risks by engaging the developer on suitable safety mitigations.

Four key regulators who will largely lead the way on implementing the framework, for example by issuing guidance on best practice for adherence to the principles, have been brought together under the umbrella of the Digital Regulation Cooperation Forum (DRCF): the Information Commissioner's Office (ICO), Ofcom, the Competition and Markets Authority (CMA) and the Financial Conduct Authority (FCA). From within the DRCF, the AI and Digital Hub will advise innovators on AI regulatory compliance, with the aim of helping them navigate conflicting regulations.

Some early activity from the regulators includes a first set of guidance from the ICO, published in 2020, followed by an AI and data protection risk toolkit in 2022. In March 2023, the ICO published updated guidance on AI and data protection following requests from UK industry to clarify the requirements for fairness in AI.

The CMA launched an initial review into foundation models (FMs) in May 2023 to consider the rapidly emerging AI markets from a competition and consumer protection perspective. In September 2023, the CMA published an update report including its 'guiding principles'. The CMA's overarching principle is one of accountability for AI outputs provided to consumers by AI developers and deployers. Its further guiding principles relate to access, diversity, choice, flexibility, fair dealing and transparency. The second stage of the AI review is underway, and the CMA will publish a further update in April 2024. In this second phase, the CMA has been engaging with consumer groups, leading FM developers and deployers, and innovators and academics. In spring 2024, the CMA and the ICO will publish a joint statement on the crossover between competition, consumer and data protection objectives. The Digital Markets, Competition and Consumers Bill, currently progressing through Parliament, will give the CMA additional tools to identify and address competition issues in AI markets and other digital markets affected by recent developments in AI.

Following the passing of the Online Safety Act (OSA) last year, Ofcom, as the online safety regulator, oversees services that are at the forefront of AI adoption. It is in the process of getting the new online safety regime up and running (most of the new rules will come into force in late 2024). A key feature of the regime is embedding proactive risk management, with user safety recognised and represented at all levels, as part of an organisation's broader approach to governance and compliance when integrating generative AI tools that might fall within the scope of the OSA. With a particular focus on generative AI, Ofcom's current activity is largely in research, monitoring, discussion and review.

The government expects a number of key regulators (not just DRCF members) to publish an update outlining their strategic approach to AI by 30 April 2024, covering:

  • an outline of the steps they are taking in line with the expectations set out in the white paper;
  • analysis of AI-related risks in the sectors and activities they regulate and the actions they are taking to address these;
  • an explanation of their current capability to address AI as compared with their assessment of requirements, and the actions they are taking to ensure they have the right structures and skills in place; and
  • a forward look of plans and activities over the coming 12 months.

The Private Member's Artificial Intelligence (Regulation) Bill, introduced to the House of Lords in November 2023, is also worth a mention. The Bill would create an AI Authority that would collaborate with relevant regulators to construct regulatory sandboxes for AI. Although the Bill was initially of interest as a tool to harness debate on AI issues, somewhat unexpectedly the UK Secretary of State for Science, Innovation and Technology, Michelle Donelan, has since confirmed plans to introduce a UK AI Bill in the future. Speaking to the House of Commons Science, Innovation and Technology Committee on 13 December 2023, Donelan said the bill would go further than the government's AI white paper by introducing a statutory duty for regulators to take into account requirements for AI systems. However, she emphasised that there are still no plans to introduce any AI law in this parliamentary session and maintained that any future domestic law would not entail the creation of an overarching AI regulator.

The level and kind of future intervention contemplated by the UK government was clarified further in its white paper response. The government said that while combining cross-sectoral principles with a context-specific framework, international leadership and collaboration, and voluntary measures on developers is the right position for now, the challenges posed by AI technologies will ultimately require legislative action in every country once understanding of the risks has matured. It set out a case for new responsibilities for developers of "highly capable general-purpose AI systems", acknowledging that developers of these systems currently face the least clear legal responsibilities. This means some risks may not be addressed effectively, particularly as the existing technology powering these systems may be replaced by an as yet unknown technology. How to address the risks of the two other types of advanced AI model (highly capable narrow AI and agentic AI) is still at the evidence-gathering stage, and there is potential for the capabilities of these systems to fall through the cracks of regulator coverage and capacity. Consequently, the government anticipates that all jurisdictions will, in time, want to place targeted mandatory interventions on the design, development and deployment of such systems to ensure risks are adequately addressed. These measures may include transparency measures (for example, relating to the data that systems are trained on); risk management, accountability and corporate governance obligations; or actions to address potential harms, such as those caused by misuse or unfair bias, before or after training.

These new responsibilities may not sit only with developers – data and cloud hosting providers will also be considered.

² Contains public sector information licensed under the Open Government Licence v3.0


Discover more insights on the AI guide