Part 1 - UK AI regulation

Published on 03 June 2024

This is Part 1 of 'Regulation of AI'.

There has been consistent messaging from the UK Conservative-led government that the UK will take a light-touch approach to regulating AI. This was evident in the AI white paper published in March 2023, which outlined a principles-based framework (The Ethics of AI – the Digital Dilemma). The UK government held a consultation on the AI white paper in 2023 and published a response on 6 February 2024 that adds slightly more flesh to the bones of the UK framework.

UK AI regulation landscape in Spring 2024²

Existing regulators will, using context-specific approaches, take the lead in guiding the development and use of AI according to the five principles. Such an approach could lead to inconsistent enforcement, or to some larger regulators dominating by interpreting the scope of their remit or role too broadly. The government (via DSIT) therefore intends to provide regulators with central support and monitoring (but without appointing a new AI regulator), including via a new steering committee containing government representatives and key regulators, to support knowledge exchange and coordination on AI governance.

A principles-based, slightly wait-and-see approach means that, practically, it is not straightforward for providers and users to put in place plans for compliance and navigation of a potential AI minefield. For example, while developers of AI systems may undertake their own safety research, there is no common standard for quality or consistency. To deal with this, in its AI white paper and in its follow-up Initial Guidance for Regulators, the government gave "phase one" guidance to regulators on implementation considerations. For example, on transparency and explainability, regulators are expected to set explainability requirements and expectations on information sharing by AI lifecycle actors, and also to consider the role of available technical standards addressing AI transparency and explainability (such as IEEE 7001, ISO/IEC TS 6254, ISO/IEC 12792) to clarify regulatory guidance and support the implementation of risk treatment measures. The government has also released guidance on assurance mechanisms and global technical standards, aimed at both industry and regulators, to enable the building and deployment of responsible AI systems. This guidance sits alongside other support on standardisation for AI technology available from the UK AI Standards Hub, the Portfolio of AI Assurance Techniques and the government's Spring 2024 "Introduction to AI assurance" guide, which aims to help businesses and organisations build their understanding of the techniques for safe and trustworthy systems.

Phase one guidance also suggested that regulators: put information on their AI plans into the public domain; collaborate with each other through cooperation forums; align their approaches with other regulators on guidance and definitions; issue joint tools or guidance where possible; and map which technical standards could help AI developers and deployers understand the principles in practice, citing them in tools and guidance. The government intends to release phase two guidance by summer 2024.

Further policy, tools and guidance on risk and governance will come from bodies such as the AI Safety Institute (AISI), the AI Policy Directorate (the Office for Artificial Intelligence has now been subsumed into the Directorate), and the Responsible Technology Adoption Unit (formerly the Centre for Data Ethics and Innovation). The AISI is evaluating and testing new frontier AI systems (on a voluntary basis), aiming to characterise safety-relevant capabilities, understand the safety and security of systems, and assess their societal impacts, while developing its own technical expertise (see the section on the 'Bletchley Declaration' below). If it identifies potentially dangerous capability through its evaluation of advanced AI systems, it will address risks by engaging the developer on suitable safety mitigations.

Four key regulators who will largely be leading the way on implementing the framework, for example by issuing guidance on best practice for adherence to the principles, have been brought together under the umbrella of the Digital Regulation Cooperation Forum (DRCF): the Information Commissioner's Office (ICO), Ofcom, the Competition and Markets Authority (CMA) and the Financial Conduct Authority (FCA). From within the DRCF, the AI and Digital Hub will advise innovators on AI regulatory compliance, helping them navigate conflicting regulations.

Early activity from the regulators includes a first set of guidance from the ICO published in 2020, followed by an AI and Data Protection risk toolkit in 2022. The toolkit aims to help organisations identify and mitigate risks across the AI lifecycle. In March 2023, the ICO published updated guidance on AI and data protection, covering how to apply the concepts of data protection law when developing or deploying AI, including generative AI. This followed requests from UK industry to clarify requirements for fairness in AI. The ICO also runs an Innovation advice service. Its four-part consultation series on generative AI and data protection closed on 10 June 2024.

The CMA launched an initial review into foundation models (FMs) in May 2023 to consider the rapidly emerging AI markets from a competition and consumer protection perspective. In September 2023, the CMA published an update report including its 'guiding principles'. The CMA's overarching principle is one of accountability for AI outputs provided to consumers by AI developers and deployers. Its further guiding principles relate to access, diversity, choice, fair-dealing and transparency. In the second stage of its AI review, the CMA published:

  • An AI Foundation Models update paper (11 April)
  • An AI Foundation Models technical update report (16 April)
  • An AI strategic update (29 April)
  • A progress update on its market investigation into the supply of public cloud infrastructure services (23 May)

The CMA's 2024 ongoing programme of work on the impact of FMs on markets will result in: the CMA and the ICO publishing a joint statement on the interaction between competition, consumer protection and data protection in FMs; a forthcoming paper on AI accelerator chips, which will consider their role in the FM value chain; joint research within the DRCF on consumers' understanding and use of FM services; and participation in the DRCF AI and Digital Hub pilot, launched in April, to provide innovators with answers to complex queries that span the regulatory remits of DRCF members. The Digital Markets, Competition and Consumers Act, published in June, will give the CMA additional tools to identify and address any competition issues in AI markets and other digital markets affected by recent developments in AI.

Following the passing of the Online Safety Act (OSA) last year, Ofcom, as the online safety regulator, oversees services that are at the forefront of AI adoption. It is in the process of getting the new online safety regime up and running (most of the new rules will come into force in late 2024). A key feature of the regime is embedding proactive risk management in an organisation's broader approach to governance and compliance when integrating generative AI tools that might fall within scope of the OSA, with user safety recognised and represented at all levels. With a particular focus on generative AI, Ofcom's current activity is largely research, monitoring, discussion and review.

A number of key regulators (not just the DRCF members) were asked by the government to each publish an update outlining their strategic approach to AI by 30 April 2024, to cover:

  • an outline of the steps they are taking in line with the expectations set out in the white paper;
  • analysis of AI-related risks in the sectors and activities they regulate and the actions they are taking to address these;
  • an explanation of their current capability to address AI as compared with their assessment of requirements, and the actions they are taking to ensure they have the right structures and skills in place; and
  • a forward look of plans and activities over the coming 12 months.

The updates varied in breadth and level of detail, with some common themes on prospective activities: research into consumer use of AI and cross-sector adoption of generative AI technology by organisations, and a focus on collaboration with other regulators, the government, standards bodies and international partners.

The Private Member's Artificial Intelligence (Regulation) Bill that was introduced to the House of Lords in November 2023 is also worth a mention. The Bill would create an AI Authority to collaborate with relevant regulators to construct regulatory sandboxes for AI. Although the Bill was initially of interest as a tool to harness debate on AI issues, somewhat unexpectedly the UK Secretary of State for Science, Innovation and Technology, Michelle Donelan, confirmed plans to introduce a UK AI Bill in the future. Speaking to the House of Commons Science, Innovation and Technology Committee on 13 December 2023, Donelan said the bill would go further than the government's AI white paper by introducing a statutory duty for regulators to take into account requirements for AI systems. However, she emphasised that there are still no plans to introduce any AI law in this parliamentary session, and maintained that any future domestic law would not entail the creation of an overarching AI regulator. The Bill completed its progress through the Lords but did not make it through Parliamentary wash-up following the prime minister's announcement of a UK general election to take place on 4 July. However, when Parliament opens on 17 July, the introducer of the Bill, Lord Holmes, has said he plans to reintroduce it and hopes for its inclusion in the legislative programme.

The level and kind of future intervention contemplated by the UK government was clarified further in the government's white paper response. It said that while combining cross-sectoral principles, a context-specific framework, international leadership and collaboration, and voluntary measures on developers is the right approach for now, the challenges posed by AI technologies will ultimately require legislative action in every country once understanding of risk has matured. It set out a case for new responsibilities for developers of "highly capable general-purpose AI systems", acknowledging that developers of these systems currently face the least clear legal responsibilities. This means some risks may not be addressed effectively, particularly as the existing technology powering these systems may be replaced by an as yet unknown technology. How to address the risks of the two other types of advanced AI models, highly capable narrow AI and agentic AI, is still at the evidence-gathering stage. There is potential for the capabilities of these systems to fall through the cracks of regulator coverage and capacity. Consequently, the government anticipates that all jurisdictions will, in time, want to place targeted mandatory interventions on the design, development, and deployment of such systems to ensure risks are adequately addressed. These measures may include transparency measures (for example, relating to the data that systems are trained on); risk management, accountability, and corporate governance related obligations; or actions to address potential harms, such as those caused by misuse or unfair bias, before or after training.

The new responsibilities may not sit only with developers – data and cloud hosting providers will also be considered.

² Contains public sector information licensed under the Open Government Licence v3.0

