The "Unicorn Kingdom's" AI White Paper

12 May 2023. Published by Helen Armstrong, Partner; Ricky Cella, Senior Associate; and Joshy Thomas, Knowledge Lawyer

The UK's pro-innovation AI White Paper has been published. It landed almost simultaneously with an open letter from the Future of Life Institute calling for a six-month halt in work on AI systems more powerful than the generative AI system GPT-4.

Such systems are now described as having human-competitive (rather than human-like) intelligence. The proposed pause is intended to allow for the joint development and implementation of a set of shared safety protocols for advanced AI design and development, audited and overseen by independent outside experts.

Since then, Geoffrey Hinton, the leading scientist who developed the foundations of modern machine learning, has stepped away from developing AI and into a role warning about the dangers of the technology, including the potential for widespread job losses and use by 'bad actors', and urging responsible investment in the safety and control of AI that is developing at a spectacular rate.

The AI White Paper claims that the UK is third in the world for AI research and development, and that it is home to a third of Europe's total AI companies (twice as many as any other European country). The UK's approach to regulating AI is undoubtedly of key interest not just to AI-focussed and other businesses in the UK, but also to Europe, the US and the rest of the world. Given the concerns being raised by those closest to the most advanced generative AI developments, no doubt many will be asking: does the White Paper go far enough?

The unicorn approach in a nutshell

The UK's AI White Paper is pro-innovation and, it's fair to say, light on regulation. There's no surprise in this, as it follows the UK's National AI Strategy and the principles of the Plan for Digital Regulation. There is no intention to introduce legislation: the framework will be principles-based and will progress iteratively, with a wait-and-see approach to the detail to allow "getting regulation right so that innovators can thrive and the risks posed by AI can be addressed". In this respect, the government has given itself monitoring functions to provide real-time assessments of how the regulatory framework is performing. This monitoring will include test bed and sandbox initiatives, conducting (and convening industry to conduct) horizon scanning, and promoting interoperability with international regulatory frameworks. In addition, the framework will be supplemented by assurance techniques, voluntary guidance and technical standards, in collaboration with bodies such as the UK AI Standards Hub and the AI Council.

No AI regulator to mind the gaps

There are no plans to appoint a dedicated AI regulator; instead, existing sectoral regulators will incorporate AI into their normal responsibilities. Following an initial period of implementation, the government anticipates introducing a statutory duty on regulators requiring them to have 'due regard' to the principles. This statutory duty won't be introduced if the government's monitoring of the framework shows that implementation is effective without the need to legislate. While the duty to have due regard will require regulators to demonstrate that they have taken account of the principles, the government recognises that not every regulator will need to introduce measures to implement every principle.

In the AI White Paper, the government recognises that AI risks arise across, or in the gaps between, existing regulatory remits. Unless the various sectoral regulators' approaches to regulating AI are aligned, businesses may end up being caught by complex rules and confused by inconsistent enforcement across regulators who have limited capacity and access to AI expertise.  This may disproportionately impact small businesses.

The government acknowledges that regulatory coordination, through existing formal networks such as the Digital Regulation Cooperation Forum (which has already published its vision for a joined-up approach to digital regulation and established a multi-agency advice service), will be key. Beyond this, it is planning cross-sectoral risk assessment activities. These include: developing and maintaining a cross-economy, society-wide AI risk register to support regulators' internal risk assessments; working with regulators to clarify responsibilities in relation to new risks or areas of contested responsibility; sharing risk enforcement best practices; and supporting join-up between regulators.

Definition of AI 

There is currently no widely accepted worldwide definition of AI. The UK government has therefore decided against a rigid legal definition, instead defining AI by reference to the two characteristics that generate the need for a regulatory response: its adaptivity and its autonomy. The reasoning is that the combination of AI's adaptivity and autonomy makes it difficult to explain, predict or control the outputs of an AI system, or the underlying logic by which they are generated. It can also be challenging to allocate responsibility for the system's operation and outputs. Within the framework, the government will retain the ability to adapt its approach to defining AI, alongside its ongoing monitoring obligations.

Regulating use via non-statutory principles

The UK is proposing a non-statutory framework that existing regulators will be expected to implement. The framework is underpinned by five, now familiar, principles to guide and inform the responsible development and use of AI in all sectors of the UK economy: 

safety, security and robustness; 
appropriate transparency and explainability; 
fairness;
accountability and governance; and 
contestability and redress. 

The UK aims to regulate the use of AI, not the technology itself – focussing on the context in which AI is deployed rather than specific technologies. An example given is that an AI-powered chatbot used to triage customer service requests for an online clothing retailer should not be regulated in the same way as a similar application used as part of a medical diagnostic process.

Regulators are expected to issue guidance or update existing guidance on the principles and will be encouraged to publish joint guidance on AI use cases that cross multiple regulatory remits.

UK alignment with international jurisdictions

The government proposes that this alignment is monitored centrally, comparing the UK principles with international approaches to regulation, assurance and/or risk management, and technical standards. It will also aim to support cross-border coordination and collaboration by identifying opportunities for regulatory interoperability.

Currently, the UK's apparent 'light touch' approach sits apart from the US and the EU's risk-based focus, particularly when it comes to foundation models. Last year's release of ChatGPT has prompted recent revisions to the draft EU AI Act, homing in on foundation models. In a slight departure from regulating use rather than specific systems, the revisions seek to impose specific obligations on providers of general-purpose foundation models, for example to mitigate the risk of use for high-risk purposes such as deepfakes.

In a similar vein, while there is currently no comprehensive federal legislation regulating AI systems in the US, recent commentary suggests that the US (again prompted by ChatGPT) is shifting from a wholly voluntary framework towards more formal, risk-based governance of AI at state and federal level.

Practical issues

Big tech

It seems that some of the big tech firms would prefer not to launch their chatbots yet, but don't feel they have a choice if they are to remain competitive in this area. As a result, tech firms, and their executives, may end up with enormous responsibility and liability if things progress in a way that is harmful to humans.

AI supply chains

The complexity and opacity of AI supply chains make allocating risk within the supply chain challenging. Under the UK's current legal frameworks there is a real risk of liability being allocated inappropriately between businesses using (but not developing) AI and businesses developing foundation models for use by third parties.

The government is not yet clear on how responsibility and liability for demonstrating compliance with the AI regulatory principles will, or ideally should, be allocated, and it is not proposing to make changes to life cycle accountability at this stage. Going forward, it plans an agile approach, with targeted measures deployed if necessary. In the meantime, it plans to rely on assurance techniques (aiming, in collaboration with industry, to launch a Portfolio of AI Assurance Techniques shortly) and technical standards (including through the UK AI Standards Hub) to support supply chain risk management.

Foundation models

A small number of organisations supply foundation models, while a far larger number of businesses integrate or otherwise deploy foundation models elsewhere in the AI ecosystem. The government is again looking to assurance techniques and technical standards (particularly important for bias mitigation) to regulate foundation models, and will be supported by the UK's Foundation Model AI Taskforce to help build capability in this area.

The government is also expecting regulators to build capability in their sectors. In line with this, the Competition and Markets Authority (CMA) announced, on 4 May 2023, a review of AI foundation models. The review seeks to understand how foundation models are developing and will produce an assessment of the conditions and principles that will best guide the development of foundation models and their use in the future. As well as exploring the opportunities and risks these models could bring for competition and consumer protection, the review aims to produce guidance. 

Intellectual property

The AI White Paper doesn't address how the government plans to balance the rights of content producers and AI developers. It refers instead to the government's response to Sir Patrick Vallance's Pro-Innovation Regulation of Technologies Review recommendations, published earlier this spring. In that response, the government proposed that the Intellectual Property Office will produce a code of practice by the summer, providing guidance to support AI firms in accessing copyright-protected works as an input to their models. For further detail on the practical points relating to the UK's approach to AI and intellectual property rights, see our earlier article.

The regulators

Busy and already under-resourced regulators are likely, at some point at least, to be overwhelmed by the technical aspects of AI, which is incredibly difficult to understand. For example, they may lack the expertise to consider properly the application of the principles to the entirety of their sector, or they may ask for evidence as part of their investigations and simply not understand it when it arrives. There is also a risk that some regulators could begin to interpret the scope of their remit broadly, to fill the gaps in ways not originally envisaged or expected.

Next steps

The government is currently consulting on the AI White Paper (the consultation closes on 21 June 2023). Further details about the implementation of the regulatory framework will be provided through an AI regulation roadmap, which will be published alongside the government's response to the consultation. Thereafter, it has set out a plan covering the next year and beyond (a period that will play out against the backdrop of a general election).

In the next six months it plans, among other things, to publish the government's response to the AI White Paper consultation, issue the cross-sectoral principles to regulators together with initial guidance, and design and publish an AI Regulation Roadmap with plans for establishing its central functions.

During the following six months it will encourage key regulators to publish guidance on how the cross-sectoral principles apply within their remits, and will design a monitoring and evaluation framework. The CMA's review of AI foundation models, referred to above, closes in June, and the CMA is looking to publish a report setting out its findings in September 2023.

In the longer term, the government will provide detail on central functions, prompt regulators who have not produced guidance to do so, publish a draft central, cross-economy AI risk register for consultation, and develop the regulatory sandbox or testbed.

The UK government clearly does not wish to rush in when it comes to regulating AI, and there are some benefits to its proposed iterative approach. AI is, however, here and interacting with humans now. Consequently, businesses large and small operating in the UK's AI landscape require more immediate regulatory parameters to protect them and to allow them to deal safely with the enormous opportunities presented by digital superintelligence, as well as with what Geoffrey Hinton describes as an incoming flood of misinformation, job losses and even an existential threat to humanity.