Cyber Bytes

Part 2 - AI regulation in the EU

Published on 11 March 2024

This is Part 2 of 'Regulation of AI – raising the trillion dollar bAIby'

The EU AI Act, the main elements of which are covered in our previous article, was provisionally agreed in December 2023. Shortly afterwards, the Commission released a set of Q&As to flesh out the key provisions and timelines for application. The latest text of the Artificial Intelligence Act is expected to be formally adopted by both Parliament and Council in April, triggering a graduated two-year period for compliance (with obligations for high-risk systems defined in Annex II applying after three years).

The intention is to achieve proportionality by setting the level of regulation according to the potential risk the AI poses to health, safety, fundamental rights or the environment. AI systems posing an unacceptable level of risk to people's safety – for example, systems that manipulate human behaviour to circumvent their free will – are prohibited. High-risk systems are permitted in certain circumstances, subject to compliance with requirements relating to disclosure of technical documentation, dataset transparency, and regular monitoring and auditing. For high-risk systems that may have an adverse impact on people's safety or their fundamental rights, a mandatory fundamental rights impact assessment applies to some deployers (see the Q&As). High risk is not only defined in the Act – an annex also lists high-risk use cases, which the Commission will keep up to date. There are also rules regulating general-purpose AI models: a baseline set of rules for all general-purpose AI models, and additional rules for general-purpose AI models with systemic risks.

Transparency is a key theme running throughout the AI Act. General-purpose AI systems (such as ChatGPT) are subject to transparency requirements, including drawing up technical documentation, complying with EU copyright law and publishing detailed summaries of the content used to train the systems. For high-risk systems, individuals will have the right to receive explanations about decisions based on those high-risk AI systems that impact their rights.

The new regime is complex and potentially administratively onerous, which may favour tech industry incumbents (unless AI itself provides the mechanism to cut through the many obligations). Although the new rules will be implemented at national level, a new EU AI Office will contribute to fostering standards and testing practices and will supervise the implementation and enforcement of the new rules on general-purpose AI models.

In the period after formal adoption but before the AI Act becomes applicable, the Commission plans to launch an AI Pact, under which AI developers voluntarily pledge to implement key obligations of the AI Act ahead of the legal deadlines. Those who have already produced internal guidelines or processes to ensure the design, development and use of trustworthy AI systems can share and test them with the AI Pact community now.

Those with an active role in the AI system lifecycle, including those that deploy or operate AI, will enjoy a legal presumption of conformity with the AI Act in the EU if they adhere to harmonised standards. Consequently, the AI standards developed by CEN-CENELEC's Joint Technical Committee 21 (JTC21) in support of the AI Act will play an important role as AI governance tools. They will be relevant to any AI developer seeking to do business in the EU and will also play an important role in discussions on best practices and regulatory interoperability for AI at the wider international level.

AI system providers looking to comply with the AI Act will also want to track both the EU's planned revision of the EU Product Liability Directive (the text of which has now been provisionally agreed) and the proposal for an EU AI Liability Directive, which will address liability for damage caused by AI systems. As noted in our previous article, the UK is seeking views on the adequacy of current legislation in this area and the most effective way to address liability across the lifecycle and supply chain of AI.

It will be some time before the EU AI Act is implemented. However, existing legislation such as the GDPR already provides a starting point for approaching similar principles – such as the need for people to be given notice of an AI system being used in a way that could affect them – and may help with how to approach accountability, transparency, accuracy, storage limitation, security, and performing risk assessments.

 
Discover more insights on the AI guide