Part 2 - AI regulation in the EU

Published on 03 June 2024

This is Part 2 of 'Regulation of AI'.

The EU AI Act, the main elements of which are covered in our previous article and our podcast, was provisionally agreed in December 2023. Shortly after it was agreed, the Commission released some Q&As to flesh out the key provisions and timelines for application. The regulation was approved in May 2024 and is due to come into effect in July, triggering a graduated two-year period for compliance (with obligations for high-risk systems defined in Annex II applicable after three years). A timeline for guidance and codes of practice is expected on the day the AI Act comes into effect in July.

The intention is to achieve proportionality by setting the level of regulation according to the potential risk the AI poses to health, safety, fundamental rights or the environment. AI systems posing an unacceptable level of risk to people's safety, for example AI systems that manipulate human behaviour to circumvent free will, are prohibited. High-risk systems are permitted in certain circumstances, subject to compliance with requirements relating to disclosure of technical documentation, dataset transparency, and regular monitoring and auditing. For high-risk systems that may have an adverse impact on people's safety or their fundamental rights, a mandatory fundamental rights impact assessment has been included for some deployers (see the Q&As). 'High risk' is not only defined in the Act; the Act also annexes a list of high-risk use cases that the Commission will keep up to date. There are also rules regulating general-purpose AI models, with baseline rules for all general-purpose AI models and additional rules for those posing systemic risks.

Transparency is a key theme that runs throughout the AI Act. General-purpose AI systems (such as ChatGPT) are subject to transparency requirements, including drawing up technical documentation, complying with EU copyright law and publishing detailed summaries of the content used to train the systems. For high-risk systems, citizens will have the right to receive explanations about decisions based on those systems that affect their rights.

The new regime is complex and potentially administratively onerous, which may favour tech industry incumbents (unless AI itself provides the mechanism to cut through the many obligations). Although the new rules will be implemented at national level, a new EU AI Office, effective from 16 June 2024, will contribute to fostering standards and testing practices and will supervise the implementation and enforcement of the new rules on general-purpose AI models. The office comprises five units, including a Regulation and Compliance Unit, which will coordinate enforcement, and an AI Safety Unit, which will identify systemic risks of general-purpose models. Its first tasks will be to prepare guidelines on the AI Act's definition of an AI system and on the prohibited AI practices. It will also help to draw up codes of practice relating to general-purpose AI, such as on the level of detail required for the summary of the content (ie the main data sets) used for training.

In the period after entry into force but before the AI Act becomes applicable, the Commission is promoting the "AI Pact", whereby AI developers voluntarily pledge to implement key obligations of the AI Act ahead of the legal deadlines. Those who have already produced internal guidelines or processes to ensure the design, development and use of trustworthy AI systems can share and test them with the AI Pact community now.

Those with an active role in the AI system lifecycle, including those that deploy or operate AI, who adhere to harmonised standards will enjoy a legal presumption of conformity with the AI Act in the EU. Consequently, the AI standards developed by CEN-CENELEC's Joint Technical Committee 21 (JTC21) in support of the AI Act will play an important role as AI governance tools. They will be relevant to any AI developers seeking to do business in the EU and will also play an important role in discussions on best practices and regulatory interoperability for AI at the wider international level.

AI system providers looking to comply with the AI Act will also need to prepare for their liability risk under the EU Product Liability Directive (approved in March 2024), which explicitly includes AI systems. The changes brought by the new directive are expected to come into force by mid-2026. The proposed EU AI Liability Directive, which aims to address liability for damage caused by AI systems, may not now proceed and is under review by the European Parliamentary Research Service. As above, the UK is seeking views on the adequacy of current legislation in this area and the most effective way to address liability within the lifecycle and supply chain of AI.

It will be some time before the EU AI Act is fully implemented. However, existing legislation such as the GDPR already provides a starting point for how to approach similar principles, such as the need for people to be given notice of an AI system being used in a way that could affect them, and may help with how to approach accountability, transparency, accuracy, storage limitation, security and risk assessments. A strategy to help EU institutions and bodies prepare for the implementation of the AI Act is set to be published at about the same time as the Act comes into force in July 2024. The European Data Protection Supervisor, the data protection regulator for the EU institutions, will expand its supervision to the AI Act's implementation and, in that role, will issue a strategy for the EU institutions.

 
Discover more insights in our AI guide.
