EU Artificial Intelligence Act

Published on 21 December 2022

The question

What impact will the EU Artificial Intelligence Act have on the development, deployment and use of AI and related technologies?

The key takeaway

Billed as the first law on AI from a major regulator anywhere, the EU Artificial Intelligence Act (the AI Act) sets out a regulatory framework for AI systems used in services and products placed on the market in the EU.

The background

On 21 April 2021 the European Commission published the long-awaited draft AI Act, the result of years of consultations and a key element of the EU's AI Strategy adopted in 2018. The AI Act is a landmark EU law which, once in force, will introduce a technology-neutral definition of AI systems into EU law that is wider in scope than most existing definitions of AI.

The development

The AI Act defines AI widely - as any software that: (i) is developed using any of a list of general techniques; and (ii) can, for a given set of defined objectives, generate outputs influencing the environments with which it interacts. Alongside this, the AI Act takes a risk-based approach, with obligations scaled to the risk presented by the AI system. As with other EU laws, the Act has extraterritorial application: it covers businesses that place products or services incorporating AI systems on the EU market, as well as businesses whose AI systems produce outputs that are used in the EU.

All AI systems that pose a clear threat to the safety, livelihoods and rights of people, and therefore present an unacceptable risk, will be banned. Nevertheless, research on banned AI systems will be permitted for legitimate purposes, provided ethical standards are followed and no harm is caused to natural persons.

The AI Act imposes the most extensive requirements on high-risk uses of AI. High-risk systems are permitted but must comply with obligations around data governance, transparency and human oversight, and the Commission will maintain a database of such systems. Certain limited-risk systems, like chatbots, are subject to transparency requirements only, while minimal-risk AI, such as spam filters, may be used freely.

The Act requires post-market monitoring of AI systems. Serious breaches of safety laws or of fundamental rights must be reported to the national supervisory authority.

The Council of the EU adopted its common position on the Act on 6 December 2022 and will enter negotiations with the European Parliament once the latter adopts its own position, with a view to reaching agreement on the proposed regulation. It is likely that a two-year implementation period will begin in 2023.

Why is this important?

The sanctions under the AI Act can be severe, with potential penalties of up to €30m or 6% of total worldwide annual turnover (whichever is higher) for the most serious infringements. In the broader context, the AI Act sets a benchmark for the regulation of AI that is likely to inform approaches to AI governance around the world.

Any practical tips?

Given the extremely broad definition of AI, most sophisticated businesses are likely to have already incorporated AI systems into many aspects of their operations. Those that do business in the EU will need to assess those systems against the risk categories in the AI Act and prepare for compliance. It is worth noting that the AI Act is accompanied by the AI Liability Directive (discussed in a separate Snapshot), which makes it easier for individuals to bring claims against those deploying AI systems.

Winter 2022