European Commission proposes new rules on AI

Published on 02 August 2021

The question

How will future EU regulations affect the development of artificial intelligence (AI)?

The key takeaway

The European Commission’s new draft regulations set specific standards and obligations for the developers of AI systems, particularly those falling into a high-risk category. Developers of these systems will need to pay very close attention to the new rules to avoid crippling fines in the future, which can reach €30m or 6% of a company’s total worldwide annual turnover, whichever is higher.

The background

Published in April 2021, the new draft regulations seek to turn Europe into “the global hub for trustworthy AI” and to “guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU”. The regulations mark a major development in the legislative landscape for AI, and developers will need to pay close attention to them to ensure future compliance.

The development

The regulations take a risk-based approach depending on the use of AI in each particular context: the higher the risk, the stricter the rules. The categories are high risk, limited risk and minimal risk, with a clear focus on high-risk systems, ie those where the AI creates a high risk to the health and safety or fundamental rights of natural persons. These include AI used in:

  • employment, worker management and access to self-employment (eg CV-sorting software for recruitment)
  • safety components of products (eg AI application in robot-assisted surgery)
  • law enforcement that may interfere with people’s fundamental rights (eg evaluation of the reliability of evidence), and
  • administration of justice and democratic processes (eg applying the law to a concrete set of facts).

Developers of high-risk AI systems will have to adhere to specific obligations before their systems can be placed on the European market. These include:
  • adequate risk assessment and mitigation systems running throughout the AI system’s lifecycle
  • high-quality datasets used to train the AI system, to minimise discrimination and risks
  • logging of the AI’s activity to ensure traceability of results throughout its lifecycle
  • designing and developing the system in a way that makes its operation and use transparent to the end-user, and
  • appropriate human oversight of the AI system during the period of its operation.

AI systems deemed a clear threat to the safety, livelihoods and rights of people fall into a further category of unacceptable risk and will be banned outright. Limited-risk and minimal-risk AI systems will have fewer, if any, obligations to comply with. Chatbots (being systems that interact with natural persons) are a good example of the limited-risk category: their developers will only have to adhere to transparency obligations, informing users that they are interacting with a machine. Minimal-risk AI systems, such as AI-enabled video games or spam filters, can be used freely and will not be regulated.

Non-compliance with the regulations can result in hefty fines for developers. Non-compliance with the provisions on prohibited AI practices or the data and governance obligations can incur fines of up to €30m or 6% of a company’s total worldwide annual turnover, whichever is higher. Infringement of any other requirement (eg on activity logging, transparency or human oversight) can incur a fine of up to €20m or 4%, and supplying incorrect, incomplete or misleading information to the authorities a fine of up to €10m or 2%.

Why is this important?

The draft regulations show the EU’s clear intent to set strong, robust standards for the development of AI, as evidenced most starkly by the potentially massive fines of up to 6% of worldwide turnover.

Any practical tips?

Anyone developing AI, now or in the future, should familiarise themselves with the draft regulations as soon as possible, in particular to understand which risk category their systems may fall into. If there’s a chance of being deemed the provider of a high-risk system, taking steps now to hardwire in the necessary protections could prove critical in avoiding huge regulatory fines down the line.