New report on the impact of AI on product safety

Published on 28 July 2022

The question

How is AI impacting product safety, and where are the gaps in existing legal and regulatory frameworks?

The key takeaway

It goes without saying that AI presents a variety of opportunities, challenges and risks to product safety. Whilst the current legislation is sufficient for most AI products, it is clear that legislation and regulator knowledge do not adequately cover all AI products, which in turn may lead to legal uncertainty.

The background

The Office of Product Safety and Standards (OPSS) commissioned a report from the Centre for Strategy and Evaluation Services. The resulting report, “Study on the Impact of Artificial Intelligence on Product Safety” (the Report), was published on 23 May 2022.

The development

The objective of the Report was “to examine the current and forecasted future impacts of artificial intelligence (AI) in consumer products, and what this means for product safety”.

The Report found that the use of AI in consumer products is increasing, which creates both opportunities and risks from a product safety perspective. On the opportunities side, AI systems can improve product safety by enabling more efficient and effective products and by assisting with data collection. They also allow predictive maintenance, which improves safety and reduces maintenance and downtime. Analysis produced during industrial assembly also increases product quality and can lead to improvements in cyber security protection.

On the downside, the challenges to product safety presented by AI set out in the Report include:

  • concerns over robustness and predictability where AI products do not behave as the manufacturer or consumer expects
  • a lack of transparency and explainability in some AI systems
  • issues with security and resilience, for example cyber security vulnerabilities in AI consumer products that can lead to consumer harm
  • unfairness and discrimination demonstrated by AI systems due to biases or inaccuracies in the data used to train them, and
  • privacy and data protection problems arising from the data inputted into AI systems to train and use them.

A number of potential material and immaterial harms are also identified in the Report, although it is noted that many of these harms remain theoretical. An example of material harm is an AI-driven robot causing a person physical injury, whereas an example of immaterial harm is potential discrimination caused by the AI.

Looking at the current regulatory framework and how it applies to AI, the Report concludes that current legislation is sufficient for some AI products. However, a challenge is that many legal definitions do not fit AI products. For example, the General Product Safety Regulations 2005 do not refer to software in their definition of a product. The Report also identifies the need to improve regulatory bodies' skills and knowledge on AI. The result is that AI developers could face legal uncertainty over how the law applies to them and their products.

Why is this important?

The Report comes just as we are starting to see an explosion in AI-driven processes. It is well timed, sharpening focus on some of the weak spots in AI development, not least the uncertainty over how product safety legislation applies to AI products and, in turn, their developers.

Any practical tips?

If you’re in any way involved in the development of AI systems, the Report (and others like it) should be essential bedtime reading. The AI systems you’re building need to be robust for the long term, and the Report's findings should help you ensure that you’re in a good place (if not ahead of the competition) when the law starts to fill in some of the current gaps in the legal and regulatory framework.