ICO issues guidance on artificial intelligence: explaining the “black box”

Published on 07 August 2020

What steps do businesses need to take to comply with the ICO’s new guidance on artificial intelligence?

The key takeaways

Organisations using AI to assist in decision-making processes should be able to explain how the AI has produced its output. This concept of “explainability” is important because the GDPR may imply a right for data subjects, under Articles 15 and 22 (read in light of Recital 71), to an explanation of an automated decision after it has been made. To this end, companies employing AI should produce documentation explaining the mechanics of the AI used, and issue policy guidance to staff covering how they should ensure that the decisions made can be explained.


The background


The ICO and The Alan Turing Institute have collaborated to produce the “Explaining decisions made with AI” guidance, which runs to over 130 pages. It sets out, among other things, key principles to follow and steps to take when explaining AI-assisted decisions, including the procedures and policies organisations should consider putting in place. 


This is in response to concerns surrounding “black box” AI systems, whose inner workings are opaque and inaccessible to normal human understanding. Decisions made with “black box” systems may therefore be difficult to explain to the individuals whose personal data was used to make them.


The guidance is not a statutory code of practice under the Data Protection Act 2018 but is intended as a best practice guide for explaining decisions to individuals which have been made using AI to process personal information. It builds on the ICO’s previous work in this area, including its AI Auditing Framework and Project ExplAIn interim report. Whilst GDPR requirements are touched on in the guidance, it also includes points that are wider in scope, such as the ethical considerations around the use of AI. 


This is of particular importance to organisations that develop, test or deploy AI decision-making systems, such as those employing AI-based ad-tech.


The guidance


The thrust of the guidance is that procedures should be adopted to enable a company to explain and evidence to a decision recipient how that decision was made with AI. 

It is suggested that policies should cover all of the “explainability” considerations and actions required of employees at each stage, from concept formation through to the deployment of an AI decision-support system.

Furthermore, it is essential to document not only the processes behind the design and implementation of the AI system, but also the actual explanation of each outcome. That documentation should be comprehensible to people with varying levels of technical knowledge and can help an organisation evidence how a particular decision was made.
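
By way of illustration only (the ICO guidance itself contains no code), the short Python sketch below shows one way an organisation might generate a plain-language explanation record for an individual AI-assisted decision. The model, feature names and data are hypothetical assumptions for the purposes of the example, not anything prescribed by the ICO or The Alan Turing Institute.

```python
# Hypothetical sketch: generating a plain-language explanation record for a
# single AI-assisted decision. The model, features and data are illustrative
# assumptions, not part of the ICO guidance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["income", "existing_debt", "years_at_address"]  # assumed example features

# Synthetic data standing in for an organisation's historical decisions.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_decision(applicant: np.ndarray) -> dict:
    """Return the decision plus the main factors, expressed in plain terms."""
    x = scaler.transform(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * x  # signed contribution of each feature to the score
    outcome = int(model.predict(x.reshape(1, -1))[0])
    ranked = sorted(zip(feature_names, contributions), key=lambda pair: -abs(pair[1]))
    return {
        "decision": "approved" if outcome == 1 else "declined",
        "main_factors": [
            f"{name} {'supported' if value > 0 else 'counted against'} approval"
            for name, value in ranked
        ],
    }

print(explain_decision(np.array([1.2, -0.3, 0.8])))
```

A record of this kind, produced for each decision, is one way of giving decision recipients and reviewers an explanation that does not require technical knowledge of the underlying model.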


If the AI system is supplied by a third party, it is the responsibility of the data controller in the procuring organisation to ensure that the system is capable of producing an explanation for the decision recipient.


Why is this important?


Anyone involved in the decision-making pipeline has a role to play in contributing to an explanation of a decision based on an AI model’s result. As AI becomes ever more prevalent in mainstream technology, firms should keep abreast of developing guidance. In addition, as explained above, the GDPR may imply a right to an explanation of an automated decision after it has been made.


Any practical tips?


In the absence of specific formal regulation on the matter, there are several steps that any organisation using AI should consider taking, including:

  • producing or updating company policy covering “explainability” and the steps each staff member should take when involved with the creation and deployment of the AI system
  • documenting each stage of the process behind the design and deployment of an AI decision-support system, together with a full explanation of the outcome (an illustrative record is sketched after this list)
  • verifying that any third-party suppliers of AI used in decision making processes can explain how the AI has produced its output.
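
One way to structure such documentation, offered purely as a hypothetical sketch (the field names are assumptions rather than a schema prescribed by the ICO or The Alan Turing Institute), is a per-decision audit record of the kind below.

```python
# Hypothetical sketch of a per-decision audit record capturing the kind of
# information an organisation might retain to evidence how an AI-assisted
# decision was made. Field names are illustrative assumptions only.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    decision_id: str
    data_subject_ref: str               # pseudonymised reference, not raw personal data
    model_name: str
    model_version: str
    supplier: str                       # third-party supplier, where the model is procured
    inputs: dict                        # the personal data actually used in the decision
    outcome: str
    plain_language_explanation: str
    human_reviewer: str                 # person accountable for the final decision
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    decision_id="2020-08-0001",
    data_subject_ref="subject-4821",
    model_name="credit-scoring",
    model_version="1.3.0",
    supplier="ExampleVendor Ltd",
    inputs={"income": 32000, "existing_debt": 4500},
    outcome="declined",
    plain_language_explanation="Existing debt relative to income was the main factor counting against approval.",
    human_reviewer="j.smith",
)

print(json.dumps(asdict(record), indent=2))  # retained alongside the design documentation
```

Retaining records of this kind supports both the “explainability” policies described above and any later request from a decision recipient for an explanation.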