
The Ethics of AI - The Digital Dilemma

Published on 11 March 2024

The possible benefits of AI and the ways in which it may transform a wide range of sectors are already evident.  However, along with more powerful AI come new or heightened risks.  Given these risks, ethical principles are necessary to guide the development and deployment of AI systems in a manner that maximises their benefits while minimising harm.

Ethical principles tailored to the responsible design and use of AI systems have been put forward by a number of different organisations across the globe.  Whilst there is not a single agreed version of these ethical principles, they tend to follow similar themes and highlight a number of potential harms which should be considered in the development and use of AI tools. 

The "starting five" principles

The UK government's AI White Paper published on 29 March 2023 lists five values-focused cross-sectoral principles for regulators to interpret and apply within their respective domains, intended to promote the ethical use of AI:

  • safety, security and robustness
  • appropriate transparency and explainability
  • fairness
  • accountability and governance
  • contestability and redress.

The government confirmed the same principles in its response to the consultation on the AI White Paper, which was published on 6 February 2024 (Consultation Response).

These principles are pivotal to the government's approach to regulating AI in the UK.  They provide a framework which regulators must apply, intended to allow them to do so in a proportionate and agile manner, with the level of risk determined by where and how AI is used in a particular context.  See Part 1 - AI Regulation in the UK for information about the UK's regulatory approach.

We will explain each of the key principles in turn, noting the unavoidable interplay between the various principles and the possible harms that they seek to address.

Safety, security and robustness.  AI systems must be safe: they should be designed, deployed and operated throughout their lifecycle in a manner that minimises the risk of harm to individuals and society.  These harms could be deliberate or accidental, may affect individuals, groups, organisations or even nations, and may take various forms, such as physical, psychological, social or economic harms.

AI systems must be technically secure, and be protected against unauthorised access, manipulation or attacks that could lead to harmful outcomes.  Security measures are crucial to protect against malicious use, such as spreading misinformation, stealing personal data or disrupting critical infrastructure.  Developers should consider the security threats that could apply at each stage of the AI life cycle and embed resilience to these threats into their systems.

As for robustness, AI systems should be reliable and perform consistently and as intended under a wide range of conditions. They should be able to handle unexpected situations or inputs without failing or producing erroneous outcomes.  This includes being resilient to changes in their operating environment and being able to recover from errors or failures. 

Transparency and explainability.  The principle of transparency refers to the need for information about an AI system to be communicated to relevant stakeholders.  This means that an appropriate level of information about the processes, decisions and operations of an AI system should be accessible and comprehensible, both to the developers and engineers who designed the system and to the end users and other stakeholders who may be affected by its use.  Explainability refers to the ability to interpret and understand the decision-making processes of an AI system.  Where this is lacking, the inability to see how deep-learning systems make decisions creates what is known as the 'black box problem'.  Opacity in decision-making is problematic in several ways, including difficulties in diagnosing and fixing issues and its potential to reflect or amplify societal or dataset biases without the business deploying the AI knowing.  In practice, explainability may be easier said than done in some cases, as the logic and decision-making in AI systems cannot always be meaningfully explained in a way that humans can understand.  Improving explainability could involve simplifying complex models, using visualisations or providing simplified rules that approximate the model's decision-making process.
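By way of illustration only, one common way of providing simplified rules that approximate a model's decision-making process is to train an interpretable "surrogate" model on the predictions of a more complex one.  The short Python sketch below shows this idea using entirely hypothetical data and scikit-learn; the dataset, model choices and feature names are assumptions for demonstration rather than a prescribed approach.

```python
# Illustrative only: approximating a complex "black box" model with a shallow,
# human-readable surrogate decision tree.  Data, models and feature names are
# hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A stand-in for the opaque model whose decisions need explaining
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train a shallow tree on the black box's predictions (not the original labels),
# so that its rules approximate how the complex model behaves
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The surrogate yields simplified, inspectable rules that approximate the
# black box's decision-making process
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```

A shallow surrogate of this kind trades some accuracy for rules a human can read, which is precisely the proportionality judgement the transparency principle calls for.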

Transparency and explainability should be proportionate to the risks that an AI system may present, but are in any event necessary to afford regulators and stakeholders sufficient visibility of, and information about, an AI system and its inputs and outputs to give effect to the other ethical principles (for example, for regulators to identify accountability and for individuals who may have been affected by an AI decision to challenge the decision and seek redress if necessary).

The UK government's Consultation Response noted that there was strong support from respondents to the consultation for transparency (including as to AI use, outputs and training data), citing the importance of transparency to aid with education, awareness, consent and contestability.  Suggestions for transparency measures included:

  • the public disclosure of inputs like compute and data
  • labelling AI use and outputs
  • opt-ins and human alternatives to automated processing
  • explanations for AI outcomes, impacts and limitations
  • public or organisational AI registers
  • disclosure of model details to regulators
  • independent assurance tools including audits and technical standards.

The UK government has also published its Emerging processes for frontier AI safety document to establish greater transparency on AI outputs.

Fairness.  Fairness includes identifying and mitigating biases to prevent discriminatory outcomes caused by AI systems, and ensuring the use of AI systems does not undermine the legal rights of individuals or organisations.  In order to do so, fairness needs to be considered in every aspect of AI. This would include:

  1. data fairness – AI systems can inadvertently learn and in turn perpetuate or amplify societal biases through biased training data or algorithmic design, and so only fair and equitable datasets should be used, or training examples should be re-weighted if required (a simple illustration follows this list)
  2. design fairness – using reasonable features, processes, and analytical structures in the AI architecture
  3. outcome fairness – preventing the system from having any discriminatory impact
  4. implementation fairness – implementing the system in an unbiased way.
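To make the ideas of data and outcome fairness a little more concrete, the Python sketch below uses entirely hypothetical data and column names to show one simple check of approval rates across groups and a basic re-weighting of training examples.  It is a minimal illustration of the concepts rather than a substitute for proper fairness testing.

```python
# Illustrative only: a minimal "outcome fairness" check (approval-rate gap
# between groups) and a simple re-weighting of training examples.
# Data and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1, 1, 0, 0, 0, 1, 0, 1],
})

# Outcome fairness: compare approval rates between groups; a large gap may
# indicate a discriminatory impact that needs investigating
rates = df.groupby("group")["approved"].mean()
print(rates)
print("approval-rate gap:", rates.max() - rates.min())

# Data fairness: re-weight examples so each (group, outcome) combination
# carries equal total weight during training - one simple mitigation among many
counts = df.groupby(["group", "approved"]).size()
df["weight"] = [len(df) / counts[(g, a)] for g, a in zip(df["group"], df["approved"])]
print(df)
```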

Ensuring fairness in AI also involves adhering to legal standards and ethical norms.  Fairness is a concept which underpins many areas of law and regulation, such as equality and human rights, data protection, consumer and competition law, and many sector-specific regulatory requirements (for instance the consumer duty and other consumer protections in the financial services sector).  This means that AI systems should comply with anti-discrimination laws and ethical guidelines to promote justice and prevent harm.

Accountability and governance.  It is crucial to have clear lines of responsibility for the decisions made by an AI system, in order to identify and hold the relevant parties responsible for those decisions and for any harm or unintended consequences that arise from the use of AI systems.  This is essential for creating business certainty (such as allocating liability in an AI supply chain) while also ensuring regulatory compliance.

Appropriate governance frameworks should be in place to oversee the supply and use of AI systems, incorporating ethical guidelines and standards for AI development and usage.  Assurance techniques such as impact assessments may assist in identifying risks early in the development life cycle, which can in turn be mitigated through appropriate safeguards and governance mechanisms. 

Once an AI system is in use, regular auditing to ensure it operates as intended, adheres to the required standards and remains compliant with ever-changing regulations can also be useful.  Engaging with a wide range of stakeholders (experts but also those potentially impacted by AI systems) can also help to shape robust ethical AI governance, identify and avoid potential ethical issues, and spot opportunities to improve ethical standards and practices in AI.  Retaining a "human in the loop", by using AI in such a way that it does not replace human judgement and decision making but rather augments it, is also vital.

Ethical AI governance should very much be seen as an ongoing process; as AI technologies and their societal impacts evolve, governance frameworks should also adapt. 

Contestability and redress.  The principle of contestability refers to the ability of users and affected parties to challenge, and seek rectification of, decisions made by AI systems that impact them.  This is particularly important when these decisions have significant consequences for people's lives.  For AI decisions to be contestable, the systems need to be transparent about how decisions are made, and mechanisms through which challenges can be made also need to be provided.  These could include user interfaces for feedback, human oversight where decisions can be reviewed, and clear processes for escalating concerns.

Redress involves correcting wrong decisions and, where necessary, providing avenues for affected individuals to seek compensation or other remedies where they believe they have been unfairly treated by an AI system or where it otherwise causes harm.  Beyond addressing individual grievances, redress also involves using feedback and challenges to improve the AI system.  This could mean retraining models with more diverse data, adjusting algorithms to eliminate biases, or refining decision-making processes.  Effective redress mechanisms are usually supported by robust policy and legal frameworks that define the rights of individuals and the obligations of AI developers and deployers.  These frameworks can provide guidelines for the types of redress available and the procedures for seeking it.

The UK's non-statutory approach to date has meant that new rights or new routes to redress have not been implemented.  However, the government noted in its Consultation Response that it is continuing to analyse how existing legal frameworks allocate accountability and legal responsibility for AI across the lifecycle.  Watch this space as to whether the government introduces measures to allocate accountability and legal responsibility to those in the AI lifecycle who are best able to mitigate AI-related risks.

Moving forward

Implementing these ethical principles is complex and multifaceted, particularly in the face of a regulatory regime that is still taking shape; it requires ongoing effort, multidisciplinary collaboration and continuous evaluation and adaptation of AI systems as technology and societal norms evolve.


Discover more insights on the AI guide