Ethics in the age of AI: new Institute of Directors checklist
What are the key considerations for boards regarding the ethical use of AI within their companies based on the Institute of Directors’ (IoD) Checklist for Boards (Checklist)?
The key takeaway
Directors must understand and effectively mitigate AI-related risks. The Checklist highlights the importance of monitoring and audit measures, as well as board accountability and other oversight mechanisms. Compliance with data and privacy requirements is also vital to meeting these objectives. Regular reviews, with corrective action taken where necessary, will be imperative.
The application of AI for commercial purposes is becoming increasingly relevant, revolutionising business operations and decision-making processes across organisations. However, as the use of AI expands, so does the need for ethical considerations linked to companies’ ESG and CSR goals.
Recognising this imperative, the IoD, a professional organisation for directors and business leaders, has released a reflective checklist to guide boards in ensuring the ethical use of AI. The Checklist aims to address the gaps revealed by an IoD member survey, where a significant number of boards (80%) lacked AI audit processes and were unaware of existing AI implementation within their companies.
The Checklist sets out several points to keep in mind during board meetings in respect of ethical AI considerations. The key takeaways are:
- boards should pay attention to how AI is implemented in their organisation and closely monitor the evolving regulatory environment
- organisations should focus on implementing robust auditing processes, guaranteeing the ongoing measurement and evaluation of AI systems
- impact assessments should be conducted to assess any potential negative effects of AI on employees and other stakeholders
- boards should assume accountability for the ethical use of AI and, where necessary, exercise their veto power over its implementation
- high-level goals aligned with organisational values should be established, focusing on augmenting human tasks, unbiased decision-making, and achieving better outcomes
- diverse and empowered ethics committees with veto powers should oversee AI proposals and safeguard ethical considerations
- organisations should prioritise data documentation and security, compliance with privacy requirements, and secure-by-design principles, and
- regular reviews and testing should be undertaken to monitor AI performance and rectify deviations as they arise.
Why is this important?
Businesses should keep a close eye on these developments to avoid regulatory risk. Given that the IoD's Checklist aligns with the growing regulatory landscape surrounding AI, including initiatives such as the UK Government's AI White Paper and the EU's AI Act, directors are well advised to take its suggestions into account.
Similarly, the Checklist emphasises compliance with data protection and privacy legislation, such as the GDPR. This ties in with guidance from regulatory bodies, such as the ICO's AI and Data Protection Toolkit, which should be considered carefully to ensure compliance in the AI domain.
Any practical tips?
Organisations using AI-powered solutions in their day-to-day operations should reflect on the ethical implications of doing so. Board-level accountability and oversight are necessary to ensure responsible decision-making in this regard. AI impact assessments can help address risks head-on, while auditing processes allow AI systems and their performance to be evaluated. Setting high-level goals for a business's use of AI helps ensure it is deployed in line with the company's values and remains internally regulated. It is also advisable to document data sources, implement strict measures to detect AI bias, and ensure compliance with data privacy regulations.
By following these practical tips, organisations can navigate the legal landscape surrounding AI, promote ethical practices, and mitigate potential risks and liabilities.