
Part 6 – Practical Considerations

Published on 11 March 2024

This is Part 6 of 'Regulation of AI – raising the trillion dollar bAIby'

AI-focused actors and providers have been preparing for their forthcoming AI obligations, and for AI governance, for some time, but it is now prudent for most organisations to assess whether and how their use of AI will come within the scope of regulation in key territories, to become familiar with each regime, and to devise a means of keeping up with the anticipated fast-moving changes. Planning for the costs of compliance and for AI governance, including systems and procedures for data retention and record keeping, should also form part of current business strategy, together with building AI expertise internally and identifying trusted advisers amid the "noise" of what is offered externally.

Promoting training that allows individuals to perform their roles and/or use the AI system in a way that is consistent with related policies and procedures will help businesses to clarify responsibilities, demonstrate accountability and minimise risks.

All providers should establish written policies, procedures and instructions for the various aspects of the AI system (including oversight of the system), and should produce documentation explaining the technical workings of their AI model and its output. They should also assess and document the likelihood and impact of any risks associated with the AI system, including in relation to privacy and security.

Some uses of AI are likely to fall within prohibited or high-risk classifications, for example under the EU AI Act, and businesses may wish to act now or in the near future to withdraw or adjust the relevant products and services so as to remove or limit these risks.

Where appropriate, businesses might consider making voluntary commitments within their industry sector. In December 2023, 28 US healthcare companies agreed voluntary commitments on the use and purchase of safe, secure and trustworthy AI. The participating companies agreed to inform users when content is largely generated by AI, to adhere to risk management frameworks when using AI-powered solutions, and to develop new approaches that promote responsible use of the technology. These commitments follow a similar agreement in September 2023 between 15 leading AI companies to develop their models responsibly.

As discussed above, AI, IT or cyber ISO/IEC standards (such as ISO/IEC 23894 on AI risk management) can be used as tools to support the safety, security and resilience of AI systems and solutions, alongside research and development programmes that address key technical challenges, develop metrics, and carry out risk assessments to measure and evaluate AI. Under these standards, organisations should be prepared to provide regulators with information on an AI system's decision making and the sources of its training data.


Discover more insights on the RPC AI guide