
Part 3 - AI regulation in the US

Published on 11 March 2024

This is Part 3 of 'Regulation of AI – raising the trillion dollar bAIby'

Back in October 2022, the White House published federal guidance – a Blueprint for an AI Bill of Rights identifying five principles intended to guide the design, use, and deployment of automated systems. It was designed to operate as a roadmap for protecting the public from AI harms and was followed in October 2023 by the US President's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Executive Order sets out eight "guiding principles and priorities", details how those principles and priorities should be put into effect, and imposes reporting requirements. Federal agencies such as the National Institute of Standards and Technology (NIST) and Homeland Security will issue standards and guidance and use existing regulatory authorities to monitor and control the use of AI in ways that will impact AI providers and users.

In July 2023, Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI agreed to voluntary commitments to move toward safe, secure, and transparent development of AI technology. These tech companies will face not only different regimes globally but, within the US, legislation that differs between states. California is expected to lead the way with an AI framework. Some online platforms have expressed concern at the lack of policy at a federal level versus the hundreds of AI bills introduced at state level, creating a patchwork effect.

Complementing the US government initiatives, NIST released voluntary guidance in early 2023 in the form of the AI Risk Management Framework (AI RMF 1.0), aimed at organisations designing, developing, deploying, or using AI systems. The framework outlines the characteristics of trustworthy AI systems and how to balance these within the context of the use of AI. It also provides guidance on how to govern, map, measure, and manage risk throughout the AI lifecycle. There are obligations to maintain records covering procedural aspects of the AI system, to train the individuals who will be responsible for adhering to policies and procedures, and to monitor the functionality and behaviour of systems. NIST has also published a companion AI RMF Playbook as well as several mapping tools (crosswalks) linking the AI RMF to other AI standards, including (a) a Crosswalk between AI RMF (1.0) and ISO/IEC FDIS 23894 Information technology – Artificial intelligence – Guidance on risk management, and (b) an illustration of how the NIST AI RMF trustworthiness characteristics relate to the OECD Recommendation on AI, the proposed EU AI Act, Executive Order 13960, and the Blueprint for an AI Bill of Rights, in order to help users implement the framework.


Discover more insights in the AI guide