President Biden’s Executive Order: How the US is planning to tame AI

Published on 13 December 2023

The question

What is the impact of the Biden administration’s recent Executive Order on AI?

The key takeaway

The Biden administration has recently issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, outlining a comprehensive approach to AI governance.

The background

AI has been at the forefront of the public consciousness in the last year, promising efficiency and innovation while also causing concerns about security, safety and ethical implications. Different approaches have been taken to AI regulation around the world, for example the draft EU AI Act and the UK AI White Paper (both reported in previous Snapshots).

In October 2022, the US signalled its own approach to AI regulation in a Blueprint for an AI Bill of Rights. This sets out a list of five principles that should guide the design, use, and deployment of automated AI systems to protect the public, including:

  • safe and effective systems
  • protection against algorithmic discrimination, ensuring that algorithms and systems are designed and used in an equitable way
  • data privacy
  • notice that automated systems are being used, with an explanation of why and how they impact you
  • ensuring the availability of human alternatives, consideration, and fallback.

The development

The Executive Order follows on from the Blueprint by expressing the US's commitment to establishing clear principles for the governance of AI systems. These principles identify potential harms from which citizens should be protected, marking a significant step in addressing the ethical considerations surrounding AI development and deployment. Significantly, the Executive Order takes a proactive stance by requiring that developers of the most powerful AI systems share safety test results with the US Government before making products and services generally available to the public. At the same time, the National Institute of Standards and Technology (NIST) has been tasked with developing new standards, tools, and tests to ensure the safety of AI systems. These standards will be applied by the Department of Homeland Security, which will also establish an AI Safety and Security Board.

The Executive Order also includes actions to:

  • protect citizens from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content
  • establish an advanced cybersecurity programme to develop AI tools that find and fix vulnerabilities in critical software
  • set out a framework to develop standards for biological synthesis screening, thereby protecting against the risks of using AI to engineer dangerous biological materials, and
  • develop a National Security Memorandum that directs further actions on AI and security.

Why is this important?

While it does not appear that the US intends to pass any all-encompassing legislation on AI in the next few years (along the lines of the EU AI Act), the Bill of Rights and Executive Order signify that it is actively considering the development and application of new standards. It remains to be seen what form these standards will take, whether they will be legally binding or merely recommended, and how the US Government intends to enforce them.

Any practical tips?

Businesses developing foundation models and other powerful AI systems should review the new processes for providing safety test results to the US Government before public release. Other businesses looking to develop, procure, or use AI in the US should keep an eye on further developments, especially any new standards issued by NIST.

Winter 2023