
New guidance on Machine Learning – plenty for humans to learn too

04 November 2021.

A triumvirate of healthcare regulators has published ten guiding principles (the Principles) concerning the development of Good Machine Learning Practice (GMLP). The Principles shed light on the risks posed by artificial intelligence (AI) products that depend on machine learning. Humans should take note.

Machine learning has the potential to transform how patients are treated by deriving insights from the vast amount of data generated in healthcare settings every day. Machines are designed to improve performance by learning from these data.

The Principles, published last week by the US Food and Drug Administration (FDA), Health Canada and the UK's Medicines and Healthcare products Regulatory Agency (MHRA), are intended to serve as the bedrock for developing GMLP and facilitate growth.

We have looked at the Principles to highlight litigation and regulatory risks that healthcare companies and their insurers should guard against.

  • Risk: Products developed without input from clinicians on possible risks to patients

    A product could be placed on the market before "real world" considerations, informed by doctors' experiences, have been assessed. The product may function perfectly, but only in the mind of a software engineer who lacks clinical experience. The Principles encourage drawing on multi-disciplinary clinical expertise throughout a product's lifecycle to ensure the product is as safe as possible.

  • Risk: Data collected and used for machine learning are not representative of the intended patient population

    If a data set is drawn from too narrow a patient group, the result could be sub-optimal treatment of medical conditions where AI is used to assist diagnosis. The Principles encourage ensuring that data sets adequately reflect the age, gender, sex, race and ethnicity of the intended patient population.

  • Risk: Products placed on the market before the risks of injury and side-effects are adequately investigated

    Recent history is littered with examples of products whose risks were not fully understood until many months or years after they had been prescribed to large numbers of patients. The Principles advocate using machine learning models that support the mitigation of risks at the outset, based on an understanding of the clinical risks and benefits.

  • Risk: Human factors not taken into account

    Humans interact with technology but are fallible; they may misinterpret the results of AI analysis or place too much reliance on AI where common sense would point to a different decision. The Principles recommend that, when developing AI models, manufacturers address how people will interpret and act on the results.

  • Risk: The laboratory used as a substitute for real life

    Conditions in the testing phase of a product must reflect real-life conditions, or else the product will under-perform (or perform differently) once it is placed on the market. The Principles support making laboratory conditions as representative as possible by anticipating the intended patient population and the clinical environment in which the product will be used.

  • Risk: The manuals and instructions accompanying a product are sub-standard

    Where a product is alleged to have caused an injury, the courts will assess the information and instructions that accompanied it. Clear warnings from a manufacturer about a product's intended use and limitations can make the difference between a product being deemed "safe" and one being deemed "defective". The Principles remind manufacturers to ensure that users are provided with clear information that is appropriate for the intended audience, whether healthcare providers or patients.

The full set of Principles can be found here. The MHRA states that the Principles will be used to inform areas where the International Medical Device Regulators Forum (IMDRF), international standards organisations and other collaborative bodies could work together to advance GMLP, including in setting regulatory policies. In the meantime, manufacturers, healthcare providers and their insurers should take note of the Principles when developing manufacturing practices intended to drive down the risks of AI products that depend on machine learning.