
Computers, Coronavirus and Collaboration

Published on 03 July 2020

The power of AI can be seen in the world's response to global healthcare issues, most recently with Covid-19. But with great power comes great responsibility. In this article we look at why collaboration is key to minimising the risks whilst getting the best out of the opportunities offered by AI.

Opportunity

Although it may conjure up images of a futuristic sci-fi world, "artificial intelligence", at its core, is any algorithm or software designed to emulate human cognition in the analysis, interpretation and comprehension of data. Such capability can be used to make sense of extremely large medical data sets and, on a much smaller scale, for the treatment of individual patients.

Large-scale analysis

The AI platform BlueDot, which uses machine learning to track, locate and report on how infectious diseases spread, is a good example of how AI can be deployed to analyse quantities of data that are simply beyond the realms of human comprehension. The platform tracks over 150 diseases and syndromes around the world, collecting data from official healthcare sources (such as the US Centers for Disease Control and Prevention), commercial flight data, climate data from satellites and local information from journalists and healthcare workers (filtering through 100,000 online articles in 65 different languages each day).
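
To give a flavour of the kind of first-pass filtering such a platform has to perform, the sketch below flags outbreak-related news items with a simple keyword screen. It is purely illustrative: BlueDot's actual pipeline is proprietary and far more sophisticated (machine translation, trained classifiers and epidemiological models), and the term list and data fields here are invented.

```python
# Hypothetical first-pass filter over a daily news feed (not BlueDot's
# actual method): keep only articles mentioning outbreak-related terms.
OUTBREAK_TERMS = {"pneumonia", "outbreak", "cluster of cases", "unexplained fever"}

def flag_outbreak_articles(articles):
    """Return the articles whose (already translated) text mentions an outbreak term."""
    return [a for a in articles if any(t in a["text"].lower() for t in OUTBREAK_TERMS)]

feed = [
    {"source": "local-news", "text": "Cluster of cases of unusual pneumonia reported"},
    {"source": "sports-desk", "text": "Home side wins the league"},
]
print(flag_outbreak_articles(feed))  # only the pneumonia article survives
```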

In December 2019, in the course of its usual activities, the platform registered a cluster of unusual pneumonia cases in Wuhan, China. The platform had detected what would become known as Covid-19 nine days before the World Health Organisation issued its statement alerting the world to the emergence of a novel coronavirus. Could the world have made better use of those nine days? We now know that countries that reacted quickly to Covid-19 seem to have done better than those that were slower to respond. Governments may pay more attention to AI platforms like BlueDot in future.

AI's ability to analyse large-scale data sets has also been deployed to help source a treatment for Covid-19. Concerns about the time it may take to discover, test and receive approval for any new treatment have led pharmaceutical companies to review whether existing approved drugs can be repurposed for use against Covid-19 (in the same way that sildenafil, originally developed to treat angina and hypertension, was repurposed as Viagra for erectile dysfunction). Scientists have used AI to narrow the list of promising drug candidates so they can be put forward for clinical trials without delay.
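
As a toy illustration of that narrowing step, the sketch below ranks a set of already-approved drugs by a model-predicted suitability score and shortlists the top candidates. The drug names and scores are invented; a real system would derive its scores from structural, binding or clinical data.

```python
# Illustrative only: shortlist approved drugs by a hypothetical model score
# predicting activity against a viral target, so trials can start with the
# most promising candidates.
predicted_scores = {"drug_a": 0.91, "drug_b": 0.34, "drug_c": 0.78, "drug_d": 0.12}

shortlist = sorted(predicted_scores, key=predicted_scores.get, reverse=True)[:2]
print(shortlist)  # ['drug_a', 'drug_c']
```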

Individual patients

Whilst large-scale examples tend to grab the headlines, it is worth remembering that AI can also be used in the diagnosis and treatment of individual patients. 

In the early stages of the Covid-19 pandemic in China, when a review of radiological findings was needed in order to confirm a diagnosis, AI was used to relieve the immense pressure on medical practitioners, who had to visually review up to 300 images per patient. An algorithm was developed that could recognise lesions in CT images and report quantitatively on the findings, producing results at much greater speed and allowing medical professionals to focus their attention elsewhere. Although it was developed in response to the very significant pressures of the pandemic, such AI would clearly be of great assistance in a wide variety of healthcare scenarios.
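
The quantitative reporting step can be illustrated with a minimal sketch: given a lesion-probability map produced by an assumed image-recognition model for one CT slice, it reports the proportion of the slice affected. The threshold and the stand-in data are invented for illustration.

```python
import numpy as np

def lesion_burden(prob_map: np.ndarray, threshold: float = 0.5) -> float:
    """Percentage of pixels classified as lesion in one CT slice."""
    return 100.0 * (prob_map >= threshold).mean()

# Stand-in for the probability map a trained model would output for a slice.
rng = np.random.default_rng(0)
slice_probs = rng.random((512, 512))
print(f"Lesion burden: {lesion_burden(slice_probs):.1f}% of slice")
```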

Self-care

AI can empower individuals to take more responsibility for their own healthcare. With an increased focus on wellbeing, healthcare apps now enable people to triage themselves at home at the first sign of an ailment, an advantage that has become particularly pertinent in the Covid-19 era. Patients are therefore likely to present to medical professionals sooner, providing more opportunity for earlier, and more successful, treatment.

Identify risks and collaborate to allocate liability 

In a healthcare context, there is inevitably a risk that things can go wrong, and when things go wrong, litigation might follow. Add AI to the mix and the picture becomes more complicated; some of the unique risks posed by AI can mean that it is much harder to identify where any fault might lie. Here are some common risks that can give rise to liabilities: 

Humans

All the way through the supply chain, AI systems and devices rely on human decision-making: that could be a software engineer involved in the initial design of an algorithm, or a clinician choosing to rely on AI for prescription or treatment purposes. With human decision-making comes the unavoidable risk of human error. Where AI uses large-scale data sets, a small human error (in the input of data, for instance) can be amplified and repeated, making it a far larger issue than if the same error were made during the course of treatment for one patient.
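
A toy example of that amplification, with invented numbers: a single misplaced decimal point in a shared constant affects every patient the system processes, whereas the same slip made manually would usually affect only one.

```python
# Intended dosing constant vs a one-character data-entry error (hypothetical).
ML_PER_KG = 0.15
ML_PER_KG_TYPO = 1.5  # misplaced decimal point

patient_weights_kg = [60, 72, 85]
doses = [round(w * ML_PER_KG_TYPO, 1) for w in patient_weights_kg]
print(doses)  # every dose is ten times too high: [90.0, 108.0, 127.5]
```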

Automation bias

The well-documented phenomenon of automation bias is another example of how the use of AI can create unintended but potentially serious problems. Automation bias is the propensity for people to defer to recommendations made by AI, even where those recommendations contradict an individual's own knowledge and experience. Within healthcare, the ramifications can be serious: it could mean the difference between a correct and an incorrect diagnosis, and a better or worse outcome for the patient. This is of particular concern if the AI being used is inadequate due to human error earlier in the process.

Limitations

The data driving machine learning and forming the basis for a lot of AI software also brings with it potential risk – data may be incomplete or unreliable (for example, demographic bias may render it flawed). Decisions made on the analysis of inadequate data could lead to adverse outcomes. 
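
One simple, illustrative check for the demographic bias mentioned above: compare the make-up of a training set against the population the system will serve, and flag groups that are badly under-represented. All figures below are invented.

```python
from collections import Counter

# Hypothetical age-band mix of a training set vs the target population.
training_records = ["18-40"] * 700 + ["41-65"] * 250 + ["65+"] * 50
population_share = {"18-40": 0.35, "41-65": 0.40, "65+": 0.25}

counts = Counter(training_records)
total = sum(counts.values())
for group, expected in population_share.items():
    actual = counts[group] / total
    if actual < 0.5 * expected:  # crude threshold for "badly under-represented"
        print(f"Warning: {group} is {actual:.0%} of training data vs {expected:.0%} of population")
```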

In the context of Covid-19, imagine if governments had relied on something like BlueDot and it had suggested there was no reason to act quickly to combat the virus; or if AI had misdiagnosed CT scans and patients had been erroneously discharged from hospital early.

Looking for someone to blame

If patients are injured and bring claims, claimant lawyers will focus on the party responsible for the alleged error. Where AI has been used, that could be the doctor who made a diagnosis based on allegedly flawed AI-driven results; the hospital which installed the equipment; or the manufacturer or software engineer responsible for the design and development of the device. There could also be a combination of multiple defendants.

Claims could also take various forms: a doctor or hospital might face a claim in negligence if the medical care is inadequate. Where a product is involved, manufacturers might be targeted under consumer protection legislation, or face a contractual claim relating to the quality of goods or services. Manufacturers can also be exposed if individuals rely on products to diagnose themselves and get it wrong.

Minimising risk through collaboration 

Early collaboration between developers, manufacturers, hospitals and clinicians is key to mitigating the risks of litigation by identifying, and allocating, potential liabilities.

In 2019 the UK Government issued an updated "Code of Conduct", which sets out 10 principles for the development of "safe, ethical and effective data-driven health and care technologies". The Code encourages collaboration, transparency and accountability between all parties throughout the development process of health and care technologies. All parties in the supply chain should welcome further regulatory guidance.  

Anyone who is using, or has responsibility for, AI technologies within a healthcare setting needs to ensure that the right people receive appropriate training, so that they are using any AI devices safely and to their full potential. It is becoming increasingly clear that doctors of the future will need to be software specialists as well as medical experts, and that all parties need to play their part in working out where the risks lie and how they can be minimised.

The future 

The possibilities for AI are seemingly endless and we have already seen some of the enormous benefits it can bring to the healthcare sector. 

Active engagement and sharing of knowledge between the different players in the development chain will help to pre-empt, and potentially avoid, some of the risks posed by AI. AI has already been used in the fight against Covid-19. If scientists, manufacturers, regulators and clinicians can collaborate to harness AI, whilst catering for the risks, AI could help us find a much-needed way out of some of the healthcare challenges we are currently facing.

Emma Kislingbury and Genevieve Isherwood are Associates in the Medical and Life Sciences team at RPC.
This article first appeared in Healthcare Markets International on 1 July 2020.