The march of the machines?

Published on 10 February 2020

You may have seen the piece by the BBC a couple of weeks ago about Artificial Intelligence entering a “winter”, following its “summer” of the last decade. In short, there appears to be a growing consensus that AI has been over-hyped, particularly by those selling it.

There is a school of thought that the insurance industry is ripe to be turned into a repository for Artificial Intelligence. The argument goes that both the writing of risks and the handling of claims are fundamentally about the assimilation, organisation and application of data. AI is better than humans at that: it can work in real time rather than relying on imperfect historical proxy data, and it is a lot cheaper. To make money in insurance you need to focus on your costs of doing business. Human employees are expensive and incapable of non-stop productivity. Replace humans with AI and your operation will be cheaper, your bottom line will improve, your shareholders will be happy and your share price will skyrocket.

In assessing whether this is really the future for those who work in insurance, it is important to understand what current AI is (and will remain for the foreseeable future) and what it isn’t.

Artificial Intelligence is not human intelligence. It might look like human intelligence, but it is not the same. At the risk of over-simplification, AI’s principal utility in an insurance claims context is identifying data that is economically valuable, in the sense that it is relevant to the validity or quantum of a claim.

Typically, the algorithms in AI applications identify similarities between one piece of data and another. You give the AI application examples of the kind of data you want to find (say, in a “universe” of documents); the application creates a mathematical model of that “exemplar” data based upon the relationships between the characters (ie letters and numbers) it contains. The application then effectively plays a game of “snap”: it identifies the data in the “universe” which matches the model it has created and presents what it has found to the user.
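
By way of illustration only, the sketch below shows one way such an exercise might look, using character n-gram frequencies and cosine similarity to score a document “universe” against a handful of exemplars. The library (scikit-learn), the technique and the example documents are all assumptions made for the purposes of illustration, not a description of any particular vendor’s product.

```python
# Purely illustrative sketch: score each document in a "universe" by its
# character-pattern similarity to a handful of exemplar documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

exemplars = [
    "Loss adjuster report: water ingress to warehouse following the storm",
    "Invoice for emergency repairs to the warehouse roof, claim ref 12345",
]
universe = [
    "Minutes of the quarterly marketing meeting",
    "Surveyor's note on roof damage sustained in the storm",
    "Staff canteen menu for March",
]

# Model the exemplars as character n-gram frequencies -- the "relationships
# between letters and numbers" described above, with no notion of meaning.
vectoriser = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
exemplar_vectors = vectoriser.fit_transform(exemplars)
universe_vectors = vectoriser.transform(universe)

# "Snap": rank each document in the universe by its closest match to an exemplar.
scores = cosine_similarity(universe_vectors, exemplar_vectors).max(axis=1)
for doc, score in sorted(zip(universe, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

The scores here are nothing more than measures of textual overlap; the program attaches no meaning to the documents it ranks, which is precisely the limitation discussed below.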

In short, AI is extremely good at rapidly and cost-efficiently identifying relevant material from a general mass of relevant and irrelevant material. It can review and assimilate millions of pieces of data in a matter of seconds and then suggest which of that data might be relevant for the purpose at hand. The same exercise could take humans thousands of hours and cost millions of pounds. This is of considerable benefit in a world characterised by the proliferation of data.

Critically, however, in the context of data interrogation AI needs to be properly “trained” by a human to look for the right thing in the first place. AI engineers use the phrase “rubbish in, rubbish out” to describe the technology’s reliance on proper training. AI does not understand what it is looking for in the way a human does, nor, indeed, why it is looking in the first place. It does not understand why the material it has identified is relevant, nor how that material can be used in assessing the validity of the claim, whether in the context of a dispute or otherwise. Nor can AI take account of “context” that is not recorded in the data fed to it, or of the infinitely imperfect ways in which humans express themselves. A piece of data which AI identifies as potentially relevant may therefore not be relevant at all, and, conversely, AI can overlook relevant data. This is known as the problem of “false positives and false negatives”.

The AI programme’s understanding of relevance is entirely dependent on the data and “training” it has been given by a human operator at a given point in time. Large insurance claims and disputes often evolve over their course, as new information and evidence comes to light giving rise to additional allegations or defences. Without the input and re-training of a human operator, however, the AI programme will overlook data and documents that are relevant to the new issues in the claim. The programme cannot formulate its own comprehension of relevance: it knows only what it is told, and that knowledge is itself limited to a recognition of relationships between the textual characters of data that a human operator has already adjudged relevant. All AI can identify is patterns in characters and letters; it does not understand what those characters and letters actually mean.
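
Continuing the purely illustrative sketch above, the snippet below shows how a model “trained” only on the original issue in a claim can miss a document relevant to a newly raised issue (a false negative) until a human operator supplies fresh exemplars. Again, the technique and example documents are assumptions for the sake of illustration.

```python
# Illustrative only: a model "trained" on one issue overlooks a document
# relevant to a new issue until a human operator re-trains it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def best_score(exemplars, document):
    """Similarity of one document to its closest exemplar (illustrative only)."""
    vectoriser = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
    exemplar_vectors = vectoriser.fit_transform(exemplars)
    return cosine_similarity(vectoriser.transform([document]), exemplar_vectors).max()

original_exemplars = ["Loss adjuster report on storm damage to the warehouse roof"]
new_issue_document = "Broker email conceding the claim was notified outside the policy deadline"

# Low score: the document bears little textual resemblance to the storm-damage
# exemplar, so it would be overlooked despite supporting a new defence.
print(f"{best_score(original_exemplars, new_issue_document):.2f}")

# A human operator adds an exemplar for the late-notification issue and the
# same document now scores noticeably higher.
retrained_exemplars = original_exemplars + [
    "Correspondence regarding late notification of the claim under condition 5",
]
print(f"{best_score(retrained_exemplars, new_issue_document):.2f}")
```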

To illustrate this point, think of the word “star”. When you do, you do not tend to think of the letters S-T-A-R, nor do you immediately have in mind lines drawn to five points, each with an angle of 36 degrees. Instead, your brain will immediately and simultaneously call up the many things that the word “star” connotes, both literal and figurative: a luminous point in the night sky, a famous actor or sportsperson, a giant burning ball of gas, a spiritual or religious symbol, a symbol of rank and so on. And when those various ideas come into your head, you do not think of them in terms of the letters of the words and their relationships with one another; instead, each is a meaningful image in your mind’s eye. To put it another way, as a human you instinctively know what the word “star” means in the context of your existence. AI doesn’t.
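
The point can be made concretely with a purely textual similarity measure (an illustrative choice, not how any particular AI product works): strings that share letters score highly even when their meanings are unrelated, while strings a human would treat as near-synonyms score close to zero.

```python
# Character-level similarity is not meaning: "star" and "start" look alike on
# the page despite unrelated meanings, while "star" and "celebrity" share
# almost no letters despite meaning much the same thing to a human reader.
from difflib import SequenceMatcher

def char_similarity(a: str, b: str) -> float:
    # Ratio of matching character runs -- a purely textual measure.
    return SequenceMatcher(None, a, b).ratio()

print(char_similarity("star", "start"))      # high: similar letters, different meaning
print(char_similarity("star", "celebrity"))  # low: different letters, similar meaning
```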

This is crucial in appreciating AI’s limits. It is very easy to fall into the trap of believing that AI is human intelligence, because it seems to be so much better and faster than a human being at identifying similarities between different pieces of data and, in turn, at identifying data that is useful, something regarded as a hallmark of human intelligence. However, the identification of similarities between pieces of data is just one facet of human intelligence, and AI goes about that particular task in a very different way to a human. Importantly, AI’s apparent prowess at this task does not translate into a broader ability to reason and argue. Reasoning aside, AI also lacks the emotional intelligence to communicate its findings to an audience of people with differing levels of understanding of the relevant facts and differing commercial agendas. It is incapable of navigating a claims context in which compromise and sensitive management of stakeholders is critical. AI is basically just very good at “snap”. It is not a “holistic” form of intelligence.

The misconception that AI is actually intelligent is reinforced by AI devices controlled by voice-recognition technology, like Amazon’s Echo and Google’s Home: they seem almost human because they appear to interact with you in a human-like way. This is fine when your dealings with them are limited to ordering more cat food or making Radio 2 come on in your kitchen. The more sinister side is that functionality like voice recognition conditions us to accept that AI is actually intelligent and, worse, infallible. That becomes especially problematic when it comes to dealing with the machinery of the state (such as the Inland Revenue), which is inclined to use AI more and more to save costs.

The bottom line is that the job of deploying data (be it a document or something else) for the purposes of constructing a reasoned position on, for example, coverage remains (or ought to remain), for the foreseeable future, the job of a human claims handler. It will remain so until AI is capable of thinking and constructing a reasoned position like a human. That appears to be a very long way off, and if it ever happens the human race as a whole, never mind those working in and around the insurance industry, is likely to be under threat from the “machines”.
