
Coming to a bank near you? How "investment AI" could transform financial mis-selling claims

09 November 2023. Published by Daniel Hemming, Partner and Olivia Dhein, Knowledge Lawyer

Living under a rock is probably the only way anyone might have escaped the media attention given to ChatGPT and generative AI in recent months. Beyond the (considerable) hype, this technology could have a profound impact on financial mis-selling claims where financial institutions and fund managers turn to the new technology to help them select investments and products.

Dan Hemming and Olivia Dhein take a look at what generative AI can already do in this area, and at how fundamental concepts in financial mis-selling cases, such as advice and misrepresentation, could change in the near future.

What can generative AI achieve in finance right now?

Generative AI has already shown promise in investment experiments. For example, the University of Oxford1 published a paper studying the performance of AI in selecting private equity funds; it found that the AI achieved returns 5% higher per year than average funds. This comes after an experiment earlier this year in which ChatGPT was persuaded (ie some of its security "guard rails" were overridden) to pick securities for an investment strategy applying investing principles used by leading funds. While only a theoretical exercise, the 38 stocks picked outperformed the UK's 10 most popular funds (including, for example, Vanguard and HSBC) by a very respectable 6.6%.2 A similar experiment3 produced slightly less positive results, though ChatGPT still achieved a respectable return.

Further, it was reported4 in May 2023 that JP Morgan had filed a trademark application for a new tool called "Index GPT", which will be able to select investments tailored to customers' needs. Goldman Sachs and Morgan Stanley have also5 started to test ChatGPT-style technology. 

These examples illustrate the potential of this technology, which may well overhaul how investments are picked, particularly given its ability to digest volumes of data and text that would be impossible for humans to process. Current products may still have flaws. But whatever drawbacks remain in current iterations, the direction of travel clearly points towards banks adopting generative AI for its ability to take decisions independently, without human guidance. Given the speed of developments in this area (ChatGPT was launched only a year ago, in November 2022), any AI investment tool that independently develops an investment strategy could plausibly become sophisticated very quickly. This would especially be the case where banks feed it specific data sets to train the model. 

What difference could generative AI make to a financial mis-selling claim?

Given the examples above, it does not take a lot of imagination to see that financial institutions and fund managers may well use generative AI to select investments for their customers. What could possibly go wrong? The answer is that nobody knows – yet.

Unlike with previous technology, humans are not programming generative AI to do anything specific. Rather, the AI tool makes independent decisions, based on general prompts, about what it considers beneficial investments. In addition, the so-called AI "black box problem" means that, as things stand, humans will not necessarily be able to understand how the tool selects an investment. To complicate matters further, AI tools currently have a tendency to make things up, or "hallucinate", which may be difficult for human users to detect.

It becomes apparent that this will raise many legal and regulatory issues. What regulatory standards should the AI fulfil? What ethical principles should be followed when it is set up? And who should be sued if the AI malfunctions – the bank or the AI developer? If the answer is the bank, for example because it developed or enhanced the tool itself, what claims can be brought?

We can also see that some concepts will not change at all – for example, the question of whether a customer placed reliance on advice is unlikely to change fundamentally, whether that advice is given by a machine or a human. There will also always be the question of whether the parties have excluded liability by contract. However, some of the discussion around core legal concepts in mis-selling cases may change significantly. 

Advice

It is an open question, for example, whether the recommendations of an AI tool could amount to "advice" given to the customer, which may give rise to a duty of care in tort requiring the bank to exercise reasonable care and skill. 

In terms of the natural language meaning of "advice", the technology already appears capable of providing it: it can select an investment strategy for maximum profit following general investment principles. There is also no technical reason why it could not be connected to the relevant trading systems to execute trades accordingly. 
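
By way of illustration only, such a pipeline could look something like the sketch below. It uses the OpenAI chat completions endpoint, but the model name, the prompt wording and the place_order broker call are all assumptions invented for illustration, not a real trading integration.

```python
# Illustrative sketch only: an LLM-driven stock picker wired to a
# hypothetical execution layer. "place_order" is invented for
# illustration; it is not a real broker or trading API.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def pick_stocks(principles: str, n: int = 5) -> list[str]:
    """Ask the model for stock tickers following stated investment principles."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption
        messages=[
            {"role": "system",
             "content": "You select stocks following the stated principles. "
                        "Reply with a JSON array of ticker symbols only."},
            {"role": "user",
             "content": f"Pick {n} stocks applying these principles: {principles}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

def place_order(ticker: str, quantity: int) -> None:
    """Hypothetical stand-in for a real trading-system call."""
    print(f"Order submitted: BUY {quantity} x {ticker}")

# Note the generality of the prompt: the humans involved do not program the
# picks line by line; the model decides which securities to select.
for ticker in pick_stocks("long-term value investing, diversified sectors"):
    place_order(ticker, quantity=10)
```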

To assess whether such a tool is providing advice, the courts would likely need to examine how the AI tool was set up and what general principles it was supposed to follow. The court would also likely need to look at the prompts used by the humans involved, ie the instructions given to the AI, to test whether or not the intention was for the tool to provide "advice". This is likely to represent a whole new area where disclosure and expert evidence will be needed.
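
If prompts become disclosable in this way, firms may need to retain them systematically. Purely as a sketch (the record schema and file name are assumptions), an audit trail could be as simple as appending each prompt and its response to a timestamped log that can later be reviewed or disclosed:

```python
# Sketch of a prompt audit trail: every instruction given to the AI tool is
# recorded alongside its output, preserving evidence of the humans' intent.
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_prompt_audit.jsonl"  # file name is an assumption

def log_interaction(user_id: str, prompt: str, response: str) -> None:
    """Append one prompt/response pair as a JSON line with a UTC timestamp."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt": prompt,
        "response": response,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_interaction(
    user_id="banker-042",
    prompt="Pick 5 stocks applying long-term value investing principles.",
    response='["AAA", "BBB", "CCC", "DDD", "EEE"]',
)
```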

Misrepresentation and implied representation

While liability for "advice" could be excluded contractually by the bank, there is also the question whether there could be a misrepresentation to the customer where the bank does not alert them to the fact that an AI tool has, independently, selected investments for them. 

Conceivably, a customer could argue that a misrepresentation occurred where they were under the impression that human bankers would conduct the customer's business, but in fact this was delegated to an AI tool with no or negligible human input. The novel point here is that generative AI is capable of taking investment decisions independently, in place of the human banker, unlike previous technology, which relied on being pre-programmed to perform specific tasks within pre-set parameters. 

Where banks are using AI tools to select investments, the customer could also argue that there was an implied representation that human bankers would check everything that was done by an AI tool. Generally, it is difficult to show that silence can found a claim in misrepresentation (see Raiffeisen Zentralbank Osterreich AG v Royal Bank of Scotland plc6). But could this change?

The test cited in this case was to ask "whether a reasonable representee would naturally assume that the true state of facts did not exist and that, had it existed, he would in all the circumstances necessarily have been informed of it". Arguably, it could be said that a customer in the current circumstances would naturally assume that they would be informed if an AI tool took over the work of a human banker.

It is also worth noting that there is currently no specific labelling requirement for AI tools that would oblige a financial institution to highlight to its customers that such tools are being used. The question will be whether a bank has made an implied representation that humans are involved in the investment services provided to the customer, even where it is using generative AI which can act independently. 

The future: flipping the arguments on their head

Taking things further, the argument could also be flipped on its head. If AI develops to become highly sophisticated in this area, it may become the market standard for these tools to be used at least to check the investment selections made by humans, as the AI tool may be less prone to overlooking relevant information or taking an unwise decision. If that becomes the state of affairs, not using an AI tool could itself become a cause for complaint by the customer; there may even be a misrepresentation as to the service the customer is receiving if they are served solely by a human banker without that being made explicit.

Conclusion 

We will need to wait and see exactly how the technology is adopted in the financial services sector before we can assess how much of a legal shift will follow. English law has proven flexible when confronted with other new concepts, such as cryptocurrency, and is likely to prove so again here.

However, the shift that generative AI represents is a much more fundamental one, because machines are becoming capable of taking over complex investment tasks traditionally carried out by humans. This has never happened before. Lawyers would be well advised to stay on top of these developments so that they understand the implications for mis-selling cases, which could change considerably in the future.

1 "Uh-oh: Oxford study shows AI really can pick funds better than humans"; paper available at papers.ssrn.com 

2 "An investment fund created by ChatGPT is smashing the UK's top 10 most popular funds"

3 "Will ChatGPT soon replace my private banker?"

4 "JP Morgan files patent for ChatGPT"

5 "JPMorgan develops AI investment advisor"

6 [2010] EWHC 1392 (Comm), para 84