New developments in AI may put law firms at greater risk of phishing fraud

28 April 2023. Published by Will Sefton, Partner, Tom Morris, Associate, and Tamsin Hyland, Partner

As the capabilities of Artificial Intelligence continue to grow at pace, we consider how generative technology may expand the reach of traditional phishing frauds aimed at law firms.

Friday Afternoon Fraud

Law firms are pressurised environments, and routinely manage urgent transactions involving large sums of money.  They therefore remain a prime target for phishing frauds aimed at diverting client funds into a fraudster's account.  In a typical so-called "Friday Afternoon" fraud, a conveyancing firm holding completion monies receives an email, supposedly from a client, requesting that the monies be paid into a different account.

Fraudsters have a variety of tricks to make such requests appear legitimate, and the Law Society has issued guidance on warning signs which should now be familiar to readers.  These include suggestions of urgency, requests to open attachments or click on links, and emails from unfamiliar addresses.  Fraudsters may use domain names which appear legitimate at first glance, for example replacing the letter "o" with a zero.  Alternatively, a genuine email account may be hacked.  This not only enables the fraudster to communicate using the correct account details, but also allows them to refer to previous confidential discussions to make the communications more convincing.
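To illustrate the kind of domain substitution described above, the short Python sketch below flags sender domains that only match a known client domain once common character swaps (such as a zero standing in for the letter "o") are reversed.  It is illustrative only: the domain names and the list of substitutions are our own assumptions rather than part of any Law Society guidance, and a real mail filter would need a far broader set of checks.

    # Illustrative sketch only: flag sender domains that resemble a known
    # client domain once common character substitutions are reversed.
    # The domains below are hypothetical examples.

    KNOWN_DOMAINS = {"smithandco.co.uk"}

    # A small, assumed set of visual substitutions seen in spoofed domains
    HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

    def is_suspicious(sender: str) -> bool:
        """True if the sender's domain is not known but normalises to a known one."""
        domain = sender.rsplit("@", 1)[-1].lower()
        if domain in KNOWN_DOMAINS:
            return False
        return domain.translate(HOMOGLYPHS) in KNOWN_DOMAINS

    # "smithandc0.co.uk" normalises to "smithandco.co.uk" and is flagged
    print(is_suspicious("client@smithandc0.co.uk"))  # True
    print(is_suspicious("client@smithandco.co.uk"))  # False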

In line with Law Society guidance, law firms are obliged to train their staff to identify and report suspicious emails.  Moreover, law firms are expected to have appropriate cybersecurity measures in place to guard against these risks.  A failure to follow this guidance may trigger a regulatory investigation and, in the event of a claim materialising, may be relied on by a Claimant as strong evidence of negligence.

However, just as the protocols for recognising and responding to phishing frauds are reaching maturity and becoming widely adopted, the landscape of risk may be set to change fundamentally.

The Risk Posed by AI

The rapid progress of technology, and in particular AI, has many promising applications with the potential to affect every aspect of our lives.  It also gives rise to new, and uncertain, risks.  Nonetheless, this trend is set to continue.

As OpenAI releases GPT-4, the successor to the model behind its much-hyped ChatGPT, reports are emerging of scammers using generative AI to clone voices to perpetrate frauds.

A quick internet search throws up a host of cheap, freely available voice emulators.  All that is required is a voice sample, for example taken from a video posted on social media, and the technology can replicate that person's voice.  

As with any generative AI, these tools have the potential for abuse.  Reported cases involve victims receiving a call explaining that a relative is in trouble and needs money.  The recipient then hears their relative speaking on the telephone, confirming they are in difficulty and urging that funds be transferred.

Such calls can be made from anywhere in the world.  Therefore, once payment has been made, the prospects of tracking down those responsible for the scam are limited.

Risks Facing Solicitors

It is easy to see how these technologies may be used to target law firms as part of increasingly sophisticated frauds.

Law firms will endeavour to move with the times, and we are now all aware of red flags such as emails coming from unexpected accounts and expressions of urgency.  However, with new voice-simulation technologies, fraudsters may be able to undermine these routine checks.

For example, a fraudulent email could be followed up by a telephone call, ostensibly from the client, giving an explanation for the change in payment details.  Alternatively, the fraudster might capture the voice of the supervising partner, for example from a publicly available podcast, and simulate it to confirm the payment should go ahead.  Lawyers are frequently reminded to verify a suspicious request for a transfer of funds with a telephone call, and this is often seen as a fail-safe check.  The potential for voice emulators to undermine this safeguard is clear.

Whilst the extent to which generative AI will be used to target a law firm's accounts remains to be seen, it is nonetheless worthwhile being prepared.

The Law Society's existing regulatory guidance does not presently recognise the potential for phishing frauds to be supported by generative AI.  As such, it may not be fit for purpose in light of these new and evolving risks.  As the guidance is revised to meet these developments, a balance will need to be struck between addressing specific risks as they emerge and putting in place flexible guidance which can respond to a variety of novel, and as yet unknown, risks.

In the meantime, law firms would be wise to enhance training to, as a minimum, flag the potential for generative AI to be used in the furtherance of fraud.  Payment verification protocols might also be bolstered to reduce the opportunity for fraudsters to use voice-simulation technologies to hijack a transaction.

As a final point, Insurers will pay close attention to how the situation develops.  Where a firm falls victim to a scam and pays monies to a fraudster, this not only constitutes a breach of the SRA Accounts Rules but is also likely to give rise to a claim.  Insurers would be well advised to assess at the underwriting stage what security protocols a law firm has in place, and whether these reflect current best practice.

The magnitude of future risk posed by rapidly developing technologies is difficult to predict.  The Minimum Terms and Conditions do not permit Solicitors' PI insurers to exclude these new and evolving risks. Accordingly, incentivising a robust yet flexible approach to risk prevention will become ever more important.