What To Know About AI Fraudsters Before Facing Disputes
Fraudsters are quick to weaponise new technological developments, and artificial intelligence is proving no exception: AI-assisted scams are increasingly being reported in the news, including, most recently, one using the likeness of a BBC broadcaster.
However, the potential of this technology to augment fraudsters' efforts is arguably unprecedented.
When attempting a fraud, the perpetrators face two critical limitations: the time they have available to devote to defrauding a particular target, and the effectiveness of their scams.
AI is being used by fraudsters to assist them on both fronts, and as such frauds proliferate, lawyers are facing new challenges.
Lawyers will swiftly need to become familiar with the fundamentals of AI to deal with it in the context of disputes, and may need to view documentary evidence through a more sceptical lens as AI makes authenticity harder to establish.
A common theme to many of the scams AI has the power to augment is that the true identity of the fraudster is unknown to the victim.
English law is well placed to assist potential claimants in those situations — via the persons unknown regime developed in cyber fraud and cryptocurrency cases — and the innovative and flexible way the law has been applied in such cases provides a template for dealing with AI disputes.
Fraudsters face a choice in how they allocate their time.
Essentially it is a trade-off between maximising the volume of people they target and concentrating their time on a smaller number of potentially more valuable targets. An example of a high-volume scam might be a text message sent en masse containing a malicious web link.
A strategy focused on a more limited number of targets aims to offset the loss in volume by increasing either the probability of snaring a particular target or the average sum that can be extracted per target.
Such an approach is often referred to as pig butchering, reflecting the time investment required to fatten up a target, or pig, before scamming — i.e., butchering — them.
A typical example of this is a romance scam where fraudsters seek to establish a romantic relationship with a target before attempting to extract money from them, which can involve months of painstaking effort crafting thoughtful and attentive messages to victims.
Fraudsters have now begun utilising AI chatbots such as ChatGPT to automate these efforts and increase the volume of people they can maintain such conversations with.
Such efforts have not always been successful, as illustrated when a fraudster passed on the following ChatGPT-generated response to a potential victim's message: "Thank you very much for your kind words! [...] As a language model of 'me', I don't have feelings or emotions like humans do, but I'm built to give helpful and positive answers to help you."1
Notwithstanding the difficulties of fully automating the process, AI chatbots have the potential to save fraudsters time when executing traditionally time-intensive scams.
The other axis along which fraudsters can leverage AI is the sophistication of their scams.
One example is using AI to generate what are called deepfakes, where a person's video and/or voice likeness is simulated using an AI program trained on recordings of the relevant individual, usually from what is available online.
A recent reported example of this was where the likenesses of Elon Musk and BBC broadcaster Fiona Bruce were used to create a video advert that propagated on Facebook promoting an investment scam called Quantum AI.2
A less sophisticated version of this scam might have involved a fake news article or a single-image advert, but those formats are far less compelling than video, which people may be more naturally inclined to trust as truthful.
A more sinister example of deepfake technology being used was a recent case in the U.S. where a mother received a phone call and heard what she believed to be the voice of her 15-year-old daughter, who was at the time on a ski trip, telling her that she had been kidnapped. A fraudster then demanded a ransom.
Fortunately, the mother realised she was being scammed before paying the demanded ransom.3
Beyond impersonation, other possible applications of this technology by fraudsters include generating material for blackmail or reputation destruction, or fabricating evidence in legal proceedings.
The Nightmare Scenario
If the examples cited in this article are sobering, consider a scenario in which fraudsters are able to leverage AI fully in both of these dimensions simultaneously.
For example, imagine receiving personalised emails generated by an AI's consideration of your digital footprint or an automated version of the voice facsimile scam. These scenarios or their equivalent may well manifest in the not-too-distant future.
Relatedly, although AI chatbots like ChatGPT contain ethical guardrails that restrict them from answering certain questions, such as requests for advice on how to murder somebody, one can foresee alternative versions becoming available in the future that do not have such limits.
Imagine for example an AI chatbot, possibly trained using material on the dark web, that will generate custom malware on request or educate fraudsters on how to improve their scams.
Even now, some of the current safeguards can simply be side-stepped using so-called jailbreak prompts, which might, for example, ask the AI chatbot to respond in the persona of a specified unethical character.
Outlook for the Legal Sector
Although the challenges of AI-enabled fraud are significant, the English legal system is well equipped to assist victims of such fraud, which typically involves fraudsters whose true identity is not known to the victim.
In this regard, the English courts permit claimants to bring legal proceedings against persons unknown, notwithstanding the anonymity of the defendant(s), and to seek interim relief such as freezing orders. This regime has been widely used in cyber-fraud and cryptocurrency litigation.4
A new jurisdictional gateway — Gateway 25 — has also recently been added to Practice Direction 6B of the Civil Procedure Rules, largely as a result of cryptocurrency litigation, to make it easier for claimants to seek disclosure orders against third parties outside the English jurisdiction to assist them in identifying such anonymous fraudsters.
More broadly, the English legal system has an excellent track record of successfully adapting to deal with issues arising from new technology.
For example, the English courts have held — on an interim basis — that cryptocurrencies, which by their nature are digital and decentralised, are property and can therefore be the subject of a proprietary injunction;5 they have also applied traditional English jurisdictional rules to determine where a cryptocurrency is located for the purpose of establishing jurisdiction.6
In dealing with crypto cases, the English courts have routinely been assisted by appropriate subject matter experts — in that context, blockchain tracing experts. The AI equivalent may be experts who can assess the authenticity of AI-generated media.
This flexibility, combined with the effective use of subject matter expertise, is cause for optimism that the English legal system will be able to adapt to and address the novel legal situations that will emerge from claims involving AI technology.
This article was originally published by Law360.
1 Pig butchers caught using ChatGPT to con victims (Computer Weekly)
2 BBC presenter Fiona Bruce used in latest AI deepfake scam (Telegraph)
3 US mother gets call from ‘kidnapped daughter’ – but it’s really an AI scam (The Guardian)
4 See, for example, the landmark case of CMOC v Persons Unknown [2018] EWHC 2230 (Comm).
5 AA v Persons Unknown & Ors, Re Bitcoin [2019] EWHC 3556 (Comm).
6 Ion Science Limited & Anor v Persons Unknown & Ors (unreported), 2020.