
Source@RPC - December 2023

Published on 12 December 2023

The aim of Source@RPC is to provide lawyers, procurement professionals and CIOs/CTOs (amongst others) with a regular update on the legal implications and risks (and how best to manage them) of sourcing and utilising technology and outsourced technology-enabled services, as they affect businesses operating in the insurance and financial services sector.

Welcome to the second edition of Source@RPC.

It is perhaps not surprising that developments around AI and data security remain prominent in this edition. Not only have we recently seen the UK host the first AI Safety Summit and the publication of feedback on the consultation conducted by the financial services regulators, but also the introduction of a Private Members' Bill on the regulation of AI, recently put forward in the House of Lords. The debate on whether (and, if so, how) to regulate AI will no doubt continue for some time to come.

Added to that, we also have sight of the Government's recent report on the anticipated impact of AI on UK jobs and training, which makes for somewhat sobering reading.

AI

Fraudsters Exploiting Generative Artificial Intelligence

In the first six months of 2023, UK financial services firms reported 640 cybersecurity breaches to the Information Commissioner's Office (ICO). That figure is more than three times the 187 breaches reported in the previous period (an increase of roughly 240%) and highlights the importance of organisations having in place both adequate protective measures and related cyber insurance.

An example of the significant financial ramifications that can flow from failing to prepare adequately in this space is the Financial Conduct Authority (FCA)'s recent fining of Equifax Ltd (details here). In 2017, Equifax Ltd's parent company, Equifax Inc, suffered one of the largest cybersecurity breaches in history. Hackers were able to access the personal data of approximately 13.8 million UK consumers, whose data Equifax Ltd had been outsourcing to Equifax Inc's servers in the US for processing. Following a lengthy investigation, last month the FCA fined Equifax Ltd over £11 million for failing to manage and monitor the security of UK consumer data it had outsourced to its US parent company.

A wealth of reporting has emerged that generative AI is increasingly being used to support phishing scams and other cybersecurity attacks, for instance by using publicly available recordings to mimic a person's voice. Despite this, current guidance from industry regulators (see, for example, the Law Society) often does not yet recognise the potential for the technology to assist in attacks in this way, highlighting a disparity between the regulatory position and the reality of the fast-paced AI movement.

Key points to note:

  • Cybersecurity breaches of financial services companies have increased by over 200% in 2022/23 (in the pensions sector, the increase was 4000%).
  • Firms should exercise their own judgment regarding any necessary implementations or enhancements to training and protections which could, as a minimum, help to flag the potential for generative AI to be used in the furtherance of fraud. For example, payment verification protocols might be bolstered to reduce the opportunity for fraudsters using voice-simulating technologies to hijack a transaction (see the illustrative sketch after this list).
  
  • For insurers underwriting professional risks, a detailed assessment of a firm's security protocols as against current best practice at the underwriting stage is increasingly important.
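By way of illustration only, the sketch below shows one shape such a bolstered payment verification protocol might take: an instruction received by phone (a channel that voice cloning can defeat) is released only after a one-time code, delivered over a separate, pre-registered channel, is confirmed. The function names, channels and amounts are hypothetical assumptions for the sketch, not a recommendation of any particular implementation.

```python
# Minimal, hypothetical sketch of an out-of-band verification step: a payment
# instruction received over a voice channel is released only once a one-time
# code, sent via a separately registered channel, is confirmed.
import hmac
import secrets

def issue_challenge() -> str:
    """Generate a six-digit one-time code to send over a pre-registered
    channel (e.g. an authenticator app, not the voice call itself)."""
    return f"{secrets.randbelow(10**6):06d}"

def codes_match(expected: str, supplied: str) -> bool:
    """Constant-time comparison so the check itself leaks nothing."""
    return hmac.compare_digest(expected, supplied)

def release_payment(amount: float, expected: str, supplied: str) -> bool:
    """Release the (hypothetical) payment only if the out-of-band code matches."""
    if not codes_match(expected, supplied):
        print(f"Payment of £{amount:,.2f} blocked: out-of-band check failed.")
        return False
    print(f"Payment of £{amount:,.2f} released.")
    return True

if __name__ == "__main__":
    code = issue_challenge()                    # delivered out of band
    release_payment(25_000.00, code, code)      # customer confirms: released
    release_payment(25_000.00, code, "xxxxxx")  # attacker cannot supply it: blocked
```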

To find out more about how generative AI is enabling phishing fraud, see here.

To find out more about what you need to know about AI fraud before facing disputes, see here.

To find out more about recent trends in cybersecurity breaches at financial services firms, click here.

The UK's AI Safety Summit and other AI developments from around the world

On 1 and 2 November 2023, representatives from 28 countries, tech companies, academia, and civil society leaders gathered for the first major global Artificial Intelligence Safety Summit at Bletchley Park.

The key event was the signing of the Bletchley Declaration on AI Safety (the Declaration), described as fulfilling "key summit objectives in establishing shared agreement and responsibility on the risks, opportunities and a forward process for international collaboration on frontier AI safety and research".

This development signals a shared commitment to navigating the opportunities and risks in frontier AI. Amongst other things, the Declaration addresses risks such as intentional misuse and control issues related to the application of novel AI solutions. The Declaration acknowledges the potential harms linked to advanced AI models and urges joint scientific research efforts on AI safety. Significantly, in response to these concerns, leading AI companies (including OpenAI, Google DeepMind, Anthropic, Microsoft and Meta) agreed to allow governments to test their latest models before releasing them to the wider public.

While the AI Safety Summit was mostly attended by industry players from the tech sector, the knock-on effects of the meeting will certainly have implications for stakeholders across other industries, including insurance and financial services. For instance, firms that use generative AI (or related technologies) will need to put appropriate controls and robust processes in place to account for the potential risks flowing from the implementation of such solutions. A follow-up meeting is anticipated to take place in 2024 to further the ongoing dialogue on the safety of AI.

Whilst these events are indicative of a seemingly unified international approach amongst industry leaders, we have seen a divergence in approach to industry regulation. The UK government has confirmed, in its response to a report from the Science, Innovation and Technology Committee, that it does not propose to regulate AI in the short term (more detail on its approach is expected in the forthcoming response to the AI Regulation White Paper, published in March 2023). By contrast, the EU's AI Act has moved into a final period of intense negotiation between the EU Commission, Council and Parliament, with the next key trilogue meeting taking place on 6 December. EU stakeholders are mindful of the need to finalise the text in the first few weeks of 2024 to avoid the legislation being derailed by the European elections.

Key points to note:

  • Country leaders and tech representatives convened at Bletchley Park for the inaugural global AI Safety Summit, culminating in the signing of the Bletchley Declaration on AI Safety, reflecting a shared commitment to navigating opportunities and risks in frontier AI.
  • A number of leading tech companies have agreed to government testing of advanced AI models before their public release. While primarily attended by the tech sector, the summit's impact extends to other industries as well, emphasising the need for robust controls and processes in companies using generative AI.
  • Whilst the AI Safety Summit marks a significant show of international unity on AI, the UK government has since publicly confirmed that it does not intend to regulate AI in the short term, while EU counterparts move ahead with finalising the EU AI Act, the first milestone regulation of AI.

To find out more about the AI Safety Summit, see here and here.

Tribunal Reverses ICO Fine in Clearview AI Case

Clearview AI uses web crawlers to scrape images of human faces from the internet, storing them in a database for its facial recognition software.
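For readers less familiar with the mechanics, the sketch below illustrates the generic crawler-and-database pattern described above: fetch a page, extract image references and record them for later indexing. It is a simplified illustration only, not a description of Clearview's actual system; all names in it are hypothetical, and scraping personal data in this way is, of course, precisely the activity under regulatory scrutiny here.

```python
# Purely illustrative sketch of a crawler-and-database pipeline: fetch one
# page, extract <img> references and record them in a local SQLite table
# for later indexing. All names and URLs are hypothetical.
import sqlite3
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl_images(page_url: str, db_path: str = "images.db") -> int:
    """Fetch one page and store the URLs of any images it references."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS images (url TEXT PRIMARY KEY, source_page TEXT)"
    )
    response = requests.get(page_url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    count = 0
    for img in soup.find_all("img", src=True):
        # Resolve relative links against the page URL before storing.
        conn.execute(
            "INSERT OR IGNORE INTO images (url, source_page) VALUES (?, ?)",
            (urljoin(page_url, img["src"]), page_url),
        )
        count += 1
    conn.commit()
    conn.close()
    return count

if __name__ == "__main__":
    # Hypothetical usage against a placeholder domain.
    print(crawl_images("https://example.com"))
```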

Although Clearview AI is based in Delaware and does not have a presence in the UK or EU, the ICO argued that the database likely contains images of UK residents, leading it to issue an Enforcement Notice and a Monetary Penalty Notice (a fine of over £7.5 million) in 2022, alleging misuse of biometric data.

The First-tier Tribunal's decision hinged on the material scope of the UK GDPR. It determined that, since Clearview's clients were exclusively foreign criminal law enforcement bodies, the acts of those governments fell outside the purview of the UK GDPR. Consequently, the Tribunal concluded that the ICO lacked jurisdiction to issue the fine.

However, the Tribunal did provide commentary on the territorial scope outlined in Article 3 of the UK GDPR and on Clearview AI's data processing activities, which related to database creation and user image matching. The Tribunal agreed with the ICO's submission that the processing activities were linked to the monitoring of data subjects' behaviour in the UK by law enforcement clients, meaning that Clearview's database fell within the territorial scope of Article 3 of the UK GDPR.

Key points to note:

  • This decision is particularly useful for overseas service providers when considering the material and territorial scope of the UK GDPR, including the boundaries of the "monitoring behaviour" provision (Art. 3(2)(b) of UK GDPR).
  • The ICO has sought permission to appeal the Tribunal's decision, so this will be one to watch for private sector entities operating in this space.

Read the First-tier Tribunal's decision in full here. Details of the ICO's request for permission to appeal can be found here.

AI - Regulation

Industry response to consultation paper on AI regulation in UK financial services

On 26 October 2023, the Bank of England, the Prudential Regulation Authority (PRA) and the FCA (together, the Supervisory Authorities) published feedback statement FS2/23, summarising responses to their discussion paper on the use of AI and machine learning in UK financial services.

The discussion paper invited comments on the use of AI and machine learning, and sought responses from a range of stakeholders on:

  • how to define and scope AI for the purposes of legal requirements and guidance;
  • identifying areas of potential harms which the Supervisory Authorities should prioritise and action; and
  • exploring whether current legislation/guidance is sufficient to address the risks and harms associated with AI and how additional intervention may support the safe adoption of AI in UK financial services.

Most respondents considered that a sectoral definition of AI would not be helpful for regulatory purposes - the pace of technological development could quickly make such a definition redundant, or the definition might not capture all relevant use cases. Some respondents additionally made the point that a sectoral definition might not be aligned with legislative meanings used in other jurisdictions, detracting from the enforcement of any UK AI rulings both abroad and at home.

Respondents highlighted the wide range of benefits of the use of AI but did note significant risks.  These included (i) risks relating to AI's impact on market integrity/financial stability; (ii) insufficient staff skills and experience to ensure adequate AI risk management; and (iii) the development of new AI techniques by bad actors to exploit existing cyber security systems or to commit fraud. Interestingly, one of the most frequently cited risks was the potential for discrimination against consumers with protected characteristics or vulnerabilities, and a majority of respondents mentioned consumer protection as a key area for the Supervisory Authorities to prioritise.

In relation to existing regulation, respondents noted that it would be particularly helpful to have clarification on what bias and fairness mean in the context of AI models and, more specifically, how firms should interpret the Equality Act 2010 and the FCA Consumer Duty in this context. A number of respondents also emphasised the importance of the existing regulatory framework relating to operational resilience and outsourcing as a tool to address risks posed by third party providers of AI.

The majority of respondents did not consider that the Supervisory Authorities' suggestion of creating a new "Prescribed Responsibility" for AI, to be allocated to a Senior Management Function (SMF), would assist with strengthening effective AI governance. Indeed, several respondents argued that AI could be used by a firm in too many ways for the creation of a new AI-related Prescribed Responsibility to be practical. Their view was that the relevant responsibilities were already adequately reflected in existing Prescribed Responsibilities and SMFs' "statements of responsibilities".

Key points to note:

  • Based on the respondents' feedback, the Supervisory Authorities may be reluctant to base future regulation on any sector-based definition of AI, preferring instead a technology-neutral, outcomes- and principles-based approach.
  • The Supervisory Authorities are likely to look to existing regulation to address AI-related risks and ensure that any new AI regulation is consistent with current regulatory frameworks. The importance of international regulatory harmonisation will also be an essential matter for consideration.

  • A key focus of any current and future regulation and supervision will likely be on consumer outcomes, particularly in respect of the fair treatment of end customers.

Cloud

The CMA's recent investigation into cloud services

Last month Ofcom published its final report on its study into cloud infrastructure services in the UK. Its report, which may not surprise many, found that two leading providers of cloud infrastructure services (Microsoft and Amazon Web Services) held a combined market share of between 70 and 80% in 2022. Google's share ranked third (between 5 and 10%). Ofcom has expressed its concern about the practices of these market leaders and, consequently, has referred the UK cloud infrastructure services market to the Competition and Markets Authority (CMA) for further investigation.

The CMA has now published an issues statement as part of its own first-stage investigation of competition in the UK cloud infrastructure market. The statement sets out an initial framework for the investigation, including:

  • Technical barriers to switching and multi-cloud – whether technical barriers are making it more difficult for customers to switch between cloud providers and benefit from multi-cloud usage.
  • Exit fees – whether exit fees are a barrier to switching and multi-cloud usage and whether they lead to unpredictable costs for customers.
  • Discounts – whether discounts set by existing cloud providers are raising barriers to entry and expansion for smaller providers by encouraging customers to engage with a single provider.
  • Software licensing practices – whether such practices are discouraging customers from using rival cloud providers and whether they are raising barriers to entry.

Key Points to Note:

  • The CMA's investigation is in its early stages, and it is worth noting that the issues statement does not represent the CMA’s emerging/provisional views, findings or conclusions on related competition concerns or remedies.
  • According to its timetable, the CMA will send information requests, conduct site visits, review responses and hold hearings between October 2023 and May 2024. The final report on the market investigation is expected by the statutory deadline of 4 April 2025.
  • The launching of this investigation may cause particular consternation for the companies involved given the proposed increase in the CMA's powers (through its Digital Markets Unit) introduced by the Digital Markets, Competition and Consumers Bill (see our Snapshot analysis here).

You can read more in the CMA's press release here.

Data

New Data Bridge to Allow For UK-US Data Transfers

On 21 September 2023, the UK government published the Data Protection (Adequacy) (United States of America) Regulations 2023 (the Regulations) to provide for a UK Extension to the Data Privacy Framework (DPF). The new extension mechanism is known as the Data Bridge.

The effect of the new Regulations is that the US is an adequate country for the purposes of data transfers from the UK provided that: (i) the transfers are to a US business certified under the DPF and Data Bridge; and (ii) the recipient business complies with the obligations set out in the DPF. Further, the US Attorney General has also designated the UK as a 'qualifying state' under an order which implements arrangements complementary to the DPF. In particular, these arrangements allow all UK individuals to enforce their rights in the US's newly established Data Protection Review Court.

Key Points to Note:

  • The Data Bridge is beneficial as it enables UK businesses to transfer data to the US without the need to agree Standard Contractual Clauses (SCCs) or conduct transfer risk assessments (TRAs).
  • However, only US entities under the jurisdiction of the Federal Trade Commission or the Department of Transportation are eligible for DPF self-certification. This excludes insurance, banking and telecommunications organisations. 
  • Further, it is likely that the Data Privacy Framework will be subjected to legal challenge (as previous arrangements have been).  On that basis, businesses may want to consider adopting a particularly robust approach to important contracts, and provide that SCCs and TRAs will be agreed and implemented should the Data Privacy Framework be invalidated.
  • Businesses seeking to benefit from the Data Bridge should also consider whether their existing privacy policies and contracts are adequate with respect to the use of the new regime.

You can read more in our coverage of this update here.

Regulation

Contract Changes for unregulated Buy-Now-Pay-Later (BNPL) agreements

The FCA has demonstrated that the BNPL sector remains firmly in its sights following its recent announcement that it has secured further changes to potentially unfair and unclear contract terms, this time focusing on PayPal and TV shopping channel QVC.

Terms dealing with matters such as cancellations and continuous payment authorities were scrutinised and flagged as posing a potential risk of harm to consumers because of the way they were drafted in certain unregulated BNPL agreements. As a result of the FCA's activities, both firms voluntarily changed their continuous payment authority terms to make them easier to understand, with PayPal also making its terms on what happens when a consumer cancels the purchase funded by the loan clearer and fairer. This follows the FCA's earlier activity in February 2022, when it secured similar contract term changes from Klarna, Clearpay, Openpay and Laybuy.

The Treasury's consultation on the regulation of BNPL agreements, published earlier this year, was accompanied by draft legislation that would bring certain currently unregulated BNPL agreements within the FCA's authorisation regime for financial services. Under the proposals, firms lending under in-scope agreements will require authorisation and will be subject to various statutory and regulatory obligations, including requirements on the form and content of credit agreements and the provision of pre- and post-contractual information.

Key points to note:

  • For now, at least, certain BNPL agreements remain unregulated and, for this reason, the FCA is making use of the Consumer Rights Act 2015 to assess whether or not such contract terms are fair and transparent.
  • By using consumer law effectively to extend its powers into unregulated territory, the FCA has issued a timely reminder to all firms to ensure that their consumer contracts comply with the spirit and the letter of consumer protection legislation requirements that apply to their business, regulated or otherwise.

  • Companies that outsource their payment processing to third-party BNPL entities may feel the knock-on effects of increased regulatory scrutiny in this sector and should consider how they will satisfy the incoming requirements to ensure that promotions of in-scope agreements are approved by a relevant authorised person.

To find out more about the FCA's latest announcement, see here.

The Economic Crime and Corporate Transparency Bill

The Economic Crime and Corporate Transparency Bill aims to streamline the current corporate criminal liability regime, introducing a significant new offence of "failure to prevent" fraud and restating the test for determining criminal liability for economic crimes (including theft, fraud, bribery and tax offences). By imposing identity verification obligations on directors, "People with Significant Control" (PSCs) and individuals providing information to Companies House, and by broadening reporting obligations, the Bill will enhance Companies House's authority as an "active gatekeeper" against economic crime. The Bill underlines the UK Government's focus on tackling economic crime, especially given that UK residents lost £1.2 billion to fraud in 2022.

One of the biggest changes to the corporate criminal liability regime is the introduction of the "failure to prevent" offence. The new offence will expose any large commercial organisation in the UK to an unlimited fine if one of its associated persons commits a fraud offence intending to benefit the organisation or themselves. Such fraud offences could extend as far as greenwashing, misstatements in key financial documents and misleading sales practices.

Key points to note:

  • The Bill changes the test for determining corporate criminal liability for economic crimes. Acts of senior managers will now be attributed to the company, reforming the much-criticised common law "identification doctrine". Currently, owing to the doctrine's narrow application, only the actions of some of the most senior individuals in an organisation can be attributed to the company.
  • Companies House will be given enhanced powers under the Bill, including investigative and enforcement powers allowing it to assume an "active gatekeeper" role in mitigating economic crime. Moreover, the Bill will impose identity verification obligations on directors, PSCs and individuals providing information to Companies House, whilst new reporting requirements for shareholders will ensure that further, more detailed information is held by Companies House.
  • To address the risks of fraud and economic crime, companies should review their compliance procedures to ensure they will comply with the Bill, and should ensure that their Companies House records are up to date and in line with the new reporting requirements.

You can read more in our Autumn Retail Compass coverage of this update.