Source@RPC - February 2024

Published on 19 February 2024

The aim of Source@RPC is to provide lawyers, procurement professionals and CIOs/CTOs (amongst others) with a regular update on the legal implications and risks (and how best to manage them) of sourcing and utilising technology and outsourced technology-enabled services, as they affect businesses operating in the insurance and financial services sector.

Welcome to the third edition of Source@RPC.

In keeping with the previous editions, AI remains the "hot topic". On that note, it is interesting to see that EU legislation governing AI is now quite close to being finalised.

What is also clear is that the pace of change in the use of technology in the financial services arena, and particularly in the insurance world, is not slowing. Indeed, if recent surveys are accurate, it might even be increasing. Inevitably, that focuses the mind on ensuring that thorough procurement processes are run and that appropriate contractual protections are put in place with chosen suppliers (as, sadly, things don't always go to plan). Also, as the sector becomes more and more dependent on technology, it is not surprising that this space continues to attract the interest of regulators.

As always, do forward this edition on to those of your colleagues who may be interested in any of its contents and encourage them to subscribe to future editions.

Regulators consult on proposed requirements and expectations for UK financial services sector's critical third parties

In light of concerns about the operational resilience of the financial services sector, the Financial Conduct Authority ("FCA"), the Prudential Regulation Authority ("PRA") and the Bank of England ("BoE") (collectively, the "Regulators") have set out proposed requirements that critical third parties ("CTPs") must meet. The requirements are intended to manage the risks to stability of, and confidence in, the UK financial system that could arise from a disruption to the services a CTP provides to financial service providers and/or financial market infrastructure entities.

UK financial services firms are becoming increasingly reliant on the services of third-party providers. A CTP is a third-party service provider designated as such by His Majesty's Treasury ("HMT"). Before making a designation, HMT must be satisfied that a failure in the provision of the provider's services would have a significant and detrimental impact on the stability of, or confidence in, the UK's financial system. Though HMT has yet to designate any specific entities as CTPs, it is likely that critical IT (including cloud) providers will be designated, especially where their service provision is particularly concentrated and there are few alternative providers.

The proposed requirements are designed to introduce a framework that would effectively manage the risk of a service failure by holding CTPs to a particularly high standard. The framework would also introduce various reporting and testing requirements intended to ensure the operational resilience of CTPs.

These requirements are designed to be interoperable with existing frameworks, notably the CTP regime under the EU's Digital Operational Resilience Act (DORA).

The Regulators have outlined a set of six fundamental rules with which a CTP would be required to comply. These include requirements to conduct business with integrity and with skill, care and diligence, and to have effective risk management systems in place. In addition to these rules, the Regulators also propose introducing eight operational risk and resilience requirements for CTPs. These are set out in Chapter 4 of the Critical Third Parties sourcebook in the FCA Handbook and in the "Critical Third Party Operational Risk and Resilience Requirements" in Chapter 4 of the draft Critical Third Parties sections of the PRA and Bank rulebooks respectively. The rules and requirements would apply from the point at which a CTP is designated as such by HMT.

For more information on the proposals and how to respond to the consultation, please click here (note that responses are required by 15 March 2024).

99% of global insurance organisations set to change core tech systems

In a recent survey (conducted by Novidea) of over 300 CEOs and other C-level employees across insurance brokers, Lloyd's Managing Agents and other insurance organisations, an astonishing 99% of entities said they were planning to change their core technology systems.

Over 40% said that this change would come in the next 12 months, which is perhaps unsurprising given Lloyd's Blueprint Two strategy, due to be implemented later this year, which it is claimed will "deliver profound change in the Lloyd’s market through digitalisation".

These technological upgrades appear to be long overdue, with the survey also finding that over three quarters of enterprise insurance organisations (with over 5,000 employees) currently have six or more insurance technologies in place. As well as the obvious challenges that come with managing so many technologies simultaneously (and in some cases supporting ageing systems), the key areas of concern with the current technological systems for the surveyed employees were data quality (41%), data privacy and security (35%) and scale (35%).

It will be interesting to see whether the new technological systems set to be adopted across the industry allay those concerns, how such technologies respond to the upcoming implementation of Blueprint Two and, indeed, how many of the organisations concerned manage to meet their expected rate of technology change.

EU court rules that credit scoring is automated decision-making under GDPR

In a ruling that is likely to have a significant impact on credit scoring agencies (and those relying on credit scoring) across the EU and UK, the Court of Justice of the European Union (CJEU) has determined (in the case of OQ v Land Hessen) that the automatic production of a credit score is, for the purposes of the GDPR, automated decision-making.

That is significant as, under Article 22 of the GDPR, decisions based solely on the automated processing of personal data are, except in a few limited circumstances, prohibited where the outcome of the decision-making process (in respect of a person) "produces legal effects concerning him or her or similarly significantly affects him or her".

What's interesting about this ruling is that the credit scoring agency wasn't the party making the final decision as to whether the complainant in the case received a loan that they had applied for. Rather, the credit scoring agency used an automated process to determine the consumer's credit score, before passing this score on to a bank which would then use this information to determine whether a loan was granted.

Despite the credit scoring agency not being the actual decision-maker regarding the provision of the loan, the CJEU held that the automated credit scoring met the Article 22 threshold of producing legal or similarly significant effects, as in reality it was the outcome of that credit scoring that determined whether or not a loan would be granted.

For readers whose organisations are caught only by the UK GDPR, and not the (EU) GDPR discussed above, some small comfort may be taken from the fact that, post-Brexit, the CJEU's decisions are not legally binding on UK courts. However, the UK courts are likely to consider such rulings to be persuasive, and so this decision should not be ignored.

Credit scoring agencies, other businesses that provide automated decision-making services and those that rely on the outcome of such decisions (e.g. the bank in the case described above) should be mindful that the outcome of the decision-making process may well be caught by Article 22 of the GDPR or UK GDPR (as applicable), regardless of whether the result produced is the final outcome in the decision-making chain. If a process is caught by Article 22, consideration should be given to the rights afforded to data subjects under that Article, namely not to have the decision in question based solely on automated processing and to be able to contest decisions made. ICO guidance (available here) gives tips on ensuring that there is enough "human intervention" in the decision-making process to avoid it being considered solely automated, a point illustrated in the sketch below.
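To make the "human intervention" point concrete, the following sketch shows a hypothetical credit-decision pipeline in which the automated score only ever produces a recommendation, with the final decision taken by a human reviewer who can override it. All names, thresholds and logic are invented for illustration; whether any particular arrangement involves meaningful human involvement is a question to be assessed against the ICO guidance, not against this code.

    from dataclasses import dataclass

    # Hypothetical sketch: an automated credit score feeds a decision process
    # that includes meaningful human review, so the final decision is not
    # "based solely on automated processing" (Article 22). All names,
    # thresholds and logic here are illustrative assumptions.

    @dataclass
    class Application:
        applicant_id: str
        automated_score: float  # produced upstream by an automated scoring model

    def automated_recommendation(app: Application) -> str:
        # Purely automated step: recommends an outcome but does not decide it.
        return "approve" if app.automated_score >= 0.7 else "decline"

    def human_decision(app: Application, recommendation: str) -> str:
        # Placeholder for a reviewer with the authority and competence to
        # depart from the recommendation after considering the full file;
        # a rubber-stamp at this step would likely still be "solely automated".
        entered = input(
            f"Applicant {app.applicant_id}: score {app.automated_score:.2f}, "
            f"recommendation '{recommendation}'. Final decision (approve/decline): "
        )
        return entered.strip().lower()

    app = Application(applicant_id="A-123", automated_score=0.64)
    print("Final (human-made) decision:", human_decision(app, automated_recommendation(app)))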

Provisional deal struck on the contents of the EU's AI Act

The European Council and European Parliament have reached a provisional deal on the substance of the upcoming Artificial Intelligence Act (AI Act), on which we first reported in Edition 1 of Source@RPC. The deal has in recent days also received wider political approval from the Member States, but still needs to be formally adopted by the European Council and European Parliament. The proposed AI Act contains the following key aspects:

  • Certain applications of AI will be completely prohibited, such as the use of emotion recognition in the workplace and social scoring based on personal characteristics.
  • Whilst the use of biometric identification systems by law enforcement will largely be prohibited, narrow exceptions have been agreed (e.g. prevention of a specific and present terrorist threat), which will require prior judicial authorisation and will only be permitted for a strictly defined list of crimes.
  • Developers and users of "high-risk" AI systems (the definition of which includes, for example, systems having a significant potential to harm fundamental rights or the environment) will have to meet certain additional obligations, and members of the public will have additional rights in relation to receiving explanations of decisions made by AI systems.
  • General-purpose AI systems will be required to comply with transparency obligations, including compliance with EU copyright law.
  • So-called regulatory sandboxes, established by national authorities to allow AI systems to be developed and trained before they are placed on the market, will support smaller businesses in developing AI products.
  • Non-compliance with the AI Act could result in fines of up to €35m or 7% of global turnover (see the worked sketch after this list for how such a cap might operate).
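
By way of a rough worked example of the fining cap, the sketch below assumes (as under the GDPR's equivalent fining provisions) that the applicable maximum is the higher of the fixed amount and the turnover-based amount; the turnover figure used is invented for illustration.

    # Illustrative sketch of how a "EUR 35m or 7% of global turnover" cap
    # might operate, assuming (as under the GDPR) that the higher of the
    # two figures applies. The turnover below is an invented example.

    FIXED_CAP_EUR = 35_000_000
    TURNOVER_RATE = 0.07

    def maximum_fine(global_annual_turnover_eur: float) -> float:
        return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

    # A company with EUR 2bn global turnover: 7% (EUR 140m) exceeds EUR 35m.
    print(f"Maximum fine: EUR {maximum_fine(2_000_000_000):,.0f}")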

Whilst the AI Act will be of keen interest to those working in the development of AI programmes, users of AI software should also pay close attention to the development of the final form of the AI Act. This is particularly so for insurance and financial entities, which have been specifically named by the European Parliament as being required to complete a mandatory fundamental rights impact assessment (along with other requirements yet to be publicised) where AI systems used fall into the high-risk category.

Insurance and financial entities should also be mindful of the new rights for members of the public to receive explanations about decisions made using AI systems, which could be relevant to automated credit scoring and risk evaluation.

ICO publishes guidance on personal data transfers from the UK to the US

Transfers of personal data from the UK to recipients outside the UK are known as "restricted transfers" and require an appropriate transfer mechanism to be put in place. One such mechanism is set out in Article 46 of the UK GDPR, which requires "appropriate safeguards" (e.g. the Addendum to the EU SCCs) to be implemented where the country to which the data is being transferred is not already subject to an "adequacy decision".

Article 46 transfers to the US also require a Transfer Risk Assessment ("TRA") to be produced, which can often be a lengthy and complex process for organisations to undertake. In order to reduce this burden, the UK's Information Commissioner's Office ("ICO") has produced guidance to assist organisations when making restricted transfers to the US in reliance on the Article 46 transfer mechanism.

The ICO's guidance details when a TRA will be needed and what it must cover. Of most assistance, however, is the ICO's advice that, when making a restricted transfer to the US, organisations can rely on the analysis produced by the Department for Science, Innovation & Technology ("DSIT") in relation to the UK-US Data Bridge (see Edition 2 of Source@RPC for further information on the UK-US Data Bridge). Reliance on this analysis will streamline the TRA process for restricted transfers to the US, as the analysis is accepted as evidence that the laws and practices of the US provide adequate protection for the personal data of people in the UK. However, the ICO's guidance warns that those seeking to rely on the DSIT analysis must review any published updates to its contents and revisit their TRA to ensure that it remains up-to-date.
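At the risk of over-simplifying a legal analysis, the decision logic described above can be summarised in a short sketch. The function name, parameters and return strings are our own invention, and the sketch is no substitute for the ICO's guidance itself.

    # Simplified, illustrative decision helper for UK-to-US restricted
    # transfers, reflecting the ICO guidance as summarised above. The
    # categories and wording are our own simplification, not legal advice.

    def us_transfer_route(recipient_certified_under_data_bridge: bool,
                          using_article_46_safeguards: bool) -> str:
        if recipient_certified_under_data_bridge:
            # UK-US Data Bridge (adequacy-based route): no TRA is required.
            return "UK-US Data Bridge applies: no TRA needed."
        if using_article_46_safeguards:
            # Article 46 route (e.g. the Addendum to the EU SCCs): a TRA is
            # required, but it may rely on the published DSIT analysis,
            # provided updates to that analysis are kept under review.
            return "Article 46 transfer: TRA required; may rely on DSIT analysis."
        return "No valid transfer mechanism identified."

    print(us_transfer_route(recipient_certified_under_data_bridge=False,
                            using_article_46_safeguards=True))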

For further information regarding the ICO's guidance, see the January 2024 edition of RPC's Data Dispatch.

Financial entities have just under a year to ensure they are compliant with DORA

On 16 January 2024, the EU's Digital Operational Resilience Act ("DORA") marked one year since its entry into force, meaning that impacted financial entities now have less than a year to ensure that they comply with the Act (the last day for achieving compliance being 16 January 2025).

As a reminder, DORA is the EU's answer to growing concerns that the financial sector is not sufficiently resilient to potential vulnerabilities in its use of information and communication technologies ("ICT").

The three European Supervisory Authorities (EBA, EIOPA, and ESMA) marked DORA's first anniversary by publishing a set of joint final draft technical standards, including: (i) Regulatory Technical Standards ("RTS") on ICT risk management framework and on simplified ICT risk management framework; (ii) RTS on criteria for the classification of ICT-related incidents; (iii) RTS to specify the policy on ICT services supporting critical or important functions provided by ICT third-party service providers; and (iv) Implementing Technical Standards to establish templates for a register of information. This is the first of two batches of draft technical standards, with the second set due to arrive in June of this year.

These technical standards (which still need approval from the European Commission) specify the detailed requirements that must be met by the financial entities to which DORA applies, and are to be read in tandem with DORA itself (which imposes other requirements).

Compliance with DORA and the various technical standards will be assessed by relevant national competent authorities.

ICO publishes draft guidance on data protection fines

The ICO has published draft "Data Protection Fining Guidance" for public consultation. The guidance sets out: (i) the ICO's power to impose fines for breaches of UK Data Protection Legislation; (ii) the type of situations that may result in a fine; and (iii) the factors taken into account when the amount of a fine is calculated.

The guidance provides that the ICO may choose to impose a fine where a controller or processor has not complied with the provisions of the UK GDPR or the DPA 2018 relating to the principles of processing, the rights conferred on data subjects, the obligations placed on controllers and processors, or the principles for transfers of personal data outside the UK.

Additionally, the ICO is empowered to impose further fines where a controller has failed to comply with a requirement to pay a fine (or other charges) to the ICO.

The guidance sets out several factors that the ICO will take into account in determining whether to issue a fine, including:

  • the nature, gravity and duration of the infringement(s), the purpose of the processing, the number of data subjects affected by the infringement(s) and the level of damage suffered;
  • whether any infringement(s) were intentional or negligent; and
  • any action taken to mitigate the damage suffered by data subjects.

In determining the amount of any fine, the ICO will take into account the seriousness of the infringement(s), the worldwide annual turnover of the controller or processor (where the controller or processor is part of an “undertaking”) and any mitigating factors.
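
To illustrate how these elements might interact, the sketch below applies a deliberately simplified calculation: a statutory maximum (for the most serious UK GDPR infringements, the higher of £17.5m or 4% of worldwide annual turnover), scaled by seriousness and reduced for mitigation. The seriousness and mitigation figures are invented placeholders and do not reflect the ICO's actual methodology.

    # Deliberately simplified sketch of a UK GDPR fine calculation. The
    # statutory maximum for the most serious infringements is the higher of
    # GBP 17.5m or 4% of worldwide annual turnover; the seriousness and
    # mitigation adjustments are invented placeholders, not the ICO's
    # actual methodology.

    def statutory_maximum(worldwide_turnover_gbp: float) -> float:
        return max(17_500_000, 0.04 * worldwide_turnover_gbp)

    def indicative_fine(worldwide_turnover_gbp: float,
                        seriousness: float,          # assumed scale: 0.0 (low) to 1.0 (high)
                        mitigation_discount: float) -> float:  # e.g. 0.2 = 20% reduction
        base = statutory_maximum(worldwide_turnover_gbp) * seriousness
        return base * (1 - mitigation_discount)

    # Invented example: GBP 500m turnover, mid-level seriousness, 20% mitigation.
    print(f"Indicative fine: GBP {indicative_fine(500_000_000, 0.5, 0.2):,.0f}")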

For more information on the ICO Fining Guidance please click here.

Key points to note:

  • Where the ICO finds that the "same or linked processing operations" infringe more than one provision of the UK GDPR, the overall fine imposed will not exceed the maximum amount applicable to the most serious of the individual infringements.
  • The finalised guidance will provide controllers and processors with a means of broadly estimating the fines they might face where they suspect a breach may have occurred.
  • The draft guidance applies to all controllers and processors, but does not change public sector enforcement, nor is it applicable to fines under PECR 2003.

High Court confirms approach to construing exclusion of liability clauses

Pinewood Technologies Asia Pacific Ltd ("PTAP") v Pinewood Technologies PLC ("PT") [2023] EWHC 2506 (TCC) highlighted that, even where there is an imbalance in the parties' respective bargaining power, if the language of an exclusion clause is unambiguous and explicitly refers to loss of profits, the court will not strain that language to deprive the clause of effect.

To summarise the dispute, PTAP claimed that PT had breached its obligations under a reseller agreement to develop a management system for motor dealers for use in the specified territories, and sought damages for loss of profits and wasted expenditure totalling an estimated US$312.7m. PT denied the alleged breaches of the agreement and PTAP's claims for damages for lost profits and wasted expenditure, which it said fell within the types of loss excluded under the reseller agreements.

The exclusion clause that PT relied upon excluded:

"… liability for: (1) special, indirect or consequential loss; (2) loss of profit, bargain, use, expectation, anticipated savings, data, production, business, revenue, contract or goodwill; (3) any costs or expenses, liability, commitment, contract or expenditure incurred in reliance on this Agreement or representations made in connection with this Agreement; or (4) losses suffered by third parties or the Reseller’s liability to any third party".

The Court held that the exclusion clause was effective to exclude PTAP's claims for loss of profits and wasted expenditure and, in doing so, noted that:

  • the language of the clause clearly and unambiguously excluded such liability;
  • nothing in the Reseller Agreements or material background suggested that the parties had not intended to exclude claims for loss of profits and wasted expenditure; and
  • the clause did not have the effect of excluding all of PTAP's substantive rights and remedies as it was still able to bring a claim in relation to the incurred costs.

Key points to note:

  • The case confirms, in line with other recent judgments, that the courts will approach the exercise of construing an exclusion clause using the ordinary methods of contractual interpretation and on the basis that commercial parties are free to make their own bargains and to allocate risks as they think fit.
  • This case also shows that a court will not strain the language of an exclusion clause that explicitly refers to loss of profits, even if there is an imbalance in the parties' bargaining power.
  • When drafting exclusion of liability clauses, parties should consider closely the effect of the clause (taking into account the likely consequences of breaches) and use clear and unambiguous wording.

For more information on this case, please click here.