Data Dispatch - January 2024

Published on 01 February 2024

Welcome to the third edition of Data Dispatch from the Data Advisory team at RPC. Our aim is to provide you each month with an easy-to-digest summary of key developments in data protection law.

The format makes it easy to get a flavour of each item from its short summary, from which you can click "read full article".

Please do feel free to forward the publication to your colleagues or, better still, recommend that they subscribe to receive it directly.

If there are any issues on which you'd like more information (or if you have any questions or feedback), please do let us know or get in touch with your usual contact at RPC.

Key developments

EU and UK AI Governance Update

EU officials in the Council of the EU and the European Parliament have forged a historic provisional deal on the content of the AI Act, which is intended to regulate the use of systems like ChatGPT and facial recognition AI models. This deal contains some notable new features compared to the initial European Commission proposal, including new rules on high-impact general-purpose AI models, further items on the list of prohibited uses of AI, and an obligation on entities deploying high-risk AI systems to conduct a fundamental rights impact assessment. The framework has been hailed as the first comprehensive legal framework of its kind and is hoped to be a launchpad for global AI leadership.

Against the background of this landmark deal, the unofficial consolidated draft text has recently been leaked, revealing the latest position in a number of key areas and highlighting important dates. For example, the AI Act is intended to take effect two years after publication in the EU Official Journal (mirroring the EU GDPR's two-year lead-in period), with other key provisions taking effect on a set of staggered deadlines. Other amendments focus on promoting AI literacy and ensuring that high-risk AI systems are designed in such a way that individuals can have effective oversight over their functioning. The European Parliament is set to vote on the proposals in early 2024, although the final legislation is not expected to come into force until 2025. The European Parliament elections in June are likely to incentivise all stakeholders to agree the final text as soon as possible.

In the meantime, the European Commission has announced an AI Innovation Package targeted at start-ups and SMEs. Elements of the package include access to AI-dedicated supercomputers and other resources, the establishment of an AI Office to supervise and enforce the AI Act, and around €4bn of financial support dedicated to generative AI until 2027.

In the UK, the National Cyber Security Centre (NCSC) released ground-breaking guidelines, developed in collaboration with global tech giants like Amazon, Google, Microsoft and OpenAI, which aim to ensure safe and secure AI system development in the future. The internationally endorsed guidelines are broken down into four key sections: secure design; development; deployment; and operation and maintenance of AI systems. The NCSC provides guidance on considerations and mitigations which companies can take into account to reduce the risk associated with their development of AI systems. The "secure by default" approach embodied in this guidance is shared across other NCSC guidance, which prioritises ownership of security outcomes, radical transparency and accountability, and secure organisational structure and leadership. The initiative also aligns with the US Cybersecurity and Infrastructure Security Agency's roadmap, supporting President Biden's Executive Order on AI standards.

The ICO simplifies the risk assessment requirements for personal data transfers to the US 

Following the Schrems II case, organisations subject to UK data protection law that wish to transfer personal data to a jurisdiction which does not benefit from an adequacy regulation (also known as a data bridge) must conduct a transfer risk assessment (TRA) in conjunction with their use of appropriate safeguards (such as the International Data Transfer Agreement (IDTA), the Addendum to the EU Standard Contractual Clauses or Binding Corporate Rules). This additional requirement has represented a challenge for businesses, as conducting a TRA is a complex exercise in cross-jurisdictional data protection risk assessment.

New guidelines from the Information Commissioner's Office (ICO) make clear that, to streamline the TRA process, organisations can leverage the Department for Science, Innovation and Technology (DSIT)'s analysis, which evaluated relevant US laws and practices for the purposes of establishing the US data bridge (the UK Extension to the EU-US Data Privacy Framework). Organisations can refer to the DSIT analysis in their TRAs, rather than needing to carry out their own analysis. Note that organisations should still stay abreast of any future changes to DSIT's analysis which may impact the outcome of the assessment.

The aim of the ICO guidelines is to help organisations to expedite the TRA process before carrying out data transfers to the US. This is a helpful development for businesses wishing to export personal data from the UK to a US-based data importer which is not certified under the US data bridge. 

Source: ICO Website

Enforcement action

The New York Times sues Microsoft and OpenAI

In its Complaint, filed in the US District Court for the Southern District of New York, the New York Times has alleged that Microsoft and OpenAI unlawfully used the New York Times' published works and valuable, copyrighted materials to train their generative AI chatbots. This training, which underpins the defendants' large language models, gave those chatbots the benefit of New York Times content, access to which is primarily provided by the New York Times via a subscription service or by licence for commercial use.

The New York Times is the first major US media outlet to sue Microsoft and OpenAI in relation to their use of materials to train artificial intelligence, and its claim follows apparent failed negotiations between the parties to reach a commercial agreement. In its Complaint, which makes claims including copyright infringement, unfair competition and trademark dilution, the New York Times has submitted evidence which includes examples in which Microsoft and OpenAI's chatbots were asked specific questions and responded with almost the exact wording of New York Times articles. The New York Times alleges that this circumvention of the requirement for the user to go through a paywall or obtain a commercial licence undermines its business and revenue generation.

In the Complaint, the New York Times also highlights the damage which could be caused to its reputation and brand by "AI hallucinations". Hallucinations are responses from an AI chatbot which contain false or misleading information not based on any real data. The New York Times provides a number of examples of such hallucinations, produced using the defendants' AI tools, which it claims were falsely attributed to it.

This lawsuit comes right on the heels of the announcement by German publisher Axel Springer and OpenAI that they have entered into a watershed licensing deal which enables OpenAI's ChatGPT to pull certain content from the publisher's titles, including Business Insider, Politico, Bild and Die Welt.

The Axel Springer licensing deal has been hailed by some as a new way for media outlets to generate revenue streams in the modern world. However, the New York Times' lawsuit has the potential to lay down a marker for how businesses should approach the training of AI solutions, and the consequences they may face should they use the valuable, copyrighted materials of others without permission.

CJEU rules that the production of credit scores constitutes automated decision-making under the GDPR

This case was initially referred to the CJEU by the Administrative Court of Wiesbaden in Germany and concerned a data subject who was refused a loan by a German bank based on her negative credit score. The data subject made an application to SCHUFA (a German credit reference agency) to access the personal data used by SCHUFA to produce her credit score, and to erase some of her personal data on the basis that it was allegedly incorrect. In response, SCHUFA informed the data subject of her credit score and broadly how it was calculated. However, referring to trade secrecy, SCHUFA refused to disclose the factors which were considered in its calculation of the data subject's credit score and how each factor was weighted. Further, SCHUFA stated that its contractual partners (including the data subject's bank) were the ones that actually made the decisions based on the credit scores which SCHUFA produced.

Under Article 22 GDPR, data subjects have the right not to be subjected to a decision based solely on automated processing of personal data, including profiling, which produces legal or similarly significant effects for them. They also have the right to receive (under Articles 13 and 14) and to request (under Article 15) meaningful information about the logic involved in the processing. The CJEU noted that, where a third party to which a credit score is transferred (in this case, the data subject's bank) draws strongly on that credit score to establish, implement or terminate a contractual relationship with a data subject (e.g. a loan), the processing by the credit reference agency constitutes an automated decision for the purposes of Article 22 GDPR. This is because, in reality, the credit score produced by SCHUFA was a decision in and of itself which produced significant effects for the data subject, as it ultimately determined whether the data subject would be granted a loan by the bank. The court was also influenced by the fact that a narrow interpretation of Article 22 would have led to a lacuna in the legislation: the data subject would not have been able to enforce her rights to information about the decision against SCHUFA, while the bank would not have been in a position to provide that information.

Beyond the immediate relevance to those involved in producing or obtaining credit scores, the decision may have wider impact as businesses increasingly contract with service providers to support algorithmic decision-making. The CJEU's ruling means that businesses which provide automatic calculations or processes to their customers to assist them in making decisions which have a legal, or similarly significant effect for data subjects, may be caught by Article 22 GDPR even if they do not make the final decision themselves. As such, these businesses should ensure that, where required, they are in a position to adequately respond to data subjects' requests for information, and to review the information which they automatically produce to inform their customers' decision-making processes.

Further, while this ruling of the CJEU is not binding on the UK, it represents a persuasive authority, and is likely to inform how the UK courts and the Information Commissioner's Office deal with claims from data subjects in relation to automated decision-making processes used by UK controllers. 

Need to know

Data Bill progresses through UK Parliament with new amendments

The Data Protection and Digital Information (DPDI) Bill (first introduced on 8 March 2023) was carried over to the 2023-24 session of the House of Commons on 8 November 2023. It returned to Parliament bearing several new changes introduced by the UK Government at the last minute. Most of these changes relate to cracking down on benefit fraud and the use of biometric data by the police. However, several will be relevant to UK businesses:

  • The timeframe for organisations to report breaches under the Privacy and Electronic Communications Regulations (PECR) has been aligned with the timeframe for breaches under the UK GDPR, i.e. without undue delay and, where feasible, not later than 72 hours after becoming aware.
  • The obligation on organisations when responding to data subject access requests has been limited to conducting "reasonable and proportionate searches". This reflects the ICO's current guidance.
  • Where a court has been asked to determine whether a data subject is entitled to information under a data subject access request, the court may require a controller to make the disputed information available for the court's inspection; until the dispute has been resolved, a court may not order the information to be disclosed through the litigation process (i.e. discovery and disclosure).

The DPDI Bill is now at the Committee stage in the House of Lords and we anticipate that it will receive Royal Assent in the spring of this year. We will keep you updated on further milestones of this new law, which will shape the future of the UK's data protection landscape.

Cookie update: Google's restriction of third-party cookies and the ICO's recent warning

Third-party cookies on websites have been a mainstay of personalised online advertising. However, their ability to collect data and track user activity across websites has raised privacy red flags. To allay these concerns, Google, as part of its "Privacy Sandbox" initiative, is developing new privacy-conscious technology to replace third-party cookies.

Google has now announced that it will restrict third-party cookies for 1% of its users in early 2024, with the aim of doing the same for all remaining users by the end of the year. It will replace them with other technologies, such as the Topics API, which allows advertising to be targeted at users based on broad categories of interest rather than their individual browsing histories. Digital advertisers will need to grapple with the loss of cookie-based advertising and of the specificity of user data to which they have historically had access. The Competition and Markets Authority (CMA) will also be keeping a close eye on any competition concerns arising out of the use of the Privacy Sandbox.
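For readers curious about the mechanics, the sketch below shows roughly how a website might query the Topics API in a supporting Chromium browser. It is a minimal illustration only, assuming the experimental document.browsingTopics() call; the exact shape of the returned objects may differ as the API evolves.

```typescript
// Minimal sketch of reading a user's inferred interest topics via the
// Privacy Sandbox Topics API. Assumes a Chromium browser with the API
// enabled; the field names on the returned objects are illustrative.
async function readUserTopics(): Promise<void> {
  // Feature-detect: the API is not available in all browsers.
  if (!('browsingTopics' in document)) {
    console.log('Topics API not supported in this browser.');
    return;
  }

  // Returns coarse taxonomy topics observed for this user over recent
  // weeks (e.g. "News", "Fitness") -- not their per-site browsing history.
  const topics: Array<{ topic: number; taxonomyVersion: string }> =
    await (document as any).browsingTopics();

  for (const t of topics) {
    console.log(`Topic ID ${t.topic} (taxonomy ${t.taxonomyVersion})`);
  }
}
```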

Separately, on 15 November 2023 the ICO warned 53 of the UK's 100 most visited websites that they must, within one month, ensure their cookie banners and placement of cookies for direct marketing purposes comply with data protection law. The ICO guidance is clear that organisations must obtain valid consent to direct marketing cookies, ensure consent is obtained before the cookies are placed, and respect user choices. Websites should make it easy for users to reject advertising cookies by having a "reject all" button in the top layer of their cookie banner which is as prominent as the "accept all" button.

In an update on 31 January 2024, the ICO stated that it had received an "overwhelmingly positive response" to its warning letters, with 38 organisations having changed their cookie banners, four committing to be compliant within the next month and several others working to develop alternative solutions.
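To make the ICO's expectations concrete, the following is a minimal, hypothetical sketch of a banner whose top layer offers "Reject all" with the same prominence as "Accept all", and which sets no advertising cookies until the user has made a choice. The element names and the setAdvertisingCookies() helper are illustrative assumptions, not a recommended implementation or anything drawn from ICO guidance.

```typescript
// Illustrative consent banner: no advertising cookies before a choice is
// made, and "Reject all" is as prominent as "Accept all" in the top layer.
function showConsentBanner(): void {
  const banner = document.createElement('div');

  const message = document.createElement('p');
  message.textContent =
    'We would like to use advertising cookies. Accept or reject them below.';

  // Both buttons share the same styling, so rejecting is as easy as accepting.
  const acceptBtn = document.createElement('button');
  acceptBtn.textContent = 'Accept all';
  const rejectBtn = document.createElement('button');
  rejectBtn.textContent = 'Reject all';
  [acceptBtn, rejectBtn].forEach((b) => b.classList.add('consent-button'));

  acceptBtn.addEventListener('click', () => {
    recordChoice('accepted');
    setAdvertisingCookies(); // only ever called after explicit consent
    banner.remove();
  });
  rejectBtn.addEventListener('click', () => {
    recordChoice('rejected'); // no advertising cookies are set
    banner.remove();
  });

  banner.append(message, acceptBtn, rejectBtn);
  document.body.prepend(banner);
}

// Persist the choice so it is respected on later visits.
function recordChoice(choice: 'accepted' | 'rejected'): void {
  localStorage.setItem('ad-cookie-consent', choice);
}

// Hypothetical helper: load advertising tags only after consent is given.
function setAdvertisingCookies(): void {
  document.cookie = 'ads_enabled=1; path=/; max-age=31536000';
}
```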

The ICO and the Government grapple with privacy issues related to crime prevention and facial recognition

The ICO's blog post clarifies that the sharing of criminal offence data by retailers is permissible only where "necessary and proportionate" to detect or prevent crime. This involves a balancing act between crime prevention and respecting individuals' right to privacy. The ICO draws a distinction between sharing information with a small group of individuals and publishing it on a social media platform. The post gives some practical examples of what it considers is likely to be appropriate sharing for the prevention or detection of shoplifting, including the sharing of information with the police and with the manager of another store in the retailer's shopping centre. The ICO's examples of where sharing may not be appropriate include putting up pictures in shop windows and publishing information on social media, where it becomes widely available to the general public in the area.

Separately, a group of parliamentarians has called on the ICO to review its approach to facial recognition technology (FRT) surveillance, noting the serious risk of harm that its use can cause to individuals. The letter urges the ICO to enforce against providers of facial recognition technology which do not comply with data protection law. The letter also remarks that the ICO's approach does not align with the harder-line approach taken in other jurisdictions. In this regard, it is worth noting that the proposed text of the AI Act prohibits the untargeted scraping of facial images to create facial recognition databases, recognising the potential threat to individuals' rights.

The ICO has issued a preliminary response to this letter, saying that it is aware of the potential harms when FRT is used unlawfully and that it will respond to the parliamentarians' specific concerns in due course.
