
Cyber_Bytes - Issue 51

Published on 28 March 2023

Welcome to Cyber_Bytes, our regular round-up of key developments in cyber, tech and evolving risks.

Lack of data raises concern around how to define emerging risks

Peter Mansfield, partner at RPC, recently chaired a panel on digital risks at RPC's Global Access Week. During the discussion, Mansfield warned that new and emerging risks, which arise as we move into the fourth industrial age (the digital transition), will make it increasingly difficult for insurers to price and understand future exposures. Typically, insurers use historical data in order to quantify risks. However, Mansfield underlined that the market was ultimately entering an age of unknown risks.

The panel also explored the potential for cyber insurance to improve commercial cyber security as companies engage in the security-related "basic hygiene" required by their insurance policies. However, Eleonora Sorribes, partner at French law firm HMN & Partners, warned that organisations would need to be careful not to grow lax in their engagement with security protocols simply because they hold a cybersecurity policy.

Click here to read the Insurer article.

"Same Interest" test clarification for representative actions under CPR 19.6

CPR 19.6 provides that where more than one person has the same interest in a claim, the claim may be brought by or against one or more of those persons as representatives of the others who share that interest.

The High Court in Commission Recovery Ltd v Marks & Clerk LLP [2023] EWHC 398 (Comm) has recently revisited the "same interest" test for representative actions under CPR 19.6. The Court had to decide whether the entitlement of the claimant class could be calculated on a common basis. The underlying claim was for secret commissions, and there were some differences in the claims and the remedies sought.

The High Court held that the "same interest" test does not require claimants to have identical claims or interests. It relied heavily on the Supreme Court's decision in Lloyd v Google [2021] UKSC 50. That case affirmed that the entirety of a class may be represented by a sole representative in circumstances where the position of other class members would not be prejudiced.

The High Court's ruling solidifies the existing position and potentially leaves room for further development of the collective redress regime. Where a data breach causes mass harm to a group of individuals, the "same interest" test may be relevant to claimants seeking to bring a claim through a sole representative.

Click here to read the full Judgment.

The Data Protection and Digital Information (No.2) Bill introduced in the House of Commons

On 8 March 2023, the UK government introduced the Data Protection and Digital Information (No.2) Bill (the "Bill") to Parliament. The first version of the Bill was prepared in July 2022 and paused in September 2022 to allow for further consultation with businesses and data experts.

The changes seek to amend current legislation that the UK "inherited" from the EU in the form of the GDPR. The new data laws are set to "cut down pointless paperwork for businesses", according to the UK government's press release.

Key changes from a cyber breach response perspective

The Bill includes the following changes:

  • Article 5(1)(b) - This Article prohibits data processing that is not compatible with the original purpose for which the personal data was collected. The amendments clarify the rules around compatibility of further processing of personal data.

  • Article 6 - The Bill provides new examples of legitimate interests in processing personal data, including "national security, preventing crime, direct marketing, intra-group transmission of personal data where necessary for administrative purposes and ensuring security of network and information systems".

  • Article 12A - Requests from data subjects under Articles 14-22 and 34 can be rejected where they are found to be "vexatious or excessive". The Bill provides examples of such requests, including requests which are "intended to cause distress, are not made in good faith, or are an abuse of process".

  • Article 30A - The Bill extends record keeping requirements to include all controllers/processors (including small businesses) that carry out processing of personal data which is likely to result in a high risk to the rights and freedoms of individuals. These controllers/processors must maintain appropriate records of processing. The Bill specifies exactly what the controller's and processor's records must include.

  • Article 33 - A personal data breach notification to the ICO shall communicate the name and contact details of the "senior responsible individual" rather than the "data protection officer", following amendments made by the Bill. The senior responsible individual is a "designated individual [who] must be part of the organisation’s senior management". Notification obligations to the ICO do not apply to personal data processed for law enforcement purposes if it is for the purposes of safeguarding national security.

  • Article 34 - Data subject notification obligations relating to personal data breaches do not apply to personal data processed for law enforcement purposes if it is for the purpose of safeguarding national security.

  • The ICO has been granted a new power to issue an interview notice compelling a person to attend an interview and answer questions for the purposes of investigating a suspected offence under data protection legislation. Failure to comply with an interview notice can result in a monetary penalty, and it will be a criminal offence to knowingly or recklessly make a false statement in response to one.

Click here to read the UK Gov press release.

The NCSC discusses the cyber security risk of ChatGPT and large language models

Artificial Intelligence (AI) has been trending over the past 12 months, with OpenAI's ChatGPT (an AI chatbot that uses deep learning to produce human-like text) claiming headlines. The platform is built on a large language model (LLM), a type of model trained on vast amounts of text-based data. An LLM learns the statistical relationships between words and uses them to build a probability model of language. Users can then "prompt" the model by asking it questions, and it generates an answer based on the word relationships captured in its model.
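The "probability model" of word relationships can be illustrated with a toy bigram model. This is only a sketch of the underlying idea; real LLMs use neural networks trained over vastly larger contexts, not simple word-pair counts.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Count, for each word, how often each following word appears."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def next_word_probabilities(model: dict, word: str) -> dict:
    """Turn the raw counts for `word` into a probability distribution."""
    counts = model[word.lower()]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

model = train_bigram_model(
    "the cat sat on the mat and the cat slept on the sofa"
)
probs = next_word_probabilities(model, "the")
# "the" is followed by "cat" twice and by "mat" and "sofa" once each,
# so "cat" receives probability 0.5 and the others 0.25
```

Prompting, in this simplified picture, amounts to asking the model which continuations are most probable given the words seen so far.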

ChatGPT can allow users to ask an LLM questions as if holding a conversation with a chatbot. This includes the ability to use "prompt augmentation" which involves providing context information about the question.

Privacy concerns have begun to emerge since query "prompts" remain visible to the LLM host (this being OpenAI in the case of ChatGPT). These companies tend to store queries and use them as a foundation to develop the LLM service/model at a future point in time. Users of these products are being encouraged to thoroughly understand the use and privacy policies of public LLM platforms prior to asking sensitive questions which may include user-identifiable information.
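One practical way to act on that advice is to strip obvious user-identifiable details from a prompt before it leaves the organisation. The sketch below is purely illustrative: the two regular expressions are hypothetical examples, and a production redactor would need far broader coverage of identifier types.

```python
import re

# Illustrative patterns only; real PII detection needs much broader coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact_prompt(prompt: str) -> str:
    """Replace obvious identifiers before a prompt is sent to an LLM host."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

clean = redact_prompt(
    "Draft a reply to jane.doe@example.com on +44 20 7946 0000"
)
```

Redaction of this kind reduces, but does not eliminate, the risk of sensitive information reaching the LLM host, so reviewing the provider's privacy policy remains essential.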

Additional concerns include the potential for threat actors to coax an LLM into writing highly capable malware, or into assisting with escalating privileges and finding data once a threat actor has gained access to a network. There is also scope for LLMs to assist threat actors in carrying out social engineering attacks by helping to write convincing phishing emails in the native language of a target. Although AI remains an exciting development which has the potential to boost efficiency within society, organisations and individuals must remain vigilant of bad faith actors who seek to exploit new systems for their own malicious gain.

Click here to read the NCSC blog post.

Ransomware gang claims to have breached Amazon-owned Ring

The infamous ransomware group known as ALPHV claims to have compromised Ring, the Amazon-owned company that builds smart doorbells with cameras. ALPHV rose to prominence following its use of the BlackCat encryptor malware. Ring's logo recently appeared on the group's "leak site", alongside a threat to publicly leak the smart doorbell operator's data.

Amazon itself has said little on the matter, issuing only a short statement that it has "no indications" of Ring experiencing any ransomware attacks. The US tech giant did however announce that a third-party vendor fell victim to a ransomware attack and that Ring is now engaged in an effort to learn more about the incident. Amazon reiterated that the impacted vendor does not have access to its own customer records.

Questions have emerged regarding the data which ALPHV has accessed and is now leveraging, as well as how the group was able to compromise the target network. It is not yet clear which third-party vendor has been compromised and whether it is considering negotiating with hackers or paying a ransom. No further details will likely be known until ALPHV leaks the data, or the targeted company files a report with the Securities and Exchange Commission (SEC).

Click here to read the Tech Radar article.

ICO shares resources to help designers embed data protection by default

The Information Commissioner's Office (ICO) has produced new guidance which aims to assist tech service providers in embedding data protection into their products and services from their inception.

The guidance deals with key privacy considerations at each stage of product design, from "kick-off" through to the "post-launch" period. It includes examples of good practice as well as practical steps which the ICO would expect organisations to take, when designing products and services, in order to comply effectively with data protection laws. Key takeaways for organisations include involving other stakeholders in privacy discussions, ensuring that there is a lawful reason for processing any personal information, in line with a Data Protection Impact Assessment (DPIA), and keeping track of the personal information that is handled. The guidance also recommends that organisations always check whether privacy risks arise from new products or features which involve new uses of personal information, and consider how threat actors could use these new sources maliciously.

Whilst the ICO has recommended technical privacy-enhancing methods such as hashing or encryption, there remains no substitute for a genuine consideration of privacy during the design process, especially in light of the potential consequences where leaked information and personal data end up in unauthorised hands.
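To make the hashing recommendation concrete, here is a minimal sketch of pseudonymising an identifier. It deliberately uses a keyed hash (HMAC) rather than a plain hash, because plain hashes of low-entropy values such as email addresses can be reversed by brute force; the `PEPPER` value is a hypothetical stand-in for a secret that would in practice live in a secrets manager. Note that pseudonymised data generally remains personal data under UK data protection law.

```python
import hashlib
import hmac

# Hypothetical secret; in practice this would be held in a secrets manager.
PEPPER = b"org-wide-secret"

def pseudonymise(value: str) -> str:
    """Replace an identifier with a keyed hash so records stay linkable
    without storing the identifier itself."""
    return hmac.new(PEPPER, value.encode(), hashlib.sha256).hexdigest()

token = pseudonymise("jane.doe@example.com")
# The same input always yields the same token, so analytics can still
# join records, but the raw email address is no longer stored.
```

The keyed design means an attacker who obtains the tokens alone cannot feasibly recover the identifiers without also obtaining the secret.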

Click here to read the full ICO blog post.

Artificial intelligence in the role of assessing cyber risk

The University of Warwick has produced a review assessing the opportunities for using AI to help reduce cyber risk and threat exposure within the insurance sector. Cybercrime has been on the rise since the onset of the COVID-19 pandemic, with the emergence of sophisticated new methods of attack. If employed effectively, AI could help combat these risks by performing tasks such as detecting and preventing cyber-attacks in real time, resisting novel cybercrime and increasing the effectiveness of cyber security teams.

AI is already used in back-end functions such as fraud detection, and Machine Learning (ML) techniques are also being applied. For example, Support Vector Machines (SVMs) are an ML algorithm that learns from examples of fraudulent and non-fraudulent activity reports to identify credit card fraud.
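The learning-from-labelled-examples idea can be sketched with a perceptron, a simplified linear classifier in the same family as an SVM. The features and figures below are invented toy data, not a real fraud model.

```python
def train(samples, labels, epochs=20, lr=0.1):
    """Learn a linear boundary separating fraudulent (1) from
    legitimate (0) transactions, perceptron-style."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # 0 when already correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy features: [amount in £1000s, distance from home in 100s of km]
samples = [[0.1, 0.1], [0.2, 0.3], [5.0, 8.0], [6.0, 9.0]]
labels = [0, 0, 1, 1]   # 0 = legitimate, 1 = fraudulent
w, b = train(samples, labels)
```

A real SVM additionally maximises the margin between the two classes and can use kernels for non-linear boundaries, but the workflow, fitting a decision rule to labelled fraud reports and then scoring new transactions, is the same.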

There are further opportunities for the insurance sector to take with AI. Natural Language Processing (NLP) has been earmarked as a leading interdisciplinary focus which, when applied to cybersecurity, enables interactions between people and machines in the insurance industry. It can assist in identifying the risk of a phishing attack by scanning vast datasets of email conversations, or by tracking emails that enter an organisation's network in order to identify patterns of malicious behaviour.
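At its simplest, scanning email text for phishing indicators amounts to scoring a message against known suspicious language. The indicator list below is a hypothetical illustration; real NLP systems learn such signals from large labelled datasets rather than using a hand-written word list.

```python
import re

# Hypothetical indicator terms; production systems learn these from data.
SUSPICIOUS = {"urgent", "verify", "password", "suspended", "click"}

def phishing_score(email_body: str) -> float:
    """Return the fraction of known suspicious terms present in a message."""
    words = set(re.findall(r"[a-z]+", email_body.lower()))
    return len(words & SUSPICIOUS) / len(SUSPICIOUS)

score = phishing_score(
    "URGENT: verify your password or your account will be suspended"
)
# Four of the five indicator terms appear, giving a score of 0.8
```

A message scoring above a chosen threshold could then be flagged for review, quarantined, or fed into a fuller ML pipeline.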

AI and ML could also help defend against DDoS attacks by comparing network traffic with real-time data streams collected from threat-intelligence sources to spot attack trends.
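Comparing live traffic against a baseline can be sketched as a simple statistical anomaly check: flag the current request rate when it sits many standard deviations above normal. The figures are invented, and real DDoS defences combine many such signals with threat-intelligence feeds.

```python
from statistics import mean, stdev

def is_traffic_anomalous(baseline, current, threshold=3.0):
    """Flag the current request rate if it lies more than `threshold`
    standard deviations above the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return (current - mu) / sigma > threshold

# Hypothetical requests-per-second samples from normal operation
baseline = [100, 110, 95, 105, 98, 102, 107, 99]
```

Under this rule, a sudden surge to thousands of requests per second would be flagged, while ordinary day-to-day fluctuation would not.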

Click here to read the full WTW post.

Countering Ransomware Financing

The Financial Action Task Force (FATF) has produced a report analysing the methods used by criminals to carry out ransomware attacks. The study also covers how payments are made and laundered, with the aim of improving global understanding of the financial flows linked to ransomware and highlighting actions that countries can take to effectively disrupt ransomware-related money laundering.

The report explores how ransomware criminals tend to opt for virtual assets such as cryptocurrencies to facilitate large-scale cross-border transactions. This circumvents the involvement of traditional financial institutions that have anti-money laundering and counter terrorist financing (AML/CFT) programs. Of particular concern therefore are jurisdictions with weak or non-existent AML/CFT controls. The FATF report proceeds to explore potential solutions to the problem, with the key takeaway being a need to regulate the virtual asset service provider (VASP) sector and build upon and leverage existing international cooperation and information exchange mechanisms. This is due to the globalised nature of ransomware attacks which necessitate an increased focus on rapid cross-border funds tracing and effective asset recovery.

Finally, ransomware attacks have been found to be generally underreported: detection can be a challenge for the private sector, and victims may fear the negative reputational impact on their business. Moving forward, the key objective for regulators will be to create an environment in which victims feel encouraged to report incidents. This is all the more crucial given that the current state of underreporting hampers regulators' ability to substantively investigate money laundering related to ransomware.

Click here to read the full FATF publication.