UK publishes response to consultation on online harms

Published on 09 June 2021

What does the government’s response to its consultation on the Online Harms White Paper mean for “Big Tech”?

The key takeaway

Companies will be obliged to ensure that their services and platforms provide safe spaces for users, and to take steps to halt the proliferation of harmful misinformation.

The background

The government has published its response to the consultation on its Online Harms White Paper, which was first published in April 2019. The White Paper sets out significant evidence of harmful content and activities taking place online, as well as increasing public awareness of, and concern about, online content that is not illegal but is potentially harmful. It covered online content or activity “that harms individual users, particularly children, or threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration”. The types of content and activities range from cyber-bullying to misinformation. While the White Paper acknowledges that these activities may not be illegal, it recognises that they can have significantly damaging effects and a detrimental impact on users’ online experiences, particularly amongst children and young adults.

In order to address the harmful content and activities identified in the White Paper, a new duty of care was proposed, aimed at making companies take responsibility for user safety. Its aim is to improve safety for users of online services and to prevent people from being physically or psychologically harmed as a direct consequence of content and activity on those services, as well as to hold content providers and/or facilitators accountable. The consultation gathered views on various aspects of the government’s plans for regulation and tackling online harms, including:

  • the online services in scope of the regulatory framework

  • options for appointing an independent regulatory body to implement, oversee and enforce the new regulatory framework

  • the enforcement powers of an independent regulatory body

  • potential redress mechanisms for online users

  • measures to ensure regulation is targeted and proportionate for industry.

The development

The government has committed to making the Online Safety Bill ready in 2021, which will give effect to the new regulatory framework outlined in the response. This follows criticism from the House of Lords regarding the urgency with which a new regime was needed, and the fact that “the COVID-19 pandemic has meant that the risks posed by illegal and harmful content and activity online have also been thrown into sharp relief as digital services have played an increasingly central role in our lives”. The incoming regulatory framework, to be overseen and enforced by Ofcom, will apply to companies whose services host user-generated content or facilitate public or private online interaction between users, as well as to search engines. This means that, as well as applying to, for example, publicly shared content on social media platforms, it will also apply to online instant messaging services and private social media groups.

There are several exemptions, including for business-to-business services and services used internally by organisations. Additionally, the legislation will not impact journalistic content published by a newspaper or broadcaster on its website. It should also be noted that, regardless of the country in which a company is based, if it provides services to users in the UK then it will be within scope of the new regulatory framework.

One question of importance to those companies likely to be caught by the incoming regulations is what exactly constitutes harmful content or activity, and what steps need to be taken to ensure compliance with the rules. The response states that the legislation will provide a general definition and that it will include content or activities that give rise to a reasonably foreseeable risk of harm to individuals. The framework will also take a tiered approach, outlining the steps that need to be taken in relation to harmful activities or content.

Most services provided by companies will be Category 2 services, such as dating apps and private messaging services.

Providers of Category 2 services will need to take proportionate steps to address illegal content and activity (in each case, where it meets the definition of harm) and to protect children from content that would be harmful to them, such as violent or pornographic content. There is then a small group of high-risk, high-reach services, mainly consisting of large social media sites, that will be designated as Category 1 services. Providers of these services will additionally be required to act in respect of content or activity on their services which is legal but harmful to adults. All companies in scope will also have several duties in addition to the core duty of care, including providing mechanisms to allow users to report harmful content or activity and to appeal the takedown of their content.

Additionally, the regulations aim to take on the recently spotlighted phenomenon of “fake news”. The new duty of care will cover misinformation and disinformation and oblige companies to implement specific transparency requirements that are likely to be more stringent than the steps already being taken by social media organisations to curb the potential harm caused by fake news.

The government has also confirmed that Ofcom will have robust enforcement powers in order to ensure compliance with the regulatory regime. The current proposal is to give Ofcom the power to issue fines of up to £18m or 10% of global annual turnover, whichever is the higher, for non-compliance with the new regime.

Why is this important?

Continued access to the internet, as well as the increasingly central role that online services play in our day-to-day lives, means that an ever greater spotlight is being shone on the content that is able to circulate across platforms.

Organisations must put effective technical, organisational and administrative measures in place to comply with the new regulations, as well as taking steps to increase both government and public confidence in their ability to properly police the services they provide.

Any practical tips?

Implementing and maintaining appropriate measures to ensure compliance with the regulations will be a cheaper alternative than getting stuck with an investigation and a potentially sizeable penalty from Ofcom.

It will be important to keep an eye on publications and any enforcement action from Ofcom to understand how it will interpret and enforce the regulations. In the meantime, organisations should start implementing robust procedures to ensure that harmful content is not propagated through their platforms. Some measures could include:

  • ensuring fast responses to reports of harmful content
  • ensuring that effective monitoring procedures are in place in order to detect and remove harmful content
  • updating codes of conduct for users, and
  • considering bans for users found to be in breach.