Online Harms White Paper proposes regulatory framework to entrench online safety

Published on 04 July 2019

Can the UK become the safest place in the world to go online?

The background

Since publishing its Digital Charter in January 2018, the Government has not been shy about its desire to combat what it perceives as unacceptable levels of illegal and harmful content online in the UK. As reported in our Spring 2019 edition of Snapshots, the House of Lords Communications Committee published a high-level report entitled “Regulating in a digital world”, outlining ten key principles which it proposed should guide the development and implementation of digital regulation in the UK.

On 8 April 2019, the Department for Digital, Culture, Media & Sport (DCMS) published the Online Harms White Paper. This White Paper forms part of the Government’s drive to make internet companies more accountable for user-created content and contains a number of proposals aimed at introducing a new regulatory framework to ensure the UK is “the safest place in the world” to go online.

The development

The White Paper proposes the implementation of a new statutory duty of care to tackle “online harms”. The scope of “online harms” to be covered by the proposed duty is wide, ranging from harms caused by child sexual exploitation and terrorist activity to those caused by cyberbullying, disinformation and the advocacy of self-harm.

A wide variety of companies will be caught by the proposed legislation. Although the White Paper stops short of providing examples, it states that the statutory duty of care will apply to those companies which host, share and/or allow the discovery of user-generated content, or which facilitate public and private user interaction online.

These companies will need to take a proactive approach to user safety. They will be expected to take reasonable steps to remove harmful content and activity from their platforms and to introduce effective and easy-to-use user complaints functions. Where a complaint is made, prompt action will be required. Further, companies will need to actively combat sexually exploitative and terrorism-related content through targeted monitoring.

Compliance with the statutory duty of care will be overseen and enforced by an independent regulatory body. The regulator will be responsible for developing new codes of practice, and will be provided with a full suite of powers to take effective enforcement action against companies in breach of the new statutory duty. This will include the ability to impose substantial fines of up to 4% of a company’s global turnover.

Why is this important?

In the words of the UK Digital Secretary, Jeremy Wright, the proposed changes mark the end of “the era of self-regulation” for online companies. At the more extreme end of the scale, the Government wants to prevent a repeat of the recent tragedy in Christchurch, where a terrorist attack was live-streamed to a global audience through social media platforms. However, it also wants online companies to become more accountable for the online abuse, bullying and fake news which affect internet users on a daily basis. In its current, widely-drafted form, the statutory duty of care will apply to global social media platforms and search engines, as well as internet forums and even review sites. Such companies will be expected to actively respond to online harms, taking action proportionate to the severity and scale of the harm. To increase transparency, companies will also be expected to provide annual reports evidencing the effectiveness of the measures and safeguards they have in place, as well as the processes used to identify, block or remove harmful content.

Since being published, the White Paper has come under scrutiny for what some perceive as a clumsy, heavy-handed attempt to police the internet. Some commentators have labelled the proposals a violation of freedom of speech, while others have accused the Government of aggressive censorship. The White Paper pre-empts this criticism, stating that the regulator will not be responsible for policing truth or accuracy online, nor will its powers encroach on current data protection measures in force under the GDPR. Nevertheless, the scope of the new regulator’s responsibilities is yet to be finalised, so it remains to be seen whether such criticism is justified.

Any practical tips?

The current proposals are not yet solidified in statute; instead, they form part of a public consultation which is due to end on 1 July 2019. Companies facing the proposed regulations (in particular, those with larger online presences) would be well advised to assess their current ability to deal with online harms effectively and, if necessary, revamp their existing complaints functions.