Online platform liability: Global web companies seek EU clarity on liability for user content/fake news

Published on 01 June 2017


Background

For the purposes of EU e-commerce law, online platforms are intermediaries, as they provide online services that allow users to add their own unmoderated content. In contrast to traditional media outlets, intermediaries have historically not been liable for illegal content posted on their platforms unless they become aware of its existence.

The development

With the rise of fake news and other problematic content being posted on social media (eg terrorist-linked videos), some of the major online platforms have asked the EU to “explore the need for guidance on the liability of online platforms when putting in place voluntary, good faith measures to fight illegal content online.”

In particular, the platforms have requested clarity on:

• the circumstances in which a platform becomes “aware” of (and liable for) illegal content and
• the types of content scanning systems that could be implemented without violating the fundamental rights of EU citizens, eg freedom of speech.

This request was made as part of a recent workshop, held on 6 April 2017, between a variety of online platforms (including Google and Facebook) and the European Commission.

What’s next

Subsequently, the European Commission has released its mid-term review, in which it promised to provide guidance for platforms on the removal of illegal content and on the scenarios in which companies that scan for illegal content will not be held liable for it.

This is an area to watch closely, as the European Commission will host another three workshops this year on the following related issues:

• notifying and removing unwanted content
• difficulties that national judges encounter when trying to apply liability exemptions and
• the impact of voluntary measures on fundamental rights.

Why this is important

The minutes from the workshop mentioned above made clear that a significant proportion of the platforms canvassed were “keen to put voluntary measures in place…because maintaining a safe environment on their platform that consumers could trust was crucial to their business model and reputation.”

Platforms will need to keep an eye out for any guidance published, to ensure that their processes for handling fake news protect their reputation without crossing the line at which they become liable for the illegal content.

Additionally, the German parliament is considering legislation that could see platforms face fines of up to €50 million if they fail to promptly take down hate speech postings. Any platform with a presence in Germany may therefore need to take a nuanced approach to managing hate speech postings in that jurisdiction, one that does not leave it liable for that content in other member states.