EU Proposal on Illegal Online Content: Perspectives on a Paper Tiger
The European Union, initially under pressure from governments concerned about illegal content promoting terrorism, has been looking for ways to effectively regulate online content for years. In October 2014, the European Commission hosted a private dinner for tech firms and Member States, during which ministers expressed “strong interest to enhance the dialogue with major companies from the internet industry on issues of mutual concern”. Since then, the EU has repeatedly urged web platforms like Facebook, Google and Twitter to find means to detect and swiftly remove hate speech, violence and terrorism-related content. It has also gradually developed the means to incentivize web platforms to play along.
Four years later, it is clear the Commission’s bet on issuing non-binding guidelines to the Internet industry, instead of regulating it outright, has paid off. As the carrot, the Berlaymont made clear that, in exchange for compliance, it would stick to a self-regulatory approach to web platforms that is less disruptive to business. As the stick, it threatened to impose legislative measures at every step. “If the tech companies don’t deliver, we will do it”, said Commissioner for Justice and Consumers Jourová.
Web platforms did team up to demonstrate proactive compliance. To that end, tech firms agreed to a code of conduct on hate speech in 2016, which required them to review the majority of valid notifications of hateful content within 24 hours. They also created a shared database of flagged images and videos to speed up takedowns across platforms. In September 2017, the Commission issued guidelines for web companies proposing common tools to detect illegal content and notify authorities, effectively remove illegal content and prevent its re-appearance. The present recommendation solidifies that groundwork and paves the way forward.
Considering its history, it is no surprise that the Commission’s Recommendation on measures to effectively tackle illegal content online of 1 March 2018 is not legally binding and foresees neither penalties nor fines for non-compliance. It translates the political commitment of the September 2017 communication into a legal form, providing legal definitions for previously nebulous concepts. While the approach remains voluntary, the Commission reiterated that online platforms will have to act if they want to avoid legislative action, and that their actions will be monitored.
Eager to crack down on terrorism-related content at a time when a spectrum of issues like fake news, intellectual property theft and product safety are also on the agenda, the proposed measures cover all forms of illegal online content: incitement to terrorism, illegal hate speech, child sexual abuse material, IP infringements and the protection of consumers’ rights online. While these are all pressing issues, they present very different challenges and are related to each other by little more than the need to fit into Commission President Juncker’s political vision for the Digital Single Market.
Importantly, the recommendation applies to all platforms, whether big or small. The Commission has taken note that hate speech and terrorist propaganda have moved away from the major social media platforms, which use both AI and human screening to remove content, to smaller sites which do not have the resources to police content. Because these sites refuse to self-regulate, the Commission remains powerless to enforce its rules online. While Commissioners expressed hope that larger platforms would share their web filtering and AI technology with smaller platforms to help them comply with new recommendations (such as a one-hour takedown rule for terrorist content), the Commission has failed to provide players like Facebook with any incentive for such a technology-sharing programme to take place. Without the participation of these smaller actors, any recommendation will have limited impact.
Even though many websites will take no action at all despite the present recommendation, the EU has resigned itself to the fact that it cannot have all hate speech and other illegal content removed from the web. It knows it is walking a fine line between protecting freedom of speech and laying down rules which could potentially inhibit it. It is for this reason that the Commission proposed new safeguards like notification and counter-notification mechanisms to avoid the unintended or erroneous removal of content that is not illegal, but might be caught up in the automatic filters employed by tech firms.
If the intent behind this leeway is to discourage over-zealous censorship by web platforms, the result may be that the proposed measures are not very effective. And while the EU is still far from decreeing which websites it believes to feature illegal content, civil rights defenders have already complained that the Commission is, in effect, privatizing law enforcement. By pushing Internet companies to monitor their platforms and decide for themselves which forms of speech are legal, the Commission is circumventing the role of the courts and making censorship too easy, the Center for Democracy & Technology has argued. At the very least, it is clear that the Commission is trying to shift the burden of responsibility from its own shoulders. The Commission fears that, by regulating, it will be accused of becoming the “Ministry of Truth”.
While the Commission’s recommendation makes its way forward on thin spring ice, the Commission is unwilling to go as far as to touch a directive which would upend the game completely for web platforms. The proverbial sacrificial lamb, the Commission’s 2000 e-Commerce Directive, will live yet another day. The Commission has been explicit in saying that the liability regime in the e-Commerce Directive (which exempts online service providers from liability for the content they host) will stay in place, and that amending it is not the way forward to tackle illegal content.
Meanwhile, both the Commission and web platforms know that the current situation is untenable in the long run. Newspapers remain heavily regulated as publishers because they curate content; web platforms, still largely unregulated, cannot pretend to be mere conduits of information for much longer as their curation capabilities improve.
The major platforms have bought themselves and their peers some time. The Commission won’t reopen a directive it cannot hope to close before the European Parliament elections of 2019. But platforms must succeed in their quest to remove illegal content, and that success cannot be limited to the large players. Big tech companies will have to push and help smaller players to follow suit. Otherwise they run the risk that the next Commission will move from waving its regulatory stick to using it.