The European Commission has issued a set of guidelines to tackle terrorist content posted online - recommending that internet firms remove the offending material within one hour of being flagged by authorities.
"Considering that terrorist content is particularly harmful in the first hours of its appearance online, companies should as a general rule remove such content within one hour of its flagging by law enforcement authorities and Europol," reads the Commission's proposal.
While the "one-hour rule" is not mandatory, the Commission said it could follow up with binding legislation if companies fail to comply. The recommendation also applies to "illegal hate speech", child sexual abuse material, commercial scams and fraud, and breaches of intellectual property.
In order to police users, the commission suggests the use of automated detection methods and "trusted flaggers" - such as YouTube's use of the Southern Poverty Law Center (SPLC) and the Anti-Defamation League (ADL) - a practice which has recently come under fire in the U.S. for discriminatory action against conservative accounts.
"While several platforms have been removing more illegal content than ever before —showing that self-regulation can work— we still need to react faster against terrorist propaganda," said Andrus Ansip, Vice President of the commission.
The EDiMA trade association, which represents internet companies such as Google, Facebook and Twitter, believes that the one-hour rule could end up overwhelming online providers - harming the overall effectiveness of existing takedown systems.
"Our sector accepts the urgency but needs to balance the responsibility to protect users while upholding fundamental rights – a one-hour turn-around time in such cases could harm the effectiveness of service providers’ take-down systems rather than help," said the trade association.
The privacy group European Digital Rights also took issue with the one-hour rule, saying it would afford little time to "assess the illegality of the content" or to determine whether deleting the data might have counter-productive effects, such as interfering with criminal investigations. -pcmag
While the European Commission report doesn't recommend fines, several European countries have made efforts to punish social media companies which allow "hate speech" to remain on their platforms.
While several countries have strict penalties for hate speech, Germany began punishing social media platforms in January - enforcing a new law which gives companies such as Facebook, Twitter and YouTube 24 hours after a complaint to remove postings containing hate speech.
Failure to remove the offending posts in time will expose the platforms to fines of up to 50 million euros (~$61 million USD).
The new law was passed last June and went into effect in October - however, social media companies were given until January 1 to prepare for compliance, including maintaining an "effective and transparent procedure for dealing with complaints" which users can submit freely. Upon receiving a complaint, social media companies have 24 hours to block or remove "obviously illegal content" - and up to a week in "complex cases."
Germany has strict hate speech laws which criminalize certain language - such as incitement to racial or religious violence, speech denigrating religions, and other posts deemed offensive.
In November, Facebook hired over 500 of a reported 3,000 German contractors to help comply with the new law; they will work for a service provider called CCC out of a new office in the western city of Essen. Meanwhile, the German government has reportedly assigned a staff of 50 people to the task of implementing and policing the law.
The new law isn't just for the big three (Facebook, YouTube and Twitter) either:
Social platform giants such as Facebook, YouTube and Twitter were couched as the initial targets for the law, but Spiegel Online suggests the government is looking to apply the law more widely — including to content on networks such as Reddit, Tumblr, Flickr, Vimeo, VK and Gab. -TechCrunch
Social media giants face a gigantic task - one which some might say is impossible. An estimated 37.3 million Germans used social media in 2017.
Now, consider that every single flagged post from those millions of users must be evaluated before it is removed or blocked - and all of this within 24 hours.
Moreover, political enemies can attempt to game the system by filing false reports on each other. If social media companies then set "removal" as the default action upon a complaint, they will be inundated with appeals that an AI may not be able to handle - requiring a human to manually review each contested case.
As Cnet points out:
The massive amount of hate content, in particular, has been a problem for the sites. In June, Facebook said it removes 66,000 such posts every week. The company said it wants to do better but adds that the task is not easy. Last month, Facebook added new tools to try to curb abuse on the site. One new feature tries to make sure that when you block someone who's been harassing you, the person can't simply create a new account and continue the harassment. The tool does that by looking at various signals, like the person's IP address.
Also last month, Twitter escalated its fight against hate, enforcing an updated policy that bans people from promoting violence and hate in their usernames and bios, and threatens to remove accounts if users tweeted hate speech, symbols and images.
And just like that, AI and an army of "trusted flaggers" will be policing hate speech and other content deemed offensive. Given the overwhelmingly liberal bias among social media platforms, the "trusted flaggers", and EU leadership itself, Europeans who wish to voice concerns over cultural decay or migrant violence - or even post the name of the latest terrorist suspect - may find themselves muted indefinitely.