Facebook "Explains" Why It Failed To Remove Christchurch Shooter's Gruesome Livestream

Though Facebook's AI-powered censors managed to "mistakenly" flag Zero Hedge as a repeat violator of the social network's "community standards", the company is still working out the kinks in its ability to immediately identify and remove livestreams depicting horrific acts of terror and violence, like the video published Friday by the Christchurch shooter.

In a mea culpa blog post published Thursday, Facebook VP of Integrity Guy Rosen explained why the company failed to immediately remove the horrifying livestream of the attacks, which the shooter - a 28-year-old Australian who also published a manifesto laying out his violent, Islamophobic ideology - posted to Facebook Live, and which was viewed 4,000 times before being taken down.


According to Rosen, one reason the video lingered so long on the platform - Facebook didn't remove it until police responding to the incident reached out to the company, despite the video being reported multiple times - was that it wasn't prioritized for immediate review by the company's staff. As it stands, Facebook only prioritizes reported livestreams tagged as suicide or self-harm for immediate review.

To rectify this, the company is "re-examining its reporting logic" and will likely expand the report categories prioritized for immediate review (a rough sketch of that kind of triage rule follows the excerpt below).

In Friday’s case, the first user report came in 29 minutes after the broadcast began, 12 minutes after the live broadcast ended. In this report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures. As a learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review.
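To make the triage issue concrete, here's a minimal sketch of the category-based rule Rosen describes. The category names, queue labels and the `triage_report` function are our own illustration of the logic, not Facebook's actual code or taxonomy:

```python
# Hypothetical sketch of category-based report triage, based on Rosen's
# description: only certain report categories jump to the accelerated queue.
from dataclasses import dataclass

# Categories that, per the blog post, get accelerated review today;
# the literal set below is illustrative, not Facebook's real taxonomy.
ACCELERATED_CATEGORIES = {"suicide", "self_harm"}

# A broader set a platform might adopt after "re-examining reporting logic".
EXPANDED_CATEGORIES = ACCELERATED_CATEGORIES | {"violence", "terrorism"}

@dataclass
class Report:
    video_id: str
    category: str            # what the reporting user selected
    is_live_or_recent: bool  # live, or recently live, broadcast

def triage_report(report: Report, accelerated: set[str]) -> str:
    """Return which review queue a user report lands in."""
    if report.is_live_or_recent and report.category in accelerated:
        return "accelerated_review"
    return "standard_review"

# The Christchurch stream was reported "for reasons other than suicide",
# so under the original rule it never reached the accelerated queue:
r = Report(video_id="abc123", category="violence", is_live_or_recent=True)
print(triage_report(r, ACCELERATED_CATEGORIES))  # -> standard_review
print(triage_report(r, EXPANDED_CATEGORIES))     # -> accelerated_review
```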


For anybody wondering why Facebook refuses to implement a time delay on Facebook Live content, Rosen addressed that question as well.

Rosen also explained how the video managed to circulate even after the original was removed by Facebook. Apparently, a group of "bad actors" working to spread the imagery across the web managed to capture the video and share it on 8chan and various video-sharing platforms.

It was then repackaged by multiple users with slight visual variations (some filmed the video playing on their computer screens) and reposted to Facebook, inhibiting the ability of Facebook's AI to match and automatically remove the content (a matching problem illustrated in a toy sketch further below).

1. There has been coordination by bad actors to distribute copies of the video to as many people as possible through social networks, video sharing sites, file sharing sites and more.

2. Multiple media channels, including TV news channels and online websites, broadcast the video. We recognize there is a difficult balance to strike in covering a tragedy like this while not providing bad actors additional amplification for their message of hate.

3. Individuals around the world then re-shared copies they got through many different apps and services, for example filming the broadcasts on TV, capturing videos from websites, filming computer screens with their phones, or just re-sharing a clip they received.

In all, some 300,000 copies of the video evaded Facebook's AI filters and were re-posted on its platform. Those copies were eventually removed, but not before being seen by an undisclosed number of users.

In the first 24 hours, we removed more than 1.2 million videos of the attack at upload, which were therefore prevented from being seen on our services. Approximately 300,000 additional copies were removed after they were posted.

We’ve been asked why our image and video matching technology, which has been so effective at preventing the spread of propaganda from terrorist organizations, did not catch those additional copies. What challenged our approach was the proliferation of many different variants of the video, driven by the broad and diverse ways in which people shared it:

First, we saw a core community of bad actors working together to continually re-upload edited versions of this video in ways designed to defeat our detection.
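The failure Rosen describes is essentially a near-duplicate detection problem: an exact hash changes completely when a clip is cropped, re-encoded or filmed off a screen, so the system has to compare perceptual fingerprints with some tolerance. The toy average-hash below is our own illustration of that idea, not Facebook's actual matching system:

```python
# Illustrative near-duplicate matching via a simple average hash ("aHash").
# A re-encoded or screen-filmed copy changes raw bytes entirely, but its
# perceptual hash stays close, so matching uses a distance threshold
# rather than exact equality.

def average_hash(gray_frame: list[list[int]]) -> int:
    """Hash a downscaled grayscale frame (e.g. 8x8 pixels, values 0-255)
    into a 64-bit fingerprint: 1 where a pixel is above the mean."""
    pixels = [p for row in gray_frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits on which two fingerprints disagree."""
    return bin(a ^ b).count("1")

def is_match(hash_a: int, hash_b: int, threshold: int = 10) -> bool:
    """Treat two frames as the same content if their fingerprints
    differ in at most `threshold` of 64 bits."""
    return hamming(hash_a, hash_b) <= threshold

# A re-filmed or re-encoded copy shifts pixel values slightly, but the
# fingerprint barely moves, so the copy is still caught:
original = [[(x * 16 + y) % 256 for x in range(8)] for y in range(8)]
variant = [[min(255, p + 6) for p in row] for row in original]  # brightness shift
print(is_match(average_hash(original), average_hash(variant)))  # -> True
```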

Obviously, Facebook needs to do better. In addition to expanding the reporting categories flagged for immediate review, the company plans to employ new AI technology that matches not just the video, but also the audio, of removed videos, to help detect reposted copies with slight visual variations.

Most importantly, improving our matching technology so that we can stop the spread of viral videos of this nature, regardless of how they were originally produced. For example, as part of our response last Friday, we applied experimental audio-based technology which we had been building to identify variants of the video.
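Audio matching helps because re-filmed or visually edited copies usually carry the original soundtrack more or less intact. The following toy fingerprint - again ours, not Facebook's - records which frequency band dominates each short window of audio, a sequence that survives the kind of visual tampering described above:

```python
# Toy audio fingerprint (illustrative only): for each short window, record
# which frequency band carries the most energy. A copy filmed off a TV keeps
# roughly the same soundtrack, so its band sequence stays close to the
# original even when every video frame differs.
import numpy as np

def audio_fingerprint(samples: np.ndarray, window: int = 2048,
                      bands: int = 8) -> list[int]:
    """Return, per window, the index of the loudest frequency band."""
    fingerprint = []
    for start in range(0, len(samples) - window, window):
        spectrum = np.abs(np.fft.rfft(samples[start:start + window]))
        band_energy = [chunk.sum() for chunk in np.array_split(spectrum, bands)]
        fingerprint.append(int(np.argmax(band_energy)))
    return fingerprint

def similarity(fp_a: list[int], fp_b: list[int]) -> float:
    """Fraction of windows whose dominant band agrees."""
    n = min(len(fp_a), len(fp_b))
    return sum(a == b for a, b in zip(fp_a[:n], fp_b[:n])) / n

# Original audio vs. a noisy "filmed off a screen" copy of the same audio.
t = np.linspace(0, 2.0, 32_000, endpoint=False)
original = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2200 * t)
noisy_copy = original + 0.05 * np.random.randn(len(original))
print(similarity(audio_fingerprint(original), audio_fingerprint(noisy_copy)))  # ~1.0
```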

It's worth noting that the government of New Zealand is taking a more aggressive approach: threatening anyone found in possession of the video with a lengthy prison term.

Will this be enough? Only time will tell. Though we imagine that the 'bad actors' Facebook has implicitly blamed for the videos' spread (rather than admitting that Facebook's vast content-filtering network, which regularly bans, censors and suspends conservative voices, whether by accident or intentionally, failed at its one job) will simply adapt their strategies for evading detection.

And with that, the cat and mouse game continues.