In the aftermath of the recent Congressional Kangaroo Court, in which Mark Zuckerberg had to explain to various House Reps and Senators why 85% of them were recipients of Facebook's generous donations, the company has been forced to inject some more transparency into its operations and open the books, so to speak. This morning, Facebook published its enforcement numbers for the first time, three weeks after the company published the internal guidelines it uses to enforce its Community Standards.
The first report of its kind covers Facebook's enforcement efforts from October 2017 through March 2018, across six areas:
- graphic violence,
- adult nudity and sexual activity,
- terrorist propaganda,
- hate speech,
- spam, and
- fake accounts.
As part of today's disclosure, Facebook's numbers show: i) how much content people saw that violated its standards; ii) how much content was removed; and iii) how much content was detected proactively using technology, before people who use Facebook reported it.
Facebook also reported that it received 32,742 U.S. law enforcement data requests covering 53,625 users/accounts in the second half of 2017 (July through December), and that 85% of those requests produced some data.
So, as part of its effort to show the public "how much bad stuff is out there," here is what Facebook reported:
- We took down 837 million pieces of spam in Q1 2018 — nearly 100% of which we found and flagged before anyone reported it; and
- The key to fighting spam is taking down the fake accounts that spread it. In Q1, we disabled about 583 million fake accounts — most of which were disabled within minutes of registration. This is in addition to the millions of fake account attempts we prevent daily from ever registering with Facebook. Overall, we estimate that around 3 to 4% of the active Facebook accounts on the site during this time period were still fake.
- We took down 21 million pieces of adult nudity and sexual activity in Q1 2018 — 96% of which was found and flagged by our technology before it was reported. Overall, we estimate that out of every 10,000 pieces of content viewed on Facebook, 7 to 9 views were of content that violated our adult nudity and pornography standards.
- For graphic violence, we took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018 — 86% of which was identified by our technology before it was reported to Facebook.
But the most problematic category for Facebook remains hate speech, where the company admits that "our technology still doesn’t work that well and so it needs to be checked by our review teams."
And since "hate speech" means whatever one decides it should mean, it is here that wholesale censorship will take place, in the form of blanket muting or the more subtle "shadow banning" that Facebook has repeatedly used in the past against conservatives.
Here Facebook reveals that "We removed 2.5 million pieces of hate speech in Q1 2018 — 38% of which was flagged by our technology."
This means that 62% of the "hate speech" Facebook took down was the result of complaints from outside readers who found said content unpleasant or disagreeable and demanded it be removed. One wonders, as we proceed down this road, how long it will be until every new piece of content is flagged as "hate speech" and eventually taken down, leaving a feed totally devoid of any intellectual opposition. One also wonders how much it will cost Facebook to hire the thousands of arbiters who will subjectively decide just what "hate speech" is.
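The percentages above reduce to two simple ratios: a proactive-detection rate (content flagged by technology before any user report, divided by total actioned content) and a prevalence rate (violating views per 10,000 total views). Here is a minimal sketch of that arithmetic, using only the Q1 2018 figures quoted in this article; the helper names are our own, not Facebook's:

```python
def proactive_rate(flagged_by_tech: int, total_actioned: int) -> float:
    """Share of actioned content caught by automated systems
    before any user reported it."""
    return flagged_by_tech / total_actioned

def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Violating views per 10,000 content views (the 'prevalence' metric)."""
    return 10_000 * violating_views / total_views

# Hate speech, Q1 2018: 2.5 million pieces removed, 38% flagged by
# technology; the remainder came from user reports.
removed = 2_500_000
flagged_by_tech = int(removed * 0.38)
user_reported_share = 1 - proactive_rate(flagged_by_tech, removed)
print(f"user-reported share: {user_reported_share:.0%}")  # → 62%

# Adult nudity prevalence: 7 to 9 violating views per 10,000 views.
# Illustrative only: 8 violating views out of 10,000 total gives 8.0.
print(prevalence_per_10k(8, 10_000))
```

Note that the two metrics answer different questions: the proactive rate measures how much of the *removed* content the machines found first, while prevalence measures how much violating content users actually *saw*, removed or not.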
Finally, in the spirit of openness, we hope Facebook will reveal examples of what it classified as "hate speech" and then removed, especially in light of growing complaints that Facebook has become nothing more than a filter of non-liberal opinions, the 1st Amendment notwithstanding. That said, we won't be holding our breath.