Recently Facebook made public the guidelines used by its 7,500 moderators.
Now it has released some figures on how it deals with six types of content: graphic violence; adult nudity/sexual activity; terrorist propaganda; hate speech; spam; and fake accounts. The stats set out how prevalent each type of content is, how much of it Facebook discovers and acts on before the public reports it, and what actions Facebook takes. The stats cover the period from October 2017 to March 2018. In the future the reports will be published quarterly.
Key figures
Spam and fake accounts were the most prevalent types of content. In the first quarter of this year (Jan-Mar 2018) Facebook removed 837 million pieces of spam and 583 million fake accounts. Its systems spotted nearly 100% of spam and nearly 99% of fake accounts before the public reported them.
When it comes to hate speech, 2.5 million items were removed in the first quarter of 2018. However, the automated systems are much less successful at identifying this content before the public reports it, catching only 38% of it.
The full report is available here.