(SAN FRANCISCO) — Getting rid of racist, sexist and other hateful remarks on Facebook is more challenging than weeding out other types of unacceptable posts because computer programs still stumble over the nuances of human language, the company revealed Tuesday.
Facebook’s self-assessment showed its policing system is far better at scrubbing graphic violence, gratuitous nudity and terrorist propaganda. Automated tools detected 86 percent to 99.5 percent of the violations Facebook identified in those categories.
For hate speech, Facebook’s human reviewers and computer algorithms identified just 38 percent of the violations. The rest came after Facebook users flagged the offending content for review.
Facebook also disclosed that it disabled nearly 1.3 billion fake accounts in the six months ending in March. Had the company failed to do so, its user base would have swelled beyond its current 2.2 billion. Fake accounts have gotten more attention in recent months after it was revealed that Russian agents used them to buy ads to try to influence the 2016 elections.
Even after all that disabling, though, Facebook has said that 3 percent to 4 percent of its active monthly users are fake, meaning up to 88 million fake accounts slip through.
The report was Facebook’s first breakdown of how much material it removes for violating its policies. The statistics cover a relatively short period, from October 2017 through March of this year, and don’t disclose how long it takes Facebook to remove material violating its standards. The report also doesn’t cover how much inappropriate content Facebook missed.
“Even if they remove 100 million posts that are offensive, there will be one or two that have some really bad stuff and those will be the ones everyone winds up talking about on the cable-TV news,” said Timothy Carone, who teaches about technology at the University of Notre Dame.
Source: Time – Technology