
Facebook Publishes Evolving Efforts to Combat Hate Speech in New Report

Facebook’s facilitation and amplification of both hate speech and misinformation has been a key topic of focus over the past four years, but even more so within the chaos of 2020. COVID-19, the #BlackLivesMatter movement, and the US Presidential election have all, in their own ways, underlined the role that Facebook can play in amplifying dangerous narratives, and sowing division as a result.

Indeed, Facebook is currently facing tough questions once again over the role it’s playing in spreading false claims about the election results, while in other regions, misinformation campaigns on the platform have led to violent conflicts and threatened democratic processes.

There’s little doubt that Facebook’s massive reach plays a significant role on both fronts – but Facebook is keen to point out that it is working to address such claims, and that it is evolving its processes to tackle these concerns.

That’s the message of Facebook’s latest update on content removals as part of its Community Standards Enforcement Report, which, for the first time, includes a measure of the prevalence of hate speech across the platform.

As explained by Facebook:

“Prevalence estimates the percentage of times people see violating content on our platform. We calculate hate speech prevalence by selecting a sample of content seen on Facebook and then labeling how much of it violates our hate speech policies. Based on this methodology, we estimated the prevalence of hate speech from July 2020 to September 2020 was 0.10% to 0.11%. In other words, out of every 10,000 views of content on Facebook, 10 to 11 of them included hate speech.”

Which doesn’t sound like very much. Just 0.10% of Facebook content views include some form of hate speech. That’s pretty good, right?

The important consideration here is scale – as Facebook notes, 10 out of 10,000 isn’t much. But Facebook has more than 2.7 billion active users, and even if every single one of them saw just one post per month, that would still mean around 2.7 million people being exposed to some hate speech content. And one view per user is nowhere near indicative of actual view counts on Facebook – so while the prevalence data here is positive, at Facebook’s scale, it still means a lot of people are being exposed to hate speech on the platform.
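To put that percentage in perspective, here’s a rough back-of-the-envelope calculation – a sketch only, using the 2.7 billion user figure cited above and a deliberately conservative assumption of one content view per user per month, neither of which reflects Facebook’s actual view counts:

```python
# Illustrative arithmetic only; user count, views-per-user, and prevalence range
# are assumptions taken from the article, not Facebook's internal data.
monthly_active_users = 2_700_000_000          # ~2.7 billion, per the article
views_per_user = 1                            # deliberately conservative assumption
prevalence_low, prevalence_high = 0.0010, 0.0011  # 0.10% to 0.11% of content views

total_views = monthly_active_users * views_per_user
low = total_views * prevalence_low
high = total_views * prevalence_high

print(f"{low:,.0f} to {high:,.0f} hate speech views per month")
# -> 2,700,000 to 2,970,000 views, even at just one content view per user
```

Multiply that by anything like a realistic number of views per user, and the absolute exposure figure grows accordingly.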

But Facebook is detecting and removing more of it – according to Facebook, its automated detection tools are now picking up far more instances of hate speech before anybody sees them.

“When we first began reporting our metrics for hate speech, in Q4 of 2017, our proactive detection rate was 23.6%. This means that of the hate speech we removed, 23.6% of it was found before a user reported it to us. Today we proactively detect about 95% of hate speech content we remove.”

Facebook has also ramped up its removals in recent months, a period which includes its decision to remove QAnon-related content.

Indeed, Facebook notes that it has upped its enforcement action against more hate-related groups:

“We’ve taken steps to combat white nationalism and white separatism; introduced new rules on content calling for violence against migrants; banned Holocaust denial; and updated our policies to account for certain kinds of implicit hate speech, such as content depicting blackface, or stereotypes about Jewish people controlling the world.”

This is all positive, but concerns remain, again because of Facebook’s scale. When you control a platform that can give anybody reach to potentially billions of people, that comes with a significant onus of responsibility, and while Facebook is looking to improve its enforcement action now, in many cases it’s still acting too late.

In the case of QAnon, Facebook was repeatedly warned of the potential dangers posed by QAnon-related movements, with examples of real-world violence stemming from QAnon discussions occurring as far back as 2016. Despite this, Facebook allowed QAnon groups to thrive on its platform – an internal investigation conducted by Facebook this year, and leaked by NBC News, found that the platform had provided a home for thousands of QAnon groups and Pages, with millions of members and followers.

Facebook started to change its approach to QAnon back in August, but by that time, the damage was largely done.

As noted by former President Barack Obama in a recent interview with The Atlantic:

“Now you have a situation in which large swaths of the country genuinely believe that the Democratic Party is a front for a pedophile ring. This stuff takes root. I was talking to a volunteer who was going door-to-door in Philadelphia in low-income African American communities and was getting questions about QAnon conspiracy theories. The fact is that there is still a large portion of the country that was taken in by a carnival barker.”

Obama labeled this ‘the single biggest threat to our democracy’ – and Facebook, again, was a key facilitator of that spread for a long time.

Of course, there’s not much Facebook can do about this in retrospect, so the fact that it’s taking more action now is positive, as is the launch of its independent Oversight Board to assist with content rulings. But significant concerns still linger with regard to how Facebook, and social media more broadly, enables the spread of hate groups.

That leads to the related issue of misinformation. As noted, aside from anti-vaxxer conspiracies and questions around the integrity of the US election results, Facebook is also facilitating the spread of dangerous misinformation in other regions around the world, with smaller nations particularly susceptible to the spread of lies via The Social Network.

As reported by Vice, a recent, politically-motivated disinformation campaign orchestrated on Facebook has led to civil unrest in Ethiopia, with rival factions using the platform’s reach to smear opponents and sow division.

The most recent incident was triggered by the murder of a popular singer, who had been falsely touted as a political activist. His death sparked violent clashes in the nation.

As per Vice:

“This bloodshed was supercharged by the almost-instant and widespread sharing of hate speech and incitement to violence on Facebook, which whipped up people’s anger. Mobs destroyed and burned property. They lynched, beheaded, and dismembered their victims.” 

Attributing the entirety of the unrest to Facebook may not be fair, as the seeds of conflict were already present. But the platform has helped to fuel the conflict by giving agitators a ready means of spreading false messaging.

Facebook is also working to combat this – in a separate overview, it has detailed how it’s improving its AI systems to better detect false information, and in particular, to weed out repeated versions of the same false reports in order to slow their spread.

As explained by Facebook:

“We’re now introducing new AI systems to automatically detect new variations of content that independent fact-checkers have already debunked. Once AI has spotted these new variations, we can flag them in turn to our fact-checking partners for their review.”
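Conceptually, that “new variations of already-debunked content” step is a similarity search over known false claims. The sketch below is purely illustrative – it uses invented example claims, simple TF-IDF text similarity, and an arbitrary threshold, and is not a description of Facebook’s actual systems, which rely on far more sophisticated models:

```python
# Minimal sketch of flagging near-duplicate variants of debunked claims.
# The claims, threshold, and similarity method are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked_claims = [
    "example debunked claim about a miracle cure",
    "example debunked claim about rigged vote counts",
]

def flag_variations(new_posts, debunked=debunked_claims, threshold=0.6):
    """Return posts similar enough to a known debunked claim to warrant fact-checker review."""
    vectorizer = TfidfVectorizer().fit(debunked + new_posts)
    debunked_vecs = vectorizer.transform(debunked)
    post_vecs = vectorizer.transform(new_posts)
    similarities = cosine_similarity(post_vecs, debunked_vecs)
    # Flag a post if it closely matches *any* previously debunked claim.
    return [post for post, sims in zip(new_posts, similarities) if sims.max() >= threshold]
```

The value of this kind of matching is that a claim only has to be fact-checked once for its reworded copies to be caught automatically, rather than each variant requiring a fresh manual review.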

It’s also adding new processes to detect deepfakes – so Facebook is improving, and it is doing more to address these concerns.

But then again, as a group of Facebook moderators noted in a letter to the company, which was released to the media this week, Facebook’s AI detection tools are still “years away from achieving the necessary level of sophistication to moderate content automatically.”

On one hand, Facebook says it’s getting much better, but on the other, the people working on the front lines say those tools are still a long way off.

Does that mean we can trust that Facebook is doing enough to stop the spread of hate speech and misinformation, or does more need to be done, outside of Facebook, to push for increased regulation and enforcement of the same?

It seems, despite Facebook’s assurances, the problems are still significant.
