Twitter Has Updated Its Policies to Now Include Links to ‘Hateful’ Content
Twitter has updated its policies so that links to hateful content now fall within its parameters for unacceptable activity.
As outlined by Twitter:
“At times, Twitter will take action to limit or prevent the spread of URL links to content outside Twitter. This is done by displaying a warning notice when the link is clicked, or by blocking the link so that it can’t be tweeted at all.”
Among the URLs that Twitter may block, it now includes:
“Content that promotes violence against, threatens or harasses other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”
This is in addition to existing restrictions on malicious or spammy links, and on links to terrorist content, illegal activity, and private information.
Twitter detects such violations via user reports, automated detection systems, and third-party reviewers, so it’s entirely possible that some of this content will still slip through. But Twitter now has more specific rules against posting links to hateful content, in addition to policing hate speech posted directly on its own platform.
The update marks the latest in Twitter’s increased action against hate speech, which appears to have ramped up since the platform added warning labels to two tweets from US President Donald Trump back in May. Those warnings related to comments about the voting process, another element of Twitter’s rules, but since then, Twitter seems to have pushed harder to address hate as well. That push could also be tied to the #BlackLivesMatter movement and to calls for all social platforms to address such content, no matter who shares it.
Earlier this month, Twitter announced a crackdown on the right-wing conspiracy group ‘QAnon’, impacting around 150,000 accounts, and it also revised its rules around hateful conduct in March.
Hate speech on social media more broadly has become a bigger focus of late, with a group of civil rights activists leading an ad boycott of Facebook over its decision not to take action against offensive comments made by Trump.
Twitter has more actively aligned itself with public calls for action in this respect. Yet this week, the platform was also caught up in another hate speech controversy, after British rapper Wiley tweeted offensive remarks about Jewish people, which were left up for 12 hours despite users reporting them.
Users criticized the platform’s slow response, which eventually led to a group of high-profile British users logging off Twitter for 48 hours in protest.
It’s a difficult area. In some respects, the price of running an open platform, where millions of people can have their say on anything, is that, at times, you’ll also end up hosting offensive, divisive, and potentially dangerous content. Even the best moderation systems in the world can’t catch everything, and Twitter, like all platforms, often relies on user reports before it can address such material. In this latest incident, 12 hours does seem excessive, but there are practical limits on how quickly Twitter can act, which will always be at least somewhat problematic.
But revising the rules around what qualifies as hate speech can make it easier for moderation teams to make a call. That process is still subject to human error, but by broadening the scope of what’s not acceptable, it should help Twitter work towards a more efficient, effective enforcement system.