
YouTube Shares New Measures to Combat Hate Speech on its Platform

While Facebook takes the brunt of criticism around the spread of hate speech and misinformation online, YouTube is also a key source of both, with many people now relying on the platform for news updates.

Indeed, a study conducted earlier this year found that over a quarter of the most viewed YouTube videos on COVID-19 contained misleading information, reaching millions of viewers worldwide. Reports have also suggested that YouTube often inadvertently promotes extremist content, and can further indoctrinate users through its recommendations, links, and video comments.

These are problems that all social platforms need to address, but YouTube and Facebook have the broadest reach, which puts them in the spotlight to do more to tackle such concerns.

And this week, YouTube has provided some insight into how it’s doing just that, outlining its latest measures to tackle hate speech on the platform and provide a more welcoming environment for all users.

As per YouTube:

“Since its early days, YouTube has always strived to be a place where creators of all backgrounds can have a voice, find a community and even build a business, including many that may be underrepresented or might not otherwise have had a platform. We’re committed to supporting the diverse creator communities on YouTube and their continued success.”

YouTube is adding a range of new measures, some of which it’s been working on for a while, along with some newer ideas for improved detection.

First off, YouTube’s testing a new filter in YouTube Studio that will screen out potentially inappropriate and hurtful comments that have been automatically held for review.

YouTube first announced this back in October. Now, YouTube’s system will detect potentially harmful comments and move them to the ‘Held for Review’ area, and if the channel owner takes no action on them within 60 days, they’ll be removed automatically.

That means that channel owners won’t have to look at those comments if they choose not to, which could save people from dealing with potentially abusive or offensive remarks.

In addition to this, YouTube’s also adding new warnings when users go to post comments which its systems detect may be offensive to others, ‘giving them the option to reflect before posting’.

Instagram implemented similar warnings last year, and LinkedIn recently added its own variation. Those small prompts can have a big impact, and can help address potential issues where the commenter may not have even considered that what they’re posting could be inappropriate.

These warnings have appeared for some users over the past couple of months.
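YouTube hasn’t published the internals of this system, but the general flow is easy to picture: score the draft comment, and if it looks potentially offensive, ask the user to reconsider once before posting. The sketch below is a purely hypothetical illustration of that flow; the function names, the word-list scoring stand-in, and the 0.3 threshold are all assumptions, not YouTube’s actual implementation.

```python
# Hypothetical sketch of a "reflect before posting" prompt flow.
# None of this reflects YouTube's real system; the scorer is a toy
# stand-in for what would be a learned toxicity classifier.

def offensiveness_score(comment: str) -> float:
    """Stand-in for a toxicity model; returns a score in [0, 1].
    Here: the fraction of words matching a placeholder blocklist."""
    blocklist = {"idiot", "trash", "loser"}
    words = comment.lower().split()
    return sum(w in blocklist for w in words) / max(len(words), 1)

def submit_comment(comment: str, confirmed: bool = False) -> str:
    """Post immediately if the comment looks fine; otherwise ask the
    user to reconsider once. They can still post after confirming."""
    if offensiveness_score(comment) > 0.3 and not confirmed:
        return "PROMPT: This may be offensive to others. Edit or post anyway?"
    return "POSTED"

print(submit_comment("you are trash"))           # triggers the prompt
print(submit_comment("you are trash", True))     # user chose to post anyway
print(submit_comment("great video, thanks!"))    # posts immediately
```

The key design point is that the prompt is a nudge, not a block: the user keeps the final say, which is why these systems can be deployed with relatively imprecise classifiers.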

YouTube’s also looking to take a more technology-led approach to hate speech by matching the content of the video to the comments posted, which could help to better detect problematic remarks.
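YouTube hasn’t shared how this matching works, but one way to think about it is that a comment is judged relative to the video it sits under, not in isolation. The toy sketch below is an assumption-heavy illustration of that idea: it flags comments that contain placeholder terms *and* bear little topical relation to the video’s transcript, using simple bag-of-words cosine similarity. Everything here (function names, the term list, the 0.2 relevance cutoff) is hypothetical.

```python
# Toy illustration of context-aware comment screening. YouTube has not
# published its approach; this just demonstrates the general concept of
# weighing a comment against the video's own content.
import math
from collections import Counter

# Placeholder term list; a real system would use a learned classifier.
FLAGGED_TERMS = {"hateword1", "hateword2"}

def bag_of_words(text: str) -> Counter:
    """Lowercase the text, split on whitespace, and count terms."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def should_hold_for_review(comment: str, transcript: str) -> bool:
    """Hold a comment if it contains flagged terms, weighting the call
    by how unrelated the comment is to the video's content: an off-topic
    comment with flagged terms is more likely abuse than, say, a news
    video's comments quoting the language being reported on."""
    if not set(comment.lower().split()) & FLAGGED_TERMS:
        return False
    relevance = cosine_similarity(bag_of_words(comment), bag_of_words(transcript))
    return relevance < 0.2

# Example: a flagged term in an off-topic comment gets held for review.
print(should_hold_for_review("hateword1 get out", "cooking pasta at home"))
```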

YouTube says its work on this front is already having an impact:

“Since early 2019, we’ve increased the number of daily hate speech comment removals by 46x. And in the last quarter, of the more than 1.8 million channels we terminated for violating our policies, more than 54,000 terminations were for hate speech. This is the most hate speech terminations in a single quarter and 3x more than the previous high from Q2 2019 when we updated our hate speech policy.”

As another element on this front, YouTube’s also taking new steps to address representation, and ensure a greater level of equality for all creators in the app.

“Starting in 2021, YouTube will ask creators on a voluntary basis to provide us with their gender, sexual orientation, race, and ethnicity. We’ll then look closely at how content from different communities is treated in our search and discovery and monetization systems. We’ll also be looking for possible patterns of hate, harassment, and discrimination that may affect some communities more than others.”

The data will give YouTube more insight, enabling it to adjust its systems to address any imbalances.

“If we find any issues in our systems that impact specific communities, we’re committed to working to fix them. And we’ll continue to share our progress on these efforts with you.”

YouTube notes that the information gathered won’t be used for any other purpose, such as ad targeting, but you can imagine that some creators will still be hesitant to provide such details due to concerns around data privacy. Even so, if enough people take part, it seems like a good way to address these concerns and ensure its systems are providing equal opportunity to all.

As noted, all platforms need to deal with hate speech, and these issues have always been present, but 2020 feels like a bit of a turning point for awareness of them, with the #BlackLivesMatter protests opening many people’s eyes to social inequalities they may not have known existed.

These measures will help YouTube move towards addressing such issues, and while it still has a way to go in fixing its various concerns, this seems like a good starting point in evolving its platform.
