Facebook is Launching a New Investigation into Potential Bias within its Algorithms
Facebook is launching a new investigation into potential bias within its algorithms, as it works to improve its systems in response to the #BlackLivesMatter movement, and in light of its recent civil rights audit.
As reported by The Wall Street Journal, both Facebook and Instagram will launch new examinations of their core algorithms.
As per WSJ:
“The newly formed ‘equity and inclusion team’ at Instagram will examine how Black, Hispanic and other minority users in the U.S. are affected by the company’s algorithms, including its machine-learning systems, and how those effects compare with white users, according to people familiar with the matter.”
Facebook will establish a similar team for its main app.
As noted, the move comes in response to rising calls for improved representation at all levels following the recent #BlackLivesMatter protests. In addition, Facebook’s own civil rights audit, conducted over two years and published earlier this month, found various concerns with the platform’s systems, including the potential for algorithmic bias.
As per the report:
“Because algorithms work behind the scenes, poorly designed, biased, or discriminatory algorithms can silently create disparities that go undetected for a long time unless systems are in place to assess them.”
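To illustrate what such an assessment system might look like in practice, here’s a minimal, hypothetical sketch of a disparity check on ad delivery. The group labels, log data, and 0.8 threshold are all invented for illustration; this is not Facebook’s actual auditing code.

```python
from collections import defaultdict

def delivery_rates(impressions):
    """Compute the share of eligible users in each group who were shown an ad.

    `impressions` is a list of (group, was_shown) tuples (hypothetical data).
    """
    shown = defaultdict(int)
    total = defaultdict(int)
    for group, was_shown in impressions:
        total[group] += 1
        shown[group] += int(was_shown)
    return {g: shown[g] / total[g] for g in total}

def disparity_ratio(rates):
    """Ratio of the lowest group delivery rate to the highest.

    A ratio near 1.0 suggests parity; a low ratio flags a disparity worth
    investigating. The 0.8 threshold below mirrors the informal
    'four-fifths rule' used in US employment-discrimination analysis.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical ad-delivery log: (user group, whether the ad was shown)
log = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 45 + [("group_b", False)] * 55
)

rates = delivery_rates(log)
ratio = disparity_ratio(rates)
print(rates)                           # {'group_a': 0.8, 'group_b': 0.45}
print(f"disparity ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparity detected - review targeting and delivery.")
```

The point of the audit’s observation is exactly this: without a check like the above running routinely, a gap of this kind would never surface on its own.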
Facebook’s algorithms have inadvertently facilitated discriminatory processes in the past. Back in 2016, a report from ProPublica showed that it was possible to use Facebook’s ‘ethnic affinities’ demographic segmentation to exclude specific racial groups from an ad’s reach, which, for housing and employment ads, violates federal law.
Facebook subsequently suspended the ability to target ads by excluding racial groups, though it also noted, at the time, that many such targeting options were generated by its machine-learning systems based on usage trends. As such, they were more a result of the algorithm surfacing options from the available data than of Facebook deliberately facilitating discrimination.
Facebook eventually removed all potentially discriminatory targeting options for housing, employment, and credit ads last year. But even then, experts noted that any algorithmically driven system remains susceptible to bias inherited from its input data set.
As per Pauline Kim, a professor of employment law at Washington University:
“It’s within the realm of possibility, depending on how the algorithm is constructed, that you could end up serving ads, inadvertently, to biased audiences.”
That’s because the system simply learns from the data it’s fed.
As a basic illustration, if your company has historically hired mostly white people, there’s a chance that an algorithm tasked with showing your job ads to likely candidates would serve those ads only to white users, based on the data it has available, as in the simplified sketch below.
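As a rough, hypothetical sketch of that feedback loop (the data and scoring logic here are invented for illustration, not Facebook’s actual systems), consider a naive targeting model that simply learns from past hiring outcomes:

```python
from collections import Counter

# Hypothetical historical hiring data: the demographic makeup of past hires.
past_hires = ["white"] * 90 + ["black"] * 5 + ["hispanic"] * 5

def naive_targeting_score(candidate_group, history):
    """Score a candidate group by how often it appears among past hires.

    A system that optimizes purely for 'similarity to past successes'
    assigns low scores to under-represented groups, so it serves them
    fewer job ads, reinforcing the original skew.
    """
    counts = Counter(history)
    return counts[candidate_group] / len(history)

for group in ["white", "black", "hispanic"]:
    score = naive_targeting_score(group, past_hires)
    print(f"{group}: targeting score {score:.2f}")
# white: 0.90, black: 0.05, hispanic: 0.05
# The skew here comes entirely from the input data, not from any explicit
# rule about race - which is why audits of training data and outcomes
# are needed to catch it.
```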
Essentially, the concern is that any algorithm built on real-world data will reflect existing real-world biases, and Facebook won’t be able to detect that bias in its processes without conducting a full examination of its systems.
This is a significant concern, and it’s good to see Facebook looking to address it, particularly given that it was a key focus of the recent civil rights audit.
If Facebook can improve its systems, and weed out algorithmic bias, that could go a long way to improving equality, while the lessons learned may also help other platforms address the same in their own systems.
The move may also help Facebook repair relations with civil rights groups, who helped lead a boycott of Facebook ads in July over the company’s refusal to address hate speech posted to the network by US President Donald Trump.
There’s a long way to go on this front, but addressing key elements like algorithmic bias could help Facebook show that it’s taking its responsibilities seriously.