Bias in facial recognition isn’t hard to discover, but it’s hard to get rid of
Mar 22, 2021


Joy Buolamwini of the MIT Media Lab created a mirror that would project aspirational images onto her face, but the software didn't recognize her until she wore a white mask.

Joy Buolamwini is a researcher at the MIT Media Lab who pioneered research into bias that’s built into artificial intelligence and facial recognition. And the way she came to this work is almost a little too on the nose. As a graduate student at MIT, she created a mirror that would project aspirational images onto her face, like a lion or tennis star Serena Williams.

But the facial-recognition software she installed wouldn’t work on her Black face, until she literally put on a white mask. Buolamwini is featured in a documentary called “Coded Bias,” airing tonight on PBS. She told me about one scene in which facial-recognition tech was installed at an apartment complex in the Brownsville neighborhood of Brooklyn, New York. The following is an edited transcript of our conversation.

Joy Buolamwini, a researcher at the MIT Media Lab (Photo courtesy of Buolamwini)

Joy Buolamwini: We actually had a tenants association reach out to us to say, “Look, there’s this landlord, they’re installing the system using facial recognition as an entry mechanism. The tenants do not want this. Can you support us? Can you help us understand a bit of the technology and also its limitations?” And what I found was that the tenants, they already had [the data we wanted]. And it was a question not just about the performance of these technologies — it seemed like every group where we’ve seen [these systems] struggle, that was the group that was predominant in that building. But it was also a question of agency and control to actually even have a voice and a choice to say, “Is this a system that we want?”

Molly Wood: You know, I feel like there’s this kind of double whammy with this technology, which is that there’s bias built in. And because it is fundamentally good for use in surveillance and punishment, it feels like it’s almost disproportionately being used in communities where it is least likely to be effective, or to cause the most problems, frankly.

Buolamwini: Absolutely, and here we’re seeing that if it doesn’t work, in terms of the technical aspect, you get misidentifications, you get false arrests and so forth. But even if it does work, you can still optimize these systems as tools of oppression. So putting surveillance tools into the hands of police departments, where we see time and time again the overcriminalization of communities of color, is not going to improve the situation; it just automates what has already been going on.

Wood: There is now this industry of algorithm auditors being brought in to tell companies if there is bias in their work. Can that be a solution, or are these problems too fundamental?

Buolamwini: I think there is absolutely a role for algorithmic auditing to play in the ecosystem when it comes to accountability and understanding the capabilities and limitations of AI systems. But what I often see when it comes to algorithmic auditing is that it’s done without context. So if you just audit an algorithm in isolation, or a product that’s using machine learning in isolation, you don’t necessarily understand how it will impact people in the real world. And so it’s a bit of a Catch-22, inasmuch as “Well, if we don’t know how it’s going to work on people in the real world, should we deploy it?” But this is why you want to have systems like algorithmic impact assessments, because it’s a question of looking at the entire design — is this even a technology we want?

If it has some benefits, what are the risks? And importantly, are we including the “excoded,” the people most likely to be harmed if the systems that are deployed go wrong? And I think that is an important place to include people. And beyond algorithmic auditing, we really have to think about redress. So you can audit the systems, you can do your best to try to minimize the bias, try to minimize the harms. But we have to also keep in mind that the systems are fallible, there will be mistakes in the real world. And so what happens when somebody is harmed? And this is part of ongoing work we’re doing with the Algorithmic Justice League to look at: What does harms redress look like in the context of an AI-powered world? Where do you go when you’re harmed? And we want that place to be the Algorithmic Justice League.

Wood: Tell me more about that phrase, the “excoded.” I have not heard that before.

Buolamwini: The excoded is a term I came up with as I was seeing the people who suffer most at the hands of algorithms of oppression or exploitation or discrimination. And so it’s a way of describing those who are already marginalized in society. And no one is immune, but the people who are already most marginalized bear the brunt of the failures of these systems.

Wood: We’re at a moment where companies that are attempting to improve ethics in AI or, at least, have the burden of improving ethics in AI are in some cases firing the prominent people of color they hired to work on these problems. Are we moving backwards?

Buolamwini: What we’re seeing is change can’t come from the inside alone because the work that you’re doing will fundamentally challenge the power of these companies, and it will also fundamentally challenge the bottom line. If you find that there are harmful effects or harmful bias in the systems that they’re creating, or even impacts on climate change, this then forces companies to reckon with those externalities and those impacts. And so it can be easier to try to get rid of the researchers who are pointing out these problems for which you hired them in the first place, instead of actually addressing those problems head on.

And in some ways, it does seem as though companies want it both ways. They want to be able to say, “We have a team. We’re looking at issues of AI ethics, and we’re concerned.” But when it really comes to looking at issues of justice, issues of who has power, issues of actually redistributing that power and having the ability to say no to harmful products, or harmful uses of AI, even if it means less profit, companies are not incentivized to do so by construction, nor should we expect them to do so. So when I see the firing of Dr. Timnit Gebru, for example, or then subsequently, Meg Mitchell — pioneers in this space when it comes to looking at the harms of algorithmic systems — it’s a major wake-up call that we cannot rely on change from within alone. We need the laws, we need the regulations, we need an external pressure, and that’s when companies respond. But the change will not come from within alone because the incentives are not aligned.

Related links: More insight from Molly Wood

Here’s a PBS write-up of the documentary, which includes a roundup of reviews, many of which note that the documentary actually manages to be hopeful on such a difficult and infuriating topic. And thanks to the work of Buolamwini, and other researchers and mathematicians, the problem of AI bias is now more widely acknowledged, and some companies and researchers are trying to find proactive ways to root it out.

There’s a good story at ZDNet about Deborah Raji, a research fellow at the Mozilla Foundation who studies algorithmic harms and works with Buolamwini’s organization, the Algorithmic Justice League. One thing Raji has been exploring is whether companies could use bug bounties — a system in which companies pay ethical hackers to find security vulnerabilities — to entice data scientists to try to detect instances of bias. Of course, as the article notes, the biggest barrier to such a solution is that there isn’t yet an accepted standard that defines algorithmic harm — let alone an actual method for detecting bias. Which tells you how new the discovery of this bias really is and how dangerous it is that AI technology is just rocketing out there into the world more and more every day, in cameras and smart speakers and resume scanners and search engine results and map directions and bank loan applications and medical decision-making and policing. So, more, please, and faster. White masks are not the answer.


The team

Molly Wood Host
Michael Lipkin Senior Producer
Stephanie Hughes Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer