Why the racism in facial recognition software probably can’t be fixed
Research has repeatedly shown that facial recognition software is less accurate at identifying people of color. It’s also well documented that police departments around the country use facial recognition tools to identify suspects and make arrests. And now we know of what may be the first confirmed wrongful arrest resulting from a mistaken identification by the software. The New York Times reported last week that Robert Williams, a Black man, was wrongfully arrested in Detroit in January.
I spoke with Joy Buolamwini, who has been researching this topic for years as a computer scientist based at the MIT Media Lab and head of the nonprofit Algorithmic Justice League. She said that, like racism, algorithmic bias is systemic. The following is an edited transcript of our conversation.
Joy Buolamwini: This is not a case of one bad algorithm. What we’re seeing is a reflection of systemic racism in automated tools like artificial intelligence. Right now, we’re gambling with people’s faces, and when we’re gambling with people’s faces, we’re gambling with democracy.
Molly Wood: It does seem like awareness and policy around facial recognition are starting to catch up, if slowly, but what more needs to happen? Is it the role of companies alone to not offer the software, or is it the role of governments to stop police departments from using this technology?
Buolamwini: We can’t leave it to companies to self-regulate. While three companies have come out and stepped back from facial recognition technology, you still have major players supplying it to law enforcement. We absolutely need lawmakers. We don’t have to accept this narrative [of] “the technology is already out there. There’s nothing we can do.” We have a voice, we have a choice. What we are pushing for with the Algorithmic Justice League is a federal law that bans face surveillance across the nation. I am encouraged by the latest bill I’ve seen from Congress, so I would highly encourage support of that bill so that at least across the country, there are some base-level protections.
Wood: You wrote this post that ended with the line, “Racial justice requires algorithmic justice.” Given that humans write the algorithms — this is the constant question — how can that be accomplished? Algorithmic justice, I mean.
Buolamwini: When we’re talking about algorithmic justice, it’s really this question about having a choice, having a voice and having accountability. What we’re seeing with this Wild, Wild West approach to deploying facial recognition technology for surveillance is the lack of choice and lack of consent. So, when we’re thinking about algorithmic justice, it is this question about who has power and who has a voice, and it needs to be the voice of the people. When we’re talking about racial justice, the reason racial justice requires algorithmic justice is that, increasingly, we have AI tools being used to determine who gets hired, who gets fired, who gets stopped or arrested by the police, what kind of medical treatment you might receive, where your children can go to school. Because AI increasingly governs and serves as the gatekeeper to economic and educational opportunities, we can’t get to racial justice without thinking about algorithmic justice.
Wood: Technology companies are so sure that there must be a way to make this technology work, and make it work largely for good. I wonder if you think the events of the past month have really changed their thinking about whether to keep working on this technology, or whether they might actually start to see the real limitations that you have seen for so long?
Buolamwini: Companies respond to the pressure of the people. I want to emphasize that the announcements that IBM would stop selling facial recognition technology, that Amazon would impose a one-year moratorium and that Microsoft would not sell to law enforcement came after the coldblooded murder of George Floyd, protests in the streets and years of employee activism. We have a responsibility to think about how we create equitable and accountable systems, and sometimes what that means is you don’t create the tool.
Related links: More insight from Molly Wood
Speaking of tools we should not create, researchers at Harrisburg University said last week that they had created facial recognition software that can predict crime. We already know what could go wrong.
The Harrisburg team hasn’t actually published its research, but as you might imagine, the announcement sparked an immediate backlash. The research team includes at least one former police officer, who wrote that the software could identify “the criminality of a person from their facial image.” The team claimed, much as the CEO of Clearview AI has, that the technology operates with “no racial bias.” About 1,700 academics quickly signed an open letter pointing out that years of research show that claim simply cannot be true. Moreover, the very idea that facial recognition software can predict a tendency toward crime has been debunked.
The Harrisburg research was submitted to the publisher Springer Nature, which said it had already rejected the paper by the time the open letter was sent. The idea of predictive policing, even when it’s not based on facial recognition, still holds plenty of appeal for law enforcement — the notion that AI can somehow magically determine where and when crime might occur, and that this could somehow happen separately from all the social, economic and racial issues that complicate the data. Last week, Santa Cruz, California, became the first U.S. city to ban predictive policing entirely.