Will the future of AI repeat past injustices?
Artificial intelligence has changed our world in major ways: autonomous vehicles, speech-recognition technology and algorithms that change what we see and hear on social media platforms. But the technology, and the data fueling it, are often powered by low-paid workers in developing countries, many of them in the Global South.
Some academics describe this as AI colonialism, suggesting that what goes into artificial intelligence is repeating an exploitative colonial history. I spoke with journalist Karen Hao, who recently published a series on AI colonialism in MIT Technology Review. The following is an edited transcript of our conversation.
Karen Hao: With artificial intelligence, what we now have is these really wealthy companies that have become the empires of people’s data, of computational resources. And they are going to other communities that don’t have the same financial resources, computational resources, taking their data, their precious voices, or their faces, or their body movements and then turning that into software that powers, essentially, our internet.
Kimberly Adams: You write about how deep-learning AI techniques rely on “ghost work.” Can you explain what that is?
Hao: One of the more famous ghost work examples is content moderation. There are tens of thousands of people labeling videos and saying, “This has violence, this has nudity, this has some other inappropriate content.” So that’s what ghost work is: this entire economy of people, primarily in the Global South, who do the work to make sure people in the Global North have a clean, efficient internet experience.
Adams: Is this a legacy of colonialism, or a new kind of colonialism?
Hao: I think it’s both. There’s a sort of path dependence that we now have, where because of our global history there are certain populations that now are sort of predisposed to do certain kinds of work in this AI development pipeline. But then we’re also seeing this history be codified into artificial intelligence, because these AI algorithms, when they learn on data, what they’re really doing is they’re learning on historical data. It’s essentially bringing our past into the future with us.
Adams: Your series focuses on the Global South. Can you give a few examples of how this is showing up there?
Hao: In the first story, we go to South Africa, and we look at the surveillance industry there, specifically AI surveillance technologies, so technologies that are built on the extraction of people’s movements and people’s faces to then reidentify them, to track them. South Africa, obviously, has this really awful history of apartheid. But what is also happening is that it’s now perpetuating a digital apartheid of sorts, because the people who are able to buy these surveillance technologies — South Africa has a very privatized surveillance industry — are the people who traditionally have wealth. And so it’s predominantly white people. And the people who are surveilled, and who don’t have the wealth to actually object to this kind of surveillance, are the people who didn’t have wealth before — they’re predominantly Black. So we look at stories like this all around the world, where different phenomena are playing out: local populations are subjected to these technologies because of the dispossession they experienced globally, with former colonizers suppressing their economic development and leaving them at the bottom of the food chain, and those same dynamics also potentially perpetuate hierarchies within the countries themselves.
Adams: How much awareness do you think there is of this dynamic among the general population?
Hao: I don’t think there’s even an awareness among artificial intelligence researchers, to be honest. With this series, what I was trying to do was really expand the surface of the conversation: Why does AI not fundamentally work for everyone? Only by identifying the root of the issue can we then eradicate it and actually reimagine an AI that does work for everyone.
Adams: Who would be the major players involved in shifting the balance, or who are perhaps already involved in perpetuating the system?
Hao: I think, in shifting the balance, really everyone can be involved. But the heavyweight AI players that are the ones that entrench these systems currently — those are the big tech companies, and all of the companies that support them. The smaller tech companies as well, they’re all sort of engaged in this stuff: Instacart, Uber, Lyft, the entire ecosystem of companies that use any form of algorithms and automation to increase the efficiency and convenience for people in the Global North. They are definitely part and parcel of this broader colonial dynamic that’s happening.
And so to shift it, it really requires not just people within these companies to push and pressure their leadership; it also requires civil society to push and pressure regulators to introduce regulation. It requires international coordination, too. In my second story, when I talk about Venezuela and the fact that a lot of Venezuelans are now being caught up in this ghost work labor, a researcher said to me: “It doesn’t really matter if Venezuelans rise up and try to resist this, because then the company is just going to move to the next poor population.” And so there has to be sort of an international consortium: not just tech employees, but regulators and civil society organizations, all coordinating to figure out what norms we should be establishing internationally to make sure that these technologies are more humane.
Related links: More insight from Kimberly Adams
Karen has an introductory piece for the series where she writes more about the concept of AI colonialism, a phrase that’s been bubbling up in academic circles for a few years now.
A 2020 paper by AI researcher and ethicist Abeba Birhane talks about how the trend is showing up in Africa. For example, she argues that fintech and digital lending, while hailed as improving access for the unbanked in many countries, often end up profiting off the poorest people. And some of the biggest companies profiting often have ownership ties back to wealthier countries in the Global North.
Finally, here’s an interview I did last year with AI ethicist Timnit Gebru, who runs the Distributed AI Research Institute. Part of the reason Gebru started that institute, she said at the time, was to make sure that, as AI becomes a bigger part of our lives, the people in marginalized groups and places aren’t left behind.