Econ Extra Credit with David Brancaccio

Gig workers pay a human price for being managed by algorithms

David Brancaccio and Rose Conlon Dec 23, 2021
From erroneous terminations to racially biased facial recognition software, a new report from the nonprofit advocacy group Worker Info Exchange details the hidden issues with popular gig economy apps. Adam Berry/Getty Images

Sixteen percent of Americans have earned money through online gig economy platforms like rideshare and delivery apps, according to a recent survey by the Pew Research Center. And while algorithms are able to efficiently and accurately connect millions of gig platform workers with those requiring their services, a new report from the U.K.-based nonprofit advocacy group Worker Info Exchange says app-based automated management has serious drawbacks for gig workers — including erroneous terminations, racially biased facial recognition technology and a lack of transparency about how worker data is used.

The report details the experiences of workers like Alexandru, a London-based Uber driver who was told his account would be banned for “fraudulent behavior” but who had little luck determining the cause of the warnings from the platform’s driver support workers — who themselves didn’t appear to understand why the algorithms had flagged the account.

“These algorithms are dependent upon machine learning, so oftentimes, even managers do not fully understand how they work,” said James Farrar, the founder and director of Worker Info Exchange, in an interview with Marketplace’s David Brancaccio.

Three months after Alexandru initially contacted Uber, he received an apology from the company saying the warnings had been sent in error. But the incident highlights the human costs of management by robot and the difficulty workers face in refuting software errors.

“It is an example of what is too often the case in the gig economy — where the machine flags some kind of behavior, managers are not able to explain it, and, oftentimes, these workers face termination because of it,” Farrar said.

Worker Info Exchange is petitioning gig work platforms like Uber to give workers more transparency about how algorithms are using their data to make decisions about work allocation and disciplinary action. And while the organization is focused on the gig economy, Farrar said questions about digital rights for workers are becoming more urgent across all industries as remote work and digital surveillance are becoming more common.

“One of the things that’s been most interesting about the gig economy is this nagging feeling that the way of working, the casualization, the surveillance, the digital control, is something that could easily spread, and is quickly spreading, to the rest of the economy,” Farrar said.

The following is an edited transcript of Farrar’s conversation with Brancaccio.

David Brancaccio: You’ve been talking to a lot of gig economy workers. When they dispute what the bot decides, what happens to them?

When an algorithm glitch leads to firing

James Farrar: These algorithms are dependent upon machine learning, so oftentimes, even managers do not fully understand how they work. So to explain this a little bit better, one of the drivers featured in the report is a man called Alexandru. He received a final warning from Uber that if he continued with so-called “fraudulent” behavior on his work account, he would be terminated. So when he challenged it and got somebody on the phone, the management team said that they didn’t understand why the machine had flagged him for a final review, and then eventually turned the tables on him and asked him, “What have you done wrong? Because you must have done something wrong.”

But as he started to exercise his digital rights a bit more and asked for his data and for the algorithmic transparency that we’re entitled to, at least in Europe under the European Union General Data Protection Regulation, they eventually explained it as a glitch. So he did get an apology in the end, but it is an example of what is too often the case in the gig economy — where the machine flags some kind of behavior, managers are not able to explain it, and, oftentimes, these workers face termination because of it.

Brancaccio: And it may not be all that easy to get the kind of meeting with the company that this worker was eventually able to get.

Farrar: Well, there will be no meeting. You may eventually get a phone call with somebody from a call center somewhere else in the world, but there will be no meeting with a manager to discuss your case. And we’ve seen many workers who have been suspended under investigation; they’re promised that their case is being reviewed by an expert team — and this could be for facial recognition failure — and that consultation never happens, but maybe three weeks later they get a message to say that they’ve been terminated. So it is lucky Alexandru did get that call, but in many cases, the call never comes. What does come is a termination.

Brancaccio: Let’s zoom into that facial recognition part of this. People may not fully understand — for instance, the car services want to know that the person driving is the person that has been assigned to be driving, and it’s often done through facial recognition. But we know facial recognition is not perfect.

Farrar: Yeah, that’s right. So Uber was the first to introduce it in London; it became a condition of their license renewal here. And the technology they introduced was Microsoft’s Face API. But the problem with the Microsoft technology is that there was an MIT study — co-written by Timnit Gebru, who was a data scientist at Microsoft at the time — that identified that the product had some serious issues. It is 97% accurate for white people; it has a 12% failure rate for people of color overall, and a 20% failure rate for women of color. And the workforce in London is 94% from minority communities, so it’s quite a diverse workforce that would be at risk from this type of technology not working properly.

Now, Microsoft has stopped selling that product to U.S. police forces after the American Civil Liberties Union asked it to last year. And Microsoft admits that there is a problem with the technology, that it requires close governance. But what we would say is that we haven’t had that kind of proper governance from Uber, or from the other companies, in the way the technology is being used in the U.K. And it’s not just Uber. The other major competitors have now followed suit and introduced the technology as well. So we’re quite concerned about how that technology is not only used, but governed.
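
To make those failure rates concrete, here is a minimal back-of-the-envelope sketch in Python using only the per-check rates quoted above. The number of checks per month and the assumption that checks fail independently are illustrative simplifications, not a description of any platform’s actual system.

```python
# Illustrative sketch: how per-check face-match failure rates compound over
# repeated identity checks. The rates below are those quoted in the interview;
# independence between checks is a simplifying assumption.

def prob_at_least_one_failure(per_check_failure_rate: float, num_checks: int) -> float:
    """Probability that at least one of `num_checks` verifications fails."""
    return 1.0 - (1.0 - per_check_failure_rate) ** num_checks

# Per-attempt failure rates cited in the interview.
groups = {
    "white subjects (3% failure)": 0.03,
    "people of color (12% failure)": 0.12,
    "women of color (20% failure)": 0.20,
}

CHECKS_PER_MONTH = 30  # hypothetical: one selfie check per working day

for label, rate in groups.items():
    p = prob_at_least_one_failure(rate, CHECKS_PER_MONTH)
    print(f"{label}: {p:.0%} chance of at least one flag in {CHECKS_PER_MONTH} checks")
```

Under these toy assumptions, drivers in the higher-error groups are almost certain to be wrongly flagged at least once a month. The exact numbers matter less than the pattern: small per-check disparities compound sharply when automated checks are repeated day after day.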

Brancaccio: Worker Info Exchange has a petition. What are a couple of key points that it’s trying to make the public and technology companies more aware of?

“The link between digital rights and worker rights”

Farrar: We’ve teamed up with Privacy International, an international NGO, and we’ve put a petition together with them to challenge the major companies in the gig economy to be transparent and provide workers full access to their data, so that they can inspect the data and understand how it’s being used and whether it’s correct. One key area is work allocation. All workers want to know — especially in the gig economy, where it’s a kind of piecework arrangement — that they are getting a fair share of the work available. But what we’re seeing is that these platforms have introduced algorithms that make automated decisions about work allocation, and those algorithms draw on profiles compiled on workers.

So for example, we found through litigation against Ola, which is the No. 2 rideshare platform in the world, that they were maintaining profiles on drivers with a “fraud probability score” — so they were predicting future “fraudulent” behavior of a worker. But when we challenged them, “What do you mean by fraud?” what they said was, “Your propensity not to obey the rules.” But then when we asked what the rules are, [they said] “We can’t tell you that” — because, of course, we would be in an employment relationship if we start telling you what the rules are that you need to abide by at work. And then also, if you think about it from a rational human point of view, if you believe that there’s fraud present in your business, as a manager, you would want to remove it — you wouldn’t want to use it to prioritize work dispatch. And that, to us, is the big giveaway: it isn’t really fraudulent behavior that [they’re] looking to identify; it’s actually misclassification, a euphemism for performance management. And that brings us to the link between digital rights and worker rights.
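
As a purely hypothetical illustration of the pattern Farrar describes, an opaque risk score quietly demoting workers in the job queue, consider this short Python sketch. The field names, the score, and the ranking formula are all invented for illustration; nothing here reflects Ola’s or any platform’s actual implementation.

```python
# Hypothetical sketch of score-weighted dispatch: an opaque per-driver
# "fraud probability score" demotes drivers in the job queue. All names
# and the ranking formula are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Driver:
    name: str
    distance_km: float        # distance to the pickup point
    fraud_probability: float  # opaque score between 0 and 1

def dispatch_rank(driver: Driver) -> float:
    """Lower is better: nearby drivers first, demoted by their risk score."""
    return driver.distance_km * (1.0 + driver.fraud_probability)

drivers = [
    Driver("A", distance_km=1.0, fraud_probability=0.60),
    Driver("B", distance_km=1.4, fraud_probability=0.05),
]

# Driver B wins the job despite being farther away, and driver A never
# sees the score that demoted them -- the transparency gap at issue.
for d in sorted(drivers, key=dispatch_rank):
    print(d.name, round(dispatch_rank(d), 2))
```

The worker experiences only the output, fewer or worse jobs, and never sees the score behind it, which is exactly the transparency gap the petition targets.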

Brancaccio: Just so we’re clear on this: The more precisely the platforms spell out the rules that govern the work, the more likely the law would be to treat those drivers as employees — and that’s not something the platforms want to have happen.

Farrar: That’s pretty much it in a nutshell. In order to access employment rights, you need to first prove you’re in an employment relationship. And that’s the bizarre game we’re in: “We’re not in a relationship. You’re really independent. You’re in control of your work.” But what we’re seeing is that there is control being exerted behind the digital curtain. What’s a little bit different is the data protection rights that we can access in Europe, and I think you’re beginning to see some expansion of these rights in the United States with the California Consumer Privacy Act. So I think it’s a sign that we need to go beyond data protection and privacy rights into understanding digital rights at work. And as people work more remotely, and are surveilled and managed digitally, I think there is an acceleration in the need to understand where digital rights fit within worker rights.

Brancaccio: And this is not just a gig economy worker thing. All of us may be, at some level, reporting to a robot.

Farrar: Absolutely. I think one of the things that’s been most interesting about the gig economy is this nagging feeling that the way of working, the casualization, the surveillance, the digital control, is something that could easily spread, and is quickly spreading, to the rest of the economy.
