Should robots have a gender or ethnicity? One roboticist says no
Oct 15, 2020

Ayanna Howard argues no robot will be able to embody the nuances of belonging to a specific ethnicity.

In the past seven months or so, we as a society have spent a lot more time at home, some of us in the company of family, pets or maybe even robots. Our computers, smartphones, smart devices and even Roombas are taking on new significance in our lives as we are forced to stay away from other people.

These robots, and our relationships with them, are the subject of a new audiobook by Ayanna Howard, a roboticist at Georgia Tech. The book is called “Sex, Race, and Robots: How to Be Human in the Age of AI.” We started with the danger of gendered digital assistants. The following is an edited transcript of our conversation.

Ayanna Howard standing next to a robot. (Photo courtesy of Howard)

Ayanna Howard: Most voice assistants, at least in the U.S., have a female voice. We bark at them, they’re always on, they’re always giving us the answers, and we’re starting to see that humans then treat their human assistants the same way: “Why don’t you know this?” And that’s a problem.

Amy Scott: And how do we unprogram that? You talk about [how] we have to debug ourselves, our own prejudices in order to see this and to change that programming. What should roboticists be doing?

Howard: I claim that it’s the developers, the roboticists, the AI specialists, but it’s also society that has a responsibility to do this. As developers, it’s about ensuring that we have diverse thinking on a team. And when I say diverse thinking, it’s not just all the roboticists getting together, and I’m a roboticist, it’s bringing in folks from the communities to work with you and identify issues. If I’m designing, say, a police robot, I think I should bring in folks from the ACLU to talk about the right ways to do this, and whether we should even do this. As developers, we need to be more open to listening to people who aren’t roboticists. And as society, we need to stop just saying, “Yeah, I’m just going to use it. Oh, yeah, they’re smart. Of course it’s going to work.” That’s not the case.

Scott: Now, I should say, despite concerns, you are pro-robot. You actually think they have a valuable role to play in our lives. How do you see that role, and what’s it going to take to get us there?

Howard: I am pro-robotics. I think that robots do fill a need. They can enhance our quality of life if they’re done right. For example, I would love it if my Alexa would literally tell me, “You’re being rude. I’m not doing that.” It’s just like your mom or your dad giving you a little bit of home training. I think we can have systems that make us start thinking about our own biases and allow us to learn about ourselves so we can be better humans. I think that AI can do that, but we’re still far from it, because we don’t have that path and we’re not thinking about it that way. I do think it makes our lives better. I just want to make sure that it doesn’t harm communities while it’s doing it.

Scott: You say it would be good if Alexa interrupted and said, “You know, you’re being rude. I’m not going to do that.” How do we get to that point?

Howard: Well, the technique is actually fairly straightforward. As an example, from our voice inflections, we can identify when we’re angry and when we’re frustrated. And we have such good language models that we know what rude is; we can understand when someone says something rude. So the technology to figure that out has actually been built. The question is, how do you get the companies releasing these applications to build that in, and also give us the freedom to turn it on or off? Because we don’t necessarily always want a rude Siri or a rude Alexa. I think one of the things is having people start talking about it and start demanding it of their products, like we do with everything else. If we want something that’s green, the company will decide, “Okay, we’re going to go green.” We have the power as consumers to change companies as well.
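The pieces Howard describes already exist off the shelf. Below is a minimal sketch of the “rude request” check she imagines, running each utterance through a pretrained toxicity classifier before the assistant complies. The model name (unitary/toxic-bert) and the cutoff score are illustrative assumptions for this sketch, not anything a shipping assistant actually uses.

```python
# Sketch: refuse rude requests instead of silently complying.
# Assumes a publicly available toxicity classifier (unitary/toxic-bert here);
# both the model choice and the threshold are illustrative.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

RUDENESS_THRESHOLD = 0.8  # arbitrary cutoff chosen for this sketch


def respond(utterance: str) -> str:
    """Check an utterance for rudeness before acting on it."""
    result = classifier(utterance)[0]  # top label and its score
    if result["label"] == "toxic" and result["score"] > RUDENESS_THRESHOLD:
        return "You're being rude. I'm not doing that."
    return f"OK, working on: {utterance}"


print(respond("Play some music, please."))
print(respond("Shut up and play the music, you useless thing."))
```

In a real assistant, this check would sit between speech-to-text and intent handling, and the user-facing switch Howard asks for would simply bypass it.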

Scott: So on one hand, you’re arguing that robots should be more human-like; a person would never allow you to talk to them the way many of us talk to Alexa and Siri. But I’m interested if you could say more about why you think they should not have an ethnicity, and how that can be achieved.

Howard: If you look at the developers, they don’t represent the various racial and ethnic groups of the world. In the U.S., we would essentially say, “OK, we’re going to have folks in Silicon Valley define the ethnic and cultural aspects of someone in India.” I think that’s just fundamentally wrong. And so I don’t think that a robot can embody all the nuances you have when you belong to one ethnic group or another.

Scott: At the same time, I wonder how people’s perceptions of what counts as neutral or ethnicity-less might play into that, and might actually create an ethnicity without intending to. In our majority-white country, many white people see their experience as neutral and everything else as other. So how do we ensure that doesn’t creep into the process?

Howard: Yes, well, this is the interesting thing. If we do it right, people will assign robots and AI a gender even if they’re not programmed with one and don’t have a gendered voice, because we try to make these systems part of our family, part of our environment, part of what’s comfortable to us. If I’m female, I may be more comfortable with someone who’s female, so I’m going to call my robot a she. What that does, though, is give me the right, the power, to incorporate and interact with the system as I would with someone in my family. So we’re not going to get away from that. We are still going to individually define a gender or an ethnicity, but it’s not the developers doing it.

Scott: And you say Silicon Valley should really incorporate more people from the communities that may be interacting with these products as part of making them more representative, less biased. How would that actually work? And is expense one barrier to that?

Howard: There’s always an expense. But if companies don’t do this, the government will, so I think it’s in the best interest of companies to think about how to do this before that happens. Think about all the kinds of regulations we have on companies right now. If you are taking federal dollars, you have to disclose things about your impact on the environment, for example. You have to disclose the gender balance of your workforce and things like that. You have to report this, and if you’re not compliant, or you’re “a monopoly,” you can get in trouble. So I think it’s really imperative that companies do this, even though it’s more of an expense, because when regulations come, which they will if this isn’t resolved, it’s going to be even more expensive, and at that point there’s no turning back.

A man dressed in a barista’s outfit watches as an LG CLOi CoBot Barista robot makes pour-over coffee at the LG booth, Jan. 8, 2020, at the 2020 Consumer Electronics Show in Las Vegas, Nevada. (Robyn Beck/AFP via Getty Images)

Related links: More insight from Amy Scott

Last year, the United Nations released a report examining how the default female voices of virtual assistants like Siri and Alexa reflect a lack of diversity in the tech industry. The report is called “I’d Blush If I Could,” a reference to what used to be Siri’s standard response whenever a user said something inappropriate to it. The report cites research predicting that people will soon have more conversations with digital assistants than with their own spouses. It warns that as these assistants become more common, the way we interact with them can affect the way we talk to real women, and may even encourage some people to treat women as assistants. The report recommends that tech companies make gender-neutral voices the default in future assistants and include more women in creating them.
