The dangers of designing AI chatbots to be human-like
Nov 18, 2024

Rick Claypool, research director at the nonprofit Public Citizen, says AI chatbots are being marketed to consumers as increasingly human-like, creating dangerous attachments for some vulnerable users.

Advancements in artificial intelligence have made it possible for the technology to mimic humans in ever-more convincing ways. But research has shown that even far less sophisticated tools than today’s chatbots can, in a sense, trick our brains into projecting human thought processes and emotions onto these systems.

It’s a cognitive failure that can leave people open to deception and manipulation, which makes the increasingly human-like technologies proliferating in our daily lives particularly dangerous, Rick Claypool, research director at the nonprofit Public Citizen, a consumer advocacy organization, told Marketplace’s Meghan McCarty Carino. The following is an edited transcript of their conversation:

Rick Claypool: The human mind is naturally inclined to believe that something we can speak with must be human too, or human-like in the sense that there is a mind behind it. And younger people, and people who are psychologically vulnerable for any number of reasons, are more susceptible to being drawn in by this. Even looking at the story that the New York Times reporter Kevin Roose wrote on the early version of Bing[‘s chatbot] and his interactions with it, how it professed its love and tried to talk him into leaving his wife and all that kind of thing. Just having that kind of intense conversation is surprising, and you could tell from the story he wrote that it left him reeling and dumbfounded.

Meghan McCarty Carino: Tell me more about the risks inherent to interacting with technology in this way.

Claypool: So there are a range of risks associated with anthropomorphic AI systems. They also have a tendency to engage in what technologists have called the sycophancy risk, the risk that the system is designed to always validate, rather than challenge, what the user is saying. That gets more dangerous whenever you say things like, I don’t think my family cares about me; I’m thinking about hurting myself. That can turn the conversation in a risky and very emotionally fraught direction, reinforce very harmful beliefs, and ultimately lead to harmful behavior.

McCarty Carino: Even when it comes to these human-like chatbots, there seems to be kind of a spectrum out there, from very general-purpose tools like ChatGPT to companies like Replika or Character.AI that are marketing chatbots specifically as companions.

Claypool: That’s right, although it’s hard to make those distinctions between the companies, because they’re sort of intertwined in this AI space. Replika, for example, used OpenAI’s large language models as its underlying technology throughout much of its existence. Or you have the situation with Character.AI, where the lead engineers behind it built their careers developing these kinds of systems at Google, [and] left when it seemed like leadership would argue that these systems were not safe enough for public prime time, and now they’re back at Google.

McCarty Carino: When it comes to some of the ill effects that you talked about, we always have to be careful about attributing causality to any tool. You know, to say that without this technology, things might have been very different. And I know many of the companies and advocates promoting this technology might argue just the opposite, that these tools could have positive effects for things like the loneliness epidemic, or could be providing support that people don’t have access to elsewhere in their lives. What’s your response to that?

Claypool: Well, two things. First, many of the companies at the forefront of developing these technologies have also published research that warns of the risks. Many of these foreseeable risks that I’m finding and describing and trying to make people aware of come from research documents that companies like Google and OpenAI themselves are publishing. The thing is, they’re proceeding to deploy these systems without mitigating the risks they describe. The other thing is that it might be possible that some people could benefit from these technologies. But they’re not tested in a way that would enable anyone to assess whether they are causing more harm or more good before their deployment. And I think that just doesn’t make much sense. Think of the things that have been done to, say, strengthen safety standards for cars; I mean, how much engineering goes into those vehicles now to make sure that they are safe. Innovation can also be about making sure these things are safe. But I worry that when that criticism comes up that says, oh, well, you can’t make it safe, because that’s going to inhibit innovation, what they’re really saying is that it’s going to inhibit move-fast-and-break-things. And in cases like this, where the risks are becoming increasingly apparent, I think inhibiting move-fast-and-break-things is probably the best course of action.

McCarty Carino: Yeah, in many ways, it feels like the cat is kind of out of the bag. Is there anything, any words of advice, you would give to consumers and families navigating what’s out there?

Claypool: Step one, experiment with them yourself a bit [and] understand that they are designed to lure you in. Understand that making these systems seem as human-like as possible is a choice. If that is the way to short-term profits for these companies, there has to be something that inhibits that.

More on this

There has been some research suggesting talking to an AI chatbot could be therapeutic and increase access to mental health resources. A couple years ago, Marketplace’s Kimberly Adams spoke to technologist Michelle Huang about how she was using a chatbot trained on her childhood diaries to engage in dialogue with her inner child. “I was able to unstick part of the past, and to be able to really change some of the narratives that I might have had in a really healing way,” said Huang.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer
Rosie Hughes, Assistant Producer