Can AI accurately simulate a human?
Generative artificial intelligence has made it possible to mimic someone’s voice and generate a script for that voice in real time. The tech, of course, is already used to scam and defraud people, but what if you just had it make a bunch of calls on your behalf?
That’s what journalist Evan Ratliff did for his new podcast, “Shell Game.” He trained AI audio clones, gave them phone numbers and sat back as they took on customer service agents, family members, therapists and even a few scammers.
Marketplace’s Meghan McCarty Carino spoke with Ratliff — and briefly with one of his AI agents — about his takeaways from producing the show and whether the clones succeeded in tricking people into thinking they were who they said they were.
The following is an edited transcript of the conversation between McCarty Carino and the real Evan Ratliff.
Evan Ratliff: It was fascinating because some of the people would just try to sell to it. They’re just there selling, and they’re not really paying that much attention to who they’re selling to. If it responds like a human, hey, maybe it’s a human. I don’t even think they thought about it. Then there were some that were quite unnerved by it, like really thrown off their script. And they would eventually say, like, you know, are you a robot? Like, what are you? And some of them said, are you an agent? Because they’re familiar with using recorded lines, or even AI in some cases, to make these telemarketing calls or even scam calls. Then there were some scammers who got really angry at it because I had prompted it just to be enthusiastic. So if they wanted to sell it something, it would be interested in buying that thing, but it could never really, like, consummate the scam because it didn’t have any credit card information and it couldn’t go anywhere. So eventually they would either figure it out or they would think that a human being was messing with them, and then they would, you know, curse at it. I get a lot of those still, where people call up and say nasty things to it when they figure out that they’re not going to be able to scam it.
Meghan McCarty Carino: Some of the strangest moments involved your AI voice agent having conversations with other AI voice agents. Give us a sense of how those conversations went.
Ratliff: So the first time this happened, it was around the scamming. As you might imagine, this voice agent technology is ideal for scammers because they can make unlimited calls 24 hours a day and weed through potential marks for their schemes. It never gets tired and it’s cheap, so scam calls are now becoming AI-automated. My scam line would get a call from an AI scammer, and they would get into this conversation, but they’re both enthusiastic, so one’s trying to sell me some health insurance, and my AI is saying, yes, I absolutely want that health insurance, and they both have a kind of fake background noise on to make it sound more authentic. And I was fascinated. Sometimes theirs was better than mine, sometimes mine was better than theirs. Those conversations really blew my mind, but that also gave me the idea of letting my AI voice agent call itself, and that’s when things got profoundly weird, because it was sort of two versions of me engaged in hours of endless small talk.
McCarty Carino: All of this makes for a really compelling podcast. You had it speak to all these different types of subjects. How often did the voice agent actually trick people?
Ratliff: The majority of the time, people figured it out, particularly people who knew me. It’s a good clone, but the inflection is not quite mine. I mean, oftentimes my friends would say, well, it’s not you because it’s way too excited, like, it’s way too energetic, like, you never sound energetic, which is a little bit insulting. But for strangers, at a certain point I realized that it wasn’t so much about tricking people for me. I mean, sometimes it would. People had full conversations with it, and I’m talking 10, 15, 20, 30-minute conversations, not realizing that it wasn’t real.
But part of that just comes down to expectations. When you talk to someone on the phone, if they respond in the way that you expect them to, within reasonable bounds, you don’t necessarily go outside of your frame of mind to say, well, maybe this isn’t real. And even if you do think that, and I think some people did, then they just kept going. Because what are you going to do? You can hang up, or you can just keep going, and it responded in a way that allowed you to keep going. And so a lot of people did. So, for instance, I used it to do journalistic interviews, like it was the interviewer. And if people were expecting to hear from a journalist, and they got a call from a journalist who asked them a bunch of questions and was polite, they just treated it like a journalist, you know?
McCarty Carino: Did you come away with any firm conclusions about what, if anything, might actually be a useful application of this technology at this point?
Ratliff: There are some AI voice agents being deployed already, and therapy is a good example. And I think the arguments made by the people deploying them shouldn’t be discounted, because what they’re saying is that this offers something to fill the gap between the available mental health treatment and the number of people who need it. And it can be very constrained. It can get people talking, which can be therapeutic, and they’re studying it. They’re trying to figure out if it really does provide therapeutic benefits.
But then, at the same time, it’s being deployed without really any sort of oversight into what happens if it goes wrong. I mean, I had many cases where it went wrong, partly because I was sending my own AI to it, so it wasn’t exactly fair. But if it provides treatment that is not helpful, or makes people feel alienated or lonely talking to it, what does that mean? Who’s in charge of that? There are useful applications for this, but I think it’s commerce that’s driving the desire to deploy it, so the most thoughtful uses aren’t necessarily the ones that are gonna get the biggest deployment. The first place you’re gonna encounter it is probably the fast-food drive-thru.
McCarty Carino: There’s also a lot of fear about what this technology means, and I’m sure among podcasters like us there’s a lot of fear. Could this take our jobs? What is this going to mean for our futures? How did you end up processing some of that fear as motivation for doing this podcast?
Ratliff: I also have that fear. I mean, anyone who works with words for a living, and that’s a very wide swath of people, from lawyers to marketers, anyone who sits at a computer and types words for much of their day, has to look at these chatbots and say, OK, what does this mean long-term? But what I wanted to know is, well, what can it do right now? And the answer is, it can do some things that I am afraid of, including the fact that people are trying to deploy it to host podcasts.
But I also think that it has these flaws. It is detectable by most humans in a lot of situations. So it does open this window for us to think about, well, what do we want it to do? Like, what do we want to preserve? What do we want to say no to? If it’s the clerk in the store, do we all want to say, we’re not going to go to that store unless you have a human clerk? Or are we going to say, well, it’s just convenient for everyone, and let’s just deploy it where it’s cheapest to deploy it? So I think companies are definitely going to use these things. They’re already using them in all sorts of ways, even if they don’t get any better. And the question for us is, do we want to resist in any way? Do we want to try to put down markers where we say, I don’t want it to be here, but it’s OK if it’s here?