AI amplifies scam calls and other deceptions

David Brancaccio, Alex Schroeder, and Erika Soderstrom Jul 14, 2023
People have received calls from what sounds like a relative asking for money, but the voice was generated by artificial intelligence in a fraudulent scheme. Tero Vesalainen/Getty Images Plus

Among the many things we're learning generative artificial intelligence can do is copy voices and likenesses, so that people can be made to appear to say or do just about anything. You've probably heard about "deepfake" videos. Something very similar is happening with voices, and the technologies that clone images and voices can create almost perfect replicas of real people.

Exhibit A:

Wait. David, is that you? I guess we did say “almost perfect.”

But seriously, this stuff is so good, it’s scary. And here’s why you should be very afraid: AI-generated voices are helping scammers get even better at stealing people’s money.

Wasim Khaled is CEO and co-founder of Blackbird.AI, which has been taking a closer look at the risk these AI-enabled scams pose. He delved into the details with “Marketplace Morning Report” host David Brancaccio. The following is an edited transcript of their conversation.

David Brancaccio: Before we get into how AI can make fraud worse, give me a sense of the work that your company, Blackbird.AI, does.

Wasim Khaled: Sure. Blackbird has essentially built an AI platform that helps organizations, both public and private, understand narrative warfare and narrative conflict. And you can really think of it as narrative and risk intelligence. And that includes things like misinformation, disinformation, generative AI, etc. But the wider component really is how do we foster trust, safety and integrity across the information ecosystem, which is more our core mission.

Brancaccio: I have reported on efforts to scam people. Often the phone rings, it’s a voice on the phone purporting to be a nephew stuck in jail somewhere, you’ve got to send money now. Artificial intelligence can actually make that kind of fraud worse?

Khaled: I think infinitely worse. If you’re looking at this space, we’ve seen, what, $2.6 billion in impersonation scams reported by the [Federal Trade Commission] in 2022 without generative AI. So, just one example of what generative AI could do to empower these threat actors: They could go on social media, get a couple of seconds of someone’s voice, or really any other audio source, and be able to generate entire scripts of whatever they want that person to say. And so, yeah, we’ve seen recently children calling parents asking for a wire transfer for a ransom, things of that nature. And this has become infinitely easier to do, at very low cost very quickly, with no technical experience needed. And so absolutely, I think when we look at those numbers at the end of this year, it’s going to be pretty alarming.

Brancaccio: So AI can do what they used to do in old movies, get what they called a voiceprint, based on, what, a couple of seconds of the real voice?

Khaled: That’s right. And these tools can just be low-cost or free websites that almost anyone can go and search for and just go to work. Much like generative AI is the supercharger of any kind of work, the same applies to the work of people who are running these scams and grifts for financial or other types of gain.

“What I tell anyone today is to have some sort of a security phrase with family members, at least, that only you would know.”

Wasim Khaled, CEO and co-founder of Blackbird.AI

Brancaccio: So people listening should be aware. And also we’ve done reporting that when one of these situations happens, you get anxious. The person who picks up the phone gets worried. And when you’re anxious, you may not think clearly. And we have cases where people actually really do send the money thinking it’s their nephew or their son or daughter in jeopardy. Other than being alert for the possibility of fraud, is there anything anyone can do?

Khaled: Well, what I tell anyone today is to have some sort of a security phrase with family members, at least, that only you would know. And this is a phrase that if you have any doubt, [ask] “What is the secret phrase?” And if they can’t tell you, then you better hang up.

At the end of the day, I know that sounds like low-tech advice. But when I'm talking to the widest audience possible at the individual level, I think that's something simple anyone can do. And I think it's very important that everyone has that kind of family security phrase, even though it sounds a little extreme. But that's kind of what we're looking at here, because the technology is very realistic. And you talked a little bit about the anxiety and nervousness someone might feel when something like this is occurring. It works two ways. One, you make emotional decisions because you're anxious that your loved one is actually in danger. But on the AI side, when you take someone's voice, even if it's not 100% perfect, if you tell the prompt to make it sound more anxious and scared, that heightened, AI-driven voice is harder to detect, because most people are used to hearing others speak conversationally. And so you have this dual effect: the scam becomes more effective because of that heightened fear.
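The low-tech safeguard Khaled describes is essentially a shared-secret check agreed on in advance. As a purely illustrative sketch (not anything Blackbird.AI ships, and the phrases here are made up), it might look like this in Python:

```python
import hmac

# Hypothetical illustration of a "family security phrase" check:
# both parties agree on a phrase ahead of time; on a suspicious call,
# the callee asks for it and compares the answer to the agreed phrase.

def normalize(phrase: str) -> str:
    """Lowercase and collapse whitespace so minor differences don't matter."""
    return " ".join(phrase.casefold().split())

def phrase_matches(expected: str, spoken: str) -> bool:
    """Compare in constant time, as is standard practice for secrets."""
    return hmac.compare_digest(
        normalize(expected).encode(), normalize(spoken).encode()
    )

# A caller who can't produce the phrase fails the check.
assert phrase_matches("purple giraffe picnic", "  Purple Giraffe Picnic ")
assert not phrase_matches("purple giraffe picnic", "send the money now")
```

In practice, of course, the check happens in a person's head, not in code; the point of the sketch is only that the secret never travels with the voice, so cloning the voice doesn't reveal it.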

Brancaccio: I have a financial services company that lets me download some of my information based on what it thinks is my voice. It’s captured my voice in the past. I wonder if AI can be used to cheat those systems.

Khaled: It’s already begun. There are quite a few examples if one goes and digs for it now of people calling up, impersonating their boss, like, to the procurement division, and saying that a wire transfer needs to be sent out. And so this has already become pretty commonplace. And even in the [corporate mergers] space or in the [venture capital] space, when those final wires are coming in, the last component of authentication is voice. And that voice could be somebody’s accounting department, chief of staff, it could be the CEO, but just as easily forged as anyone else’s. So in terms of financial crimes at the larger level, I think we’re going to see more of that as well.

Brancaccio: Mr. Khaled, what I’m hearing from you is the entire population has to get even more suspicious of any kind of interactions involving some sort of audio authentication, identifying a caller based on what you think is a voice.

Khaled: Yes. Unfortunately, simply due to the advent of multimedia generative AI, meaning audio, video and so on, we are in a kind of zero-trust situation when it comes to consuming information that comes at us in these formats. It used to be one thing when those were text-based only, maybe a social media post. But when you weave in things like audio and video, I think the real issue is people having to get used to this kind of warped reality, where all of the typical senses they used to depend on to tell them when they could believe something was authentic have now been bypassed by content that is so accurate, so easy to produce and built to deceive.

Brancaccio: So this new technology is making this worse. Can our engineers and smart people come up with systems now to maybe help us authenticate some of this?

Khaled: Yeah, I know it sounds like a pretty doomy scenario, doomy outlook. And like any critical innovation or big innovation, there are going to be good users and bad users. I mean, you take something like nuclear fission. You’ve got either nuclear energy or you’ve got nuclear missiles, right? And it all depends on whose finger’s on the red button and who’s on the defense system. So to your question, absolutely. There are many people working right now to make sure that we don’t end up in a post-truth world, that there are ways for the individual, governments and everyone else to understand better when the things that they’re seeing, hearing, consuming are actually authentic or that they’re being manipulated in some way. And so there are a lot of people working on this problem, but sometimes these things take time and defense always trails offense, unfortunately. There’s a light at the end of the tunnel on this, but it could take some time to really get to everyone. So all is not lost. There are people working on this.
