AI in the election: misinformation machine or meme generator?
Perhaps you’ve heard — maybe even on this show — that generative artificial intelligence has the potential to supercharge mis- and disinformation in our elections.
A bad actor could use AI tools to produce a video, for example, of a candidate saying or doing something incriminating that, in real life, they didn’t actually say or do.
But with 68 days until Election Day, we haven’t seen the widespread AI misinformation campaigns that experts warned about. Instead, as Will Oremus pointed out in a recent analysis for The Washington Post, we’ve seen a whole lot of silly AI-generated memes.
He told Marketplace’s Meghan McCarty Carino that the most recent examples have come from one particular presidential candidate.
The following is an edited transcript of their conversation.
Will Oremus: Former President Donald Trump amplified a couple of AI-generated images. One of those images showed a Soviet-style rally where Vice President Kamala Harris is presiding, and there are all these identical people in uniform in the crowd, and hanging over them all is this big, Soviet-style hammer-and-sickle flag. And so, this goes along with the idea that Kamala is a communist, which is a name that Trump has called her in the past. Nobody’s going to think that this is the real Democratic convention. Maybe a few people will, but what it does is it gets across the message in a vivid way. It’s kind of like calling her a name, but instead, it’s done with a picture.
Another AI-generated image that he amplified was a picture of a bunch of young women in T-shirts that say “Swifties for Trump.” And this was at a time when there were rumors that Taylor Swift might be a guest at the Democratic convention or that she might come out and endorse Kamala Harris. And Trump was kind of trying to counter this by saying, well, actually, I’ve heard a lot of Taylor Swift fans are voting for me and they’re upset with Kamala. And so, to illustrate that idea, he amplified a screenshot of a post from Elon Musk’s X network that somebody else had made, featuring AI images that seemed to show a lot of Taylor Swift fans who were actually for Donald Trump. But it wasn’t real, it was AI-generated. There was actually one real image of a Taylor Swift fan who is, in fact, a Trump supporter, but the rest were made by AI.
Meghan McCarty Carino: And how obvious was it that these images were generated by AI?
Oremus: That’s a really tough question. So, for you and me, I think it’s pretty obvious that these are AI-generated, right? I mean, we cover technology, we’ve seen what AI images look like. I don’t necessarily want to assume that the average person knows all the telltale signs of an AI image at this point, but, especially with the Kamala Harris one, they’d probably wonder, “Wait, is that real? That looks a little funny.”
McCarty Carino: Yeah, it has a kind of an uncanny quality to it.
Oremus: Yeah. I wouldn’t say it’s super photorealistic. But it’s certainly more realistic than just a drawing or an illustration or a comic. I think it’s playing a similar role, but it also has this potential to deceive people who aren’t really paying close attention.
McCarty Carino: But, as you say, the extent to which these images are likely to trick people, or are even intended to trick people, is not very great.
Oremus: Yeah, we can’t rule out the possibility of a very sophisticated AI fake that tries to make people think something happened that didn’t happen. We just haven’t seen much of that yet, and I think we’re likely to continue to see AI used more to generate memes or to illustrate a message, basically visual propaganda, more than really a hoax or an attempt at deception. Of course, if some people do get tricked into thinking it’s real, maybe the people who are posting it would be fine with that, but it’s really more about another tool in the arsenal of political messaging.
McCarty Carino: Right, it seems more like an incremental increase in sophistication over something like Photoshop or basic memes than an exponential step up in deception.
Oremus: I think that’s exactly right. We can think of this as an evolution of the ability to manipulate images and video and audio. It’s easier than Photoshop, it might look more realistic than Photoshop, it doesn’t require any particular skills, and so it is now democratized — and I don’t mean that in a positive sense, just in the sense that anybody can do it easily now. And so, there’s this blurring of the line between truth and unreality that I think is problematic. It’s just a bit of a different manifestation of the problem than some experts were expecting and fearing. And I think it leads to different responses. If the concern is a deepfake that’s going to fool everybody, then you might need visual forensics experts to go in and find out what’s true and what’s false, and you need fact checkers to debunk it and spread the word far and wide, to say, “Well, this image looked like it was true, but actually it’s fake and we can prove it.” But if it’s just a meme and there’s sort of a plausible deniability here and you can say, “Well, this was just a joke and everybody knows it’s just a joke,” then debunking doesn’t really help. And, in fact, debunking could make it worse by spreading the meme or the propaganda further than it would have reached otherwise.
McCarty Carino: You note that these particular images might just be jokes, but they do contribute to this phenomenon called the liar’s dividend. Explain that.
Oremus: Yeah, there’s this idea called the liar’s dividend. I don’t know who came up with it, but I first heard it from disinformation researcher Joan Donovan. It’s this idea that the more you can blur the line between truth and fiction, the more that people know that it’s possible to create realistic fakes, the more people who actually do get caught doing something are able to deny it. Because you can say, “That wasn’t me with my hand in the cookie jar. Somebody generated an AI image of me with a hand in the cookie jar.” It just creates a general confusion, and it can make the average person sort of throw their hands up and say, “Well, who knows what’s real and who knows what to believe?”
We’ve actually seen sort of an example of this with Trump fairly recently. Just a week or so before he amplified those fake images, Trump and his campaign accused Harris of faking an image of a large, enthusiastic crowd that was greeting her outside of her airplane. Trump said that it was “A.I.’d,” implying that she had faked it to make it look like she had this big crowd. But all signs point to this being a real image. Everybody knows that fakes are everywhere now, though, and so it sounded plausible that maybe she didn’t really have that crowd and that maybe she was the one lying, even though that wasn’t the case. You want to say that this is an abuse of the technology, but in some ways, this is what the technology is for, right? I mean, the existence of AI generators is inviting us all to create realistic-looking images on demand of things that aren’t actually real, and so it’s just going to be a more confusing world, I think, going forward.
McCarty Carino: Clearly, this flood of AI imagery on the internet has the potential to create this kind of epistemic problem of no one believing anything. But I also wonder if the proliferation of low-stakes AI fakes could kind of train audiences to be more alert to the deceptive powers of artificial intelligence.
Oremus: Yeah, that’s an optimistic take. I like it. New technologies come out, and they catch us by surprise, and they cause problems, and they disrupt things, and then gradually we figure out how to adapt. And it doesn’t mean that the problem gets fully solved necessarily, but I think people will adapt. People will learn that just because you see an image of something that looks real doesn’t mean that it really happened. They’ll learn the same eventually with audio and with video. And so, you’ll have to come up with new ways of figuring out what to trust or maybe fall back on old ways, right? Maybe you go back to sources that have track records of credibility and of telling the truth rather than just trusting what you see in your social media feed.
One thing Will Oremus noted in our conversation is that while AI-generated images probably aren’t fooling many people in this election, AI-generated voices may be a different story.
By now, voters are kind of used to getting these robotic-sounding automated voice messages from politicians, but during the Democratic Party primary in New Hampshire back in January, voters in the state were subjected to an AI-generated voice spoof of President Joe Biden telling them not to go to the polls.
The hoax was allegedly orchestrated by Steve Kramer, a political consultant who was working for the campaign of Democratic U.S. Rep. Dean Phillips, an opponent of Biden’s in that primary. But Kramer said he arranged the call on his own as a stunt to demonstrate how easy it is to do.
Last week, the Federal Communications Commission fined the telecom company that distributed the robocall $1 million for its role in the scam.
Kramer still faces a $6 million fine from the FCC and 26 criminal charges for voter intimidation and impersonating officials.