AI labels on digital political ads might backfire on candidates, research shows
We are in the midst of the first major U.S. election of the generative AI era. The people who want to win your vote have easy access to tools that can create images, video or audio of real people doing or saying things they never did — and slap on weird appendages or other make-believe effects along with targeted slogans.
But the potential for deception has led about two dozen states to enact some form of regulation requiring political ads that use artificial intelligence to include a label. So how do voters respond to the disclosure of AI’s role in a campaign? That’s what Scott Brennen and his team at New York University’s Center on Technology Policy set out to answer in a recent study.
The following is an edited transcript of Brennen’s conversation with Marketplace’s Meghan McCarty Carino.
Scott Brennen: We ran an online experiment. We showed participants two ads for pretend candidates in pretend county commission races, in real states and real districts. The ads that we showed them contained either the label that is now required by Michigan, or the label that’s required by Florida, or no label at all. And then some of those ads seemed to come from a Republican, some from a Democrat, and some from no clear party affiliation. And then once they had seen these ads, we asked them to rate the appeal and trustworthiness of the candidates and the accuracy of the ad, and to give their intention to share it, like it or flag it if they had seen it on social media. And then we asked several other questions about their opinions on policy in this area.
Meghan McCarty Carino: And what were your major takeaways? How did these disclaimers affect how people saw these ads?
Brennen: Well, we saw AI labels hurt candidates that use generative AI. These effects seemed to arise because, when respondents saw these labels, they lowered their assessments of candidates from their own party or of nonpartisan candidates, but not necessarily of candidates from the opposite party. And the label effects were honestly pretty small. And this is, in part, I think, a result of the fact that many viewers didn’t even notice the labels. What this means, though, is that design and wording actually really matter. And interestingly, respondents were the least supportive of the policy approach enacted by most of the states, which is requiring labels only on deceptive uses of AI. We also asked about requiring labels on all uses of AI in political ads, or just outright banning deceptive uses of generative AI in political ads. And those are the three main approaches that states have so far adopted.
McCarty Carino: Tell me more about how partisanship affected people’s response, because this was kind of surprising.
Brennen: Yeah, exactly. This was exactly opposite to what we had hypothesized. But I think after we did it, we thought, oh, that actually sort of makes sense, right? Respondents basically went into the experiment with a pretty low opinion of candidates from the other party. And so I think what was happening is there was really nowhere for those assessments to go — they had already rated those candidates low on appeal and trustworthiness. But I think what’s so interesting about this is, we know that political ads actually don’t have much impact on which candidate voters choose. Where they can have an impact is on mobilization, fundraising and turnout, and those are ads that you create for your own party. And that’s where we saw the most significant impacts here. So it suggests that labels might actually be having slightly more impact than we would otherwise assume.
McCarty Carino: You noted that the specific language that these labels use can sort of change the way people respond to them. What was your takeaway about what kinds of tweaks might be made to that language to kind of have more of its desired effect?
Brennen: Yeah, we saw clear evidence that the wording of labels matters. So we tested two labels that are actually required by the states. One of them uses the word “manipulated.” That’s the Michigan label: “This video has been manipulated by technical means and depicts speech or conduct that did not occur.” So what’s notable there is it uses the term “manipulated,” but it doesn’t use the phrase “artificial intelligence.” The other label that we tested was from Florida: “This video was created in whole or in part with the use of generative artificial intelligence.” It doesn’t use “manipulated,” does use “artificial intelligence.”
Now, I should say really quick that these labels are actually required in different contexts. In Michigan, the label is only required on deceptive uses; in Florida, it’s required on all uses. But even still, there are pretty significant differences between them. I think what’s happening is that, on one hand, the word “manipulated” communicates a lot. On the other hand, we asked respondents to tell us which technologies they thought might have been responsible for creating the ads they saw. And with the Michigan label, or without a label, respondents generally chose things like video editing. They didn’t automatically assume it was artificial intelligence unless it was specifically pointed out that it was artificial intelligence. And so, again, it really matters exactly how the label identifies particular technologies and which words are used to describe them. And unfortunately, we’re in this moment when we don’t really know how best to word these labels to maximize benefits and minimize costs.
McCarty Carino: What surprised you most about this research?
Brennen: Probably the backfire effect on attack ads. Without labels, the candidate doing the attacking in an attack ad and the candidate being attacked were rated about equally — similar trustworthiness, similar appeal. That, of course, tells us something about the impact that attack ads have. But when we included a label, there was no change in assessment of the candidate being attacked, yet we saw a significant decrease in assessment of the candidate doing the attacking. I did not anticipate that. We did not hypothesize that.