How weaponizing AI could alter the outcomes of elections
Jun 29, 2023

Mike Hamilton, co-founder of cybersecurity firm Critical Insight, says there are currently no guardrails against AI’s power to surgically target voters for disinformation. He hopes the technology could also be applied to detecting fraud.

Politics is a game in which the truth often gets stretched. But new artificial intelligence tools are making it easy for anyone to bend reality into a pretzel.

AI-generated video, still images and fundraising emails are already popping up on the campaign trail. There are fake photos of Donald Trump embracing Dr. Anthony Fauci, exaggerated dystopian Toronto cityscapes and a stock photo of a woman with a curious surplus of arms.

The threat goes beyond the occasional extra appendage or incendiary but obvious deepfake, says Mike Hamilton, co-founder of cybersecurity firm Critical Insight. He spoke with Marketplace’s Meghan McCarty Carino about AI’s power to enable election manipulators to finely target specific groups of voters with disinformation.

The following is an edited transcript of their conversation.

Mike Hamilton: The biggest threat to me would be assisting in identifying soft targets for disinformation. So not only are you throwing out the messaging that, hey, your election day has changed or, you know, whatever, but it’s knowing exactly what targets to hang that in front of that will produce the greatest return on the investment.

Meghan McCarty Carino: Right. I mean, this is something that’s sort of been increasing on a spectrum for years. We saw this, you know, kind of with Cambridge Analytica, finding, I guess, what they call “persuadables” and targeting information. I mean, like, what is the worst-case scenario? What is kind of the next level in this that keeps you up at night?

Hamilton: Here is what is in the back of my mind. There have been large, unauthorized disclosures of personally identifiable information from the [federal] Office of Personnel Management and from Equifax. And when the SolarWinds global incident was underway, there was very intentional theft of a lot of federal agency records. So someone — and we think it’s China — is in possession of all this stuff. Well, if you can use the immense computing power that the Chinese bring to bear on things like facial recognition and social scoring and things like that, if you can bring all that to bear on this particular data set, that targeting becomes very, very, very surgical. And in our antiquated system of the Electoral College, really, you need to sway a few tens of thousands of voters across maybe three or four counties in the United States to take the presidency. That is my concern.

McCarty Carino: Are there any existing guardrails in this area of AI, not just generative AI, but AI influencing our election cycle?

Hamilton: I have to say no. Lawmakers are essentially, you know, legal people. And it’s hard for them to understand a lot of this technology, you know, and it requires consultants to come in and explain it and things like that. Eventually, we’re going to have to talk about the ethics of the companies that are producing this stuff. Some of them have already raised their own hands and said, this cannot continue without some guardrails and some kind of technology that would allow us to identify these fabricated videos and things like that. That’s what I would hope Congress would take up. Until that time, it’s kind of been up to every company. I had to write us a policy on the use of generative AI and what we can use it for and what we can’t. But some of these other flavors of AI are just too complex for me even to write a policy around.

McCarty Carino: Yeah. What specific kinds of guidelines do you think we would need here?

Hamilton: Well, if I’m a company and I hire a company and it’s using some kind of AI in the product that I buy, do I know exactly what kind of information is being collected with that? Again, the ethical standards and a detection capability. There’s a lot of balls in the air, and nobody’s really trying to pull all these together. And I just hope that happens soon because I can’t write more policies about things I don’t understand.

McCarty Carino: I mean, when it comes to the sort of worst-case scenario that you outlined, kind of a national security risk of threatening industrial-level misinformation to specific groups of voters, what kinds of policies could protect against that?

Hamilton: Well, because this is illegal activity conducted by nation-states, activists, you know, things like that, they’re not going to care about policies. I don’t think there’s much we can do about that. I will tell you that having been to an … exercise that was ostensibly on electoral cybersecurity, it ended up being much more about disinformation. And I know that states have sought funding, some have obtained funding specifically to monitor social media, and other avenues, and find disinformation and counter it quickly. To the extent that we could bring to bear technology on that and just have a human doing oversight, we might be using AI to fight AI. But you know, that kind of levels the playing field a little bit.

McCarty Carino: Are there any positive uses of generative AI or related tech that you see in our elections?

Hamilton: Well, I mean, the positive use would be as a detection analytic or capability to find that disinformation and things that are just patently false to accelerate our ability to counter this stuff. Another way could be in some voting systems that are kind of not mainstream, for example, UOCAVA voting, right? That’s Uniformed Overseas, etc., etc. These are expatriates and deployed military. Well, they vote using a system that is much like DocuSign. And the narrative out there is, oh, this is internet voting and it’s never going to be secure. Well, AI would be a way to analyze data collected during an election and determine whether or not any votes were fraudulent. So that might be a way to bring it to bear as well, but I just don’t know that anyone is working on that.
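The kind of analysis Hamilton describes, screening election data for votes that look out of place, can be illustrated with a very simple statistical outlier test. The sketch below is purely hypothetical: it flags precincts whose electronic-ballot counts are far from the median using a modified z-score, a common robust anomaly test. The precinct names and numbers are invented, and a real detection system would use far richer features than raw counts.

```python
from statistics import median

def flag_anomalies(counts, threshold=3.5):
    """Return precincts whose ballot counts are statistical outliers.

    Uses a modified z-score based on the median absolute deviation
    (MAD), a robust outlier test; 3.5 is a conventional cutoff.
    Illustrative only -- not an actual election-auditing method.
    """
    values = list(counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        # All counts (nearly) identical: nothing stands out.
        return []
    return sorted(p for p, v in counts.items()
                  if 0.6745 * abs(v - med) / mad > threshold)

# Hypothetical per-precinct electronic-ballot counts
counts = {"P1": 102, "P2": 98, "P3": 101, "P4": 97, "P5": 990}
print(flag_anomalies(counts))  # → ['P5']
```

The point of using the median and MAD rather than the mean and standard deviation is that a single extreme value cannot drag the baseline toward itself, so the outlier still stands out even in small samples.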

Some lawmakers are trying to create clearer guidelines around the use of AI and related tech in elections.

Senate Majority Leader Chuck Schumer, a Democrat from New York, recently announced plans to convene a series of expert forums to discuss how to safely regulate AI with regard to national security concerns, copyright law and more.

Hamilton also mentioned UOCAVA voting, a form of digital ballot submission enabled by the Uniformed and Overseas Citizens Absentee Voting Act, which is what the acronym stands for. It allows overseas U.S. voters to submit electronic ballots.

In recent years, some states have expanded electronic ballot submission to disabled voters. My colleague Kimberly Adams explored how that’s expanded voter turnout among people with disabilities on the show.


The team

Daisy Palacios Senior Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer