Election season is already in full swing, and 2024 will be the first elections of the generative-AI era. There are a lot of questions — and concerns — about how artificial intelligence will be used in, and potentially impact, elections.
Mekela Panditharatne, counsel at the Brennan Center for Justice in the Elections and Government Program, joined The Show to talk more about it.
MARK BRODIE: What are some of the areas in which AI may play a role in this year's elections?
MEKELA PANDITHARATNE: We expect artificial intelligence to play a role in a variety of ways. You know, that could look like deepfakes or the use of large language models to produce written content. Then of course, there is political advertising and fundraising, the use of AI in campaigns, generally speaking, and the use of AI in election administration as well.
BRODIE: How prevalent is AI in terms of campaigning so far based on what you've seen and what you've heard?
PANDITHARATNE: Well, there's the more sort of traditional use of AI to process and synthesize information, and that kind of AI has been used for a while, although it's getting more sophisticated all the time and can also now be combined with this newer kind of generative AI, which can produce new content.
That latter kind of AI, the generative AI, you know, is starting to be used more by campaigns, particularly in advertising and in communications with voters. But we'll sort of have to see how this plays out and how prevalent it becomes.
BRODIE: Does that type of AI seem particularly problematic to you?
PANDITHARATNE: I think both kinds of AI present both opportunities and risks. The generative AI does sort of pose a concern in terms of deception, particularly when voters aren't sure whether what they're seeing is genuine or whether it is fake and manufactured. That does pose sort of a risk that there'll be confusion about what is real and what isn't.
BRODIE: In terms of like, did a candidate actually say what you're seeing on the video or what you're hearing on audio, things like that?
PANDITHARATNE: Yeah, that's right. So, you know, campaigns could put out, and in fact they have put out, video, images and audio that seem to portray other candidates as doing things that they didn't do or saying things that they didn't say.
BRODIE: That of course, has been an issue for a while, right? Like does the AI just kind of make it easier for campaigns to do that sort of thing and maybe more difficult for voters and others to detect that it's happening?
PANDITHARATNE: Yeah, absolutely, and this is something that we sort of talk about generally: in many cases AI is an amplifier, so it's exacerbating existing risks. So, of course, you know, you could use Photoshop or other tools before, as you mentioned, to create these deepfakes. You know, some of the more simplistic creations were called “cheapfakes” before. But now you can do that on a more massive scale, you can do it at lower cost, and making it look more sophisticated is easier now as well.
BRODIE: So I guess in the universe of potential risks that folks who do what you do are concerned about leading up to the 2024 elections, where does AI fall relative to sort of all the other things that we keep hearing about?
PANDITHARATNE: You know, I think it is a substantial risk. You know, we are concerned about sort of spoofing of election websites. You also have the ability to impersonate election officials in ways that are potentially more sophisticated or again done at a more massive scale than before, you know, so we are very concerned about it.
But you know, as we've discussed, many of these problems are long-standing, so to the extent that there are ways to address this or consider this, you know, it is in some ways more of the same issues.
BRODIE: Well, it kind of sounds like what you're saying is that AI in and of itself can pose problems, but really the biggest issue is that it sort of adds another layer onto existing problems, especially in terms of misinformation or disinformation when it comes to elections. And it just makes it harder to sort of rein it back in or to stop it once it gets out.
PANDITHARATNE: I think that's right, generally speaking. I do think that there are some issues that will sort of demand us to think about this in a new way. One of the things I'm most worried about, and this may or may not manifest in the next election, though I do think it is a concern for future elections, is the possibility of interactive digital disinformation.
So that's where AI systems are connected, for example, to robo-dialers or other kinds of messaging systems and can engage in essentially conversations with voters that are potentially designed to manipulate or deceive, and can adapt in real time to voters' responses. In the future, you know, you might see sort of emotion-recognition AI also being deployed in this context, or more advanced microtargeting based on voters' racial and demographic characteristics. That's the kind of deployment of AI that I think is particularly troubling, and while it might be considered an amplifier of an existing threat, I do think it sort of significantly levels up the risk.
BRODIE: From your perspective, are there any potential positives from AI, as it relates to elections?
PANDITHARATNE: Yeah, of course, you know, I think that there are potential positives, but it is very important to have guardrails in place so that those benefits can be realized without sort of the risks surpassing the benefits. So, for example, AI is currently used in election administration to perform some functions. Again, typically that's the more traditional use of AI to process and synthesize information.
And, you know, you could imagine potential uses of AI to expand sort of access to the vote by, you know, analyzing some of the data that typically goes into identifying polling places, maximizing accessibility using factors like geospatial analysis, traffic patterns and access to public transportation, that sort of thing. You know, it could also facilitate communications with voters.
But again, there do need to be guardrails in place so that we know that voters are getting accurate, reliable information, and that the analysis that officials are relying on is reliable and accurate as well.