Carolyn Kaster/AP Photo, File
Jeffersonville Masonic Lodge on Election Day in Jeffersonville, Ohio, November 7, 2023
If you were going to put all your chips on the one federal agency that would even attempt to deter the havoc that artificial intelligence mis/disinformation could unleash in elections, which would it be?
The Federal Election Commission? Unlikely. That agency, created in the wake of Watergate to enforce campaign finance laws, has been undone by congressional Republicans, who’ve seen to it that GOP appointees (each party holds three of the six commissioner seats) have little interest in the agency’s mission. The commission needs four votes to move on regulations, so it had all but lapsed into powerlessness long before AI appeared on the scene.
The Department of Justice? It carries a big stick but takes a long time to swing it.
Fifty-one state attorneys general contacted the Life Corporation, the alleged perpetrator of the fake Biden robocall deployed during the New Hampshire primary, and cautioned it “to cease originating any illegal call traffic immediately,” adding that “Transmission of these calls may be violations of the Telephone Consumer Protection Act.” Just two days later, the Federal Communications Commission came out with a ban on AI voice-cloning technology used to create deepfake robocalls—using its authority under that act.
A spokesperson for the FCC told the Prospect that there are exemptions for noncommercial calls and tax-exempt callers, such as 501(c)(4)s, when using an artificial or prerecorded voice in calls to a residential telephone number. Such calls can be made without the prior express consent of the called party but must satisfy certain conditions, such as honoring opt-out requests. For wireless calls, however, tax-exempt callers do not have exemptions. They must have the consent of the person being called.
Deepfake videos, fake websites, and AI-altered images are all tactics employed by a range of actors—from the dark-money groups that hide behind 501(c)(4) designations, to hostile powers seeking to destabilize the presidential election, to people looking for the notoriety they can leverage with a viral deepfake.
Still, Nick Penniman says that when it comes to disrupting elections, election officials and voters are grossly unprepared for these powerful new technologies. Penniman is the founder and CEO of Issue One, a Washington-based cross-partisan political reform group studying democracy fixes, ethics and accountability, and government transparency. He was formerly a journalist at the Huffington Post and its investigative fund, The American Prospect, and Washington Monthly. “AI-generated deepfake robocalls are probably my biggest concern,” he told the Prospect. “Because they are so cheap and easy to do.”
This conversation has been condensed and edited.
Gabrielle Gurley: How destructive are robocalls?
Nick Penniman: The big difference between the calls of old and the calls that we’ll be seeing this year is that the calls of old were one-way calls. They were basically a recording that was sent out, normally to people’s voicemail boxes. If they picked up the phone, they would just listen to the recording.
The new robocalls are going to be conversational because of advanced AI. Not only are they going to be conversational, the most sophisticated players will very likely know something about the person who’s picking up the phone, which will make the conversation even more potent.
The outgoing calls will be something like: “Hi, Joan. It’s Bill Smith from your local voting rights center. I don’t know if you heard, but there’s been a water leak at the polling place and, as a result, we’ve moved it 10 to 15 miles away.” Or he’ll say, “There’s widespread violence at polling places across the state, the governor has extended voting, and, by the way, you should tell your husband Mark the same thing.”
If Joan says, “Well, how long has the governor extended voting for?” the voice coming back—it won’t sound robotic, it will sound like I’m sounding right now—it’ll say, “Oh, actually, he’s extended it through…”—there’ll be an um and a hmm in there—“through Wednesday.”
This stuff is so unbelievably sophisticated. I don’t think people have grasped how true-to-life the technology is.
What did you think of the FCC decision? Did you expect something like that so quickly?
We did not expect it so quickly. We did know that the FCC would act at some point, but we were worried that it would be so late in the game that it wouldn’t have an impact on this election cycle. But that’s probably what forced them to act faster. The FCC knew that if they came up with some kind of a rule that was too late, they wouldn’t be able to help reduce the chaos.
The 501(c)(4)s are the dark-money hitmen of politics. It’s really important to flag that because anyone who wants to do something bad, they’re either going to do it through a (c)(4) or they’re going to do it through a fly-by-night LLC—and it’ll be here today and gone tomorrow.
What have we learned about mis- and disinformation since the midterms?
In terms of the midterms, there wasn’t a ton of disinformation, AI-related or otherwise, out there, [but] we are going to see an explosion of it in 2024. If you go back to 2020, what you’ll see is that those who are interested in putting up disinformation, especially about the results of the election, have proven that they can do that very successfully and that they can sustain it very successfully. What we’re seeing with the Big Lie is the durability of mis- and disinformation about elections—and that’s pretty scary.
What mis- and disinformation tactics concern you most during this election season?
Three things. Number one: AI-generated deepfake robocalls are probably my biggest concern because they’re just so cheap and easy to do.
Number two is fake news websites that look and feel 99.9 percent real, but are publishing single stories and misleading people about how, when, and where to vote. Those sites will seem like totally legitimate news sites; 99.9 percent of the stories will be summarized or paraphrased versions of actual news stories, including local news stories. Then there will be one story that says, “Polling places have been moved”—with a list of a bunch of addresses. Or “Polling has been extended through Wednesday.” Or “If you vote tomorrow and haven’t paid your bills, you could be arrested.”
The third piece that I’m worried about is all the fake news stories about processing and vote counting that will be put out, which will include fake images and probably some deepfake videos. I’ll give one example: In 2020, in Atlanta, there was an observer in one of the vote-processing centers. He was behind glass, and he brought out his cell phone and captured video of a Black guy taking envelopes that had ballots inside of them and feeding them into a machine that would slice open the envelope and then spit the ballot into another container, which would then be taken to be processed.
So, he’s doing this and something happens and he reaches into the machine. He pulls out an envelope, he rips the envelope open, pulls something out, crumples it up, and throws it on the floor. That’s what you see in the video. Now what actually happened is someone had put the instructions in the envelope along with the ballot. It jammed the machine. He threw the instructions away and put the ballot in an appropriate place. Imagine now the ability to create deepfake videos of similar things happening. And then also, of course, photographic images of things happening because they’ll have images of people who work at those polling places.
They’ll be able to take, let’s say, Joan’s face and Jim’s face, put it through deepfake technology, put their faces on whatever image they want. Such as Joan and Jim carrying boxes of ballots into a dumpster in a dark alleyway in the middle of the night. The ability for deepfake technology to spread massive lies about the vote processing and tabulating is going to be at everyone’s fingertips.
Then you dial up the racial component to generate outrage. You saw that in 2020 with the two Black women who were accused of totally fictitious things.
Ruby Freeman and her daughter, precisely.
Dealing with hostile powers—China, Iran, and Russia—is a job for the national-security agencies. In the domestic realm, there’s everyone from extremists on the right to young hackers who want to flex their skills. What are your concerns about domestic bad actors?
My concerns are mainly MAGA operatives, the Roger Stones of the world, who spend their careers trying to manipulate people and spread lies when it comes to politics and voting. These types of players are going to be extremely active. They’re professionals. They know who they’re targeting and why they’re targeting them. Whether or not they’re actually coordinating with someone like a Roger Stone, it doesn’t matter. They’re going to be coordinating with each other.
I’m sure there’ll be some teenagers, 4chan people, Reddit people, who are messing around trying to show off on the dark web. That doesn’t make it any less nefarious. I don’t differentiate between those players whatsoever.
Where should the pressure be placed to handle these issues?
Secretaries of state have extremely limited ability to actually find and identify this stuff, flag it, and then flood the zone, or whatever the appropriate zone is, with good information. I think that they feel like, you know, a tsunami is coming and they don’t even have a breakwater.
The communications companies can’t control some of the stuff, like the robocalls. That’s hard. But they actually can control text messages. Verizon, for instance, has some sense of what’s being texted to me. No one wants a Big Brother situation, but if it’s election-related, if it’s emergency-related, like let’s say there’s a hurricane in my area, it is within their purview to make sure that I’m getting the right information.
Beyond the communications companies are the tech companies. As you know, X [formerly Twitter] doesn’t care anymore about anything related to mis- and disinformation. They’ve also eliminated their integrity division. Facebook has significantly cut back its integrity division, including its election integrity people. The most you could ask of them would be to significantly beef up, to the tune of 10,000 to 20,000 employees, their election integrity divisions—which is just a rounding error for them—for the next 12 months, let’s say. But the next-level ask would be to help flood the zone with good information, knowing that bad information is going to be out there. The only choice is to dilute it with good information.
Look at some of what they did during COVID, for instance. It took many months, but eventually Facebook had a little transparent box that would pop up over COVID-related or vaccine-related information. And the little transparent box just said something like, “We cannot verify this information; check with an official source like the [Centers for Disease Control and Prevention].” Those little transparency boxes are something that they could do with election-related information, in addition, of course, to just constantly pushing and promoting good sources of information into people’s newsfeeds.
And congressional action seems to be a non-starter at this point.
That’s unfortunately true for this cycle. There are actually two bills out there. There’s the Protect Elections from Deceptive AI Act, and then there’s the REAL Political Advertisements Act, which would require disclaimers on political ads that use images or videos generated by AI. Both of them have a modicum of bipartisan support. The problem is that there’s very little momentum, which is kind of crazy, because members of Congress are the ones who are going to be the victims of these things, and they’re not willing to protect themselves from becoming victims.
Is there anything state election officials can do as far as rapid response, or are they compromised by slim budgets and understaffing?
They are totally compromised. The question is, how do we and others knock down threats in real time and then get the right information in front of people, because a lot of the stuff is going to be coming within 12 hours of opening the polls. If you can’t respond in real time, it’s almost irrelevant.
So an October surprise is an outmoded tactic?
It’s a night-before surprise. Election officials have no capacity to deal with this. The doors are going to be blowing off the plane. The question is, in the wake of it, are we going to take the right safety measures to fix the problems or not?