In 2024, election officials, social media platform managers, and artificial intelligence developers braced for an “AI apocalypse” that didn’t arrive. They feared that fake content would deceive voters and sway election outcomes. But the fakes that voters did run into—like the 2024 New Hampshire presidential primary robocall imitating former President Joe Biden’s voice that told voters to wait until November to cast their ballots—had limited impact.

There were also telltale signs that made AI-generated visuals easier to catch, like hands with extra or missing fingers or short-form videos shot from a single angle. Back then, campaign operatives also feared public blowback, whether from openly adopting tools unfamiliar to most voters or from surreptitiously deploying AI-generated content and being found out.


Today, Americans find themselves in a political ecosystem where democratic norms have been shattered. The risks of AI-generated fakes are more pronounced, but public objections are less of a deterrent. In January, a White House account posted a photograph of an African American protester in Minnesota, altered to darken her skin and depict her in tears. Anti-ICE activists have shared images of immigration officers who have been “unmasked” by AI—an “ill-advised” tactic because of AI’s penchant for hallucinating facial features and other details.

Fast-forward to the 2026 midterm election season. The National Republican Senatorial Committee released a deepfake video of Texas state Rep. James Talarico, the Democratic Senate candidate, “reading” his own years-old social media posts. The tiny words “AI Generated” appear in all caps at the bottom right-hand corner of the video. But many voters might miss that and believe the politician had recorded words that he’d never actually said.

The AI and media literacy experts interviewed by the Prospect say it’s more important than ever for voters to maintain good digital hygiene, especially when governments and tech companies fall short in helping voters distinguish between authentic political media and material expressly created to deceive them.

Deepfakes—hyperrealistic audio, video, or photographic representations of a person saying or doing something that never occurred—can fundamentally subvert viewers’ sense of the truth. Adam Rose, a fellow at the Starling Lab for Data Integrity, a joint project of Stanford University and the University of Southern California, explains that bad actors can use generative AI to fabricate content, leaving voters susceptible to the “liar’s dividend”: By sowing doubt and eroding trust in real images, a perpetrator boosts their own credibility while shredding the public’s faith in political content and in news and information systems.

“Without visual evidence, we lack the ability to understand what’s happening and to make judgments both in a court of public opinion and in a court of law,” Rose says. “If people in either of those courts do not trust the evidence, it makes it very difficult to function as a civil society.”

Rose cites several examples of real evidence being dismissed as fake, including the actual videos of Alex Pretti’s earlier confrontation with immigration officers days before his fatal interaction with them in Minneapolis.

Anyone with a computer, internet access, and decent IT skills can produce and distribute deepfakes. “[They] literally can come from anywhere: from bad actors who are politicians, people who are getting involved in the campaigns and being paid, people who have a stake in a campaign that are not being paid, but are just doing it, people who are just random people on the internet, whether it’s someone in the U.S. or internationally,” Rose says.

Yet the prospects for effective federal action are dim. The Trump administration has taken a hard line against any kind of AI regulation. In March, President Trump called on Congress to “take steps to remove outdated or unnecessary barriers to innovation [and] accelerate the deployment of AI across industry sectors.” Democrats in Congress led by Sen. Brian Schatz (D-HI) and Rep. Don Beyer (D-VA) have pushed back with companion proposals that would repeal the order, while Sen. Mark Warner (D-VA) has suggested more than a dozen measures that social media companies could take to respond to “maliciously manipulated media.”

More than two dozen states have enacted laws concerning the dissemination of deepfaked political content during the election season. But most of those laws only require that political actors disclose AI usage; they don’t limit or prohibit its use. State lawmakers also move slowly and struggle to keep pace. “[Legislation is] a noble effort, but the technology is moving so fast,” says Sarah Kreps, director of the Cornell Brooks School Tech Policy Institute. “You are going to be addressing yesterday’s problems.”

Social media platforms and voters can still act to stanch the flow of false information. Tim Harper, the elections and democracy project lead at the Center for Democracy and Technology, says platforms should re-up their 2024 commitments to election safety. They can increase public awareness about deepfakes, help voters spot AI-generated content, and assure people that they are actively searching out that content.

“The [political] campaigns should invest heavily in using content provenance—watermarking any of their authentic press releases and videos and images—not only to give a trust signal to voters that this content is coming from them, but also to prevent the risk that they would be deepfaked,” Harper adds.

To educate Latino voters about deepfakes and the wider world of mis/disinformation, Factchequeado, a fact-checking and media literacy network, distributes reliable Spanish-language news to its media outlet partners across the country. The group also manages a WhatsApp chatbot through which users can send claims and posts for Factchequeado’s staff to verify or debunk. The service encourages users to pause before sharing potentially misleading information.

Laura Zommer, Factchequeado’s co-founder and CEO, wants voters to get into the habit of consulting trusted organizations: “You need to continue using your eye and continue training your eye to look for details that can show you a clue that it is not necessarily true or authentic content, but you don’t need to 100 percent trust your own ability.” She also cautions that while voters must be discerning about the types of material they consume and share, creating media-literate consumers is just the first step in stopping the spread of false information.

The Poynter Institute, a nonprofit newsroom and journalism training organization, does similar work. MediaWise, its media and AI literacy initiative, empowers people to critically interrogate the content they encounter and builds resources and tool kits for libraries, newsrooms, and specific demographic groups. For example, the organization created a brief instructional video for reverse image searching aimed at seniors. For teens, MediaWise’s “AI Unlocked” tool kit has tips for visually recognizing AI-generated materials and using AI responsibly.

Sean Marcus, an interactive learning designer at MediaWise, says that the best thing voters can do is remain vigilant and “expect to see more and more extreme misinformation, twisted information, and out-of-context information.” Still, he warns against fatalism. “We don’t necessarily have to accept the fact that misinformation [and] disinformation can flood in and flow in without us taking action against it, and without audiences being sharp enough to discern good information from bad.”


Finley Williams is an editorial intern at The American Prospect.