On Monday, the secretaries of state of Minnesota, Michigan, New Mexico, Pennsylvania, and Washington had a short and sweet message for Elon Musk, the billionaire disinformationist who converted Twitter into his personal megaphone: Step up and rein in the Grok AI chatbot, which spouted false information about ballot deadlines in nine states after Vice President Kamala Harris became the Democratic nominee for president.
In July, just hours after Joe Biden exited the presidential race, Grok claimed that the deadline for presidential candidates to appear on the 2024 general-election ballot had passed in nine states. In fact, those states had more than enough time to change their ballots to insert the names of Harris and her running mate, Gov. Tim Walz of Minnesota. Though the inaccurate information was available only to subscribers at the Premium level and above, it took X, a site with about 250 million daily active users, a week to deal with the problem.
As if the Grok episode weren’t bad enough, Musk, the X chairman and a supporter of former President Donald Trump, clearly enjoys sharing AI-generated video deepfakes that discredit Kamala Harris, further tarnishing X’s deeply compromised reputation. With Harris at the top of the Democratic ticket, personal attacks on the first Black woman to lead a major-party presidential ticket and the distribution of malicious information to voters inclined to support her are bound to increase.
(The secretaries of state aren’t the only ones fed up with these dangerous antics, nor is X the only place mired in deceptive election content. America PAC, a Musk-backed effort, has attracted the attention of both Michigan Secretary of State Jocelyn Benson and the North Carolina State Board of Elections for allegedly collecting data from users who believed that the group’s website would help them register to vote. The site, however, collected personal data from users in battleground states like Michigan without redirecting them to a site that would actually register them. Michigan also has laws on the books that criminalize intentionally distributing AI-generated deepfakes.)
Grok isn’t the first or only AI chatbot to produce hallucinations, but the episode illustrates how AI could introduce even more chaos into campaign season as the hours tick toward Election Day and the post-election processes beyond. To prevent future slipups, the secretaries of state urged Musk to have Grok direct users to CanIVote.org, the nonpartisan voter-information site run by the National Association of Secretaries of State, which has partnered with OpenAI, a company Musk co-founded and then left after internal battles.
In an alternate universe, X might have adopted that advice. But in the here and now, in a Northern California federal district courtroom, also on Monday, Musk resumed his vendetta against his former OpenAI colleagues by filing a fresh lawsuit. He had abandoned an earlier suit in June that accused the company, originally a nonprofit, of selling its more advanced AI capabilities to private companies.
Without apparent irony, Musk’s suit argues that the technology was to “be openly shared with the public for the benefit of all, and not for private profiteering.” He warns that the dangers of AI include “supercharging the spread of disinformation,” the very thing the secretaries of state asked him to address. Suffice it to say that the likelihood of X embracing the NASS-OpenAI partnership is in the negative numbers.
That leaves voters at the mercy of chatbots, which are ready, willing, and able to amuse them with snark in place of the facts they can’t produce.
The first clue that something is not quite right about Grok is that it’s billed as a “humorous” AI search assistant. Available to users who fork out for Premium X subscriptions, the search assistant features a disclaimer urging people to verify the information it dispenses. That’s because, as X warns, Grok is an “early version” search assistant that “may confidently provide factually incorrect information, missummarize, or miss some context.”
Those admissions alone should send a user to the nearest public library to consult human librarians, or persuade them never to click on Grok to begin with. But only the more curious will learn as much: clicking on Grok from the X home page simply invites users to subscribe, with no mention of the deficiencies of the product they’re about to buy.
Gowri Ramachandran, director of elections and security for the Brennan Center’s elections and government program, has been monitoring the abuse and misuse of generative AI in other democracies that are holding elections this year. She explains that “a number of the really prominent chatbots that are out there fortunately have taken the action that when people put in searches or prompts looking for election information, they do not hallucinate an answer and instead send the user to an appropriate secretary of state or election website, or a portal that will help them find the right place to get the information that they need, which is a great improvement.”
Ramachandran also says that X has finally moved to address Grok’s failings since the letter appeared. “Although it took maybe longer than would be ideal, my understanding is that the Grok chatbot is making some sorts of improvements as well.” She adds, “Of course, acting socially responsibly is a welcome, welcome thing, but it doesn’t eliminate the need for enforceable rules.”
AI regulation continues to lag. The concerns raised by the Harris deepfakes prompted Senate Majority Leader Chuck Schumer (D-NY) to declare that he’d like to see two bills proposed by Sen. Amy Klobuchar (D-MN) pass soon. One would require political ads to indicate that they were created using AI; a second would prohibit “distribution of materially deceptive audio or visual media that is generated by artificial intelligence relating to federal candidates” with the intent to “solicit funds” or “influence an election.”
Can Schumer successfully plow through the Republicans’ reflexive obstinacy—even though they claim to be concerned about disinformation—with another government shutdown looming in an election year? A bill with teeth would be one for the history books. More likely, however, Congress won’t be able to act. But one federal tool remains: last month, the Federal Communications Commission proposed rules that would require broadcasters, cable operators, and others to disclose AI-generated content in political ads.
In the meantime, voters are left to sort out disinformation and deepfakes for themselves.
Federal government agencies like the Cybersecurity and Infrastructure Security Agency (CISA) may advise voters to seek out trusted government sources for voting information, but there’s little to prevent bad actors from impersonating relatively unknown election officials or distributing materials online or through the mail that appear genuine but are fake.
“A lot of people may not know their secretary of state’s website,” says Danielle Davis, the technology policy director at the Joint Center for Political and Economic Studies, a Washington-based think tank that analyzes African Americans’ socioeconomic status and civic engagement. “What if someone else puts up something that looks exactly like it, but it’s not true?”
Few voters are aware that one lie could dramatically reshape the presidential contest. Communities of color are especially, but not exclusively, subject to these attacks. The distribution of inaccurate times, dates, and places to cast ballots in person has been a long-standing feature of voter suppression in Black neighborhoods. Ramachandran says voters need to obtain information from sources like .gov websites, which are created, vetted, and monitored by federal, state, and local government entities and “are very difficult to spoof,” since .gov domains are reserved for government agencies at all levels.
In 2016, the Russian government used disinformation tactics, such as images of African Americans with raised fists emblazoned with slogans, to convince African Americans not to vote for Hillary Clinton. On Facebook, Davis says, Russian operatives targeted Black audiences with ads that either ignored the election, discouraged Black Americans from voting, or advocated for third-party candidates. Certain Russian-backed Instagram profiles, Davis notes, essentially “committed blackface,” holding “themselves out to be Black, talking about Black history, Black nationalism, but they were, in actuality, a Russian person who’s not even in the United States.”
“I can definitely see there not being enough urgency in cleaning up issues that come from these platforms,” says Davis. She cautions, therefore, that Black communities must exercise care and caution when sharing information on social media sites like Instagram that seems “interesting” but could very well be false, especially if a voter can’t find any details about that “interesting” post on a reputable news site.