Francis Chung/POLITICO via AP Images
Louisiana Attorney General Liz Murrill and Missouri Attorney General Andrew Bailey speak with reporters outside the U.S. Supreme Court after justices heard oral arguments in Murthy v. Missouri, March 18, 2024.
When it comes to disinformation, fear, uncertainty, and doubt are all around. Labeling social media and artificial intelligence accelerants “fueling polarization [and] confusion,” Secretary of State Antony Blinken on Monday sketched out how disinformation opens fissures in democratic societies across the globe at the 2024 Summit for Democracy in Seoul, South Korea. “Citizens and candidates will face a flood of falsehoods that suffocate serious civic debate,” Blinken said. It’s a pernicious development in a year when dozens of countries, among them Iceland, India, Mexico, Algeria, South Africa, and Palau, along with the European Union, are going to the polls in national elections.
The secretary of state, himself a former journalist, offered a market basket of media-focused remedies: upholding media freedom and journalists’ safety, ensuring the independence of media regulators, promoting transparency in media ownership and distribution networks, and making investments in media and civic literacy. The U.S., Blinken noted, sponsors “TechCamps” for youth and has created a “Democratic Roadmap” to help people develop the skills to detect questionable content.
Historically, the U.S. has been quick to identify the right answers and devise exportable solutions for its international allies, even when evidence of those solutions is mostly absent on the home front. From climate to civil rights, Americans aren’t necessarily the exemplars. Similarly, on disinformation strategies, there’s a profound disconnect between our rhetoric and what little on the regulatory front can be stamped “Made in the USA” and held up abroad to anything more than polite applause.
Fear, uncertainty, and doubt (FUD) is the disinformation endgame, with chaos added for good measure. This country wallows in a cesspool of homegrown threats to its own elections and to what was once a peaceful transition of power. That once-stable environment has been obliterated, twisted, and tweaked for maximum disruptiveness by foreign and domestic actors. Social media and generative AI make chipping away at the American project that much easier. Instead of being a beacon, America is an illustration of the profound risks that disinformation poses to elections and related institutions in democratic societies.
Blinken’s disinformation virtue signaling was a discomforting overture to the Supreme Court oral arguments in Murthy v. Missouri. The First Amendment disinformation case is Exhibit A in this disinformation disconnect. Biden administration officials, from the president to the surgeon general, attempted to persuade social media platforms to address the distribution of COVID-19 misinformation and “false statements” on their services, exhorting them to “do better,” as Principal Deputy Solicitor General Brian Fletcher put it before the high court on Monday.
Instead, those exhortations led to a Fifth Circuit Court of Appeals ruling that the federal government had violated the First Amendment by exerting undue pressure on social media companies’ content moderation policies. The administration contends that it neither threatened the companies nor tried to punish them for declining to remove content deemed harmful. The individual Louisianans and Missourians and their attorneys general argued that the federal government “coerced” X (formerly known as Twitter), YouTube, Facebook, and others into taking punitive action against them by limiting the visibility of their posts, actions they claimed amounted to abridgments of their freedom of speech.
An amicus brief filed in Murthy on behalf of the secretaries of state of Arizona, Colorado, Connecticut, Maine, Minnesota, New Mexico, Oregon, and Vermont underlines the dim outlook for combating disinformation in 2024. “[F]or the coming election season, that dialogue has essentially ended,” the brief declared, “likely influenced by fears of legal liability for communicating too closely with the government. Forcing social media platforms to block all direct contact with government officials, then, will squelch uncontroversial, commonplace communications. That, in turn, will increase the risk that dangerous, and even illegal, falsehoods about elections and voting will spread unchecked.”
An unusual majority of the justices, led by Chief Justice John Roberts, appeared receptive to the administration’s view that the federal government has a role to play in trying to persuade, not coerce, social media companies to act in the public interest when circumstances warrant. The court will likely render its decision in June.
U.S. policymakers who could offer exportable pathways in the disinformation fight struggle to have their concerns heard, validated, and addressed by appreciable numbers of members of Congress. Days before the Blinken speech and the Supreme Court oral arguments, a quintet of election and voting rights officials, from Alabama, Michigan, Nebraska, and South Carolina, plus the NAACP Legal Defense Fund, took their worries and fears to Capitol Hill and the Senate Rules and Administration Committee.
“The template that was laid out in 2016 [by Russia’s attempt to disrupt that year’s presidential election] was literally pennies on the dollar,” says Sen. Mark Warner (D-VA), a Rules Committee member. “It’s a heckuva lot cheaper to use technology to disrupt and undermine another nation-state’s elections than it is to buy airplanes, submarines, and tanks.”
But the comparatively underfinanced state election agencies require dollars, not pennies, and Washington has not showered those offices with the funds they need to combat disinformation, much less the roster of other threats. They stand little chance against domestic or international actors with superior know-how, tech, and money. They want more and better assistance from the Election Assistance Commission, as well as the Department of Homeland Security and its Cybersecurity and Infrastructure Security Agency.
“Where misinformation and foreign threats will affect our elections is not just through AI,” said Michigan Secretary of State Jocelyn Benson, “but through running multiple, multi-scale attempts to fool voters about their rights in an effort to cause confusion, chaos, and instill fear [and] to deceive voters, to divide us as Americans and to deter us from believing in our voice and in our votes.”
States like Michigan have made it a crime to knowingly propagate AI-generated deepfakes, and their officials want federal regulation as well. The AI Transparency in Elections Act, sponsored by Rules Committee Chair Amy Klobuchar (D-MN), who shared her own experience of receiving incorrect polling information from ChatGPT, is pending in the committee. In a dysfunctional Congress, the proposal, like the decision in Murthy, won’t deliver relief on a timeline that makes a difference against the waves of disinformation expected to be unleashed as election season revs up.
If there is a germ of optimism, it is that should Congress ever find itself with exportable AI regulatory solutions, members will have to credit the European Union. On March 13, the European Parliament passed the Artificial Intelligence Act. The globe’s first-ever regulations on artificial intelligence create “obligations for AI based on its potential risks and level of impact.” Six months ago, The Washington Post reported that Dragos Tudorache, a Romanian member of the European Parliament, dazzled and dismayed the Senate’s AI Insight Forum, hosted by a four-member working group led by Senate Majority Leader Chuck Schumer (D-NY) that is nudging Congress into some semblance of consciousness on AI.
“While the United States was congratulating itself for starting the regulatory process,” the Post notes, “Europe was basically finished. And its package of rules was so good that Congress would soon be forced to choose between spending years trying to top it or copying the homework of an obviously superior student.”
That said, the AI Act is product safety regulation, which doesn’t cover the full range of threats from the technology. There are still areas, like AI’s role in consolidating corporate power, where the U.S. could make an impact. But Congress hasn’t done much of anything yet.
Confronted by a tsunami of known generative AI harms, American policymakers from Foggy Bottom to Capitol Hill should leave their homilies on the shelf. Disinformation demands action commensurate with the enormity of the FUD campaigns ripping up democratic social fabrics. Rhetorical flourishes have never stopped bots, deepfakes, or black hats.