Before the polls even closed on Election Day, misinformation was spreading on social media platforms to sow doubt about the American election process and its results. Hashtags and all-caps claims of an election “RIGGED” or “ROBADO” (Spanish for “stolen”) continue to be posted and shared, along with “Stop the Steal” groups, where people who question the integrity of vote-counting procedures and election processes connect and trade falsehoods. Even Saturday’s election call by news networks in favor of President-elect Joe Biden hasn’t slowed the flurry of election misinformation and disinformation online.
The proliferation of election misinformation may catch unawares people who are outside these Facebook groups or whose YouTube recommendations shield them from misleading videos. But those who study and track misinformation online have seen these claims building for months. Attention on these platforms and their moderation policies has been constant, since they have been operating under the shadow of their failure to halt misleading health information at the start of the COVID-19 pandemic and their 2016 vulnerabilities to foreign interference and exploitation of user privacy.
“I think in the past, by the time that Congress was able to look at what happened with social media platforms, tech really was already changed. They [representatives] were taking too long to have the insights read and I think that’s really different this year,” says Jiore Craig, vice president and director of digital practice at Greenberg Quinlan Rosner.
The difficulties with preventing misinformation now could play into policy proposals in the next Congress. “I think there are some members in the House that are more familiar with social media than others,” Craig says, “but I do think that gap is rapidly changing and I do think that Democrats will probably focus on it and have a lot of material and background to work with.”
Twitter, Facebook, and YouTube, which is owned by Google, all laid out varying thresholds for how they would handle misinformation and disinformation on their platforms ahead of the election. But to social media trackers, these measures were not only unclear but also appear to have mostly fallen short.
On election night, a stream of YouTube live streams, tweets, and Facebook posts pushed several debunked conspiracy theories onto the internet, some of which later made it onto Fox News. Voters in Arizona were warned about “Sharpiegate,” a conspiracy theory suggesting that ballots filled in with a Sharpie would not be counted. Eric Trump tweeted a video of sample ballots being burned while claiming they were real ballots cast for his father; it reached 1.2 million views. Media Matters for America tracked 60 specific cases of voting and election misinformation leading up to and on Election Day, 25 of which came from Pennsylvania alone.
The Trump campaign’s legal team has also been trying to harness the power of these dubious claims, creating a hotline and website where people can report their experiences with voter fraud or other claims of cheating in the election. (Democratic partisans and TikTok teens have relentlessly prank-called these hotlines, so much so that the campaign had to change the number.) The “red mirage,” the nickname for the early stretch of vote-counting when partial results showed Trump ahead, has been a driving force for the Trump base online, as well as for the campaign’s litigation efforts.
The inability to stop these incidents of misinformation wouldn’t be as big a problem in a rich media ecosystem with many sources of news. But because Facebook and Google have so much power to distribute information, falsehoods can easily spread without much of a counter. The problem, in other words, is the monopoly power of the platforms.
Misinformation ahead of the election could be categorized in two ways, Craig explains: attacks specific to candidates and political parties, and attacks on the voting and election process. The latter, she says, “was so big and had so many specific subplots that it really had to be a category on its own.”
When it comes to Spanish-language political content, there has also been a steady increase in volume compared to four years ago, with many Spanish-speaking influencers parroting the same falsehoods that the far right disseminates in English, explains Jacobo Licona, disinformation research lead at Equis Labs. Claims that Biden is not yet the president-elect or that there was voter fraud in swing states sometimes spread even further when they’re in Spanish.
“What we often see is that same content as [in] English, in Spanish content does not get flagged as false information or it just lives longer in these places without getting taken down,” Licona says. “[The platforms] also need to be flagging it in Spanish as well, and they need to have a focus in Spanish language and even other languages to make sure that that content is also being monitored.”
One Colombian Facebook personality—whose page was created in March of this year—posted a video claiming Trump’s victory was being censored by Twitter. According to The New York Times, this video got more than 500,000 views, more views than Russian troll–created Black Lives Matter posts in 2016. The page has continued posting that it’s not possible to call the election yet.
Despite the failures, the spread of misinformation could have been much worse, says Media Matters President and CEO Angelo Carusone. “Overall I thought that there were improved antibodies amongst the larger media echo chamber,” he says. “I think that everybody did a slightly better or much better job than they traditionally would have done, and as a result it had a net effect of keeping a lot of these things at bay.”
This can be attributed to heightened pressure on social media companies to enforce their own policies against misinformation, disinformation, and incitement to violence, in addition to growing bipartisan scrutiny of how these platforms may have operated irresponsibly during past elections.
At least one “Stop the Steal” Facebook group with more than 300,000 members was taken down after there were posts calling for violence. Others have remained, including Stop the Steal 3.0, which is now a private group with a smaller following of 51,500 members. The group’s “About” blurb is only one sentence: “Let’s see how big we can make this before Facebook censors our free speech!”
Twitter shielded misleading or false information—including many posts from President Trump’s personal account—with a gray warning box. Similarly, a pop-up on any post about the election on Instagram (which is owned by Facebook) directs users to curated information about the results, sourced mostly from Reuters and Edison/NEP.
The least active platform was YouTube, which prioritized content from mainstream news sources over influencers but did not remove false election content, including a video that claimed Trump won re-election. YouTube did ban former Trump adviser Steve Bannon’s account after he said he would like to behead Dr. Anthony Fauci and FBI Director Christopher Wray. But then Bannon made an appearance on someone else’s YouTube channel, circumventing the ban.
“I emphasize all the time that social media platforms are going to tell a story about all their efforts around the election and hopefully we won’t have short memories,” Craig says. “There are a lot of people who are hugely upset with how they handled things this year and I think that will definitely be something that plays out next year, no matter how they spin what they did in November.”