
The only way to reduce the stakes of content moderation is to increase the number of outlets for that content.
On January 7, 2021, Mark Zuckerberg announced that then-President Trump’s Facebook account would be indefinitely blocked because the president had condoned his supporters’ violent storming of the U.S. Capitol the day before. The president was a danger to democracy and the peaceful transfer of power, Zuckerberg declared. Four years later, also on January 7, Zuckerberg announced a reversal in Meta’s content moderation policies: Fact-checking would be replaced by community notes, and the algorithms that remove hate speech would be dialed back. Moderating content was censorship, he declared.
Inadvertently, Zuckerberg had made the strongest case possible for breaking up Meta. Many reactions to Zuckerberg’s announcement focused on the wisdom of the new or old content moderation schemes. But the content moderation wars are beside the point. Beyond outright illegal content, there is no right or wrong approach to content moderation. Just like restaurants, offices, or classrooms, online platforms may thrive under permissive or strict rules. Whatever the rules are, however, one person should not have the power to unilaterally set those rules for millions of Americans because he feels “a cultural tipping point.” Simply stated, Meta’s content moderation policies are not incompatible with democratic discourse. What is incompatible is Meta itself, as are other speech monopolies. For three reasons, content-moderating monopolies like Meta are a threat to democracy.
First, platform scale and market concentration raise the stakes of individual content moderation decisions and platform design choices. Those higher stakes are a problem because, as Zuckerberg rightly points out, content moderation is inherently prone to mistakes. When platforms assess billions of posts, they will take down some that should have stayed up and leave up some that should have been taken down—even when acting with the best intentions. Some accounts will be mistakenly blocked, while others that should be blocked will be overlooked. When Facebook blocked President Trump’s account, the company may have acted too early or too late. Or maybe the platform should not have blocked the account at all.
Stricter approaches to content moderation (like Meta’s in 2021) or laxer ones (like Meta’s in 2025) do not lower these stakes and can hardly hope to reduce the risk of error. All they do is favor one type of error over another. It “is a trade-off,” Zuckerberg acknowledges. The only way to reduce the stakes of content moderation is to increase the number of outlets for that content. If there were 50 Metas, no single company’s decisions would matter much; none of their inevitable mistakes would have potentially devastating consequences for public discourse.
Second, market concentration facilitates “cooperation and cooptation” between speech platforms and the government—an affront to free-speech values. Both in 2021 and 2025, Zuckerberg’s pivotal announcements came just days before the inauguration of a new administration, when he could be clear-eyed about the regulatory and political risks of his actions. In 2021, Zuckerberg harshly condemned President Trump for his actions on January 6th. Despite countless earlier violations of Facebook’s terms of service, Zuckerberg suspended the president’s account indefinitely only after it became clear that Trump’s attempts to remain in power had failed. Meta aligned itself with the incoming Biden administration’s preference for fact-checking. In 2025, Zuckerberg expressly cited recent election results to justify changes that are tailor-made for the policy preferences of the incoming Trump administration. In passing, the Meta CEO took a swipe at the outgoing Biden administration, lamenting difficulties in standing up for free speech globally “over the past four years when even the US government has pushed for censorship.” Referring to the removal of content as “censorship” and declaring that “allegations of mental illness or abnormality … based on gender or sexual orientation” are now permitted, Meta embraced the incoming administration in both tone and explicit policy preferences. Zuckerberg also restructured his policy team and hired Trump administration confidant Joel Kaplan as global policy chief.
Third, Zuckerberg’s announcements, both in 2021 and 2025, testify to the outsized individual power over discourse. Equipped with a $900,000 watch but no public mandate for his policies, Zuckerberg proclaimed the new speech rules for 250 million Facebook and 160 million Instagram users in the United States. No single politician commands a fraction of that power over discourse, despite holding an actual public mandate. Enabled by its monopoly position, which leaves users with neither the option of exit to a comparable platform nor a meaningful voice in its rules, Meta engages in illegitimate private governance.
The content moderation project that seeks to mitigate speech harms by relying on platforms to self-regulate has failed abysmally. It is finally time to create a market structure “conducive to the preservation of our democratic political and social institutions,” as the Supreme Court stated in 1958 in finding a company in violation of antitrust law. Focusing on content over structure and cooperation over regulation ignores market power, disregards the platforms’ economic interests, naïvely trusts the benevolence of billionaires, and mistakes cheap opportunism for loyalty to democratic norms. There are important content moderation policies for government to pursue, but any new content moderation directives must be paired with sweeping structural reforms.
It’s not just the content of speech that must change. It is the power that monopolistic platforms hold over speech that must be diminished. There is no alternative to breaking up Big Tech.