This article appears in the October 2023 issue of The American Prospect magazine.
Digital Empires: The Global Battle to Regulate Technology
By Anu Bradford
Oxford University Press
Building Back Truth in an Age of Misinformation
By Leslie F. Stebbins
Rowman & Littlefield
The online spread of misinformation and deliberate disinformation is worsening. COVID-19 denial and vaccine rejection, disinformation about the Russia-Ukraine war, the growth of unregulated generative AI, and lies spread by politicians and social media sites make that clear. The 2024 elections in the U.S. and elsewhere have policymakers (or at least Democrats) worried about what is coming next. Any hope that the problem will just go away, or that Big Tech will “clean up the mess they created,” has evaporated.
There is no shortage of initiatives to shore up democracy and clean up the information ecosystem. Nobel laureate Maria Ressa’s ten-point plan for fixing the internet, the 2023 launch of an International Panel on the Information Environment, and multiple recommendations from the Forum on Information and Democracy, organized by Reporters Without Borders, all offer strategies. Many of these initiatives call for enforcement of data privacy rights, together with audits and fines to incentivize social media platforms to stop promoting potentially harmful falsehoods (while still respecting freedom of expression), as well as increased transparency of algorithms and the strengthening of the media ecosystem as a whole. The United Nations Secretary-General’s office has launched consultations on its Code of Conduct for Information Integrity, while UNESCO is already organizing worldwide consultations and gathering comments in an attempt to develop guidelines for regulating Big Tech while still protecting freedom of expression.
Tracking all the research and policy proposals can feel overwhelming. Two timely books, thankfully, guide us through the thicket.
Building Back Truth in an Age of Misinformation, by Leslie F. Stebbins, largely examines the pros and cons of “demand-side” solutions: fixes that focus primarily on individual responsibility. Anu Bradford’s Digital Empires is about “supply-side” solutions: those that are more structural in nature. Bradford’s approach is the more persuasive.
Back in 2017, I began writing about what I called “demand- and supply-side solutions” to help my Columbia University students make sense of the universe of fixes that had arisen for the problem of online mis/disinformation. The votes for Trump, Brexit, and Bolsonaro had led to a plethora of forums, papers, conferences, foundation funding, and projects on the subject, and it became necessary to categorize the ideas floating around.
Demand-side solutions, I pointed out, focus on the individual. They emphasize media literacy training, fact-checking, labeling mis/disinformation, and journalists’ efforts to engage with audiences and build trust. Such solutions skirt the problem of how to regulate the tech giants; Google and Facebook even funded some of these initiatives for PR reasons and in an effort to forestall regulation.
Demand-side responses are the American way. Instead of a role for government, the onus is on the audience not to be stupid by acting on claims that are obviously untrue. In the same way, instead of authorities researching and regulating chemicals that cause cancer, Americans are encouraged to make individual decisions not to smoke.
The rest of the world wasn’t buying it. The European Union called in the social media platforms early on and told them they had to start taking down illegal content. When the companies didn’t do much, the Europeans began regulating. Their “supply-side” approach looks more to the producers, suppliers, and purveyors of information.
Supply-side solutions can be broken down into two parts: first, suppressing poor-quality, dangerous, or illegal information; and second, creating and/or promoting high-quality information.
Laws like Germany’s 2017 Network Enforcement Act, which fines social media companies for failing to remove illegal content despite multiple warnings, and the European Digital Services Act of 2022 aim at suppressing false or potentially harmful mis/disinformation, as do defamation suits against purveyors of falsehoods and the use of AI to screen and filter information. Efforts to promote high-quality journalism include Google’s attempts to surface accurate information in search and on YouTube, government support for public broadcasters, and programs to fund local news.
Of course, the divide is not absolute, and overlaps between supply- and demand-side solutions do exist. Fact-checking creates a supply of reliable information—but that doesn’t matter unless audiences actually want it.
In Building Back Truth in an Age of Misinformation, Stebbins, a former librarian, devotes several chapters to the importance of media literacy for young people and building trust in journalism. She is realistic about the limits of this approach. “Learning to think more critically about the information we consume—whether we are five or fifty years old—is no match against sophisticated microtargeting, expertly doctored images, and sensationalized content. Equipping people with media literacy skills will not decrease the amount of misinformation online. Media literacy skills are important but can only have limited effectiveness against platforms that are prioritizing profits over serving the public.”
She concludes by calling for alternatives to Big Tech, and by promoting small-scale local platforms, which she terms “New Digital Public Squares.” Stebbins optimistically describes grassroots attempts to found community-based public squares, such as the Front Porch Forum in Vermont, where people can send helpful and friendly messages to their neighbors. As an example, Stebbins quotes a message about a lost ball of mozzarella that “Ken in Montpelier” picked up and wrote a cheery post about, saying, “I saw it rolling down the sidewalk … We have it safe in the fridge, hopefully it wasn’t for tonight’s dinner!” Stebbins cites research that, unsurprisingly, has found that this network “improves social cohesion and is improving the resilience of local Vermont communities.”
Local community platforms may be better for society than Facebook or Musk-controlled Twitter, but they’re clearly not going to solve the democratic deficit facing much of the world or defeat the deluge of disinformation. Stebbins’s book argues for the need to rebuild journalism and to require platforms to serve the public interest, but whether Americans like it or not, this will inevitably require government policies and regulation.
Digital Empires, by Columbia University law professor Anu Bradford (famous for coining the phrase “Brussels Effect,” referring to the outsize effect that European Union legislation has on the rest of the world), describes what governments are already doing. Bradford argues that three different paths to tech governance have emerged—those of the European Union, the U.S., and China—and says that in the coming years countries will have to choose between them, or their path will be chosen for them. Bradford explains that the U.S. digital economy followed a market-driven approach. Believing that free speech, a free internet, and innovation are essential, the U.S. left the internet unconstrained. This lack of regulation has made the U.S. an outlier compared to much of the world. Because the U.S. has not regulated Big Tech, other countries are now well ahead. In Europe and the U.K., patience with self-regulation by the Very Large Online Platforms (VLOPs) has worn thin. The European Union is implementing the Digital Services Act, designed to curb online harms by making platforms produce regular risk assessments and plans to address the risks, and the U.K. is likely to pass the Online Safety Bill by the end of 2023.
While European attempts to regulate online privacy have not been as effective as hoped, regulators, and Bradford, believe that regulation will continue to evolve and that enforcement will improve. The EU has a long history of enforcement and the ability to take measures that the U.S. government cannot. It can levy, and has levied, large fines, and it is already staffing up the bodies that will oversee Big Tech. The Digital Services Coordinators, who under Article 38 are responsible for national enforcement and implementation of the DSA, will be appointed in 2024.
At the other extreme from the U.S., China has been able to keep a grip on its tech companies in a way U.S. antitrust regulators could only dream of. It fined Alibaba a whopping $2.75 billion for antitrust violations. China has used tech regulation to advance its industrial policy, expand surveillance of its citizens, build out its social credit system, and impose extreme censorship and repression of human rights. Not only has China done this at home, but it uses its foreign aid and technological prowess to spread this agenda abroad. China’s Digital Silk Road exports systems of surveillance to Kenya and the Philippines through its “smart city” technology, while its wiring of digital infrastructure in countries across Africa will create dependency on Chinese technology, helping Chinese tech companies grow their markets. As Bradford points out: “By nurturing leading tech companies such as Alibaba, Huawei, JD.com, and Tencent, China has shown to the world that political freedom is not necessary for economic success.”
Which path should prevail? For Bradford, there is only one answer. The EU is the region focused on social cohesion and fairness. Antitrust legislation shows citizens that there is a level playing field and that big companies don’t get special treatment. EU member states also address the question of how to protect democracy from the harms caused by online mis/disinformation while still guaranteeing freedom of expression; their laws reflect the belief that free speech doesn’t mean the right to incite genocide or destroy elections. The EU has constitutional protections for free speech but balances that right against other fundamental rights, including labor rights, the social rights of platform workers, and the economic rights of smaller companies. “Even though the EU shares the US’s commitment to protecting free speech, it is prepared to restrict that fundamental right in the name of other fundamental rights and important public policies, be it human dignity, personal privacy, public safety, or democracy.”
The European Union’s rights-driven agenda has made it the world leader in privacy regulation. So, too, the EU’s aggressive antitrust policies, including regular fines of Google and Meta, create benefits for the rest of the world, especially when countries like Australia and Japan follow Europe’s lead. Bradford explains that EU policies will spread, in part because it is easier for Meta and Google to roll out one standard across many markets than to have to customize according to different countries’ laws. Detroit building cars to meet Californian emission standards rather than creating different models for different states is a comparable example that springs to mind.
Similarly, Bradford believes that the new Digital Services Act’s requirements on algorithmic transparency and user choice will spread globally, partly because it is easier for Meta and Google to have unified standards across markets. The DSA requires companies to give researchers access to data, and that could have an effect globally. Researchers all over the world will be able to see more detailed information about platforms’ design choices, microtargeting, and algorithms. This legislating of microtargeting and privacy will inevitably affect the business models of Google and Facebook, which rely on targeted advertising and user impressions for their revenue. The DSA includes a ban on microtargeting of children and bans targeted advertising based on religion, gender, or sexual preferences.
The U.S. could learn from Europe and give up on its libertarian hands-off approach to Big Tech, but it’s not clear that it will. This is more about politics than ideology. Bradford writes that the U.S. tech companies are so powerful and have so much lobbying clout that they may succeed in remaining largely unregulated at home. However, she’s such a believer in the European model that she hopes U.S. regulators will become tougher: “It is possible that this backlash against tech companies—unfolding in the US but also around the world—will pose the biggest threat yet to the American market-driven regulatory model. The coming years will reveal whether the ongoing debate about the downsides of techno-libertarian ideals, and the proposed legislation that harnesses that sentiment, will usher in a new era of regulation in the US.”
The 1990s were marked by optimism that self-governance by Big Tech would work; a second wave in the 2000s was optimistic that the state could assert authority over the tech sector. But the tech-lash has changed all that. Bradford writes: “We are now moving into an era where there is increasing consensus that tech companies’ self-governance does not work and governments need to get involved, but there is increasing doubt about governments’ ability to do so effectively.”
Despite the massive concentrated power of the tech giants, Bradford still believes the state can exercise muscle: “The codes, community guidelines, and any other rules written by large tech companies ultimately remain subject to the laws written by governments possessing the coercive authority to enforce compliance with those laws.”
In the U.S., there has been some state regulation, the main victory so far being California’s extensive 2018 privacy law. Otherwise, no major federal laws have been passed by Congress. To name a few that have stalled: the Journalism Competition and Preservation Act, which would push Google and Meta to pay for news they use; attempts to modify Section 230 of the 1996 Communications Decency Act so that companies can be held liable for harms caused by the mis/disinformation they circulate; competition laws that might break up Big Tech; “know your customer” laws; and proposals that would limit campaign advertising online.
Stebbins’s proposals for more public education and rebuilding journalism—which Europe is also tackling—are useful, but not enough. Bradford makes a persuasive argument that, for anyone worried about online mis/disinformation, corporate power, and human rights, the EU path is the only way forward. European regulators have spent years developing muscle, and they are the only authorities (along with the U.K.) prepared to rein in Big Tech both as a monopoly and as a destroyer of democracy.