For the second time in as many days, Meta has been found liable in court for negligence. On Wednesday, a jury in Los Angeles found that Meta and Google-owned YouTube failed to warn users of harms related to constant use of their platforms. That followed a jury verdict on Tuesday in New Mexico, which fined Meta $375 million for failing to protect adolescent users from predatory adults on its social media platforms Facebook and Instagram.
You can look at these rulings a couple of ways. Meta made $60 billion in revenue last quarter alone: $375 million is about half a day's worth. Extrapolate that fine across the entire U.S. population, though, and it amounts to a full quarter of revenue, which is significant. But collecting fines on that scale would take a long time, and this federal government, which just appointed Meta founder and CEO Mark Zuckerberg to a council on AI policy, isn't about to make it happen. The Los Angeles trial, meanwhile, yielded $6 million in compensatory and punitive damages for a single user, so the potential cost extrapolated across all Americans is immense, if theoretical at this point.
But there are many other implications that signal, if not the end of social media, then a downward turning point in its history. The platforms' traditional protection from liability is over; their goodwill with the public has dissipated; their ability to buy off public officials has limited reach; and their attempts to transfer the fortunes they made on their platforms to the next generation of technology may become mired in a sea of litigation.
The first important aspect of these trials is that they took place at all. Social media and other websites have benefited for 30 years from Section 230 of the Communications Decency Act, which, unlike the rules governing traditional publishers, shields sites from liability (with some exceptions) for content posted by their users. But these trials worked around that restriction. The private lawsuit in Los Angeles was a personal injury case, and the case brought by the state of New Mexico had similar charges. They faulted Meta (and in the L.A. case, YouTube) for knowing that design features like autoplaying videos, sending continual notifications, or tuning the algorithm for maximum addiction would keep younger users on the site and expose them to harms. That avoids the Section 230 shield and shifts the legal landscape to things social media sites affirmatively did, rather than their mere hosting of misconduct by other users.
There are at least two other cases using this legal strategy, one in the Northern District of California and one in New York City. The California case is a multi-district litigation that dates back to 2022, with hundreds of combined cases involving children and teens, school districts, and states. The verdicts this week show that cases of this type can pierce the armor of social media sites, which have hitherto operated as if they were invulnerable to liability for anything that happened on them. Indeed, those liability shields likely increased the platforms’ recklessness. “When a platform abuses market power and user data, putting profit ahead of the customers, you get these issues,” said Jason Kint of Digital Content Next, which represents publishers.
The second important factor is that these were both jury trials. Whether because of inherent conservatism or a social milieu that values business, judges are often reluctant to punish companies that violate the law. The preposterous remedy from Obama appointee Judge Amit Mehta in the case that found Google to be monopolizing search, which did not even bar the company from continuing the core conduct it was using to monopolize search, is a good example.
Juries of ordinary people do not have the same biases and preoccupations as judges. They heard evidence of children being abused, of Facebook and Instagram and YouTube users becoming addicted to scrolling, suffering from anxiety, depression, body dysmorphia, and worse, and determined that the companies made choices that led to these outcomes. (Incidentally, the fact that a jury is hearing the Live Nation monopolization case makes me more hopeful about the states’ chances in that trial.)
Part of this stems from the fact that people have come to hate social media, even if they're heavy users of it. (Present company included.) Social media use is declining, one sure sign of disapproval; in polls, the share of respondents reporting no social media use is rising among both seniors and younger Americans. Polling also routinely shows widespread concern about the effects of social media and a desire to live without it. And those polls capture people who haven't been put into a courtroom and forced to listen to evidence that social media companies know how to addict their customers, and that this has a direct through line to users becoming depressed, suicidal, and at risk for exploitation, something the companies also know.
The attorney general of New Mexico, Raúl Torrez, will use the second phase of the Meta trial, which will determine whether Meta created a “public nuisance” in the state, to seek specific changes to Facebook and Instagram platforms. These will include “changes to the design features of the platform itself, real age verification, changes to the algorithm … and fundamentally a demand that they do business differently in New Mexico,” he said on CNBC.
That could mean that Facebook couldn't autoplay videos in New Mexico, or bombard users there with notifications. I doubt that Meta would design something solely for New Mexico's two million residents, so compliance would likely mean changes to the entire U.S. product. And even if Meta did narrowly target the changes, other states are bringing their own cases. While Donald Trump has been Big Tech's errand boy, clearly many states are recognizing that the harm to their constituents and worsening public sentiment demand that they act.
The real context for these cases is that technology is already moving on to the next generation. The companies that dominated Web 2.0 want to carry that dominance over to AI. But unlike with Web 2.0, they begin with a skeptical public that wants protections, a skepticism informed by the slow drip of awful news about how these companies managed their previous platforms.
The liability breaking through now is going to be a lead weight on companies trying to make the AI transition. Meta already made one wrong turn, shutting down its metaverse project, the very thing it renamed the entire company to reflect, after $80 billion in losses. The sheer size and dominance of Meta allows it to absorb such a blow, but continued lawsuits over its legacy product will be an ever-present pressure, especially given the enormous capital spending needed for data center construction and the high-end chips used to create AI models.
The cases have been compared to what Big Tobacco suffered in the 1990s, which ended with a master settlement that fundamentally changed how the industry operated. Tobacco use plummeted in the aftermath. Philip Morris is still a big company, but it sits several rungs below the top corporate giants these days. A post-social media era, in which the litany of lawsuits is combined, the business model is forced to change, the companies most associated with it are permanently hobbled, and the user base drifts away, could be upon us.