Paul Sancya/AP Photo
Former House Speaker Nancy Pelosi (center) and Rep. Zoe Lofgren (right) oppose the AI bill in the California legislature. Gov. Gavin Newsom (between them) must decide whether to sign it.
The California Assembly just passed SB 1047, a first-of-its-kind AI safety bill. The legislation would require producers of next-generation artificial-intelligence models to create safety plans designed to prevent mass damage or casualty events. It’s a first step to regulating AI in the state with more activity in this space than anywhere else. Supporters describe the bill as commonsense, light-touch regulation. The industry that seeks to profit handsomely off these models fiercely disagrees, and has pulled out all the stops to kill it.
In spite of this resistance, the bill has flown through the legislature. Today’s initial Assembly vote came out to 41 in favor and nine against, with support from at least two Republicans. The Assembly’s full tally will come tonight. Now the bill goes back to the Senate for a concurrence vote. It is expected to have an easier time there—an earlier version of the bill passed the Senate with 32 votes in favor, one against, and seven abstentions.
Just ahead of the final floor votes, the California Chamber of Commerce was touting a poll finding a plurality of surveyed likely voters in opposition to SB 1047.
But while the Chamber got its poll into the opinion-influencing Politico California newsletter, the Chamber did not initially publish the full poll results. (The results were quietly added sometime in the afternoon or evening.) Once I obtained them, it was easy to see why. Between August 9th and 12th, this is how more than 1,000 Californians were introduced to the bill:
Lawmakers in Sacramento have proposed a new state law—SB 1047—that would create a new California state regulatory agency to determine how AI models can be developed. This new law would require small startup companies to potentially pay tens of millions of dollars in fines if they don’t implement orders from state bureaucrats. Some say burdensome regulations like SB 1047 would potentially lead companies to move out of state or out of the country, taking investment and jobs away from California. Given everything you just read, do you support or oppose a proposal like SB 1047?
After hearing all that, 28 percent of respondents supported the bill, 46 percent opposed, and 26 percent were neutral.
The poll was conducted by Adam Rosenblatt of Bold Decision, who did not provide comment by the time of publication. On behalf of the California Chamber of Commerce, Denise Davis wrote back to me that the poll question came directly from language in the bill, highlighting text related to financial penalties developers could be exposed to. But SB 1047 applies only to unprecedentedly large and expensive AI models; specifically, to developers spending more than $100 million on training, more than the estimated cost of any known model. The California attorney general can sue if a model caused harm or imminently threatened public safety and the developer either ignored its own safety plan or adopted one that fell short of industry best practices.
As the bill’s author, state Sen. Scott Wiener, likes to point out, every leading AI company has signed on to voluntary safety commitments.
SB 1047 would apply to anyone doing business in California, the world’s AI epicenter and fifth-largest economy. Tellingly, multiple researchers at leading AI companies I spoke with scoffed at the idea that companies would modify their behavior to avoid being covered by the bill. (One called the idea of an exodus “bullshit”; another called it “complete nonsense.”)
A Democratic strategist unaffiliated with the bill wrote to me that the Chamber poll’s question “is really badly biased … This is bad polling practice, and, if anything, it’s surprising they only managed 46 percent opposition after such a push question.”
In response to the poll, Sen. Wiener wrote, “It’s the most over the top, manipulative push poll question I’ve ever seen. It describes a made-up bill no one is authoring.”
Even Politico could not ignore that other polling found sharply different views of the bill. A just-released poll from the AI Policy Institute (AIPI), a pro-regulation polling shop, found 70 percent support for the bill. AIPI conducted two previous statewide surveys of SB 1047, finding support from 59 percent of respondents in July, and 65 percent in early August.
The authors of the Politico newsletter did not reply to questions by the time of publication.
For a comparison, this is the first question in the latest AIPI survey:
Some policy makers are proposing a law in California, Senate Bill 1047, which would require that companies that develop advanced AI conduct safety tests and create liability for AI model developers if their models cause catastrophic harm and they did not take appropriate precautions.
The Democratic strategist wrote that the AIPI poll “at least attempts to present balanced arguments, and while you can argue about which exact pro and con arguments they should have used, it’s a much fairer question.”
The lobbying arm of the Center for AI Safety (CAIS), a co-sponsor of SB 1047, commissioned its own statewide poll in May, which found 77 percent support for the bill.
Teri Olle, director of Economic Security California, another bill co-sponsor, wrote to me that the Chamber poll is “questionable at best and devious at worst.” She added that it “appears to be a thinly veiled attempt to manipulate public opinion rather than an honest effort to gauge it … It’s particularly telling that even with such leading questions, they still couldn’t muster a majority opposition.”
THE TECH INDUSTRY IS NOT THE ONLY GROUP to take unusual measures to defeat the AI safety bill. On August 15th, eight congressional Democrats from California urged Gov. Gavin Newsom to veto the bill. The next day, former House Speaker Nancy Pelosi followed with her own statement, marking what appears to be her first opposition to state-level legislation from a fellow Democrat in her decades in Congress.
The congressional letter was organized by Rep. Zoe Lofgren, ranking Democrat on the House Committee on Science, Space, and Technology, and a senior member of the House Judiciary Committee. Her signature was followed by Reps. Anna Eshoo and Ro Khanna. These three members represent parts of Silicon Valley.
After analyzing Open Secrets data on each of their career top-20 contributors, I found that they have collectively taken in more than $2.7 million from leading AI companies and AI investors, with another $1.5 million from software companies belonging to industry groups opposed to the bill, for a total of $4.2 million. (The above figures include contributions from employees of these firms.) This money accounts for nearly half of their donation totals from their respective career top-20 contributors.
Google, which published its own letters against SB 1047, was by far the biggest contributor to these three representatives, with nearly $1 million donated in total. Lofgren’s daughter works on Google’s legal team. And as the Prospect and The Intercept have previously reported, Lofgren has been a leading opponent on legislation regulating Big Tech.
The congressional letter echoed much of the language and many of the arguments used by Big Tech lobbyists and in letters from their industry groups.
Nancy Pelosi’s 2023 financial disclosure shows that her household owned between $16 million and $80 million in stocks and options in Amazon, Google, Microsoft, and Nvidia. In June and July 2024, Nancy’s husband, Paul Pelosi, bought 20,000 shares of Nvidia, at an estimated total price of $2.4 million. Also in July, Paul Pelosi sold 5,000 shares in Microsoft for an estimated $2.1 million.
Andreessen Horowitz (known as a16z), perhaps the world’s wealthiest venture capital firm, has spearheaded an all-out campaign to kill SB 1047. Fast Company reported that the firm has hired Jason Kinney to lobby Newsom to veto the bill. The lobbyist got a career bump after Newsom was infamously caught violating the state’s COVID rules at Kinney’s birthday party at The French Laundry.
Y Combinator (YC), the prestigious startup accelerator previously run by OpenAI CEO Sam Altman, has engaged in formal lobbying in California for the first time, hiring its own lobbyist with close ties to Newsom.
While they position themselves as speaking up for "little tech," a16z and YC are both invested in OpenAI, which has opposed SB 1047. A16z is also invested in Facebook, where a16z co-founder Marc Andreessen serves on the board.
Juliana Yamada/AP Photo
The California state legislature is likely to pass the AI safety bill.
SINCE SB 1047 HAS HAD OVERWHELMING SUPPORT in the California legislature, opponents see a Newsom veto as their last, best hope of killing the regulation.
There is a very recent precedent for this: A California bill meant to tax large tech platforms to fund local journalism was gutted in a last-minute backroom deal orchestrated by Google. The tax has been replaced with voluntary contributions from Google and the state, much of which goes to a “National AI Accelerator.”
Opposition from the AI industry has been intense but not completely uniform. After SB 1047 was amended to address many of Anthropic's concerns, the leading AI firm published a letter stopping just short of an endorsement, writing that the updated version's "benefits likely outweigh its costs."
And in a last-minute move that surprised many, Elon Musk endorsed SB 1047 on Monday. Musk has publicly expressed concern about AI’s risks for at least a decade (though he also co-founded OpenAI and now leads xAI). Dan Hendrycks, the director of CAIS and safety adviser to xAI, wrote in Time that Musk’s endorsement came days after they spoke about the bill, which Hendrycks helped craft. (Musk’s endorsement is not out of personal loyalty to Wiener, whom he called a “pedophile-apologist” in July.)
In the U.S., tech has largely been left to regulate itself, with mixed results. Over tacos in San Francisco in February, a senior safety researcher at a leading AI company told me that the voluntary governance approaches adopted by OpenAI and Anthropic work until you get close to human-level AI, but then competitive pressures take over. And whether or not they’re right, the top companies say that human-level AI could come within five years. The leaders of all these companies, along with many of the pioneers of deep learning, also say that human-level AI could lead to extinction.
A researcher at a leading AI company wrote to me, “The conditions required by SB 1047 really aren’t that bad.” But going from nonbinding commitments to anything with teeth sets an important precedent—one that many in the industry want very badly to avoid.
Olle wrote to me that “it’s no surprise the public doesn’t trust Big Tech to regulate themselves. They can’t even be trusted to ask an honest question.”
An earlier version of this story suggested that a partner at a16z had early access to the poll data. He did not. We regret the error.