
After grinding through the House, Donald Trump’s big beautiful reconciliation bill is now pending in the Senate, where weary Republicans and even wearier Democrats probe and ponder. As committees prepare to mark up the package this week, a particularly egregious and unprecedented carve-out for artificial intelligence has bubbled to the surface. Tucked into the House-approved legislation is a ten-year moratorium on state regulation of AI.
This could be wielded to prevent regulation of anything involving an algorithm or machine learning, from AI-generated deepfakes to strategies to set personalized prices based on extracted data. More than 30 states adopted AI-based legislation or resolutions last year, according to the National Conference of State Legislatures.
While the Trump administration has engaged in wide-scale deregulation by demolishing federal agencies and purging employees, a preemptive blockade that not only restricts future regulation but wipes current laws off the books breaks new ground.
But the provision has run into trouble, not because of its dubious budgetary impact but because some Republicans are defending their states’ ability to protect their populations. “I’ll try to do everything I can to kill that,” Sen. Josh Hawley (R-MO) told NOTUS last month. At a hearing on the NO FAKES Act, a deepfake regulation bill, Sen. Marsha Blackburn (R-TN) backed up Hawley, pointing to successful regulations in Tennessee, a hotbed of the music industry, that ban the unauthorized AI replication of artists and musicians. The ELVIS Act, one of the first statewide AI regulations, updated a prior state law that came about after a lawsuit brought by the Presley estate over the use of the King’s reproduced likeness.
The Tennessee legislation points to the patchwork of laws that have managed to pass, despite vast sums spent on lobbying by Big Tech giants. Erasing these laws and leaving Congress—inundated with even more special-interest spending than the states—to pass comprehensive AI protections would all but ensure open season for AI to continue its vast experimentation with private user data, faces, and voices.
Given the sheer size of the reconciliation bill and the proliferation of special interests wanting to protect their favorite pieces, senators will be forced to choose which respective hill to die on. Whether they make their stand on AI regulation or cut deals for other priorities remains to be seen.
But at the same time that the reconciliation bill is hacked apart and sewn back together in the Senate, momentum is building for a stand-alone bill that would open the floodgates for regulation from the inside.
Introduced by Sen. Chuck Grassley (R-IA) earlier this month and joined by Sens. Blackburn, Hawley, Chris Coons (D-DE), Amy Klobuchar (D-MN), and Brian Schatz (D-HI), the AI Whistleblower Protection Act would create new protections, and in some cases monetary incentives, for whistleblowers who disclose wrongdoing by the AI firms employing them.
Despite his heartland conservatism and decades of service as an avatar of the New Right, Grassley has long held the view that regulation is critical and best carried out through oversight, specifically by whistleblowers. When an insider from the FBI, EPA, or corporate America comes forward to testify to Congress, it is almost uniformly Grassley to whom they are directed. This bill from Grassley seeks to broaden the protections afforded to AI developers and users by striking restrictive nondisclosure agreements and mandating back pay and reinstatement for employees retaliated against by their employers.
Because the reconciliation bill only bans state regulation of AI, the whistleblower bill would be fair game for Congress, if it can earn enough support in both chambers and get Donald Trump’s signature.
In a letter to OpenAI founder Sam Altman last August, Grassley cited the public letter signed by OpenAI employees warning of the failure of Altman’s firm to self-regulate. Grassley also pointed to widespread reporting that OpenAI rushed through safety testing in order to meet product release deadlines.
The risks posed by shoddy safety testing range from the existential to the mundane, from spitting out bad advice that could lead an impressionable youth to do something stupid, to explicitly mapping out the path to building chemical or biological weapons, to simple math errors that, in aggregate, could lead to catastrophic business failure. As senior OpenAI researcher Jan Leike wrote in a resignation tweet after the launch of GPT-4o, “safety culture and processes have taken a backseat to shiny products.”
Conservative attacks on AI have often focused on these shiny objects: demonic voice replication, AI manipulating (or abusing) children, and a general anti-Christian aura surrounding our new cybernetic gods. But in the promise of the AI Whistleblower Protection Act, the band of AI-critical senators is also targeting the vast economic risk posed by an unregulated AI economy.
In the wake of the 2008 financial crash, the Dodd-Frank Act included a provision to create a financial crime bounty-hunting office inside the Securities and Exchange Commission. The law created a pathway for whistleblowers to earn a cut of whatever penalty is issued to a bank or financial institution as the result of their disclosures, incentivizing regulation from within.
Since the program’s implementation, the SEC has recovered billions of dollars from firms for securities violations, and disincentivized further wrongdoing by reminding corporations that corruption has a price, and that the SEC is willing to pay corporate employees millions more than their companies ever will.
Just as the financial crash revolved around a bubble of dirty home mortgages repackaged as certified fresh, OpenAI is similarly built on a mountain of shit. As Bryan McMahon wrote for the Prospect last month, OpenAI’s entire business model is premised on spectacular and vague breakthroughs that will grow its annual revenue to $100 billion, at which point the company will stop losing money.
As OpenAI’s technological ambitions grow, so too does the financial cost of building ever more data centers and computing hubs to support its models. Simultaneously, OpenAI is integrating itself into every crevice it can squeeze, expanding with a ravenous hunger that increasingly approaches “too big to fail” status. And when a company is too big to fail, as we know from recent history, tax dollars can be used to inflate the failing firm back to its original size.
Whistleblowers who are coaxed out of hiding by the promise of protection and compensation are in no way the end of the battle to regulate AI. But as congressional whistleblower testimony on both banks and tech firms has demonstrated, encouraging employees to cry foul serves as an internal check on market-dominant firms, and can serve as an off-ramp for towering financial bubbles. And perhaps more important, it’s a framework that Grassley, the longest-serving Republican in the Senate, is comfortable with.
In early May, OpenAI’s Altman testified before the Senate during a hearing on AI competitiveness. “I think it is very difficult to imagine us figuring out how to comply with 50 different sets of regulations,” Altman said, previewing the provision in the House bill that would strip states of their ability to regulate AI. “One federal framework that is light touch that we can understand and that lets us move with the speed that this moment calls for seems important and fine,” he concluded. As Altman seeks endless speed for OpenAI products, it now rests with a small band of senators to slow things down.