How to regulate artificial intelligence is becoming a major public issue. One set of questions involves national security. Will AI companies be allowed to do business with, and in, China at their pleasure? Nvidia has just secured permission to ship its H200 chips to China and is trying to get President Trump’s approval for its even more powerful Blackwell chips. These chips are already being smuggled into China using third-country subterfuges.
Another threat involves the use of AI in espionage, the creation of fake news, and the disruption of U.S. elections. A third issue is consumer protection. Walmart just won a patent for AI-based dynamic pricing.
There are also challenges like AI’s impact on jobs, AI porn, chatbots leading to suicides, and AI copyright infringements. Fake music created by AI bots has pilfered untold sums from music streaming sites and legitimate musicians. All of this will only worsen as AI becomes more sophisticated.
The AI industry wants self-regulation—basically no regulation—and the Trump administration is totally captive to the industry. Trump’s top officials on AI policy are protégés of Palantir’s Peter Thiel, who epitomizes all that is pernicious about Big Tech. AI czar David Sacks, who co-authored a book with Thiel, and Office of Science and Technology Policy chief Michael Kratsios, Thiel’s former chief of staff, have followed the industry’s wishes on AI to the letter.
In January 2025, Trump rescinded a Biden order that directed companies to notify the government when they developed models that posed health, economic, or national security risks, which Trump’s order characterized as barriers to innovation. The administration’s own AI blueprint, unveiled last July, emphasized promoting AI use at home and as an export industry. “America must once again be a country where innovators are rewarded with a green light, not strangled by red tape,” Trump said at a tech industry event.
It is increasingly hard to serve both Big Tech and the national interest.
The recent Anthropic case has shown that when self-regulation threatens to turn ethical, it will be slapped down. The Pentagon squeezed Anthropic out when its CEO, Dario Amodei, refused to allow its AI product, Claude, to be used for mass surveillance of Americans or for autonomous lethal weapons.
In the absence of federal safeguards, several states have acted. The Colorado AI Act includes data privacy standards and protections against “algorithmic discrimination.” Other states, including California, Illinois, New York, and Utah, have passed laws restricting the use of AI in automated employment decisions; limiting how chatbots can be used in mental health services; and protecting people from the unauthorized use of their images, voices, or likenesses. The Trump administration’s response is to press for federal preemption, unsuccessfully so far.
You might expect that national Democrats would be more alert to these threats. But just as in the case of crypto, AI has scads of money to spread around, and Democrats are divided.
IN DECEMBER, TO REINFORCE THE COZINESS of key House Democrats with the AI industry, House Democratic Leader Hakeem Jeffries announced a new “House Democratic Commission on AI and the Innovation Economy.” Reps. Josh Gottheimer (D-NJ) and Valerie Foushee (D-NC) were named as co-chairs, plus Zoe Lofgren (D-CA), ranking member of the Science, Space, and Technology Committee, and caucus vice chair Ted Lieu (D-CA). All are allies of the industry, and at least one—Foushee—was given the assignment so she could get financial support from AI super PACs for a primary challenge that she barely survived.
Jeffries’s effusive statement, sounding a lot like Trump, declared, “The brilliance and ingenuity of the innovation community has positioned America to lead the world in artificial intelligence and pioneer potentially life-changing breakthroughs in medicine and other fields of human endeavor that will benefit humanity. It is important that American companies continue to thrive in this area.” Jeffries also acknowledged the need to “prevent bad actors from exploiting this transformative technology,” but support for the industry was the headline.
Of course, there is a lot more to the question of how to protect the public than keeping out “bad actors.” Absent safeguards, the entire industry is potentially a bad actor.
An emerging key Democratic player on AI is Jake Sullivan, Biden’s former national security adviser. Sullivan holds Harvard’s newest distinguished professorship, a chair named for Henry Kissinger. Its full name is the Kissinger Professor of the Practice of Statecraft and World Order. The chair was endowed by Ray Dalio, one of the richest hedge fund operators. Dalio, of Bridgewater Associates, has urged Trump to reach a trade deal with China.
What would you think Jake Sullivan is teaching at the Kennedy School? Emerging challenges in U.S. foreign policy? The world order after Trump? A history of U.S.-Russia relations after the fall of the Soviet Union?
Nope. Sullivan is teaching one course, on artificial intelligence.
Sullivan does not have a history of expertise on AI. But he is a very rapid learner, and his gravitation toward AI policy is astute, for AI policy will remain at the center of several national security and domestic regulatory issues.
Here’s the course description:
This course will examine national and international policy efforts to respond to and shape rapid advancements in artificial intelligence capabilities and applications. We will study the strategies of the major players when it comes to harnessing the benefits and managing the risks of AI—the U.S., China, the European Union, the U.K., India, Japan, the ROK, and the Gulf nations, as well as countries in Africa, Latin America, and Southeast Asia. We will go in depth on the U.S.’ and China’s respective approaches to what both see as an AI race—including the “promote” side (e.g., public investment and national champions) and the “protect” side (e.g., export controls)—while also exploring the two countries’ calculus on placing guardrails around the development and diffusion of AI.
Wow! I’d like to take that course, wouldn’t you?
Harvard, burned by various conflict-of-interest embarrassments as well as buildings named for felons, has a disclosure policy on outside income, but the policy turns out to be more of a tease than a disclosure. Look at Jake Sullivan’s Kennedy School page and you will see a tab with the odd name “Transparent Engagement.”
It discloses that Sullivan, among other outside interests, consults for the hedge fund D.E. Shaw & Co. How much do they pay him, and for what kind of work? That is not disclosed on Sullivan’s “transparency” page.
I sent an email to the Kennedy School asking Sullivan’s salary, the amount of his consulting income, and how much Dalio had given Harvard to endow the chair. I put the same questions to Harvard’s media relations office and to Sullivan’s personal assistant. All declined to provide the information.
We do know from the disclosures that Larry Summers made in 2008 that he received $5.2 million from D.E. Shaw for about a day a week’s work, or nearly ten times Summers’s roughly $600,000 Harvard salary at the time. I was told by one source that Shaw pays Sullivan in the seven figures, but I haven’t been able to confirm it.
This raises the intriguing question of why D.E. Shaw would put Jake Sullivan on its payroll as a consultant. Unlike some hedge fund operators like George Soros, who make bets on geopolitical risks, Shaw is famously a “quant” firm, trading on quantitative models.
It’s a good bet that Sullivan could be a very senior official in the next Democratic administration, though Biden’s extreme indulgence of Israel’s obliteration of Gaza may give some Democrats pause. If you are worth $8.8 billion, as David Shaw is, why not spend a million, give or take, so that you can get the next Democratic secretary of state or White House chief of staff on the phone if you ever need to?
This may be just a coincidence, but D.E. Shaw is also extremely interested in AI. D.E. Shaw’s top investments are in Nvidia, followed closely by Microsoft, Palantir, and Advanced Micro Devices, all major AI players.
A benign interpretation is simply that Sullivan is seeking to gain expertise in AI; that teaching a course on AI is one way to do it; and that the affinity of interests between Sullivan and D.E. Shaw is one more convergence that justifies a nice retainer. Learning probably takes place both ways.
Is this corrupt? It’s the way the world works, or at least how America works. Step aside from a senior government post while you await your next one, and there are myriad opportunities. Still, it’s hard not to pull your punches just a bit when you are juggling your possible future in government with your paid advice to major private players.
But not all Biden alums are consulting for industry. Former labor secretary Julie Su is working for New York City Mayor Zohran Mamdani, along with the former director of the Bureau of Consumer Protection at the FTC, Sam Levine. Former FTC Chair Lina Khan, former Justice Department antitrust chief Jonathan Kanter, and the CFPB’s Rohit Chopra have not taken corporate work; Chopra is working with state attorneys general on antitrust and consumer protection cases. Cashing in does not have to be an iron law.

AS FAR AS THE SUBSTANCE OF AI POLICY is concerned, Sullivan appears to have co-authored just one high-profile article: “Geopolitics in the Age of Artificial Intelligence,” written with Tal Feldman and published in Foreign Affairs in January.
The piece speculates about the future of AI, and lays out several possible scenarios. Rather than clearly setting out a forceful argument or viewpoint, the article reads more like one of those “options papers” that staffers provide to leading politicians, as if Sullivan is keeping his own options open, too.
By contrast, in his other writings, co-author Feldman is crystal clear about what he thinks, and he is sounding an alarm. He favors strong regulation of AI.
Feldman, now 35, is the son of Jewish émigrés from the Soviet Union. He was the AI wunderkind of the Biden administration. He led a team in building AI tools for the State Department, developed AI tools for the Pentagon, and built AI models for the Federal Reserve. Rather than going to Silicon Valley after he left government, Feldman enrolled in Yale Law School.
In a piece written for MIT Technology Review with Aneesh Pappu, Feldman warned that the next generation of AI will be used in elections in far more persuasive and insidious ways, and called for tough regulation: “We need a strategy that treats AI persuasion not as a distant threat but as a present fact. That means soberly assessing the risks to democratic discourse, putting real standards in place, and building a technical and legal infrastructure around them. Because if we wait until we can see it happening, it will already be too late.”
In another article co-authored with Laura Galante, who was director of the Cyber Threat Intelligence Integration Center (CTIIC) under Biden, Feldman wrote, “Through its Standards 2035 initiative, Beijing is moving aggressively to shape the rules that govern global infrastructure, from industrial protocols to wireless networks. The U.S., by contrast, still treats standards as a narrow technical matter and lacks a strong interagency framework to tie them to the challenge of strategic competition. This is not some peripheral bureaucratic battle. It is a bid to define the default settings of the modern world.”
This is the sort of sensibility and expertise we need in the person who should be making AI policy for the next administration.
In the age of Trump, where multibillion-dollar conflicts of interest and paydays are the norm, making a few bucks on the side from a hedge fund while holding a chair at Harvard seems pretty tame. It’s just normal revolving-door stuff. Trump’s flagrant grifting almost makes you nostalgic for the good old days of gray-area conflicts.
But as Feldman’s writings suggest, it is increasingly hard to serve both Big Tech and the national interest. Clearly, it would be far better to have Jake Sullivan in charge of AI policy than the Trump allies drawn from the worst people in the AI industry. Even so, I’d feel better if Sullivan, who makes around half a million a year for teaching one Harvard course, decided to forgo outside retainers that give privileged access to billionaires.