
At the end of Dr. Strangelove, the classic exploration of a very American mindset, the U.S. and Soviet Union blunder into an exchange of nuclear weapons. The only recourse is to send some citizens deep underground into secure mines for their safety, in the hopes of repopulating the planet and resurfacing once the radiation dissipates. But even in this desperate moment, Gen. Buck Turgidson pleads with the president to protect U.S. mineshafts as critical national-security infrastructure.
“It’d be extremely naïve of us to imagine that these new developments are going to cause any change in Soviet expansionist policy!” Turgidson thunders. “We must be increasingly on the alert to prevent them from taking over other mineshaft space, in order to breed more prodigiously than we do, thus knocking us out through more superior numbers when we emerge! Mr. President, we must not allow a mineshaft gap!”
Then the world explodes.
I was reminded of this scene while watching the meltdown of tech stocks upon the entry of several open-source Chinese AI models from a company named DeepSeek, which were reportedly made at far lower cost than state-of-the-art U.S. versions. (Mind you, DeepSeek’s versions have been out for a month; only when they were put into an app did anyone wake up to panic.) America spent the past several years trying to prevent a mineshaft gap: tightening export controls on semiconductors, throwing massive amounts of money at developing AI, facilitating the buildout of data centers for computing power while damning the consequences for energy and water usage. It turns out none of that worked.
What this embarrassment really speaks to, as in Strangelove, is the absurdity of preoccupying ourselves with a mineshaft gap in the first place. We are sinking an insane amount of time and treasure into making software that may never really evolve too far past digital assistants for lazy people. And we’re attempting to wall off that innovation from the rest of the world in a way that has apparently now been rendered irrelevant.
U.S. strategy didn’t just involve cash, but prohibitions designed to keep higher-end technology from leaking out to China. It seems like that didn’t matter either. Whether because the U.S. was too late, because there was no real means to control leakage of intelligence, or because China made do with the computing power it had, it really dampens the strategic argument of this and the last administration in their bids to control AI innovation.
There are real-world impacts to this mistake. Much of our stock market runs on AI hype. The five leading Big Tech firms comprise more than a quarter of the entire value of U.S. equities, and if you combine them with the IT sector, you’d get closer to half. Their fervor to win the “race” on AI is in many ways the engine that is currently driving the U.S. economy. And when that’s revealed as a bubble, we all feel the pop.
That’s why you are seeing the invocation of a “Sputnik moment” in discussions of DeepSeek. In American mythology, this occurs when the nation’s betters realize they are slipping behind in global dominance and must have more treasure thrown at them to compensate. In other words, the promise of winning the AI race necessitates spending more money, and the peril of losing the AI race also necessitates spending more money. Every problem rolls back to the same conclusion of stuffing the oligarchy with more cash.
That was one purpose of the “Stargate” announcement made last week—in the Oval Office, if you needed more symbolism—that commits another $500 billion (if you believe SoftBank’s numbers, and you shouldn’t) to establishing even more infrastructure for AI. Much of that infrastructure was already being built, mind you, and also rendered obsolete or at least gratuitous within three days of the announcement. The various DeepSeek models are freely available with open “weights” (though the training data is not public); any company, or indeed any individual with a reasonably powerful computer, can run and tweak them today if they wish.
The Sputnik analogy also conveniently elides the nature of the U.S. failure. The companies that are trying through brute force to “win” AI are uniquely unequipped to build anything valuable. They see the problem through the lens of grabbing control of all innovative technology, and holding it for ransom behind a high wall of market power. It’s basically the opposite of Silicon Valley’s origin stories of two guys in a garage outwitting the majors. In their hearts, the incumbents know that they aren’t set up to deliver; better products are not the way to win in modern American capitalism. As Lina Khan said last year, “To stay ahead globally, we don’t need to protect our monopolies from innovation—we need to protect innovation from our monopolies.”
Approximately all of Stargate is intended to benefit OpenAI, the self-anointed guy in the garage in this space. I won’t go into tedious detail about the rivalries and recriminations between OpenAI’s Sam Altman and other tycoons like shadow president Elon Musk, mainly because DeepSeek just made it even more tedious and beside the point. But suffice it to say that there is no upstart here; OpenAI is essentially a full partner of Microsoft, a pattern we see with other “startup” AI firms and their cloud computing colleagues.
You can grumble that DeepSeek’s story is dubious, and it probably is; I certainly don’t buy that it was built for $5.6 million. You can also be creeped out by DeepSeek saving your keystrokes, or any other nexus of surveillance and machine learning. But as Matt Stoller notes, China has allowed competition to dictate success in its technology sector rather than finance. The leap forward in AI, to the extent that this is one, mirrors similar leaps in social media algorithms and electric vehicles. The absence of such competition here may be driving our meager results.
There are several levels of fallout. We stupidly engineered a very concentrated stock market that rises and falls in part on the level of AI joy or despair. Stock ownership also happens to be concentrated, but inevitably workers will pay some price for a downturn. And as Joe Weisenthal notes, for those bought into the American dream of perpetually increasing financial assets, you’re now forced into cheerleading for Big Tech to figure out AI faster. What happens if it doesn’t?
Second, there’s an upside-down nature to the whole thing. If we can get whatever it is we’re supposed to get out of AI for less, traditionally that would be positive for the U.S. companies trying to do it. That’s productivity growth: more for less. That these AI behemoths never saw that as an option speaks to the blind alleys of our economy, focused more on throwing around one’s weight than building something faster and cleaner than the competition. That one of these dudes is screaming about government efficiency while the U.S. tech sector has been outplayed by a startup with a tiny fraction of its capital is deeply revealing.
Third, we might want to step back and ask what exactly we’re racing to build here. Investors, a panicky bunch, have been given only rough sketches of artificial general intelligence and its use cases. Are we preventing a mineshaft gap just to prevent a mineshaft gap? Is the spending an end in itself? What are we buying when we buy AI dominance? What is all of this supposed to do?
I don’t believe in holding back the tides of progress. I also know bullshit artists when I see them, and in Silicon Valley these days you have to watch your shoes to dodge the excrement. For me, the current offerings of AI rise only to the level of a cool trick, but I do see the chance for them to develop into something transformative. I just don’t trust the crowd we’ve empowered to pull it off. And I don’t trust betting the futures of most of the U.S. population on their success.