A Marine Corps unmanned aerial system, used as an intelligence-gathering asset (Department of Defense)
This article appears in the November/December 2021 issue of The American Prospect magazine.
The Age of AI: And Our Human Future
By Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher
Little, Brown
The term “artificial intelligence” is widely recognized by researchers as less a technically precise descriptor than an aspirational project that comprises a growing collection of data-centric technologies. The recent AI trend kicked off around 2010, when a combination of increased computing power and massive troves of web data reanimated interest in decades-old techniques. It wasn’t the algorithms that were new as much as the concentrated resources and the surveillance business models capable of collecting, storing, and processing previously unfathomable amounts of data.
In other words, so-called “advances” in AI celebrated over the last decade are primarily the product of significantly concentrated data and computing resources that reside in the hands of a few large tech corporations like Amazon, Facebook, and Google. At the same time, AI technologies are increasingly shown to be brittle, systemically biased, and applied in ways that exacerbate racialized inequality.
The Age of AI works to take the debate about artificial intelligence off the table by obscuring the relevant technologies and the political economy behind them. Its title alone—The Age of AI: And Our Human Future—declares an epoch and aspires to speak on behalf of everyone. It presents AI as an entity, as superhuman, and as inevitable—while erasing a history of scholarship and critique of AI technologies that demonstrates their limits and inherent risks, the irreducible labor required to sustain them, and the financial incentives of tech companies that produce and profit from them.
While the book’s intellectual contribution is marginal, the political agenda of its authors merits careful consideration.
Henry Kissinger needs no introduction. Even at 98 years old, he remains an influential voice in foreign policy despite his sustained commitment to U.S. exceptionalism, military dominance, and the entrenchment of the military-industrial complex.
Eric Schmidt is the former chief executive of Google and former executive chairman of its parent company Alphabet. He has worked over the last decade to encourage investments by the military and intelligence establishments in Big Tech infrastructures and to market their products, including Google’s AI technologies, as indispensable to U.S. military prowess. He is also a billionaire philanthropist whose Schmidt Futures underwrites positions throughout the federal government, as well as many tech-related civil society organizations and initiatives. Over the last several years, he chaired the National Security Commission on Artificial Intelligence (NSCAI), an advisory board to Congress and the Pentagon comprising Big Tech executives, military and intelligence professionals, and academic elites.
Daniel Huttenlocher is the dean of MIT’s Schwarzman College of Computing, an AI-focused mega-lab that was launched thanks to a $350 million gift from foreclosure profiteer and longtime Trump supporter Stephen Schwarzman, the co-founder of the investment group Blackstone. Huttenlocher is also board chair of the MacArthur Foundation, which funds progressive nonprofits and initiatives focused on tech accountability.
This book provides Eric Schmidt and his co-authors a new occasion for a well-funded PR campaign, during which they will be given opportunities to present their views to large audiences, and likely to brief policymakers and other political actors.
In this way, The Age of AI should be understood as a companion to the work that the NSCAI has already done under Schmidt’s leadership. In March, the NSCAI issued a report that echoed Cold War rhetoric to recommend $40 billion in federal investments in AI, warning that the U.S. must maintain AI supremacy or risk being eclipsed by China. The NSCAI report and The Age of AI serve Big Tech’s agenda through three rhetorical strategies.
First, they position Big Tech’s AI and computing power as critical national infrastructure, across research and development environments, and military and government operations. Second, they propose “solutions” that serve to vastly enrich tech companies, helping them to meet their profit and growth projections, while also funding AI-focused research programs at top-tier universities. This serves to bring Big Tech and academia closer together, further merging their interests and deterring meaningful dissent by a new wave of researchers critical of Silicon Valley. Third, and most importantly, by providing arguments against curbing the power of Big Tech companies, the book frames these companies as too important to the American national interest to regulate or to break up. Those arguments read as a rebuttal to the antitrust advocates and tech critics within the Biden administration who have committed to checking the concentrated power of Silicon Valley.
OVER THE LAST FIVE YEARS, a chorus of researchers, policy advocates, and tech workers has pushed a rejection of Big Tech into the mainstream. Movements calling for bans on facial recognition, worker surveillance and control, surveillance advertising, algorithmic content amplification, and other harmful applications of artificial intelligence have gained momentum. Significant battles have been won in the process.
A complementary turn to tech antitrust and a growing willingness by the Federal Trade Commission to crack down on concentrated power and deceptive practices are also opening questions about the future of ubiquitous AI deployment, and the surveillance business models and concentrated resources on which it relies.
This background is important to understanding why The Age of AI has as a central theme establishing artificial intelligence’s inevitability. Throughout, this refrain is relentless: AI is “already ubiquitous,” “undeniably, inevitably” set to “change both humans and the environments in which we live.” AI “may soon prove indispensable” and cannot be “uninvented.”
This recitation is necessary because AI is not inevitable. In fact, the public is recognizing that it has a choice in whether AI is developed and widely adopted, and this poses a threat to the Big Tech interests whose funding, revenue, and growth projections depend on ubiquitous AI.
Just as The Age of AI goes to great lengths to emphasize AI’s inevitability, it also warns of the dangers—even cowardice—of AI refusal. The authors assert that “[a]ttempts to halt its development will merely cede the future to the element of humanity courageous enough to face the implications of its own inventiveness,” while tech whistleblowers are “leakers and saboteurs.” Adopting AI is a moral imperative, such that “[o]nce AI’s performance outstrips that of humans for a given task, failing to apply that AI—at least as an adjunct to humans—may appear increasingly decadent, perverse, or even negligent.”
WITH ALL OF ITS SUPERLATIVES, this book describes something bordering on the divine, which bears no resemblance to the automated decision systems or even the large language models and other so-called cutting-edge approaches that are currently developed by AI companies. The reader is offered a false portrait of AI, described as a fundamental break in human history, one auguring a new epoch involving “the alteration of human identity and the human experience of reality at a level not experienced since the dawn of the modern age.” We are told that AI’s “functioning portends progress toward the essence of things—progress that philosophers, theologians, and scientists have sought for millennia.”
At the same time, The Age of AI sidesteps the vested interests responsible for AI, in the process eliding Big Tech’s monopoly over data and infrastructural resources. For the authors, Big Tech companies, as “network platform operators,” are providing a public service “on a scale that represents a civilizational event.” In contrast, government is painted as ill-equipped to regulate and oversee these companies. The message of the authors is clear: Regulation is dangerous, especially regulation that would hamper AI’s development.
The Age of AI is also, quite explicitly, offering product placement for Google’s AI products and capabilities. Most of the examples presented are produced by Google, its parent company Alphabet, or companies it has acquired: AlphaZero (an AI model developed by DeepMind, famous for its prowess at chess and Go), BERT (a significant large language model developed at Google), Google Assistant, Google Translate, Google Search, AlphaFold (an AI model that predicts protein structures), DeepMind’s machine-learning-driven reduction in data center energy use, and MuZero (derived from AlphaZero). AI efforts from Amazon, Apple, Microsoft, and Facebook get shout-outs, but the examples named for Facebook and Microsoft are not particularly flattering: flawed content moderation AI in Facebook’s case, and the racist chatbot Tay in Microsoft’s.
A surveillance tower is positioned beyond the barrier at the U.S.-Mexico border between San Diego and Tijuana, Mexico. (Tomascastelazo/Wikimedia Commons)
TO CLAIM, AS THE AGE OF AI DOES, that this book fills a “gap” in “basic vocabulary and concepts for an informed debate about this technology” requires erasure of an extensive journalistic and academic literature. Acknowledging these writings would undermine the authors’ grand prognostications, the hazy image of AI as all-powerful and (largely) beneficial, and the Big Tech–friendly political agenda this book is working to bolster.
Selling this agenda, in other words, requires some willful ignorance. References to race, gender, and labor are largely absent even as the co-authors explore historical terrain where racism, patriarchal power, and colonialism are central. For example, the authors celebrate the Dutch East India Company and the stock exchange where its shares were traded as an example of a positive network effect, without remarking on its genocidal colonial practices, or its role in the Dutch slave trade.
The book’s erasure of white supremacy, colonialism, and slavery from its historical overview is mirrored in the minimal engagement with the extensive research that has exposed how AI replicates and amplifies racialized, gendered, and other forms of inequality. There’s no mention of the AI-powered wall at the United States’ southern border, or police and law enforcement use of AI to hunt and track protesters, or the exploitative use of AI to control workers by companies like Uber and Amazon, even though these harmful and oppressive applications of AI are by now well documented.
The book also fails to mention climate change, or the significant climate costs of large-scale AI systems. To acknowledge climate change would tear a hole in the book’s narrative, pointing to an existential threat that does not come from China or the mythical specter of Chinese dominance.
ERIC SCHMIDT’S LATEST ENDEAVOR, the Special Competitive Studies Project (SCSP), launched in early October 2021, just in time to be central to a press tour arranged around the book. Described in quasi-governmental language and with a “bipartisan board of national security leaders,” SCSP is, in fact, a self-funded, shadow lobbying organization created to advance the interests of the tech industry. By filling the project’s board and leadership positions with much of the same cast that constituted the National Security Commission on Artificial Intelligence, this initiative inherits the patina of an official government endeavor whose work deserves serious consideration.
Schmidt says that the project is modeled on the Rockefeller Special Studies Project (SSP), which Henry Kissinger led in the 1950s and used to advocate for a vast expansion of U.S. military spending. The SSP was also privately funded by one of the most powerful men in the world, Nelson Rockefeller. That program advocated for a resource-intensive Cold War arms race, based on the premise that the alternative was apocalypse at the hands of the Soviet Union.
Schmidt and his associates could be read as trying for a repeat of the SSP, drawing on the version of AI presented in The Age of AI and reheated Cold War urgency that focuses on China as the looming threat. This time, however, we need to call the bluff, rejecting the mystified portrait of AI that is central to this agenda, and naming related influence campaigns for what they are.
A more rigorous treatment of AI that included problems of discrimination and the climate and labor costs of producing AI would suggest very different trade-offs. It would suggest, as well, answers to questions of security that look more like international solidarity and equitable resource distribution, and less like technological brinkmanship and a mindset premised on a new Cold War.