Lucky you: For the length of time it takes you to read this column, you don’t have to think about Donald Trump, Matt Gaetz, Mike Huckabee, Pete Hegseth, Colorado Gov. Jared Polis being “excited by the news that the President-Elect will appoint @RobertKennedyJr to @HHSGov,” the headline writers at The New York Times, or all those miscreants in the Democratic Party whose refusal to follow your preferred strategy for the Harris-Walz campaign delivered the nation into the bosom of fascism. Nothing strictly political at all.
Enjoy!
Or maybe not.
Today’s object of ire is web searches using large language models, which marketers have cleverly designated “AI.” More precisely, my beef is with people who desperately want to believe in artificial intelligence’s merit as an information broker. It started when I read a social media post from a former editor of mine whom I deeply respect. He cited his favorite “AI-powered search engine,” then did one of those parlor tricks where you ask it to write a poem with a series of silly constraints. The results were, allegedly, splendid.
As I argued on the same social media platform, it astonishes me that critical thinkers would promote the use of AI search engines. I offered two reasons. The first is that a basic feature of an AI “search,” in contrast to a conventional search, is that it renders the source of the information invisible: a meal that comes shrink-wrapped, so to speak, only without the ingredients listed on the label. A commenter pointed out to me that this isn’t right, at least when it comes to Perplexity.ai and Microsoft Copilot, which affix footnotes to their summaries.
But that’s also unsatisfying. They can’t footnote all their sources. That’s the point of a large language model: It draws from millions of sources. (It’s precisely this largeness that is responsible for a demerit of large language models I won’t belabor: the colossal amounts of energy these “data vampires” consume.)
My second complaint concerned what is colorfully referred to in the biz as “hallucinations”: when stuff they churn out is wrong. Enormous mischief ensues, with consequences that I find terrifying: not just for a writer like me, whose professional identity is bound up in words, but for anyone with a stake in there being more accurate information about the world rather than less. Which is to say, all of us.
WE RECENTLY PASSED AN AI WATERSHED. Now, when you enter a Google search in the form of a question, it answers by placing AI-generated summaries, called “AI Overviews,” at the top of the results. Like this: “When did Google start putting AI at the top of its search answers?” Answer: “Google began prominently displaying AI-generated summaries, called ‘AI Overviews,’ at the top of search results, in May 2023 as part of their ‘Search Generative Experience’ (SGE) feature, initially rolling it out in the United States and then expanding to other countries; this essentially puts AI-powered answers directly at the top of the search page for certain queries.”
And that’s a nifty Exhibit A of the problem right there. Dig one level deeper—or a few inches lower. The first offering in the “People also ask” feature is “when did Google AI overview start?” Answer: “Google launched AI Overviews in the U.S. on May 14, 2024.”
Which is not, in case you missed it, “May 2023.” Two pieces of directly contradictory information answering the same question. Which do you believe: your lying eyes, or your lying eyes?
A further complication greets those who make their way down to the list of conventional search results. The first is a post from the Google blog dated May 14, 2024, called “Generative AI in Search: Let Google do the searching for you.” (Even algorithms have to advertise, I suppose.)
The second is a puffy story from The Verge dated over a year earlier, from May 10, 2023, called “The AI takeover of Google Search starts now.”
Only if you make your way to the third result, a Washington Post article from May 30, 2024, does this farrago of confusion recede. It’s entitled “Google scales back AI search answers after it told users to eat glue.” A complex story—a public launch, an embarrassing recall, then a quiet relaunch with no fanfare—has been utterly obscured. Quite clarifying, for those of us who stubbornly think search engines are supposed to make it easier, not harder, to understand reality.
You might remember that embarrassing moment for Google from six months ago. People started testing things for themselves and were served up-is-down information, like the claim that the Supreme Court’s Dobbs decision reaffirmed the right to choose an abortion. Others came across hallucinations by accident. One of my social media followers told the story of a pilot friend of hers who asked ChatGPT for ideas for hotels, food, and attractions for their refueling stops on a coast-to-coast trip. They learned of a legendary diner just a short walk from one of the airports, celebrated for decades for its scrumptious chicken-fried steak. There were citations from rave reviews. They landed with mouths watering, walked to the indicated spot, and found a field of soybeans. It was the kind of mirage a parched desert wayfarer might hallucinate in a Bugs Bunny cartoon.
One supposes Google insists all of that has by now been fixed. But I still couldn’t answer the basic question of when Google started up the AI Overviews again. Last week appears to be the answer; or at least that was when I first noticed it. Further research seems to be required, but not via Google; they are no longer any help at all.
THOSE WERE THE RESULTS WHEN I TRIED IT last Friday; had I tried it Saturday, Sunday, or today, the results would likely have been different. That is a problem: If the same question is answered differently each time, which answer is one supposed to take as authoritative? This novel kind of memory hole made it hard for us to create links that illustrated my points. And if the wording of the question is changed even slightly, the results can differ a great deal. Together, these issues open onto another problem, perhaps the biggest of all. Answers rendered, to use the industry term of art, in “natural” language—in complete sentences resembling what a human would write—appear, well, natural. As the answer; as the Truth. With a capital T, which rhymes with D, which means definitive.
A list of results in an old-fashioned search, on the other hand, reads as inherently provisional: as ingredients of an answer that one’s active intelligence has to shape. That shouldn’t matter to a sophisticated consumer of information, who can be expected to understand that anything a computer spits out is not necessarily the Truth. What’s depressing, however, as I learned when I criticized AI in that social media post I referred to above, is how unsophisticated some smart people showed themselves to be when it came to AI. This is especially so when the questions are not factual but interpretive.
In the post, I offered an excruciating (to me) example: the first time I tried putting ChatGPT through its paces, not by asking it for a sonnet or a Keatsian ode, but “What does Rick Perlstein believe?” One of the things the confident listicle that came forth offered was actually the opposite of what Rick Perlstein believes: namely, the rank cliché that the biggest political problem America faces is “polarization.” No. Rick Perlstein actually believes the biggest political problem America faces is fascism, and that fighting it requires more polarization. And I should know. I’m Rick Perlstein!
A lot of people responded, with evident irritation, that they hadn’t had problems with the searches they’d carried out. Someone else destroyed that argument. They described the time they asked an AI search engine to summarize one of their essays. The summary was about 85 percent correct, which, they pointed out, was effectively almost as bad as being zero percent correct. There’s an old saw in the advertising trade: Half of all ads will be ineffective, but you can never know in advance which half. It’s the same thing here: If you want to take an AI search result to the bank, as it were, “getting 15 percent wrong makes it 100 percent useless.”
People get things 15 percent wrong all the time too, of course, even in peer-reviewed papers, one of my AI-defending followers pointed out. But we know that much of what people say is wrong; learning to judiciously distrust experts is one of the things that makes a well-educated person well educated. A computer spitting out shrink-wrapped packages of fact, though, is something it is way too easy to fool ourselves into trusting implicitly—a stubborn sort of hallucination in itself.
I love my most active followers on this particular social media platform, an intelligent, thoughtful, and humane bunch who have taught me a great deal. This time, many disappointed me. Some really, really want to trust AI. Even when it led them astray right in front of their eyes.
The guy whose original post set me off suggested I was whining like a buggy whip manufacturer beefing about Henry Ford. People started showing the technology off, almost with pride of ownership, typing “What does Rick Perlstein believe?” into the search boxes and gloating over the results. And, yes, some were impressive. (“His work critiques the media’s tendency to avoid addressing the deep structural conflicts in American society by favoring narratives of consensus over conflict.” Thank you!)
But the longest and most elaborate result—352 words over seven numbered paragraphs—was so full of bad information that … well, it might take a book to fully explain how mistaken it was. Which raises another important point. It was mistaken in ways that were subtle and hard to summarize.
Someone pointed out that the question was the problem, that AI is a tool, and that smart consumers know how to maximize its value. “Name some arguments Rick Perlstein has made” might be better, for example. But that still, to my mind, falls into a basket of questions upon which AI should follow the advice from Ludwig Wittgenstein. He advised philosophers that when it comes to certain kinds of ineffable questions, “Whereof one cannot speak, thereof one must be silent.” Questions about anyone’s opinions fit that description. “Does Google have a vice president for epistemology?” one of my smart friends asked. Great question!
Let’s start at the very beginning, with that little word “believe.” Ask ChatGPT “What does Rick Perlstein believe,” and it can only answer from Rick Perlstein’s published output. But if you write with a political goal, as I do, you might choose to hold back some sincerely held belief, the better to stick to your most persuasive points. The philosopher Leo Strauss built a career on the argument that the great philosophers of the Western tradition hid their true beliefs in esoteric language, inaccessible to ordinary readers and intended only for an elite few. What’s more, almost everything you can read from me was edited, and sometimes my editors are so much more erudite and intelligent than I; sometimes (grrrrrr) not. [Editor’s note: This editor is among the former.]
And does ChatGPT know writers don’t write headlines?
The problem extends down to language’s subatomic levels. (That’s a metaphor, Mr. Robot. Please make sure everyone knows Rick Perlstein doesn’t believe language is actually made out of atoms.) Literary theorists refer to “textuality”: the way written words (spoken words, too) are not some pristine uncorrupted signal of inner beliefs but are subject to all sorts of corrupting noise built into the technology of writing itself. Heidegger called language “the house of being”; but sometimes we can’t access what’s inside it at all. And what is an “author” anyway? It was Freud, or maybe it was Shakespeare, who first systematically demonstrated how fundamentally a self can misunderstand even itself …
Okay, my University of Chicago soul is showing. Let’s get down to the actual words on the screen.
From ChatGPT, we learn that Rick Perlstein writes “books like Before the Storm (about Barry Goldwater’s 1964 presidential campaign).” That’s an authoritative-sounding factual mistake right off the bat: Only 258 out of 660 pages are about Barry Goldwater’s 1964 presidential campaign. And that this Perlstein fellow believes certain things about “Populist conservatism: He emphasizes how the conservative movement’s success lies in its ability to channel grassroots populist anger.” Bzzzzzzz! ChatGPT must have missed my 2017 critique, presented at a conference on “Global Populisms: A Threat to Democracy?” about the dangerous distortions that come from the overuse of the word “populist” in describing right-wing authoritarianism. (He thinks variants of the word “demagogue” are far more useful than “populist,” but hasn’t published anything about that, so ChatGPT doesn’t know he believes that.)
Other points in the answer, again, are impressive! Sophisticated! Which, again, makes things worse, because this raises the answer above the threshold of appearing authoritative, to those who know a little about the subject—but who can’t know which one-third of the words in the result are not, in fact, accurate (at least in my self-interested opinion), and which two-thirds are.
Then comes this, supposedly “based upon his public writings and commentary.” Including this final point: “Perlstein’s [work does] not advocate a political agenda …”
Well, now. Turns out there exists some parallel ChatGPT planet where these pieces I so lovingly grind out for you each and every week, and before that for so many other left-wing publications, all with the fervent aim of advocating a political agenda, do not exist. A blunt, basic reality, simply erased.
It’s all too easy to imagine the opposite case: a writer who dotes upon the non-agenda-driven nature of their writing, finds a search engine that says they’re in fact an ideologue, and wants to sue for libel over the way this ginned-up “fact” degrades their integrity. But who would they sue? The lack of a responsible agent behind a falsehood is another part of what makes the whole thing so maddening.
BY THE WAY, DID YOU READ MY COLUMN about the possibility of a future world war? No? Google it: “Everything You Wanted to Know About World War III but Were Afraid to Ask.”
Two weeks ago, if you typed those words into the search engine, all you’d get was a link, which you would then have to read yourself, deciding under your own steam whether it had any value or not. Type them in this week—though that changed the next day, when you could only get this result by typing in the fragment “Everything You Wanted to Know About World War III”—and you’ll learn that it “refers to a hypothetical future global conflict that would involve major powers, potentially leading to large-scale destruction and casualties, with key concerns including potential triggers like escalating geopolitical tensions, nuclear weapons usage, and the devastating impacts such a war could have on the world, including widespread economic disruption and societal collapse; it’s often discussed in the context of historical analysis of past world wars, current international conflicts, and potential future threats like cyberwarfare and emerging military technologies.”
That oh-so-authoritative-sounding summary seems to refer to nothing besides my article, which is linked at the right, where the title is followed, mysteriously, by the words “Russian Defense Ministry Pre…” The way it’s laid out on the screen makes it seem at first glance—which for most people will be the only glance—that those words have some particular significance to the article. They’re from the photo credit: “Russian Defense Ministry Press Service via AP.” Garbage in, garbage out. An intelligence that was not artificial would know that these words are throwaways with no significance.
The “AI Overview” then moves to a list of “Key Points About World War III.” Which is pretty disturbing, given that the key points all appear to come from my article and nowhere else, even though one of the main points of the article is that I don’t know very much about war. The person I interviewed for the article does, and I may have summarized his argument well, or maybe not. But neither of our names is attached to these “Key Points” for people to evaluate whether the source is trustworthy or not. Be that as it may: My interpretation of his points is now hanging out there on the internet as an authoritative pronouncement on the most important subject imaginable.
And this AI-generated block of text presenting itself as everything you need to know on the subject also happens to exclude the most important thing the article proposes we need to know.
I had asked Matthew Gault if we needed to worry about AI making World War III more likely. He replied, “I am extremely skeptical that AI is or will become part of command-and-control systems—in America.” But “the guy I talk to about this, who is very smart and has the connections, said Russia is talking about using large language models to take over portions of the decision-making process, because they are worried that they will have someone in the chain that will say ‘no.’ So they want to automate that.”
Ask the guy who had to eat granola bars for lunch instead of chicken-fried steak if that is a good idea.
The last word in Google’s AI Overview of “Everything You Wanted to Know About World War III” is a consumer warning: “Generative AI is experimental,” then a link at which you can “Learn more.” Well, thanks for the warning and the suggestion. I’d love to learn more. I love learning. But I already know enough to know that at this perilous juncture, the last thing the world’s information infrastructure needs is “experiments,” with all of us serving as guinea pigs.
End this experiment now, Google. Keep it up, and someone might lose an eye.