We associate manifestos with big ideas, combative theses itching to change the world. While the roar of the manifesto has pretty much faded from the culture at large, it can still be heard loud and clear in the digital world. Digital culture continues to foster grand ambitions; it nurtures not only the ongoing quest for the killer app but also the search for the one idea that will make sense of most everything.
Jaron Lanier's recent "One Half of a Manifesto" has this heaven-storming quality. The 9,000-word document (at www.edge.org/3rd_culture/lanier/lanier_index.html) flexes the usual manifesto muscles, but with one difference: It is dedicated not to proclaiming a new theory but to deflating one that is already fully formed and, in Lanier's view, primed to wreak havoc on the world. Lanier names that theory cybernetic totalism. It is cybernetic because the computer is at its core; and in a sense, the computer, more than any written document, is its manifesto. It is totalistic because it aspires to an intellectual synthesis loath to let much of anything escape its explanatory grasp.
Whatever you think of the contents of "One Half of a Manifesto," Lanier has to be credited with nerve for issuing it. The thinkers he sets out to oppose are some of the most formidable writers and theorists of our time, including the evolutionary biologist Richard Dawkins and the philosopher Daniel Dennett.
Of course, Lanier, too, is a name to reckon with. The blue-eyed, dreadlocked, multitalented visionary made his reputation as a prodigy in the mid-1980s: Still in his twenties, he coined the term virtual reality and launched VPL, the first business to try to implement the concept. Since then, he has consulted for major institutions such as Citibank, Kodak, and the U.S. Department of Defense. Today, Lanier is lead scientist for the National Tele-Immersion Initiative (NTII), an organization that aims to build virtual reality into the fabric of the Internet. When the Internet goes broadband, as is anticipated, Lanier says, NTII will let "users in different places interact in real time in a shared simulated environment, making them feel as if they were in the same room."
Under no circumstances, then, could Lanier be mistaken for a Luddite or a defector from the digital revolution he has helped to foment. To clear away any possible confusion on this point, he pauses early in the manifesto to declare himself "more delighted than ever to be working in computer science" and to praise the "lovely global flowering of computer culture already in place." He adds that a full manifesto, rather than the half he has composed, would be sure to "describe and promote this positive culture." Having affirmed his loyalty to the cause, he then feels free to go after the villain of the piece: the technological elite, or the "inner circle of Digerati," whose dogma of cybernetic totalism "has the potential to transform human experience more powerfully than any prior ideology, religion, or political system ever has."
What Lanier goes on to say about cybernetic totalism may sound, at first, much like other recent alarms against digital overreaching. The best-known of these is no doubt Bill Joy's article "Why the Future Doesn't Need Us" in the April 2000 issue of Wired magazine. Joy is a co-founder of Sun Microsystems and an author of the Java programming language. That such a legendary hacker could suddenly be afflicted by severe doubts concerning the whole digital enterprise gives his words extra weight. Joy worries that the worst thing about some of our most outlandish digital dreams is that, unfortunately for us, they can be realized. He fears, for example, that if we do not put limits on the development of nanotechnology we will be overrun by lethal, self-replicating mechanical viruses. Not long after Joy's piece was published, the stock market began to deliver its own practical rebuke to dreams of dot-communist utopia.
Lanier joins in this postmillennial mood of second thoughts about computers and the Internet. But uniquely, above and beyond practical concerns, he insists on a philosophical point: What he objects to most about cybernetic totalism is the very fact that it is a totalism. He reserves some of his strongest language to drive this point home--writing, for example, that cybernetic totalism may well "catch on in a big way, as big as Freud or Marx did in their times. Or bigger, since these ideas might end up essentially built into the software that runs our society and our lives."
Although it is the most distinctive thing about his manifesto, Lanier's determined anti-totalism has made little or no impression on respondents and reviewers, who prefer to take him up piecemeal and haggle with him over practical matters. It is as if postmodernism, with its suspicion of all-consuming syntheses, has passed digital culture by. The result is that totalism can propagate freely within the digital culture, which has barely any immune response to it.
According to Lanier, the totalism to be most wary of these days is built on Darwinism. Of the triumvirate of thinkers who rode astride so much twentieth-century thought--Darwin, Marx, and Freud--only Darwin survives into the new millennium with his reputation not just intact but enhanced. While other grand narratives were being picked apart, Darwinism mutated into a totalism that makes Marxism look like minimalism.
Darwinism gives the new totalists what they take to be a bridge between nature and technology, a way of translating between genetics and cybernetics. Crucially, Darwinism offers the new totalists what any theory must have to undo the constraints of reason: the sense of mounting historical tension, the charged expectation of a watershed event--in short, an eschatology. Cybernetic eschatology focuses on the coming of an electronic species, an artificial intelligence that is nearly ready to peck its way out of the human brain. Lanier defines the new totalist creed by its "astonishing belief in an eschatological cataclysm in our lifetimes, brought about when computers become the ultraintelligent masters of physical matter and life."
The work of Richard Dawkins plays a key role in cybernetic totalism, whether or not Dawkins himself subscribes to the full package. In books like The Selfish Gene, Dawkins shows that organic beings are no less coded entities than computer programs are. So what if one kind of code takes evolution a billion years to assemble and the other can be thrown together by a generation or two of programmers? Isn't it possible--or so the thinking goes--that computer code and genetic code differ more in their details (the time involved, the material employed) than in their logic? We know that computer programs are governed by algorithms--simple, unambiguous sets of instructions that in concert allow for the complicated behavior of operating systems. Might not evolution be algorithmic, too?
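How an algorithmic reading of evolution might look in miniature can be suggested with a toy sketch in the spirit of Dawkins's well-known "weasel" demonstration from The Blind Watchmaker; the Python rendering, the parameters, and the shortcut of scoring copies against a fixed target phrase are this illustration's own assumptions, not code drawn from any of the works discussed here.

```python
# Toy "evolution as algorithm" sketch: blind copying errors plus selection,
# iterated, assemble a structured phrase out of initial gibberish.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"    # Dawkins's example phrase
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE = 100          # offspring copied from the parent each generation
MUTATION_RATE = 0.05    # chance that any given character is miscopied

def fitness(candidate: str) -> int:
    """Count the characters that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str) -> str:
    """Copy the parent, occasionally miscopying a character."""
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else ch
        for ch in parent
    )

def evolve() -> int:
    """Run mutation and selection until the target appears; return generations used."""
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        offspring = [mutate(parent) for _ in range(POP_SIZE)]
        parent = max(offspring, key=fitness)   # selection: keep the fittest copy
        generations += 1
    return generations

if __name__ == "__main__":
    print(f"Reached the target in {evolve()} generations")
```

Dawkins himself stressed that the fixed target is a simplification; the point of the toy is only that repeated copying errors and culling, neither of which is intelligent on its own, accumulate into structure.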
For the new totalists, the answer is a resounding yes. Nowhere is this expressed more clearly than in the work of Daniel Dennett. In Darwin's Dangerous Idea: Evolution and the Meanings of Life, Dennett argues that evolution and software use similar strategies to build complexity out of simplicity, intelligence out of mindless routines. He writes: "The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves. That is, we are extraordinarily complex self-controlling, self-sustaining physical mechanisms, designed over the eons by natural selection." Dennett sets the stage for a possible encounter between Charles Darwin and Charles Babbage, the founder of computer science, in one or another of the Victorian drawing rooms they frequented. Each man's work, in Dennett's view, completes the other's. Babbage launched the study of computational algorithms while Darwin laid bare the trade secrets of Nature, a mindless but famously successful engineer. Whether or not Darwin and Babbage ever compared notes, their followers have.
Dennett may be the closest thing to cybernetic totalism's Marx--harmonizing its various intellectual sources--but as yet the movement has no Lenin. "Some of the most dramatic renditions have not come from scientists or engineers," Lanier observes, "but from writers such as [Wired executive editor] Kevin Kelly and Robert Wright [the author of Nonzero: The Logic of Human Destiny], who have become entranced with broadened interpretations of Darwin. In their works, reality is perceived as a big computer program running the Darwin algorithm, perhaps headed towards some sort of Destiny." Lanier wants to rescue Darwin from this destiny. He acknowledges that "the movement to interpret Darwin more broadly, and in particular to bring him into psychology and the humanities has offered some luminous insights." He admits, further, that as a computer scientist he finds it impossible not to be "flattered" by narratives that put "algorithmic computation at the center of reality." At the same time, he prefers a more circumscribed Darwinism, a Darwinism that hasn't gone nova. "While I love Darwin," he writes, "I won't count on him to write code."
Still, Lanier recognizes that today it is Darwinism rather than philosophy or theology that hosts the key debates about human nature. In London several years ago, for example, a public discussion led by evolutionary psychologist Steven Pinker and Richard Dawkins reportedly drew 2,300 people and was sold out weeks in advance. Pinker and Dawkins are in basic agreement on the big questions of evolution. It is interesting to speculate about how many seats would be filled by a no-holds-barred debate between Dawkins, say, and Stephen Jay Gould, the chief opponent of the totalists in the quarrel over Darwin's legacy.
In such a face-off, Lanier would be in Gould's corner. He sees Gould as providing evolutionary support for a belief in free will, whereas Pinker, Dawkins, and Dennett would hem us in with determinism. After all, if evolution is as algorithmic as the totalists--Gould calls them "Darwinian fundamentalists"--would suppose, then a big-brained beast like Homo sapiens is well-nigh inevitable, with artificial intelligence inevitably to follow. Lanier prefers Gould's view (as argued in Full House: The Spread of Excellence from Plato to Darwin) that evolution is more familiar with contingency than with inevitability. Summarizing Gould, Lanier writes: "If there's an arrow in evolution, it's towards greater diversity over time, and we unlikely creatures known as humans, having arisen as one tiny manifestation of a massive, blind exploration of possible creatures, only imagine that the whole process was designed to lead to us." That such a basic issue as free will versus determinism is now being fought out on the grounds of Darwinian logic helps explain why the Darwin wars have been and will continue to be so venomous.
Lanier takes exception to the entire "cultural temperament" of totalists, who have become so "intoxicated" by their system that they "seem to not have been educated in the tradition of scientific skepticism." They grow reckless when they "meme"-splice Darwin to Babbage, and giddy when they add "Moore's Law" to the mix. They take Moore's Law--according to which computer power doubles every 18 months or so--to guarantee that tomorrow's machines will have a million times the speed and memory of today's computers. With that kind of computing power driving them, machines will hardly be able to avoid being jarred into sentience, or so the theory goes. But Lanier has some bad news for totalists: Moore's Law applies only to hardware. Software can be counted on to drag the whole thing down.
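The million-fold figure is simply the doubling rule compounded; a back-of-the-envelope check, assuming the 18-month doubling period quoted above (the time horizon is plain arithmetic, not a claim taken from Lanier's text):

```python
# Rough arithmetic behind the hardware claim, assuming computing power
# doubles every 18 months (1.5 years), as the common reading of Moore's
# Law quoted in the text has it.
import math

doubling_period_years = 1.5
target_factor = 1_000_000

doublings = math.log2(target_factor)        # about 19.9 doublings needed
years = doublings * doubling_period_years   # about 30 years of compounding

print(f"{doublings:.1f} doublings -> roughly {years:.0f} years to a million-fold gain")
```

Roughly thirty years of compounding, in other words, is all the hardware side of the prophecy requires.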
With tongue only somewhat in cheek, he suggests that "if anything, there's a reverse Moore's Law observable in software: As processors become faster and memory becomes cheaper, software becomes correspondingly slower and more bloated." The sad state of software, he continues, may turn out to be humanity's best defense against the coming of any cyberspecies. "Just as some newborn race of superintelligent robots are about to consume all humanity," he writes, "our dear old species will likely be saved by a Windows crash. The poor robots will linger pathetically, begging us to reboot them, even though they'll know it would do no good."
Bad software gives Lanier a novel spin on the "Turing Test," which attempts to gauge whether machine intelligence has evolved to the point that it is indistinguishable from human intelligence. Lanier suggests that computers can earn a passing grade not only by becoming smarter but also by making people more stupid. In his view, that is just what's going on. He thinks that the Turing Test won't be decided in a single big event; instead, "miniature Turing Tests are happening all the time, every day, whenever a person puts up with stupid computer software."
Why does software improve so slowly, if at all? Lanier blames cybernetic totalism, with its peculiar mix of outsize ambition and downright complacency. If computers are rapidly advancing to the point that they can write their own code, why bother about software elegance? Computers will soon be debugging each other as naturally as monkeys groom one another's fur. Until that day, pile on the features; bring on the bloat. Moore's Law is coming to the rescue.
Still, none of this would seem commensurate with the direst warnings of "One Half of a Manifesto." It's true that if machines pass--or people fail--the Turing Test, and human beings and computers shake hands on the common ground of the algorithm, there may be little for a humanist to celebrate. But that's no reason to raise a hue and cry about the "suffering ... [of] millions of people" or to compare cybernetic totalism to "history's worst ideologies," as Lanier does. For Lanier, however, the problems we have with software today give but the barest hint of the horrors in store when the computer becomes instrumental in human genetic engineering.
He predicts "that the hardware/software dichotomy will reappear inbiotechnology, and indeed in other 21st century technologies." Whengenetic code becomes "more manipulatable, more like a computer's memory,then the limiting factor will be the quality of the software thatgoverns the manipulation." With software snarled by Moore's Law inreverse, it will be expensive to rewrite DNA. Only the rich will be ableto afford the really good hacks, such as longevity; only they will haveaccess to the indisputable killer app, as it were: a geneticallyengineered elixir of immortality. Here Lanier joins a number of otherthinkers--including some, like E.O. Wilson, on the fringes of cybernetictotalism--in fearing that we'll know the real meaning of binding Babbageand Darwin together with Moore's Law when the human race splits, roughlyalong the lines of rich and poor, into different species.
Will this occur? It's, of course, impossible to say. But "One Half of a Manifesto" has value well beyond this or any other particular prediction. It is the warning against totalism per se that stands out in the piece, all the more so because techies and others have so adroitly overlooked it. You can't know in advance all the specific dangers that will issue from a grand synthesis; you can't forecast how much of the scenery it will devour as it gains force. But you can be alert, as Lanier urges. And you can take the implication of "One Half of a Manifesto" seriously--namely, that postmodernism has been only a lull before the gathering of another totalist storm.