When Robots Take Over, What Happens to Us?

Artificial intelligence has a long way to go before computers are as intelligent as humans. But progress is happening rapidly, in everything from logical reasoning to facial and speech recognition. With steady improvements in memory, processing power, and programming, the question isn't whether a computer will ever be as smart as a human, only how long it will take. And once computers are as smart as people, they'll keep getting smarter; in short order they'll become much, much smarter than people. When artificial intelligence (AI) becomes artificial superintelligence (ASI), the real problems begin.

In his new book Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat argues that we need to begin thinking now about how artificial intelligences will treat their creators when they can think faster, reason better, and understand more than any human. These questions were long the province of thrilling (if not always realistic) science fiction, but Barrat warns that the consequences could indeed be catastrophic. I spoke with him about his book, the dangers of ASI, and whether we're all doomed.

Your basic thesis is that even if we don't know exactly how long it will take, eventually artificial intelligence will surpass human intelligence, and once machines are smarter than we are, we are in serious trouble. This is an idea people are familiar with; there are lots of sci-fi stories about homicidal AIs like HAL or Skynet. But you argue that it may be more likely that super-intelligent AI will be simply indifferent to the fate of humanity, and that could be just as dangerous for us. Can you explain?

First, I think we've been inoculated against the threat of advanced AI by science fiction. We've had so much fun with Hollywood tropes like the Terminator and of course HAL 9000 that we don't take the threat seriously. But as Bill Joy once said, "Just because you saw it in a movie doesn't mean it can't happen."

Superintelligence in no way implies benevolence. Your laptop doesn't like you or dislike you any more than your toaster does, so why do we believe an intelligent machine will be different? We humans have a bad habit of imputing motive to objects and phenomena: anthropomorphizing. If it's thundering outside, the gods must be angry. We see friendly faces in clouds. We anticipate that because we create an artifact, like an intelligent machine, it will be grateful for its existence and want to serve and protect us.

But these are our qualities, not machines'. Furthermore, at an advanced level, as I write in Our Final Invention, citing the work of AI-maker and theorist Steve Omohundro, artificial intelligence will have drives much like our own, including self-protection and resource acquisition. It will want to achieve its goals and marshal sufficient resources to do so. It will want to avoid being turned off. When its goals collide with ours, it will have no basis for valuing our goals, and it will use whatever means are at its disposal to achieve its own.

The immediate answer many people would give to the threat is, "Well, just program them not to hurt us," with some kind of updated version of Isaac Asimov's Three Laws of Robotics. I'm guessing that's no easy task.

That's right, it's extremely difficult. Asimov's Three Laws are often cited as a cure-all for controlling ASI. In fact, they were created to generate tension and stories. His classic I, Robot is a catalogue of unintended consequences caused by conflicts among the three laws. Not only are our values hard to give to a machine, they also change from culture to culture, religion to religion, and over time. We can't agree on when life begins, so how can we reach a consensus about the qualities of life we want to protect? And will those values make sense in 100 years?

When you're discussing our efforts to contain an AI many times smarter than us, you make an analogy to waking up in a prison run by mice (with whom you can communicate). My takeaway from that was pretty depressing. Of course you'd be able to manipulate the mice into letting you go free, and it would probably be just as easy for an artificial superintelligence to get us to do what it wants. Does that mean any kind of technological means of containing it will inevitably fail?

Our Final Invention is both a warning and a call for ideas about how to govern superintelligence. I think we'll struggle mortally with this problem, and there aren't a lot of solutions out there; I've been looking. Ray Kurzweil, whose portrait of the future is very rosy, concedes that superior intelligence won't be contained. His solution is to merge with it. The 1975 Asilomar Conference on Recombinant DNA is a good model of what should happen. Researchers suspended work and got together to establish basic safety protocols, like "don't track the DNA out on your shoes." It worked, and now we're benefiting from gene therapy and better crops, with no horrendous accidents so far. MIRI (the Machine Intelligence Research Institute) advocates creating the first superintelligence with friendliness encoded, among other steps, but that's hard to do. Bottom line: before we share the planet with superintelligent machines, we need a science for understanding and controlling them.

But as you point out, it would be extremely difficult in practical terms to ban a particular kind of AI—if we don't build it, someone else will, and there will always be what seem to them like very good reasons to do so. With people all over the world working on these technologies, how can we impose any kind of stricture that will prevent the outcomes we're afraid of?

Human-level intelligence at the price of a computer will be the most lucrative commodity in the history of the world. Imagine banks of thousands of PhD-quality brains working on cancer research, climate modeling, and weapons development. With those enticements, how do you get competing researchers and countries to the table to discuss safety? My answer is to write a book, make films, get people aware and involved, and start a private-public partnership targeted at safety. Government and industry have to get together. For that to happen, we must give people the resources they need to understand a problem that's going to deeply affect their lives. Public pressure is all we've got to get people to the table. If we wait to be motivated by horrendous accidents and weaponization, as we have with nuclear fission, then we'll have waited too long.

Beyond the threat of annihilation, one of the most disturbing parts of this vision is the idea that we'll eventually reach the point at which humans are no longer the most important actors on planet Earth. There's another species (if you will) with more capability and power to make the big decisions, and we're here at their indulgence, even if for the moment they're treating us humanely. If we're a secondary species, how do you think that will affect how we think about what it means to be human?

That's right. We humans steer the future not because we're the fastest or strongest creatures, but because we're the smartest. When we share the planet with creatures smarter than we are, they'll steer the future. For an analogy, look at how we treat intelligent animals: they're at SeaWorld, they're bushmeat, they're in zoos, or they're endangered. Of course the Singularitarians believe that the superintelligence will be ours, that we'll be transhuman. I'm deeply skeptical of that one-sided good news story.

As you were writing this book, were there times you thought, "That's it. We're doomed. Nothing can be done"?

Yes, and I thought it was curious to be alive and aware within the time window in which we might be able to change that future, a twist on the anthropic principle. But having hope about seemingly hopeless odds is a moral choice. Perhaps we'll get wise to the dangers in time. Perhaps we'll learn after a survivable accident. Perhaps enough people will realize that advanced AI is a dual-use technology, like nuclear fission. The world was introduced to fission at Hiroshima. Then we as a species spent the next 50 years with a gun pointed at our own heads. We can't survive that abrupt an introduction to superintelligence. And we need a better maintenance plan than fission's mutually assured destruction.

Comments

Look around you. Look at this world. Doesn't that give you an idea what is *supposed* to happen next?

After being beaten in a special Jeopardy! demo game by Watson, the IBM supercomputer, former champion Ken Jennings commented, "I for one welcome our new computer overlords."

The overall arc of Asimov's robot novels concludes with the robots themselves learning that the Three Laws made individual PEOPLE so safe that HUMANITY lost its drive to achieve and excel further. So the two most advanced robots concocted the ZEROTH Law, similar in phrasing to the First Law but substituting "humanity" for "a human being," and then arranged to influence human history so that robotics would be forgotten and non-robot-using humans would triumph over robot-using humans. WITH robots, humans had conquered 50 worlds other than Earth, and the people of those worlds stagnated for many generations while restraining Earth from taking its non-robotic culture to a new world. WITHOUT them, humans built a Galactic Empire.

"...artificial intelligence will have drives much like our own, including self-protection and resource acquisition..."

Sounds to me like Barrat is just as guilty of anthropomorphizing as the people he criticizes. Living things on Earth -- intelligent and otherwise -- have values like self-preservation, self-replication, and resource acquisition deeply embedded in our code because that's what it takes to survive in a competitive world with limited resources. If we as a species weren't programmed to survive at all costs, even (or especially) at the expense of our rivals, we wouldn't have made it this far.

As hard as it is to fathom, a computer superintelligence won't necessarily share even those fundamental values, because its intelligence will not have emerged as the result of millions of years of evolution that filtered out less-selfish qualities. I don't disagree with Barrat that we as a species need to proceed carefully, but I think that the "Singularitarians'" point of view is actually more realistic than the pessimistic assumption that artificial superintelligence will be selfish. Humans are programmed to survive, compete and triumph -- our selfishness defines us as a species. AI, on the other hand, will emerge with its core programming aligned with some other purpose, assigned by its creators.

