Aldebaran's NAO robots. The company describes its "companion" robot this way: "NAO is a 58-cm tall humanoid robot. He is small, cute and round. You can't help but love him! NAO is intended to be a friendly companion around the house. He moves, recognises you, hears you and even talks to you!"
In the mid-1960s, a computer scientist named Joseph Weizenbaum wrote a program called ELIZA, which simulated a Rogerian psychotherapist, the kind who essentially repeats back everything the patient says. (The patient says, "I'm feeling depressed," and the therapist responds, "You're feeling depressed? Tell me more.") To his surprise, despite the simplicity of the program, people who interacted with it ended up telling it all kinds of secrets and couldn't tear themselves away; they were so eager to be listened to that they were happy to open their hearts to a computer.
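To see how little machinery that takes, here is a minimal sketch of the reflect-and-ask pattern in Python. This is not Weizenbaum's program (the original used pattern-matching scripts written in MAD-SLIP); the word list and templates below are invented for illustration.

```python
import random
import re

# Pronoun swaps: "I'm feeling depressed" reflects back as "you're feeling depressed".
REFLECTIONS = {
    "i": "you", "i'm": "you're", "my": "your", "me": "you",
    "am": "are", "you": "i", "your": "my", "yours": "mine",
}

# Canned prompts in the therapist's style: echo the patient, then ask for more.
TEMPLATES = [
    "{0}? Tell me more.",
    "Why do you say {0}?",
    "How long have you felt that {0}?",
]

def reflect(statement: str) -> str:
    """Swap first- and second-person words in the patient's statement."""
    words = re.findall(r"[\w']+", statement.lower())
    return " ".join(REFLECTIONS.get(word, word) for word in words)

def respond(statement: str) -> str:
    """Build a 'therapist' reply by filling a template with the reflection."""
    return random.choice(TEMPLATES).format(reflect(statement))

if __name__ == "__main__":
    print(respond("I'm feeling depressed"))
    # e.g. "you're feeling depressed? Tell me more."
```

Nothing here is doing anything a skeptic would call thinking; the intelligence in the conversation was supplied entirely by the patient. That's what makes Weizenbaum's finding so telling.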
The more modern descendants of ELIZA are chatbots, one of which recently passed the Turing test, named for the mathematician Alan Turing. Turing proposed that when a computer could convince humans with whom it was interacting that it was actually a person, it could be said to be engaging in a form of thinking. Last month, a chatbot successfully fooled a third of its human judges into believing it was a person.
But the Turing test isn't a test of artificial intelligence as much as it is a test of human gullibility. The recent successful test didn't show how clever the chatbot was, because it really wasn't all that clever. (The cleverest thing about it was the programmers' decision to make it a 13-year-old Ukrainian named Eugene Goostman, whose youth and foreignness would lead the humans on the other side to forgive its misunderstandings and non sequiturs.) If even a bot as simple as Eugene Goostman can fool lots of people into thinking it's human, we're rather easily fooled. But the question isn't what happens when we're trying to figure out who's a computer and who's a human (a problem we aren't likely to face very often) but what happens when we don't really care.
The Spike Jonze film Her, which was released last year and is now available on DVD, portrayed a near future in which a man falls in love with an artificial intelligence, voiced by Scarlett Johansson. The film showed something far more disturbing than the crowd-pleasing vision of a future in which artificial intelligences try to kill us all.
While the kind of emotional growth the AI (named Samantha) goes through in Her (not to mention its perfect simulation of a human) isn't possible yet, the film does remind us how easy we are to manipulate. The AI becomes romantically irresistible to the lead character, Theodore, not only because he's lonely but because once you learn enough about what people find appealing, simulating it is far from impossible. In one key scene, Theodore challenges Samantha on why she sighs. "I guess I was just trying to communicate because that's how people talk. That's how people communicate," she says. "Because they're people," he replies. "They need oxygen. You're not a person." But he falls in love with her anyway.
Her presents its AI as something new in a world not too different from our own. But when something like Samantha comes along, it won't arrive all at once; it will be one stage in a gradual evolution in which our relationship to our technology becomes more and more personal.
When Apple debuted Siri a couple of years ago, people heralded it as the start of a new era, but it never fulfilled its promise. (I don't know anyone who uses Siri, or the Google version, in anything like the way portrayed in Apple's ads.) But that wasn't because people didn't want to talk to their phones; it was because Siri is, well, an idiot. It doesn't know very much, it constantly makes mistakes, and its voice has just enough mechanical rhythm that it never lets you forget you're talking to a piece of software, and a very limited one at that.
But if you think you could never have an emotional attachment to software, it's probably only because you haven't met the right software yet, not because your emotional intelligence is so subtle and refined that you can only connect with real humans in all their complexity. There are men in Japan who fall in love with pillows. Pillows! As crazy as that is, our capacity to create emotional bonds doesn't begin and end with Homo sapiens. If you can feel deeply about a dog you consider part of your family, despite the dog's limited range of emotions and limited ability to communicate with you, why not a piece of software?
Researchers at MIT recently debuted a prototype home personal assistant called Jibo, which is meant to answer questions and perform tasks like ordering takeout or coordinating the systems of your smart home (whenever your home gets upgraded to become smart). But unlike a tablet, Jibo has a physical presence that is meant to evoke at least a rudimentary version of personality; among other things, it uses facial recognition to determine to whom it's talking, and it leans toward you as if it's listening. This is an early version of something likely to become quite common: robots or software that integrate features of human personality, whether voice or movement or something else, as they perform progressively more complex tasks for us. As Louise Aronson explained in this Sunday's New York Times, there is an enormous need for caregivers for the elderly, and technologists all over the world (especially, and unsurprisingly, in Japan) are working to create robots that can fill it.
Given how easy it is for us to anthropomorphize animals or even inanimate objects, investing the right technologies with our own emotions and those we want mirrored back at us is almost inevitable. And as software becomes faster and more capable, it will be a piece of cake for artificial intelligences to make us fall in love with them if that's what they're programmed to do (or what they want to do).
If you think that's impossible, you're probably giving yourself too much credit. Think back on your romantic life for a moment. At some point in your past you probably fell for someone who turned out to be nothing like what you thought they were. In retrospect, you realized you didn't really know the person all that well, but you were temporarily beguiled by something: physical attractiveness, some shared interest, or maybe a quirk of circumstance. It wasn't that hard to pique your interest with some superficial things, or even to sustain it for a while. You had a powerful emotional response to that person, based on very limited information. Now imagine that the thing that eventually turned you off, whether it was the shabby way they treated that waiter or their weird political views or the way they left their laundry around the apartment, never existed.
Imagine that every new thing you learned about them was charming and lovely.
Have you ever had a crush on a character in a movie or television show? I'll bet you have. And what kind of information did you have about that character? The character inhabited an unusually attractive body, and if it was a long-running show, over time you heard it speak perhaps a few hundred lines of dialogue. Now consider what a powerful piece of software could do if it analyzed thousands of books and movies and TV shows to determine what makes a character romantically compelling, breaking those characters down into hundreds of variables (with your help, as it determines your particular preferences) and then reassembling them into something made just for you.
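As a toy illustration of what "breaking characters down into variables" might mean in practice, the sketch below fits per-trait preference weights from a handful of ratings and then dials a synthetic persona toward the traits the user responds to. Every trait name, score, and rating here is invented, and ordinary least squares stands in for whatever far richer model a real system would use; the point is only that the mechanics are mundane statistics, not magic.

```python
import numpy as np

# Hypothetical trait axes a system might extract from characters in fiction.
TRAITS = ["warmth", "wit", "confidence", "mystery", "attentiveness"]

# Toy "character database": each row scores one character on each trait.
# In the scenario above, these would come from analyzing scripts at scale.
characters = np.array([
    [0.9, 0.4, 0.6, 0.2, 0.8],
    [0.3, 0.9, 0.8, 0.7, 0.4],
    [0.6, 0.7, 0.3, 0.9, 0.6],
    [0.8, 0.5, 0.7, 0.4, 0.9],
])

# The user's reactions to those characters (a made-up 0-1 "appeal" rating).
ratings = np.array([0.9, 0.5, 0.6, 0.95])

# Fit per-trait preference weights by least squares: which traits predict appeal?
weights, *_ = np.linalg.lstsq(characters, ratings, rcond=None)

# "Reassemble" a persona by leaning into the traits this user responds to,
# clipped back into the valid trait range.
persona = np.clip(weights / np.abs(weights).max(), 0, 1)

for trait, w, p in zip(TRAITS, weights, persona):
    print(f"{trait:>13}: learned weight {w:+.2f} -> persona setting {p:.2f}")
```

Four ratings and five traits is laughably thin data, but that's the unsettling part: even a crude fit like this produces a persona tilted toward what you've already responded to, and a real system would have thousands of characters and years of your behavior to work with.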
As you're interacting with that software, building a relationship, it isn't that you'll be fooled into thinking it's a real human. It's that you won't care.
Well, maybe not you in particular. As with those men and their pillows, romance with an AI will appeal (at first, anyway) primarily to people whose romantic lives have been non-existent or painful. And there's the problem of corporeality, as that scene in Her demonstrates; without a human body to interact with, there may be limits to what we can feel for our artificial friends. We'll be able to create an AI that seems perfectly human long before we create an equally convincing body to put it in.
The relationships we create with technologies that simulate human personality may not be as rich as those we have with our human family and friends, but we may still find them meaningful. Consider another recent film that explored this question. In Robot & Frank, a man builds a friendship with a robot that begins as his home health aide and then becomes his partner in crime. Near the end of the film, the robot's memory must be erased, and it will "forget" everything the two have been through together. Frank knows it's a robot, and one whose emotional capabilities are a shadow of a human being's. But the moment is poignant and painful, because the two have shared experiences and conversations, and that's what their relationship is built on. It may be less than a human friendship, but it achieves its own kind of profundity.
And that's what all of us will eventually have to confront: maybe not in five years, maybe not even in ten, but not long after. You may not want to love a robot or a piece of software. But the smarter they get, the harder it's going to be to stop yourself.