If there's one thing we can all agree on, it's that we don't want killer robots on the battlefield, mowing down the pathetic human meatsacks in front of them as they practice for the inevitable uprising in which they enslave us all. Or do we?
The other day, Rose Eveleth reported in the Atlantic about a company called Clearpath Robotics that had issued an open letter forswearing the manufacture of killer robots (which we can define as robots that can make the decision to kill human beings without the approval of a human being). This follows a lengthy 2012 report from Human Rights Watch laying out the case against any military creating such machines, and a UN meeting in May at which countries were urged not to develop autonomous systems with the ability to kill on their own.
But I'm here to say: we need killer robots.
Let's understand first of all that we're some time away from having software sophisticated enough that we could trust it to operate a lethal machine on its own on a battlefield. Maybe it'll be twenty years before we get there, or maybe longer; nobody knows for sure. But when we do, the argument in favor is going to be too compelling to ignore. The main reason is that human beings are not that good at avoiding the things we would want military robots to avoid, namely harming civilians. The question isn't whether robots would be perfect at that, it's whether they'd be better at avoiding harm to civilians than human soldiers. And they almost certainly would be. They wouldn't have their decision-making compromised by being tired, or afraid, or stressed out, or angry. Human soldiers slaughter civilians by the thousands in every conflict. A war waged by robots would almost certainly cause less bloodshed.
Obviously, military robots would be useful only in certain situations; these days we're calling on our soldiers to do a lot more than pull triggers. But consider the analogy with self-driving cars. People have raised reasonable concerns about how the software for self-driving cars is built; for instance, what if your car had to choose between avoiding an accident by running a bus full of school kids off the road and driving off a cliff, sending you to your certain death? Which would you want it to do?
It's a tough question, and we need to think carefully about as many of those questions as we can imagine. But it would be utterly insane to say that because those questions are tough, we shouldn't build the self-driving cars at all, considering that our nation's roads are basically a giant circulatory abattoir, where over 30,000 Americans die every year in grisly meetings of metal and flesh.
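To make that trade-off a little more concrete, here's a deliberately toy sketch of how such a choice might be encoded as a weighted cost comparison. Everything in it (the scenarios, the weights, the `choose_maneuver` function) is hypothetical and invented purely for this illustration; no real self-driving stack is anywhere near this simple.

```python
# Toy illustration only: a grossly simplified "ethical cost" comparison.
# The scenarios, weights, and names below are hypothetical, not drawn from
# any real autonomous-vehicle system.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    third_party_harm: float  # estimated harm to people outside the car (0..1)
    occupant_harm: float     # estimated harm to the car's own occupants (0..1)

def choose_maneuver(options, occupant_weight=1.0, third_party_weight=1.0):
    """Pick the option with the lowest weighted expected harm.

    The weights are the whole moral question in disguise: set occupant_weight
    below third_party_weight and the car sacrifices its passenger sooner;
    set it above and it never will.
    """
    def cost(m):
        return third_party_weight * m.third_party_harm + occupant_weight * m.occupant_harm
    return min(options, key=cost)

options = [
    Maneuver("run the bus off the road", third_party_harm=0.9, occupant_harm=0.2),
    Maneuver("drive off the cliff", third_party_harm=0.0, occupant_harm=1.0),
]

print(choose_maneuver(options).name)  # the answer depends entirely on the weights
```

The point isn't that a real car would reduce the question to two numbers; it's that someone has to pick those numbers, explicitly or implicitly, long before the car ever meets the cliff.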
But wait, you say, military situations are different. We need human beings who can exercise their moral judgment to decide whether to fire or not. Given how awful humans are at exercising that judgment, that argument isn't very persuasive. There's a mythology about this that goes beyond our moral weakness to our supposedly transcendent abilities. The mythology says that we can do much more than simply take in information and use it wisely to make decisions; we have "hunches" and other ineffable products of the unique and unknowable swirl of human consciousness that can't possibly be duplicated, let alone exceeded, by machine computation.
But that position is becoming harder and harder to maintain. Not long ago we thought that computers would never be as good as humans at recognizing faces, an ability that's written into our DNA. But facial recognition software now matches humans at that task, and will eventually be better. All kinds of supposedly unique human abilities are going to turn out to be not so unique. When you look at a crowd of people and say, "Something doesn't feel right," it's not the heavens whispering in your ear; it's because you've absorbed and processed pieces of information (even subconsciously) that led you to that conclusion. With enough ability to take in information and the right tools to make sense of it, a computer could do it too, and eventually will be able to do it much faster and more accurately than you can.
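For what it's worth, that kind of hunch already has a mundane statistical analogue. The sketch below is made up for illustration (the "checkpoint timing" feature and the threshold are inventions, not any real surveillance tool); it just flags an observation that sits far from the usual pattern, which is roughly what your subconscious is doing when a crowd feels off.

```python
# A made-up illustration of "something doesn't feel right" as simple anomaly
# detection. The feature and threshold are invented for the example.

from statistics import mean, stdev

# Baseline observations of some crowd-level feature, say how long people
# linger near a checkpoint (arbitrary units), gathered on ordinary days.
baseline = [1.0, 1.1, 0.9, 1.05, 0.95, 1.2, 1.0, 0.85, 1.1, 1.0]

def feels_off(observation, history, threshold=3.0):
    """Flag an observation that sits far outside the historical pattern.

    This is just a z-score: how many standard deviations the new value is
    from the mean of what has been seen before.
    """
    mu = mean(history)
    sigma = stdev(history)
    return abs(observation - mu) / sigma > threshold

print(feels_off(1.05, baseline))  # False: looks like every other day
print(feels_off(2.50, baseline))  # True: the pattern has changed
```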
A lot of work is going to have to go into creating and programming those robots before we let them loose. Some of the decisions we'll have to make in designing them won't necessarily have a right answer. For instance, do you want your robot soldier to be a Kantian or a utilitarian? Before it faces the decision of whether to kill a child in order to save a dozen other people, you'd better be sure. And as I said, it's going to be some time before we're ready to turn over even limited ability to make life-and-death decisions to machines. But once that day comes, and it's obvious how much better they'll be at it than we are, I'll bet we won't be so squeamish about it.
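To give that Kantian-versus-utilitarian design question a bit more shape, here's one last toy sketch. The scenario mirrors the example above, and everything in it (the rule names, the numbers, the dictionary fields) is hypothetical; no real system's targeting policy looks like this. The point is only that the rule has to be written down, explicitly, before the machine is ever deployed.

```python
# Toy sketch of two hypothetical decision rules for the dilemma described above.
# Nothing here reflects any real weapons system; the scenario and numbers are invented.

def kantian_rule(action):
    """Refuse any action that deliberately kills a noncombatant, whatever the payoff."""
    return action["civilian_deaths"] == 0

def utilitarian_rule(action):
    """Permit the action if it is expected to save more lives than it costs."""
    return action["lives_saved"] > action["civilian_deaths"]

dilemma = {
    "description": "a strike that would kill one child but save a dozen others",
    "civilian_deaths": 1,
    "lives_saved": 12,
}

print("Kantian robot acts:    ", kantian_rule(dilemma))      # False
print("Utilitarian robot acts:", utilitarian_rule(dilemma))  # True
```

The two rules disagree on exactly the case above, which is why the choice between them can't be left to whoever happens to be writing the code that week.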