Last week, Human Rights Watch released a report raising alarms about the specter of "killer robots." The report urged that we develop an international treaty to prohibit the development of fully autonomous robotic weapons systems that can make their own decisions about when to use deadly force. So is that day coming any time soon? The Pentagon wants everyone to know it has no plans to allow robots to make decisions on when to fire weapons; Spencer Ackerman at Wired points us to this memo from Deputy Secretary of Defense Ashton Carter released two days after the HRW report, making clear that the DoD's policy is that robots don't get to pull the trigger without a human being making the decision (or in bureaucratic-speak, "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force"). It seems obvious that we don't want a bunch of Terminators walking through our streets deciding whom they're going to shoot. Or is it?
As sophisticated as drone technology is right now, it doesn't have the kind of subtle intelligence necessary to reliably distinguish friend from foe and make morally defensible decisions when using lethal force. But it will. The only question is how long that takes. Not only are processing power and speed continuing to increase, but the algorithms that govern decision-making get better all the time, and it's not unreasonable to assume that eventually there will be some quantum leaps in artificial intelligence that will solve some of the stickiest problems the field now faces, even if we don't yet know what those leaps might look like.
Though it may be a ways off, it's entirely possible—I'd even say likely—that a few decades from now, robots won't just be as good as human soldiers at making battlefield decisions, they'll be better. Mistakes happen in war all the time—civilians are killed, buildings are destroyed unnecessarily, friendly fire incidents abound. Those things happen both because of inadequate information and coordination, and because of simple human failings like confusion and fear. There will come a point when we realize that robots are better able to minimize this kind of collateral damage than even the best-trained troops. And what do we do then?
After all, as the folks at Human Rights Watch say, their primary concern is the protection of civilians. They argue that the presence of emotions allows soldiers to make moral decisions, because they can be horrified at the prospect of killing a child, or empathize with someone in danger. That's certainly true, but it isn't as though human-based militaries have a particularly good record when it comes to protecting civilians.
Let's assume that our future robot warriors will never be perfect. But what do we do when we realize that they are far, far better than we are at minimizing collateral damage? This is the same problem we're going to confront (and much sooner) with driverless cars. Humans are terrible at driving; though traffic fatalities have been steadily decreasing, more than 30,000 Americans still die on the road every year. But the first time a pedestrian is hit by a driverless car, there will be calls to scrap all of them, even though a fully driverless system is going to be far safer than the national asphalt killing ground we have now. As complicated as driving is, it's much easier than reliably deciding whom to shoot and whom to save. Nevertheless, at some point, one side of this debate will be able to argue—and pretty persuasively—that autonomous military robots, even those that make occasional mistakes, do a better job of protecting our moral values on the battlefield than we can do ourselves.