As you might know, American service members in Iraq and Afghanistan have been using a variety of robotic assistants, from bomb-defusing robots to small hand-launched drone aircraft to surveillance robots. What none of these robots do, however, is carry weapons. There's a lot of understandable resistance within the military to that prospect -- not so much to the current generation of remote-controlled bots, but to the idea of future generations of autonomous robots making independent decisions about when to use deadly force. That's something few people are comfortable with.
But as the robots get more and more sophisticated, it's a question we're going to have to confront. As concerned as we are about our robots turning against their meat-based overlords, is it possible they'd actually be an improvement on human soldiers not just in physical ways but ethically as well? That's what's suggested in this piece at National Geographic (h/t Andrew Sullivan):
In the tumult of battle, robots wouldn't be affected by volatile emotions. Consequently they'd be less likely to make mistakes under fire, Arkin believes, and less likely to strike at noncombatants. In short, they might make better ethical decisions than people.

In Arkin's system a robot trying to determine whether or not to fire would be guided by an "ethical governor" built into its software. When a robot locked onto a target, the governor would check a set of preprogrammed constraints based on the rules of engagement and the laws of war. An enemy tank in a large field, for instance, would quite likely get the go-ahead; a funeral at a cemetery attended by armed enemy combatants would be off-limits as a violation of the rules of engagement.
A second component, an "ethical adapter," would restrict the robot's weapons choices. If a too powerful weapon would cause unintended harm—say a missile might destroy an apartment building in addition to the tank—the ordnance would be off-limits until the system was adjusted. This is akin to a robotic model of guilt, Arkin says.
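To make that two-stage design a little more concrete, here's a toy sketch of what a governor-plus-adapter check might look like in code. To be clear, this is just my own illustration: the class names, the rules, and the numbers are all invented, and Arkin's actual system is surely far more elaborate than a couple of if-statements.

```python
# Toy illustration of the "ethical governor" / "ethical adapter" idea described
# above. All names, rules, and thresholds here are invented for the sake of the
# example; this is not Arkin's actual software.

from dataclasses import dataclass


@dataclass
class Target:
    kind: str                  # e.g. "tank", "gathering"
    setting: str               # e.g. "open field", "cemetery"
    noncombatants_present: bool


@dataclass
class Weapon:
    name: str
    blast_radius_m: float


def ethical_governor(target: Target) -> bool:
    """Check preprogrammed constraints before any engagement is allowed."""
    # Rules of engagement: never strike where noncombatants are present.
    if target.noncombatants_present:
        return False
    # Laws of war: protected settings are off-limits, even for armed combatants.
    if target.setting in {"cemetery", "hospital", "school"}:
        return False
    return True


def ethical_adapter(target: Target, weapons: list[Weapon],
                    max_blast_radius_m: float) -> list[Weapon]:
    """Restrict weapon choices so the response stays proportional."""
    # Anything whose effects would spill past the allowed radius is withheld
    # until a human adjusts the system, a rough analogue of the "robotic model
    # of guilt" mentioned above.
    return [w for w in weapons if w.blast_radius_m <= max_blast_radius_m]


if __name__ == "__main__":
    tank = Target(kind="tank", setting="open field", noncombatants_present=False)
    funeral = Target(kind="gathering", setting="cemetery", noncombatants_present=True)
    arsenal = [Weapon("cannon", blast_radius_m=20.0),
               Weapon("missile", blast_radius_m=150.0)]

    print(ethical_governor(tank))      # True:  engagement permitted
    print(ethical_governor(funeral))   # False: violates rules of engagement
    print(ethical_adapter(tank, arsenal, max_blast_radius_m=50.0))
    # Only the cannon passes the filter; the missile's blast radius is too large.
```

The appeal of a structure like this, at least as described, is that the decision to fire and the choice of ordnance are gated by separate, auditable checks rather than by anything resembling judgment in the heat of the moment.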
To quote Kyle Reese, these robots wouldn't feel pity, or remorse, or fear. Which is exactly what could make them superior. But the fact is that we probably won't be comfortable with anything less than perfection from armed robots. In the chaos of war, human soldiers make plenty of mistakes -- friendly-fire incidents, accidental killings of civilians -- yet armed robots wouldn't be given much of a chance to screw up. The first time one of them killed one of our own, there would be tremendous pressure to pull them from the field.
There's an analogy in self-driving cars, which are also coming faster than you might realize (in part thanks to Google). How secure would you feel if your car, and every car on the road, was driving itself? Let's say the system was ready to go, and its designers assured us that while malfunctions would occur, no more than 10,000 Americans would die in their self-driving cars every year. What chance would the system have of being implemented? In all likelihood, no chance at all. What, we're going to turn over the keys to a bunch of robot cars that will kill 10,000 of us? But that would be a vast improvement over the status quo. Unfortunately, people stink at driving. We do it when we're tired, or drunk, or texting, and we put 16-year-olds with minimal experience, skills, and judgment behind the wheels of two-ton death machines. In 2009, 33,808 people were killed on American roads. So if self-driving cars killed only 10,000 a year, we'd be saving nearly 24,000 lives annually. As compelling as that might be, many if not most people would resist. But robots that make their own decisions as they interact with the world are something we're going to have to get used to.