The debate on lethal robots is starting too late

Critics want a ban on automated killing, but drawing the line is harder than they think

The killer robots are coming, and they’re coming soon. Whether it's self-piloted drones or Big Dog-style walkers, unmanned craft are already here, and adding autonomous weapons is a natural next step. By now, the question isn't can we, but should we?

That question came to a head this April, when the Convention on Certain Conventional Weapons (or CCW) met in Geneva to discuss the issue, and in the wake of the meeting, a number of researchers and professors have weighed in publicly. Earlier this week, political scientists Michael Horowitz and Paul Scharre urged caution on autonomous systems in The New York Times, while Berkeley computer scientist Stuart Russell encouraged further debate with a column in Nature.

Neither piece called for an outright ban on Lethal Autonomous Weapons Systems (or LAWS), although others, like nanophysicist Mark Gubrud, have in the past. Instead, these pieces propose treating lethal robots as a controlled area of research, akin to nuclear reactors or chemical weapons. The crucial point, the critics say, is maintaining human autonomy over the system — in essence, keeping a person behind the wheel. That person can be trusted to make moral decisions, and to face the consequences if those decisions go wrong. More specifically, that person is familiar with the Geneva Conventions and could be prosecuted for violating them. A lethal autonomous robot, on the other hand, would be bound by no such logic.

"Humans must... face the horror of war squarely, not outsource it to machines."

Part of the urgency of the debate is that, for many, lethal machines seem like a logical next step. Computers have been taking over human tasks for decades now, leeching away human autonomy over simple tasks, and extending that to battlefields could save soldiers’ lives and confer strategic advantages. But LAWS critics argue that the decision to engage in combat — literally, the decision to kill — is different. Without human beings making the decision to kill, the concern is that killing will happen indiscriminately, slowly lowering the bar for the use of violent force. Once death happens by algorithm, what’s the incentive to preserve life? "Humans must ultimately bear moral responsibility and face the horror of war squarely, not outsource it to machines," Scharre and Horowitz write.

It’s a morally intuitive place to draw the line. The act of killing is powerful, and empowering machines or algorithms to take it on should make us queasy. In talking about LAWS, that queasiness gets reduced to a clear decision — to deploy or not to deploy — and we have complete control over the answer. We grant humans autonomy from the beginning and worry about extending that autonomy to machines.

But there's something missing from the debate. Killer robots really are troubling, and if we can, we should stop them from being created and deployed. It’s good to raise doubts about the wisdom of crossing that line. But in making that case, we miss how far past the line we’ve already come. In today’s wars, human autonomy is already compromised, maybe hopelessly so, and the truly frightening machines aren’t the ones carrying weapons.

A few days after the Geneva meeting let out, the White House announced that a drone strike had mistakenly killed two hostages in Pakistan earlier this year. The victims were an American named Warren Weinstein and an Italian named Giovanni Lo Porto. They were newsworthy because of their passports, but they were far from the first collateral casualties. The Bureau of Investigative Journalism estimates between 400 and 900 civilian deaths in Pakistani drone strikes since 2004, roughly a quarter of which have been children.

It's hard to say who's responsible for killing Weinstein or Lo Porto or the hundreds of others, even with a human at the controls. President Obama has emphasized the precision of US drone strikes, as part of his larger moral defense of the program, but many of the structural elements of the drone program make such casualties inevitable. The NSA often geolocates targets using SIM cards, but a single target might have as many as 16 SIM cards, many of which would be used by friends and family. Without ground support, it's difficult to tell if a SIM-targeted strike reached its target or just a fellow traveler. "They might have been terrorists," a former drone operator told The Intercept, describing one such strike. "Or they could have been family members who have nothing to do with the target’s activities."

It's hard to call this autonomy

Even when the intelligence is good, it's easy for the strike to miss. Many targets travel with their families, and once the group is inside a building, it can be hard to tell one warm body from another. As a result, PTSD is common among drone pilots, including one pilot, Brandon Bryant, who was responsible for 1,626 deaths and described performing his duties in "a fugue state of mind."

In some sense, this is how the system is supposed to work. If you want a human actor soaking up the horrors of war, he will look something like Bryant, sitting in a military base surrounded by screens and orders. His orders come from elsewhere, often based on metadata and algorithmic rules for surfacing possible targets. He can’t leave his post until his tour is up, and then his spot will be filled by someone else. The machine trundles along without him, and there’s no shortage of replacement parts.

It's hard to call this autonomy. Scharre and Horowitz worry about outsourcing the horror of war to machines, but currently, that horror is outsourced to people like Bryant, who face down the realities of war without any means of changing them. We want moral actors piloting these weapons, but we haven't made room for them. In practice, the person pulling the trigger is just another part of the machine.

That’s not an argument for deploying autonomous killing machines, but we should be honest about why they seem so inevitable. Our military systems are already so automated that it’s easy to see where lethal autonomous bots would fit in, and much of that automation has happened without any debate at all. Having come this far, how could we stop?

Our weapons shape us, and they have shaped us into something unthinkable. It's hard to imagine how we might go back, even for the world's would-be military philosophers. "Almost all states who are party to the CCW agree with the need for ‘meaningful human control’ over the targeting and engagement decisions made by robotic weapons," Russell writes. "Unfortunately, the meaning of ‘meaningful’ is still to be determined."