A conversation on the ethics of Dallas police's bomb robot

Today we learned the Dallas police used a bomb disposal robot to deliver and detonate an explosive, killing the suspect in last night's shooting. It was surprising, and shocking, and left me with a lot of questions. But, really, there's only one question: is this ethical?

So I spoke to an expert. Ryan Calo is a law professor at the University of Washington who focuses on robotics law and policy. For years he's been pushing for better and clearer policy around robotics, and exploring the ethical and legal realities of robots among us.

The interview was lightly edited for clarity, but I kept the entire length of our conversation intact because basically it’s a free college lecture on the ethics of this exact situation. It can’t answer every doubt and concern, but I think Calo did a great job of framing where exactly the Dallas scenario fits in the larger conversation of robot ethics.

Paul Miller: What were your first thoughts when you heard this news?

Ryan Calo: I wasn’t surprised. I can’t even tell you how many major national events we read about, especially calamities — whether it's oil spills and leaks, or a mine collapsing, or a hostage situation — in which robots factor. They’re increasingly used to solve or address emergency situations just like this one. I was surprised by this particular use of a robot, to deliver a lethal payload. But I was hardly surprised that robots would be involved. It’s just rare that you would see a large national issue that doesn’t involve them in some way.

I’m seeing some reports that this was done in Iraq using MARCbot, where soldiers would send a robot in with an explosive. Have you heard that anecdote?

I think that in the military field you see soldiers using the robots in all kinds of ways. Those robots, like the MARCbot and the PackBot and TALON, are really multipurpose platforms, and you can imagine them being used in all kinds of ways. I’d be surprised if the police themselves domestically hadn’t done some version of this in the past. I’m not aware of any previous use, but I bet you that we’ll dig some up. And, so, yeah, I’m aware of various uses of robots by the military.

Usually, of course, they are used for reconnaissance, to defuse bombs, to see whether there are bombs, for protection and situational awareness. But because they are an open platform, because they are a versatile platform, and because the situations that happen in the theater of war are so varied, it doesn’t surprise me that soldiers used robots in this way.

You said that we should "distinguish between the use of a robot to kill and so-called killer robots." Could you explain what you mean by that?

There is at the moment an international movement to limit the ability of people and nations to create systems that make their own decisions about whether or not to kill. These are, for lack of a better word, autonomous weapons. And there’s a whole conversation about whether a human being always needs to be in the loop, and whether or not it's okay to live in a society in which a decision to kill is made by a robot.

There’s also another conversation of course going on around the adequacy of the process by which we determine who to put on a kill list and then use drones to kill them. And there is some thought in general that we don’t know enough, that we don’t have enough transparency around the drone killing program.

And then third, if we had an all-robot army, even if it were remotely operated by soldiers, is that too little an impediment to war? Would we engage in greater violence because we would ourselves not be at risk?

So those are sort of the three major debates going on globally. What I’m trying to say is that this particular incident does not implicate any of those. I mean, here the officers were justified in using lethal force. So, any court that looked at this, barring something bizarre, would probably be pretty agnostic as to the means by which they delivered violence.

For example, imagine a situation where an officer confronted a suspect. The suspect was going to use lethal force and the officer’s gun jammed. And instead of shooting the suspect, they stabbed them with a knife, or they hit them repeatedly with a gun. We would find that shocking, because it’s not the usual way of delivering lethal force. But, it would be justified. And in this situation they had tried other options. It wasn’t like they weren’t putting people at risk. Police officers were risking themselves all over the city, trying to protect citizens. But they were risking themselves. It’s a situation where they had exchanged fire, they couldn’t get this guy out of there, they were worried about what he was going to do, and they got creative in using a tool, an instrument, in order to deliver justified, lethal force.

Maybe there should be policies in place for how robots are used, maybe we can be concerned about the overuse of robots in policing, but this debate is not connected to the greater debate about the military use of robots, in my view.

Almost all of these robots are directly remote controlled. Would it be a different conversation if this was somehow autonomous?

Yes, it’d be quite a different conversation. And as a matter of fact, it would be a different conversation if this were a routine use of even non-lethal force. Imagine a situation in which officers don’t feel like approaching a homeless person who’s yelling in a subway station, and instead they fly a drone over to him and tase him. I would have very specific issues with that, having to do with the fact that it’s the opposite of community policing.

There’s a great danger that you’ll overuse non-lethal force delivered by a robot because you don’t have situational awareness. It’s too convenient. You mistakenly believe that non-lethal force is not dangerous. I would find that problematic in a way that I don’t find this problematic. This is a situation in which they are authorized to use lethal force, they are in the process of using lethal force, and they’re trying to find a way to stop the situation and reduce the risk to themselves and the public without hurting other civilians, without compromising the building itself. You know what I mean? So they came up with a creative use of the robot.

In another domain, occasionally people will use robots to do things that people usually do themselves, and then courts have to decide what the legal effects are. I wrote about a bunch of robot cases last year. One of the cases involved the use of an unmanned submarine to "discover" a shipwreck. Usually, in order for you to get salvage rights in maritime law, you had to physically go down to the shipwreck and pull some of it up. But in this case, the salvage company had only reached it through a tele-operated robot. So the court had to decide: does that count as exclusive possession for the purpose of maritime law? And they created a new doctrine called tele-possession. So basically what happened is the law thinks about what applies to people, and whether the use of telepresence counts.

And in this situation [referring to the Dallas incident] I think they’d do quite the same thing. Could an officer walk in there and shoot that person? Yes, of course. Could a robot be sent in to shoot the person or blow them up? Yes. And so that’s not an answer to the question of whether or not there should be careful policies in place around robotics, both in the air and on the ground. There should be. But to me it pretty clearly answers both the moral and the legal question in this particular instance.

That’s a pretty clear example for me: it would be morally permissible for a police officer to go in and shoot this guy who is a threat to them, as it seems in this case, and therefore using a robot in this case is acceptable. Is that a legal doctrine that exists, or is that something that is still forming?

No, the truth is that we define the lethal use of force and its justification by reference to the possibility or probability of death. So the use of lethal force is justified only where there is a threat to the officer or to civilians of grievous bodily harm or death. We don’t specify the means... The only way it would have different ramifications in law would be if there were no danger to the officer, because they were using a robot. You see what I mean?

Now they have these sorts of patrol robots that are put on people's properties, or companies’ properties. If one of those were armed with lethal or nonlethal force, then we have a different conversation, because there is no threat of bodily harm to a person in that instance. In this instance, this person is actively shooting at officers outside of where he is. It’s obviously a threat. If they can throw a grenade in there, or send an officer in there to shoot somebody, or if they can just send a grenade around the corner, they can send a robot in there. And the reason to do that, the reason to send the robot in there, is actually to get some situational awareness, because of course the robot has cameras and stuff, and also to deliver the device with greater precision.

Who knows? Imagine if you throw a grenade in there and compromise the integrity of the building, or you throw a grenade in there and you didn’t realize there were other people in there. In this instance you can send in the robot, look at the room, and you can detonate it as close to the individual as possible. The means is not really important unless the means removes the danger to the officer, or if it were excessively cruel. The law would not permit you to beat someone to death, or torture them. Just because you can kill someone doesn’t mean you torture them. But this is not that either. This is the delivery of a lethal payload.

Some people have noted the possible security ramifications of having a lethal payload on a (presumably) wirelessly controlled robot that could be easily hacked.

I don’t know what model they used. But I do know that security in robotics is critical; it should be one of the first things that robot manufacturers think of. I’m reminded of evidence that our enemies had footage from our drones when we were sending the video feed in the clear. I’m reminded of the fact that Iran downed one of our spy drones by spoofing the GPS. Motivated people can hack into these systems. I think the danger in this situation is really low because nobody knew that the robot was being used, but you can imagine some very sophisticated criminals in the future having a cyber response to the use of robots. I think that’s pretty far down the line, and you'd have to anticipate that a particular robot was going to be used. Here there was really no danger because obviously the person had no idea; no one else did either. We’re all learning about it after the fact.