Only five countries actually want to ban killer robots

Cuba, Pakistan, Egypt, Ecuador, and the Vatican backed a lethal autonomous weapons ban at the UN; everyone else wasn't sure

Last month, Russia announced a new mobile robot guard designed to gun down trespassers at ballistic missile bases. The twist? It doesn't need human permission. The robot, which resembles a gun-mounted tank, can be set to patrol an area and fire on anything it identifies as a target. The announcement came shortly after Russia's deputy prime minister had called on the military to develop robots with artificial intelligence that can "strike on their own."

Russia is one of many countries developing self-guided killer robots. None have hit the battlefield yet, but this week the issue came to the United Nations.

Eighty-seven countries participated in the summit on lethal autonomous weapons, which concluded today with a resolution to address the issue again in November. The summit was an informal meeting of experts hosted by the UN Convention on Certain Conventional Weapons, the body that bans or restricts the use of weapons that "cause unnecessary or unjustifiable suffering to combatants or... affect civilians indiscriminately."

Five countries support a preemptive ban

The prospect of robots that can kill presents a number of ethical challenges for human rights laws. What standards of precision must a robot be held to before it can be deployed? If a robot accidentally kills a civilian, who is responsible? And is it ever ethical to delegate the decision to kill to a machine?

Proponents say robotic weapons will result in fewer civilian casualties than human soldiers would. Sending a robot into battle means risking only the robot, not a human life. A robot can also take on more risk than a human, pushing deeper into enemy territory or loitering until it is absolutely sure it has the right target, and it's typically a better shot.

Peace activists say letting machines decide when to use lethal force eliminates room for mercy, an important check on war. They say using machines instead of humans artificially lowers the costs of war, which could lead to more conflicts. Beyond that, there is an ineffable revulsion at the idea of dying on a robot's whim.

But before the UN can decide to ban autonomous killer robots, it has to decide how to define them. The CCW informal meeting was the first step — information-gathering — in the process that would lead to a ban. Experts discussed the definition of autonomy, the degree of human control that is desirable in war, and human rights laws that might apply.

Most countries with advanced military robotics were party to the meeting, including the US, Russia, China, and Israel. Much of the discussion focused on the distinction between "autonomous" and "fully autonomous" lethal weapons, which hinges on where the human input ends. A drone that flies itself to a target is already somewhat autonomous; by that logic, even commercial airplanes on autopilot qualify. A drone programmed to reach a certain target and then drop its bombs could be considered either partially or fully autonomous, while a defense system that fires automatically at intruders might be considered fully autonomous.

Both sides agreed the meeting was productive

Cuba, Ecuador, Egypt, the Vatican, and Pakistan called for a preemptive ban on fully autonomous weapons, according to participants, while France, Germany, the Netherlands, the UK, and others stressed the importance of meaningful human control over targeting and attack decisions. Israel spent some of its time talking about the utility of autonomous military robots, while China and Russia did not take strong stands. Pakistan voiced the greatest concern, likely due to its history as a target of American drone strikes.

"I was impressed. The attendance and engagement was excellent, civility reigned, and multiple divergent points of view were expressed," says Ron Arkin, a roboticist and ethicist at the Georgia Institute of Technology who has worked with the Pentagon.

"It seemed most delegations had something to say, to the point that we consistently ran over time," he says. "But if you were looking for consensus, not quite so much, as terminology and definitions was a consistent problem."

The issue will be addressed again in November

Activists from Human Rights Watch and the International Committee for Robot Arms Control (ICRAC) were encouraged by the meeting. "The sense is that this has been a successful meeting, and the most energetic anyone can remember in this body," says Mark Gubrud, a member of ICRAC.

He was disappointed, however, in the US presentation, which seemed to dismiss the issue by vaguely advocating "appropriate" human control over weapons systems. The Department of Defense has issued a temporary directive on the development of robotic systems that says there must be "appropriate levels of human judgment." The US delegate at the CCW suggested this directive should be a model for the rest of the world, but most activists feel it is too weakly worded.

"There was emphasis up-front that the discussion is not about current weapons, especially drones," Gubrud recalls of the American position. "The US was there to tell everybody how good its model is and not to listen to what anybody else has to say."

The next step is for the CCW chair, Ambassador Jean-Hugues Simon-Michel of France, to prepare a summary of the meeting for the convention’s next gathering with all 117 party countries in November. At that time, the convention may choose to continue exploring the proposal of a ban or escalate to the next phase of an international arms agreement.