
Have autonomous robots started killing in war?


Reports warned of killer robots — the reality is messier


Quadcopters equipped with munitions and machine vision systems are creating cheap, autonomous-enough weapons.
Image: The Verge

It’s the sort of thing that can almost pass for background noise these days: over the past week, a number of publications tentatively declared, based on a UN report from the Libyan civil war, that killer robots may have hunted down humans autonomously for the first time. As one headline put it: “The Age of Autonomous Killer Robots May Already Be Here.”

But is it? As you might guess, it’s a hard question to answer.

The new coverage has sparked a debate among experts that goes to the heart of our problems confronting the rise of autonomous robots in war. Some said the stories were wrongheaded and sensational, while others suggested there was a nugget of truth to the discussion. Diving into the topic doesn’t reveal that the world quietly experienced the opening salvos of the Terminator timeline in 2020. But it does point to a more prosaic and perhaps much more depressing truth: that no one can agree on what a killer robot is, and that if we wait for these machines to announce themselves, their presence in war will have long been normalized.

It’s cheery stuff, isn’t it? It’ll take your mind off the global pandemic at least. Let’s jump in:

What’s the actual news here?

The source of all these stories is a 548-page report from the United Nations Security Council that details the tail end of the Second Libyan Civil War, covering a period from October 2019 to January 2021. The report was published in March, and you can read it in full here. To save you time: it is an extremely thorough account of an extremely complex conflict, detailing troop movements, weapon transfers, raids, and skirmishes that took place among the war’s various factions, both foreign and domestic.

News stories were based on just two paragraphs from a UN report

The paragraph we’re interested in, though, describes an offensive near Tripoli in March 2020, in which forces supporting the UN-backed Government of National Accord (GNA) routed troops loyal to the Libyan National Army of Khalifa Haftar (referred to in the report as the Haftar Affiliated Forces or HAF). Here’s the relevant passage in full:

Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.

The Kargu-2 system that’s mentioned here is a quadcopter built in Turkey: it’s essentially a consumer drone that’s used to dive-bomb targets. It can be manually operated or steer itself using machine vision. A second paragraph in the report notes that retreating forces were “subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems” and that the HAF “suffered significant casualties” as a result.

The Kargu-2 drone is essentially a quadcopter that dive-bombs enemies.
Image: STM

But that’s it. That’s all we have. What the report doesn’t say — at least not outright — is that human beings were killed by autonomous robots acting without human supervision. It says humans and vehicles were attacked by a mix of drones, quadcopters, and “loitering munitions” (we’ll get to those later), and that the quadcopters had been programmed to work offline. But whether any attack actually took place without a human in the loop is unclear.

These two paragraphs made their way into the mainstream press via a story in the New Scientist, which ran a piece with the headline: “Drones may have attacked humans fully autonomously for the first time.” New Scientist is careful to caveat that military drones might have acted autonomously and that humans might have been killed, but later reports lost this nuance. “Autonomous drone attacked soldiers in Libya all on its own,” read one headline. “For the First Time, Drones Autonomously Attacked Humans,” said another.

Let’s be clear: the UN report itself does not say for certain whether drones autonomously attacked humans in Libya last year, though it certainly suggests this could have happened. The problem is that even if it did happen, for many experts, it’s just not news.

The problem of defining “killer robots”

Some experts took issue with these stories because they followed the UN report’s wording, which doesn’t distinguish clearly between loitering munitions and lethal autonomous weapons systems, or LAWS (that’s policy jargon for killer robots).

Loitering munitions, for the uninitiated, are the weapon equivalent of seagulls at the beachfront. They hang around a specific area, float above the masses, and wait to strike their target — usually military hardware of one sort or another (though it’s not impossible that they could be used to target individuals).

The classic example is Israel’s IAI Harpy, which was developed in the 1980s to target anti-air defenses. The Harpy looks like a cross between a missile and a fixed-wing drone, and is fired from the ground into a target area where it can linger for up to nine hours. It scans for telltale radar emissions from anti-air systems and drops onto any it finds. The loitering aspect is crucial as troops will often turn these radars off, given they act like homing beacons.

The IAI Harpy is launched from the ground and can linger for hours over a target area.
Image: IAI

“The thing is, how is this the first time of anything?” tweeted Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations. “Loitering munition have been on the battlefield for a while - most notably in Nagorno-Karaback. It seems to me that what’s new here isn’t the event, but that the UN report calls them lethal autonomous weapon systems.”

“the line between something being autonomous and being automated has shifted over the decades”

Jack McDonald, a lecturer at the department of war studies at King’s College London, says the distinction between the two terms is controversial and constitutes an unsolved problem in the world of arms regulation. “There are people who call ‘loitering munitions’ ‘lethal autonomous weapon systems’ and people who just call them ‘loitering munitions,’” he tells The Verge. “This is a huge, long-running thing. And it’s because the line between something being autonomous and being automated has shifted over the decades.”

So is the Harpy a lethal autonomous weapons system? A killer robot? It depends on who you ask. IAI’s own website describes it as such, calling it “an autonomous weapon for all weather,” and the Harpy certainly fits a makeshift definition of LAWS as “machines that target combatants without human oversight.” But if this is your definition, then you’ve created a very broad church for killer robots. Indeed, under this definition a land mine is a killer robot, as it, too, autonomously targets combatants in war without human oversight.

Artificial intelligence makes it worse

If killer robots have been around for decades, why has there been so much discussion about them in recent years, with groups like the Campaign to Stop Killer Robots pushing for regulation of this technology in the UN? And why is this incident in Libya special?

The rise of artificial intelligence plays a big role, says Zak Kallenborn, a policy fellow at the Schar School of Policy and Government. Advances in AI over the past decade have given weapon-makers access to cheap vision systems that can select targets as quickly as your phone identifies pets, plants, and familiar faces in your camera roll. These systems promise nuanced and precise identification of targets, but they are also far more prone to error than the radar-seeking sensors of older loitering munitions.

AI vision has been proven to be brittle, time and time again

“Loitering munitions typically respond to radar emissions, [and] a kid walking down the street isn’t going to have a high-powered radar in their backpack,” Kallenborn tells The Verge. “But AI targeting systems might misclassify the kid as a soldier, because current AI systems are highly brittle — one study showed a change in a single pixel is sufficient to cause machine vision systems to draw radically different conclusions about what it sees. An open question is how often those errors occur during real-world use.”

This is why the incident in Libya is interesting, says Kallenborn, as the Kargu-2 system mentioned in the UN report does seem to use AI to identify targets. According to the quadcopter’s manufacturer, STM, it uses “machine learning algorithms embedded on the platform” to “effectively respond against stationary or mobile targets (i.e. vehicle, person etc.)” Demo videos appear to show it doing exactly that. In one such clip, the quadcopter homes in on a mannequin in a stationary group.

But should we trust a manufacturer’s demo reel or brochure? And does the UN report make it clear that machine learning systems were used in the attack?

Kallenborn’s reading of the report is that it “heavily implies” that this was the case, but McDonald is more skeptical. “I think it’s sensible to say that the Kargu-2 as a platform is open to being used in an autonomous way,” he says. “But we don’t necessarily know if it was.” In a tweet, he also pointed out that this particular skirmish involved long-range missiles and howitzers, making it even harder to attribute casualties to any one system.

What’s next for LAWS and the law?

What we’re left with is, perhaps unsurprisingly, the fog of war. Or more accurately: the fog of LAWS. We can’t say for certain what happened in Libya, and our definitions of what is and isn’t a killer robot are so fluid that even if we knew, experts would still disagree about what to call it.

For Kallenborn, this is sort of the point: it underscores the difficulties we face trying to create meaningful oversight in the AI-assisted battles of the future. Of course the first use of autonomous weapons on the battlefield won’t announce itself with a press release, he says, because if the weapons work as they’re supposed to, they won’t look at all out of the ordinary. “The problem is autonomy is, at core, a matter of programming,” he says. “The Kargu-2 used autonomously will look exactly like a Kargu-2 used manually.”

Elke Schwarz, a senior lecturer in political theory at Queen Mary University of London who’s affiliated with the International Committee for Robot Arms Control, tells The Verge that discussions like this show we need to move beyond “slippery and political” debates about definitions and focus on the specific functionality of these systems. What do they do and how do they do it?

“there is critical mass building amongst nations and international organizations to push for a ban”

“I think we really have to think about the bigger picture [...] which is why I focus on the practice, as well as functionality,” says Schwarz. “In my work I try and show that the use of these types of systems is very likely to exacerbate violent action as an ‘easier’ choice. And, as you rightly point out, errors will very likely prevail [...] which will likely be addressed only post hoc.”

Schwarz says that despite the myriad difficulties, in terms of both drafting regulation and pushing back against the enthusiasm of militaries around the world to integrate AI into weaponry, “there is critical mass building amongst nations and international organizations to push for a ban for systems that have the capacity to autonomously identify, select and attack targets.”

Indeed, the UN is still conducting a review into possible regulations for LAWS, with results due to be reported later this year. As Schwarz says: “With this news story having made the rounds, now is a great time to mobilize the international community toward awareness and action.”