
The future of war will be fought by machines, but will humans still be in charge?


Unmanned Combat Air System Executes Touch and Go Landing On Flight Deck
The US Navy’s uncrewed X-47B prototype drone is the first aircraft of its type to perform a touch and go landing on an aircraft carrier.
Photo by Mass Communication Specialist 2nd Class Timothy Walter/U.S. Navy via Getty Images

Drone swarms. Self-driving tanks. Autonomous sentry guns. Sometimes it seems like the future of warfare arrived on our doorstep overnight, and we’ve all been caught unprepared. But as Paul Scharre writes in his new book Army of None: Autonomous Weapons and the Future of War, this has all been a long time coming, and we’re currently reaping the culmination of decades of military development. That doesn’t mean it’s not scary, though.

Scharre’s book provides an excellent overview of this field, tracing the history of autonomous weapons from early machine guns (which automated the firing and reloading of a rifle) to today’s world of DIY killer drones, cobbled together in garages and sheds. As a former Army Ranger and someone who has helped write government policy on autonomous weapons, Scharre is knowledgeable and concise. More importantly, he pays as much attention to the political dimension of autonomous weapons as the underlying technology, looking at things like historical attempts at arms control (e.g., Pope Innocent II’s ban on the use of crossbows against Christians in 1139, which didn’t do much).

The Verge recently spoke to Scharre about Army of None, discussing the US Army’s current attitude toward autonomous weapons, the feasibility of attempts to control so-called “killer robots,” and whether or not it’s inevitable that new military technology will have unexpected and harmful side effects.

This interview has been condensed and lightly edited for clarity.

This book has come at an opportune time, I’d say, just when the discussion about autonomous weapon systems is back in the news. What was your motivation for writing it?

I’ve been working on these issues for eight or nine years, and I’ve been engaged in discussion on autonomous weapons at the United Nations, NATO, and the Pentagon. I felt like I had enough to say that I wanted to write a book about it. The issue is certainly heating up, particularly as we see autonomous technologies develop in other spaces, like self-driving cars.

People see a car with autonomy, and they make the connection between that and weapons. They work out the risks for themselves and begin to ask questions, like, “What happens when a military drone has as much autonomy as a self-driving car?” It’s because we’re at this very interesting point in time when the technology is getting real and these questions are less theoretical.

How did the US military get to its current position? Our readers are familiar with the development of tech like self-driving vehicles by private companies, but how and when did the Army get interested in this?

In the case of the United States, they sort of stumbled into this military robotic revolution through Iraq and Afghanistan. I don’t think it was deliberately planned to buy thousands of air and ground robots, but that’s what happened. Most people would have said that wasn’t a good idea, but it turns out these robots were incredibly valuable for very specific tasks in these conflicts. Drones provided overhead surveillance, and [bomb disposal robots] decreased the threat of things like IEDs on the ground.

The US Army says for the time being it wants humans in the loop

During these conflicts, you saw the US military waking up to this technology, and beginning to think strategically about the direction they wanted to take. So one common theme has been wanting to develop more autonomy because [robotic] systems in the past have had such brittle telecommunication links to humans. If those are jammed, then your robots can’t do anything. But when the military says they want “full autonomy,” they’re not thinking of the Terminator. They’re thinking of a robot that goes from point A to point B by itself. And they’ve not articulated that clearly.

I quote the US Air Force Flight Plan from 2009 on [uncrewed] aircraft systems, which explicitly raises these questions of [autonomous weapon systems], and it was the first official defense document to do so. The doc says we can envision this period of time where the speed advantages make it best to go to full autonomy, and this raises all these tricky ethical and legal questions, and we need to start talking about it. And I think that was right.

There are only a few fully autonomous weapon systems deployed around the world, including the Aegis combat system (pictured) and the Israeli Harpy drone.

The Air Force Flight Plan says in a situation where computers can make decisions faster than humans, it might be advantageous to hand over control to machines. You point out that this has been the case with the very small number of autonomous weapons systems currently in use — that they’re designed for situations where humans just couldn’t keep up.

Like, for example, the US Navy’s Aegis Combat System, which is used on ships to defend against bombardment from precision-guided missiles, which are themselves a sort of semi-autonomous system. Given this fact — that autonomous weapons systems are being built in response to autonomous weapon systems — do you think the forward march of this technology is unstoppable?

I think that is one of the central questions of the book. This path that we’re on, is the destination inevitable? It’s clear that the technology is leading us down a road where fully autonomous weapon systems are certainly possible, and in some simple environments, they’re possible today.

Is that a good thing? There are lots of reasons to think not. I’m inclined to think that it’s not a great idea to have less human control over violence, [but] I also don’t think it’s easy to halt the forward pace of technology. One of the things I try to grapple with in the book is the historical track record on this because it’s extremely mixed. There are examples of success and failures in arms control going all the way back to ancient India, to 1500 BC. There is this age-old question of “Do we control technology, or does our technology control us?” And I don’t think there are easy answers to that. Ultimately, the challenge is not really autonomy or technology itself, but ourselves.

One thing I think your book does really well is help define the terms of this debate, distinguishing between different types of autonomy. This seems incredibly important because how can we discuss these issues without a common language? With that in mind, are there any particular concepts here that you think are regularly misunderstood?

[Laughs] This is always the challenge! I put down 10,000 words in the book talking about this problem, and now I have to sum it up in a paragraph or two.

“Autonomy and intelligence are not the same thing.”

But yes, one thing is that people tend to talk about “autonomous systems,” and I don’t think that’s a very meaningful concept. You need to talk about autonomy in what respect: what task are you talking about automating? Autonomy is not magic. It’s simply the freedom, whether of a human or machine, to perform some action. As children get older, we grant them more autonomy — to stay out later, to drive a car, to go off to college. But autonomy and intelligence are not the same thing. As systems become more intelligent, we can choose to grant them more autonomy, but we don’t have to.

When tracing the history of autonomous weapons, you start with the American Civil War and the inventor of the Gatling gun, Richard Gatling. This was a precursor to modern machine guns, and you include a fantastic excerpt from one of Gatling’s letters, in which he says his motivation was to save lives. He thought that a gun that fired automatically would mean fewer soldiers on the battlefield and therefore fewer deaths. Of course, this turned out to not be the case. Do you think it is inevitable that new technologies in warfare will have these unintended, bloody consequences?

Many technologies certainly look great when you are the one that has them. You say, “Wow, look at this! We can save our troops’ lives by being more effective on the battlefield!” But when both sides have them, as with machine guns, all of a sudden it takes war to a far more awful place. I think that’s a definite concern with autonomy and robotics. There’s this risk of an arms race, where, individually, nations are pursuing various military advances that are very reasonable. But collectively, that makes war less controllable and is overall to the detriment of humanity.

The Gatling gun was one of those fascinating things I stumbled across while researching the history of this field. And automation there did reduce the number of people needed to deliver a certain amount of firepower: four people with a Gatling gun could deliver as much firepower as 100 people. But the question is, what did militaries do with that? Did they reduce the number of people in their armies? No, they expanded their firepower, and in doing so, they took violence to a new level. It’s an important cautionary tale.

Russia’s “Platform-M” combat robot platform.
Image: Russian Ministry of Defense

You point out that people wrongly assume there’s a rush to autonomy in the US military when there is, in fact, a lot of internal resistance. Unlike Russia, for example, the US is not building land-based robots for the front line, and the autonomous aircraft it’s developing are intended for support roles, not combat. How would you summarize America’s current policy on autonomous weapons? 

There’s a lot of rhetoric you hear about the US defense establishment and AI and autonomy. But if you look at what they’re actually spending money on, the reality doesn’t always match up. In particular for combat applications, there’s this disconnect where you have engineers in places like DARPA running full-tilt and making the tech work, but there’s a valley of death between R&D and operational use. And some of the hurdles are cultural, because the warfighters just don’t want to give up their jobs — particularly the people at the tip of the spear.

The upshot is that US Defense Department leaders have said very strongly that they intend to keep a human in the loop to authorize lethal force decisions in future weapon systems. And I don’t hear that same language from other nations like Russia, which talks about building a fully roboticized combat unit capable of autonomous operations.

Russia and China obviously come up a lot in the book, but experts seem to be more worried about non-state actors. They point out that a lot of this technology, like autonomous navigation and small drones, is freely available. What’s the threat there?

Non-state groups like the Islamic State already have armed drones today that they’ve cobbled together using commercially available equipment. And technology is so ubiquitous that it’s something we’re going to have to grapple with. We’ve already seen low-level “mass” drone attacks, like the one on a Russian airbase in Syria. I hesitate to say this was a drone swarm because there’s no indication they were cooperative. But I think attacks like that will scale up in sophistication and size over time because the technology is so widely available. There’s no good solution to this.

This fear that AI is a “dual-use” technology, that any commercial research can have malicious applications, seems to have motivated a lot of the people arguing that we need an international treaty controlling autonomous weapons. Do you think such a treaty is likely to happen?

There is some energy after the recent meetings in the United Nations because they saw significant moves from two major countries: Austria, which is going to call for a ban, and China, which stated at the end of the week that it would like some sort of ban on autonomous weapons. But I don’t think we see the momentum from the UN for a treaty in the vein of the CCW [the Convention on Certain Conventional Weapons, which limits the use of mines, booby traps, incendiary weapons, blinding lasers, and other weapons]. It’s just not in the cards. [The UN] is a consensus-based organization, and every country would have to agree. It’s not going to happen.

A treaty on ‘killer robots’ isn’t likely to happen any time soon

What’s happened in the past is that these movements have matured for a while in these large collective bodies in the UN, and then migrated out to standalone treaties. That’s how the treaty on cluster munitions came about, for example. I don’t think we’re at that point yet. There isn’t a core group of Western democratic states involved, and that’s been critical in the past, with countries like Canada and Norway leading the charge. It’s possible that Austria’s move changes that dynamic, but it’s not clear at this point.

The big difference this time around is the lack of direct humanitarian threat. People were being killed and maimed by landmines and cluster munitions, while here, the threat is very theoretical. Even if countries like China and the US did sign up to some sort of treaty, verification [that they were following the treaty’s rules] would be exceptionally difficult. It’s very hard to imagine how you would get them to trust one another. And that’s a core problem. If you can’t figure that out, there’s no solution.

Given that you think a UN ban or set of restrictions is not going to happen, what is the best way that we can guide the development of autonomous weapons? Because no one involved in this debate, even those arguing that autonomous weapons will definitely save lives, thinks there are no risks involved.

I think that more conversation about the topic by academics in the public sphere is all to the good. This is an issue that brings together a whole array of disciplines: technology, military operations, law, ethics, and other things. And so this is a place where having a robust discussion is helpful and much needed. I’d like to think that this book might help advance that conversation, of course, by broadening the set of people that are engaged in it.

What I think is important is establishing the underlying principles for what control of autonomous weapons looks like. Stuff like defining what we mean by “meaningful human control” or “appropriate human judgment” or the concept of focusing on the human role. I like that, and I want to see more of that conversation internationally. I think of it as posing the question: if we had any and all technology we could think of, what role would we want humans to play in war? And why? What decisions require uniquely human judgment? I don’t know the answers, but these are the right questions to be asking.