Scientists advising the US military say fears of an AI existential threat are ‘uninformed’

There are plenty of regular threats, though

DARPA ATLAS robotics challenge

Public perceptions of military AI can lean toward the apocalyptic, which is understandable when prominent figures like Elon Musk warn us about the possible rise of Terminator-style artificial intelligence. But when it comes to the judgments the military actually relies on, things are a little more sober. As a recent Department of Defense-funded report on the military use of AI states: “To most computer scientists, the claimed ‘existential threats’ posed by AI seem at best uninformed.”

“AGI has high visibility, disproportionate to its size or present level of success.”

These fears “do not align with the most rapidly advancing current research directions of AI as a field,” says the report, “but rather spring from dire predictions about one small area of research within AI, Artificial General Intelligence (AGI).” The report goes on to say that the current boom in artificial intelligence is not likely to bring us much closer to the faraway dream of a true AGI. It notes: “AGI has high visibility, disproportionate to its size or present level of success.”

The document in question is the product of JASON, an advisory group of US scientists that briefs the government on science and technology policy. Published earlier this month, it outlines current trends in artificial intelligence and makes recommendations about where the US military should direct its research and investment.

Some of the findings it outlines can be a little dry (e.g., “DoD should create and provide centralized resources for its intramural and extramural researchers”), but, as Motherboard points out, reading between the lines produces some interesting insights into the sort of thinking that might influence government and military policy.

AI has advanced massively in recent years — but only in narrow domains

Starting with an overview of the field, the report states that, beginning in 2010, AI research was “jolted by the broad and unforeseen successes” of multi-layer neural networks. This was facilitated by the availability of large, labeled data sets (thanks internet!) and hardware in the form of GPUs (thanks gamers!). This has led to some big breakthroughs, says the report, but only in narrow domains. The AI that can beat the best human players at Go will still get thrashed by the average chess player.
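To make that “labeled data plus multi-layer networks” recipe concrete, here’s a minimal Python sketch of the kind of supervised training the report describes. The toy XOR task, the network size, and the learning rate are all invented for illustration; real systems train on millions of labeled examples with GPU-accelerated frameworks, but the basic loop is the same.

```python
import numpy as np

# Illustrative only: a tiny multi-layer network trained on a tiny labeled
# data set (XOR). Labeled data in, gradient descent on a layered model,
# narrow competence out -- the same recipe, at toy scale.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: squared-error gradients, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```

The point of the toy is the report’s point: the network learns exactly the mapping it was shown, and nothing else.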

A quadrupedal robot made by Boston Dynamics, considered for military use

Like some other notes within the report, this seems like a distinction that’s both important and trifling. Elsewhere, the report lists autonomous military hardware currently in use around the world, including the Samsung SGR-A1 sentry gun installed on the South Korean border. The SGR-A1 is capable of asking humans for a password and shooting them with either lethal or non-lethal rounds if it doesn’t hear the correct answer.

In the next paragraph, the report says that while this demonstrates a certain amount of “autonomy,” it’s not autonomy as it maps to the human experience (the “freedom of will or action”), but the “prosaic ability” to act in accordance with a pre-defined set of complex rules. To a person standing in front of a machine gun that will kill them if it can’t understand what they’re saying, the difference seems trivial. The important thing is not the exact definition of autonomy, but the fact that responsibility has been transferred from human to machine.
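That “prosaic ability” is easy to picture in code. The sketch below is purely hypothetical: the challenge phrase and the actions are invented, and it has nothing to do with the SGR-A1’s actual logic. What it shows is how a machine can appear to “decide” while really just walking a fixed rule table.

```python
# Hypothetical illustration of rule-based "autonomy": the entire
# "decision" is a fixed challenge-response lookup, not anything
# resembling free will. The password and actions are invented.

EXPECTED_PASSWORD = "friend"  # made-up challenge phrase

def sentry_decision(heard):
    """Map an audio transcription to an action via pre-defined rules."""
    if heard is None:
        return "repeat_challenge"      # nothing intelligible was heard
    if heard.strip().lower() == EXPECTED_PASSWORD:
        return "stand_down"
    return "alert_human_operator"      # escalate instead of acting alone

print(sentry_decision("friend"))  # stand_down
print(sentry_decision("uh..."))   # alert_human_operator
```

Every branch was written by a human in advance; the machine contributes speed and vigilance, not judgment.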

At any rate, the report is scathing about AI’s ability to manage complex military systems in some Skynet-style arrangement. That would require the near-impossible: the creation of an artificial general intelligence. Instead, it thinks AI will best serve to augment human decision making. It gives the example of a convoy of military trucks using self-driving software: “One might imagine convoys of trucks where only the first truck has a driver and the others can use the driver’s plans and all the trucks’ sensors.”
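The convoy idea is concrete enough to sketch. Nothing below comes from the report itself: the truck names, the 30-meter following gap, and the halt-on-any-obstacle rule are all invented. But it captures the division of labor being imagined, with one human plan shared across every vehicle and sensor readings pooled across the whole convoy.

```python
# Hypothetical sketch of the report's convoy concept: a human-driven lead
# truck broadcasts its planned path, and driverless followers track it
# while pooling every truck's obstacle sensors.

from dataclasses import dataclass, field

@dataclass
class Truck:
    name: str
    position: float                      # distance along the route, meters
    obstacles_seen: set = field(default_factory=set)

def convoy_step(lead_plan, trucks, gap=30.0, max_move=5.0):
    """Advance each truck toward its slot behind the driver's planned
    position, halting everyone if any truck's sensors report an obstacle."""
    shared_obstacles = set().union(*(t.obstacles_seen for t in trucks))
    if shared_obstacles:
        return f"convoy halted: obstacles at {sorted(shared_obstacles)}"
    for i, truck in enumerate(trucks):
        target = lead_plan - i * gap     # one plan, offset per convoy slot
        truck.position += max(-max_move, min(max_move, target - truck.position))
    return "convoy advancing"

trucks = [Truck("lead", 0.0), Truck("follower-1", -30.0), Truck("follower-2", -60.0)]
print(convoy_step(lead_plan=100.0, trucks=trucks))  # convoy advancing
```

One driver’s plan steers every truck: augmentation rather than replacement, which is exactly the balance the report recommends.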

Whether or not the DoD will heed this report’s recommendations isn’t clear. Steven Aftergood of the Federation of American Scientists told Motherboard: “JASON reports are purely advisory. They do not set policy or determine DoD choices. On the other hand, they are highly valued, very informative and often influential. The reports are prepared only because DoD asks for them and is prepared to pay for them.”