The Pentagon plans to spend $2 billion to put more artificial intelligence into its weaponry

Officials say they want computers to be capable of explaining their decisions to military commanders

US air war against ISIL, conducted from a base in the Persian Gulf region. Photo by John Moore/Getty Images

The Defense Department’s cutting-edge research arm has promised to make the military’s largest investment to date in artificial intelligence (AI) systems for U.S. weaponry, committing to spend up to $2 billion over the next five years in what it depicted as a new effort to make such systems more trusted and accepted by military commanders.

The director of the Defense Advanced Research Projects Agency (DARPA) announced the spending spree on the final day of a conference in Washington celebrating its sixty-year history, including its storied role in birthing the internet.

The agency sees its primary role as pushing forward new technological solutions to military problems, and the Trump administration’s technical chieftains have strongly backed injecting artificial intelligence into more of America’s weaponry as a means of competing better with Russian and Chinese military forces.

The DARPA investment is small by Pentagon spending standards, where the cost of buying and maintaining new F-35 warplanes is expected to exceed a trillion dollars. But it is larger than past AI programs’ funding, and roughly equal to what the United States spent on the Manhattan Project, which produced nuclear weapons in the 1940s (a sum worth about $28 billion today after inflation).
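That inflation adjustment is simple arithmetic. The sketch below assumes a roughly 14x cumulative price increase since the mid-1940s, a rounded estimate for illustration rather than an official statistic.

```python
# Illustrative arithmetic only: adjusting the Manhattan Project's nominal cost
# for inflation. The ~14x cumulative factor since the mid-1940s is an assumed
# round figure for this sketch, not an official statistic.
nominal_cost = 2_000_000_000  # ~$2 billion in 1940s dollars
inflation_factor = 14         # assumed cumulative price growth since ~1945

adjusted_cost = nominal_cost * inflation_factor
print(f"~${adjusted_cost / 1e9:.0f} billion in today's dollars")  # ~$28 billion
```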

In July, defense contractor Booz Allen Hamilton received an $885 million contract to work on unspecified artificial intelligence programs over the next five years. And Project Maven, the single largest military AI project, which is meant to improve computers’ ability to pick out objects in pictures for military use, is due to get $93 million in 2019.

Turning more military analytical work – and potentially some key decision-making – over to computers and algorithms installed in weapons capable of acting violently against humans is controversial.

Google had been leading Project Maven for the department, but after an organized protest by Google employees who didn’t want to work on software that could help pick out targets for the military to kill, the company said in June it would discontinue its work after its current contract expires.

While Maven and other AI initiatives have helped Pentagon weapons systems become better at recognizing targets and doing things like flying drones more effectively, fielding computer-driven systems that take lethal action on their own hasn’t been approved to date.

A Pentagon strategy document released in August says advances in technology will soon make such weapons possible. “DoD does not currently have an autonomous weapon system that can search for, identify, track, select, and engage targets independent of a human operator’s input,” said the report, which was signed by top Pentagon acquisition and research officials Kevin Fahey and Mary Miller.

But “technologies underpinning unmanned systems would make it possible to develop and deploy autonomous systems that could independently select and attack targets with lethal force,” the report predicted.

The report noted that while AI systems are already technically capable of choosing targets and firing weapons, commanders have been hesitant to surrender control to weapons platforms, partly because of a lack of confidence in machine reasoning, especially on the battlefield, where variables could emerge that a machine and its designers haven’t previously encountered.

Right now, for example, if a soldier asks an AI system such as a target identification platform to explain its selection, it can only provide the confidence estimate for its decision, DARPA director Steven Walker told reporters after a speech announcing the new investment. That estimate is often expressed as a percentage: the likelihood that an object the system has singled out is actually what the operator was looking for.
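In machine learning terms, that bare confidence estimate is typically just the top class probability from a classifier. Below is a minimal, hypothetical Python sketch of that opaque output; the labels and scores are invented for illustration.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw classifier scores into probabilities."""
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

# Hypothetical raw scores from an image classifier for one detected object.
labels = ["truck", "armored vehicle", "civilian car"]
logits = np.array([1.2, 3.4, 0.5])

probs = softmax(logits)
best = int(np.argmax(probs))

# All the operator gets today: a label and a percentage, with no rationale.
print(f"{labels[best]}: {probs[best]:.0%} confidence")
```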

“What we’re trying to do with explainable AI is have the machine tell the human ‘here’s the answer, and here’s why I think this is the right answer’ and explain to the human being how it got to that answer,” Walker said.
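One common research approach to that kind of explanation, sketched here purely as an illustration and not as DARPA’s actual method, is to report the input features that contributed most to the answer. Every feature name and weight below is invented.

```python
# Hypothetical explanation layer: surface the evidence behind the answer.
# The answer, features, and contribution weights are all invented.
answer = "armored vehicle"
feature_contributions = {
    "tracked wheels detected": 0.45,
    "turret-like silhouette": 0.30,
    "near known staging area": 0.15,
    "vehicle length ~7 m": 0.10,
}

print(f"Here's the answer: {answer}")
print("Here's why I think this is the right answer:")
for feature, weight in sorted(feature_contributions.items(),
                              key=lambda kv: kv[1], reverse=True):
    print(f"  - {feature} (contribution: {weight:.0%})")
```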

DARPA officials have been opaque about exactly how the newly financed research will enable computers to explain key decisions to humans on the battlefield, amid all the clamor and urgency of a conflict, but they said that being able to do so is critical to AI’s future in the military.

Vaulting over that hurdle, by explaining AI reasoning to operators in real time, could be a major challenge. Human decision-making and rationality depend on a lot more than just following rules, which machines are good at. It takes years for humans to build a moral compass and commonsense thinking abilities, characteristics that technologists are still struggling to design into digital machines.

“We probably need some gigantic Manhattan Project to create an AI system that has the competence of a three year old,” Ron Brachman, who spent three years managing DARPA’s AI programs ending in 2005, said earlier during the DARPA conference. “We’ve had expert systems in the past, we’ve had very robust robotic systems to a degree, we know how to recognize images in giant databases of photographs, but the aggregate, including what people have called commonsense from time to time, it’s still quite elusive in the field.”

Michael Horowitz, who worked on artificial intelligence issues for the Pentagon as a fellow in the Office of the Secretary of Defense in 2013 and is now a professor at the University of Pennsylvania, explained in an interview that “there’s a lot of concern about AI safety – [about] algorithms that are unable to adapt to complex reality and thus malfunction in unpredictable ways. It’s one thing if what you’re talking about is a Google search, but it’s another thing if what you’re talking about is a weapons system.”

Horowitz added that if AI systems could prove they were using common sense, “it would make it more likely that senior leaders and end users would want to use them.”

An expansion of AI’s use by the military was endorsed by the Defense Science Board in 2016, which noted that machines can act more swiftly than humans in military conflicts. But with those quick decisions, it added, come doubts from those who have to rely on the machines on the battlefield.

“While commanders understand they could benefit from better, organized, more current, and more accurate information enabled by application of autonomy to warfighting, they also voice significant concerns,” the report said.

DARPA isn’t the only Pentagon unit sponsoring AI research. The Trump administration is now creating a new Joint Artificial Intelligence Center at the Pentagon to help coordinate all the AI-related programs across the Defense Department.

But DARPA’s planned investment stands out for its scope.

DARPA currently has about 25 programs focused on AI research, according to the agency, but plans to funnel some of the new money through its new Artificial Intelligence Exploration Program. That program, announced in July, will award grants of up to $1 million each for research into how AI systems can be taught to understand context, allowing them to operate more effectively in complex environments.

Walker said that enabling AI systems to make decisions even when distractions are all around, and then to explain those decisions to their operators, will be “critically important…in a warfighting scenario.”

The Center for Public Integrity is a nonprofit investigative news organization in Washington, DC.
