US Department of Defense policy already bans artificial intelligence from autonomously launching nuclear weapons. But amid rising fears of AI spurred by a plethora of potential threats, a bipartisan group of lawmakers has decided to make extra-double-sure it can’t.
As announced earlier this week, Senator Edward Markey (D-MA) and Representatives Ted Lieu (D-CA), Don Beyer (D-VA), and Ken Buck (R-CO) have introduced the Block Nuclear Launch by Autonomous AI Act, which would “prohibit the use of Federal funds to launch a nuclear weapon using an autonomous weapons system that is not subject to meaningful human control.” The act would codify existing Pentagon rules for nuclear weapons, which, as of 2022, read thusly:
“In all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.”
The bill, echoing that language, says that no autonomous system without meaningful human oversight can launch a nuclear weapon or “select or engage targets” with the intention of launching one. “Any decision to launch a nuclear weapon should not be made by artificial intelligence,” the text reads.
If this is already forbidden, why introduce the bill? Its sponsors note that a 2021 National Security Commission on Artificial Intelligence report recommended affirming a ban on autonomous nuclear weapons launches, not only to prevent it from happening inside the US government but to spur similar commitments from China and Russia. Publicizing the bill calls attention to the potential dangers of current-generation autonomous artificial intelligence systems, an ongoing concern in Congress and the tech world alike. And as indicated by the press release, it offers a chance to highlight the sponsors’ other nuclear non-proliferation efforts — like a recent bill restricting the president’s power to unilaterally declare nuclear war. I’ll let you make the obvious WarGames joke yourself.