Google promises ethical principles to guide development of military AI

The company says the guidelines will include a ban on the development of AI weaponry

Illustration by Alex Castro / The Verge

Google is drawing up a set of guidelines that will steer its involvement in developing AI tools for the military, according to reports from The New York Times and Defense One. What exactly these guidelines will stipulate isn’t clear, but Google told the Times they will include a ban on the use of artificial intelligence in weaponry. The principles are expected to be announced in full in the coming weeks. They are a response to the controversy over the company’s decision to develop AI tools for the Pentagon that analyze drone surveillance footage.

Although tech companies regularly bid for contracts in the US defense sector, the involvement of Google (a company that once boasted the motto “don’t be evil”) and cutting-edge AI tech has raised eyebrows — both inside and outside the firm. News of the Pentagon contract was first made public by Gizmodo in March, and thousands of Google employees have since signed a petition demanding the company withdraw from all such work. Around a dozen employees have even resigned.

Internal emails obtained by the Times show that Google was aware of the upset this news might cause. Fei-Fei Li, chief scientist at Google Cloud, told colleagues that they should “avoid at ALL COSTS any mention or implication of AI” when announcing the Pentagon contract. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google,” said Li.

But Google never ended up making the announcement, and it has since been on the back foot defending its decision. The company says the technology it’s helping to build for the Pentagon simply “flags images for human review” and is for “non-offensive uses only.” The contract is also small by industry standards — worth just $9 million to Google, according to the Times.

But this extra context has not quelled debate at the company, with Google employees arguing the pros and cons of military AI in meetings and on internal message boards. Many prominent researchers at the company have already come out against the use of AI weaponry. Jeff Dean, who heads AI work at Google, said this month that he had signed a letter in 2015 opposing the development of autonomous weapons. Top executives at DeepMind, Google’s London-based AI subsidiary, signed a similar petition and sent it to the United Nations last year.

But the question facing these employees (and Google itself) is: where do you draw the line? Does using machine learning to analyze surveillance footage for the military count as “weaponized AI”? Probably not. But what if that analysis informs future decisions about drone strikes? Does it matter then? How would Google even know if this had happened?

These are tough questions, and they’re probably impossible to answer in a way that satisfies all parties. In committing to drawing up guidelines, Google has given itself a difficult task.