Google promises ethical principles to guide development of military AI

Illustration by Alex Castro / The Verge

Google is drawing up a set of guidelines that will steer its involvement in developing AI tools for the military, according to reports from The New York Times and Defense One. What exactly these guidelines will stipulate isn’t clear, but Google told the Times they will include a ban on the use of artificial intelligence in weaponry. The principles are expected to be announced in full in the coming weeks. They are a response to the controversy over the company’s decision to develop AI tools for the Pentagon that analyze drone surveillance footage.

Although tech companies regularly bid for contracts in the US defense sector, the involvement of Google (a company that once boasted the motto “don’t be evil”) and cutting-edge AI tech has raised eyebrows — both inside and outside the firm. News of the Pentagon contract was first made public by Gizmodo in March, and thousands of Google employees have since signed a petition demanding the company withdraw from all such work. Around a dozen individuals have even resigned.

Internal emails obtained by the Times show that Google was aware of the upset this news might cause. Fei-Fei Li, chief scientist at Google Cloud, told colleagues that they should “avoid at ALL COSTS any mention or implication of AI” when announcing the Pentagon contract. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google,” said Li.

But Google never ended up making the announcement, and it has since been on the back foot defending its decision. The company says the technology it’s helping to build for the Pentagon simply “flags images for human review” and is for “non-offensive uses only.” The contract is also small by industry standards — worth just $9 million to Google, according to the Times.
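To make that description concrete, here is a minimal, purely hypothetical sketch of what a “flags images for human review” pipeline could look like, assuming an upstream image classifier that emits per-frame confidence scores; the classifier below is a stub, and nothing here reflects Google’s actual system:

```python
# Hypothetical "flag for human review" pipeline (illustrative only).
# The classifier is a stand-in; no real model or military system is described here.
import random
from dataclasses import dataclass

@dataclass
class Detection:
    frame_id: int
    label: str
    confidence: float  # 0.0-1.0 score from some upstream image classifier

def classify_frame(frame_id: int) -> Detection:
    """Placeholder for an image-analysis model; returns a made-up detection."""
    return Detection(frame_id=frame_id, label="vehicle", confidence=random.random())

def flag_for_review(detections: list[Detection], threshold: float = 0.8) -> list[Detection]:
    """Queue high-confidence detections for a human analyst; take no action itself."""
    return [d for d in detections if d.confidence >= threshold]

if __name__ == "__main__":
    detections = [classify_frame(i) for i in range(100)]
    review_queue = flag_for_review(detections)
    print(f"{len(review_queue)} of {len(detections)} frames queued for human review")
```

The design point in this sketch is that the software only narrows what human analysts look at; any decision about what to do with a flagged frame stays outside the code.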

But this extra context has not quelled debate at the company, with Google employees arguing the pros and cons of military AI in meetings and on internal message boards. Many prominent researchers at the company have already come out against the use of AI weaponry. Jeff Dean, who heads AI work at Google, said this month that he had signed a letter in 2015 opposing the development of autonomous weapons. Top executives at DeepMind, Google’s London-based AI subsidiary, signed a similar petition and sent it to the United Nations last year.

But the question facing these employees (and Google itself) is: where do you draw the line? Does using machine learning to analyze surveillance footage for the military count as “weaponized AI”? Probably not. But what if that analysis informs future decisions about drone strikes? Does it matter then? How would Google even know if this had happened?

These are tough questions, and they’re probably impossible to answer in a way that satisfies all parties. In committing to drawing up guidelines, Google has given itself a difficult task.

Comments

What’s stopping the military from taking the advances made in AI by Google and other companies, and running with their own development?

Not a damn thing.

If Google doesn’t do it, someone else will, and the DoD has more than enough money to start from scratch if it so desires.

I don’t think AI is a commodity yet, where quality doesn’t matter and everyone can apply the basics for equal results. There’s no single box or technique that’s an agreed-upon solution for everything. We’re just not there yet, and advances are research-based for now.

Until that point, there’s an expertise gap: the DoD would have trouble matching most of the leaders in AI right now. So it contracts the work out.

It’s not Google and the US I’m worried about, but China and whatever nationalized company they pick to help. China as a country moves at a Silicon Valley kind of pace. They will put it into place without much thought about ethics or efficacy, as they seem far more utilitarian. That means they will make the logical moves as soon as they present themselves, and we’ll see what bad comes of it on the nightly news, while the good will mostly go uncovered.

I figure AI will be similar to nuclear power: the US will invent it and then cower at it, while other countries race to put it in place. The only question is, what will the first Chernobyl look like?

"These are tough questions, and they’re probably impossible to answer in a way that satisfies all parties."

As is true of ALL controversial decisions.

Their new motto is "do evil ethically"??

We won’t actually blow anyone up. Instead we will calculate and provide a confidence index that someone is "bad" or "really bad" and then arm weapons, suggest a course of action, and provide a button where a trained user can then automate the termination of "really bad" people.

Bad people only, guys. No worries!

Not to mention, "no weapons!!1"

Well, what’s a weapon? Is Stuxnet a weapon? If I had a worm that shut down a power grid, is that a weapon? Or are we just talking about traditional weapons like guns and rockets?

Hell, what’s "non-offensive"? Would a missile defense system be "non-offensive"? What if that system works by preparing counter-strikes, or by firing missiles at the enemy missile site in reaction to a launch?

Too many carefully chosen words – nobody should trust this deal.

The company says the technology it’s helping to build for the Pentagon simply "flags images for human review"

Google’s next Captcha: "Please select all boxes that contain underground bunkers or ‘terrorist looking’ people"

I understand the concern over weaponized AI – the idea of taking life-and-death decisions out of human hands raises some serious ethical questions, and I can see why these engineers would be personally uncomfortable with it. I doubt they joined Google thinking that their work would be used to direct drone strikes in Yemen.
The real problem with military AI is that it will inevitably speed up the decision-making cycle to the point where humans can’t intervene in the process without essentially forfeiting. That will make uncontrolled escalation even harder to prevent than it is now. These kinds of personal opt-outs by individuals and companies don’t address that problem. US foreign policy isn’t perfect, but we’re not the only country in the world, and even now we’re far from being the least ethical. Are Russian and Chinese AI researchers doing the same soul-searching?

I think people tend to leap from what this could conceivably be used for to assuming that’s what it will actually be used for.

I suspect this will be used more to improve the signal-to-noise ratio than to act on its own as the arbiter of life and death.

You mention it inevitably speeding up the decision process to the point where people don’t have much time to choose, but do they have much time already? If anything, wouldn’t faster filtering and more thorough analysis result in fewer cases of mistaken identity? A computer can weigh more variables more quickly than a drone operator who may fire just to avoid missing an opportunity.

I feel like the argument against AI relies on the idea that there is all this human intervention and judgment stopping civilian casualties, but as we continue to blow up hospitals and wedding parties, that clearly isn’t the case.

This work will result in superintelligent, self-aware machines, and the end of humanity.
The superintelligence will have as much interest in our wars and politics as we have for the dreams of earthworms.
I’m sure a few of the Google engineers are smart enough to realize this, to know they are creating their own demise.

Ask yourselves: if there are billions of habitable planets in the universe, why haven’t we received intelligent signals from any of them? Are we unique, or is technological advancement brief and inevitably terminal, ending right around the point we have almost reached? I am dismayed by Google’s arrogant eagerness to fiddle with deep learning while having no idea how self-awareness works or understanding our own consciousness.
