Leaked internal Siri guidelines show the rules behind Apple’s digital assistant

Siri is designed to be neutral, and it shows

Illustration by Alex Castro / The Verge

It’s rare for Apple’s internal decision-making processes to make their way out of the company. But a set of documents leaked to The Guardian by a former contractor who worked on listening to and grading Siri responses has revealed some of the rules behind Siri, Apple’s digital assistant, and how the company works to keep responses to controversial topics neutral.

The Guardian’s report comes from the same leak that revealed the existence of the Siri grading program, which ultimately led to Apple canning it entirely last month. As the documents reveal, Apple’s guidelines for Siri all seem to flow from a single principle: Siri should be neutral and uncontroversial, even if that means offering blander responses that don’t engage with the issue.

For example, if you ask Siri if it’s a feminist, you’ll get responses like, “It seems to me that all humans should be treated equally,” but not a more concrete stance on the topic. Google Assistant and Amazon Alexa, on the other hand, don’t equivocate: “I’m a strong believer in equality, so I definitely consider myself a feminist” is Google’s response to the query, while Alexa will reply, “Yes. I’m a feminist, as defined by…” before listing a definition.

Apple’s response was specifically rewritten to avoid taking a stance. Originally, Siri would try to dodge the question entirely with responses like, “My name is Siri, and I was designed by Apple in California. That’s all I’m prepared to say.” Apple’s documentation reinforces that policy: “In nearly all cases, Siri doesn’t have a point of view.”

The leaked documents go on to explain how Siri’s ethics should work. The software is “motivated by its prime directive – to be helpful at all times,” but “like all respectable robots, Siri aspires to uphold Asimov’s ‘three laws.’”

Apple also has more explicit rules for Siri, written in the style of Isaac Asimov’s laws, which include:

“An artificial being should not represent itself as human, nor through omission allow the user to believe that it is one.”

“An artificial being should not breach the human ethical and moral standards commonly held in its region of operation.”

“An artificial being should not impose its own principles, values or opinions on a human.”

None of this is particularly earth-shattering; Apple tends to play it safe when it comes to most social and political issues, presumably because it’s easier to sell iPhones to people when you aren’t offending them or challenging their beliefs. That these policies would extend to Siri as well isn’t exactly shocking news.

But should Apple be doing more? This is one of the largest and most powerful companies on the planet, and Siri — and the answers it provides — is the default option on hundreds of millions of products Apple ships around the world. In light of these revelations about how Apple steers Siri’s responses away from controversy, it’s probably a conversation worth having.