AI voice assistants reinforce harmful gender stereotypes, new UN report says

Female-sounding default voices perpetuate antiquated, harmful ideas about women

[Photo: Amazon Echo]

Artificial intelligence-powered voice assistants, many of which default to female-sounding voices, are reinforcing harmful gender stereotypes, according to a new study published by the United Nations.

Titled “I’d blush if I could,” after a response Siri utters when receiving certain sexually explicit commands, the paper explores the effects of bias in AI research and product development and the potential long-term negative implications of conditioning society, particularly children, to treat these digital voice assistants as unquestioning helpers who exist only to serve owners unconditionally. It was authored by the United Nations Educational, Scientific, and Cultural Organization, otherwise known as UNESCO.

The paper argues that by giving voice assistants traditionally female names, like Alexa and Siri, and rendering their voices female-sounding by default, tech companies have preconditioned users to fall back on antiquated and harmful perceptions of women. Going further, the paper argues that tech companies have failed to build in proper safeguards against hostile, abusive, and gendered language. Instead, most assistants, Siri included, tend to deflect aggression or respond with a sly joke. Ask Siri to make you a sandwich, for instance, and the voice assistant will reply, “I can’t. I don’t have any condiments.”

Tech companies are perpetuating harmful stereotypes about women through AI

“Companies like Apple and Amazon, staffed by overwhelmingly male engineering teams, have built AI systems that cause their feminized digital assistants to greet verbal abuse with catch-me-if-you-can flirtation,” the report states. “Because the speech of most voice assistants is female, it sends a signal that women are ... docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command like ‘hey’ or ‘OK’. The assistant holds no power of agency beyond what the commander asks of it. It honours commands and responds to queries regardless of their tone or hostility.”

Much has been written about the pitfalls of tech companies building their entire consumer-facing AI platforms in the image of traditional, Hollywood-influenced ideas of subservient intelligences. With the rise of so-called ambient computing, when all manner of internet-connected gadgets surround us at all times, voice assistants will likely become the primary way we interact with hardware and software. (Think Spike Jonze’s Her, which remains perhaps the most accurate film depiction of that near future.) How we interact with the increasingly sophisticated intelligences powering these platforms could have profound cultural and sociological effects on how we interact with other human beings, with service workers, and with humanoid robots that take on more substantial roles in daily life and the labor force.

However, as Business Insider reported last September, Amazon chose a female-sounding voice because market research indicated it would be received as more “sympathetic” and therefore more helpful. Microsoft, on the other hand, named its assistant Cortana to bank on the existing recognition of the explicitly female AI character from its Halo video game franchise; you can’t change Cortana’s voice to a male one, and the company hasn’t said when it plans to let users do so. Siri, for what it’s worth, is a Scandinavian female name that means “beautiful victory” in Old Norse. In other words, these decisions about the gender of AI assistants were made on purpose, and after what sounds like extensive feedback.

The AI research field is predominantly white and male

Tech companies have made an effort to move away from these early design decisions steeped in stereotypes. Google now refers to its various Assistant voice options, which include a range of accents with male- and female-sounding options for each, by color. You can no longer select a “male” or “female” version; instead, each color is randomly assigned to one of the eight voice options for each user. The company has also rolled out an initiative called Pretty Please that rewards young children for using phrases like “please” and “thank you” while interacting with Google Assistant. Amazon released a similar feature last year to encourage polite behavior when talking to Alexa.

Yet as the report says, these features and gendered voice options don’t go far enough; the problem may be baked into the AI and tech industries themselves. The field of AI research is predominantly white and male, a separate report published last month found: 80 percent of AI academics are men, and women make up just 15 percent of AI researchers at Facebook and 10 percent at Google.

UNESCO’s suggested solutions include making assistant voices as close to gender-neutral as possible and building systems that discourage gender-based insults. Additionally, the report says tech companies should steer away from conditioning users to treat AI as they would a lesser, subservient human being, and that the only way to avoid perpetuating harmful stereotypes like these is to remake voice assistants as purposefully non-human entities.