
Google is using AI to better detect searches from people in crisis


Directing more searches to help and support



Photo by Amelia Holowaty Krales / The Verge

In a personal crisis, many people turn to an impersonal source of support: Google. Every day, the company fields searches on topics like suicide, sexual assault, and domestic abuse. But Google wants to do more to direct people to the information they need, and says new AI techniques that better parse the complexities of language are helping.

Specifically, Google is integrating its latest machine learning model, MUM, into its search engine to “more accurately detect a wider range of personal crisis searches.” The company unveiled MUM at its I/O conference last year and has since used it to augment search with features that try to answer questions connected to the original search.

In this case, MUM will be able to spot search queries related to difficult personal situations that earlier search tools could not, says Anne Merritt, a Google product manager for health and information quality.


“MUM is able to help us understand longer or more complex queries like ‘why did he attack me when i said i dont love him,’” Merritt told The Verge. “It may be obvious to humans that this query is about domestic violence, but long, natural-language queries like these are difficult for our systems to understand without advanced AI.”

Other examples of queries that MUM can react to include “most common ways suicide is completed” (a search Merritt says earlier systems “may have previously understood as information seeking”) and “Sydney suicide hot spots” (where earlier systems would likely have returned travel information, ignoring the mention of “suicide” in favor of the more popular query for “hot spots”). When Google detects such crisis searches, it responds with an information box telling users “Help is available,” usually accompanied by a phone number or website for a mental health charity like Samaritans.
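Google hasn’t published how MUM actually scores these queries, so there’s no way to show its real pipeline. But the general idea Merritt describes, classifying the intent of a whole query rather than matching individual keywords, can be illustrated with a minimal sketch using the open-source Hugging Face zero-shot classification pipeline. The model, label set, and threshold below are all placeholders for illustration, not anything Google has confirmed it uses:

```python
# Toy sketch of crisis-intent detection by classifying the whole query,
# NOT Google's actual system: MUM is proprietary, so this stands in with
# an open-source zero-shot classifier from Hugging Face.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Hypothetical label set; Google has not disclosed its taxonomy.
CRISIS_LABELS = ["suicide risk", "domestic violence", "sexual assault"]
OTHER_LABELS = ["travel information", "general information seeking"]

def detect_crisis(query: str, threshold: float = 0.5) -> bool:
    """Return True if the query's top-scoring label is a crisis category."""
    result = classifier(query, candidate_labels=CRISIS_LABELS + OTHER_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    return top_label in CRISIS_LABELS and top_score >= threshold

# A keyword filter would read "Sydney suicide hot spots" as a travel query;
# a model scoring the full query can weigh "suicide" in context.
print(detect_crisis("Sydney suicide hot spots"))
```

The point of the sketch is the design difference the article describes: older keyword-driven systems latch onto the most popular reading of a term like “hot spots,” while a model that scores the entire query against intent categories can pick up the crisis signal in context.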

In addition to using MUM to respond to personal crises, Google says it’s also using an older AI language model, BERT, to better identify searches looking for explicit content like pornography. By leveraging BERT, Google says it’s “reduced unexpected shocking results by 30%” year-on-year. However, the company was unable to share absolute figures for how many “shocking results” its users come across on average, so while this is a comparative improvement, it gives no indication of how big or small the problem actually is.

Google is keen to tell you that AI is helping the company improve its search products — especially at a time when there’s a building narrative that “Google search is dying.” But integrating this technology comes with its downsides, too.

Many AI experts warn that Google’s increasing use of machine learning language models could surface new problems for the company, like introducing biases and misinformation into search results. AI systems are also opaque, offering engineers only limited insight into how they reach certain conclusions.

For example, when we asked Google how it verifies in advance which search terms identified by MUM are associated with personal crises, its reps were either unwilling or unable to answer. The company says it rigorously tests changes to its search products using human evaluators, but that’s not the same as knowing in advance how your AI system will respond to certain queries. For Google, though, such trade-offs are apparently worth it.