Google is taking sign-ups for Relate, a voice assistant that recognizes impaired speech

It builds on Google’s Project Euphonia

Google launched a new app called Relate, a voice assistant that recognizes impaired speech.
Image: Google

Google launched a beta app today that people with speech impairments can use as a voice assistant while contributing to a multiyear research effort to improve Google’s speech recognition. The goal is to make Google Assistant, as well as other features that rely on speech-to-text and speech-to-speech technology, more inclusive of users with neurological conditions that affect their speech.

The new app is called Project Relate, and volunteers can sign up at g.co/ProjectRelate. To be eligible to participate, volunteers need to be 18 or older and “have difficulty being understood by others.” They’ll also need a Google account and an Android phone running OS 8 or later. For now, the app is only available to English speakers in the US, Canada, Australia, and New Zealand. Volunteers will be asked to record 500 phrases, which should take 30 to 90 minutes.

Volunteers will get access to three new features on the Relate app

After sharing their voice samples, volunteers will get access to three new features on the Relate app. It can transcribe their speech in real time. It also has a feature called “Repeat” that restates what the user said in “a clear, synthesized voice,” which can help people with speech impairments during conversations or when issuing voice commands to home assistant devices. The Relate app also connects to Google Assistant, letting users turn on the lights or play a song with their voices.

Without enough training data, other Google apps like Translate and Assistant haven’t been very accessible to people with conditions like ALS, traumatic brain injury (TBI), or Parkinson’s disease. In 2019, Google started Project Euphonia, a broad effort to improve its AI algorithms by collecting data from people with impaired speech. Google is also training its algorithms to recognize sounds and gestures so that it can better serve people who cannot speak. That work is still ongoing; Google and its partners appear to still be collecting patients’ voices separately for Project Euphonia.

“I’m used to the look on people’s faces when they can’t understand what I’ve said,” Aubrie Lee, a brand manager at Google whose speech is affected by muscular dystrophy, said in a blog post today. “Project Relate can make the difference between a look of confusion and a friendly laugh of recognition.”