
Alexa is implementing self-learning techniques to better understand users


Alexa will understand you meant ‘Play “Nice for What”’ when you say ‘Play “Good for What”’


Photo by Dan Seifert / The Verge

In a developer blog post published today, Alexa AI director of applied science Ruhi Sarikaya detailed the advances in machine learning technologies that have allowed Alexa to better understand users through contextual clues. According to Sarikaya, these improvements have played a role in reducing user friction and making Alexa more conversational.

Amazon has been working since this fall on self-learning techniques that teach Alexa to automatically recover from its own errors. The system, which had been in beta, launched in the US this week. It requires no human annotation; according to Sarikaya, it uses customers’ “implicit or explicit contextual signals to detect unsatisfactory interactions or failures of understanding.” Those signals range from a customer’s historical activity, preferences, and skill usage to where the Alexa device sits in the home and what kind of device it is.

Image: Amazon
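As a rough illustration of what an implicit failure signal might look like, consider a user who cuts off Alexa’s response and then rephrases the same request moments later. The Python sketch below is a hypothetical heuristic; the `Turn` structure, the time window, and the word-overlap similarity measure are all assumptions for illustration, not Amazon’s published method.

```python
# A minimal sketch of one implicit "unsatisfactory interaction" signal:
# the user interrupts Alexa's response and rephrases shortly afterwards.
# All names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Turn:
    utterance: str      # what the user said
    timestamp: float    # seconds since the session started
    interrupted: bool   # user cut the response off (e.g., "Alexa, stop")

def looks_unsatisfactory(prev: Turn, curr: Turn, window_s: float = 30.0) -> bool:
    """Flag a turn pair as a likely failure: the user interrupted the
    first response and issued a similar request soon afterwards."""
    quick_retry = (curr.timestamp - prev.timestamp) <= window_s
    # Crude similarity proxy: the two utterances share most of their words.
    a = set(prev.utterance.lower().split())
    b = set(curr.utterance.lower().split())
    overlap = len(a & b) / max(len(a | b), 1)
    return prev.interrupted and quick_retry and overlap > 0.5

turns = [
    Turn("play good for what", 0.0, interrupted=True),
    Turn("play nice for what", 12.0, interrupted=False),
]
if looks_unsatisfactory(turns[0], turns[1]):
    print("candidate correction:", turns[0].utterance, "->", turns[1].utterance)
```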

For example, during the beta phase, Alexa learned to recognize the mistaken command “Play ‘Good for What’” and correct it by playing Drake’s song “Nice for What.” This kind of recovery has great potential for reducing user friction; Amazon says the new system is currently applying corrections to music-related requests every day.
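At runtime, a learned correction of this sort could be applied as a query rewrite before the request reaches the music service. The rewrite table and function below are purely illustrative assumptions; Amazon hasn’t published how its rewrites are stored or applied.

```python
# Hypothetical query-rewrite step: learned corrections are applied to the
# normalized utterance before routing. The rewrite table is an assumption
# for illustration, not Amazon's actual mechanism.
LEARNED_REWRITES = {
    "play good for what": "play nice for what",  # learned from the failure above
}

def resolve_request(utterance: str) -> str:
    normalized = utterance.strip().lower()
    return LEARNED_REWRITES.get(normalized, normalized)

print(resolve_request("Play Good for What"))  # -> "play nice for what"
```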

There’s also name-free skill interaction, which guides customers toward Alexa skills through a more natural process. You can say, “Alexa, get me a car,” and the voice assistant will understand the command without making you specify the name of your ride-sharing service. Name-free interaction expanded today beyond the US to the UK, Canada, Australia, India, Germany, and Japan.

Name-free interaction for smart home skills is also rolling out in the US today. Customers can simplify commands to “Alexa, start cleaning,” whereas previously they’d have to remember and name the skill: “Alexa, ask Roomba to start cleaning.”
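Under the hood, name-free interaction implies some ranker that maps an utterance to the most likely skill. Here is a minimal Python sketch of that idea, with a keyword heuristic standing in for Amazon’s learned model; the skill names and scoring are assumptions.

```python
# Minimal sketch of name-free skill routing: score candidate skills for an
# utterance and route to the best match. The keyword heuristic stands in
# for a learned ranker; the skill names are hypothetical.
def rank_skills(utterance, skills):
    keywords = {
        "ride_sharing": {"car", "ride", "taxi"},
        "vacuum": {"clean", "cleaning", "vacuum"},
    }
    words = set(utterance.lower().split())
    return sorted(
        ((skill, len(words & keywords.get(skill, set()))) for skill in skills),
        key=lambda pair: pair[1],   # sort by overlap score
        reverse=True,
    )

best_skill, score = rank_skills("alexa get me a car", ["ride_sharing", "vacuum"])[0]
if score > 0:
    print("routing to:", best_skill)  # no invocation name ("ask Roomba...") needed
```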

Finally, there are improved context carryover features that let Alexa track references across multiple turns of a conversation. Sarikaya writes:

For example, if a customer says “What’s the weather in Seattle?” and, after Alexa’s response, says “How about Boston?”, Alexa infers that the customer is asking about the weather in Boston. If, after Alexa’s response about the weather in Boston, the customer asks, “Any good restaurants there?”, Alexa infers that the customer is asking about restaurants in Boston.
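A toy version of this slot-carryover behavior can be sketched in a few lines of Python: when a new turn doesn’t name a location, the dialogue state supplies the last one mentioned. The state shape and rules below are illustrative assumptions; Amazon’s carryover models are learned, not hand-written.

```python
# Toy slot carryover: a new turn inherits the location slot from earlier
# turns unless it explicitly names one. Rules and state shape are
# assumptions for illustration only.
def interpret(utterance: str, state: dict) -> dict:
    words = utterance.rstrip("?").split()
    known_cities = {"Seattle", "Boston"}
    city = next((w for w in words if w in known_cities), None)
    if city:
        state["location"] = city   # an explicit mention updates the state
    intent = "restaurants" if "restaurants" in utterance else "weather"
    return {"intent": intent, "location": state.get("location")}

state: dict = {}
for turn in ["What's the weather in Seattle?",
             "How about Boston?",
             "Any good restaurants there?"]:
    print(turn, "->", interpret(turn, state))
# -> weather/Seattle, then weather/Boston, then restaurants/Boston
```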

Combined with Follow-Up Mode, which distinguishes your follow-up requests from background noise, these features let you hold more natural conversations with Alexa without repeating the “Alexa” wake word or carefully wording every command. Context carryover and Follow-Up Mode are expanding beyond the US to Canada, the UK, Australia, New Zealand, India, and Germany today.
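To make the Follow-Up Mode behavior concrete, here is a hypothetical sketch: after a response, the device keeps listening for a short window and acts only on speech that a classifier scores as directed at the assistant. The window length, threshold, and classifier are placeholders; Amazon hasn’t published these internals.

```python
# Hypothetical Follow-Up Mode loop: within a short post-response window,
# act only on utterances a (placeholder) classifier deems device-directed.
# Window, threshold, and scoring are assumptions, not Amazon's values.
def directedness_score(text):
    """Stand-in for a learned device-directedness classifier."""
    return 0.9 if text.startswith(("play", "what", "how", "any")) else 0.2

def handle_follow_ups(events, window_s=5.0, threshold=0.8):
    for seconds_after_response, text in events:
        if seconds_after_response > window_s:
            break                                  # window closed; wake word required again
        if directedness_score(text) >= threshold:
            print("handled without wake word:", text)
        else:
            print("ignored as background noise:", text)

handle_follow_ups([
    (1.2, "what about tomorrow"),     # treated as a follow-up request
    (2.5, "honey, dinner's ready"),   # background speech, ignored
])
```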