Amazon is slowly turning Alexa into a totally automated virtual assistant

A helper that increasingly doesn’t need your help

Alexa’s getting better at doing its thing without your input. Image: Amazon

Amazon is making a bunch of changes to the Alexa user experience, all with the same idea in mind: making the virtual assistant easier to use. The most notable is a change in how Alexa handles Routines, which developers can now create and recommend to users rather than requiring people to build their own automations manually. Alexa’s also starting to coexist with other manufacturers’ assistants, and Amazon is working to make sure that the most important commands — like “Stop!” — work no matter what wake word you’re using.

Amazon made these announcements during its Alexa Live developer event, where the company revealed a slew of other new Alexa features mostly geared toward developers. They can add shopping to their skills, more easily support Matter and other smart home systems, plug into a simpler setup flow, and build skills that understand more about their surroundings.

But Amazon knows that none of Alexa’s flashy new features matter much if you can’t find them or figure out how to use them. And rather than build new UIs or clever voice menus, the Alexa team is increasingly leaning into just making the system do the work for you. “We want to make automation and proactivity available to everybody that interacts with Alexa and the devices that are connected to Alexa because it’s just so delightful,” says Aaron Rubenson, a VP on the Alexa team. 

The change to Routines is the most obvious example among the new announcements. Users can still configure their own routines — “when I say I’m leaving, make sure the stove is off and turn off all the lights,” that sort of thing — but now developers can build routines into their skills and offer them to users based on their activity. “So as one example,” Rubenson says, “Jaguar Land Rover is using the Alexa Routines Kit to make a routine they call ‘Goodnight,’ which will make sure the car is locked, remind customers about the charge level or fuel level, and then also turn on Guardian mode.” It’s the sort of thing a lot of people might enjoy but few would do the work to create for themselves; now they only have to turn it on.

Rubenson says that people who use Routines are some of the stickiest and most consistent Alexa users and that he wants those people to continue to have the knobs they need to build their weirdest and wildest automations. “But we also recognize that not everybody will take that step,” he says. As Alexa continues to struggle to keep users engaged, adding some proactivity to routines could make them more useful to more people.

Voice assistants are a tricky UI problem, so Amazon’s just trying to make them automatic

Voice assistants have always presented a tricky UI problem since they don’t offer a series of buttons or icons and instead are just a blank slate you can talk to or shout at. Over time, the Alexa team has chipped away at that friction by essentially trying to make it impossible to say the wrong thing. That’s part of the thinking behind its multi-assistant support, which lets developers put their own virtual helper next to Alexa inside the device. (Amazon’s newest partner is Skullcandy, so you can talk to your headphones either by saying “Alexa” or “Hey Skullcandy.”) 

Along the same lines, Amazon’s also working on a feature called Universal Commands that lets an Alexa-running device do certain critical things no matter which wake word you used. For instance: if you say “Hey Skullcandy, set a timer for 10 minutes,” Skullcandy’s assistant can’t do that, but Alexa can, so Alexa handles it automatically. Rubenson named timers and call rejection as similarly important things that any Alexa-enabled device should be able to handle even if you haven’t been interacting with Alexa. That feature, Rubenson says, is rolling out over the next year.

Developers will have to implement and make use of these features in order for them to catch on, of course. Amazon is trying hard to incentivize them to do so: it’s changing its revenue-sharing agreement so that developers keep 80 percent of their revenue instead of 70 percent, and it’s launching the Skill Developer Accelerator Program, which Rubenson says “will reward developers for taking the actions that we know lead to creation of a high-quality and engaging skill based on all the history we have.” Which is code for: Amazon is paying developers to make better skills.

Amazon is trying hard to incentivize developers to care about its new tech

If Amazon can make this all work, though, it will have taken a step toward solving one of the big problems with voice assistants: it’s hard to figure out what they can do, so most users default to music and lights and timers, which means there’s little reason for developers to invest in the platform, which means there’s nothing new for users to do. By simultaneously making the platform more powerful and making it do more of the work on users’ behalf, Amazon can get that flywheel going in the other direction. And you don’t even have to help.