Gadget makers finally reached their breaking point. After being forced to put what amounted to bad Android tablets in their devices for years, they’re ready to move beyond the screen.
The proliferation of connected devices, especially the Internet of Things, was spurred by access to cheap parts. Anyone can now affordably slap a chip, accelerometer, gyroscope, and 3D-printed shell together to build something smart. But they still face one major challenge: how to give users control of their brand-new thing. Some manufacturers opt for a smartphone app; others build a touchscreen control panel right into their gadget. The touchscreen is easy, affordable, and involves no user learning curve.
Still, the touchscreen presents problems of its own.
Software needs to be continually updated, a task that requires dedicated staff, the ability to push over-the-air updates, and the willingness of users to patch. The apps on those screens also have to be kept current, which requires a development team. And perhaps most challenging: everyone already keeps their best screen in their pocket. Our phones are powerful and consistently updated — why would we want to deal with another device?
So with these issues in mind, gadget makers are developing new UI paradigms in an attempt to evolve past screens. They attribute their push for voice and gesture control to users’ disinterest in staring at screens, but it seems their effort also relates back to bad Android integrations not getting any easier to maintain. There’s more at stake, too. The war for touchscreen software control has already been fought and won by Google and Apple, so whoever comes to dominate this next era of interaction could end up with all the money and investment.
We’ve seen hints of this battle before, but the success of Amazon’s Echo and the onslaught of connected devices have emboldened gadget manufacturers to give alternative interactions another go. The question is whether these companies can uncover something people like enough to change their behavior and keep the technology alive. The Echo probably represented a one-time jackpot, not a sign of the tides changing, but gadget companies seem to think the device was representative of things to come. It’s not like they have any other choice.
“The future of interaction is more subliminal and more of an undercurrent,” Gadi Amit, a designer of the Fitbit Force and principal designer at NewDealDesign, tells me. “[It’ll involve a] few carefully thought-through interactions where the computing environment is more in the background than the foreground.”
Nearly every company is thinking about this idea. Apple’s AirPods, for example, pair with updated Apple devices right out of the box with barely any user interaction, and Snap’s Spectacles are activated by a single button, no screen necessary. But perhaps no company has had as much success building hardware around screenless interactions as Amazon with its Dash buttons and Echo home device.
The company’s Dash buttons are placed around users’ homes so they can make online purchases the moment they realize they’re running low on an item. The buttons seamlessly integrate into a home, at least if you find a massive button unassuming, and there’s no learning curve associated with using them. Everyone knows how to push a button. Dash buttons are connected to and powered by cloud infrastructure, but they’re simple in execution. Amazon says they’re selling well, though it hasn’t released numbers to back up that claim.
We can, however, assume Amazon is succeeding with its Echo line of products, including the Dot. The Echo exposed thousands of consumers to home voice control. Users can ask Alexa questions or tell it to turn on appliances, stream music, or set a timer, while a second form of interaction, a glowing light, signals that the Echo is listening.
Amazon expanded its voice software beyond its hardware with the release of its Alexa API in 2015, which allowed gadget companies to put voice controls in everything, including watches, robots, and portable Dot-esque pucks. Seriously, look at this list of Alexa-enabled gadgets from CES. It’s bananas. These companies are betting consumers want to control their devices with their voice.
Mainstream brands, including Google, Apple, Microsoft, and Samsung, also believe voice is going to be essential to future gadget interactions. Google launched Home to directly compete with the Echo, and every company emphasized its voice assistant over the past year. 2017 might be the year for Siri, or Cortana, or even Samsung’s mysterious voice assistant. No one seemed interested in using these assistants in the past, but maybe better and smarter software can do the trick.
Smaller companies are experimenting with UIs beyond voice control as well. They’ll likely never get a share of the market as large as Amazon’s, but they’re hoping to capitalize on the novelty and simplicity of their interaction technology. In some cases, they think the interactions will encourage their users to build a more natural relationship with their gadgets. These recent efforts are rooted in previous technology, all of which failed to gain traction, although that doesn’t seem to be discouraging anyone.
Jake Boshernitzan, a co-founder of Knocki, envisions smart buttons, like Dash, becoming a key part of future interactions because they’re accessible and easy for anyone to use. Boshernitzan’s Knocki converts ordinary surfaces into interactive devices, much as the Clapper relied on sound to register commands. “We went with taps because it activates people’s entire environment,” he says. Knocki sticks beneath or on top of a table, countertop, or wall and registers knocks as commands. It works with IFTTT, so it recognizes distinct patterns as controls for specific devices. Essentially, functionality that would have required a touchscreen can be added to a room and never consciously thought about again. These interactions become part of day-to-day life.
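The core idea, mapping distinct tap rhythms to distinct commands, is simple enough to sketch in a few lines. This is a hypothetical toy matcher, not Knocki’s actual implementation: it compares the gaps between taps, normalized so overall tempo doesn’t matter, against a handful of registered patterns.

```python
# Toy sketch of rhythm-based tap recognition (hypothetical; not Knocki's
# actual implementation). A gesture is the sequence of time gaps between
# taps, normalized so the same rhythm works fast or slow.

def normalize(gaps):
    """Scale a list of inter-tap gaps so they sum to 1.0."""
    total = sum(gaps)
    return [g / total for g in gaps]

def match(tap_times, patterns, tolerance=0.1):
    """Return the registered pattern closest to the observed taps,
    or None if nothing falls within tolerance."""
    gaps = [b - a for a, b in zip(tap_times, tap_times[1:])]
    if not gaps:
        return None
    observed = normalize(gaps)
    best_name, best_err = None, tolerance
    for name, pattern in patterns.items():
        if len(pattern) != len(observed):
            continue  # different number of taps can't match
        target = normalize(pattern)
        err = max(abs(o - t) for o, t in zip(observed, target))
        if err < best_err:
            best_name, best_err = name, err
    return best_name

# Two registered "knock commands", defined by their gap profiles.
commands = {
    "lights_on": [0.3, 0.3],  # three evenly spaced taps
    "coffee":    [0.2, 0.6],  # short gap, then a long one
}

print(match([0.0, 0.31, 0.60], commands))  # evenly spaced taps -> "lights_on"
```

A real device would add debouncing, vibration filtering, and per-surface calibration, but the pattern-to-command mapping above is the gist of what a service like IFTTT would then route to a specific gadget.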
That said, the Knocki’s predecessor, the Clapper, is best remembered for the silly infomercial we grew up watching. It still exists for $18 on Amazon, seemingly with the same packaging, but no one is rushing to buy a Clapper.
Where I expect to see the most growth this year is gesture-control technology. Take two smart mirrors I demoed at CES as examples: the HiMirror Plus and the Ekko. Both mirrors rely on gesture controls for the obvious reason that no one wants to muck up their clean, expensive mirror. We’ve also seen gesture technology integrated into cars’ heads-up displays, like Navdy, where looking down at a smartphone might actually be illegal.
Sang Won Lee, CEO at Qeexo, a gesture technology company, believes in gesture technology because he thinks it engages a user more thoroughly. It drives a more compelling experience, he says, which creates a better product. In some cases, gesture tech is used alongside a screen but without the conventional touch form of interaction, so in these instances, hardware makers aren’t trying to get rid of the screen so much as make it easier to handle.
Gadget makers are certainly pushing hard to get users to think beyond the touchscreen, but so far, most of their efforts feel more gimmicky than life-changing. In my experience, gesture technology in particular feels unfinished. I often have to gesture more than once before a command registers, and it always feels unnatural to wave my hand to switch screens or modes. That’ll presumably become normal for my body as I do it more, just like swiping did, but it still feels bizarre. Previous attempts at gesture-controlled devices flopped, too. Look at gaming hardware like Microsoft’s Kinect and Nintendo’s Wii, both groundbreaking and fun when they debuted, to see what happened to alternative UIs even in gadgets where they actually made sense.
Although the Echo has shown voice can sell, that doesn’t mean it’ll work for every device. The Echo succeeds because it fits into the home environment, a place where people want silence, until they don’t. Alexa serves as an obedient assistant that genuinely makes controlling music and smart lights easier. That use case is harder to prove for other gadgets, like a watch. I don’t need to talk to my watch the way I talk to my Echo. I also wear my watch out around the city, so although it might be easier to dictate a text to my watch, the conditions aren’t always stable or quiet enough to warrant voice controls.
But with that in mind, maybe other companies can succeed in places outside the home. Voice or gesture control could work in cars or other hyper-specific scenarios, at least until we have self-driving cars. To usurp the phone and its touchscreen, companies have to prove their interactions are simpler and their use cases are more compelling.