
Apple still hasn't fixed Siri's biggest problem


Calling everything ‘Siri’ doesn’t solve bad voice recognition


If you were listening to Apple’s keynote address at the Worldwide Developers Conference this week, you might have walked away thinking that this is the year (and the version of iOS) that Siri finally feels like a capable digital assistant.

Apple announced it was adding a new visual interface to its digital assistant, the ability to handle follow-up questions, and even language translation. Siri is also getting a new voice completely generated by machine learning algorithms. SiriKit, the development tool that lets third-party companies integrate with Siri, is becoming more robust, too. You’ll be able to type to Siri in iOS 11, and, of course, the assistant underpins Apple’s newest product, its HomePod speaker.

What Apple failed to mention was whether it’s done anything to fix Siri’s biggest problem — understanding your spoken requests.

Voice recognition is still a major hurdle with Siri

Voice recognition has been the biggest drag on Siri since the assistant’s introduction in 2011. Too often, Siri whiffs when it tries to interpret your commands. And even when it gets that part right, there’s a good chance Siri only gets you halfway to an answer before it crashes headlong into its own limitations. If that’s still going to be the case, more results won’t be enough to make it useful. Neither will a smoother voice.

Google’s Assistant and Amazon’s Alexa aren’t perfect, but they consistently handle more advanced requests better than Siri. This is due, in part, to Apple lagging behind in building out its artificial intelligence efforts. Apple’s penchant for secrecy has reportedly scared away some of the best minds in AI, who fear there’s no chance for recognition in Cupertino.

If Apple does catch up on AI, this WWDC might be remembered as the moment it turned the corner. There’s reason to believe the company is finally serious about AI thanks to Core ML, and what developers take away from this conference could help shape its future efforts. But Apple still didn’t bother to mention what it’s doing to fix Siri’s fatal flaw.

What Apple did instead was signal how it might try to get around those shortcomings: by changing what it considers Siri to be in the first place.

“In iOS 11, Siri does so much more in learning how you’re using your device,” Apple’s software SVP Craig Federighi said onstage, before explaining with a demo. Federighi opened the Apple News app after reading about Iceland in Safari, and Siri surfaced stories about the country that he might like to read. When he jumped to iMessage for a mock conversation about traveling there, Siri suggested new Icelandic vocabulary that it had pulled from those articles.

It didn’t stop there. Federighi briefly flashed a slide that teased some things Apple wants developers to focus on with SiriKit, full of terms like “photo search,” “workouts,” and “QR codes.” Another short demo showed Siri populating a notification about an upcoming flight. The new scrolling set of cards on watchOS 4 that splices photo memories in between the weather, traffic, and the day’s scheduled events? Siri.

These sound like useful features, but they’re far from what we’ve considered Siri to be since Apple bought the company behind it in 2010: a digital assistant we interact with directly. Many of them have nothing to do with talking (or typing) to Siri. In fact, Siri wasn’t spoken to once during the entire keynote.

Siri wasn’t spoken to at all during the event

Apple has been slowly building to this for a while, starting with things like “Siri app suggestions,” a row of icons in the Spotlight screen that changes based on which app your phone thinks you want to use. But now the company appears ready to apply that thinking broadly, using the Siri brand as a catchall for machine learning tricks that are starting to become standard on smartphones.

Some of the HomePod commands, like asking Siri to identify the drummer in a song, seem out of the digital assistant’s current reach.

If you buy Apple’s marketing, this is a major evolution of Siri. Really, though, it helps mitigate the risk associated with Siri’s biggest shortcoming, because it gives you all new ways of obliquely interacting with the assistant that don’t require voice recognition. But even if it alleviates that pressure, the move opens up a new attack vector on Siri, because none of the new features Apple showed off seem to surpass what competitors like Google, Amazon, or Microsoft are already doing with their own AI efforts.

So where does that leave what we once thought of as Siri? In August of last year, Apple told the world that Siri was finally ready to strut its stuff. Two months later, though, die-hard users were still finding the experience disappointing. These next few months leading up to (and beyond) the next iPhone release feel like another one of those moments of truth. We’ll either have more reasons to be frustrated by Siri, or things will finally turn around.

Apple obviously feels confident enough in its digital assistant to put it front and center on every device; it even went so far as to build a whole piece of hardware around it with the HomePod. When iOS 11 updates roll out to a billion devices, and the HomePod starts arriving in people’s living rooms, we’ll see if Siri has managed to catch up to the competition, or if it continues to leave users with the impression that Apple doesn’t have a handle on today’s cutting-edge AI.

Ben Popper contributed to this piece.