Of all the big companies that build voice control and interaction into their software, Google is perhaps the most low-key about it. Unlike Apple's Siri and Microsoft's Cortana, Google doesn't personify its Now voice assistant. And yet, Google might have the best voice recognition algorithms of them all: it can recognize even mumbled input, and it does so with almost no processing delay. And now it's getting even better.
In a new post on the Google Research Blog, members of the Google Speech Team set out the latest developments in the company's voice search algorithms. Google had already been employing deep neural networks — the same stuff responsible for those freaky distorted pictures — to compute the most likely thing you're trying to say to your phone, but now it has evolved its approach and started using recurrent neural networks. The new voice modeling allows Google to account for temporal dependencies, which is to say that it's now better at analyzing every snippet of audio by referring to the sounds on either side of it. The upshot for users is an even faster, more accurate, and more efficient voice search experience. The company even claims it's more robust in noisy environments. The Google search app for iOS and Android is already using the new, improved voice input, which is also present when dictating stuff into Android.
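What "temporal dependencies" means in practice can be shown with a toy sketch. The code below is a minimal, made-up recurrent unit in pure Python (invented scalar weights, a plain tanh cell), not Google's actual LSTM acoustic model: the point is only that a hidden state carried from frame to frame makes the network interpret the same audio snippet differently depending on what came before it.

```python
import math

def rnn_step(x, h, w_x=0.8, w_h=0.5, b=0.0):
    # One recurrent step: the new hidden state mixes the current audio
    # frame (x) with the previous hidden state (h), so earlier sounds
    # influence how the current snippet is interpreted.
    # w_x, w_h, b are arbitrary illustrative weights, not trained values.
    return math.tanh(w_x * x + w_h * h + b)

def run_rnn(frames):
    # Feed a sequence of (toy, scalar) audio frames through the cell,
    # threading the hidden state forward through time.
    h = 0.0
    states = []
    for x in frames:
        h = rnn_step(x, h)
        states.append(h)
    return states

# The final frame is identical (1.0) in both sequences, but the hidden
# states differ because the preceding context differs -- that is the
# temporal dependency a recurrent model captures.
a = run_rnn([0.0, 1.0])
b = run_rnn([1.0, 1.0])
print(a[-1], b[-1])
```

A feed-forward network scoring each frame in isolation would produce identical outputs for that final frame; the recurrence is what lets context disambiguate similar-sounding audio.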