AI is Google’s secret weapon for remaking its oldest and most popular apps


I/O 2018 was all about how AI can make every Google product more useful, including products as old and pervasive as Google Maps and Google News

Google shocked the crowd at its I/O developer conference on Tuesday when it kicked off a fascinating discussion about AI ethics with Duplex, a human-like voice system for its Assistant product that makes phone calls on behalf of users. But while Duplex remains an experimental, far-off effort — one we’ll likely be debating in the weeks and months to come — Google’s more measured use of artificial intelligence in its legacy products didn’t garner as many headlines. Yet it’s those subtle AI-powered changes to existing and pervasive products that will have a far more visible impact on how we use software to interact with the world in the near future.

Take, for instance, the ways Google is using AI to improve both its Maps and News products, platforms that have been around for 13 and 15 years, respectively. Google executives onstage at I/O on Tuesday introduced a suite of changes that will make each more useful, personalized, and social, all thanks to self-learning algorithms that are now better at digesting and surfacing information than humans are.

Google Maps and Google News are both getting AI-powered revamps

Thanks to these advances in AI, Google Maps will soon create Street View-style visual guides for step-by-step directions overlaid onto the real world, as viewed through the smartphone camera. Going one step further, the company plans to integrate its Assistant, equipped with the computer vision platform Google Lens, into Maps. That way, you’ll be able to pan over a city street and see pop-ups highlighting restaurants and other locations in real time. It’s effectively the dream of Google Glass, more fully realized thanks to the smartphone camera rather than a piece of controversial wearable technology.

Google says it’s even developing a new system for geolocating objects in an environment, called the Visual Positioning System, or VPS, that will take into account everything from business storefront displays to street signs to help map out a route with more precision. “GPS alone doesn’t cut it,” Aparna Chennapragada, head of product for Google Lens, explained during the keynote. “VPS uses the visual features of an environment to figure out exactly where you are and exactly where you need to go.”
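Google hasn’t published how VPS actually works, but the core idea Chennapragada describes, matching the visual features a camera currently sees against stored features for known places, can be sketched in a few lines. This is purely illustrative: the location names and feature labels below are made up, and real systems match image descriptors, not string tags.

```python
# Toy illustration of the idea behind visual positioning (NOT Google's
# actual VPS): match the visual features a camera currently sees against
# stored feature sets for known locations, and pick the best match.
# String feature IDs stand in for real image descriptors.

def jaccard(a: set, b: set) -> float:
    """Overlap between two feature sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def locate(observed: set, known_locations: dict) -> str:
    """Return the name of the known location whose stored features best match."""
    return max(known_locations, key=lambda name: jaccard(observed, known_locations[name]))

# Hypothetical feature database: storefront displays, street signs, etc.
db = {
    "5th & Main": {"red_awning", "stop_sign", "cafe_logo", "mailbox"},
    "Oak Plaza":  {"fountain", "bank_logo", "bus_stop", "stop_sign"},
}

print(locate({"cafe_logo", "stop_sign", "mailbox"}, db))  # → 5th & Main
```

The point of the sketch is why “GPS alone doesn’t cut it”: visual features can disambiguate two spots that GPS would place within meters of each other.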

These changes, though they don’t have a concrete release date, mean that Google Maps — already one of the world’s most powerful augmented reality technologies — will contain an even richer representation of the real world. Soon enough, Maps will be able to augment it in more significant fashion by using the platform’s new recommendation engine and “For You” section to push people to visit new places, eat at new restaurants, and try new experiences. AI even underpins Google Maps’ new “match score,” which is a kind of dating app-style metric for how well it thinks you’ll like a given location, be it a landmark, museum, or place to eat.
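Google hasn’t disclosed how the “match score” is computed. As a rough intuition for what a percentage-style recommendation metric does, here is a toy version that scores a place by how much of a user’s weighted tastes its attributes satisfy; the tags and weights are invented for illustration.

```python
# Toy "match score" (illustrative only; Google has not published its method):
# score a place by how much of a user's weighted tastes its tags satisfy.

def match_score(user_tastes: dict, place_tags: set) -> int:
    """Percentage of the user's total taste weight that the place matches."""
    total = sum(user_tastes.values())
    matched = sum(w for tag, w in user_tastes.items() if tag in place_tags)
    return round(100 * matched / total) if total else 0

tastes = {"italian": 3, "outdoor_seating": 2, "cheap_eats": 1}
print(match_score(tastes, {"italian", "outdoor_seating"}))  # → 83
```

A real recommendation engine would infer those weights from behavior rather than take them as input, but the output, a single “how well will you like this place” percentage, is the same shape.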

Google News, while less pervasive than Maps, received its own AI-focused overhaul at Tuesday’s keynote. The changes could have a measurable effect on Google users’ ability to make sense of conflicting information, discern real news from fake, and stay informed.

Google News now uses a machine learning approach to “take a constant flow of information as it hits the web, analyze it in real time and organize it into storylines,” explained Trystan Upstill, the product lead for Google News, in a blog post published Tuesday. “This approach means Google News understands the people, places and things involved in a story as it evolves, and connects how they relate to one another.”
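Upstill’s description, grouping a stream of articles into storylines via the entities they share, can be sketched with a greedy toy clusterer. This is a stand-in for illustration only, not Google’s pipeline, and the headlines and entity sets below are invented.

```python
# Toy sketch of organizing an article stream into storylines by shared
# entities (people, places, things). Illustrative only; not Google's
# actual system, which uses far richer ML-based entity understanding.

def assign_storylines(articles, threshold=2):
    """Greedily group articles: join the first storyline sharing >= threshold entities."""
    storylines = []  # each storyline: {"entities": set, "articles": [titles]}
    for title, entities in articles:
        for story in storylines:
            if len(story["entities"] & entities) >= threshold:
                story["articles"].append(title)
                story["entities"] |= entities  # storyline grows as it evolves
                break
        else:
            storylines.append({"entities": set(entities), "articles": [title]})
    return storylines

stream = [
    ("Hurricane knocks out power in Puerto Rico", {"Puerto Rico", "power grid", "hurricane"}),
    ("Crews race to restore Puerto Rico's power grid", {"Puerto Rico", "power grid", "repairs"}),
    ("New phone launches at trade show", {"smartphone", "trade show"}),
]
for story in assign_storylines(stream):
    print(story["articles"])
```

The first two items share enough entities to land in one storyline while the third starts its own, which is the “connects how they relate to one another” behavior in miniature.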

This manifests most visibly in Google News’ new Full Coverage feature. For big stories — Google used the Puerto Rico power outage as an example — Google News will digest all of the web’s information into a non-personalized, algorithmically generated feed: a timeline of events as the story broke and developed, opinion and analysis pieces, tweets, and YouTube videos, among other bits of news and raw information.

Google hopes this will be an unbiased way to approach complex stories, by relying on AI to capture the full scope of a story and present readers with as much information as possible. “If you want to get a deeper insight into a story, the ‘Full Coverage’ feature provides a complete picture of how that story is reported from a variety of sources,” Upstill explains. “With just a tap you’ll see top headlines from different sources, videos, local news reports, FAQs, social commentary, and a timeline for stories that have played out over time.”

The company is also debuting a new media format it calls Newscasts, which will “bring together a collection of articles, videos and quotes on a single topic” using natural language processing techniques that help software understand human speech and text. The goal is to give readers more perspectives on a story, on the premise that most news consumers are fed a diet of self-affirming news or information that lacks the context needed to fully understand an issue or event. This is pressing for Google: fake news runs rampant not just on Facebook and Twitter, but on YouTube as well, especially in the aftermath of fast-moving breaking news events. Almost every Silicon Valley company thinks AI can help with this problem, but Google’s approach with News is one of the few concrete solutions we’ve seen beyond standard algorithmic moderation.

These changes to Maps and News aren’t getting the same amount of attention as Duplex or even the more general advancements made to Google Lens, the company’s computer vision platform for recognizing objects and text. That makes sense. Both Duplex and Lens are more transparent, cutting-edge AI projects that really push the envelope on how well software can make sense of visuals and text and even emulate human vision and language, to a frightening degree in the minds of many technology critics.

But it’s the AI advancements made to products like Google Maps and Google News that users will feel more immediately in their day-to-day lives, and that they’ll use in novel ways to get around, explore new places, and digest information on the internet. Those changes may not prompt the public to question the dystopian nature of our prospective AI future.

But it certainly does make the case that Google knows how to use this world-changing technology in productive, meaningful ways to yield tangible benefits for the public. Google is already arguably the best platform for helping people find information on the internet and get around in the real world. It’s clear now that a little bit of AI can go a long way in helping us achieve both those goals in new, more remarkable ways.