Google Maps review moderation detailed as Yelp reports thousands of violations

With machine learning as a first line of defense

Google explains how it keeps user-created reviews on Google Maps free of fraud and abuse in a new blog post and accompanying video. Like many platforms dealing with moderation at scale, Google says it uses a mix of automated machine learning systems and human operators.

The details come amid growing scrutiny of user reviews on sites like Google Maps and Yelp, where businesses have been hit with bad reviews for implementing COVID-related health and safety measures (including mask and vaccine requirements) that are often beyond their control. Other reviews have criticized businesses for supposedly leading customers to contract COVID-19, or for not keeping usual business hours during a global pandemic.

Earlier today, Yelp reported that it removed over 15,500 reviews between April and December last year for violating its COVID-19 content guidelines, a 161 percent increase over the same period in 2020. In total, Yelp says it removed over 70,200 reviews across nearly 1,300 pages in 2021, with many resulting from so-called “review bombing” incidents, in which coordinated reviews are submitted by users who haven’t actually patronized a business.

“We need both the nuanced understanding that humans offer and the scale that machines provide”

Google explains that every review posted on Google Maps is checked by its machine learning system, which has been trained on the company’s content policies to weed out abusive or misleading reviews. The system checks both the contents of individual reviews and wider patterns — like sudden spikes in one- or five-star reviews — coming from the posting account itself as well as across other reviews of the business.
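Google doesn’t publish the details of these pattern signals, but the basic idea of a rating spike can be illustrated with a toy sketch. The function name, window sizes, and threshold below are all hypothetical, not Google’s actual method:

```python
from collections import Counter

def flag_rating_spike(baseline, recent, threshold=0.35):
    """Flag a business when the share of extreme (one- or five-star)
    ratings in a recent window jumps well above its historical baseline.

    baseline, recent: lists of integer star ratings (1-5).
    threshold: hypothetical minimum jump in extreme-rating share.
    """
    def extreme_share(ratings):
        if not ratings:
            return 0.0
        counts = Counter(ratings)
        return (counts[1] + counts[5]) / len(ratings)

    return extreme_share(recent) - extreme_share(baseline) >= threshold

# A business that usually gets mixed ratings...
history = [3, 4, 2, 4, 5, 3, 4, 3, 2, 4]
# ...suddenly receives a burst of one-star reviews.
bombed = [1, 1, 1, 1, 1, 2, 1]
flag_rating_spike(history, bombed)   # flags the burst as anomalous
flag_rating_spike(history, history)  # normal activity is not flagged
```

A production system would look at many more signals (account history, posting velocity, review text), but the core comparison — recent behavior against an established baseline — is the same.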

Google says that human moderation comes into play for content that’s been flagged by end users and businesses themselves. Offending reviews can be removed, and in more severe cases, user accounts can be suspended and litigation pursued. “We’ve found that we need both the nuanced understanding that humans offer and the scale that machines provide to help us moderate contributed content,” Google’s product lead for user-generated content, Ian Leader, writes.

It’s an interesting look at the steps Google takes to keep Maps reviews usable. You can read more in the full blog post.