Google reportedly asked employees to ‘strike a positive tone’ in research paper

The search giant launched a ‘sensitive topics’ review in June

Illustration by Alex Castro / The Verge

Google has added a layer of scrutiny for research papers on sensitive topics including gender, race, and political ideology. A senior manager also instructed researchers to “strike a positive tone” in a paper this summer. The news was first reported by Reuters.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” the policy read. Three employees told Reuters the rule started in June.

The company has also asked employees to “refrain from casting its technology in a negative light” on multiple occasions, Reuters says.

Employees working on a paper on recommendation AI, which is used to personalize content on platforms like YouTube, were told to “take great care to strike a positive tone,” according to Reuters. The authors then updated the paper to “remove all references to Google products.”

Another paper on using AI to understand foreign languages “softened a reference to how the Google Translate product was making mistakes,” Reuters wrote. The change came in response to a request from reviewers.

Google’s standard review process is meant to ensure researchers don’t inadvertently reveal trade secrets. But the “sensitive topics” review goes beyond that. Employees who want to evaluate Google’s own services for bias are asked to consult with the legal, PR, and policy teams first. Other sensitive topics reportedly include China, the oil industry, location data, religion, and Israel.

The search giant’s publication process has been in the spotlight since the firing of AI ethicist Timnit Gebru in early December. Gebru says she was terminated over an email she sent to the Google Brain Women and Allies listserv, an internal group for Google AI research employees. In it, she spoke about Google managers pushing her to retract a paper on the dangers of large-scale language processing models. Jeff Dean, Google’s head of AI, said she’d submitted it too close to the deadline. But Gebru’s own team pushed back on this assertion, saying the policy was applied “unevenly and discriminatorily.”

Gebru reached out to Google’s PR and policy team in September regarding the paper, according to The Washington Post. She knew the company might take issue with certain aspects of the research, since it uses large language processing models in its search engine. The deadline for making changes to the paper wasn’t until the end of January 2021, giving researchers ample time to respond to any concerns.

A week before Thanksgiving, however, Megan Kacholia, a VP at Google Research, asked Gebru to retract the paper. The following month, Gebru was fired.

Google did not immediately respond to a request for comment from The Verge.