
Google is poisoning its reputation with AI researchers

The firing of top Google AI ethics researchers has created a significant backlash



Google has worked for years to position itself as a responsible steward of AI. Its research lab hires respected academics, publishes groundbreaking papers, and steers the agenda at the field’s biggest conferences. But now its reputation has been badly, perhaps irreversibly damaged, just as the company is struggling to put a politically palatable face on its empire of data.

The company’s decision to fire Timnit Gebru and Margaret Mitchell — two of its top AI ethics researchers, who happened to be examining the downsides of technology integral to Google’s search products — has triggered waves of protest. Academics have registered their discontent in various ways. Two backed out of a Google research workshop, a third turned down a $60,000 grant from the company, and a fourth pledged not to accept its funding in the future. Two engineers quit the company in protest of Gebru’s treatment, and just last week one of Google’s top AI employees, a research manager named Samy Bengio who oversaw hundreds of workers, resigned. (Bengio did not mention the firings in an email announcing his resignation but earlier said he was “stunned” by what happened to Gebru.)

“It worries me that they’ve shown a willingness to suppress science”

“Not only does it make me deeply question the commitment to ethics and diversity inside the company,” Scott Niekum, an assistant professor at the University of Texas at Austin who works on robotics and machine learning, told The Verge. “But it worries me that they’ve shown a willingness to suppress science that doesn’t align with their business interests.”

“It definitely hurts their credibility in the fairness and AI ethics space,” says Deb Raji, a fellow at the Mozilla Foundation who works on AI accountability. “I don’t think the machine learning community has been very open about conflicts of interest due to industry participation in research.”

Niekum and Raji, along with many others inside and outside of Google, were shocked by what happened to Gebru and Mitchell, co-leads of the company’s Ethical AI team. Gebru was fired last December after arguments with managers over a research paper she co-authored with Mitchell and others. (Google disputes this account and says Gebru resigned.) Mitchell was fired in February after searching her email for evidence of discrimination against Gebru. The paper in question examined problems in large-scale AI language models — technology that now underpins Google’s lucrative search business — and the firings have led to protest as well as accusations that the company is suppressing research. After Gebru was ousted in December, a Medium post declaring solidarity with her and criticizing “unprecedented research censorship” by Google was signed by nearly 2,700 employees and more than 4,300 “academic, industry, and civil society supporters.”

It’s likely there will be more protest and more resignations, too. After Bengio left the company, Mitchell tweeted, “Resignations coming now bc people started interviewing soon after we were fired,” and that “job offers are just starting now; more resignations are likely.” When asked for comment on these and other issues highlighted in this piece, Google offered only boilerplate responses.

SHAKEN CONFIDENCE

One of the employees who quit the company in protest earlier this year was David Baker. He started working at Google in 2004, and when he resigned in February he was director of its Trust & Safety Engineering group. He tells The Verge that Google’s treatment of Gebru (he left before Mitchell was fired) has seriously shaken his confidence in the company.

“I was just blindsided to see and hear what happened to Timnit,” Baker told The Verge. “It broke my heart.” He adds that he didn’t take the decision to resign lightly: he loved his job and refers to his last couple of years at the company as “the happiest days of my life.” But quitting was the least he could do to stand in solidarity with Gebru, he says. “I spent a couple of weeks thinking and talking with my wife and ultimately decided I just couldn’t bring myself to go back to work.” 

Baker is just one individual who feels let down by Google, but his response shows how the company has damaged its standing even with senior employees. The Trust & Safety team that Baker oversaw works on a range of important safety problems at Google, from tackling spam on Gmail to removing scams from the company’s advertising platform. “We’re behind the scenes on a whole bunch of applications,” as Baker puts it. He adds that although he didn’t work with Gebru or Mitchell personally, members of his team did, learning from them as part of what he calls the “emerging discipline” of AI safety.

“Google’s failure in diversity will lead to blindspots in its research”

AI safety will grow ever more important to Google as the company integrates machine learning methods ever deeper into its products. Probing the limitations of these systems — not just from a technical perspective but also a social one — was at the heart of Gebru and Mitchell’s work. And while it’s in Google’s interests to find weaknesses in its own technology, it seems the company didn’t want to hear everything its employees had to say.

Baker says that although he was always reassured by Google’s integrity within the Trust & Safety group (“We were very focused on what was right for the user, it was not about what was best for the brand”), the treatment of Gebru has made him doubt whether the company is always able to live up to its best intentions.

“I think it definitely calls into question whether Google can be trusted to honestly question the ethical applications of its technology,” says Baker. “And Google’s failure in diversity will lead to blindspots in its research. The reality is that Google is not a place where folks from all backgrounds can thrive.”

SUPPRESSING SCIENCE?

Researchers and academics The Verge spoke to for this story highlighted two distinct but connected concerns with Google’s behavior.

The first is the treatment of Gebru and Mitchell as individuals and what that says about the company’s commitment to diversity and inclusion as an employer. Google has well-documented problems with hiring and retaining minority talent, and this is another example of its failures. The second touches on broader questions about the trustworthiness of the company’s AI research and whether the company can fairly examine the potential harms of its technology. In the case of Gebru and Mitchell’s work, that means the damage posed by large-scale language models. 

All those interviewed for this story stressed that they didn’t doubt the integrity of individual Google researchers, but were worried that the company’s internal structures — including its review process for papers — were subtly warping their work.

“I trust that things they are publishing are correct, but I don’t trust that they’re not censored,” Hadas Kress-Gazit, a professor of robotics at Cornell who boycotted a Google workshop along with Niekum, told The Verge. “It’ll be the truth but not the whole truth.”

One of the ways Google’s research is shaped to fit corporate interests is through the company’s internal review process. Last December, Reuters reported that Google had created a new level of review for “sensitive topics” in 2020. If researchers are writing about topics like sentiment analysis, facial recognition, or the categorization of gender, race, or politics, they have to consult with Google’s PR team and legal advisors, who will look over their work and suggest changes.

Internal correspondence cited by Reuters includes feedback in which a senior Google manager told a paper’s author to “take great care to strike a positive tone.” Another paper was edited to remove all references to Google’s products, and another to remove mentions of legal risks associated with new research — including risks to users’ personal data.

In a statement to The Verge, Google said: “Our research review process engages a wide range of subject matter experts from across the Research org and Google overall, including social scientists, ethicists, policy and privacy advisors, and human rights specialists, and has helped improve many of our publications and research applications.”

“We’re getting into a serious problem of censorship”

But as Mitchell told Reuters last year (when she was still employed by Google): “If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship.”

Mitchell’s worries are substantiated by the nature of the paper that led to her and Gebru’s departure. Far from offering a controversial or unexpected appraisal, the research gave a comprehensive overview of existing critiques. One marker of this (and of the research’s thoroughness) is that the paper cited 128 previous publications in its original form — more than six times the average for papers published at AI conference NeurIPS.

The paper says that, like many algorithms, AI language models have a tendency to regurgitate “both subtle biases and overtly abusive language patterns” found in training data, and that because of the amount of computing power needed to create these models, they come with environmental costs. These are not controversial observations, and even critiques of the paper have praised its general arguments. One widely shared evaluation of a finished version of the paper by computer scientist Yoav Goldberg notes that it “takes one-sided political views” and is overly focused on questions of scale, but adds: “I also agree with and endorse most of the content. This is important stuff, you should read it.”

This makes Google’s objections to the paper unusual. The company’s head of AI, Jeff Dean, said that the paper “didn’t meet our bar for publication” and “ignored too much relevant research” about how the problems it highlighted might be mitigated. But for many, including employees at Google, these objections rang false. As one researcher at Google Brain Montreal, Nicolas Le Roux, commented on Twitter: “My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review.”

Black and female researchers have made some of the most powerful critiques of AI

Connected to Google’s treatment of the paper itself is the treatment of Gebru as an individual, and what that says about the company’s attitude toward Black and women researchers. “In environments where these people are dismissed, devalued, or discriminated against, their work — these valid critiques of the field — is discredited and dismissed, too,” says Raji. “Minoritized voices have a harder time vocalizing these critiques even though they’re some of the most important contributions to the space.” 

This dynamic is not new. Raji gives the example of a 2018 paper called Gender Shades by researcher Joy Buolamwini — a paper now recognized as a landmark critique of gender and racial bias in facial recognition. “Famously, the paper almost didn’t get presented at conference because it was dismissed as too simple,” says Raji. After it was published, Gender Shades had a huge effect on the industry and society at large. It sparked political debates about the utility of facial recognition, prompting companies like Microsoft to reevaluate the accuracy of their technology, and others, like IBM, to drop it altogether.

In other words: it significantly changed the political landscape and the priorities of big tech firms. This is the power and impact that the right paper at the right time can have, and for many people this explains why Google was so keen to shut down Gebru’s criticism.

As Raji notes, much of this important work is done by groups who are not treated well by tech firms. She says this dynamic — dismissal of the individual leading to dismissal of their work — was at play with Google’s treatment of Gebru. “It was really easy for them to fire her because they didn’t value the work she was doing,” she says.

TURNING AWAY FROM TECH

Despite the anger and sadness articulated by many researchers The Verge spoke to, others were more ambivalent about recent incidents. They said the firings would not affect their willingness to work with Google in the future, and noted that interference in research was the price of working in industry labs. Many said they thought the only lasting solution to this problem was better public funding.

One AI professor at an American university, who has previously received money from Google to fund research and wished to remain anonymous, told The Verge that he could understand why people wanted to protest the company, but said that finding funding in academia would always force researchers to turn to potentially compromising sources.

Industry labs will always be swayed by corporate interests, say many researchers

“I cannot really define a coherent moral or ethical position that says it is okay to accept money from the Department of Defense but not from Google,” said the professor by email. “Put another way: how can you accept (or avert your gaze from) the atrocities that the DoD commits (across the world and also in terms of HR matters involving its own people), but draw the line at the current case with Google?”

Another researcher, who also wished to be anonymous, noted that working in corporate labs would always come with trade-offs between academic freedom and other perks. They said that Google was not alone in treating research staff callously and pointed to Microsoft’s sudden decision in 2014 to shut down an entire Silicon Valley lab, firing more than 50 leading computer scientists with little warning.

By some measures, though, Google is a special case and wields outsize influence in the field of AI in a way that other companies have not in the past. Firstly, Google has in abundance the two resources that have powered AI’s ascendance in recent years: computing power and data. Secondly, the company has stated time and time again that AI is crucial to its future profitability. This means it’s directly invested in the field in a way that doesn’t compare to its funding of, say, computational neuroscience. It’s this combination of self-interest and technological advantage that gives it the ability and motivation to direct, to some degree, the parameters of academic research.

“They have this massive influence because of the combination of money they’re putting into research, [the] media influence they wield, and their enormous presence in terms of papers published and reviewers in the system,” says Niekum. He adds, though, that this criticism could be applied to other big tech companies just as easily.

Whatever the context of Google’s involvement in AI research, it’s clear that the company has hurt its reputation significantly with its treatment of Gebru and Mitchell. Calculating what effect incidents like these will have on a company in the long run is impossible, but in the short term Google has eroded trust in its AI work and its ability to support minority voices. Accusations of self-censorship will also undermine claims that it can regulate its own technology. If Google can’t be trusted to examine the shortcomings of its own AI tools, does the government need to take a closer look at their workings?

Google needs to do much more to win back many researchers’ trust

All the same, those boycotting Google workshops and refusing its money know that their actions are more symbolic than anything else. “Compared to the number of people who are collaborating with Google and the number of academics who have part time appointments at Google, it’s a drop in the ocean,” says Kress-Gazit. They’re still determined, though, to press the issue in the hope that Google will make amends. Since the firing of Gebru and Mitchell, the company has appointed a new employee, Marian Croak, to oversee its Responsible AI initiatives. It’s also tweaked its review process for papers (but offered no details about what has changed or why). For those angry with the firm, it needs to do much more, including offering real transparency for reviews and apologizing publicly to Gebru.

And for others, it’s too late altogether. Raji, who is close to Gebru, says that as a result of watching how Google treated her friend over the last few months, she’s changed her mind about going to work in industry and decided to pursue a career in academia instead.

“Before this, I had a lot more faith in what could happen with industry research on these AI ethics issues,” she says. “This whole situation shows that within industry there’s a lot of cultural dynamics still at play and you’re still beholden to leadership caring about these issues. As a minority woman, you’re going to be disadvantaged and disrespected in certain ways. And I’m just not ready for that.”

That’s one talented researcher the tech industry has lost. It won’t be the last.