Yesterday at Google’s I/O developer conference, the company outlined ambitious plans for its future built on a foundation of advanced language AI. These systems, said Google CEO Sundar Pichai, will let users find information and organize their lives by having natural conversations with computers. All you need to do is speak, and the machine will answer.
But for many in the AI community, there was a notable absence in this conversation: Google’s response to its own research examining the dangers of such systems.
Language models like Google’s come with a host of challenging risks
In December 2020 and February 2021, Google first fired Timnit Gebru and then Margaret Mitchell, co-leads of its Ethical AI team. The story of their departure is complex but was triggered by a paper the pair co-authored (with researchers outside Google) examining risks associated with the language models Google now presents as key to its future. As the paper and other critiques note, these AI systems are prone to a number of faults, including the generation of abusive and racist language; the encoding of racial and gender bias through speech; and a general inability to sort fact from fiction. For many in the AI world, Google’s firing of Gebru and Mitchell amounted to censorship of their work.
For some viewers, as Pichai promised that Google’s AI models would always be designed with “fairness, accuracy, safety, and privacy” at heart, the disparity between the company’s words and its actions raised questions about its ability to safeguard this technology.
“Google just featured LaMDA, a new large language model, at I/O,” tweeted Meredith Whittaker, an AI fairness researcher and co-founder of the AI Now Institute. “This is an indicator of its strategic importance to the Co. Teams spend months prepping these announcements. Tl;dr this plan was in place when Google fired Timnit + tried to stifle her + research critiquing this approach.”
Gebru herself tweeted, “This is what is called ethics washing” — referring to the tech industry’s tendency to trumpet ethical concerns while ignoring findings that hinder companies’ ability to make a profit.
Speaking to The Verge, Emily Bender, a professor at the University of Washington who co-authored the paper with Gebru and Mitchell, said Google’s presentation didn’t in any way assuage her concerns about the company’s ability to make such technology safe.
“From the blog post [discussing LaMDA] and given the history, I do not have confidence that Google is actually being careful about any of the risks we raised in the paper,” said Bender. “For one thing, they fired two of the authors of that paper, nominally over the paper. If the issues we raise were ones they were facing head on, then they deliberately deprived themselves of highly relevant expertise towards that task.”
Google needs to be clearer about how it’s tackling these dangers
In its blog post on LaMDA, Google highlights a number of these issues and stresses that its work needs more development. “Language might be one of humanity’s greatest tools, but like all tools it can be misused,” write senior research director Zoubin Ghahramani and product management VP Eli Collins. “Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information.”
But Bender says the company is obfuscating the problems and needs to be clearer about how it’s tackling them. For example, she notes that Google refers to vetting the language used to train models like LaMDA but doesn’t give any detail about what this process looks like. “I’d very much like to know about the vetting process (or lack thereof),” says Bender.
It was only after the presentation that Google made any reference to its AI ethics unit at all, in a CNET interview with Google AI chief Jeff Dean. Dean acknowledged that Google had suffered a real “reputational hit” from the firings — something The Verge has previously reported — but said the company had to “move past” these events. “We are not shy of criticism of our own products,” Dean told CNET. “As long as it’s done with a lens towards facts and appropriate treatment of the broad set of work we’re doing in this space, but also to address some of these issues.”
For critics of the company, though, the conversation needs to be much more open than this.