Google has placed one of its engineers on paid administrative leave for allegedly breaking its confidentiality policies after he grew concerned that an AI chatbot system had achieved sentience, the Washington Post reports. The engineer, Blake Lemoine, works for Google’s Responsible AI organization, and was testing whether its LaMDA model generates discriminatory language or hate speech.
The engineer’s concerns reportedly grew out of convincing responses he saw the AI system generating about its rights and the ethics of robotics. In April he shared a document with executives titled “Is LaMDA Sentient?” containing a transcript of his conversations with the AI (after being placed on leave, Lemoine published the transcript via his Medium account), which he says shows it arguing “that it is sentient because it has feelings, emotions and subjective experience.”
Google believes Lemoine’s actions relating to his work on LaMDA violated its confidentiality policies, The Washington Post and The Guardian report. He reportedly invited a lawyer to represent the AI system and spoke to a representative from the House Judiciary Committee about what he claims were unethical activities at Google. In a June 6th Medium post, published the day he was placed on administrative leave, Lemoine said he sought “a minimal amount of outside consultation to help guide me in my investigations” and that the list of people he had held discussions with included US government employees.
The search giant publicly announced LaMDA at Google I/O last year, and hopes the model will improve its conversational AI assistants and make for more natural conversations. The company already uses similar language model technology for Gmail’s Smart Compose feature and for search engine queries.
In a statement given to WaPo, a spokesperson from Google said that there is “no evidence” that LaMDA is sentient. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” said spokesperson Brian Gabriel.
“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel said. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Gabriel said.
A linguistics professor interviewed by WaPo agreed that it’s incorrect to equate convincing written responses with sentience. “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said University of Washington professor Emily M. Bender.
Timnit Gebru, a prominent AI ethicist Google fired in 2020 (though the search giant claims she resigned), said the discussion over AI sentience risks “derailing” more important ethical conversations surrounding the use of artificial intelligence. “Instead of discussing the harms of these companies, the sexism, racism, AI colonialism, centralization of power, white man’s burden (building the good “AGI” [artificial general intelligence] to save us while what they do is exploit), spent the whole weekend discussing sentience,” she tweeted. “Derailing mission accomplished.”
In spite of his concerns, Lemoine said he intends to continue working on AI in the future. “My intention is to stay in AI whether Google keeps me on or not,” he wrote in a tweet.
Update June 13th, 6:30AM ET: Updated with additional statement from Google.