According to CNBC, Googlers are currently testing an AI chatbot of their own called “Apprentice Bard,” which offers responses to questions posed using natural language just like ChatGPT. Bard is built using Google’s LaMDA technology, which is itself similar to the GPT-series of AI language models that ChatGPT itself relies on. (Google has used LaMDA in the past to power similar chatbot demos at I/O, like its conversation with Pluto.)
One big advantage Bard reportedly has over ChatGPT is its ability to talk about recent events. As OpenAI warns, ChatGPT has “Limited knowledge of world and events after 2021,” but Bard is more up-to-date, even able to answer questions about Google’s recent layoffs.
As per CNBC:
In one example circulated internally, a tester asked Apprentice Bard if there will be another round of layoffs at Google. The company laid off 12,000 employees, 6% of its workforce, earlier this month.
“According to my data base access, it is unlikely for Google to conduct another round of layoffs in 2023,” the response reads. “Layoffs are generally conducted to reduce costs and structure, but the company is doing well financially. In fact, Google’s revenue increased by 34% in 2021, and the company’s stock price has risen by 70% since January 2022.”
CNBC says Google is also testing alternate versions of its home page, with one version replacing the “I’m feeling lucky” button with prompts for potential questions. Perhaps unsurprisingly, this design sounds similar to the homepage for ChatGPT, which lists example questions, capabilities, and limitations above a search / dialog box.
When a question is entered, the results page shows a grey bubble directly under the search bar, offering more human-like responses than typical search results. Directly beneath that, the page suggests several follow-up questions related to the first one. Under that, it shows typical search results, including links and headlines.
Of course, these are all just early reports right now, and there’s no clear idea yet of what form Google’s response to ChatGPT will take. In addition to UI decisions, there are also big questions about the suitability of AI language models for search at all. Google itself outlined some of the problems in a paper published back in 2021, including the tendency of these systems to replicate societal biases and prejudices, and the frequency with which they “hallucinate” data — presenting false information as truth.
Still, with the company having declared a “code red” in response to the appearance of ChatGPT, such niceties as “factuality” may be discarded in the rush to catch up with the competition.