Facebook’s public image is in such a disastrous state that the company’s public relations team built an artificial intelligence-powered chatbot to help its employees deflect criticism from family members over the holidays, reports The New York Times. The tool, called “Liam Bot” for reasons the company has not disclosed, helps walk employees through tough conversations about Facebook’s various controversies.
The tool was rolled out to employees shortly before the US Thanksgiving holiday, the NYT reports, and it first entered testing back in the spring. The answers are written by the company’s public relations team and largely appear to align with the executive team’s public statements on topics like free speech, election meddling, moderation, and more.
When asked about hate speech, for instance, the NYT reports that Liam Bot will respond with a few available prompts like, “It [Facebook] has hired more moderators to police its content,” and, “Regulation is important for addressing the issue.” The bot also links out to helpful Facebook blog posts and, if the question is a technical one, to FAQs and guides for problems like resetting an account password.
Facebook has faced an unprecedented series of crises over the last few years, starting with its role as an effective misinformation tool during the 2016 election and punctuated by a head-spinning number of controversies like the Cambridge Analytica data privacy scandal and the company’s recent political ad policy. The barrage of bad press has made it more difficult for Facebook to recruit new employees and resulted in an uptick in employees asking former colleagues or fellow industry employees about outside job prospects, CNBC has reported.
The topic of morale among employees has also been a repeated concern over the last few years, especially as Facebook’s approach to radical transparency with employees has led to high-profile leaks in recent months. In October, The Verge published audio and transcripts of a series of Q&A sessions Facebook CEO Mark Zuckerberg held with employees, revealing his personal thoughts and feelings about a number of hot-button topics like Sen. Elizabeth Warren’s stance on regulating Big Tech. The leak, a rare break in a sacred pact of secrecy Facebook has cultivated among its tens of thousands of employees for the last decade, was yet more evidence that morale has been flagging.
Facebook’s answer to this scenario, at least as it relates to appeasing friends and family members of employees, appears to be a technical one in the form of an AI chatbot. “Our employees regularly ask for information to use with friends and family on topics that have been in the news, especially around the holidays,” a Facebook spokeswoman told the NYT. “We put this into a chatbot, which we began testing this spring.”