
Facebook is trying to teach chatbots how to chit-chat

The company’s researchers say giving chatbots ‘consistent personality’ might be the key to better conversation

Facebook’s virtual assistant M (pictured, now defunct) is one example of how the company wants to use chatbot technology.
Photo by Vjeran Pavic / The Verge

Despite the death of its personal AI assistant M, Facebook hasn’t given up on chatbots just yet. Over the past couple of years, it’s slowly improved what its artificial agents can do, but their latest challenge is something that can confound even the smartest human: making small talk.

You’d be forgiven for thinking otherwise because of their name, but chatbots can’t really chat. As researchers from Facebook’s FAIR lab explain in a pre-print paper published this week, they fail at this task on a number of levels. First, they don’t display a “consistent personality”: they fail to stick to the same set of facts about themselves throughout a conversation. Second, they don’t remember what they or their conversational partners have said in the past. And third, when faced with a question they don’t understand, they tend to fall back on diversionary or preprogrammed responses, like “I don’t know.”

Chatbots can get by with stock phrases, but conversation takes work

Even with these constraints, chatbots can be engaging. (See, for example, the famous ELIZA bot from the 1960s, which acted as a rudimentary therapist by relying on stock phrases like “How do you feel right now?”) But the goal now is not just interrogation, but conversation, and to recreate that, researchers have turned to deep learning. This means that instead of mapping out preprogrammed questions and answers, chatbots learn by looking for patterns in large datasets.
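
To make the contrast concrete, here’s a minimal sketch of the kind of rule-based, stock-phrase approach ELIZA pioneered. The keyword patterns and canned replies below are invented for illustration; they are not taken from the original program:

```python
import random
import re

# A minimal ELIZA-style responder: canned replies triggered by keyword
# patterns, with a diversionary fallback when nothing matches.
# These rules are illustrative, not the original ELIZA script.
RULES = [
    (re.compile(r"\bi feel\b", re.I),
     ["Why do you feel that way?", "How long have you felt this?"]),
    (re.compile(r"\bmy (mother|father)\b", re.I),
     ["Tell me more about your family."]),
]
FALLBACKS = ["How do you feel right now?", "Please, go on."]

def respond(utterance: str) -> str:
    for pattern, replies in RULES:
        if pattern.search(utterance):
            return random.choice(replies)
    # No rule matched: fall back on a stock phrase, exactly the
    # diversionary behavior the researchers describe.
    return random.choice(FALLBACKS)

print(respond("I feel tired of steak."))              # keyword reply
print(respond("Dynamic structure can absorb shock.")) # fallback
```

A deep-learning chatbot replaces this hand-written rule table with a model that has absorbed response patterns from a large corpus, which is where the choice of training data starts to matter.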

So far this approach has produced good results, but one challenge now, say Facebook’s researchers, is getting the right data to begin with. Many contemporary chatbots are trained on dialogue taken from movie scripts, for example, which means that when they have meandering conversations (rather than directed exchanges for tasks like customer service), they tend to come up with odd non sequiturs. See below, where Person 1 is a real human, and Person 2 is a chatbot trained on movie subtitles:

[PERSON 1:] Oh yeah? well I enjoy a nice steak. how about you?
[PERSON 2:] Father always used to say that if we ever had the money you have... we’ d have steak and ice cream three times every day!
[PERSON 1:] I eat all the steak and chicken too, even bacon
[PERSON 2:] Father always used to say that if we ever had the money you have... we’ d have steak and ice cream three times every day!
[PERSON 1:] yeah you said that already
[PERSON 2:] Dynamic structure can absorb shock.
[PERSON 1:] yes it can. I know about that as a bodybuilder
[PERSON 2:] Ladies and gentlemen... the strongest man in the world is blowing up a hot water bottle.

Not great, even if you are into strongmen.

To try to fix this, Facebook’s engineers have built their own dataset to train chatbots on. It’s called Persona-Chat, and it consists of more than 160,000 lines of dialogue sourced from workers on Amazon’s Mechanical Turk marketplace, a standard source of paid human labor for generating AI training data.

Each chatbot ‘personality’ was built from five simple statements

The clever thing about Persona-Chat is that the idle talk it contains isn’t just random. To give some structure to the data, and to address the challenge of making chatbots with personality, the Mechanical Turk workers were asked to design their own character to guide their dialogue. This meant coming up with five basic biographical statements and using them as topics of conversation. So, for example, one persona was based around the following facts: “I am an artist. I have four children. I recently got a cat. I enjoy walking for exercise. I love watching Game of Thrones.”

It’s not exactly Tolstoy, but it’s a start.
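
As a rough illustration of how a persona like this conditions a neural chatbot, the profile sentences are typically prepended to the dialogue history so that every reply is generated in the context of the same five facts. This is a generic sketch, not FAIR’s actual model code; `generate`-style details and the input format are assumptions:

```python
# Generic sketch of persona conditioning: the profile sentences are
# concatenated with the dialogue history to form the model's input.
# A trained sequence-to-sequence model (not shown) would consume this
# text, or its token IDs, and decode a persona-consistent reply.
persona = [
    "I am an artist.",
    "I have four children.",
    "I recently got a cat.",
    "I enjoy walking for exercise.",
    "I love watching Game of Thrones.",
]

history = [
    "PERSON 1: What do you do for a living?",
]

def build_model_input(persona: list[str], history: list[str]) -> str:
    # The persona acts as a persistent prefix: it is fed in with every
    # turn, which is what keeps the bot's answers consistent over time.
    return "\n".join(["your persona: " + fact for fact in persona]
                     + history)

print(build_model_input(persona, history))
```

The design point is that consistency doesn’t come from memory tricks; it comes from the same five facts being part of the model’s input on every turn.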

This data was used to train several existing neural network chatbot architectures, and the results were then assessed by another group of Mechanical Turk workers. In each case, the workers were asked to conduct a conversation with the persona-driven bot and compare it with both other chatbots and humans. As you might expect, the persona bot didn’t score as highly on criteria like “fluency” and “consistency” as the humans did, but it outperformed the chatbot trained on movie dialogue.
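
To give a sense of what that comparison involves, here is a toy sketch of aggregating such human ratings into per-model averages. The criteria names echo the ones above, but every number is invented for illustration; the paper reports its own figures:

```python
from statistics import mean

# Toy aggregation of human evaluation scores: each rater scores a
# conversation on several criteria, and per-model means are compared.
# All scores below are invented for illustration.
ratings = {
    "persona_bot": {"fluency": [3.4, 3.8, 3.1], "consistency": [4.0, 3.7, 3.9]},
    "movie_bot":   {"fluency": [2.9, 3.0, 2.5], "consistency": [2.1, 2.6, 2.4]},
    "human":       {"fluency": [4.6, 4.8, 4.5], "consistency": [4.7, 4.9, 4.6]},
}

for model, criteria in ratings.items():
    summary = {c: round(mean(scores), 2) for c, scores in criteria.items()}
    print(model, summary)
```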

Here’s a sample of conversation. Person 1 is a human, and Person 2 is the persona-based chatbot. (Its main interest, in case you can’t guess, is “writing romance novels.”)

[PERSON 1:] I just got done playing soccer. What have you been up to?
[PERSON 2:] I do not have pets. I spend my free time writing. what about you?
[PERSON 1:] Just graduated from college. Looking for work.
[PERSON 2:] yes I graduated, in writing and poetry
[PERSON 1:] Have you had anything published?
[PERSON 2:] I mainly write fiction novels.
[PERSON 1:] Do you like to watch sports?
[PERSON 2:] do you like kings of leon my favorite by them is use somebody
[PERSON 1:] Are you married? I will be married soon.
[PERSON 2:] haha, no time. I have got a novel to finish

As you’d expect, it’s still not perfect small talk, but it is at least consistent. (And who hasn’t ended a conversation with the excuse “I have got a novel to finish”?) The research also points to a truth that seems almost common sense. If you want to have an interesting conversation with someone — even a virtual someone — then it helps if they have a personality, including likes and interests.

As virtual assistants like Siri and Alexa become more integrated into our lives, we can expect their creators to build out their characters. The problem then becomes whether or not we’ll get along.