Discover the stupidity of AI emotion recognition with this little browser game
Does your face match your feelings?

Phil Collins being taken at Face Value.
Image: The Verge

Tech companies don’t just want to identify you using facial recognition — they also want to read your emotions with the help of AI. For many scientists, though, claims about computers’ ability to understand emotion are fundamentally flawed, and a little in-browser web game built by researchers from the University of Cambridge aims to show why.

Head over to the site, and you can see how your emotions are “read” by your computer via your webcam. The game challenges you to produce six different emotions (happiness, sadness, fear, surprise, disgust, and anger), which the AI attempts to identify. However, you’ll probably find that the software’s readings are far from accurate, often interpreting even exaggerated expressions as “neutral.” And even when you do produce a smile that convinces your computer you’re happy, you’ll know you were faking it.

This is the point of the site, says creator Alexa Hagerty, a researcher at the University of Cambridge Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk: to demonstrate that the basic premise underlying much emotion recognition tech — that facial movements are intrinsically linked to changes in feeling — is flawed.

“The premise of these technologies is that our faces and inner feelings are correlated in a very predictable way,” Hagerty tells The Verge. “If I smile, I’m happy. If I frown, I’m angry. But the APA did this big review of the evidence in 2019, and they found that people’s emotional space cannot be readily inferred from their facial movements.” In the game, says Hagerty, “you have a chance to move your face rapidly to impersonate six different emotions, but the point is you didn’t inwardly feel six different things, one after the other in a row.”

A second mini-game on the site drives home this point by asking users to identify the difference between a wink and a blink — something machines cannot do. “You can close your eyes, and it can be an involuntary action or it’s a meaningful gesture,” says Hagerty.

Despite these problems, emotion recognition technology is rapidly gaining traction, with companies promising that such systems can be used to vet job candidates (giving them an “employability score”), spot would-be terrorists, or assess whether commercial drivers are drowsy. (Amazon is even deploying similar technology in its own vans.)

Of course, human beings also make mistakes when we read emotions on people’s faces, but handing over this job to machines comes with specific disadvantages. For one, machines can’t read other social cues like humans can (as with the wink / blink dichotomy). Machines also often make automated decisions that humans can’t question, and can conduct surveillance at mass scale without our awareness. Plus, as with facial recognition systems, emotion detection AI is often racially biased — more frequently assessing the faces of Black people as showing negative emotions, for example. All these factors make AI emotion detection far more troubling than humans’ ability to read others’ feelings.

“The dangers are multiple,” says Hagerty. “With human miscommunication, we have many options for correcting that. But once you’re automating something or the reading is done without your knowledge or consent, those options are gone.”