Each of Burger King’s new ads starts with an anachronistic burst of noise from a dial-up modem and a solemn warning: “This ad was created by artificial intelligence.” Then, over shots of glistening burgers and balletic fries, a robotic-sounding narrator deploys exactly the sort of clunky grammar and conceptual malapropisms we expect from a dumb AI.
“The chicken crossed the road to become a sandwich. Burger King encouraged the chicken,” says the voice. “The Whopper lives in a bun mansion, just like you,” it chirps.
They’re good ads! And, of course, they’re lies. In a press release, Burger King claims the videos are the work of a “new deep learning algorithm,” but an article from AdAge makes it clear that humans — not machines — are responsible for the funnies. “Artificial intelligence is not a substitute for a great creative idea coming from a real person,” Burger King’s global head of brand marketing, Marcelo Pascoa, told the publication.
It’s a silly, unimportant deception, but one that points to an important truth: we really don’t know what artificial intelligence is and is not capable of.
Burger King’s joke lands because AI exists in the public imagination as a quantum entity — simultaneously powerful and pathetic. Artificial intelligence is about to take our jobs, we’re constantly told; it’s going to destroy the economy and humanity to boot. But we also know from our own experience that it’s incredibly dumb, incapable of understanding the simplest commands (hello, Siri), of telling the difference between a stop sign and a cyclist, or of not showing me nine toilet seats I might like to buy after I buy the one toilet seat I will need this decade.
This fallibility makes AI a perfect comic foil in the age of algorithms. It can stand in for the stupidity of the machines that run our lives, and help us process this brave new world. Some people exploit this potential using genuine machine learning (e.g., Janelle Shane and her AI-generated beer names, Harry Potter characters, and so on), while others, like Burger King, only pretend to (see also: those viral memes that start “I forced a bot to watch ____”).
On the one hand, this understanding of AI as an idiot savant is perfectly sensible. Artificial intelligence is dumb; anyone in the industry will tell you that. But it also creates a cloud of confusion that mainly benefits the people who control and profit from this technology.
In a recent essay, filmmaker and writer Astra Taylor described the concept of “fauxtomation,” where companies oversell the technological capabilities of their products. Sometimes this is harmless, she says, like “smart” ovens that just scan barcodes to find out how long to cook your dinner. In other instances it’s more insidious and even dangerous, like when people pretend to be chatbots, or when the “AI moderators” removing gruesome and disturbing videos turn out to be human.
If we don’t understand AI, we won’t know how to fix the problems it creates
As Taylor argues, automation — including automation powered by AI — isn’t neutral. It’s used to justify stagnant pay and worsening work conditions. It’s used to threaten people who might organize for better jobs (“$15 an hour? We’ll find a robot who’ll work for less”), and with fauxtomation, it often erases the lives of people at the bottom of the economic food chain. This means that having a good understanding of what technology can and cannot do is vital. It helps the public and politicians know what’s at stake, and what needs to be done to avoid the worst outcomes.
Of course, Burger King’s ads are just that: ads. But they’re also the latest product of our anxiety about AI. We used to worry that machines would take the jobs flipping burgers. Now we know that’s happening, and we worry they’re going to write our ads instead.