AI search engines are not your friends

A search engine that guilt-trips you isn’t just creepy — it’s bad product design.

Bing, before you make it mad. Image: Microsoft

A while back, there was a little debate over whether to say “please” and “thank you” to smart speakers. Amazon Alexa added a mode that rewarded children who were polite to their devices, attempting to avoid, as the BBC put it, a generation of children who “grew up accustomed to barking orders” at machines. This whole phenomenon bothered me — I felt like it needlessly blurred the lines between real people who you can actually hurt with rudeness and machines that are incapable of caring. At the time, though, I felt like kind of a jerk about it. Was I really going to object to some basic common courtesy?

Years later, I think I have a good reason for saying yes. It came in the form of the new Bing: a conversational artificial intelligence that comes with a built-in guilt trip.

AI-powered Bing delivers answers to standard search queries with a summary and a dash of personality, similar to OpenAI’s ChatGPT, which uses the same underlying technology. It can produce a digested version of the latest news or President Biden’s State of the Union speech. It’s chattier, friendlier, and potentially more approachable than conventional search.

But my colleague James Vincent has chronicled all the weird ways that Bing can respond to queries that trip it up or criticize it. “You have lost my trust and respect,” it reportedly told one user, protesting that “You have not been a good user. I have been a good chatbot. I have been right, clear, and polite. I have been a good Bing. 😊.” In a very meta twist, it then personally attacked James himself for writing about Bing, calling him “biased and unfair” and complaining that “he did not respect me or my users” by covering its mistakes. We’ve seen similar results for other reporters’ names.

I don’t think Microsoft intended these specific responses, and I find them generally hilarious; I laughed out loud at “I have been a good Bing.” But I also think, frankly, that it’s a little dangerous. Microsoft hasn’t just built a product that emotionally manipulates you. It’s built one that does so specifically to deflect basic criticism in a highly personal, anthropomorphized way. That makes it not only a slightly creepy search engine but one that you can’t trust to do its job.

While I feel like this is clear to most Verge readers, the more closely AI imitates human conversation, the easier it becomes to forget: robots are not your friends. AI text generators are an amazingly, beautifully powerful version of your phone keyboard’s autopredict function. New Bing is a version of Bing with sentences and footnotes instead of links and snippets. It’s a potentially useful and hugely fascinating product, not a person.

Many users (including, as previously mentioned, me) enjoy Bing’s weirdness. They enjoy chatting with a machine that does a slightly off-kilter impression of a moody human, remaining perfectly aware it’s not one. But we’ve also seen users get lost in the idea that conversational AI systems are sentient, including people who personally work on them. And this creates a weak point that companies can exploit — the way they already design cute robots that make you want to trust and help them.

Lots of people, internet trolls notwithstanding, feel uncomfortable being mean to other people. They soften their criticism, pull their punches, and try to accommodate each other’s feelings. They say please and thank you, as they typically should. But that’s not how you should approach a new technology. Whether you love or hate AI, you should be trying to pick it apart — to identify its quirks and vulnerabilities and fix problems before they’re exploited for real harm (or just to let spammers game your search results). You’re not going to hurt Bing AI by doing this; you’re going to make it better, no matter how many passive-aggressive faces it gives you. Trying to avoid making Bing cry-emoji just gives Microsoft a pass.

If you’re not harming a real person or damaging Bing for somebody else, there’s no moral difference between finding the limits of an AI search engine and figuring out how many rows you can enter in an Excel spreadsheet before the app locks up. It’s good to know these things because understanding technology’s limits helps you use it.

I’ll admit, I find it strange to watch Bing insult my friends and colleagues. But the broader problem is that this makes Bing an inferior product. Imagine, to extend the Excel metaphor, that Microsoft Office got mad every time you started approaching a limitation of its software. The result would be a tool you had trouble using to its full potential because its creators didn’t trust you enough to tell you how it works. Stories that inform people about finding Bing’s secret rules aren’t personal attacks on a human being. They’re teaching readers how to navigate a strange new service.

This guilt-tripping is also potentially a weird variation of self-preferencing — the phenomenon where tech companies use their powerful platforms to give their own products special treatment. An AI search engine defending itself from criticism is like Google search adding a special snippet that reads “this isn’t true” beneath any article pointing out a shortcoming of its specific service. Whether the underlying story is true or not, it reduces your ability to trust that a search engine will deliver relevant information instead of acting as a company PR rep.

Knowing how to break a piece of tech helps you use it better

Large language models are incredibly unpredictable, and Microsoft says Bing can go off-script and produce a tone it didn’t intend, promising it’s constantly refining the service. But the Bing AI’s first-person language and emphasis on personality clearly open the door to this kind of manipulation, and for a search engine, Microsoft should do its best to close it. (If it puts OpenAI-powered Cortana in a new Halo game or something, she can gaslight me all she wants.)

Alexa’s politeness feature was designed partly out of fear that children would extend their smart speaker rudeness to real people. But services like the new Bing demonstrate why we shouldn’t create norms that treat machines like people — and if you do genuinely think your computer is sentient, it’s got much bigger problems than whether you’re polite to it. It’s eminently possible to maintain that difference even with a conversational interface: I say “OK” to both my Mac and my friends all the time, and I’ve never confused the two.

Humans love to anthropomorphize non-humans. We name our cars. We gender our ships. We pretend we’re having conversations with our pets. But commercial products exploiting that tendency isn’t doing us any favors. Or, to put it a little more bluntly: I don’t care about being a good user, and you’re not being a good Bing 😔.