
Microsoft thinks AI can beat Google at search — CEO Satya Nadella explains why

AI is coming for your browser, your social media, and your operating system, too.



I’m coming to you from Microsoft’s campus in Redmond, where just a few hours ago, Microsoft announced that the next version of the Bing search engine would be powered by OpenAI, the company that makes ChatGPT. There’s also a new version of the Edge web browser with OpenAI chat tech in a window that can help you browse and understand web pages. 

The in-depth presentation showed how OpenAI tech running in Bing and Edge could radically increase your productivity. Microsoft demoed it making a travel itinerary, posting to LinkedIn, and rewriting code to work in a different programming language.

After the presentation, I was able to get some time with Microsoft CEO Satya Nadella. Nadella has been very bullish on AI. He’s previously talked about AI as the next major computing platform. I wanted to talk about this next step in AI, the partnership with OpenAI, and why he thought now was the best time to go after Google search.

This is a short interview, but it’s a good one. Okay, Satya Nadella, CEO of Microsoft. Here we go.

Satya Nadella, you are the CEO of Microsoft. Thank you for coming on Decoder today.

Thank you so much for having me.

Microsoft just announced a huge new version of Bing powered by OpenAI technology. A couple of weeks ago, the company made what was called a multibillion-dollar, multiyear investment in OpenAI. Tell us what’s going on.

Well, today’s announcement is all about rethinking the largest software category there is, search, with this new generation of AI because it’s a platform shift, and you get to reimagine pretty much everything, starting with the core ranking. In fact, perhaps the most salient part of today’s announcement is that we’ve had the best gain in relevance in the core ranking using some of these large models. Second, it’s not just a search engine; it’s an answer engine — because we’ve always had answers, but with these large models, the fidelity of the answers just gets so much better.

And then we’ve incorporated chat right into search, and that chat is grounded in search data. So you can do a natural language prompt or a query, which is long, you get a great answer, and then you can engage in a conversation with the prompt as the grounding or the context. So it’s about basically bringing in a more sophisticated, larger, next-generation model compared to ChatGPT and grounding it in search data.

The last thing we showed off was a copilot for the web. In Edge, you can look at any website or any document on a website, like a 10-Q, for example, and then do things like summarization. So a whole lot of these features all came together essentially as the new Bing.

A really interesting piece of the puzzle here is that a lot of what you described is powered by OpenAI and OpenAI’s technology. OpenAI CEO Sam Altman was onstage with you today. You have worked with OpenAI for three years, but you haven’t acquired them. Instead, you made a huge investment in them. Why work with an outside technology vendor for the largest software category in the world?

First of all, you have to remember that our relationship and cooperation with OpenAI have many facets. The most important thing is what we’ve done over the last four years to actually build out the core infrastructure on which OpenAI is built: these large models, the training infrastructure — and the infrastructure doesn’t look like regular cloud infrastructure. We had to evolve Azure to have specialized AI infrastructure on which OpenAI is built. And by the way, Inception and Character.ai are also using Azure. There will be many others who will use Azure infrastructure. So we are very excited about that part. And then, of course, we get to incorporate these large models inside of our products and make those large models available as Azure AI. And in all of this, we have both an investment return and a commercial return. And so we think we are well placed to partner. I will never assume that great partnerships can’t have great returns for our customers, shareholders, and Microsoft.

There was a lot of talk in the presentation about the values that are coming into Bing, about the safety work that’s being done, about the responsible AI work that Microsoft has done for years. How do you make sure that bridges the gap to OpenAI, which is not your company but is obviously tied very closely to it? And how do you make sure that your products inherit all of those values even when you’re working with an outside company?

First of all, OpenAI cares about safety. In some sense, their entire inception was about how to think about safety and alignment in AI. And so, we share that. We’ve had our principles — as we talked about today, Nilay — since 2016. We’ve published the principles. And ever since, quite frankly, we’ve been very focused on the hard work of incorporating them into our engineering when we build products, starting with design. One of the things I think a lot about is, when you have a new model coming, it’s probably most important to put humans in the loop — versus designing the human out — so that you can, in fact, ensure that the human agency and judgment is what you use to train the model to be aligned with human feedback. So that’s what we are doing.

When I look at even what we are doing in Bing, we’re taking it one step further to ground AI in the context of search. So I always say, “Look, these generative models don’t just randomly generate stuff. You prompted it.” So there’s a whole lot you can do in the meta prompt and the sequence of prompts you generate, which we can assist with. So, there are a lot of, I call it, product design choices one gets to make when you think about AI and AI safety. Then, let’s come at it the other way. You have to take real care of the pretraining data, because that’s what the models are trained on. What’s the quality, the provenance of that pretraining data? That’s a place where we’ve done a lot of work.
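To make that grounding idea concrete, here is a minimal sketch of the general pattern Nadella describes: a meta prompt plus retrieved search results prepended to the user’s query before the model answers. The function names, the prompt format, and the search() and complete() helpers are assumptions for illustration, not Bing’s actual Prometheus implementation.

```python
# A minimal sketch of search-grounded prompting (retrieval-augmented generation).
# Everything here is illustrative: the function names, prompt format, and the
# search() / complete() helpers are assumptions, not Bing's actual stack.

def build_grounded_prompt(user_query: str, search_results: list[dict]) -> str:
    """Prepend a meta prompt and retrieved search snippets to the user's query."""
    meta_prompt = (
        "You are a search assistant. Answer using only the sources below, "
        "and cite each source you use by its [number]."
    )
    sources = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']}): {r['snippet']}"
        for i, r in enumerate(search_results)
    )
    return f"{meta_prompt}\n\nSources:\n{sources}\n\nQuestion: {user_query}\nAnswer:"


def answer_with_grounding(user_query: str, search, complete) -> str:
    """`search` returns ranked results; `complete` calls a large language model."""
    results = search(user_query, top_k=5)   # ground the model in fresh search data
    prompt = build_grounded_prompt(user_query, results)
    return complete(prompt)                 # the model answers with citations
```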

Second, there’s the safety around the model at runtime. We have lots of classifiers around harmful content or bias, which we then catch. And then, of course, the takedown. Ultimately, in the application layer, you also have more of the safety net for it. So this is all going to come down to, I would call it, the everyday engineering practice. And guess what? Search is like that. Search is an AI product. It is interesting that we are now talking about a new algorithmic breakthrough in these large models, but we’ve had AI models for decades now, and we’ve really built our sense of what is authoritative, how to detect [what is] authoritative, how to ensure harmful content doesn’t get through. And those are all practices that’ll now be used.
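As a rough illustration of that runtime safety net, the sketch below shows one common pattern: scoring generated text with harm classifiers and suppressing anything that crosses a threshold. The classifier names, the threshold, and the helper functions are placeholders, not Microsoft’s production moderation system.

```python
# Illustrative runtime safety net: run generated text through content classifiers
# and suppress anything flagged. The classifiers and threshold are placeholders.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ModerationResult:
    allowed: bool
    reason: str | None = None

def moderate(text: str,
             classifiers: dict[str, Callable[[str], float]],
             threshold: float = 0.8) -> ModerationResult:
    """Score the text with each harm classifier; block if any score crosses the threshold."""
    for name, score_fn in classifiers.items():
        score = score_fn(text)
        if score >= threshold:
            return ModerationResult(allowed=False, reason=f"flagged by {name} ({score:.2f})")
    return ModerationResult(allowed=True)

# Usage sketch: wrap the model call so flagged answers never reach the user.
# answer = complete(prompt)
# verdict = moderate(answer, {"harmful_content": harm_model.score, "bias": bias_model.score})
# final = answer if verdict.allowed else "Sorry, I can't help with that."
```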

That leads me to the value exchange of search right now. In a traditional search model, I ask Bing a question; it might return some snippet, but it usually returns a list of links. If I go visit a webpage, the creator of that webpage might capture some advertising revenue or something else. Now, you’re just answering the question directly. And you’ve trained the model on other people’s information, other people’s reporting, and I’m admittedly biased in favor of reporting here. How do you make sure that they get the value back?

It’s very important. In fact, it’s one of the biggest things that is different about the way we have done the design. I would really encourage people to go look at it. … Look, at the end of the day, search is about fair use. Ultimately, we only get to use [all of this content] inside of a search engine if we’re generating traffic for the people who create it. And so, that’s why, if you look at whether it’s in the answer, whether it’s in chat, these are just a different way to represent the 10 blue links, more in the context of what the user wants. So the core measure, even what SEO looks like, if anything, is the thing we’ll all learn over the next several years. Perhaps there will be new incentives in SEO to generate even more authoritative content that then gets in. So overall, everything you saw there had annotations. Everything was linkable, and that’ll be the goal — whether it’s inside a search, in the answer, or even in the chat session.

But if I ask the new Bing what the 10 best gaming TVs are, and it just makes me a list, why should I, the user, then click on the link to The Verge, which has another list of the 10 best gaming TVs?

Well, that’s a great question. But even there, you will say, “Hey, where did these things come from?” And would you want to go dig in? Even search today has that. We have answers. They may not be as high quality, and they are just getting better. So, I don’t think of this as a complete departure from what is expected of a search engine today, which is supposed to really respond to your query while giving users links that they can then click on, like ads. And search works that way.

The reason I ask this is… obviously, when you say you’re taking on the largest software category in the world, search, there’s a dominant player in Google. If Google stops sending as much traffic from its search engine results page to publishers, to creators, to other websites, regulators around the world would freak out, because Google has a dominant market share. Bing does not have a dominant market share. When you evaluate the risks, the IP risks, legal risks, and regulatory risks, do you say, “Well, look, we don’t have the share. We can take a step forward in how we present these results in a way that our competitor cannot”?

That’s not how I come at it.

I’m just curious.


Yeah, I come at it primarily on… today, if you look at the search category, it’s great. It works 50 percent of the time. It doesn’t work for the other 50 percent of the time. So I think what I really want to do is to go back and say, “Look, is there some new powerful technology that can make search a better product without fundamentally changing how search gets permission to even exist as a product, which is other people’s content organized in useful ways so that users can find it?” To me, that is the category. And so, we will live and die by our ability to help publishers get their content to be seen by more people.

Up to now, you’re absolutely right: Google has dominated this market by a significant margin. We hope, in fact, that having two or more search engines — there’s not just us, there’ll be other competitors — and a more evenly spread search share will only help publishers get traffic from multiple sources. And by the way, advertisers [will get] better pricing. And so publishers will make more money, advertisers will make more money, and users will have great innovation. Think about what a great day it’ll be.

I am eager for there to be more competition in search. What I’m curious about is this: more and more people produce more and more AI content, and that becomes the base layer that you’re training against. If, instead of me writing a story about the Chinese spy balloon, I ask Bing to write such a story, and that gets fed back into Bing, eventually the amount of original content in the ecosystem begins to wither. Is that a feedback loop that you’re worried about?

Absolutely. But the way I look at it is in how people actually talk about using it. My daughter sent me this unbelievable example the other day. She’s taking some French lit class, and she said, “Hey, I was using this AI tool to summarize what I was reading, and it took me two hours” — because she was doing meta prompts and prompts and learned more about that text than ever before. And so, I feel like, let’s give ourselves a little permission to think about what original content is. Because, as I said, AI doesn’t just generate it. You prompted it. You have a draft, which you edit. Today, I would be unemployable but for the red squiggly in Microsoft Word — because that’s what helps me write anything.

So, I think we’ve used and evolved to use new tools. I think of it that way. I think, yes, some of the drudgery of knowledge work may go away, but that doesn’t mean I won’t enjoy it. In fact, the best place, Nilay, I feel, is in GitHub Copilot. It’s not like suddenly you are not coding. If anything, you are more in the flow of coding with some of these prompts. You read more code, you accept more code. So I think it’s just a different way for us to perhaps enjoy our knowledge work more.

That brings us to the second product, which is the copilot inside of the Edge browser. If you look at Bing, you have an opportunity now to capture market share from Google. If you look at Edge, you have an opportunity to capture market share from Chrome, potentially Safari if you go to the iPhone. Is that how you’re seeing this? This is an inflection point: you have a new technology, you have a lead with this partnership with OpenAI, and it’s creating an opportunity for you to go take share? Or is it that you’re expanding the category, and you think you can bring in new users anyway?

I always start not from zero sum, but I look at it and say, “Hey, how does the category expand? How can we participate in that expansion?” That is, I think, the foundational level. But at the same time, there will be places where the dominant browser is Chrome. Forget anywhere else, on Windows, Google makes more money than all of Microsoft. So let’s start there. So there’s a huge opportunity for us if we got some additional share, whether for our browser or our search engine. And so, that’s how I look at it: let’s build first a product that is competitive in the marketplace that’s actually serving user needs, and like all things, Nilay, I’m not a one-platform guy. I grew up in a Microsoft that–

This is your big change in Microsoft, your leadership–

It’s the Microsoft that I grew up in. Because I always remember that Microsoft Office was on the Mac before even Windows. So that’s the Microsoft that I learned from. And I’ll always make sure that our software is everywhere where users want it.

It’s been a relative period of calm between Microsoft and Google. There was a previous period of, I would say, antipathy, or more open antipathy. Recently, you’ve partnered on things like Android on some of your hardware. I think Microsoft 365 on Chromebooks is a partnership that was recently announced. Do you expect this new head-on competition against their most important product to change that relationship?


First of all, look, I have the greatest of admirations for Google and what they’ve done. They’re an unbelievable company with great talent, and I have a lot of respect for Sundar [Pichai] and his team. So therefore, I just want us to innovate. We competed today. Today was a day where we brought some more competition to search. Believe me, I’ve been at it for 20 years, and I’ve been waiting for it. But look, at the end of the day, they’re the 800-pound gorilla in this. That is what they are. And I hope that, with our innovation, they will definitely want to come out and show that they can dance. And I want people to know that we made them dance, and I think that’ll be a great day.

What was the moment in the development of the product where you said, “Okay, it’s ready. We should announce it like this” — with a pretty direct shot at the 800-pound gorilla? Was there a light switch that flipped for you? Was it a committee decision? How’d that work?

So when I first saw this new model… because the model that you saw today is the next-generation model–

Is it GPT-4?

Let Sam [Altman, OpenAI CEO], at the right time, talk about his numbers.

Okay.

So, it is the next-generation model. We called it the Prometheus model because, as I said, we’ve done a lot to the model to ground it in search. The search use case is pretty unique, and so, we needed to ground it in that as well. So when I first saw the raw model back in the summer of, I would say, 2022, that’s where I thought that this is a game-changer in terms of the search category, aside from everything else that I’m excited about, because I do care about Azure having these APIs even.

So we’ve been at it. In fact, I’ll never forget the first query I did on the model. Growing up, I always felt, if only I could read Rumi translated into Urdu and then translated into English, that is my dream. I just put that in as one long query, and it was magical to see it generated. And I said, “Man, this is different.” I could have programmed it, done some multi–

That was your first query?

That was the query that changed–

You are one of the classiest people I’ve ever met in my entire life. That’s a very complicated–

Poetry is great, man.

I buy it. My first query was like, “Are you alive?” So that’s where I would’ve gone. So, you run this query to translate this poem across two languages. And you receive the response, and you think, “Okay, this is a product with revenue possibility, or this is a product with market share possibility.”

Yeah. Like all things, one of the things that I think about is, in platform shifts, two things have to happen. You have to retool pretty much every product of yours, so you’ve got to rethink it, whether that’s the way you build it or what its core features are. It’s like how Microsoft had to pivot for the cloud to rethink Exchange. It was not Exchange Server; it was Exchange as a service. Or take what we had to do with our server infrastructure: we had to rebuild, essentially, a new core stack in Azure. So every time, with transitions, you have to essentially rewrite it. That’s how I think about it. The second thing is you also have to think about the business model. Sometimes these transitions are pretty harsh. I’ll tell you, the last transition, from having the high-share server business with great gross margins to saying, “Hey, the new business is called cloud, and it’s going to have one-fourth the margins,” was pretty harsh, but we made it.


Whereas here, when I look at this, there are two things. One is it’s absolutely new tech, but it builds on the cloud, so that’s one place where we already have relevance; this is the next generation of cloud. And second, in search, the economics are interesting: we already have a profitable business but with very little share. And so, every day, I just want a few more users and a little bit more gross margin. So, yeah, I did see, I think, a tremendous opportunity for us to make some real progress here.

So the model right now is an $11 billion a year revenue business, something like that?

Something like that. I think Amy [Hood, Microsoft CFO] is going to talk about — I don’t know how she wants to talk about it. Yeah.

Incredible hobby. I wish I had an $11 billion a year hobby. You want to grow that into a real business. You want to take market share. But obviously, the new technology does not have the same cost structure as the old search query. I’m sure that whatever you’re doing with OpenAI, it’s more compute-intensive, and then obviously you have a partner sitting in the middle of it. And then the monetization model is still search ads. It’s direct response search ads. But as you bring more and more content on the screen, that model might change or the price of those ads might change. 

It’s so wonderful. Think about what you just said. You said, “Okay, here is the largest software category where we have the smallest share,” and what you just painted out is an unbelievable picture of incremental gross margin. If [former Microsoft CEO] Steve Ballmer saw that, he would’ve lit up and said, “Oh my God.” Very few times in history do opportunities like that show up where you suddenly can start a new race with a base where every day is incremental gross margin for you and someone else has to play to protect it all: every user and all the gross margin.

So, I want to wrap up with two questions here. One, I just want to come back to this. I think you are going to face a lot of scrutiny from publishers, creators, other website owners saying, “Hey, that is our training data.” You are already seeing it. Getty is suing a handful of the image-generation AI companies saying, “Hey, you’re generating results with our watermark in it. This is obviously ours.” So I’m curious if you have a view of the potential IP risk on the downside, or on the positive side, of how to grow and keep the ecosystem vibrant.

On the search side, I’m very, very clear. The search category is about fair use so that we can generate traffic back to publishers. We want to stay very–

And is that a KPI where you’re keeping track of traffic you’re sending out?

100 percent. That’s the only way we are going to be... Our bots are not going to be allowed to crawl [sites] if we are not driving traffic. So therefore, that, I think, is the core of the category. In other places, again, it’ll have to be really thought through as to what fair use is. And then sometimes, I think there’ll be some legal cases that will also have to create precedent. But at the end of the day, I don’t think any of this can be done without a framework of law that governs it and, ultimately, financial incentives that benefit [everyone]. If anything, I look at this and say, “God, is this something where the fact that there’s going to be more competition can really help publishers get more monetization, advertisers get better returns for their investment, and users have more choice?”

All right. I want to end with what I think is the most important question. You have described a transformational moment in the largest software category in the world. You’ve said it. Obviously, there’s a moment of increased competition against the dominant player. What was it like in the room when you decided to stick with the Bing brand? There had to have been a slide with 50 options. It’s Microsoft; I’m assuming there’s some passionate back-and-forth debate. Eventually someone decides. Was it you who decided?

Yeah. We wanted to call it Azure Search 2023–

Xbox Live Search.

Or bring back Clippy. Yeah, yeah. No, look, interestingly enough, it was not much of a discussion. Because we felt, look, we love Bing. We have been at it. I was there the day of the Bing launch. I worked on it.

But it has a lot of baggage as a brand.

Yeah. It’s like, look, brands can be rebuilt as long as there’s innovation. I think the brands are only as good as the product and as good as the innovation, and so, we’re going to go work it.

And that was your choice.

Absolutely.

All right. Well, Satya, thank you so much for talking today. It was really exciting to see all the new stuff. I’m eager to see how it grows in the future.

Thank you so much, Nilay.
