John Hanke is the CEO of Niantic, the company that makes the wildly popular Pokémon Go mobile game in partnership with Nintendo and the Pokémon Company. John actually started Niantic as a skunkworks project when he was a Google Maps executive, and later spun it out into an independent company.
Pokémon Go and its predecessor Ingress are now the largest and most successful augmented reality games in the industry, which means John has long been at the forefront of what we’ve all started calling the metaverse: digital worlds that interact with the real world.
Lots of companies are chasing metaverse hype. Mark Zuckerberg just renamed all of Facebook “Meta” to underline his shift in focus, and we just had Meta CTO Andrew Bosworth on Decoder to talk about it — but John’s been at it for a while, and I wanted to talk about the reality instead of the hype: is the technology for all this ready? What are the innovations that need to happen to enable some of these dreams? And how do you solve the problem of people walking around in headsets, experiencing wildly different realities from one another?
We also coin the phrase “marketplace of realities.” It’s a ride.
Okay. John Hanke, CEO of Niantic. Here we go.
This transcript has been lightly edited for clarity.
John Hanke is the founder and CEO of Niantic, the maker of Pokémon Go. Welcome to Decoder.
Thanks, Nilay. Great to be here.
I am excited to talk to you. The metaverse has a lot of big ideas to unpack, but I want to start at the very beginning. What is Niantic?
Niantic is a company that is building some really fun, innovative entertainment products: Pokémon Go, Pikmin Bloom, Ingress, and lots of others in development. Niantic is also building a platform so other people can build games and other products that connect us as people to the real world and with other people in real life.
I feel nuts describing this to people because Pokémon Go is such a massive phenomenon, but to be specific: Pokémon Go is a game you play on your phone. You walk around the real world, trying to catch pokémon that appear on your phone in various places. It’s obviously a massively multiplayer game. Is that a fair description of all of your products, or is that more specific to Pokémon?
I think it’s fair. Our overall goal is to turn the world into a game board — to turn the world itself into a more magical, fun place. Pokémon Go is a great example of that. It allows all of us who love this series to live out that fantasy of pokémon existing out in the world: finding and catching them and trading them and battling with them, and all that.
There’s a little bit of an interesting backstory here. You started at Google, and Niantic was part of Google, correct? Ingress was the first game Niantic created, and it is still running. It became popular enough that Niantic spun out into its own company. Could you describe that process?
I worked on [Google] Maps for a long time. I co-founded a company called Keyhole, which was acquired by Google. Keyhole was the foundation for Google Earth and parts of Google Maps. I ran that group and worked on maps for a long time at Google. Then we started Niantic Labs as a little skunkworks. Three of us peeled off and said, “We’ve got this amazing map of the world, all this great infrastructure like mobile phones and wearables that seem really interesting; what kinds of apps can we build for that future?” We started experimenting.
Ingress was the first game. It is still ongoing, and it has been a massive underground success. I don’t know how else to describe it. Once you see Ingress, you start noticing it everywhere — but it’s not Pokémon. It’s not some mainstream huge success in that way.
I would call it our underground cult success. Our first product out of the gate at Niantic Labs was actually Field Trip, which was not a game. Field Trip was about learning about the world — cool places to go, history, artwork that you might see in public spaces. It was really designed to be the app that would be the precursor to what you might have on smart glasses someday: you’re walking around and you want your device to inform you about the world around you.
Our second product was Ingress. Ingress is that thing that turns the world into a game board. In the case of Ingress, two teams battle against one another to control the world — à la Risk, but with a sci-fi, J.J. Abrams bent — around portals, which are public artwork and historic places, and other landmarks in the world. You interact with them and take them over, and then the other team does the same thing.
It’s a global MMO, so it’s very collaborative, with teams formed around the world, across boundaries of countries and ethnicity and language. To this day, Russians, Koreans, Israelis, Japanese, Americans, and Argentinians are still battling to control the world in Ingress. I do it daily on my morning walk as well, so I’m part of that struggle for control of the world.
I love it. Niantic is adding some new messaging and communication features to both Ingress and Pokémon Go next year.
Social has always been a core part of our products. For us, that means helping people get together and spend time together with friends and family in real life. We are working on a couple things. One is a feature that’s coming in Pokémon Go that gives you a way to share a postcard of cool places you’ve visited in the real world with your friends in the game. The other thing that we’re doing is a little bigger and broader in nature: a cross-game social identity. Ingress is the first place that feature will show up.
It is a way for you to communicate and plan activities with other people. Planning meetups is a core part of what people do in Ingress and in other games like Pokémon Go. In Ingress, people plan raids and farm portals together. It gives people a way to easily find friends who are in their area — meet them, talk with them, talk about the game, share things about the game. Importantly, people plan opportunities to play together in real life. That core social identity and set of social services is something that we see as a fundamental part of our games and our platform in the future.
Back to the company as a whole: could you describe the process of spinning out of Google and becoming your own standalone company?
The process was six months of pain and torture with piles of documents and goodwill on all sides. It was incredibly painful to separate intellectual property and people out of a company like that. We had started Niantic with the idea that we might spin out at some point, because we knew that we were doing something a little orthogonal to what Google was interested in and doing at the time.
We said, “Hey, we want to try this.” Larry [Page] was supportive. We had constructed some paperwork around the group that said, “Here’s some parameters so that if you want to spin it out in the future, it can be a possibility.” That cracked the door and we had to push on it for a while to make that happen.
At the end of the day, the Pokémon Company and Nintendo financed us, along with a venture capitalist and some angel investors. That allowed us to stand up Niantic as an independent company.
How long ago was that?
That was in October of 2015.
How many people is Niantic now? How much have you expanded?
We are around 800 people today. We were around 30 when we peeled off of Google.
Wow. How is that structured? How does the company work?
On the gaming side of the business, Niantic Studios, we have several internal game teams, plus producers that work with external game teams. Then we have the platform. We recently officially launched it externally as the Niantic Lightship platform. That takes all the tech and data we created to power the experiences that we launched, and makes those available to third-party devs. I’m super excited about that. We’ve worked with third-party devs for the past several years, but this makes it a website you can go to, enter your information, and get right into it — access those APIs, tutorials, how-tos, and all that. It’s a big step forward for the company. That’s the other half.
Foundationally, what’s the split? How many people work on Pokémon Go versus the Lightship platform?
If you look across all of the games portion and all of the platform portion, which includes all the research work that we are doing on AR and mapping, it’s about 50/50 in terms of our product and engineering resources. Of course, we also have operations, marketing, other functions, and an HR department that supports everybody.
I’m assuming the revenue is mostly Pokémon stuff to grow with the platform?
Yeah, we are growing it with our portfolio of games. We have about a dozen products in development. We just launched Pikmin Bloom, and we’ve got several others that I’m very excited about. Those will be coming out in the next year or two. We’ve been working on them for the past two or three years. That’s the short-term revenue growth story. With the platform, we’re really focused on engaging developers and just growing the number of people working on that platform first. I think revenue’s not really the primary focus of that part of the business at the moment.
Here’s the last of what I think of as the Decoder questions: how do you make decisions? Obviously, you were a startup founder. Then you were inside a big company. Then you had a renegade project inside a big company. Then you were a startup founder again. What’s your decision-making framework?
That’s a great question. People who work with me tell me I’m pretty methodical. I do like to get all the data on the table. I like to hear from multiple people. Obviously, the idea is to delegate and have other people make important decisions as well, but if a decision is coming to me, I like to hear from a variety of people. We actually have an internal value around inclusivity. We try to create a safe space for everybody to come forward, speak their mind, and really argue forcefully for what they want to do. At the end of the day, we have a “disagree but commit” culture. We try to get to consensus, and I think we get there maybe 70 percent of the time, but if we don’t, then I will make a decision.
At that point, we expect everybody, even if you disagreed strongly or put forward other points of view, to really align behind that decision. I think it’s important for everyone to commit and execute an idea as if it were their own. The decision-making process itself is about encouraging people to come forward with data to support their points of view so that we can make the best call.
Put this into practice for me: you had a Harry Potter game that you decided to sunset. How did that decision work?
Yeah, we are sunsetting it. We’re going out in style with a ton of great content. All the stars of the Harry Potter world are making an appearance. We’re trying to really bring that game [Harry Potter: Wizards Unite] to a fun close in January. It is something that we looked at. The game would probably be considered a success by other companies in the space, but it didn’t have the growth opportunity that we felt like it needed if it was going to consume the mental time and space of some of our best people.
We deliberated about that for a while, and ultimately, I talked to the board about it. We looked at all the numbers internally about what we thought the game could do and what changes we thought we could make — but at the end of the day, we felt that our people would be best invested in other projects. We came to the conclusion to sunset it. Then we got to work thinking about a way that would be best for all the players that were enjoying the game and had committed time and energy into it. We tried to sunset it in a fresh, fun way.
What was the best opposite argument you heard, to keep it going?
That the Harry Potter players are passionate, committed fans of the game who don’t want it to end and will keep playing; that the game could be a success within certain parameters. We could have continued to grow it from where it was if we invested in it and built on those core fans. It was tempting.
Actually, we went down that road for a while. It was a unique project in that it was a joint effort with Warner Bros.’ gaming studio, so we were building the game together with them. That, ultimately, was another factor in deciding to not go down that road. You’ve been around the tech industry for a long time; you know that every time two companies are trying to do something in a cooperative way, that process is just harder. I mean, I’m sure you see that even inside your own organization: when trying to make decisions — and when more and more people join the table — it becomes harder and harder to move quickly. We just didn’t think we were going to be able to get it done.
When you made that decision, did you send the news in an email? Was it a Slack message? How did you deliver the words, “We’re going to sunset the game and start making a plan”?
I’m trying to remember. It was something that we talked about and deliberated in various meetings of various forms over a long period of time. When we ultimately decided to sunset the game, I remember having a conversation with the team to express my gratitude for everything that they had done and to explain the context for the decision. I think that was after COVID had started, so the news was delivered over a Zoom call, which was as in-person as we could manage at that time.
Obviously, people are our greatest resource. They pour their heart and soul into these projects, so my biggest concern and focus was making sure that those people really felt appreciated for the great work that they had done.
Games are like movies or books or television shows. Not all of them are going to have the success that you would want them to have. We have to put our best creative effort out there, and some are not going to make it. That’s true of some of the things that we’ll launch in the future as well; it’s just part of the nature of the business.
Let’s talk about the next big set of decisions the entire industry is facing: do you think you’re in the metaverse business?
I’m not responsible for introducing the word “metaverse” into this whole public conversation, okay? I just want to say that I didn’t do it.
One person is very responsible for this.
Don’t blame me for it! Once that conversation got started, I did feel the need to speak up and talk about what the metaverse is or can be. I felt like a lot of people were maybe a little bit overly influenced by what we all endured during COVID, which is to say: spending a lot of time at home, a lot of time on Zoom, kids going to school remotely, watching kids spend a ton of time on Roblox, binging on Netflix, getting delivery food, the whole thing. A lot of these products saw a big lift from COVID. I mean, let’s be honest, people were spending a ton more discretionary time and energy in these worlds. I think that fed a level of frenzy around thinking that the “metaverse” is the future and that we’re all going to live in these 3D worlds. I just don’t think that’s how it’s going to play out.
Over the past 50 years, the trajectory of technology has been towards mobility and ubiquity — what Xerox PARC pioneer Mark Weiser would call a ubiquitous computing vision. We took a detour during COVID; that is my personal opinion. I mean, I’m a huge sci-fi fan too, so I read Neal Stephenson and William Gibson and the whole array of writers back when we founded Keyhole. I’m deep into the latest Neal Stephenson novel right now, actually. Those of us who’ve read the books to the end know how they end. That’s a horrible vision for the future: the world has just completely gone the wrong way and people have to escape to these virtual realities.
I don’t think it’s how things are going to play out. I don’t want it to be how things will play out. I’m a techno-optimist in the sense that I think AR — a real-world version of that metaverse, if you will, that’s about getting people outside and active and learning about their city, state, town — can help bring us back together. It can help us get reconnected with our communities and the places that we live. Those are all the things we need to do so that we can fix some of the problems we’re facing here in the US, and probably in other countries around the world, to build the future that we will feel good about passing on to our kids and to the next generation.
I always think it’s funny that Neal Stephenson and Ready Player One come up a lot. Spoiler alert: the resolution of that book and that movie is that they turn the system off. That’s literally the last thing they do — hit the button to turn it off. I’m always puzzled when those books come up as a positive example of what people want to build.
Niantic builds a lot of augmented reality products. When you point your phone at the real world, the digital world is layered over it. AR right now is very much a phone product. The other vision of this is a totally virtual reality, and I think the best expression of that is Mark Zuckerberg, the person who’s introduced the word metaverse the most so far. He changed the name of the [Facebook] company to Meta. He’s got the Quest 2, which is maybe the preeminent VR headset. It’s a great product, about being in a totally virtual world that you can extend to other kinds of games like Fortnite or Minecraft or Roblox or whatever.
Do you see a split there? Or do you see AR and VR converging in some way?
AR and VR have a lot of shared features. Our Lightship platform is about infrastructure that can allow millions of people to share what you’re interacting with in the game world, which is overlaid on the physical world. You can message back and forth to one another. I can get updates whenever you change something in the world. Similar technologies are needed for the VR version of the metaverse. Likewise, there’s a lot of overlap in the tech for AR glasses and VR glasses: both are head-worn devices, so you have to miniaturize a bunch of stuff. You have to do 3D. You have to have sensors that track where your head’s pointed, and various things like that.
There’s a lot of overlap, but the end state for them is very different. VR is a sedentary process where you’re going to slip into this virtual world. You are going to be cut off physically from people who are in your vicinity. You’re at home on the couch with your significant other or other family members. Are you going to really want to be in VR? Are you going to have the four of you lined up there all in your headsets? Maybe you have an avatar representation of other people, but it’s fundamentally a poor substitute for the real human-to-human interaction.
AR is about getting out of your way as much as possible with the technology. If it’s a wearable device like a watch, something in your ear, or eyeglasses, AR is about giving you information. Maybe it will help you have fun playing a game, going on a secret treasure hunt out in the world, and finding pokémon. Maybe it just tells you how to get there: painting arrows on the ground, showing you where the subway is. Maybe it shows you the menu of the restaurant you’re standing in front of before you walk through the door. Maybe you tap on a virtual UI to make a reservation or check into the airport. It’s about being helpful, but allowing you to primarily exist in the world as a full-blown, involved human being: using all your senses, enjoying being out in that environment, and probably hanging out with other people in the real world. But AR is just trying to make that experience better, not cutting you off and replacing it. That’s the big distinction.
Let me push on that a little bit. We had Chris Milk on Decoder earlier this year; he runs Within, a company that makes Supernatural, a VR exercise app. That VR app was just bought by Meta for, reportedly, $500 million. You put it on your head, and you can’t see the world around you. That killer workout app is completely about your body. It’s not isolating you from your body or taking you out of your body in that way. A lot of Meta’s vision for VR is avatar-based. The avatars don’t have legs, which is very confusing, but you’re going to be in space with other avatars. That’s going to give you a sense of what Zuckerberg calls embodiment. You’re going to feel physically present with other people.
Your form is not present with other people, but you feel that way because of the technology. Isn’t that related to what you’re talking about with AR? It’s another world you can exist in where you are spending time with friends and picking up a set of three-dimensional cues. In the case of Supernatural, you are hyper-aware of your body. There’s a tension there between what AR promises — which is to put you in reality, but better — and where VR seems to be going, which is a little less coherent, but still about being present.
I will definitely step up to the mic and take the invitation to argue the other side of that one. I think the product you’re talking about, Supernatural, is a really cool product. I have a lot of respect for the creators. However, when I say isolation, I’m not talking about you and your body. I’m talking about isolating you from other people and from the physical world.
Look, Peloton’s useful. Immersing yourself in a home theater experience is fun. As cool as that is, and as much as there’s a place for those experiences, it does cut you off from your normal way of interacting with the world and with other human beings. There’s just no question about it.
I would argue strongly that a visual representation of an avatar — even if it’s a really photorealistic avatar — doesn’t do that. We’ve seen some examples, like Codec Avatars and others, where the experience gets super realistic. Your body senses the world and other people in a very broadband way: eyes and ears. When human beings are together over time, your heartbeats and the frequency of your brainwaves will actually synchronize. In terms of how we relate to other people, there’s a ton of stuff going on in the body that’s evolutionary in its origins, that allows us to learn how to trust people, to bond with people, to build relationships with people. Obviously, those adaptations were integral to our survival in the past. We are evolved and built for interacting with people in the real world. It is a deep, deep thing that is throughout our physiology and how our minds and bodies work.
To think that we can just put two OLED displays in front of our eyes and replicate everything that comes from those real interactions just isn’t true. That experience is like sipping through a soda straw in terms of the bandwidth with which you’re going to perceive the environment and other people.
There’s a place for it. You and I are talking remotely at the moment. It’s a miracle that people around the world can connect, but it shouldn’t replace our desire to be together with people physically. I think it’s just core to our human nature: it’s fundamental to our being happy. It actually triggers endorphins when you have face-to-face interactions with other human beings. I really think it’s about finding peace of mind and being happy and being true to what we’re evolved to need in our lives.
You’re obviously making a much bigger bet on AR for all of those reasons. AR seems like it has a much longer technology pathway than VR. You can just buy a headset with a couple OLED displays and some speakers in it. The Quest 2 is more or less a midrange Android phone that you strap to your face. From a component perspective, the thing is complete.
AR is really far away from that. The first problem is getting glasses on your face, and no one has come anywhere close to that in a mass-market way. Do you see that problem getting solved soon?
VR’s definitely an easier technology to solve for obvious reasons. Putting video screens in front of your face is an easier technological feat to pull off than see-through glasses, where you see the real world and the technology adds information on top of it.
Having said that, I had our latest prototypes of the glasses we’re developing on my face this morning. We had a whole bunch of people in the office who were working on that project testing out various things with our version of these glasses yesterday. That’s a reference design that we’re building together with Qualcomm. Tons of progress is being made, even between what the public version of Magic Leap or a HoloLens 2 was a couple years ago to what we’re developing now. We’ve made huge strides in miniaturization, the optics, and the brightness, and we’re getting closer to a not-horribly-geeky-looking device.
Extrapolating forward by a year or two: we are going to get there. There’s no question that it’s harder, but there are massive investments being made across the industry. Some large tech companies are doing it in a very closed way inside their own ecosystems, but over in China, the ODMs of the world are also investing in this technology and trying to get good at making these products. Lots of people working on optics are happy to provide those components to multiple different companies.
It’s not open-source, but there’s an open-ish ecosystem of supply chain people working on this technology and investing tons of money. We are going to get there. I think we’re a handful of years away. We’ll see devices that are, maybe, good for gaming before we see an all-day kind of device. AR Ray-Bans that just look exactly like ordinary glasses are even further afield, but we think it’ll be fun to play in the early days when these products aren’t quite the all-day device, but are going to be good for games.
Looking at the evolution of technology: we had Pong before we had the IBM PC. We had the Game Boy before we had the iPhone. As the tech gets invented, games are often the way that we’re first introduced to it. Then the tech matures.
Right. It happens over time. VR was a bunch of games first, but now you can be in a conference room in the VR headset.
Is that the end state of this? I don’t want that.
Let’s talk about that tech. The relationship between hardware capabilities and software applications is very deep. To put it in a very simple framework: you would not try to develop the Uber app if the LCD display on the phone didn’t work and you couldn’t hold it in your hand. If you had a CRT iPhone, Uber just wouldn’t exist. Right now, some of the display technology for AR is shipping. It’s mostly waveguides — which bend light into your eyes in different ways — but the technology is not there yet. Are you investing in that piece of the puzzle, or are you assuming that progress will happen so you can focus on the software?
The purpose of our reference design work is to pull together the best of what’s out there. In some cases, we’ve made a couple of small investments in optics companies. One of those is public: a company called DigiLens. DigiLens has a different technique for creating waveguides, which is to apply a substrate on top of glass or plastic, then shape it using a process that’s a little bit like photolithography. That’s an easier way to make a waveguide than to actually etch the glass itself. It’s a lot cheaper, which is very interesting to us so that we can get to the right price point. We have an investment in another company that is innovating in a different type of waveguide that’s brighter and has a wider field of view.
We’re nudging that technology forward. I think the biggest way that we’re nudging that forward is by pulling this stuff together into a reference design: the compute, the optics, and the software, and trying to push the envelope for a device that can be used outdoors.
A lot of what other people have been focused on has been indoor devices, like a Magic Leap or a HoloLens. A Silicon Valley tech company is maybe going to do an AR product this year, and maybe it is going to be more of an indoor-oriented device than an outdoor-oriented one. Based on the interest that I explained earlier — about connecting people with the world and moving and being active with other people in the world — we’re really trying to push the envelope to pull the tech together for outdoor devices.
We see progress in all aspects of the components: the computing that’s needed to drive this, like wirelessly connected compute devices that are separate from the glasses. You pull a lot of the mass and heat off of the unit that has to go on your head and put it into a phone or puck. That’s going to be an interesting step forward.
These are big challenges, right? You need a battery that lasts all day, but battery technology is stagnant. You need a processing capability to take in all the camera input and then display something. Then you need the display technology.
The processing side of AR is maybe the farthest along — collecting input and producing output. Mobile devices have become reasonably good at processing, but the display and the battery seem like huge roadblocks to the glasses, to me. Every one of these products has bet on one type of display technology. The first time you showed me a smartphone, I could have told you, “Man, you should invest in some LCD companies because these screens are going to be really important to all of us one day.” I don’t know that right now I would make that same bet on the types of display technologies I’m seeing in AR products.
I think there’s good stuff out there. Again, there’s technology in our prototype, reference-design device that’s pretty good. It’s very bright. It’s legible outside and has a good field of view. It’s really a question of just evolving that. I don’t think that radical new invention is required to get to the optics that we need for good AR. It’s just a few more turns of the crank in terms of the process.
Looking back through the history of technology, like Alan Kay’s vision of the Dynabook in the early 1970s — it took a while for that to become reality in the form of the laptops that we all carry around now. It’s a similar story with personal computing. It’s a similar story with the early bad dial-up internet compared to the broadband world that we live in now.
I listened to your great interview around the recently produced Handspring documentary. That’s another good example of lots of iteration around smartphones before we really got to the final formula that took off with consumers.
It’s common in the history of technology for iteration: versions of things that don’t quite get there for consumers before the final thing clicks. That’s where we are with AR. Looking back at the last 50 years of tech, I see a similar pattern, and I think that we are going to get to the finish line. I’m very excited about the opportunities for computing to be less intrusive. Carrying a phone around in your pocket takes one of your hands away. We’re juggling the coffee, the bags, the kids, and the stroller. We’ve all learned how to do that now, but it’s not really ideal. I don’t think that that represents the endpoint for the evolution of personal technology, but rather a way station along the road. It’s pretty good, but I think we can do better. That’s what this is all about.
By the way, that documentary — Springboard — is my colleague Dieter Bohn’s project. I assure you, he is very tired of thinking about ’90s cell phones at this point, but it’s great. You should go watch it. It’s on YouTube now.
That’s the hardware side, but let’s talk about the software side of things. You mentioned that the other big Silicon Valley tech company might be doing something. Apple has spent years doing AR demos on stages; it’s always a person holding an iPad and showing you a train going around a table. Now they literally ship AR sensors in their phones. The only successful mass-market AR application I can think of is Pokémon Go. That game has endured. It has revenue associated with it and it is still compelling to players. Everything else is a tech demo: it’s looking at a couch in your living room, and that’s the end of that experience. What is the next set of compelling AR applications with the hardware we have today? Is it still just games?
I think that you have to ask, “What is AR?” Then you have to break that acronym apart: augmented reality. Pokémon Go is an augmented reality game because pokémon are made to exist out in the world. That game is about mapping the world: understanding habitats where pokémon could conceivably exist and then creating infrastructure so that alternate reality is shared amongst everybody who is playing the game. If I see that Snorlax in the park, you’ll see the Snorlax in the park, and so on.
Pokémon Go is about adding creatures and information to the physical world. It’s not just the screen technology that sometimes I think we associate with the phrase AR, that hologram on the screen. Apple and other tech companies have done a good job of that very specific aspect of AR, but the concept is broader than that. It’s about this notion of spatial computing. It’s about knowing where you are in the world; turning the physical world into something that has information or interfaces attached to it. That is where the real power of AR comes into play.
AR, by the way, doesn’t necessarily have to involve the screen. You can hear about the AR world through your earphones or get information about it on your watch. Ultimately, AR is about the place that you are, the object that you’re looking at, the thing that you want to interact with. That is very much augmented reality in the sense that I’m talking about it. Humans are visual creatures, so I do think that visual aspects of AR are important. A visual form of AR will probably be the one that we ultimately prefer because people just like to look at stuff generally.
I would contrast that way of thinking about what AR is from the on-stage demos that show off the latest in-phone camera and AR visual computing. It’s just a fraction. You asked what the next big successful apps that we’re going to see will be. Our big bet is on products that make the world come alive, make the world become useful, make the world become connected — connecting the atoms and the bits, connecting the physical world with the digital information that can help you know more about it or help you interact with it.
The big linchpin for that is not the visual part of AR in the sense of just overlaying a hologram into the scene, but knowing where you are in the world and where your gaze is directed, precisely. We can get an approximate location from GPS, and a really bad orientation from the compass on your phone. GPS can be off by 10 meters, possibly even more in an urban situation, and the compass gets really confused.
If you’ve ever had that experience of coming out of a subway station and trying to orient yourself, you walk a block before you realize you’re going in the wrong direction. There’s an AR map that lets a camera know exactly what it’s looking at in the world so that you can make the pokémon hide behind the park bench, or you can provide the information about the public artwork or the menu or the airport check-in, like I described earlier.
We are building that map in a collaborative way with people who play our games and contribute map data. That collaboration is a key enabler. It’s also the answer when you ask what Niantic is doing to make this AR future happen, and how our contribution relates to what Apple, Microsoft, or Google are doing.
We’re really focused on that UGC [user-generated content], collaboratively built map because it’s critical to unlocking millions of really amazing AR apps that are, in some cases, an evolution of stuff that we do on our phone today. It’s more directly connected to the thing that we want to learn about or interact with. Completely new apps that we haven’t even imagined yet will be built from that starting point.
Niantic has millions of people playing Ingress and Pokémon Go who are creating map data for you. Is that collaboratively generated map the heart of the platform that developers will be using?
It’s a key part of it. We talk about the platform, Lightship, as being about mapping. There’s a real-time mapping aspect to that in terms of understanding the topology of your environment, like where the park bench, the tree, the table, and the chairs are, so that you can put a hologram into that environment and have it scurry around on the ground, but not walk through objects or people. That’s something the platform provides.
You see a lot of early AR and bad AR where the holograms are oblivious to all the real stuff that exists in the world, so that’s a problem that we solve. Another piece of what the platform does is understanding. That’s computer vision that knows what those pixels are, so it knows that that is water or pavement or grass or sky or any number of other categories. Again, you can situate your AR objects into that environment in a way that makes sense.
If you want to place something that really belongs in the water or on grass or in the sky, or should be walking along the sidewalk, then that semantic information is joined with the map information so that you can start to create these really advanced and intelligent forms of AR. The third thing that we’re helping developers solve is sharing that multiplayer environment, because a lot of AR apps aren’t shared: you look through the device, you see what’s there, but nobody else sees it unless you share your phone with them.
What we provide is the client server data infrastructure and the computer vision infrastructure so that we can put that hologram into the room, and multiple people can walk up to it and all see it and interact with it. We can all see it change. If I push or pull on a virtual object, like a Zelda-style puzzle, then we would all be able to manipulate it together and see the same thing. That’s what’s in Lightship today. The mapping part is something we’re working on that we’ll launch to our Lightship developers next year.
We are moving up my stack of questions to the technology stack of the product itself. Will the battery last all day? What will the display be like? Are the processors fast enough to detect the water and spit something out semantically while not draining the battery too fast? That’s the hardware layer.
Then the software platform needs to have a great map that is generated either by users and games or an army of robot cars, whatever it is that other companies are going to do. Then you get to the last big problem: you’ve built the tools to augment reality, but who is going to augment reality? Pokémon Go does not have an inbuilt misinformation problem, I’m assuming — there’s not a lot of deep divisiveness about Snorlax — but you have rules on that platform, right? You don’t let players play in cemeteries. You set all kinds of boundaries on what players in that game are able to do and not able to do because games have rules. When you’re looking at the United States Capitol building, how do you make sure the information displayed by an AR app is appropriate? Is that something you’re thinking about?
Yes. I think about related challenges as well. We’re building the tools in the platform today to let people augment the world. Users are beginning to do that. Other companies are building platforms to also allow people to augment the world. These platforms will become the basis for the next generation of computing devices that we all use all day long, every day.
There’s a fork in the road here: whether we propagate the patterns that exist today with our devices and the companies that operate the services that we rely on — that log us in, that serve us ads, and track many things that we do in ways that, frankly, I find a little unnerving. As we move from phones to wearable tech, the issue gets even more serious because you’ve got devices on your body that can measure your heartbeat, that can measure whether your pupils are dilated or not, that know actually what you’re looking at and can tie that information together.
Think about that: you’re out in the world, you see something, and you react to it. You see a person, you react to it. You see an ad, a product, you physiologically respond. You don’t click on anything, you don’t type anything in, but yet you’ve given something away. I think we have to think really seriously about what the rules are going to be around ownership of data and privacy, and what we want that world to be like.
We are really making a pitch to developers to say, “We can be part of building a world that we want to live in, or we let stuff happen based on what’s happened over the past 10 to 15 years.” I think people are interested in putting their time and energy towards something that’s going to address some of the flaws in the way that things have evolved — not necessarily because companies are trying to do bad things, but it’s just gotten to that point where there are some problems. I think we have to acknowledge that and think about a different way to do things.
You asked about something that’s a little bit different than that. What can exist in the world, and who decides what can exist in the world? I actually have an opinion about that. Maybe it’s controversial, maybe it’s not. I do think that there’s a great benefit for people to be able to share things. I’ve said that if I’m the only person that can see something, then you might think I’m a lunatic. The way society works today is that we have consensus reality. We all agree that this is what it is.
Sharing is good, but I also believe that people should have personal choice. We don’t dictate what music you can listen to as you’re walking down the street. We don’t dictate what podcast you can listen to when you’re standing in front of the Capitol building.
A new law just passed: you can only listen to Decoder.
I put AR in that domain. I think people should be able to theme the world however they choose to theme the world. If I want to see a world that looks a little bit more like Nintendo everywhere, and it’s bright and happy, and has Marios popping up from behind park benches, I think that should be my choice.
I don’t think that somebody should assert some right to control what’s happening on my body and in my eyes or ears or whatever as I walk around. That would be a weird kind of censorship. Just think about it in terms of tapping on a virtual object and getting data about it. Today, I can look it up on Google. If I look at something in the real world and I want to learn about it, I should have access to all the choices that I have today.
Let me push. I think that Marios jumping out of cars is one example, but it’s an inherently safe example. That’s just someone next to you hallucinating via AR. Another example: you are standing next to me and we’re both looking at the Capitol building. What I see is the home of democracy, and maybe what you see is the building where the election was stolen from Donald Trump. That has drastic implications for society. Maybe that’s fine, but it does mean that there’s already a break between shared reality in this country and this world. Now people can fully inhabit different realities. It seems like we should think about that early, when we’re still worried about whether the batteries will last long enough, as opposed to later when we might have to impose a regime that looks like informational censorship or experiential censorship.
Again, you could listen to a different podcast or different radio station while you’re at the Capitol. The problem you’re describing, which is this fragmentation of society, is real. The info bubble problem is real. We are suffering through some really bad consequences of this problem that we will have to mend.
But I don’t think the answer to that is controlling what AR hologram that you can see. I think it’s about drawing people out into the world where you can meet your neighbors. It’s about drawing users out into the world where you’re spending time with people from different walks of life. Come to a Pokémon Go Fest: you’ll see stockbrokers and bike messengers and grandmas and suburban soccer moms hanging out together, doing a Pokémon Go raid — all these people becoming friends, crossing barriers, breaking out of those info bubbles and seeing people just as people, not as categories or as labels that get attached in these hateful online forums.
What we need more of is more face-to-face human time. We need to spend time with other people to build those personal relationships. When you get people together in real life, generally speaking, you see a much more reasonable posture. It’s one of the reasons that local politics like school boards work. People come to it with different points of view, but you don’t have that sort of “us and them” mentality where each side is painting the other side as evil. You have people debating and arguing about things, but ultimately coming together in a civil way.
Right now, we are seeing school board politics in America get torn to shreds by misinformation on social media.
Exactly. These online virtual information bubbles get people all worked up and fired up about a certain point of view. Then they bring that into the physical context, but I think the real-world side of that is the way to diffuse it. More of that will actually reduce that level of vitriol where people are seeing each other as enemies, rather than people who maybe just have a different point of view.
We have to deal with it, 100 percent. I get your point, but I do think being in person together is the way to spend less time immersed in a purely electronically mediated interface. I don’t know if you’ve seen this in your work, but compare Slack and online chat to a physical meeting between people: even in a work context, things are much more likely to go off the rails when they’re electronically mediated.
It is incredibly easy to be a jerk on Slack. You should just make a phone call instead.
There you go.
That’s as much a reminder to me as anyone, honestly. Let me keep pushing on this a little bit, but not necessarily about misinformation, just about fragmentation. Do you see interoperability between the products you’re building and what Apple and Google and Meta are building? Do you see a way for those things to be interoperable or a way for them to connect to consensus reality of some kind? Or do you think we’re all just going to pick and choose in the marketplace of realities?
There’s a couple of points that you’re making there. One is about interoperability. The early internet was all about distributed systems interoperating around open standards. It was a marvelous, wonderful thing, right? That’s what took us from the walled gardens of CompuServe and AOL out into a world where you could have many, many people creating and publishing — organizations like yours get created and whole businesses get built because of that.
Over time, we have become much more centralized in how the internet operates, so some of the original ideals of the internet have fallen by the wayside as we’ve gone back to a more centralized model, with all of the services that you get coming from a single company wrapped in a single set of infrastructure. That’s not really part of the open internet. The pendulum is due to swing back a bit in the other direction because of some of the problems that I alluded to earlier — for example, somebody knowing all of the information about you and everything that you do because it’s all flowing through one centralized system, versus users having more control over that.
The world is ready to go back to a more decentralized internet built around interoperability. When we talk about the real-world metaverse — which is the version of the metaverse I’d like to see created — I do see it as a system where products and services from multiple companies would interoperate. That theme comes through in the Web3 movement. That desire to pull back from centralized control by a few companies to a more open system that puts more control in the hands of the consumer does rely more on interoperability at its core.
The second point that you raised was whether there will be a marketplace of choices. I think that, yes, we should be able to share those experiences. I should be able to poke you on the shoulder and say, “Hey, join me in the Pokémon world for a few minutes in the park. Let’s have that experience together.” Then you might drift back to the William Gibson world that’s adding elements to your reality. I think we should have the freedom to do that. I don’t think that we’re going to break humanity by allowing people to have some choice there.
Do you think your applications are going to run on everyone else’s glasses? There are deep political and business considerations to running applications on phones today. Do you think those issues are going to get better or worse? Do you think more competition is coming for glasses?
I think the devices that we use to access stuff are really there because we want the content. We want the service, we want the experience, we want the podcast, we want the app, and the best creators are going to create for the biggest possible audience. That is just the laws of economics there. You don’t want to pour all of your effort and precious time — and if it’s a company, its precious capital — into a project that has a limited audience.
In the world of phones, generally the most successful app experiences are compatible with both Android and iOS. I would argue if there were a third or fourth popular operating system out there that had traction, then content creators would also be trying to reach those other platforms. We want the games and other apps that we build to run everywhere. We think consumers are going to want those popular applications on the devices that they use, so there may be some friction introduced by the policies of the people making devices that makes development harder. But I think ultimately what consumers want is great apps and experiences, and that means, I think, we’ll ultimately be able to offer our experiences everywhere.
Have you started to have some of those conversations, or is that too far afield?
Sure. We have business development relationships with most of the major players in the space. We spend a lot of money with Google on infrastructure. We have very popular applications that make a lot of money in the app stores for Apple and for Google, so a lot of money that is generated by our apps flows to those companies today. We’ve done some public stuff with Microsoft around HoloLens. We’re friendly with other people who are playing in this space. I think a world that’s interoperable is the right one from a consumer point of view. It’s a business, and people have their own interests, but I do think there’s a megatrend that will pull us a bit more in that direction.
You mentioned Web3. I don’t know if you’d call it a killer application for cartography, but one of the most popular applications that I hear about for crypto products is NFTs — how NFTs will let you bring virtual goods from one digital experience to another. Are you bought in on this? Do you own any NFTs? Are you in the game?
I’ve only dabbled. I am not deep into NFT ownership as an investment. I think it’s really interesting. Obviously, one of the earliest precursors to what we think of as NFTs today was CryptoKitties, which is kind of a gaming collectible. I think the idea of taking your objects with you as you move between experiences is key to this interoperable future, real-world metaverse that we’ve talked about. I’m with it in spirit. In reality today, there are some challenges. Crypto is not very ecologically friendly when it’s a proof-of-work-based system, as Ethereum is today, though it’s supposedly evolving to proof-of-stake in the future.
It will forever be evolving.
Maybe you get around that. Consumers handling [cryptocurrency] wallets today is very non-mainstream in its current incarnation. I would predict that we will get to a version of that which is something that people will use to have portability of virtual digital objects, and that those will be economically traded. I believe in that future. I think we’re really early in it right now.
Aside from all the issues with wallets, climate impact, energy use, and gas fees with regards to NFTs, what is the basic game design element here? I buy a sword in your game to move it to someone else’s game. They need to have coded the sword, along with its physics, into the game. Is that a surmountable challenge? Or are we all dreaming and reality is going to come crashing down on us?
It’s software. It’s all theoretically surmountable. I do think people tend to gloss over all the things that you just described. An NFT today is a few bytes, maybe a pixelated image that’s encoded on the blockchain. Otherwise, it’s a pointer to a file somewhere. That doesn’t necessarily mean that the file, if it’s a JPEG image, gives you a 3D sword with attributes and capabilities that you can take from one game to the next. The way that’s programmed today is unique in every single game. Portability there requires a lot of work: recreating the assets, the behaviors, and the animation systems for every single environment they’re going to be used in, and people agreeing on the rules.
If you bring your super-rare sword to my game, what does it do in my game? Does it allow you to win the game with a single sword stroke? That would make my game not fun. Who decides what it actually does in different environments? Nobody has really solved that. Some things are easier than others. If it’s about bringing your avatar with you, or a pair of shoes for your avatar that you bought, or a virtual pet, or something like that — those things are easier, from a conceptual point of view, to bring around with you than things that are going to have a deep impact on gameplay. We would probably start with things like that.
You’re going to start with clothes and dances.
I just think that’s easier, yeah.
Here’s my last big question, and then we’ve got to wrap this up. You have given me more time than you promised, which I thank you for. I have a killer app for AR in my head: I’m horrible at faces and names. If you sold me a pair of glasses that would just tell me the names of people I’ve met before, I would pay you almost any amount of money, and then I would immediately become the president of the United States. I would become the most powerful and charming human being to ever exist. I cannot envision how you would build that product, which does seem like the killer app that would bring us all together. How do you build that product without building a worldwide facial recognition database?
I think the product you described is a worldwide facial recognition database, so I don’t think that you can build that app without building that database first. I think the real question is: is that product going to get built? Is there some way to safeguard it so that it’s helpful and not abusive?
I don’t know. I’ve heard you describe that product before, and frankly, in the earliest days of when I first saw Google Glass, it felt like something that you would want. I’m also bad with people’s names, as people around me will attest to. It seems like it would be incredibly useful for a lot of people and a lot of social situations, but a very controversial company out there right now has already built a very widely applicable facial recognition database. I’m not sure what’s going to become of that.
I do think there’s a high creepiness factor to having that sort of data in the ether. There’s potentially a high abuse factor. If you read cyberpunk — William Gibson, etc. — those writers imagine a future where such technologies are ubiquitous. People have face masks that they wear to confuse those systems. I don’t know if we’re headed for that or not. I’d like to believe there’s some way to engineer that in some distributed, safeguarded, opt-in way that would protect the privacy of people involved, but I don’t know. I don’t know.
What’s next for Niantic? What’s the next big project we should be looking for?
The next big project is Pikmin Bloom. Check it out if you haven’t. It will make your daily walk a lot more fun. We have a ton of cool games coming, so if you keep your eyes on Niantic, you will see some really awesome stuff next year. From a tech point of view, we are incredibly excited about making this map work. The map is going to open up a lot of apps, not only for us, but for people in the developer community. For us, 2022 is going to be about what devs start doing with those maps, in terms of lighting the world up with real AR that’s anchored in the world, that makes real life more interesting, useful, and entertaining.
That’s great. John, thank you so much for coming on Decoder. It’s been an absolute pleasure.
I enjoyed it, Nilay. Thank you.
Decoder with Nilay Patel /
A podcast from The Verge about big ideas and other problems.