Google taught a tablet to see. What happens next?
I’m standing in a strange, spartan space. There’s a floor, but no walls. There’s a white tree in the center, casting a shadow on the ground. It’s like an Art Deco holodeck, alternately soothing and surreal. Three snow-white heads float around me. They’re somewhere between cute and terrifying, and they follow me as I walk around the tree.
One of the heads approaches me, and its black mouth moves as it speaks. I reach out to touch it.
If this were virtual reality, I would touch nothing. If this were augmented reality, I would know exactly who I was about to touch. In whatever version of reality this is, my hand (which I can’t actually see) taps the shoulder of Johnny Lee, the man in charge of Google’s Project Tango.
"Yep," the mouth moves, "That’s me." And it is — Lee is standing in front of me in an actual, real room, though I can’t see him through my VR goggles. The floating head turns away, and my hand falls from his shoulder.
Project Tango is almost ready for primetime.
At last year’s I/O, we saw the first Tango device available to developers, an Nvidia-made tablet laden with cameras and sensors that could map a 3D space in real time. That ability turns out to be hugely important, because it gives the tablet the power to know, immediately, where it is in space. It’s an ability humans take for granted, but for a computer, deducing that information is a gnarly problem.
Creating a tablet that knows where it is unlocks capabilities that Lee and his team are only beginning to explore. For the demo I tried, I put on a jerry-rigged VR headset with a tablet strapped inside. As I walked around the real room, my position in the virtual room mirrored what my real body was doing. And when three other people in the room strapped on their headsets, we were all in the same virtual space, positioned appropriately.
It was enough to make my head spin once I realized that the room wasn’t pre-mapped and that no external server was powering any of this. It was just year-old Android tablets talking to each other over Wi-Fi.
Now that Tango has graduated from Google’s skunkworks ATAP division, it’s an official Google product. But that doesn’t mean Google is ready to release a consumer device just yet. Instead, it’s opening up access to the Tango tablet to more developers in the US. The price has been cut in half, to $512, and its purchase no longer requires preapproval from Google.
Lee says that so far, Google has sold 3,000 Tangos, but presumably that number will go up. Google is starting a developer contest (prizes start at $1,024, then $2,048, $4,096, and finally $8,192, natch) to encourage app development. It has also partnered with Qualcomm to create a reference device that can bring the Tango experience to a phone.
Even with these advances, Project Tango is clearly still quite a ways from becoming part of a viable consumer product. Sundar Pichai takes a long view of research projects like Tango, though. "We never know whether [Tango and similar projects] even make viable business applications, but we want to push the technology at times because you don't know what's possible on the other side." Tango seems like just such a project: we don't know yet what possibilities it will create once it hits scale, but Google sees an opportunity there.
It may be a while before we carry spatially aware devices in our pockets, but Lee says his team is in active discussions with OEM partners about developing consumer versions. Still, hurtling pell-mell toward consumer devices is kind of pointless unless there are real, practical things you’d actually want to do with them.
That’s why Lee and his team are showing off what Tango is capable of. Currently, there are a few Tango demos available, created by Google and third-party developers. In a series of demos, Lee and I built a tiny house out of don’t-call-it-Minecraft blocks, hunkered down, and climbed inside; walked through a trippy, floating cloud of colorful bubbles; stomped around tiny soldiers arrayed on the floor of the room in a mock-up real-time strategy game; and ran around with a Nerf gun that had been equipped with a Bluetooth trigger, blasting at robots marching around the room.
But there were also more practical demos: Lee and I hit a button and watched a Camaro appear on a screen sitting in the middle of the room, then sat down on a real chair "inside" the car and customized its interior. We measured the precise size of rooms and furniture. We got directions through a building, arrows on the floor gently pulsing to lead us through concrete hallways.
All of which is to say that Lee and his team aren’t lacking in ideas and fun ways to augment reality, but they’re not introducing ideas we haven’t seen elsewhere. What’s interesting is that Lee’s ultimate goal isn’t a stand-alone product like HoloLens — instead, it’s incorporating Tango into common smartphones. "Nowadays, you wouldn’t consider buying a phone without GPS," Lee says. "We hope to see Tango kind of reach the same level of adoption."
The trick, then, is to come up with some use cases that will drive enough demand to start getting these sensors widely installed. And right now, the main use cases seem to be related to retail. Aisle411, for instance, is working with Tango to make tablets that can navigate you around a store.
But as painful as finding your way through Target may be, solving that problem likely won’t drive real consumer demand. It may be that whatever will drive consumer demand is something we haven’t seen yet. There are plenty of other projects pushing both VR and AR that could create a market big enough to accommodate Tango. Google’s other VR project, Cardboard, has been described as the "tip of the iceberg" for Google’s VR efforts. Microsoft is pushing augmented reality with HoloLens (here’s what Lee thinks of Microsoft’s project: "The future is awesome"). Facebook’s Oculus division is obviously pushing hard, too.
But the difference between all of those and Tango is that Tango is (presumably) cheaper and works in real time — it doesn’t need to map the room you’re in. "We basically only need light in the room to be able to operate," Lee says. "In robotics research … this is called visual SLAM: ‘visual simultaneous localization and mapping.’"
And since Lee brings it up, I’ll bring it home. There’s yet one more division within Google that could benefit from visual SLAM: Boston Dynamics. Want a BigDog robot quietly sleeping at the foot of your bed? It’ll need something like the spatial sense and indoor mapping abilities that Tango is developing. It might not be easy to come up with obvious applications for people, but for robots, it’s a totally different story.