Google's surprise reveal of Project Tango, a smartphone equipped with a variety of cameras and vision sensors that gives it a whole new perspective on the world around it, left us with quite a few questions about how the device actually works and what it's for. Google says the Tango smartphone can capture a wealth of data never before available to app developers, including depth- and object-tracking and real-time 3D mapping. And it's no bigger or more power-hungry than your typical smartphone. We sat down with Remi El-Ouazzane, CEO of Movidius, the company that developed some of the technology used in Tango, to get a better idea of what this device can do and what it means for applications of the future. We also got a chance to use the device Google will be delivering to developers next month.

Movidius has been working on computer vision technology for the past seven years — it developed the processing chips used in Project Tango, which Google paired with sensors and cameras to give the smartphone the same level of computer vision and tracking that formerly required much larger equipment. In fact, El-Ouazzane says the technology isn't very different at all from what NASA's Mars Exploration Rovers used to map the surface of Mars a decade ago, but instead of being housed in a 400-pound vehicle, it fits in the palm of your hand.

The phone is equipped with a standard 4-megapixel camera paired with a special combination RGB and IR sensor and a lower-resolution image-tracking camera. Those image sensors give the smartphone a perspective on the world similar to our own, complete with spatial awareness and a perception of depth. They feed data to Movidius' custom Myriad 1 low-power computer-vision processor, which crunches the data and feeds it to apps through a set of APIs.

It's like having the Mars Rover's eyes in the palm of your hand

But what can you do with all of that data? That's really up to app developers, and it's the reason Google is giving out 200 of these prototype devices to developers in the coming weeks. The devices we saw were equipped with a few demonstration apps to show off some of the hardware's capabilities. One app displayed a distance heat map on top of what the camera sees, layering blue over faraway objects and red over things that are close up. Another took the data from the image sensors, paired it with the device's standard motion sensors and gyroscopes to map out paths of movement down to 1 percent accuracy, and then plotted that onto an interactive 3D map.
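The heat-map demo boils down to a simple idea: map each depth reading to a color along a red-to-blue gradient. Here's a minimal Python sketch of that kind of mapping — the function name, the near/far clipping range, and the linear interpolation are all illustrative assumptions, not how the Tango demo app actually does it.

```python
def depth_to_heat_color(depth_m, near=0.5, far=4.0):
    """Map a depth reading in meters to an (R, G, B) color.

    Near objects come out red, far objects blue, with a linear blend
    in between. Readings outside [near, far] are clamped. The range
    and the linear blend are illustrative choices, not Tango's.
    """
    # Normalize depth into [0, 1]: 0 = nearest, 1 = farthest
    t = (min(max(depth_m, near), far) - near) / (far - near)
    # Interpolate from red (near) to blue (far)
    return (int(round(255 * (1 - t))), 0, int(round(255 * t)))
```

Run per pixel over a depth frame, this produces the kind of overlay the demo showed: `depth_to_heat_color(0.5)` gives pure red `(255, 0, 0)` for a close object, while `depth_to_heat_color(4.0)` gives pure blue `(0, 0, 255)` for a distant one.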

Perhaps the most impressive demo was an app that was able to capture a 3D model of a scene in real time and draw it on the display as you moved the device around the room. It's pretty amazing to see a three-dimensional model of the table in front of you get drawn in real time in just a few seconds by a smartphone.

Applications range from 3D modeling to assisting the visually impaired

The potential applications for this type of technology are pretty widespread, with the most obvious ones being 3D-mapping apps for room and building planning. But El-Ouazzane notes that the depth-tracking technology could also be used to help the visually impaired "see" in front of them, warning them of obstacles in their paths. It's not hard to imagine this being integrated into a wearable necklace that would replace the age-old walking stick. Other applications could be advanced augmented-reality games that render scenes in far greater detail and integrate more models of real-world objects. Movidius' processor can feed a lot more data to a smartphone's graphics chip than a standard camera can, giving the GPU that much more to work with when building a scene. The visual-effects world could also use this technology to create 3D sets and models in record time and with far less work than is required today.

El-Ouazzane wasn't able to tell us how much the Project Tango device cost to build or when we'll see this technology in a consumer product, but he was confident that it won't be long before everyone has a computer-vision-equipped smartphone in their hands. Google says that developers who have applied for access to the prototype device should have it by the middle of March, and chances are we'll see the products of those developers' efforts in the very near future.

Correction: This post has been updated to clarify that Movidius' role in Project Tango is limited to the Myriad 1 vision processor, and it was not involved with the design or development of the other components used in the device.