As you might have seen in our second episode of On The Verge, our own Joshua Topolsky recently toured Microsoft's Building 99, where the company conducts all kinds of wild research. Today we're pleased to share a look at Kinect Fusion and LightSpace — two technologies that use sensors and imaging to bridge the physical and virtual worlds.

Kinect Fusion is a system that uses the Kinect's sensors to build an interactive, real-time 3D model of the environment — in the demonstration, virtual balls bounce off real-world objects that have been captured and rendered on the fly. (It's not the first time we've seen the Kinect used for 3D modeling, but it's nice to see an official effort.) Microsoft's Kevin Schofield is quick to point out that the $150 Kinect sensor can accomplish the same tasks as industrial versions of the technology that cost about $50,000.

LightSpace works in the opposite direction: using a combination of depth cameras and projectors, it creates linked interactive displays on ordinary surfaces. In the video, principal researcher Andy Wilson shows how objects projected onto a table can be moved around, resized, and even carried to another display with his hands. It's something you have to see to really understand, so fire up that video.