Microsoft's Kinect excels at registering depth in 3D space, but filmmakers still want high-definition video for creating detailed, visceral artwork. An open source initiative known as RGB+D has created a workflow for laying HD video on top of depth maps from the Kinect's 3D sensor to create shimmering, dream-like videos with its RGBDToolkit.

The RGBDToolkit workflow involves affixing a DSLR on top of a Kinect and calibrating the two using a specially formatted checkerboard pattern in conjunction with the software. Once calibrated, the camera and Kinect can be freely moved around a scene at the filmmaker's discretion. After the scene has been captured, the RGBDToolkit lets the editor correlate the 3D data from the Kinect with the raw video from the camera, mapping each depth point onto its corresponding pixel in the HD footage.
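Conceptually, the calibration step recovers the DSLR's intrinsics and the rigid transform between the two sensors, which is then used to project each Kinect depth point onto the HD frame. The sketch below illustrates that projection with a standard pinhole camera model; the matrix values, offsets, and function names are hypothetical stand-ins, not RGBDToolkit's actual API or calibration output.

```python
import numpy as np

# Hypothetical calibration results -- in the real workflow these come from
# the checkerboard calibration step, not hand-picked values like these.
K_rgb = np.array([[1400.0,    0.0, 960.0],     # DSLR intrinsics (pinhole model)
                  [   0.0, 1400.0, 540.0],
                  [   0.0,    0.0,   1.0]])
R = np.eye(3)                                  # rotation: Kinect frame -> DSLR frame
t = np.array([0.025, 0.0, 0.0])                # ~2.5 cm offset between the two lenses

def project_depth_point(p_depth):
    """Map a 3D point from the Kinect's coordinate frame to a DSLR pixel."""
    p_rgb = R @ p_depth + t                    # rigid transform into the DSLR's frame
    uvw = K_rgb @ p_rgb                        # pinhole projection
    return uvw[:2] / uvw[2]                    # perspective divide -> pixel coordinates

# A point 2 m straight ahead of the Kinect lands near the image center.
pixel = project_depth_point(np.array([0.0, 0.0, 2.0]))
```

Running this per depth sample is what lets the editor "paint" the HD footage onto the 3D point cloud and then re-render the scene from novel virtual camera angles.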

The introduction of DSLR cameras with HD video capabilities in 2008, with the Nikon D90 and the Canon 5D Mark II, marked a turning point in filmmaking technology that drastically reduced the cost of entry, and the Kinect has done something similar for 3D spatial data. This union of gaming peripheral and DSLR might seem unorthodox, but the results are striking.

Other technologies, like the Lytro camera and head-tracking displays, hinge on the same notion of bringing perspective immersion to the viewer, and with holograms making their way into popular culture, this trend of breaking the two-dimensional plane seems bound to continue. The RGB+D Tumblr page has more videos of the process and an explanation of how these efforts fit into a collective progression in the field of digital artwork.