Step into the Cube: Virginia Tech's giant virtual reality room
An interdisciplinary research project is keeping VR weird
From the outside, I look like the worst stereotype of a VR user as I walk around the Virginia Tech Cube. Not only am I blindfolded by an Oculus Rift as I feel my way around, but the headset is also sporting a 6-inch-high, slightly wobbly 3D-printed antenna. The Rift isn’t wireless, so I’m tethered to a laptop, which a research assistant is carrying around behind me. My gait lurches from tentative single steps to single-minded strides to sudden stops — sometimes because I’ve clipped through a wall in virtual reality, sometimes because I’m about to run into one in real life.
A 50 x 40-foot box isn’t even big enough to fit the scoreboard in Virginia Tech’s Lane Stadium. But to me, the room looks like the giant venue, full of 60,000 spectators in the midst of evacuating. The audience is represented by tiny boxes, torrents of them streaming through a simple replica of one wing, mixing and jostling each other as they pass. If I walk slowly, I can match their pace. A little faster, and my speed multiplies, until walking briskly in the Cube shoots me through the stadium and straight into empty blue space.
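One way to think about that locomotion is as a speed-dependent gain: slow physical movement maps roughly one-to-one, and faster movement gets amplified nonlinearly. Here is a minimal Python sketch of the idea; the Cube team hasn't published its actual mapping, so the function shape and constants below are invented.

```python
import numpy as np

def virtual_velocity(real_velocity, walk_speed=1.0, exponent=3.0):
    """Map physical velocity (m/s, room frame) to virtual velocity.

    Hypothetical gain curve: at or below a normal walking pace the
    mapping stays 1:1, so a visitor can fall in step with the
    evacuating crowd. Above it, the gain rises steeply, so a brisk
    walk covers stadium-scale distances (and can overshoot into
    empty blue space).
    """
    speed = np.linalg.norm(real_velocity)
    if speed < 1e-9:
        return np.zeros_like(real_velocity)
    gain = max(1.0, (speed / walk_speed) ** exponent)
    return real_velocity * gain
```

Clamping the gain at 1.0 below `walk_speed` is what would let a visitor match the simulated crowd's pace before the multiplier kicks in.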
These digital seven-league boots are just one piece of the massive puzzle that Benjamin Knapp, director of Virginia Tech’s Institute for Creativity, Arts, and Technology (ICAT), and other researchers are trying to put together. In most of the tech world, virtual reality is slimming down and becoming more accessible, as developers learn to create simple experiences that anyone can enjoy. Inside the Cube, it’s messy, complicated, and ambitious.
The Cube is a new initiative, but it fits with Virginia Tech’s long history of virtual reality research. In the mid-’90s, the school unveiled a CAVE (a recursive acronym for CAVE Automatic Virtual Environment), a 10 x 10-foot enclosure with stereoscopic 3D images projected on every side. A successor to the CAVE — now called the Visionarium VisCube — is still around on campus, and two Cube researchers have previously worked on Visionarium projects.
Originally built as a black-box theater, the Cube is shared between ICAT and Virginia Tech’s Center for the Arts, used for both art projects and scientific research. This doesn’t always involve VR; in 2013, the Cube theater hosted a live performance called Operacraft, where K-12 students used Minecraft avatars — projected onto a wall — to perform an opera sung by Virginia Tech musicians.
One of the Cube’s biggest selling points is its sound system, which creates deafening 360-degree audio with 124 standard speakers, four subwoofers, and nine additional speakers that project hyper-targeted sound, like the aural equivalent of a spotlight. It’s possible to create things that could never be replicated with an ordinary sound system, like an experimental composition by ICAT media engineer Tanner Upthegrove that sends metal and chainsaws whirling around the room and wouldn’t feel out of place in Hellraiser. Close your eyes in another demo — a recording from inside a tornado — and you can almost feel the tremors as wind rips away nails and wood.
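The basic trick behind whirling a sound around a room like that, handing a virtual source's energy off between the speakers nearest to it, fits in a few lines of code. This is a generic distance-based panning toy, not the Cube's actual spatialization engine (which is likely something far more sophisticated, such as ambisonics or vector-based amplitude panning), and the ring layout and room dimensions below are invented.

```python
import numpy as np

def pan_gains(source_xy, speaker_xy, rolloff=2.0):
    """Per-speaker gains for a virtual source at source_xy.

    Each speaker's gain falls off with its distance from the source,
    then the gains are normalized so total power stays constant as
    the source whirls around the room.
    """
    dist = np.linalg.norm(speaker_xy - source_xy, axis=1) + 1e-3
    gains = 1.0 / dist ** rolloff
    return gains / np.linalg.norm(gains)  # constant-power normalization

# Illustrative layout: a ring of 124 speakers around a ~15 x 12 m room.
theta = np.linspace(0, 2 * np.pi, 124, endpoint=False)
speakers = np.stack([7.5 * np.cos(theta), 6.0 * np.sin(theta)], axis=1)
gains = pan_gains(np.array([2.0, -1.0]), speakers)
```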
Rigid-body targets like these provide markers for the Cube's tracking cameras.
The Oculus Rift tracks head movement with a single webcam, which reads an array of LEDs embedded in the headset. Alongside the speakers, though, the Cube is lined with 24 cameras, which read up to 24 rigid-body targets — small constellations of dowels and reflective balls. Tape one to a tablet or headset, and the wall cameras will be able to "see" visitors as they explore anything from a very large molecule to a very small tornado, mapped onto the dimensions of the room. "The beauty of the space is that you now move through a virtual world by walking," says Knapp. "I can explore this area in this space, and the model in this space, with you in there — and with anybody else."
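In principle, driving a display from those wall cameras is straightforward: the tracking system reports each rigid body's pose in room coordinates, and the renderer maps that pose onto the virtual scene at whatever scale the experience calls for. A rough Python sketch, with invented names and a single uniform scale factor:

```python
import numpy as np

def camera_from_rigid_body(position_room, rotation_room, scale=1.0):
    """Turn a tracked rigid-body pose into a virtual camera pose.

    position_room: 3-vector from the tracking cameras (meters).
    rotation_room: 3x3 rotation matrix of the rigid-body target.
    scale: virtual units traversed per physical meter walked, so
        the same room can hold a whole stadium (scale well above 1)
        or a single molecule rendered at walkable size.
    """
    camera_position = position_room * scale
    camera_forward = rotation_room @ np.array([0.0, 0.0, -1.0])
    return camera_position, camera_forward
```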
A non-VR project called FutureHaus, for example, uses an old augmented reality trick: by holding up a tablet, you’re given a window into a simply rendered three-story home, its dimensions mapped roughly to the room. Unlike most other virtual rooms, though, whole groups of people can mill around the house, where they’ll appear as Prisoner-esque white spheres. FutureHaus drives home the loose connection between physical and virtual space in an almost eerie way. You can explore the house by climbing virtual staircases, sending your avatar up or down while you traverse the exact same space in the Cube. If you head to another floor and a companion stays behind on the lower level, you could hold hands and chat while your avatars walk several stories apart.
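That decoupling comes from a simple asymmetry: the tracking system supplies only your horizontal position, while the floor you're on is purely virtual state, changed by climbing a staircase. A toy illustration, with an assumed three-meter story height:

```python
def avatar_position(tracked_x, tracked_z, floor_index, story_height=3.0):
    """Place a FutureHaus-style avatar in the virtual house.

    Horizontal position comes straight from the Cube's tracking;
    the vertical coordinate is a virtual floor offset. Two visitors
    standing side by side physically can therefore appear stories
    apart. (story_height is an assumed value.)
    """
    return (tracked_x, floor_index * story_height, tracked_z)
```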
A tornado visualization tool — unrelated to the audio installation — works with space in another way. Put on the Rift and you’re standing in a room about the size of the Cube, getting a bird’s-eye view of a flat map. Instead of empty space, though, you’re looking at a bright, abstract funnel made of reds and yellows, representing the temperature of the air as a tornado sweeps across the ground. You can walk through it or kneel and see tiny topographic lines, while rocks and wind whirl around you in a small artistic flourish. Right now, it processes pre-recorded weather data, but one day it could provide a live feed, creating a real-time record of a disaster.
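Rendering the funnel itself is mostly a color ramp over the temperature field. A hypothetical version of that mapping, with invented temperature bounds, might look like this:

```python
import numpy as np

def temperature_to_rgb(temp_kelvin, t_min=250.0, t_max=310.0):
    """Blend from deep red (cool air) to bright yellow (warm air).

    Normalizes temperature into [0, 1], then ramps the green channel
    while red stays saturated, giving the red-to-yellow palette the
    tornado visualization appears to use. Bounds are assumptions.
    """
    t = np.clip((temp_kelvin - t_min) / (t_max - t_min), 0.0, 1.0)
    return np.stack([np.ones_like(t), t, np.zeros_like(t)], axis=-1)
```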
The most interesting part, though, isn’t the image, which feels about as informative as a normal 3D render. It’s the sense of place. The simulation represents users as hovering green pyramids, tipped forward like an arrow. As I stood on the map, another pyramid floated toward me, representing Virginia Tech Department of Geography head Bill Carstensen. When Carstensen pointed out the eye of the storm by staring at it, he could have been poking at a screen, or drawing a red line in MS Paint. But actually peering through the 3D landscape, I could respond with the most intuitive interface of all: my own body. The demo was simplistic, but where so much VR feels like a blown-up version of a thing I could get on a screen, it gave me a real reason to use it.
Virginia Tech's Moss Arts Center, home of the Cube.
Designing for the Cube, though, presents its own set of challenges. There’s a tremendous amount of space to track, and since everything has to be portable, you can’t rely on having a super-high-powered PC to render environments. Normally, being able to walk around is a great way to avoid motion sickness. But the clunky FutureHaus demo sometimes ran at only a few frames per second, and the Lane Stadium evacuation simulator could get nauseatingly laggy. The Oculus Rift is currently hooked to a ThinkPad, which must be carried around, open, at all times. The next step will be putting the ThinkPad into a backpack, and after that, the team is looking at streaming video through a Raspberry Pi, which would make the headsets truly mobile. If you want to track fine motion, like hands, you’d have to strap on a Leap Motion or some other controller.
For now, walking around in the Cube in a headset feels simultaneously retro and futuristic: you’re using a system that overwrites real space in a way that Valve and Oculus and Sony will never match, but in a bulky, awkward format straight out of a ’90s X-Files episode.
Knapp is aware of these limitations. But even as researchers work to fix them, he’s imagining huge conceptual leaps. One Virginia Tech student, for example, is working on a system that could detect muscle movement and translate it into motion controls — instead of having to look for a gesture, the room would know that you’d flexed to pick up a cup (a rough sketch follows below). And unlike Oculus and many others, Knapp doesn’t just want virtual or augmented reality glasses to get smaller. He wants them to disappear altogether.
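That muscle-sensing idea, at its simplest, reduces to thresholding surface-EMG activity rather than recognizing a visible gesture. A deliberately stripped-down sketch, not the student's actual system, with an arbitrary threshold:

```python
import numpy as np

def detect_flex(emg_window, threshold=0.3):
    """Report a 'grab' when muscle activation crosses a threshold.

    emg_window: recent surface-EMG samples from a forearm sensor.
    Rectify and average the window to estimate activation; if it
    exceeds the (arbitrary) threshold, treat it as the user flexing
    to pick something up. Real systems filter, normalize per user,
    and classify far more carefully than this.
    """
    activation = np.mean(np.abs(np.asarray(emg_window)))
    return activation > threshold
```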
"The crazy distant future is to shine laser light into the eyes themselves," he says, while we’re talking about interfaces. At first, this sounds like a virtual retinal display, the same technology that’s thought to be used in Microsoft’s Hololens and the mysterious Magic Leap headset. In reality, it’s a lot weirder: putting projectors on the walls, not a headset and using sophisticated tracking to beam images directly into your eyes. "Just like the aural environment doesn't touch you — you don't have to wear anything to get the aural environment — wouldn't it be neat if you didn't have to do that with the visual environment? Right now we've moved [from a] screen onto commercially available devices like the Oculus," or next-generation headsets like Magic Leap. "But the eventual goal is to move all of that off-body."
Is that the future? A virtual reality theater where speakers are targeted to your precise location and sensors track your muscles, while the walls shine lasers into your eyes? Not for most people. Maybe not for anybody. But as far as wild VR experiments go, things don’t get much better than this.