We all know Microsoft's motion- and voice-controlled Xbox accessory can enhance your games, but its greatest achievement so far has perhaps been in the way it's breathed fresh life into the DIY hacking community. Kinect's array of sensors has been put to all sorts of weird and wonderful non-gaming uses, and you can follow all the past, current, and future developments in that field right here.
Sep 4, 2013
Researchers have used Kinect to inject a natural part of face-to-face conversation into video calling: eye contact. Using the Kinect's facial-recognition technology and some specially developed software, the team is able to make it appear that callers are looking directly at their conversational counterparts. Essentially, Kinect determines where a caller is looking, and modifies the angle of their face to make things seem more natural.
More than simply skewing the entire image, the team's software separates the person from their background, adjusts the angle, and then pastes the new image back into the frame in real time. By maintaining the angle of the background, the adjusted face looks more natural and the image keeps its depth. "We want to make a real video conferencing meetings as similar as possible," says Claudia Kuster, a PhD student at the Laboratory for Computer Graphics at ETH Zurich. Kuster and her colleagues want to develop their software further, targeting devices with conventional webcams like smartphones, and also hope to develop a Skype plugin.
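The separate-adjust-composite pipeline can be sketched in miniature. The toy code below is purely illustrative, not the ETH Zurich team's implementation: images are nested lists rather than real frames, the depth cutoff `NEAR_LIMIT` is an assumed value, and the actual 3D face warp is left out.

```python
# Toy sketch of depth-based segmentation and compositing (illustrative only):
# the Kinect depth map masks out the caller, that region can be transformed,
# and the result is pasted back over the untouched background.

NEAR_LIMIT = 1200  # assumed cutoff in millimeters: anything closer is the caller


def split_foreground(pixels, depth):
    """Separate caller pixels from background pixels using the depth map."""
    fg, bg = [], []
    for row_px, row_d in zip(pixels, depth):
        fg.append([p if d < NEAR_LIMIT else None for p, d in zip(row_px, row_d)])
        bg.append([p if d >= NEAR_LIMIT else None for p, d in zip(row_px, row_d)])
    return fg, bg


def composite(adjusted_fg, bg):
    """Paste the (gaze-corrected) caller back over the original background."""
    return [[f if f is not None else b for f, b in zip(fr, br)]
            for fr, br in zip(adjusted_fg, bg)]
```

Keeping the background untouched while only the foreground is transformed is what lets the adjusted face sit naturally in the frame.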
Aug 15, 2013
Microsoft launched a Kinect Accelerator program for startups last year, and the results are starting to show. Ubi Interactive worked closely with Microsoft to create a system that uses a projector in combination with a Kinect sensor to create a virtual touchscreen that can be cast onto any surface. As many businesses already have projectors installed, most will simply be able to buy a Kinect for Windows sensor and Ubi's $149 software to turn a projector into a touch-capable unit.
The basic app will support up to 45-inch display sizes, with options to purchase professional ($379) and business licenses ($799) that provide 100-inch support. There's only a single touch point on the basic version, but the business version provides two, and the enterprise edition ($1,499) supports up to 20. The app only works on a Windows 8 PC, but as you'd expect it fully supports Microsoft's touch-friendly "Metro" environment. After a beta test with 50 organizations, Ubi is now taking orders for the software.
Aug 7, 2013
Researchers have created a "touchscreen" for your bathtub. With only a Kinect sensor and a projector, AquaTop display facilitates interacting with virtual objects using just your hands. In order to function properly, the system requires you to add some bath salts to your tub, which turn the water opaque and prevent the camera from picking up false inputs. As a proof of concept, the team at Tokyo's University of Electro-Communications Koike Laboratory has created a number of demos that utilize some innovative gestures.
Jul 31, 2013
Faced with the possibilities of VR gaming, most people think of chasing dragons in Skyrim or storming compounds in Counter-Strike — but Toronto tech agency Globacore had a simpler idea. They wanted to make a virtual reality version of the classic Atari game Paperboy, powered by an exercise bike, a Kinect camera, and an Oculus Rift. A bike tracking widget called Kickr lets the developers track the pedaling speed of the bike, while the Kinect tracks the paper-throwing motion of the arms, and the Oculus Rift brings it all together in an immersive VR environment. The result is called PaperDude VR, and it's a surprisingly thorough combination of VR mechanics and old-school aesthetics. So far, it only exists in the Globacore office, but hopefully they can be persuaded to take it on the road.
May 31, 2013
One of the biggest upgrades to Microsoft's recently unveiled Xbox One console is the new and improved Kinect: the device now features a higher fidelity sensor, a larger field of view compared to the original, and better skeletal tracking. This could have some potentially cool applications when it comes to watching TV on your console or playing games, but, like the original Kinect, the most exciting new ideas will likely come from outside of Microsoft and traditional game developers. The hackability of the original Kinect created a healthy DIY movement, and while it's too early to tell if the new version will do the same, there are definitely plenty of possibilities — especially with the recently announced Kinect for Windows launching sometime next year. But will the new Kinect be as open to developers as the original?
Much of the possibility stems from the higher-resolution camera built into the new Kinect. It utilizes a Microsoft technology dubbed "Time of Flight," which, according to the company, can measure "the time it takes individual photons to rebound off an object or person to create unprecedented accuracy and precision." In our time with the device, it could tell not only that you were moving a thumb, but which way that thumb was facing.
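The physics behind time-of-flight sensing is simple to state: light travels out to the subject and back, so the one-way distance is the speed of light times the round-trip time, divided by two. A quick back-of-the-envelope calculation (ours, not Microsoft's) shows why the timing has to be so precise:

```python
# Back-of-the-envelope time-of-flight math: distance = c * t / 2,
# where t is the photon's round-trip time.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second


def distance_from_round_trip(seconds):
    """One-way distance to the subject, given the photon round-trip time."""
    return SPEED_OF_LIGHT * seconds / 2.0


# A subject 1.5 m from the sensor returns photons after roughly 10 nanoseconds,
# so resolving fingers and thumbs means timing photons at sub-nanosecond scales.
round_trip = 2 * 1.5 / SPEED_OF_LIGHT
```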
May 29, 2013
A new robot developed by researchers at Cornell University isn't only capable of assisting you with tasks; it can also accurately predict when you might need a hand. Armed with a Kinect sensor, the robot — developed by Ashutosh Saxena and his team of computer scientists — utilizes a dataset of 120 videos to analyze and understand your movements. It can then help you perform certain tasks including making a meal, stacking or arranging objects, or taking your medicine.
To detect your actions, the researchers use complex algorithms that assign skeletal movements to sub-activities, which include reaching, moving, pouring, eating, and drinking. Activities are associated with the objects around you, allowing the robot to assess future actions and possible destinations. It can also learn from mistakes, improving its precision in the process. The robot has already achieved an accuracy of 83.1 percent for detecting high-level activities, outperforming the algorithms it was tested against.
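To give a feel for what "assigning skeletal movements to sub-activities" means, here is a deliberately tiny nearest-centroid classifier. The feature choice (wrist height and speed) and the centroid values are invented for illustration; the Cornell team's actual models are far richer, operating over the full skeleton and object affordances.

```python
# Toy nearest-centroid sub-activity classifier (illustrative only).
# Each sub-activity is summarized by a hypothetical (wrist_height_m,
# wrist_speed_m_per_s) centroid; a frame gets the label of the closest one.

CENTROIDS = {
    "reaching": (1.2, 0.8),
    "moving":   (1.0, 0.5),
    "pouring":  (1.1, 0.2),
    "eating":   (1.4, 0.3),
    "drinking": (1.5, 0.1),
}


def classify(wrist_height, wrist_speed):
    """Label a frame with the sub-activity whose centroid is nearest."""
    def sq_dist(label):
        h, s = CENTROIDS[label]
        return (wrist_height - h) ** 2 + (wrist_speed - s) ** 2
    return min(CENTROIDS, key=sq_dist)
```

Chaining per-frame labels like these over time, and tying them to nearby objects, is what lets the real system anticipate what you'll do next.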
Jan 9, 2013
Remember how ads in Minority Report were interactive? We've just played with a demo unit of such a system here at CES 2013; it's called Swivel digital signage, and it uses a Kinect sensor to place clothes and accessories onto passersby. The idea is that advertisers that use digital signage will not just show static images of models wearing clothes. Instead, as people walk up to the sign, they'll get to virtually "try on" the clothing. To do so, the company behind Swivel, FaceCake, scans and measures clothes in-house. When a user steps up to the advertisement, the Kinect sensor analyzes the person, chooses the appropriate size, and layers it on their body. It's not the first time we've seen the technology — FaceCake demonstrated the Swivel digital wardrobe with Microsoft at last year's CES, and it later made it into a 20-store Bloomingdale's trial last fall.
So, how does it feel to virtually slip into a dress? Well, it's certainly capable of inducing some giggles. We wouldn't necessarily rely on the tool to aid us in piecing together our next outfit, but it is a bit liberating to be able to visualize yourself in anything you want — wedding dresses included — without worrying about going to a store. And, undoubtedly, the laughs that will ensue will make it well worth your while to step up to one of these ads and see just what you'd look like with a cowboy hat on. If clothing is a bit too tame, the company has also introduced Swivel Up Close, which lets you try out makeup, earrings, and glasses in the same way. In all, both systems work fairly well — it's most impressive when clothing and accessories turn in 3D space as you rotate — but making selections with gestures remains as difficult as always, and items don't always stick to your body perfectly. If you'd like to give it a try, FaceCake tells us its digital wardrobe will make it back into some Bloomingdale's stores later this year.
Dec 18, 2012
If all goes as planned, Microsoft's Kinect motion sensor camera will be used to help provide physical therapy for injured soldiers and veterans. According to Defense News, the company is working with the Air Force and the Army's Communications-Electronics Research, Development, and Engineering Center to create a therapy system that utilizes the camera, a standard PC, and off-the-shelf software. Kinect offers an extremely affordable and effective tool for tracking body movements, and therapy software, such as ReMotion360, is already being used for physical rehabilitation.
Due to the low cost of the device — Kinect for Windows can be purchased for $249.99 — and the minimal upkeep involved with the setup, the program would allow those who are not located near a Veterans Affairs facility to easily take advantage of its benefits. Additionally, having the program accessible remotely will lower the costs involved with maintaining medical facilities. Other military bodies, such as the Defense Advanced Research Projects Agency and the Navy’s Bureau of Medicine, have also expressed interest in the program. Beyond physical therapy, Kinect may eventually be used for training and simulation purposes, as well as treating post-traumatic stress disorder through group sessions.
Nov 18, 2012
And gaming is an area where Faceshift could present new opportunities to developers. An SDK targeted at animators and game creators has been released, though it wouldn't surprise us to see this technology used in other unexpected ways. But Faceshift already faces competition from major industry players including Sony, which has brought similar facial capture functionality to EverQuest 2 (and soon other titles) through a feature it's calling SOEmote.
Nov 1, 2012
The surface of a lake or river is an iconic part of the natural world, but a mechanical version has its own kind of beauty. For his installation Underwater, artist David Bowen mapped real-time wave patterns by putting a Kinect above the water and setting it to capture the water's surface as a plane. That information was then sent to hundreds of motors, which controlled the web above. The result, shown last month at Interieur 2012 in Belgium, is a rippling ceiling that captures the feeling of looking up while being underwater. Bowen's work often focuses on the disconnect between nature and machines, but he says he's increasingly finding that "maybe that contrast is not necessarily as black and white as we might perceive."
Sep 26, 2012
Navigating through unknown territory is a challenge in and of itself, especially in the case of search and rescue missions — and guiding others while doing so increases the difficulty even further. To assist in these situations, researchers at MIT have built a prototype wearable mapping system that can wirelessly transfer data in real time, as reported by MIT News. Using a "stripped-down" Kinect sensor and a laser rangefinder, the unit scans the area around the wearer in a 270-degree arc to create a map that can be viewed remotely, as seen in the video below.
While travelling, wearers can indicate points of interest on the map, compensating for potential scanning inaccuracies caused by natural human movement. The unit is also capable of tracking multi-level movement using an inertial sensor and a barometer, and the team plans to implement voice communication in the future as well. The project is supported by both the US Air Force and the Office of Naval Research, and the researchers hope that it can eventually be used by first responders when coordinating disaster response.
Sep 17, 2012
Researchers at the University of Bristol in the UK have managed to free Microsoft's popular Kinect depth sensor from the constraints of wired use, building a self-contained unit which runs on battery power and uses Wi-Fi for communication. Created as part of the Patina project, the device contains a Gumstix single-board Linux computer for interfacing with the sensor, and produces results which can be output to mobile devices — the demonstration video below shows information being displayed on a Samsung Galaxy Nexus.
According to an explanatory blog post, the unit only makes use of the camera part of Microsoft's product, with much of the processing being handled by a homemade circuit board. Still, the researchers have published designs for the board online in both CadSoft Eagle and Gerber formats, so hackers with access to PCB printers and the requisite components should be able to knock together similar devices without too much trouble.
Sep 10, 2012
Blogger Chad Ruble has created an innovative, gesture-powered interface for his aphasia-suffering mother, using Microsoft's Kinect accessory to help her send simple emails. Writing in a blog post, Ruble explains how his mother, Lindy, has experienced difficulties reading and writing since suffering a stroke twelve years ago — wanting to help her bridge the "keyboard gap" which prevents many disabled people from interacting with computers, he created a system based on emoticons, allowing her to select both an emotion and a level of intensity, represented by signal strength bars.
When a message is sent, it is translated into text, with most of the information being transmitted through the subject line: "Lindy feels very happy." The system builds on a previous effort, dubbed Iconicate, which used the prototyping platform Arduino to provide a physical interface — according to his post, Ruble plans to expand the app in the future, offering a wider variety of messages while refining it to avoid the occasional mistakes that Lindy still makes.
Aug 30, 2012
The BBC has published a short video report on Fearful Symmetry, an art installation which ran at London's Tate Modern gallery earlier this month. Conceived by British artist Ruairi Glynn, the installation involved the use of a large robot which traversed a darkened room using a ceiling-mounted rail — with the aid of three of Microsoft's Kinect sensors, it interacted with visitors, responding to their movements with a number of pre-programmed actions. As the visitors interviewed in the video note, one of the most disconcerting aspects of the robot is its apparently "organic" movement — controlled partly by a set of algorithms and partly by a team behind the scenes, it acts with an eerie mix of human curiosity and robotic determination. Check out the BBC's report for a fuller explanation.
Researchers at the University of Surrey are preparing for a new mission that could see tiny satellites dock together in space with the help of Microsoft Kinect sensors. Called STRaND-2 (Surrey Training, Research, and Nanosatellite Demonstrator), the mission consists of two nanosatellites that measure just 30 centimeters across and are equipped with Kinect technology, which makes them aware of their surroundings. The plan is to use this ability to join the satellites together once in orbit — sort of like big pieces of LEGO.
The team describes them as "space building blocks" that could be used to build larger crafts in space, which could also be reconfigured depending on the situation. Eventually the satellites could be used to do anything from providing a backup power source to adding extra propulsion to a craft. "Once you can launch low cost nanosatellites that dock together, the possibilities are endless," explained project lead Dr. Chris Bridges. Unfortunately, there's no word yet on when the mission is expected to take place, so it'll likely be quite some time before we see any Voltron-style spaceships.
We've seen Kinect used to turn surfaces into touch screen-like interfaces before, but we haven't come across anything quite like Kreek — a prototype that eschews a solid surface like a coffee table for something a little more flexible. Created by a group of students from the Köln International School of Design in Germany, Kreek features a stretchy piece of fabric with images projected onto it, while Kinect cameras are used to determine exactly where you're touching. Because the surface is flexible, in addition to using it like a regular touch screen you can also push down on it for an added layer of depth. For instance, when a model of the human body is displayed, you can press down to remove layers of skin, muscle, and organs — push far enough and all you're left with is the skeleton. You can see Kreek in action below, as well as some images of its construction over at Flickr.
May 28, 2012
Ubi Interactive (no relation to Ubisoft) is a startup from Munich currently working with Microsoft in Seattle as part of the Kinect Accelerator program. Their idea is a simple one that looks to be pretty effective — a system that uses a projector in combination with a Kinect sensor to create a virtual touchscreen that can be cast onto just about any surface. Ars Technica visited the team to test it all out and came away with glowing praise, noting the system's responsiveness as well as the way it plugs directly into Windows and can use the bespoke touchscreen gestures.
It's not a totally new idea, as we've seen similar implementations such as OmniTouch, but Ubi's work has clearly impressed Redmond. The team sees it being used in offices, public displays, hospitals, and so on, and we can certainly see it working out cheaper than the cost of a TV-sized touchscreen — right now those seem to be the preserve of Steve Ballmer.
May 11, 2012
It can be hard enough finding the space for two people to play Kinect games together, let alone more than that, but a solution of sorts could be on the way from Premium Agency. The Japanese company develops CGI software and the occasional game such as Death By Cube for Xbox Live Arcade, and took to the Smartphone and Mobile Expo in Tokyo, Japan this week to show off its LiveAR software.
The program matches an iPad app to the Kinect so that people in the background can affect the on-screen action by throwing up fireballs, balloons, and so on from their touchscreen. LiveAR isn't quite ready to be used in a marketable product right now, but beyond gaming Premium Agency expects the technology to be used with traditional video cameras in applications such as live events and digital signage.
May 7, 2012
If you've ever played a Kinect game, you'll know that sometimes it can be difficult to work out exactly where to move your body. Researchers from Microsoft and the Department of Computer Science at the University of Illinois have created a prototype system called LightGuide that could help with that and more besides — it projects visual feedback onto users' arms to help teach them correct movements for a variety of tasks. For example, an amateur martial arts trainee could be instructed by the system exactly how far to extend his or her arm while punching to best avoid injury.
LightGuide uses the Kinect camera to track an arm in 3D space, and visual cues such as arrows and shadows are projected onto it to help guide the user's movements. The researchers found that users were able to perform various actions significantly faster after training with the new system, and say they want to extend the concept to other parts of the body.
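The core feedback loop — compare the tracked limb position against a target pose and project a corrective cue — can be sketched in a few lines. The coordinate conventions, tolerance, and arrow vocabulary below are our assumptions for illustration, not details of the Microsoft/Illinois prototype:

```python
# Minimal sketch of pose-correction feedback (assumed coordinates and cues):
# given the tracked hand position and the target position, pick which
# directional arrow to project onto the user's arm.

def guidance_arrow(hand, target, tolerance=0.05):
    """Return the cue to project, given 3D positions in meters."""
    dx, dy, dz = (t - h for h, t in zip(hand, target))
    if all(abs(d) <= tolerance for d in (dx, dy, dz)):
        return "hold"  # close enough: user is in position
    # Cue along the axis with the largest remaining error.
    axis, delta = max(zip("xyz", (dx, dy, dz)), key=lambda p: abs(p[1]))
    names = {"x": ("right", "left"), "y": ("up", "down"), "z": ("forward", "back")}
    positive, negative = names[axis]
    return positive if delta > 0 else negative
```

Running this per frame against each waypoint of a recorded motion path would walk the user through the movement step by step.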
May 7, 2012
If you long for a high tech twist to your sandcastle making, the Augmented Reality Sandbox might be of interest to you. It's not the first AR table we've seen, but this special sandbox uses a Kinect sensor and a projector to create an interactive topographical map with real-time water simulations. As you can see in the videos below, you can use your hands or a shovel to push around the sand to form mounds and valleys, and the software uses the Kinect's distance readings to overlay a color-coded topographic map atop the sand — red means high elevations and blue the opposite. If that weren't enough, there's an accurate water simulator: open your hands above the sandbox and you'll rain down water into the virtual world, which will then flow naturally and gather in the lowest-lying areas.
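The depth-to-color trick is easy to picture: the Kinect hangs above the table and reports distance to the sand, so nearer sand is higher terrain. A toy version of that mapping, with made-up thresholds rather than the project's actual palette, might look like this:

```python
# Sketch of depth-to-elevation color coding (assumed thresholds, not the
# AR Sandbox's real palette). The sensor looks down at the sand, so a
# smaller depth reading means higher sand.

def elevation_color(depth_mm, sensor_height_mm=1500):
    """Map a raw depth reading (mm) to a projected terrain color."""
    elevation = sensor_height_mm - depth_mm
    if elevation > 200:
        return "red"     # peaks
    if elevation > 100:
        return "yellow"  # slopes
    if elevation > 0:
        return "green"   # plains
    return "blue"        # valleys, where the virtual water pools
```

The real software does this per pixel, and the water simulation simply flows toward wherever the recovered elevation is lowest.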
For now it looks like a fun tech demo — not unlike a Czech project that it was inspired by — but this sandbox is intended to be installed at museums to help introduce topographical lines, color-coded elevation maps, and water flow to young visitors. We're yet to hear when the sandbox will make it to your local museum, but we expect to eventually see it at the ECHO Lake Aquarium and Science Center in Burlington, Vermont — one of the project's collaborators alongside Oliver Kreylos of UC Davis, the Tahoe Environmental Research Center (at UC Davis), and the Lawrence Hall of Science (at UC Berkeley). Good thing, too: we can't wait to play around in the sand again.
May 1, 2012
Microsoft Surface is cool, sure, but it could be even cooler if it took inspiration from another division of the company. That's what we thought after seeing Bastian Broecker's amazing coffee table that uses Microsoft's Kinect sensor, infrared lasers, a PlayStation Eye, and head tracking software to create a 3D augmented reality experience. The effect is best understood by watching the video below, but basically the Kinect is being used to track the user's head and move the 3D environment in tandem, and there are four lasers, one at each corner, that work with the PlayStation Eye camera to detect finger motion. The result is an AR block building program that looks for all the world like the hands-on future of Minecraft.
Apr 24, 2012
Berklee College's Rethink Music conference isn't just for those in the music business — over the weekend, Boston-based music intelligence company the Echo Nest sponsored a "hackathon" at Microsoft's New England Research and Development Center. This gathering provided an opportunity for music developers to get together and spend a day and a half creating applications from scratch that were shown off and judged Sunday afternoon. The top three apps were demoed to Rethink Music attendees today, and we were in the audience to see what was dreamed up.
The first winning submission demonstrated was known as "Kinect Bomba" — Bomba is a traditional Puerto Rican dance where the percussionists match their performances to the movement of a lead dancer, rather than a dancer following the beat of the music. This dance was the inspiration for the app, which uses a Kinect to track the gestures and movements of a dancer and match them to different musical cues — essentially, the dancer becomes a MIDI controller for triggering musical events, allowing the dance itself to become the basis of a musical composition. The system even allowed for two dancers to trigger two separate sets of instruments, and certain gestures could start and stop recording to build a fuller sound. It worked surprisingly well in practice, especially considering the short development timespan. While there might not be a lot of practical uses for this type of music software, it certainly could be the basis of a whole new type of rhythm-based game. It's not the first time we've seen Kinect gestures used to control music, but it was an impressive implementation nonetheless.
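"The dancer becomes a MIDI controller" is less exotic than it sounds: a gesture recognizer just has to emit standard MIDI messages. The zone layout and note numbers below are invented for illustration (the hackathon team's actual cue set isn't documented here), but the 3-byte note-on format is standard MIDI:

```python
# Hypothetical gesture-to-MIDI mapping in the spirit of Kinect Bomba:
# the stage is split into zones, and a tracked gesture in a zone emits
# a standard MIDI note-on message (status byte 0x90 | channel).

ZONE_NOTES = {0: 60, 1: 62, 2: 64, 3: 65}  # assumed zone -> MIDI note numbers


def note_on(zone, velocity=100, channel=0):
    """Build the raw 3-byte MIDI note-on message for a dancer's zone."""
    status = 0x90 | (channel & 0x0F)
    return bytes([status, ZONE_NOTES[zone], velocity])
```

Routing a second dancer's gestures to a different MIDI channel is all it would take to give each dancer a separate set of instruments, as the demo did.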