Yesterday we brought you news of a Japanese research effort to produce interactive kissable posters, but some of you expressed concerns over hygiene in the comments. Well, a separate team in Japan may have inadvertently hit on a solution by devising a tongue-control interface for Kinect. Adding to the long list of innovative uses people have found for Microsoft's motion-sensing device, the research group at Tokyo's University of Electro-Communications is actually working towards a much more useful target: helping people with oral motor function disorders. In the future, the system could be used to train people to speak or swallow, though for now the demonstration software is a simple shooting game where you aim bullets with your tongue.

If you've ever played a Kinect game, you're probably wondering how it could be used to detect something as precise as tongue flicking. After all, it tends to work best with exaggerated full-body motion like dance moves and volleyball serves, and doesn't usually pick up finer details such as individual finger movements. The team got it to work by extrapolating positional data from standard face recognition — after detecting the eyes, movements in the nose and mouth can be used to roughly calculate motion in the tongue. The researchers admit that it isn't very accurate so far, but the goal is a worthy one: while more precise systems are in development, they require physical hardware to be affixed to the mouth.
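To make the idea concrete, here's a minimal sketch (not the researchers' actual code) of how coarse tongue direction might be inferred from face landmarks alone. The landmark names, calibration step, and threshold are all illustrative assumptions.

```python
# Hypothetical sketch: guess tongue direction from how the nose and mouth
# landmarks shift relative to an eye-anchored, head-size-invariant frame.
# Landmark layout, calibration scheme, and threshold are assumptions.

def tongue_direction(current, neutral):
    """Estimate tongue motion from face landmarks.

    `current` and `neutral` are dicts mapping 'left_eye', 'right_eye',
    'nose', and 'mouth' to (x, y) pixel coordinates; `neutral` comes from
    a calibration frame with the tongue at rest.
    Returns "left", "right", "up", "down", or "center".
    """
    # Normalize by the inter-eye distance so head size doesn't matter.
    lx, ly = current['left_eye']
    rx, ry = current['right_eye']
    eye_dist = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5

    # Average how far the nose and mouth have moved from their
    # calibrated resting positions, in eye-distance units.
    dx = sum(current[p][0] - neutral[p][0] for p in ('nose', 'mouth')) / 2.0 / eye_dist
    dy = sum(current[p][1] - neutral[p][1] for p in ('nose', 'mouth')) / 2.0 / eye_dist

    threshold = 0.08  # illustrative dead zone to ignore sensor jitter
    if abs(dx) < threshold and abs(dy) < threshold:
        return "center"
    if abs(dx) > abs(dy):
        return "left" if dx < 0 else "right"
    return "up" if dy < 0 else "down"
```

A real pipeline would feed this from Kinect's face-tracking output every frame and smooth the result over time, which is presumably where much of the reported inaccuracy comes from.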