
Google’s miniature radars can now identify objects


Researchers from St Andrews University teach Google's tech a fun new trick


When Google unveiled Project Soli in 2015, the company presented it as a way to create gesture controls for future technology. Soli’s miniature radars are small enough to fit into a smartwatch and can detect movements with sub-millimeter accuracy, allowing you to control the volume of a speaker, say, by twiddling an imaginary dial in mid-air. But now, a group of researchers from the University of St Andrews in Scotland has used one of the first Project Soli developer kits to teach Google’s tech a new trick: recognizing objects using radar.

Every object has a unique radar fingerprint

The device is called RadarCat (short for Radar Categorization for Input and Interaction), and it works the way any radar system does. A base unit fires electromagnetic waves at a target, some of which bounce off and return to the unit. The system times how long the waves take to come back and uses this information to work out the shape of the object and how far away it is. But because Google’s Soli radars are so accurate, they can detect not only the exterior of an object but also its internal structure and rear surface.
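To make that timing step concrete, here is a minimal sketch (not code from the study) of the basic time-of-flight calculation every radar performs; the function name and example numbers are purely illustrative.

```python
# Illustrative sketch of radar time-of-flight ranging; not RadarCat code.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to a target from the round-trip time of a radar pulse.

    The wave travels to the target and back, so we halve the total path.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# An echo returning after 2 nanoseconds implies a target roughly 0.3 m away.
print(distance_from_echo(2e-9))  # ~0.2998 meters
```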

"These three sets of signals together gives you the unique fingerprint for each object," lead researcher Professor Aaron Quigley tells The Verge. RadarCat is accurate enough that it can even tell the difference between the front and back of a smartphone, or tell whether a glass is full or empty.

The system is surprisingly accurate, but there are some major limitations. RadarCat occasionally confuses objects with similar material properties (the aluminum case of a MacBook and an aluminum weighing scale, for example), and while it works best on solid objects with flat surfaces, it takes a little longer to get a clear signal on things that are hollow or oddly shaped. (For more information, check out the full study published by St Andrews.)

RadarCat also has to be taught what each object looks like before it can recognize it, although Quigley says this isn’t as much of a problem as it initially appears. He compares it to music CDs: "When you first started using them, you put in the CD and it would come up with the song list. That information wasn’t recorded on the CD, but held in a database in the cloud, with the fingerprint of the CD used to do the lookup." Once the information has been introduced to the system, says Quigley, it can be easily distributed and used by others. And the more information we have about various radar fingerprints, the more we can generalize and make inferences about never-before-seen objects.
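The CD analogy suggests a simple lookup pattern: hash the fingerprint, query a shared database, and fall back to training on a miss. The sketch below is a guess at what that might look like; the database contents and names are entirely hypothetical, not a real RadarCat API.

```python
# Hypothetical fingerprint lookup against a shared database, in the spirit
# of Quigley's CD analogy. Nothing here is actual RadarCat code.
SHARED_DATABASE = {
    ("aluminum", "thin", "hollow"): "drinks can",
    ("glass", "thick", "solid"):    "paperweight",
}

def lookup(fingerprint: tuple[str, str, str]) -> str:
    # A hit returns a label someone else already taught the system;
    # a miss means this object still needs to be introduced once.
    return SHARED_DATABASE.get(fingerprint, "unknown: needs training")

print(lookup(("aluminum", "thin", "hollow")))  # -> "drinks can"
```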

One of the most obvious applications of this research is to create a dictionary of things. Visually impaired individuals could use it to identify objects that feel similar in shape or size, or it could deliver more specialized information, such as identifying a phone model and quickly bringing up its specs and user manual. If RadarCat’s abilities were built into electronics, users could trigger certain functions based on context: hold your RadarCat-enabled phone in a gloved hand, for example, and it could switch to a simplified interface with large icons.
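That gloved-hand scenario boils down to simple dispatch on whatever context the radar recognizes. Here is a speculative sketch; the context labels and UI modes are made up for illustration.

```python
# Speculative sketch of context-driven UI switching on a RadarCat-enabled
# phone. The contexts and modes are invented for this example.
UI_MODES = {
    "gloved hand": "large-icon interface",
    "bare hand":   "standard interface",
    "in pocket":   "screen off",
}

def select_ui_mode(detected_context: str) -> str:
    # Unrecognized contexts fall back to the standard interface.
    return UI_MODES.get(detected_context, "standard interface")

print(select_ui_mode("gloved hand"))  # -> "large-icon interface"
```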

Unlike the internet of things, radar works without Wi-Fi

Of course, much of the current wave of tech (specifically the Internet of Things) is concerned with creating smart objects and environments. However, RadarCat’s approach has the advantage of being unobtrusive. You don’t have to add extra markings to an object (like QR codes) for it to be recognized, and you don’t have to give it a Wi-Fi connection either (a practice that’s a security nightmare and even threatens the stability of the internet itself).

The next step for RadarCat’s creators is to improve the system’s ability to distinguish between similar objects; they suggest it could one day not only say whether a glass is full or empty, but also classify its contents. If the technology ever moves into mainstream use, it would be quite the evolution: from a military tech used to detect ships and airplanes to a consumer one that can tell you exactly what you’re about to drink.