
Google puts up $1.5 million to help robots learn more like babies

Try, fail, and try again

While robots are increasingly good at seeing, hearing, and understanding the world around them, they are still pretty helpless when it comes to interacting with that world the way humans do. They struggle to open doors, walk down stairs, or eat with a fork. “Their action and manipulation capabilities pale in comparison to those of a two-year-old,” explains Abhinav Gupta, an assistant professor of robotics at Carnegie Mellon University.

Gupta and his team are hoping to change that paradigm. Their approach is to let robots play with physical objects, exploring how to grasp and lift them the same way a baby would. “Psychological studies have shown that if people can’t affect what they see, their visual understanding of that scene is limited,” said Lerrel Pinto, a PhD student in robotics in Gupta’s research group. “Interaction with the real world exposes a lot of visual dynamics.”

Gupta’s group demonstrated their results at the European Conference on Computer Vision last fall. The promising demo helped them to secure a three-year, $1.5 million “focused research award” from Google, which will be used to expand the number of robots they are using, creating a richer database on which to learn. “If you can get the data faster, you can try a lot more things — different software frameworks, different algorithms,” Pinto said. The learning that happens on one unit can be shared with others.

The team is looking into cutting-edge techniques to speed up the learning process even more. One approach uses one set of skills — pushing an object — to bootstrap a completely different skill like grasping. Another approach is known as adversarial learning, in which one robot tries to grasp an object, and another shakes or snatches at its target. Think of it like an athlete who trains with added resistance, or a parent who teaches their kid to catch by pitching them increasingly difficult throws. So far, robots trained against adversaries have shown significant improvements in their skills compared with those trained without opposition.
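
To picture how adversarial training works, here is a minimal, purely illustrative sketch. It is not the CMU team's code; the "grip" and "shake" parameters, the reward scheme, and all numbers are invented for this example. A grasping agent and an adversary each adjust a single action based on zero-sum win/loss feedback, so over many rounds the grasper drifts toward grips that survive harsher and harsher disturbances.

```python
import random

# Hypothetical toy example of adversarial grasp training.
# A "grasper" picks a grip strength; an "adversary" picks a shake intensity.
# The grasp holds if the grip comfortably exceeds the disturbance.
GRIP_LEVELS = [0.2, 0.4, 0.6, 0.8, 1.0]
SHAKE_LEVELS = [0.0, 0.2, 0.4, 0.6]

def pick(levels, scores, explore=0.2):
    """Epsilon-greedy choice over a small set of discrete actions."""
    if random.random() < explore:
        return random.choice(levels)
    return max(levels, key=lambda a: scores[a])

def train(episodes=5000):
    grip_scores = {g: 0.0 for g in GRIP_LEVELS}
    shake_scores = {s: 0.0 for s in SHAKE_LEVELS}
    for _ in range(episodes):
        grip = pick(GRIP_LEVELS, grip_scores)
        shake = pick(SHAKE_LEVELS, shake_scores)
        # The grasp succeeds if grip strength beats the shake plus some noise.
        success = grip > shake + random.uniform(0.0, 0.3)
        # Zero-sum feedback: the grasper is rewarded for holding on,
        # the adversary is rewarded for knocking the object loose.
        grip_scores[grip] += 1.0 if success else -1.0
        shake_scores[shake] += -1.0 if success else 1.0
    return grip_scores, shake_scores

if __name__ == "__main__":
    grips, shakes = train()
    print("grasper prefers grip:", max(grips, key=grips.get))
    print("adversary prefers shake:", max(shakes, key=shakes.get))
```

In this toy setup the adversary keeps selecting the disturbances most likely to break the grasp, which is exactly what makes the opposition useful: the grasper only earns reward by finding grips that are robust to the worst perturbation the other side can muster.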