Scientists have developed a new system that allows people with no coding experience to teach robots simple tasks, such as grabbing an object and dropping it into a bucket. The system aims to mimic how humans learn, and even allows robots to teach what they’ve learned to other robots. That could allow machines to one day be trained more quickly and cheaply.
Usually, robots are trained using one of two methods. In the first, an engineer programs the robot’s actions directly, specifying the timing and position of each individual movement (for example, to cut a piece of metal to an exact shape and size). In the second, that information is input via motion capture, much as the movements of CGI characters are plotted for films and video games.
The new method sits somewhere between these two approaches. First, robots are taught a series of basic motions, such as how to stay parallel to an axis or how to move within a plane. Then an operator gives them instructions for a specific task by moving a 3D model of the robot around on-screen. To program a bot to open a cupboard door, for example, the operator drags its arm to the door and simply tells it to grab and pull. The software, called C-LEARN, does the rest, applying the correct movements from its library of motions.
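The story doesn’t spell out C-LEARN’s internals, but the basic idea, a library of reusable motions that an operator’s high-level instructions get mapped onto, can be sketched in a few lines of Python. Everything below (the MotionPrimitive class, the library entries, the open_cupboard task) is hypothetical and purely illustrative, not the researchers’ code:

```python
# A hypothetical sketch of the idea: a library of pre-taught motion
# constraints, plus a task assembled from an operator's instructions.
# Names and structure are illustrative, not taken from the C-LEARN paper.

from dataclasses import dataclass, field

@dataclass
class MotionPrimitive:
    """A basic, pre-taught motion, e.g. 'stay parallel to an axis'."""
    name: str
    constraint: str  # the geometric constraint this motion preserves

# The "library of motions" the robot learns ahead of time.
LIBRARY = {
    "keep_parallel": MotionPrimitive("keep_parallel", "end effector stays parallel to a given axis"),
    "move_in_plane": MotionPrimitive("move_in_plane", "end effector confined to a plane"),
    "grasp":         MotionPrimitive("grasp", "close gripper on the target"),
    "pull":          MotionPrimitive("pull", "move away from the target along the grasp axis"),
}

@dataclass
class Task:
    """A task is an ordered list of (primitive, target) steps."""
    name: str
    steps: list = field(default_factory=list)

    def add_step(self, primitive_name, target):
        self.steps.append((LIBRARY[primitive_name], target))

# The operator's "drag the arm to the door, then grab and pull" becomes:
open_cupboard = Task("open cupboard door")
open_cupboard.add_step("move_in_plane", target="cupboard handle")
open_cupboard.add_step("grasp", target="cupboard handle")
open_cupboard.add_step("pull", target="cupboard handle")

for primitive, target in open_cupboard.steps:
    print(f"{primitive.name} -> {target}: {primitive.constraint}")
```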
“Imagine it like a video game, where you have a 3D representation of the robot, and in this video game you can kind of grab the hand of the robot and move it around,” says study co-author Claudia Pérez-D'Arpino, a PhD candidate at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). This allows you to teach a robot even when it’s not physically in the same room as you. She compares it to how a human learns — by knowing how to apply basic actions in a wide range of situations.
“When we teach another person a multi-step task, even if the other person has never seen that specific task, probably nothing is a surprise,” says Pérez-D'Arpino. “It’s just a combination of previously known actions.”
Using this method, researchers were able to teach robots to perform tasks from just a single demonstration. In one task, for example, the robot had to grab a tray with both hands and lift it so that it stayed parallel to the floor. The robot was shown the task only once, but it could draw on its prior knowledge of how to keep something parallel in order to complete it. The robot was also taught to grab an object and drop it into a bucket, and to extract a cylinder stuck inside another cylinder.
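Again as a loose illustration rather than the team’s actual method: one-shot learning of this kind can be pictured as taking a single demonstrated keyframe and tagging it with whichever previously learned constraints it happens to satisfy, so the motion can be reproduced in new situations. The keyframe data and the infer_constraints function below are made up for the example:

```python
# Hypothetical sketch: given one demonstrated keyframe, check which known
# geometric constraints hold and store them alongside the plan, so the motion
# generalizes beyond the exact poses that were shown. Illustrative only.

# One demonstrated keyframe: positions of the two grippers holding the tray.
demo_keyframe = {
    "left_gripper":  (0.40, 0.25, 0.90),   # x, y, z in meters
    "right_gripper": (0.40, -0.25, 0.90),
}

def infer_constraints(keyframe, tol=0.01):
    """Return the constraints from the robot's prior knowledge that hold here."""
    constraints = []
    left, right = keyframe["left_gripper"], keyframe["right_gripper"]
    # Equal gripper heights => the tray between them is parallel to the floor.
    if abs(left[2] - right[2]) < tol:
        constraints.append("tray_parallel_to_floor")
    return constraints

plan_step = {
    "action": "lift_tray",
    "keyframe": demo_keyframe,
    "constraints": infer_constraints(demo_keyframe),
}
print(plan_step["constraints"])  # ['tray_parallel_to_floor']
```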
The researchers tested the system on a two-armed robot called Optimus. Optimus was then able to “transfer” its basic knowledge to Atlas, a humanoid robot that’s six feet tall and weighs 400 pounds. Once that information is copied from one robot’s computer to the other, the second robot can use it to accomplish the same task, effectively enabling robot-to-robot learning. The research will be presented at the IEEE International Conference on Robotics and Automation in Singapore, which takes place May 29th to June 3rd.
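The article describes the transfer as moving learned information from one robot’s computer to another’s. One way to picture that (a guess at the mechanics, not a description of the actual software) is a robot-agnostic task description that gets serialized on Optimus and handed to Atlas, whose own motion planner works out the joint movements. All names below are hypothetical:

```python
# Hypothetical sketch of robot-to-robot transfer: the learned task is stored as
# robot-agnostic steps (primitives, targets, constraints), serialized on one
# robot, and replayed by another robot's own planner. Illustrative only.

import json

# What Optimus might have learned, expressed without robot-specific joint data.
learned_task = {
    "name": "pick_and_drop_in_bucket",
    "steps": [
        {"primitive": "move_to", "target": "object", "constraints": []},
        {"primitive": "grasp",   "target": "object", "constraints": []},
        {"primitive": "move_to", "target": "bucket", "constraints": ["keep_gripper_level"]},
        {"primitive": "release", "target": "bucket", "constraints": []},
    ],
}

# "Transfer" is then just copying this description between machines...
payload = json.dumps(learned_task)

class Robot:
    """Stand-in for a robot with its own kinematics and motion planner."""
    def __init__(self, name):
        self.name = name

    def execute(self, task):
        for step in task["steps"]:
            # ...and each robot solves for its own joint motions at every step.
            print(f"{self.name}: {step['primitive']} -> {step['target']} "
                  f"(constraints: {step['constraints']})")

atlas = Robot("Atlas")
atlas.execute(json.loads(payload))
```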
The system could be used in the future to teach robots how to work in manufacturing facilities alongside humans, Pérez-D'Arpino says. Because the system doesn’t require an expert coder to teach a task, it allows people with no coding experience to “train” robots. “It would be very beneficial to be able to have these robots working together around humans so that they can collaborate. It’s kind of like a symbiosis,” Pérez-D'Arpino says. “Humans know how to do some stuff very well but robots know how to do other things very well.”
Another application is in disaster situations, such as the accident at the Fukushima Daiichi nuclear power plant in Japan following the devastating 2011 earthquake and tsunami. Right now, the robots that handle these situations, or bomb disposal, for instance, are operated manually joint by joint, with an operator essentially moving the robot like a puppet. That takes highly skilled people, and time. If a robot could be pre-trained to handle a disaster situation, it could be deployed more quickly.
The system is not without flaws. The robot, for one, is not very adaptive: if it has been taught to grab an object and then extract it, but the situation requires it to extract the object first and then, say, drop it, it can’t complete the task. “That would involve some logical thinking that, for humans, is pretty straightforward, but for computers it’s very challenging,” Pérez-D'Arpino says. “It is kind of intelligent in some tasks, but it’s also limited in many other ways.”
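To make that limitation concrete (again as a hypothetical sketch, not the actual system): the learned task is effectively a fixed, ordered list of steps, so the robot can only replay them in the order it was taught. Re-sequencing them would require a new demonstration or a higher-level task planner:

```python
# Hypothetical sketch of the limitation: the task is a fixed, ordered list of
# steps, so the robot can replay it but cannot re-sequence the steps on its own.

taught_task = ["grasp_object", "extract_object"]        # order it was taught
required_task = ["extract_object", "drop_in_bucket"]    # order the situation needs

def can_replay(taught, required):
    """The system replays taught steps in order; it cannot reorder or recombine them."""
    return taught == required

print(can_replay(taught_task, required_task))  # False: the robot gets stuck
```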
Still, the research is promising, and it points to an intriguing, cost-effective way we might train robots in the future. Pretty neat.