Behind the scenes at the final DARPA Robotics Challenge

The robots at the final DARPA challenge have come a long way, but they still need a lot of human help

The robot CHIMP weighs 443 pounds and has tank treads for elbows, but it’s the little things that give it trouble. At a recent DARPA competition, it drove a cart around a course, but then spent 15 minutes unhooking its elbow from the steering wheel. It opened a door, but promptly fell through it. When it haltingly carved out a section of drywall using a power drill, the audience watching from the stands cheered and chanted the robot’s name. After nine hours of watching robots keel over at the starting line, tumble out of cars, buckle at the knees, walk into walls, and, more than anything, just stand there doing nothing, the standard for ovation-worthy accomplishment begins to change.

Designed by DARPA, the Pentagon’s sci-fi incubator, the obstacle course was the final stage of a three-year project to build disaster-response robots. Twenty-three teams from around the world had convened in an old racetrack in Pomona, California, to vie for the $3.5 million in prize money.

It felt like a county fair from 2035. People ate hotdogs in the stands and listened to announcers sedately narrate the robotic action. Outside the team garage, students walked a headless robot dog beneath technicolor Old West shopfronts; inside, robots performed sleepy calisthenics as engineers ran tests. And while they tinkered, they were visited by Tesla's Elon Musk, Uber's Travis Kalanick, Google's Larry Page, and various Amazon employees, representatives of companies that are already making plans for how robotics will shape the future.

Program director Gill Pratt designed the challenge following the Fukushima nuclear meltdown in 2011. That disaster posed a particular set of challenges to emergency responders: radiation prevented people from going into the station and venting explosive gas, and the robots available at the time couldn’t operate the machinery, even if they could get past the stairs, doors, and debris standing in their way. To better assist in disasters like that, Pratt launched a competition for robots capable of navigating spaces designed for humans.

Several teams used the humanoid Atlas robot, made by Boston Dynamics, while others designed their own, like South Korea's Hubo, which scooted around on wheeled knees, and NASA JPL's RoboSimian, a four-legged contortionist with wheels on its butt. Because it's DARPA and because some of the robots resemble the Terminator, many observers have speculated about military applications. Pratt doesn't rule that out, but emphasizes that these robots are general-purpose tools with less violent uses: disaster response for now, but eventually elder care, mining, construction, manufacturing, shipping, and other jobs that are too unpredictable for today's robots. The potential of a new round of automation raises other questions, like what humans will do when the robots are doing all the work.

The finals were a chance to see just how far robots have come. When the previous competition was held, in 2013, the robots had half an hour to complete each task, and they were hooked up to power cables and safety harnesses. This time, the cords were cut, and the robots had an hour to complete the entire course. They had a total of eight tasks: driving a car down a dirt road, getting out of the car, opening a door and entering a building, turning a valve, cutting a hole in a wall with a drill, completing a surprise task (flipping a switch or unplugging a tube and plugging it into another hole), navigating a pile of rubble, and walking up a short flight of stairs. If the last competition was like watching grass grow, Pratt said, this was closer to a golf game.

Spend a few hours watching the robots totter around drunkenly, pondering doorknobs for minutes at a time, and it's easy to conclude that they're pretty stupid. In fact, they're even dumber than they look: all of the thought and most of the perception are supplied by humans behind the scenes. Basically, Pratt says, they're puppets.

"It can barely stand up on its own. You can tell it to open a door, but if it turns out the door is a different type, it won’t do it because it doesn’t know what a door is. You can teach it to change a tire, and as long as the tire has four bolts it does pretty well. But if it comes across five bolts, it doesn't know what to do. The competence it has tends to be brittle or narrow."

They're basically puppets

Pratt encouraged teams to make their robots more autonomous by adding a further challenge: once they enter the interior portion of the course, communication between the robots and their human operators deteriorates. The teams made impressive strides beyond joint-by-joint teleoperation, but their robots still have only a basic level of autonomy. Robots working outside highly structured spaces like factories struggle with perception and object recognition, so in the competition, humans had to do it for them, annotating their robot’s LIDAR map with a 3D image of a door handle, telling the robot to open it, then checking its plan to make sure it wouldn’t accidentally punch a wall. The robots are human-shaped, tool-using tools, operated by people sitting in a garage on the other side of the fairground.
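
In code, that division of labor might look something like the sketch below: the human supplies recognition and judgment, the robot supplies geometry and execution. All of the names and the API here are invented for illustration; no team's actual software looks quite like this.

```python
# Hypothetical sketch of the human-in-the-loop workflow described above.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A 3D object model the operator has pinned onto the robot's LIDAR map."""
    model: str       # e.g. "door_handle", dragged from a library of known shapes
    position: tuple  # (x, y, z) in the map frame

@dataclass
class Robot:
    annotations: list = field(default_factory=list)

    def annotate(self, model, position):
        # The human does the perception: "that cluster of points is a door handle."
        ann = Annotation(model, position)
        self.annotations.append(ann)
        return ann

    def plan_reach(self, ann):
        # The robot does the geometry: a rough motion plan toward the model.
        return [f"swing arm toward {ann.model} at {ann.position}",
                "close gripper", "turn wrist"]

    def execute(self, plan, approved):
        # The human does the sanity check before anything moves.
        if not approved:
            print("plan rejected; operator adjusts the annotation and retries")
            return
        for step in plan:
            print("executing:", step)

robot = Robot()
handle = robot.annotate("door_handle", (1.2, 0.4, 0.9))
plan = robot.plan_reach(handle)
robot.execute(plan, approved=True)  # operator checks the arm won't punch the wall
```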

Down the street from the obstacle course was the garage where the teams tended to their robots. On either side of the huge concrete space, robots dangled from frames. One had its arm in a sling. Several partially disassembled Atlas robots hung in the Boston Dynamics section, cannibalized for parts. The robots had a rough first day.

"This is it, man, the bird is leaving the nest," William Howell said as his team’s robot was pulled down from its frame and lowered onto a gurney. "Four years of nearly 100 people’s lives have gone into this."

Howell is with the Florida Institute for Human and Machine Cognition (IHMC), whose team was ranked fifth going into its final run. Their Atlas fell on the rubble pile the day before, jarring its sensors, then fell again on the stairs. They'd been working through the night trying to correct the problem, but Howell said they'd decided to "fly by wire," with a co-pilot standing by with a clipboard to remind the operators which sensors were off.

Some teams had eight or nine operators, each focusing on a separate task or element of the robot's sensory system. A few had teammates strapped into virtual reality headsets to help parse particularly confusing LIDAR readings. IHMC had a single pilot, John Carff, the winner of a competition within the lab. "He is just ridiculous," said team leader Jerry Pratt after the competition. "He grew up playing video games. People say don't waste your time playing video games. Yes, waste your time playing video games."

Their robot interface does feel like a game, albeit a laggy and bewildering one. Carff sees a fragmented simulation space, built from the robot’s LIDAR and stereo cameras, and his job is to guide the robot through it. Using a keyboard and mouse, he lays out a series of bright green footprints, issues a command, and watches as the robot plods along them.
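
The loop he runs might be sketched, very loosely, like this; the names and the toy balance check are invented stand-ins for judgments the pilot actually makes by eye:

```python
# Illustrative sketch of the footstep-placement loop described above,
# not taken from IHMC's actual software.
from dataclasses import dataclass

@dataclass
class Footstep:
    x: float    # meters forward in the map frame
    y: float    # meters left of the robot's midline
    yaw: float  # foot rotation, degrees

def looks_balanced(step, prev):
    # Toy stand-in for the pilot's eyeball check: reject strides that are
    # too long or feet twisted too far from the previous placement.
    return abs(step.x - prev.x) < 0.4 and abs(step.yaw - prev.yaw) < 20

def walk(plan):
    prev = Footstep(0.0, 0.0, 0.0)
    for step in plan:
        if not looks_balanced(step, prev):
            # In practice: the pilot drags the green footprint until it looks
            # right. Here we just square the foot up with the previous one.
            step.yaw = prev.yaw
        print(f"step to ({step.x:.1f}, {step.y:.1f}), yaw {step.yaw:.0f} deg")
        prev = step

walk([Footstep(0.3, 0.1, 0.0), Footstep(0.6, -0.1, 35.0), Footstep(0.9, 0.1, 0.0)])
```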

The first part of their run went relatively quickly, and the robot got out of the car without falling on its face, as so many others had. But when the robot entered the building, the communication disruptions began. The monitor turned into pixelated neon shards and grey blankness. Carff aimed the robot's cameras at the wall, making the space legible to a human, then made it legible to the robot by dragging a 3D valve model from a library and placing it over the pointillist mess. He told the robot to open its claw, raise its arm, grasp the valve, and rotate its wrist. Because of the communication lag, we heard the point confirmed before the camera showed the valve turning. The team had cheered earlier points, but by then they had fallen into focused silence.
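
That inversion, hearing about success before seeing it, falls out of the physics of a throttled link: acknowledgments are a few bytes, camera frames are not. Here is a toy model of the effect, with made-up delay numbers:

```python
# Invented numbers: a one-second trip for a tiny acknowledgment, eight
# seconds for a camera frame squeezing through the degraded link.
import heapq

DELAY = {"ack": 1.0, "video": 8.0}  # seconds, hypothetical

def run(commands):
    events = []  # min-heap of (arrival_time, message)
    for t, cmd in enumerate(commands):
        heapq.heappush(events, (t + DELAY["ack"], f"ack: '{cmd}' done"))
        heapq.heappush(events, (t + DELAY["video"], f"video: frame showing '{cmd}'"))
    while events:
        when, msg = heapq.heappop(events)
        print(f"t={when:4.1f}s  {msg}")

run(["open claw", "raise arm", "grasp valve", "rotate wrist"])
# Every acknowledgment prints before the first video frame arrives.
```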

Carff sees a fragmented simulation space, and it’s his job to guide the robot through it

Their robot proceeded through the course, futzing for several minutes with the drill, turning it off and on and off and then finally on, and drilling a wobbly oval hole in the wall. It unplugged a tube and plugged it into an outlet, a remarkably delicate task for the robot's bulky claw. With Jerry warning him to take it slow, Carff guided the robot step by step over the rubble field toward the last challenge: a short set of stairs.

Carff dropped a footprint on the first stair, and the robot moved its leg to match it. But Carff felt that something was wrong. He adjusted the step again. And again. And again. The consensus in the room was that the foot was twisted too far to the right and would put the robot off balance on the next step. Carff kept adjusting, and once he was satisfied, he put down another footstep, and another.

"If you fall, fall forward," Jerry said as they neared the final step.

The robot put one foot onto the final platform, then the other. It was as momentous as a moon landing. The room erupted in cheers, and Carff punched in a series of automated victory dances for the robot to perform. But the dancing was too much: mid-running man, the robot toppled against the guardrail and slumped over.

The IHMC team ultimately took second place in the competition. They chalk up their success partly to their conservative approach to autonomy. They focused on balancing and walking algorithms and left the rest to humans. "All the real cognition was inside the garage," said Matt Johnson, one of the team leaders.

"If John tried to manually do the stairs, it would’ve been impossible, because the balancing is so hard," Johnson said. "If we tried to do it autonomously, it would've failed because it misplaced its first step. But he was able to say, that doesn't look right, we should do it again. That’s what people are great at, telling when there’s an abnormal situation."

Even teams that emphasized autonomy landed in a similar position. Russ Tedrake, one of the leads of the MIT CSAIL team, calls it graded autonomy. "The idea is to solve really hard perception problems by writing algorithms that take advantage of a few pieces of information from a human," Tedrake said. "We’re really excited about it as a way to take the autonomous algorithms we have and make them already practical for the real world without having to solve AI completely. For me it’s a new paradigm."
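
In spirit, graded autonomy might look like the sketch below: the algorithm does the heavy lifting, and the human contributes one cheap hint, a rough click near the object, so the machine never has to recognize it cold. This is hypothetical code, not CSAIL's actual pipeline.

```python
# Sketch of a perception algorithm seeded by a single human hint.
import math

def fit_valve(points, click, radius=0.5):
    """Estimate a valve's center from map points near the operator's click."""
    near = [p for p in points if math.dist(p, click) < radius]
    if not near:
        return None  # the hint missed; ask the human to click again
    return tuple(sum(axis) / len(near) for axis in zip(*near))

# A few LIDAR returns near a valve, plus one stray point across the room.
cloud = [(2.0, 1.1, 0.9), (2.1, 1.0, 1.0), (2.0, 0.9, 1.1), (9.0, 9.0, 9.0)]
print(fit_valve(cloud, click=(2.0, 1.0, 1.0)))  # -> roughly (2.03, 1.0, 1.0)
```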

Robots will rely on the things humans are good at

Jerry and Matt compare these robots to cyborgs. Just as humans have been technologically augmented with better vision, memory, and navigation, they said, robots will rely on the things humans are good at: perception, common sense, contextual understanding. For them, the goal of greater automation is to make the human-robot interface more intuitive, allowing people to guide robots through the world without a deep understanding of the machine’s quirks.

South Korea’s KAIST team took the top prize, beating IHMC by six minutes with its robot Hubo. Carnegie Mellon’s CHIMP took third. Both robots are evidence of what can be accomplished by being just humanoid enough to work in human-designed environments without needing to balance on two legs. CHIMP has treads on its elbows, allowing it to recover from falls and plough through rubble on all fours. Hubo moves on wheeled knees; when it climbs stairs, it stands, rotates 180 degrees at its waist, and walks up backwards to avoid hitting its shins.

Jerry believes that legged robots have a future, though we may be decades away from seeing them out in the world. But in the short term, he says, making robot upper bodies will be big business.

At the challenge, there were hints of the intense commercial interest currently surrounding robotics. Travis Kalanick of Uber was there, visiting with Carnegie Mellon's team, whose leader recently went on leave to work on Uber's autonomous car initiative. Larry Page was there as well. Google has already snapped up two of the challenge's players: Boston Dynamics, maker of the Atlas, and Schaft, a Japanese team that won the previous round and then withdrew from the competition. Elon Musk, who is both pursuing autonomous driving and worrying about the robot uprising, stopped by to watch IHMC's run.

Pratt compares the DRC to the autonomous driving challenge first held in 2004. The race was a mess: many cars crashed immediately, and the most successful one made it only 7 miles into the 150-mile course before it got stuck on a rock and its wheels caught on fire. But the next year, five teams finished the course, and a decade later, Google, Tesla, Uber, and possibly Apple are all pursuing autonomous driving, often with teams built from participants in the DARPA challenge.

Many teams said that Amazon recruiters stopped by the garage. Amazon sponsored several of the teams, and the week before, it had held a challenge of its own: design a robot that can pick objects off a shelf and deposit them in a box. The company currently uses Kiva robots to ferry shelves to human workers, who pick the items and package them. The MIT team took second place in that challenge using the perception algorithms designed for the DARPA competition, an early indication of how the technology on display here will find wider applications.

Robots capable of recognizing and handling a wide range of objects would allow manufacturing to become automated even further, along with packaging, shipping, cooking, and other jobs that are repetitive but just unpredictable enough to stymie today’s robots.

Maybe humans will do image recognition for robots from a call center

"There’s a new batch of robotic technologies that have the capacity to displace the workforce," says MIT’s Maurice Fallon. "I think the benefits that come about from the technology are pretty self-evident, but it will take questioning in our society about how we get the best of these without too detrimental an effect."

The challenge showed that robots are getting faster, more autonomous, and more adaptable. It also showed that robots still have significant shortcomings. They need stronger actuators, more powerful batteries, better touch sensors, more dexterous grippers, and, until they get better at perception and object recognition, humans there to help them.

The challenge was as much a showcase of what robots can do as a preview of what our future relationships with them might look like. Maybe, Jerry Pratt said, there will be new jobs doing image recognition for robots. Watching the teams, it was easy to imagine a future with robots out in the world calling in for technical support from a human workforce, perhaps sitting in a call center behind monitor arrays like those in the DARPA garage.