Robots do not feel pain. They can roll fearlessly into a war zone, lose a leg to a land mine, or be destroyed and replaced in short order. But for the soldiers who handle them, it's sometimes hard to see one give up the ghost. Julie Carpenter, who recently completed her doctorate at the University of Washington, has studied human-computer interactions for years, and for her dissertation she examined how the military's explosive ordnance disposal (EOD) teams bonded with the robots they used. Based on her results, she asked a question that may become increasingly important: if we start seeing robots as animals or even people, will they lose some of their usefulness as simple tools?

Over the course of interviews with 23 operators of the tiny tank-like robots, participants talked about them in ways that blurred the line between a tool and an extension of oneself. One man, "Jeremy," imagined his ideal robot as "a full human avatar," saying that "it would be me with remote control." Another participant jokingly described robots as things that reflect the thoughts of the people using them, saying he could tell the attitude of the person behind a robot by watching how it moved. When robots failed to perform, operators sometimes expressed frustration with themselves for failing to perform, even when the problem was a simple mechanical error. The robots, though, became more than tools or avatars: they became colleagues.

"Why did you kill me? Why?"

"They almost become like a team member," says operator "Ben." Though he made clear that a damaged or destroyed robot wasn't remotely on the same level as a wounded team member, there was still a sense of loss. When a friend's robot was blown up by an IED and the "carcass" was recovered, he says, someone attached a sign with the friend's name reading "Why did you kill me? Why?" The sign, along with many other anecdotes, was discussed as a joke. But people still talked to their robots or developed familiarity with a specific machine, learning its quirks. Even if someone described the robot as only a tool in one breath, in the next they might talk about holding a funeral if it "died."

It's human nature to give human or animal characteristics to inanimate objects; we do it with everything from computers to toys to huge forces like winds and oceans. Robots, though, are something different. They move of their own volition and have a sense of purpose, even if they're simply going in circles. And explosive ordnance robots in particular take on a special significance. "When you're deployed you're in a situation where you're potentially lonely and far from home," says Carpenter. "You work in proximity to the robot on a daily basis." She compares the bots to a high-tech version of a military working dog — there's a reason Boston Dynamics named its pack robot BigDog. Designers, she says, actively keep that relationship at a pet-like level, rather than pursuing something more autonomous that could take decision-making options away from humans.

"Your instinct when you see it fall over is 'That's sad.'"

But even a pet-like connection could potentially be dangerous. Carpenter's study isn't meant to measure anything statistically, and the people she interviewed didn't believe that their robotic camaraderie affected their decision-making. But it's possible that sympathetic instincts could end up affecting split-second decisions, she says. That said, how would you stop a process that seems all but inevitable? Carpenter questions, among other things, how much we should be trying to make some robots look like people or animals. BigDog can survive a roll in the mud, but it's almost tragic watching it struggle to its feet. "Your instinct when you see it fall over is 'That's sad,'" says Carpenter. And that sadness could compromise its use.

Not all robots were born to die, though, and there are plenty of benefits to anthropomorphized machines. If someone doesn't have experience interacting with robots, they may be more comfortable with something that suggests a human form. And DARPA is testing disaster response robots that mimic humans or apes, since those bipedal and dexterous designs make it easier to navigate environments designed for people. At a certain point, though, humanoid robots could be undermined by an opposing phenomenon: the uncanny valley, into which robots might fall if they start to look too much like us.

Carpenter agrees that as you work to make something artificial look more and more like a person, it can start coming off as creepy or repulsive. But that doesn't mean that the impulse to connect with it is necessarily overcome, she says. To put it dramatically, if you give a human and a robot enough time, love — or at least familiarity — may conquer all.