Teaching robots body language offers common ground for humans and machines

Artist and coder Madeline Gannon wants to explore new ways for humans and robots to relate to one another

Anyone who’s been up close and personal with an industrial robot will tell you that these machines have an uncanny, almost unsettling presence. Rationally you know that they’re programmed automatons, but when they start moving — huge metal arms swishing through the air with inhuman precision and speed — some primeval part of your brain lights up like a switchboard and calls start pouring in.

“Danger, danger!” they say. “You need to get the fuck away from this predator now.”

Madeline Gannon is someone who delights in this discrepancy. She’s an artist, coder, and designer who, for the past few years, has been exploring how humans relate to robots, programming machines that react to our presence and that use mechanical body language of their own to communicate back. In that fecund little valley that divides our rational and instinctive reactions to machines, Gannon’s work thrives.

Gannon’s work gives robots the means to communicate with humans

Her latest piece, titled Manus, makes the leap from single robots to pack behavior. At the World Economic Forum in September, Gannon installed 10 industrial robot arms in a row, linking them to a single, central controller. Using depth sensors at their bases that give them a “worm’s eye view” of the world, these robots would track passersby and respond to their movements. Algorithms made judgments about who to pay attention to (ranking people who had hung around for a while above those who had just arrived, for example), while the robots’ motions flowed from one arm to the next like ripples in a pond.
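
As a concrete illustration of that attention logic, here is a minimal sketch, assuming a simple dwell-time ranking and a fixed per-arm delay for the ripple. The Person class, pick_target, ripple_targets, and all parameter values are hypothetical stand-ins based on Gannon’s description, not her actual code.

```python
# Hypothetical sketch of the attention heuristic described above: people who
# have lingered longer are weighted above new arrivals, and the chosen target
# is handed down the row of arms with a per-arm delay to produce the "ripple"
# effect. All names, structures, and timing values are illustrative assumptions.
import time
from dataclasses import dataclass, field


@dataclass
class Person:
    id: int
    position: tuple                      # (x, y) from the depth sensors (assumed units)
    first_seen: float = field(default_factory=time.time)

    def dwell_time(self, now: float) -> float:
        return now - self.first_seen


def pick_target(people, now):
    """Rank visible people by how long they have hung around; pick the longest."""
    if not people:
        return None
    return max(people, key=lambda p: p.dwell_time(now))


def ripple_targets(target_pos, num_arms=10, delay=0.15):
    """Give every arm the same target, offset in time so the motion flows
    down the line like a ripple. Returns (arm_index, start_offset_s, position)."""
    return [(i, i * delay, target_pos) for i in range(num_arms)]


if __name__ == "__main__":
    now = time.time()
    crowd = [
        Person(1, (0.5, 2.0), first_seen=now - 30),  # has been hanging around
        Person(2, (1.2, 1.5), first_seen=now - 2),   # just arrived
    ]
    target = pick_target(crowd, now)
    print("Attending to person", target.id)
    for arm, offset, pos in ripple_targets(target.position):
        print(f"arm {arm}: move toward {pos} after {offset:.2f}s")
```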

Speaking to The Verge, Gannon explains the motivations behind her work; the process behind training a pack of robots; and how animalistic behavior might help us imagine a more harmonious future between human and machine.

The interview below has been lightly edited for clarity.

Hello Madeline, thanks for speaking to me today. First, I wonder if you can tell me a little bit about your background and how this latest project of yours came about?

Sure. So my background is atypical for robotics. My training was actually in architecture, which is where I have a master’s degree, and I sort of fell into robotics and gained extra experience through practice. A lot of what I do is approach these problems in robotics from the perspective of an architect. So once I know how to talk to these machines, I lend them my hypersensitivity to how people move through spaces.

This work, Manus, is the third in a series of projects trying to understand body language as a means of communicating with robots. One of my earlier projects was a large-scale installation at the Design Museum in London: a giant industrial robot arm that engaged with crowds of people. It was called Mimus, and it was a point of contact for people entering the museum, helping them understand what industrial automation can and can’t do.

Being an ocean away, I was compelled to follow how people were reacting to Mimus on Instagram and Twitter, and to see the range of emotions people would project onto the robot: how much personality they could read into it, from playfulness and friendliness to curiosity and creepiness.

From that I got this latest opportunity, an invitation to the WEF to develop a new installation for them. They have a theme of Industry 4.0, and for me, using this tool, this symbol of automation infrastructure, and reconfiguring it in a more human-centered way, is part of a more desirable future I have for these machines. I take these things that are used to doing short repetitive tasks over and over again, leading very boring lives in factories, and have them become more contextually aware.

Children interacting with Mimus when it was installed in London’s Design Museum.
Credit: Madeline Gannon

It’s fascinating to hear you refer to robots as animals, talking about rescuing them from factories as if they were battery hens. What do you think is the benefit of this framing?

For me it’s been a really useful metaphor to think about how we might engage with non-humanoid autonomous machines. I’m based in Pittsburgh, which is this nexus of self-driving car companies, so autonomous machines do cross my path every day in public life. And just like industrial robot arms, these machines are fast and powerful and don’t intrinsically have a way to communicate with the people around them. So in thinking through this problem I lean on this metaphor of animals. It’s something that’s hardwired into us. If you go for a walk in the park and see some strange creature cross your path, you will read its body language and try to understand its intentions. I think that’s something that can be tapped into.

But what about the dangers of treating machines as if they were sentient? I think some roboticists would say this is a bad idea, as it gives people a false impression of the intelligence and autonomy of robots. What do you think?

I do think there’s danger when there’s a disconnect between the behavior and personality the robot is projecting and its agenda or motivation. So you see this with cute robots, for example, because being cute is a very effective way to win us over and gain trust. But these robots might not be worthy of that trust if they have cameras behind those big cute eyes, broadcasting our data to some company we have no agency over or insight into.

My goal with the design language I’ve been working on for non-human robots is really just legibility and transparency, and there’s a level of accountability that can be added when that happens. I’m a bit of a purist in this. So for example, I think a big industrial robot should look dangerous if it’s about to do something dangerous. It should trigger our instincts when it moves, forcing us to step back and give it our full attention.

This is new territory we’re charting, and I want to argue for design patterns that build legibility into the behavior of these machines.

Speaking of that instinctive reaction, there’s video of you interacting with Quipt, one of your earlier projects, reaching out your hand as if to pet it. I saw it and was reminded of that clip of Chris Pratt in Jurassic World, jumping into the velociraptor pen. The body language is exactly the same...

Yes, the sound of the motors really does make it feel as if you’re approaching an animal.

A clip showing Gannon interacting with Quipt, the first of her projects involving industrial robot arms.
Credit: Madeline Gannon

But it’s a mime, right? A show? The robot isn’t thinking about you, it’s just following your programming?

Well, we have a lot of one-sided relationships that add meaning to our lives. If you think about our pets, for example, these are non-humanoid things that don’t have great ways of communicating with us, yet they’ve transcended their utility, from being wolves in nature to being sheepdogs, and they’ve been domesticated and they add meaning to our lives.

“You are a bit of a lion tamer if you’re working with that machine.”

To push this metaphor even further, we tend to think about our relationships with robots as this unitary thing, but there’s a dynamic ecology that’s starting to grow, and for different types of robots we have different relationships.

So for that robot in the museum, it may never be domesticated, but it can be tamed. And you are a bit of a lion tamer if you’re working with that machine. It’s so powerful that if you lose respect for it, it will maul you. In the future, there will be robots like this that we don’t want around people, like military robots; robots where there’s an acceptable level of risk, like industrial or construction robots; but there will also be those that come into our homes and workplaces, like delivery or assistive robots.

So how do you program this robot body language? What are the ways you communicate?

A lot of it is going with my gut feeling. Implementing it and then pretending to be David Attenborough. Then you see how people interact with it and tune and tweak the parameters to accentuate different features.

The robots do a lot themselves, too. I program the world that they operate in, but I never quite know what they’re going to do. I sort of see myself as orchestrating different stimuli that they receive from the environment. And maybe accentuating their response to it. But a lot of it comes from observing interactions with people and working from there.

Could you give an example of that?

So for Manus, when it was in China, that project had its own engineering challenges. I had just two robots in Pittsburgh, and only got the 10 robots when I had traveled halfway across the planet and was calibrating them for the forum. So there was a lot of guessing from afar and tuning on-site.

“One thing I noticed was that people wanted to control the robots.”

One thing I noticed was that people wanted to control the robots: they would raise both hands on high and beg the robots to follow, almost as if they were conducting them, as if the robots weren’t living things but machines being operated. So I had to go in and help the robots misbehave: once someone did that gesture, the robots would get distracted and look at someone else instead. At the WEF there are all these important delegates — you know, very important people in their own world — and they’d go to conduct a robot and it wouldn’t pay attention, it would look at their underlings instead. It... kind of takes the wind out of them.
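
As a rough illustration of that “misbehaving” rule, here is a minimal sketch: if the person currently being tracked is detected making the conducting gesture, attention deliberately shifts to someone else. The choose_next_target function and the is_conducting check are hypothetical assumptions, not the installation’s actual implementation.

```python
# Hypothetical sketch of the "misbehaving" rule described above: if the person
# currently being tracked is making a "conducting" gesture (e.g. both hands
# raised), attention deliberately shifts to someone else nearby. The function
# names and the gesture check are assumptions, not the installation's real code.
import random


def choose_next_target(people, current, is_conducting):
    """Return the person the robots should attend to next."""
    if current is not None and is_conducting(current):
        others = [p for p in people if p != current]
        if others:
            return random.choice(others)  # snub whoever tries to conduct
    return current


if __name__ == "__main__":
    crowd = ["delegate", "aide_a", "aide_b"]
    target = choose_next_target(crowd, "delegate",
                                is_conducting=lambda p: p == "delegate")
    print("Robots now watching:", target)
```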

Manus is made up of 10 industrial robots which use depth sensors to track and react to passersby.
Credit: Madeline Gannon

And what was the typical reaction from visitors to Manus?

At WEF, in particular, there were plenty of people who were very knowledgeable about advanced manufacturing; people who might own or use these machines. And I think a lot of surprise or maybe delight came from seeing them do things they’re not designed to do.

So for example, their arrangement in a line evokes an assembly line but they’re spaced in a way that they can all hit each other, which you’d never do in a factory. These robots also usually have tools on the end of them, but I left them as naked as the day they were born in the factory.

I think the most surprising reaction came from the engineers from robotics company ABB who installed them, who said that the robots’ personality was something they thought might actually be useful in a factory. It’s counterintuitive to them, because they’re thinking it’s a waste of energy to add these extra movements to a robot. But they could see that when a robot seems more personable, people might take better care of it.

And speaking more broadly, what do you think is the future of robot-human relationships? Everyone says the robots are coming — in factories, in public spaces, and in our homes. What do you think it will take to make this future harmonious?

“I want an interface with the machine that ... augments me rather than replaces me.”

One of the hard things is that we’ve outlined the future we don’t want with robots. We don’t want them to be our masters and I would argue we also don’t want them to be our slaves. Right now those are the two dominant narratives. I’m definitely interested in exploring alternatives. Not that animalistic robots are the way to this future, but it’s a different narrative and dimension to explore.

With these industrial robots, they have abilities we would describe as superhuman. They have superhuman speed, superhuman endurance, and precision, and right now they’re all trapped in factories, separated off from people. And what I want — and this is not necessarily the future we should have — is those superpowers. I want an interface with the machine that lets me have access to that superhuman strength and precision in a way that augments me rather than replaces me.

Part of the agenda with my installations is to give people the ability to imagine a different future with these machines.