Bringing a project like Christian Rivers’ film Mortal Engines to life was always going to require a significant amount of visual-effects work. Based on Philip Reeve’s novel series, the story takes place in a future dystopia where cities have become massive, roving machines that prowl the surface of the Earth, with London, the biggest of them all, gobbling up smaller cities and dicing them to pieces for resources. There was simply no way to realize the world without heavy use of digital animation. Leading the charge: Weta Digital, the group known for its groundbreaking work on projects like Peter Jackson’s Lord of the Rings films and the Planet of the Apes series.
But while building up the film’s massive, mobile monstrosities was the most obvious challenge, some of the more nuanced work was even more intriguing. I sat down with visual effects supervisor Ken McGaugh (Godzilla, Valerian and the City of a Thousand Planets) and animation supervisor Dennis Yoo (Avatar, The Jungle Book) to talk about using computer simulations to create on-screen destruction, why motion capture isn’t always the best idea, and what Weta hopes to do next.
This interview has been edited and condensed for clarity.
Procedurally generated destruction
In the world of Mortal Engines, the interior of the mobile city London is a giant processing plant that rips smaller cities to pieces. In one of the film’s early chase sequences, a young revolutionary named Hester (Hera Hilmar) is pursued through one of these smaller cities as it’s dismantled around her. In the past, scenes like this would be handled by digitally building the shards of the destroyed buildings, then carefully fitting them back together so they could explode on-screen. Instead, Weta elected to do the sequence procedurally: creating digital models of the complete buildings, and letting a computer simulation determine how they would splinter apart.
Can you walk me through the old process of handling this kind of sequence, and how you approached it differently?
Ken McGaugh: Sure. So as an example, if you had a section that was wood, you would have to pre-build all the shattered, splintered pieces. You’d have to have a modeler go in and model it all in a way that fits back together. And then when you run the simulation, it just animates those pieces opening up and flying around. Likewise for tearing and bending metal. You not only have to model the cutting, where it tears, but you also have to model all the bent shapes. And then you have to animate those into position.
Dennis Yoo: So 10 years ago, you’d see something cutting through something else in a scene, and you’d go, “Are you going to keep this? Because we’re going to start building it.” Those shots were locked down; you couldn’t actually change them after that, because it was so time-intensive to create the effect. You’re stuck.
KM: And now, with this, we put a lot of effort up front. There was a lot of effort involved in building these things so they could be destroyed. The models all have to be made watertight, so they have to have thickness. A lot of times, when you model a thick brick wall, it’s just fake; it’s very thin. But we had to make sure everything had proper thickness, so that the digital models weren’t intersecting each other too much. And then you go and assign these building elements material properties. You say, “This is wood.” Then when you hit it with a saw in the computer, you get the breaking and the splintering and everything procedurally.
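To make that workflow concrete, here’s a minimal sketch of the “tag a material, let the simulation decide how it breaks” idea, written in Python with numpy and shapely. It is an illustrative toy, not Weta’s pipeline: the material profiles, names, and numbers are all invented for this example. Seed points scattered over a wall section become Voronoi cells, and a per-material profile controls how many shards there are and how elongated they get, so “wood” splinters into long shards while “brick” breaks into chunks.

```python
# Toy procedural fracture: the artist only tags a material; the shard
# geometry is computed, never modeled by hand. Illustrative only -- the
# material profiles and function names are invented for this sketch.
import numpy as np
from shapely import affinity
from shapely.geometry import MultiPoint, box
from shapely.ops import voronoi_diagram

# Hypothetical material profiles: shard count and grain elongation.
MATERIALS = {
    "wood":  {"num_seeds": 60, "grain_stretch": 6.0},  # long splinters
    "brick": {"num_seeds": 25, "grain_stretch": 1.0},  # chunky pieces
}

def fracture(width, height, material, seed=0):
    profile = MATERIALS[material]
    rng = np.random.default_rng(seed)
    wall = box(0.0, 0.0, width, height)          # the intact wall section
    pts = rng.uniform([0, 0], [width, height], size=(profile["num_seeds"], 2))
    # Compress along the grain before building the Voronoi diagram, then
    # stretch the cells back out, so "wood" yields elongated splinters.
    s = profile["grain_stretch"]
    cells = voronoi_diagram(MultiPoint([(x / s, y) for x, y in pts]))
    shards = []
    for cell in cells.geoms:
        restored = affinity.scale(cell, xfact=s, yfact=1.0, origin=(0, 0))
        piece = restored.intersection(wall)      # clip shard to the wall
        if not piece.is_empty:
            shards.append(piece)
    return shards  # these would go to a rigid-body solver as debris

print(len(fracture(4.0, 2.5, "wood")), "wood shards")
print(len(fracture(4.0, 2.5, "brick")), "brick shards")
```

The point of the sketch is the division of labor McGaugh describes: nobody models the splinters, and swapping “wood” for “brick” changes the break pattern without any re-modeling.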
What does this approach get you that the other approach doesn’t? Is it quicker or easier in the long run?
KM: Well, because of the amount of work you have to do, setting up something to be arbitrarily cut up is quite intense. It also means you have to be a lot more physically based; you can’t just cheat things by eye. It’s a more daunting task, but it’s one we knew would pay off.
“I feel like that’s a lot of my job. Facilitating [filmmakers] to be as creative as possible.”
One example: in one of the shots, Hester’s sliding down that roof, and there’s a big wide-open space of a building, and it’s like, “Well, we probably should hit it with something.” We were talking to the animators and said, “How about you just hit it with a saw?” The effects supervisor was like, “What’s gonna happen?” And it’s like, “Well, let’s see!” So they hit it with a saw, they ran the simulation, and it looked great.
It always needs cleanup. You’ll always have to go in, because you get these bits spinning real fast or flying off too much, so there’s definitely a lot of work that goes into cleaning it up. But that would happen no matter what. So it’s a trade-off, and this was one case where we just knew the benefit would be there.
DY: You’re also allowing the filmmakers the creative freedom to change things.
KM: I feel like that’s a lot of my job. Facilitating those that are a lot more creative, or in more creative positions than me, to be as creative as possible. Be it the filmmakers or the animation team.
Is the move to procedurally generated animation the general direction of the visual effects industry at large?
KM: I think everyone’s moving in that direction, increasingly. It’s a huge investment up front, and you have to know the payoff’s going to be there. If you end up having just one destruction shot, it’s not worth it. But we knew this whole sequence was going to run across multiple shots and needed a lot of choreography across those shots, with live-action plates as well as digi-doubles and moving cameras. We were confident that the payoff would be there, so we invested in the upfront work.
When to ignore motion capture
Weta is known for its motion-capture characters, like Gollum in the Lord of the Rings films and Caesar from Planet of the Apes. In Mortal Engines, a CG character named Shrike seemed like a natural candidate for the same approach. Portrayed by Stephen Lang (Avatar), Shrike is a cyborg assassin with glowing green eyes. Originally, he was going to have a static, non-expressive face, but those plans changed, leaving Shrike with some remarkably nuanced, subtle facial expressions. To create him, the filmmakers chose to forgo modern mo-cap techniques, instead empowering the film’s animators to craft his distinct movement by hand using traditional keyframe animation.
When I think of Weta creating a CG character, I immediately think motion capture. Why did you decide not to go in that direction with Shrike?
DY: It was a big question mark whether we were going to motion-capture or not. Initially, I wanted to push it toward motion capture. Keyframing actually involves a lot of intensive work, and it’s artist-driven as well. I didn’t even know what kind of crew I was going to have, whether I was going to have strong keyframers or not. So that was a bit of a worry.
I have five guys who do amazing keyframe motion, who can make it look real. I have another 15 guys who are mid-level; they take a little while to make this stuff look real, and I have to pull them through. When you have motion capture, you have this great base to work from, so all those guys have an easier time and can push a lot faster. If we don’t have any motion capture, you’re starting from scratch, so all that time I need to spend to make things look real takes much longer.
“That movement was supposed to be stuff that you couldn’t actually do with motion-capture.”
That’s why I wanted to go the motion-capture route, until I started talking more with Christian and realized, “Oh, it’s just going to be the same amount of time.” We were going to capture this stuff, which is a big, front-loaded expense, because you’re hiring actors. It’s literally like a little film set: you’re hiring an AD, a whole crew. It costs quite a bit. And then I started realizing, “We’re going to have to keyframe on top of the motion capture, because Christian doesn’t want the way Shrike moves to look like motion capture.” So that cost argument went out the window. That’s the main reason we went keyframe: that movement was supposed to be stuff you couldn’t actually do with motion capture.
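For readers unfamiliar with the distinction Yoo is drawing: keyframing means an animator authors values at a handful of frames, and the software interpolates everything in between, with no capture data underneath. Here is a toy curve evaluator in Python to show the mechanics; the cubic ease and the “brow” channel are invented for illustration, not any studio’s actual rig tooling.

```python
# Toy keyframe evaluation: hand-set (frame, value) pairs with cubic-eased
# in-betweens. Every pose comes from the artist, not from capture data.
def smoothstep(t):
    """Cubic ease-in/ease-out between 0 and 1."""
    return t * t * (3.0 - 2.0 * t)

def evaluate(keys, frame):
    """keys: (frame, value) pairs, sorted by frame, set by hand."""
    if frame <= keys[0][0]:
        return keys[0][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + (v1 - v0) * smoothstep(t)
    return keys[-1][1]

# A hypothetical brow channel: neutral, slow raise, then a sharp drop.
brow = [(0, 0.0), (24, 0.8), (30, 0.1)]
print([round(evaluate(brow, f), 3) for f in range(0, 36, 6)])
```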
So how does Stephen Lang’s performance play into the character? Is he just doing the voice and facial performance?
DY: His facial performance was a bit limited because he thought Shrike was going to be —
KM: Shrike didn’t have a face when we were shooting. Lang was trying to pretend he didn’t have any facial articulation himself.
DY: Right. But it’s all his voice; it’s all him there. So that was definitely something we needed to look at as reference.
Eyes and skies
There’s a scene near the end of the film where Shrike has a strong emotional moment with Hester, and it’s created almost entirely with his eyes and the way his brow moves. It’s remarkably lifelike. How did that come together, since you didn’t have that motion-capture base to fall back on?
DY: That is the biggest problem. You’re literally trying to go, “Okay, I want to make this performance believable.” If you’re filming yourself and you’re acting pretty poorly, then you’re having a hard time. It’s great when you have an acting reference like Andy Serkis. On the Planet of the Apes movies, we used Andy Serkis’ acting reference and tried to get Caesar to hit the same moments he was hitting. It’s not one-to-one with the facial data we get; we’re actually trying to get Caesar to feel the same way Andy does on those particular frames, or in those particular moments.
So with Shrike, some of his performance wasn’t filmed, or Stephen was trying to be too stoic when he needed to act more, and then we had to make it up ourselves. I’d sit down with a facial animator, and we’d just talk it through and try to figure out how these emotional beats would hit. And that’s literally looking in the mirror and trying to find your performance.
“We pick up such subtlety in the human face that anything that’s too exaggerated you instantly think is fake.”
One good thing is that we’re not actors, who have to deliver the performance on film. We can keep looking: “Okay, when I do this, it feels more real.” We’re just playing it back and forth and trying to find those performance beats. As humans, we pick up such subtlety in the human face that anything that’s too exaggerated, you instantly think is fake. So it’s just finding that subtle beat where it starts to feel real.
Were there any specific moments you had to pull back on?
DY: It’s usually overacting, because overacting is the easiest acting you can do. What actor can’t overact, right? [Laughs] It’s the ones that have these believable performances; those are the guys that get paid all the money. That’s one thing I needed to tone down with the animators, because animators love to overact.
How did the design of Shrike’s eyes come about? When you first see them, they’re very bright, almost cartoonish. But in his final moments, he has a lot more nuance.
DY: Christian really wanted to push that with these blasting eyes in the beginning. That was stalker mode: the more intense he feels, the more intense the glow. And that actually created this awesome ambiguity with him. You didn’t relate to him at all. I’m going to reference the movie Top Gun: all the Russians in Top Gun had these helmets, so you didn’t care if they died. It’s kind of detaching that emotion. But once you start seeing them as people, it’s like, “Oh wait, there’s a person behind there.”
So in those flashbacks, when you first see Shrike without his eyes blasting, you start connecting to him. Then in the scenes where he’s dying, that glow starts disappearing, so you connect with him again. That was purposefully done. Christian would savor those moments. Even blinks: he didn’t want Shrike to blink until the right moment. So we were trying to nail the right scene, the right shot, for when he’s supposed to blink and emote and connect Shrike with the audience.
There are always technological breakthroughs on movies like this, because you’re trying to do so many things. Was there anything you wanted to do, but couldn’t pull off?
KM: For me, yes. We were doing all the exterior stuff in the Great Hunting Ground for the chase scene. We have a new rendering technology that lets us do a physical simulation of the atmosphere. For the whole sky, we aren’t telling the computer, “We’ve got this blue sky.” We actually say, “We have this air quality, and we have a sun out here,” and it creates the whole sky, all the aerial perspective, all the god-rays. But it doesn’t work with clouds. We started working on a way of potentially generating clouds, so we’d get proper cloud god-rays and shadows on the ground, and it would have given us a lot more flexibility in lighting.
We usually work with high-dynamic-range images of skies that have a sun “baked” into them, or at least the effects of the sun. So as soon as you start moving your CG sun away from that, you get a disconnect between the look of the sky and the lighting of your CG terrain. I wanted to tie those two together by having a fully procedural sky, including clouds, and we just didn’t quite get there.
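The disconnect McGaugh describes is easy to picture in code. Below is a toy, single-scatter Rayleigh sky in Python, in the spirit of what he describes but nothing like Weta’s actual renderer: because the sun direction is an input rather than something baked into an image, moving the sun changes the whole sky, and by extension the lighting, consistently. The per-channel scattering coefficients are standard textbook RGB values; everything else is an invented simplification.

```python
# Toy procedural sky: sun direction in, sky colour out. Moving sun_dir
# keeps the sky and the lighting consistent, unlike a baked HDR sky image.
import numpy as np

BETA = np.array([5.8e-6, 13.5e-6, 33.1e-6])  # Rayleigh scattering, RGB, m^-1

def sky_radiance(view_dir, sun_dir, air_quality=1.0):
    """Crude clear-sky colour for one view ray; no clouds, single scatter."""
    view = view_dir / np.linalg.norm(view_dir)
    sun = sun_dir / np.linalg.norm(sun_dir)
    cos_theta = float(view @ sun)
    phase = 3.0 / (16.0 * np.pi) * (1.0 + cos_theta ** 2)  # Rayleigh phase
    airmass = 1.0 / max(view[2], 1e-3)   # longer path near the horizon
    beta = BETA * air_quality            # "air quality" scales scattering
    transmittance = np.exp(-beta * 8000.0 * airmass)  # ~8 km scale height
    return phase * beta * airmass * transmittance * 1e6  # arbitrary scale

# Same view ray, two sun positions: the colour follows the sun for free.
view = np.array([0.3, 0.0, 0.5])
print(sky_radiance(view, np.array([0.0, 0.0, 1.0])))   # sun overhead
print(sky_radiance(view, np.array([0.8, 0.0, 0.1])))   # sun near horizon
```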
Mortal Engines is now playing in theaters.