Behind the scenes with Weta Digital and the stunning mandrill bridge chase
One of the most intense action sequences in Jake Kasdan’s Jumanji: The Next Level sees the main cast – including Dwayne ‘The Rock’ Johnson – attempting to evade a troop of vicious mandrills while also traversing a series of precarious rope bridges.
The sequence would involve a wealth of visual effects: the mandrills themselves, the mountainous landscapes, and the bridges (whose simulation became a touchstone feature of the scene).
A major challenge for the VFX studio behind the work, Weta Digital, became marrying up mandrill animation with the constantly moving bridges. Here’s a look at how both the live-action for the mandrill chase was shot and how Weta solved the complex VFX.
How to shoot a scene…on bridges
Production visual effects supervisor Mark Breakspear told befores & afters that the mandrill chase was actually intended to take place in the first film, Welcome to the Jungle, but there hadn’t been time and budget to tackle it back then. This time around, a deliberately terrifying sequence was imagined.
Planning for the chase began with an early environment layout. This was taken in by previs studio The Third Floor. “They started putting bridges in and coming up with the look of the environment,” says Breakspear. “Then we started having discussions about, where do they run, what are some of the things that can occur? Todd Constantine over at Third Floor and his team did an absolutely amazing job of telling a story about what could be happening and then refining and refining and refining.”
Live action with the main actors was mostly shot on an interior bluescreen set in Atlanta, although the ‘A’ and ‘B’ sides of the chase were acquired at Lithonia Quarry just outside the city. “The bridges themselves were practical,” outlines Breakspear. “We built 40 foot, 30 foot and 25 foot versions, and they were on Lazy Susans so we could turn them around and we could get interactive light.”
Real mandrills are surprisingly…boring
Although the mandrills in this sequence were vicious and intentionally scary, it turns out mandrills in real life are much more sedentary. “We tried to dig up as much mandrill reference as we could but it’s surprisingly boring,” notes Weta Digital visual effects supervisor Ken McGaugh. “There’s next to no reference of mandrills being aggressive at all. They usually just slowly walk around and sit down and scratch themselves. Our mandrills are actually distinctly different to real mandrills. They’re based off of concept art, not from reference. Our mandrills actually have tails, and the proportions are quite a bit different. We just had to go with our gut on making them look aggressive – we referenced a lot more baboons because baboons are more aggressive.”
To help inform mandrill animation, Weta Digital capitalized on its extensive experience in bringing primates to life by relying on motion capture. “We have some motion capture artists here who have a lot of experience with us using special arm extenders and rigs to achieve primate actions,” says McGaugh. “The Apes experience lent itself quite a bit to the mandrills as well, even though they’re monkeys, not apes. We mostly used mocap for all the action that takes place on the ground at the end of the scene.”
That experience, of course, also translated into the final animation, as well as for things like fur simulation. Here, though, Weta Digital did decide to resurrect a previously used technology that had been dormant at the studio for a little while. “This was for rendering our fur in a way that lets you consolidate a bunch of fur strands into a single strand, but shade them as if they are a bunch of fur strands,” details McGaugh. “We used it quite a bit on Alvin and the Chipmunks: The Road Chip, and I don’t think it was used much between Alvin and the Chipmunks and Jumanji, largely because in the interim we’d switched to Katana and it hadn’t yet been ported to Katana.”
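Weta Digital’s fur technology is proprietary, but the underlying idea McGaugh describes – representing many strands with one renderable proxy that still shades like the whole bundle – can be sketched in a few lines. Everything here (the class name, the square-root width heuristic) is an illustrative assumption, not the studio’s implementation.

```python
from dataclasses import dataclass

@dataclass
class ConsolidatedStrand:
    """One renderable curve standing in for n_strands real hairs (illustrative)."""
    points: list      # control points of the representative curve
    n_strands: int    # how many real strands this proxy replaces
    width: float      # width of a single real strand

    @property
    def effective_width(self):
        # Widen the proxy so it covers roughly the same screen area as the
        # bundle it replaces: area scales with count, width with its sqrt.
        return self.width * self.n_strands ** 0.5

clump = ConsolidatedStrand(points=[(0, 0), (0, 1)], n_strands=16, width=0.01)
print(round(clump.effective_width, 4))
```

The shader would then treat the single wide curve as sixteen thin ones, cutting geometry counts without visibly thinning the coat.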
Bridges, and moving bridges
While compositing live-action actors shot on bluescreen and generating photoreal mandrills was a tough challenge, Weta Digital’s most significant hurdle for the sequence came in the form of generating so many mandrills – and actors – on many different bridges for the shots. This was further complicated by the fact that the rope suspension bridges had to appear to have the right kind of motion as people and animals jumped or ran on them.
As Breakspear describes: “It’s not just a case of having actors on a bridge pretending mandrills are chasing them. There has to be replacement of the people, replacement of the bridges and addition of the mandrills so that the whole thing dynamically feels accurate. Otherwise it would just look like the mandrills didn’t have any weight.”
Some of that action happens in the foreground, while a significant portion also takes place in the background. Traditionally, Weta Digital might have used crowd simulation tool Massive for the background mandrills. “But,” says McGaugh, “here the bridges added an extra twist to all this in that they needed to be dynamic. They needed to respond to the actions of the monkeys on them and vice versa. While we actually did do some tests in Massive that achieved that, it wouldn’t hold up that close to camera and we knew that we needed a solution that would work for close to camera as well as in the background.”
The solution came in the form of animating vignettes of the mandrills running across bridges, being idle on bridges or doing other various things. “We had a trailer early on that really pushed this idea,” recounts McGaugh. “In that trailer we had exactly one vignette that was about eight monkeys running across a bridge. The monkeys land on the bridge, run across and jump off the other side. Just by changing the timings of that one vignette, we were able to fill out all the shots in the trailer. That proved that the idea could work.”
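The ‘one vignette, many timings’ idea amounts to remapping shot time into clip time: the vignette stores baked animation, and each placement just offsets and scales the lookup. A minimal sketch of that retiming, with entirely hypothetical names and data:

```python
from dataclasses import dataclass

@dataclass
class Vignette:
    """A baked animation clip: frame number -> pose data (illustrative)."""
    samples: dict

    def pose_at(self, shot_frame, start_frame, speed=1.0):
        """Map a shot frame into this vignette's local time and fetch the pose."""
        local = int((shot_frame - start_frame) * speed)
        frames = sorted(self.samples)
        # Clamp to the baked range so placements before/after the clip hold a pose.
        local = max(frames[0], min(frames[-1], local))
        return self.samples[local]

# One eight-monkey bridge-run clip, reused at three different timings in a shot:
run = Vignette(samples={f: f"pose_{f}" for f in range(48)})
placements = [dict(start_frame=1001),
              dict(start_frame=1012),
              dict(start_frame=1030, speed=0.8)]
for p in placements:
    print(run.pose_at(1020, **p))
```

Changing only `start_frame` and `speed` per placement is what let one baked run fill out every shot in the trailer.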
However, the vignettes idea was also a gamble for Weta Digital since it involved spending a large portion of the animation budget in a way that did not allow the VFX studio – or the client – to see any results until later on.
“We basically had to go into hibernation as far as showing things to the client,” says McGaugh. “Fortunately, we were able to show a vignette on its own and say, ‘This is where we are planning on using this shot.’ The client was very, very good at viewing vignettes, telling us the things that in general they liked and didn’t like without looking through the lens of an actual shot or an edit.”
“In the end,” adds McGaugh, “we couldn’t have done the sequence without the vignettes. It’s one thing for an animator to go take some library animation and place it in the shot, but in this case a vignette represented a dozen monkeys, sometimes two or three bridges, along with all the dynamics on the bridges, including secondary dynamics on the ropes and FX dust and debris coming off the bridges. Those had to be packaged up in a way that could be placed and displayed very, very quickly and easily by the animators.”
The bridge vignette pipeline
Weta Digital leveraged its internal Atlas scene graph and Gazebo GPU rendering solution to help make the bridge vignette pipeline possible. The Atlas scene graph system is independent of any DCC such as MotionBuilder or Maya, and is tuned to be very fast at displaying skinned models and proxies. For the mandrills chase, the bridge vignettes were packaged up in a particular way. Here, McGaugh breaks down the process, step-by-step.
1. We would have a static bridge or group of bridges that had no dynamics on them, and the animator would animate the monkeys doing the choreography across these bridges, jumping from bridge to bridge, swinging, stopping on an end, whatever was required for that vignette.
2. That would then go to our motion edit team and they would use a ragdoll piece of software called Euphoria. It allowed us to actually take the animation from the animators, let it drive a simulation on the bridge, and it automatically conformed the monkeys to the dynamic bridge. The input was monkeys running over a static bridge and the output was monkeys running over a dynamic bridge where the dynamics are driven by the monkey animation and the monkeys are sticking to the bridge or the ropes.
3. Then it would go to the creatures team and the creatures team would run secondary dynamics. The big handhold ropes and the floor of the bridges and the posts were all part of a puppet that would get simulated by Euphoria. But the smaller ropes connecting everything together were secondary simulations done by our creatures department.
4. It then all gets put back together and goes back to the motion edit department, which does a QC pass to make sure all the foot contacts are clean and nothing’s crashing through anything else.
5. Then it goes to the FX team, which runs simulations based off of the motion of the bridge: little bits of debris, say, little bits of moss falling off and dust being kicked up.
6. The whole lot would then be packaged up as its own vignette asset. And then the animator – the original animator that was animating the monkeys in the vignette – could then take the whole vignette and very simply place it around and change the timings of it. This leveraged our Atlas scene graph system and Gazebo renderer to display it very quickly and efficiently. Once that was all in place, it truly then did become a lot easier for an animator to place bridges full of monkeys than it was to just animate a single monkey.
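The six steps above form a linear hand-off between departments. A toy sketch of that flow, with each stage function standing in for one department pass (all names and payloads are hypothetical, not Weta’s pipeline):

```python
# Illustrative only: each stage stands in for a department pass from the article.
def static_animation(v):   v["monkeys"] = "animated on static bridge"; return v
def euphoria_conform(v):   v["bridge"] = "dynamics driven by monkey animation"; return v
def secondary_dynamics(v): v["small_ropes"] = "creatures-team simulation"; return v
def motion_edit_qc(v):     v["qc"] = "foot contacts clean, no interpenetration"; return v
def fx_pass(v):            v["fx"] = "dust and debris driven by bridge motion"; return v
def package(v):            v["asset"] = "retimable vignette"; return v

PIPELINE = [static_animation, euphoria_conform, secondary_dynamics,
            motion_edit_qc, fx_pass, package]

vignette = {}
for stage in PIPELINE:
    vignette = stage(vignette)
print(vignette["asset"])  # the packaged asset the animator can place and retime
```

The key design point is that everything downstream of step 1 is baked into the final asset, so placing a vignette costs the animator nothing in simulation time.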
A digital Rock
Of course, it’s not just the mandrills that do all the running and jumping in the scene. The main characters also leap, swing and, at one point, Dwayne Johnson’s character picks up a rope bridge to shake the animals off. This required, at times, digital doubles, face replacements or digital take-overs to pull off certain stunts or for characters to appear in background action.
“We had digital doubles for every single performer in the sequence, all five of them,” states McGaugh. “They all had different requirements. For example, Mouse (Kevin Hart) only needed to be a really small digital double, but from the waist down he needed to be very high-res because we have those shots of him where he’s crashed through the bridge and his lower half is hanging below the bridge. He’s actually on a crash mat there and so we had to completely replace his lower half with CG and that required a quite high-res lower half to his body.”
Weta Digital has in the past done several films involving Dwayne Johnson, so they already had on hand detailed scans of the actor. Here, though, they needed to push his facial expressions for face replacement shots even further than before. “He’s really straining when he’s lifting those ropes on that bridge to throw the monkeys off,” says McGaugh. “We have lots of reference of him doing his workouts where he’s got a massive vein in his forehead and he’s really straining; he’s pulling his cheeks back really far. Our existing Rock face rig would completely flatten his face out. It wouldn’t look like him, it would go completely off model. So we had to build in a lot more extreme face shapes for that.”
Bridges in the sky
The backgrounds for the mandrill chase are based on the Zhangjiajie National Forest Park in China. This also happened to be the basis of many of the floating rock spire scenes in Avatar, and that meant Weta Digital was already well-versed in what those kinds of backgrounds looked like.
To be as efficient as possible with so many background mountain elements, Weta Digital built up rock spires out of kit pieces. “One of the benefits of doing that,” notes McGaugh, “is that you can instance everything and it makes it quite a bit more efficient for rendering. You can quickly bash things together and change them if you have to.”
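The rendering win McGaugh describes comes from instancing: the heavy geometry of each kit piece is stored once, and every placement in the landscape is just a reference plus a transform. A toy illustration of the idea (kit names invented):

```python
# Toy sketch of kitbash instancing: heavy geometry stored once,
# each spire placement is just a (piece_id, transform) pair.
KIT = {
    "spire_base":  "~2M polys of rock geometry",   # stored once
    "spire_mid":   "~1M polys",
    "spire_crown": "~500k polys",
}

# A mountain range bashed together from lightweight instances:
instances = [
    ("spire_base",  (0, 0, 0)),
    ("spire_mid",   (0, 40, 0)),
    ("spire_base",  (120, 0, 30)),    # same geometry, new transform
    ("spire_crown", (120, 80, 30)),
]

unique_geo = {piece for piece, _ in instances}
print(f"{len(instances)} placements, {len(unique_geo)} unique meshes in memory")
```

Swapping a piece or nudging a transform rebashes the spire without touching the stored geometry, which is what makes iteration cheap.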
The downside, says McGaugh, is that it can be difficult to get what feels like a coherent look across a spire. “If it’s kitbashed together with a bunch of pieces all crashing into each other with a bunch of foliage added onto it, you don’t quite get the integration. So we developed some comp-based weathering tools that would take renders of the spires in the cliffs and the rock faces. By the time it’s come through our Manuka renderer, it’s all been assembled. You have all the data, you know that there is a tree on a little ledge right ‘here’. Therefore you can simulate some staining that happens underneath the tree where the tree contacts it.”
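The weathering idea – knowing where foliage contacts rock in the assembled render and painting staining that falls away below each contact – can be sketched as a simple screen-space mask pass. This is a hypothetical toy on a tiny grid, not Weta’s compositing tool:

```python
# Illustrative comp-style weathering: given screen-space contact points for
# foliage (available as render metadata), paint a falloff stain below each.
W, H = 8, 6
stain = [[0.0] * W for _ in range(H)]
tree_contacts = [(2, 1), (5, 3)]  # (x, y) pixels where a tree meets the rock

for cx, cy in tree_contacts:
    for y in range(cy, H):               # staining runs downward from the contact
        fade = max(0.0, 1.0 - 0.4 * (y - cy))
        stain[y][cx] = max(stain[y][cx], fade)

# 'stain' would then be multiplied into the rock render as a darkening mask.
```

Because the pass runs on the assembled render, it integrates kitbashed pieces that were never modeled as one continuous surface.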
For cloud formations and the necessary volumetrics high above the ground, the VFX studio relied heavily on the Nuke plug-in Eddy from VortechsFX, which was actually developed by artists from Weta Digital.
“We leveraged Eddy on this show more than we’ve done on previous shows I’ve worked on,” comments McGaugh. “However, we did next to no simulations in Eddy. We used Eddy mostly as a layout and pre-rendering tool for pre-baked volumes. Most of the volumes either came from our library or they were generated by FX. In some situations, we used Eddy to assemble and create these composite atmospherics and volumes that we would then render out to brick maps and we actually would use our old Nuke brick map pipeline – it didn’t have a lot of overheads that Eddy has with it, so it was able to run faster.”
While audiences took in the vast rock spire backgrounds, there were also other elements to the environment that might not at first seem obvious. As Breakspear describes, a bunch of hidden animals appear in the mountainscapes.
“There is a monkey in the background – a big stone monkey – on the side of a mountain that’s very obvious when you see it. And then there’s also slightly less obvious animals – there’s a rhino, there is an alligator with its mouth open, and there’s an elephant. That’s something Jake loves to do, is to hide animals in this whole movie. If you’re not looking, just watching the actors, you’ll never see them. But if you start looking around on a second viewing, it’s a cool little thing.”
For McGaugh, working on The Next Level proved to be an exciting journey, one that became slightly tougher once it was realized that it wasn’t just a CG creature-filled sequence. “It became obvious that the biggest challenge we had on the sequence was actually the bridges themselves,” he says. “They ended up being a huge character in the sequence.”
McGaugh also recalled Mark Breakspear joking about the complexity of the bridges work, especially when Weta Digital’s CG bridges ended up being preferred over the practical bridges that had been used on set. “Mark commented that that was the reason why they came to Weta anyway,” says McGaugh, “because we’re known for our bridges, not for our monkeys or our environments. Just our bridges…”