Getting to the final shot

Men in Black: International

Following along with DNEG’s design and FX sims process for the twins and Vungus in ‘Men in Black: International’.

When VFX teams come onto a film, there is often concept art and previs in place, and many decisions already made about how a particular character or shot should look. But oftentimes the VFX crew is still heavily involved in devising the final look of the effect.

Take, for example, scenes in Men in Black: International in which agents M and H take on alien twins who are able to manifest as pure energy. What did that energy look like? What happens when it gets hit by MIB weaponry? Or the alien Vungus, which had been designed but then went through several possible design changes – how does a VFX studio react to changes like this?

These were challenges that DNEG visual effects supervisor Alessandro Ongaro and his team, who worked under overall VFX supes Jerome Chen and Daniel Kramer, faced on MIB: International, which he outlines to befores & afters.

The Twins, and why particles worked better than fluids

The (short) brief: They had this idea that the twins were made of pure energy, but at the same time there was this idea about galaxies. You see them in human form but you also see them as pure energy. Then when they get shot, the idea is that their bodies decompose in the nebula and then get sucked back in. That definitely wasn’t something easy to do.

Concepts: We did a lot of 2D concept work for the twins, which is not always the best way to start because you can do whatever you want in 2D but then you have to translate this concept into 3D. But we gave our artists a lot of freedom by saying, ‘Come up with whatever crazy idea you want, and then we will figure out how to do it.’

Nebula equals fluid sims? Nope: I knew from the very beginning that doing a fluid simulation wasn’t going to be the right approach. Based on past experience, I’ve always found that particle simulations are the best solution to these ‘magical effects’ problems, because you can literally control the position of each individual particle if you want. Then you can render millions and millions of particles – you can get the fluid look just by adding to the density of it.

Particles it is: We started first with a nebula look and then from that point on figured out how we were going to do the simulation. The first tests, they were showing me things done with the fluids, and I always kept saying, ‘Guys, you can keep trying, but I guarantee you’ll never get the look.’ In fact, we had to go with a particle solution to get the nice nebula look.

Tools for the job: We used Houdini to do the sim. The basic effect was an advection through noise. So you have the twins plate, and we did a rotomation on them. Then we would surround them with a noise field, which we used to advect particles through. The nebula was forming in a specific way: it was based on the tracers from the weapons M and H were firing.

We had to create custom fields and divergence fields that would allow us to really decide, okay, there’s a tracer hitting the shoulder, let’s define how fast and how much it spreads out, to control the movement of the particles. At the same time, the particles had to be aware of the characters themselves, so they could avoid them. Especially when they were being sucked back in, they had to return to where they came from. With the effects team, we basically built a custom solver in Houdini that used divergence fields, velocities and a collision field to literally drive the simulation.
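The core idea described above – advecting particles through a noise field while a collision field steers them away from the characters – can be sketched in a few lines. This is a hypothetical, heavily simplified stand-in written in plain Python, not DNEG’s actual Houdini solver; the noise function, the spherical character proxy and all parameter values are invented for illustration.

```python
import math
import random

def noise_velocity(x, y, z, t=0.0):
    """A cheap, smooth pseudo-noise velocity field (a stand-in for the
    kind of noise field used to advect particles in Houdini)."""
    return (
        math.sin(1.7 * y + 0.5 * t) + 0.3 * math.cos(2.3 * z),
        math.sin(1.3 * z) + 0.3 * math.cos(1.9 * x + 0.5 * t),
        math.sin(2.1 * x) + 0.3 * math.cos(1.1 * y),
    )

def advect(particles, center, radius, dt=0.05, steps=40):
    """Advect particles through the noise field while a simple collision
    field pushes them radially away from a spherical 'character' proxy."""
    for _ in range(steps):
        for p in particles:
            vx, vy, vz = noise_velocity(*p)
            # Collision/avoidance term: the closer a particle gets to the
            # character proxy, the harder it is pushed outward.
            dx, dy, dz = p[0] - center[0], p[1] - center[1], p[2] - center[2]
            d = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-6
            if d < radius:
                push = (radius - d) / radius * 5.0
                vx += push * dx / d
                vy += push * dy / d
                vz += push * dz / d
            # Forward Euler integration of the combined velocity.
            p[0] += vx * dt
            p[1] += vy * dt
            p[2] += vz * dt
    return particles

random.seed(7)
pts = [[random.uniform(-2.0, 2.0) for _ in range(3)] for _ in range(200)]
advect(pts, center=(0.0, 0.0, 0.0), radius=0.5)
```

In a production setup the noise field would be replaced with art-directed velocity and divergence fields (driven here by the weapon tracers), and the spherical proxy with a proper signed-distance collision field built from the rotomated characters.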

Last minute Vungus

Design iterations: Vungus – the full name is Vungus the Ugly – was actually designed early on by Weta Workshop when we started on this show last September, but for a long time they weren’t sure if Vungus should look like an alien or more like a human. As you can imagine, those are big design differences for us. The original design, which is what we ended up building, had protruding eyes like a frog’s, with a mushroom head and a very wide mouth.

Initially, they thought he might work better if he had more human eyes. We couldn’t really start building it until the design was approved. We started from what Weta did and started to humanize it a little more, changing the way the eyes were and the eye sockets and the mouth. But eventually they decided to go back to basically what was originally designed.

Actor: On top of that, early on they shot the sequence with the actor playing Vungus. Initially it was just a stand-in, because they were still looking for the right actor for the performance, but they had to shoot it, so they used a stand-in. We were then told that most likely they were going to re-shoot the performance of Vungus.

Now, this limited the technology that we could use because at the very beginning we thought about doing performance capture with tracking markers. But, again, because we weren’t sure if the actor was changing or not, we couldn’t afford to build two separate facial rigs.

Design loop: Eventually, they decided to go back to the original Vungus design. They actually gave the concept to Aaron Sims to do a few more takes on it. And then what that all meant for us was we just had a lot to do in not much time.

Building Vungus: What we had to do was of course build all the blend shapes, based on FACS. We actually used one of the facial rigs we have at DNEG, which is a full facial asset. We took the Vungus face and transferred the main face shapes from a human to the alien, which gave us a set of starting points. It was actually a bit surprising how nicely we were able to re-target the blend shapes from a human to the character.

Then of course from there we started building the hero blend shapes. I think we built about 350 unique blend shapes, which were then split in four. So we had about 2,400 blend shapes in the facial rig.
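A facial rig like this boils down to a neutral mesh plus weighted per-vertex shape deltas, with each sculpted hero shape split by a spatial falloff into regional variants (which is how 350 unique shapes can fan out into thousands of rig shapes). A minimal sketch of those two operations, with hypothetical data and function names rather than DNEG’s actual rig:

```python
def evaluate(neutral, shapes, weights):
    """Deform the neutral mesh by a weighted sum of shape deltas.
    neutral: list of (x, y, z); shapes: {name: deltas}; weights: {name: w}."""
    out = [list(v) for v in neutral]
    for name, w in weights.items():
        if w == 0.0:
            continue
        for i, (dx, dy, dz) in enumerate(shapes[name]):
            out[i][0] += w * dx
            out[i][1] += w * dy
            out[i][2] += w * dz
    return out

def split_left_right(neutral, deltas, falloff=0.1):
    """Split one sculpted delta set into two regional shapes using a soft
    mask on x: the mask goes from 0 on the -x side to 1 on the +x side,
    so the two halves always sum back to the original shape."""
    side_a, side_b = [], []
    for (x, _, _), (dx, dy, dz) in zip(neutral, deltas):
        t = max(0.0, min(1.0, (x + falloff) / (2.0 * falloff)))
        side_a.append((t * dx, t * dy, t * dz))
        side_b.append(((1 - t) * dx, (1 - t) * dy, (1 - t) * dz))
    return side_a, side_b
```

Because the masks sum to one everywhere, dialing both halves to full weight reproduces the original sculpt exactly, while animators gain independent control over each side of the face.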

Animation: Then we went old-school keyframes. The whole performance of Vungus was done by keyframe animation. We really just matched the performance of the stand-in actor, who they ended up loving, and we used that as a reference. But when you go keyframes it’s really easy to over-animate and get a little too cartoony, and that is something we didn’t want to do. So we tried to stay as natural, as human-natural, as possible.

It was challenging because while we were animating, we were finding problems with the facial rig, so we had to go back to the build department to build new shapes, pass them to the rigging department, and polish new rigs. It was quite the process. Meanwhile, we were also working on the lookdev, so everything was overlapping. It wasn’t the ideal way to work, but in the end I had a great team and we managed to pull it off.
