Hey Thanos, what’s new with you?


The extra tools employed by Weta Digital and Digital Domain in crafting Thanos for ‘Endgame’.

#endgameweek is brought to you by Masters of FX.

Last year, Digital Domain and Weta Digital set some new standards for the translation of live-action actor performances to CG characters when they took Josh Brolin’s motion-captured acting for Avengers: Infinity War’s Thanos and gave it life as a digital being.

Both studios relied on their well-established facial animation pipelines to complete Thanos. Among the impressive tech breakthroughs back then: Digital Domain introduced machine learning techniques into the process, while Weta Digital first translated Brolin’s performance onto a digital facsimile of the actor himself before moving things over to the Thanos puppet.

Now with Endgame, Weta Digital and Digital Domain had a chance to further innovate on their digital character work. Here’s a look at the new approaches they adopted for their respective versions of Thanos in the film.

Digital Domain goes even further with machine learning

For Digital Domain, work on Thanos was very much a continual process between films – in which they re-employed their Masquerade workflow – but one area the studio did concentrate on was to improve efficiencies in tracking Brolin’s face. “That whole process used to be quite a manual thing, and could take up to one or two weeks to track a shot,” states Digital Domain visual effects supervisor Kelly Port. “This time we were able to utilize some machine learning techniques, and we were able to take that down from one or two weeks to just a few hours. That became a more automated process, and that’s huge.”

Digital Domain’s Thanos. © Marvel 2019.

On Infinity War, Digital Domain had used machine learning to up-res the motion capture data on a per-shot basis to ensure the performance included movement at the micro-level. For Endgame, the studio extended its use of machine learning to simplify other procedures surrounding facial tracking. In particular, Digital Domain relied on a new piece of software called Bulls Eye that uses machine learning techniques to automate the 3D marker tracking from the head-mounted camera. It lowered turnaround times significantly (hours instead of days) and allowed the team to dive even further into the detail on Thanos.
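The internals of Bulls Eye haven’t been published, but the general idea behind learned marker tracking can be sketched in a toy way: instead of an artist placing markers frame by frame, a model is fitted once to a set of hand-tracked example frames and then predicts marker positions on new frames automatically. The feature vectors, linear model, and dimensions below are all illustrative assumptions, not Digital Domain’s actual pipeline.

```python
import numpy as np

# Toy stand-in for learned marker tracking. Assumption: each frame is
# summarized by a small feature vector, and the marker's 2D position
# depends (here, linearly) on those features.
rng = np.random.default_rng(0)

true_W = rng.normal(size=(5, 2))            # hidden feature -> (x, y) mapping
features = rng.normal(size=(200, 5))        # 200 "hand-tracked" training frames
positions = features @ true_W + rng.normal(scale=0.01, size=(200, 2))

# "Training": fit the mapping once from the manually tracked frames...
W, *_ = np.linalg.lstsq(features, positions, rcond=None)

# ...after which new frames are tracked automatically rather than by hand.
new_frame = rng.normal(size=(1, 5))
predicted_xy = new_frame @ W
```

The payoff matches what Port describes: the manual effort is front-loaded into training data, and per-shot tracking drops from weeks of hand work to an automated pass.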

“We wanted to really take what we had achieved on the first one, and just find more nuances to show on Thanos,” notes Digital Domain head of animation Jan Philip Cramer. “And, I mean, we thought the last one was really good, but then once you really study the details of the lips, we started introducing lots more controls for the animators to get absolutely the micro-level of the performance out.”

“In the yurt sequence, for example,” adds Cramer, “this was a sequence where we were able to really showcase this because it was a bit of a different Thanos. He was just at home, no longer this aggressive being that we had met before, but rather more humble. And he’s in lots of intimate performances, where you get really close on his face, where you could really show the mechanics of how it looks.”

Weta Digital adds Deep Shapes

As noted, Weta Digital retained its Infinity War Thanos approach of using an ‘actor puppet’, an intermediary stage of the digital Josh Brolin, to deliver Endgame Thanos shots. What this means is that they would first solve the tracks from the head-mounted camera capture of the actor, check this was delivering the right performance on their actor puppet, and then migrate that motion to their digital Thanos model. Disney Research Medusa scans of Brolin were also used to validate the work.

Weta Digital’s Thanos. © Marvel 2019.

In terms of extra Thanos development for Endgame, Weta Digital introduced a few new approaches. “Firstly,” says Weta Digital visual effects supervisor Matt Aitken, “we had felt as we were doing the work on Infinity War that we could improve our ability to control the corners of Thanos’ mouth in a more detailed fashion. We ended up having to patch that for Infinity War, but once the first film was done, we had the time to circle back and fix that.”

“Another thing we did on Endgame that we used for the first time,” adds Aitken, “was implement some tech called Deep Shapes, a methodology for adding another level of complexity to the facial performance in an analytic way. It’s not a simulation, it’s intermediary shapes. When the facial performance is going from one expression to the next, the end points of that transition aren’t changed at all, but the transition itself gets more of the sense of the tissue in the face. It doesn’t involve any extra work from the facial models team, it’s an analytic process, and is available to the facial animation team as they’re working – they get to see this stuff in real-time, they don’t have to dial it in.”
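Weta Digital hasn’t published how Deep Shapes works internally, but the property Aitken describes can be illustrated with a toy blendshape example: the endpoints of a transition between two expressions stay exactly as the animator set them, while the in-between frames pick up extra analytically computed detail. The tiny mesh, the `smile` offsets, and the `bulge` corrective below are all hypothetical stand-ins for illustration only.

```python
import numpy as np

# Hedged sketch of an "in-between" corrective shape. The blend weight w runs
# from 0 (start expression) to 1 (end expression); the corrective term
# 4*w*(1-w) is zero at both endpoints and peaks mid-transition, so the start
# and end poses are untouched — only the transition itself gains detail.
neutral = np.zeros((4, 3))                     # tiny 4-vertex "face"
smile   = neutral + np.array([0.0, 0.1, 0.0])  # target expression offsets
bulge   = np.array([0.02, 0.0, 0.01])          # analytic in-between detail

def transition(w):
    """Blend neutral -> smile, with a corrective that vanishes at w=0 and w=1."""
    base = (1 - w) * neutral + w * smile
    return base + 4 * w * (1 - w) * bulge
```

Because the corrective is a closed-form function of the blend weight rather than a simulation, it can be evaluated in real time as the animators work, which matches Aitken’s description of the technique being available live rather than dialed in afterwards.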

The legacy of Thanos

Both Aitken from Weta Digital, and Port and Cramer from Digital Domain, say they highly valued the ability to work on a significant character on two movies back to back, something not all VFX studios get to do.

“I feel like I know Thanos,” comments Cramer. “I’ve spent a lot of my life now with him, for three years. And then at the end of it all, I was allowed to even kill him, which made for a very nice arc.”
