Machine learning at the VFX studio level

How Digital Domain has adopted deep learning and AI into its visual effects toolsets.

For a number of years now, Digital Domain has been experimenting with and directly implementing machine learning approaches into its visual effects workflows.

Some of its most recent digital human projects, including Ant-Man and The Wasp: Quantumania, She-Hulk: Attorney at Law, and The Quarry, have benefited from machine learning research aiding in the facial capture and facial animation steps.

Here’s an excerpt from issue #11 of befores & afters magazine looking at just some of DD’s work in this area.

Back in 2016, Digital Domain looked to revisit its already heavily used facial capture system, especially to be able to process large amounts of data accurately and quickly. “We felt the best way to do that was to incorporate machine learning into our pipeline,” outlines Lonnie Iannazzo, producer, Technology at Digital Domain. “To this end, we created Masquerade. Masquerade uses an ML system for inferring high-resolution deformations from a sparse marker set. With it, we can drive the high-resolution topology directly with the cleaned-up marker data and have all of the amazing emotive captures come through without limiting an actor’s performance.”
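The core idea described here, learning a mapping from a sparse, cleaned-up marker set to high-resolution mesh deformations, can be sketched as a simple regression. The sketch below uses a toy ridge-regularized linear model in NumPy; the marker counts, vertex counts, synthetic data and function names are illustrative assumptions, not Digital Domain's actual Masquerade system.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MARKERS, N_VERTS, N_SAMPLES = 60, 2000, 500

# Synthetic "training" pairs: sparse marker offsets -> dense vertex offsets.
# A hidden linear blend stands in for real paired capture data.
true_map = rng.normal(size=(N_MARKERS * 3, N_VERTS * 3)) / N_MARKERS
X = rng.normal(size=(N_SAMPLES, N_MARKERS * 3))   # per-frame marker deltas
Y = X @ true_map                                  # dense deformations

# Fit a ridge-regularized linear model (a stand-in for the learned model).
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def infer_dense(markers_flat):
    """Map one frame of cleaned-up marker data to dense vertex offsets."""
    return markers_flat @ W

frame = rng.normal(size=(N_MARKERS * 3,))
dense = infer_dense(frame)
print(dense.shape)  # (6000,) -> reshape to (N_VERTS, 3) for the mesh
```

In practice a nonlinear model trained on an actor-specific dataset would replace the linear fit, but the interface is the same: markers in, full-resolution topology out.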


The studio had success with Masquerade for offline capture on projects such as Avengers: Infinity War and Avengers: Endgame, but was then keen to explore its ability to work for real-time facial capture. “That led our Digital Human Group to develop Masquerade Live,” states Iannazzo. “With it, an actor’s facial motion is captured using a single camera on a helmet. The images derived then drive several neural networks that intelligently utilize machine learning to decode the actor’s facial movements into 3D geometry, blood flow maps and animated fine wrinkles, all at 60 frames per second.”
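The per-frame decoding described above (one helmet-camera image in; geometry, blood flow and wrinkle outputs out) can be sketched as a multi-head network forward pass. The layer sizes, head names and random weights below are illustrative stand-ins, not Digital Domain's actual Masquerade Live networks.

```python
import numpy as np

rng = np.random.default_rng(1)

IMG = 48 * 48          # flattened helmet-cam frame (toy resolution)
HID = 128
N_VERTS = 500

# Random weights stand in for trained networks; in practice each head
# would be trained on captured data for a specific actor.
W1 = rng.normal(size=(IMG, HID)) * 0.01
heads = {
    "geometry":  rng.normal(size=(HID, N_VERTS * 3)) * 0.01,
    "bloodflow": rng.normal(size=(HID, 32 * 32)) * 0.01,   # toy map size
    "wrinkles":  rng.normal(size=(HID, 16)) * 0.01,        # wrinkle controls
}

def decode_frame(image_flat):
    """One real-time inference step: image -> geometry + appearance maps."""
    h = np.maximum(image_flat @ W1, 0.0)   # ReLU hidden layer
    return {name: h @ W for name, W in heads.items()}

out = decode_frame(rng.normal(size=(IMG,)))
print({k: v.shape for k, v in out.items()})
```

The appeal of this shape of system is that each frame is a fixed, small amount of computation, which is what makes a steady 60 fps budget feasible.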

Beyond the ground-up creation of CG digital humans with machine learning techniques, Digital Domain has also looked to neural rendering as a way to create more believable digital humans, starting with real images or photography. This led to the creation of Charlatan. “Charlatan leverages machine learning’s ability to convert one image to another,” explains Iannazzo. “It features an advanced suite of digital-aging tools, and it can serve real-time and even mask removal applications.”
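The underlying operation Iannazzo describes, converting one image to another by learning from paired examples, can be illustrated with a toy image-to-image fit. Real tools of this kind use convolutional translation networks; the linear model, image size and synthetic "aging" transform below are all assumptions for illustration, not Charlatan itself.

```python
import numpy as np

rng = np.random.default_rng(2)

H = W = 16                      # toy image size
N = 400                         # number of paired training images

# Paired data: "source" faces and corresponding "target" (e.g. aged) faces.
# A fixed hidden transform stands in for a real paired dataset.
src = rng.uniform(size=(N, H * W))
aging = np.eye(H * W) + rng.normal(size=(H * W, H * W)) * 0.05
tgt = src @ aging

# Fit the image->image mapping by least squares (a linear stand-in for
# the learned translation network).
M, *_ = np.linalg.lstsq(src, tgt, rcond=None)

def translate(image_flat):
    """Convert one image to its translated (e.g. aged) counterpart."""
    return image_flat @ M

aged = translate(src[0])        # flattened 16x16 output image
```

The same paired-translation framing covers de-aging, aging and face transfer: only the training pairs change, not the machinery.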

Charlatan found prominent use in the 2020 Malaria No More campaign ‘Malaria Must Die’, where artists at Digital Domain added nearly 30 years to David Beckham so that he could deliver a speech set in the future. This was done by combining footage of current-day Beckham and an older stand-in, with Charlatan used to merge the performances. Charlatan can also be utilized for facial transfer, as was the case for the late Teresa Teng’s real-time, posthumous hologram performance crafted by Digital Domain.

Meanwhile, Iannazzo identifies the studio’s ML Cloth system as the newest use case for machine learning. It is aimed at speeding up cloth simulation. “Our new simulation software uses a proprietary algorithm to train a machine learning system to replicate the original’s quality and run at more than 150 frames per second,” says Iannazzo. “This capability is immensely useful for accelerating cloth, muscle, and skin simulations, which allows animators to work with full-resolution, complex characters instead of low-resolution rigs.”
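The pattern Iannazzo describes, running an expensive simulator offline to generate training pairs and then fitting a fast learned surrogate, can be sketched as follows. The toy "solver", pose parameters and polynomial features are hypothetical stand-ins, not Digital Domain's proprietary ML Cloth algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def slow_sim(pose):
    """Toy stand-in for an offline cloth solver: many relaxation steps."""
    state = np.zeros_like(pose)
    for _ in range(200):                       # the expensive part
        state += 0.05 * (np.tanh(pose) - state)
    return state

# Build training pairs from the slow solver, then fit a fast surrogate.
poses = rng.uniform(-1.0, 1.0, size=(1000, 30))   # 30 toy pose parameters
targets = np.stack([slow_sim(p) for p in poses])

# Degree-3 polynomial features give a linear fit enough capacity to
# approximate the solver's response on this range (illustrative only).
feats = np.concatenate([poses, poses**2, poses**3], axis=1)
Ws, *_ = np.linalg.lstsq(feats, targets, rcond=None)

def fast_cloth(pose):
    """One surrogate inference: a single matrix product per frame."""
    f = np.concatenate([pose, pose**2, pose**3])
    return f @ Ws

err = np.abs(fast_cloth(poses[0]) - targets[0]).max()
print(err)
```

Because each surrogate inference is a fixed, cheap computation rather than an iterative solve, this kind of distillation is what makes interactive frame rates on full-resolution characters plausible.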

Read more in issue #11.

