VFX Futures: How a deep learning renderer was used to re-animate a character’s lines in ‘Free Guy’

Digital Domain used its Charlatan tool to re-animate a line for the game-play version of Channing Tatum in the film.

Amongst the various visual effects shots worked on by Digital Domain for Shawn Levy and Ryan Reynolds’ Free Guy was a moment in the film that presented the studio with both a storytelling and technical challenge.

The moment revolved around the game character version of BadAss, played by Channing Tatum. Tatum performed the role, with Digital Domain building a ‘game-play’ digital double of the character. BadAss’s speech was originally delivered as scripted, but the filmmakers later realized the scene needed changes to the dialogue.

Rather than undertake what can be an expensive process of organizing reshoots for only a couple of lines, Digital Domain initially tried to re-animate the mouth and face performance of the game-play digi-double directly. But something wasn’t quite right, so the VFX studio turned to a slightly unconventional use of its deep learning renderer, Charlatan (a tool it also refers to as a face-swapper), to handle the required face swap for BadAss, essentially re-animating the lines.

Charlatan uses neural networks and a 2D approach to do its work. As a Digital Domain press release notes, “Charlatan takes existing footage and analyzes the movements down to the minutia. Artists then introduce a new face digitally constructed by hand, and the neural network in Charlatan matches it with the existing footage, replacing the original performance. Artists can then alter the facial movements to incorporate new expressions.”
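
Digital Domain has not published Charlatan’s internals, but the description above, matching a hand-built face to an existing performance via a neural network, resembles the shared-encoder/per-identity-decoder architecture common in 2D face-swapping research. As a purely illustrative sketch (untrained random weights, toy dimensions, NumPy only; none of these names come from Charlatan itself): one encoder captures the expression in the filmed frame, and decoding that code with the *new* identity’s decoder produces the swapped face.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 face crop and a small latent code.
FACE_DIM, LATENT_DIM = 64, 16

def linear(in_dim, out_dim):
    """Random weight matrix standing in for a trained layer."""
    return rng.normal(scale=0.1, size=(in_dim, out_dim))

# One shared encoder learns pose/expression; one decoder per identity.
W_enc = linear(FACE_DIM, LATENT_DIM)
W_dec_original = linear(LATENT_DIM, FACE_DIM)  # reconstructs the filmed face
W_dec_new = linear(LATENT_DIM, FACE_DIM)       # reconstructs the hand-built face

def encode(face):
    return np.tanh(face @ W_enc)

def decode(code, W_dec):
    return code @ W_dec

# "Existing footage": one frame of the original performance.
original_frame = rng.normal(size=FACE_DIM)

# The swap: encode the original expression, decode with the NEW identity.
expression_code = encode(original_frame)
swapped_frame = decode(expression_code, W_dec_new)
```

In a real system both decoders would be trained jointly against footage of each face, so the latent code ends up carrying expression while each decoder carries identity; the sketch only shows the data flow.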

You may have seen Charlatan used already by DD for a Super Bowl ‘Coach Lombardi’ commercial, and for a David Beckham ‘Malaria Must Die’ campaign.

Of course, in those examples the subject was live action, whereas in Free Guy it was a digital double made for the game-play view. But the process with Charlatan worked, giving a much more natural and lifelike result, even for a game-version character.

Some more detail from the DD release: “Artists created a new facial model of BadAss by hand, then used Charlatan to combine it with the original performance. Once the neural network was able to link the two and replace the original animation, the results were a more realistic digital avatar that could then mimic the actor’s facial mannerisms and movements to mimic reading the new lines. The actor then later recorded the new dialogue in ADR.”

For more on how this process worked, I talked to Digital Domain visual effects supervisor Nikos Kalaitzidis about that decision to adopt Charlatan, and the results, plus a look to the future of this kind of work in visual effects.

You can listen in at Apple Podcasts or Spotify, or in the embedded player below. Also, here’s the RSS feed.

Feature image from Ryan Reynolds’ Twitter page.
