Gollum. Caesar. Thanos. And more.
For over two decades, Weta Digital has played a key role in many performance capture innovations. In crafting scores of CG creatures and characters, the studio has invented workflows for body and facial capture, and developed tools to ‘solve’ capture data and augment the performance with keyframe animation and the many other steps that make up the compelling final result.
Here we dive into a visual history of Weta Digital’s relationship with performance capture, including the major milestones the studio has been involved with in capturing actors and translating them into CG characters.
The Lord of the Rings: The Two Towers (2002)
With Gollum, Weta Digital had to deliver a main character that talked and interacted with other characters. Andy Serkis, with whom Weta would collaborate on multiple projects to come, performed Gollum on set with the other actors, often in just a skin-tight, plain-colored suit. Additional body-only motion capture of Serkis in an optical marker suit was done separately in a motion capture volume installed at Weta Digital (which has been running ever since). The final result was a blend of this motion capture of the actor and keyframing by artists, always closely observing Serkis’ original performance.
King Kong (2005)
For the first time, the studio utilized facial motion capture to drive a character’s facial performance. This was actually the same system that had been used for body motion capture on Gollum, but reconfigured and ‘shrunk down’ to work with facial markers on Andy Serkis (who had to stay within a one meter cube to enable accurate capture). A new facial solver implementing the Facial Action Coding System (FACS) took the facial performance and used it to drive Weta Digital’s CG facial puppet, the same puppet ultimately used for keyframe animation.
Avatar (2009)
For Avatar, Weta Digital handled performance capture data from multiple performers working together in a motion capture volume. The film also marked the introduction of head-mounted cameras (HMCs) into the process, giving actors greater freedom of movement on the stage and allowing body and facial tracking to happen together. At this point, too, Weta Digital began implementing its FACETS system, which takes in tracked data from the HMCs to drive the facial solver, coupled with the input of motion editors and animators who are able to edit and control the solve.
The Adventures of Tintin: The Secret of the Unicorn (2011)
Weta Digital’s first feature animation project involved several performance-captured actors working in a volume. Here, the new development in facial motion capture was allowing for real-time, on-set facial animation. The actors and filmmakers could get visual feedback on the performance in real time, on a proxy CG character model, while on the motion capture stage.
The Hobbit trilogy (2012/2013/2014)
Gollum and several other performance-captured characters were created in these new films set in Middle-earth, where Weta Digital introduced performance capture (body and face) that occurred at the same time as live-action photography. This was enabled by a switch from an optical motion capture system that used visible-spectrum light to one that used infrared light with ‘active’ LED markers, meaning the capture could be done without interfering with any of the usual film lighting used on set. Weta Digital, as it always does, still dialled in parts of the performance via keyframing, ensuring the original intention of the actor’s performance was maintained.
The Apes trilogy (2011/2014/2017)
The Hobbit films came around the same time that Weta Digital embarked on the new Planet of the Apes films, where the studio worked with a host of actors, using performance capture to drive and inform the final CG primates. The big advancement made here was taking the motion capture out on location in British Columbia, often in challenging conditions that included rain and snow. With each Apes film came new developments, such as wireless mocap cameras and HMCs that were smaller yet higher in resolution, along with refinements to Weta’s facial and muscle tools and fur simulation techniques.
Avengers: Infinity War and Endgame (2018/2019)
Along with Digital Domain, Weta Digital translated Josh Brolin’s performance capture into a CG Thanos for Infinity War and, a year later, for Endgame. Infinity War marked the first appearance of Weta Digital’s ‘actor puppet’, where the facial solve occurs not straight onto the final digital character but first onto an intermediary stage, in this case a digital copy of Brolin. Artists iterated in this space until they were confident they had captured the full depth and complexity of the actor’s performance, only then moving from a digital Brolin to a digital Thanos.
Alita: Battle Angel (2019)
The CG Alita character also relied on Weta Digital’s actor puppet approach. The studio also introduced a new active marker motion capture suit for actor Rosa Salazar to wear. The custom suit allowed artists to track Salazar’s breathing, giving animators a guide for making sure those natural movements came through. In addition, the suit featured embedded marker strands that made it faster to ‘suit up’ the actor, with head, hands and feet components made to be detachable, too. Alita also marked the first time Weta Digital had implemented a stereo camera rig in its HMC.
Other notable moments, and what’s ahead
This listing is of course an overview of the major moments in Weta Digital’s performance capture history. The translation of actor to animated character can also be seen in projects such as Fantastic Four: Rise of the Silver Surfer, Furious 7, Central Intelligence, The BFG, Valerian and the City of a Thousand Planets, Rampage and The Umbrella Academy. And Weta Digital looks set to up the performance capture ante again soon with Gemini Man, in which Will Smith plays a hitman who faces off against a younger clone of himself.
Follow along during this special weekly series, #mocaphistory, to revisit motion capture history and hear from several performance capture professionals.