2D, 3D and everything in between.
The befores & afters ‘Body of Work’ series is all about showcasing a particular kind of effect from a single visual effects studio over the years. This newest piece highlights the work of Weta Digital and face replacements.
For decades now, Weta Digital has been one of the major innovators in photoreal digital humans. Certainly, those skills are central to the VFX studio’s face replacement work, but the body of work discussed here – featuring films including Furious 7, Rampage, Iron Man 3 and The Hobbit: An Unexpected Journey – goes even further, into 2D replacements, digi-doubles for stunts, replacements where actors have been injured or are unavailable, and even hair replacement.
Read on to hear directly from Weta Digital crew-members Scott Chambers, Sean Walker, Erik Winquist, Ken McGaugh and Martin Hill as they discuss their particular experiences with face replacement VFX.
Scott Chambers, compositing supervisor
Gemini Man
The biggest challenge on Gemini Man was making a completely photoreal version of Will Smith that was valid for him physically in his teens and early twenties, while still honoring the global audience’s collective memory of how he has looked over the years. This was predominantly through his performances on ‘The Fresh Prince of Bel-Air’ and his early feature film roles. This was incredibly difficult, but also key to selling the believability.
There were two major hurdles we had to face. Firstly, by today’s standards he would have been wearing a substantial amount of make-up. Secondly, the majority of Will’s performances from his younger years that we remember are comedy moments, with the genre-specific exaggerated facial expressions that naturally came with them.
In Gemini Man, the tone, pacing and gravitas called for a slower, more intense and refined acting style. Present day Will performed these elements so well – it was a stark departure from audiences’ memories of young Will’s performances. We had to take creative licence to stay absolutely true to the intent of the performance and the scene, while at the same time making sure we were injecting the subtle nuances we had seen in performances from his younger days.
The scene where Junior confronts Verris in his office was one of my favourites. This is a contained scene that focused on a heated discussion between a father and son – a real turning point in the film. On top of recognising the importance of pulling off the right mood within this scene, there were no visual distractions to take you away from the emotional connection between the two characters. Will’s performance ranged from angry to hurt to betrayed to a semblance of acceptance. Through painstaking attention to detail we were able to preserve the performance and transfer it to our digital Junior.
Weta Digital has a lot of experience working on hero-character worthy digital characters and this project called upon all of that experience. The success came from the sum of multiple departments, artists and technicians working collaboratively to create what can only be achieved with a team that is cohesive and gels incredibly well together. All departments had a part in bringing this movie milestone to screen. It was the culture of collaboration and achieved excellence, time after time, that enabled our success.
Iron Man 3
The goal for face replacement work on Iron Man 3 was to be as invisible as possible. An interesting production dilemma on that show was Robert Downey Jr. (Tony Stark/Iron Man) breaking his ankle while performing a stunt, as Sean Walker also discusses below. Interestingly, the actual take was a great stunt to begin with, so it is used in the film. It cuts out just before we see the foot twist! As a result, the shooting crew had to get creative and ponder how to complete the film with Robert unable to move freely. One solution was to use a stand-in for scenes that required quite a bit of movement, so our challenge in this case was to seamlessly graft a digital version of Robert Downey Jr.’s head onto a completely different actor.
With this approach, we had to make adjustments to any movement that might make the head feel detached, or pivoting around unnaturally. This is a unique problem that comes with combining two physicalities. The camera team did a rock solid job of match-moving the stand-in’s head, and the Animation and Models departments took this challenge on, providing realistic and specific nuances that matched Robert Downey Jr. Then, the Lighting and Compositing teams expertly integrated the digital head into the scene so as not to present it as VFX work, with the end goal of keeping the audience in the moment and letting the story progress unhindered.
In particular, our Modelling, Groom and Shaders teams were key to achieving a completely photoreal digital Robert Downey Jr. on film. We started with a scan of his face and costume measurements. Then we gathered our own reference of him from various recent movies, against which we could confirm likeness while building his base head geometry. From there it was on to Textures, Shaders and Creatures to add the hair: moustache, goatee, eyebrows and head hair. The hair in particular was important to get right because Robert Downey Jr./Tony Stark has quite a distinctive look!
“Interestingly, the actual take was a great stunt to begin with, so it is used in the film. It cuts out just before we see the foot twist!” – Scott Chambers
We used Weta’s proprietary hair creation, simulation and manipulation tools to achieve this – ‘Barbershop/Wig’ (groom) and ‘Figaro’ (hair simulation). Barbershop/Wig is a highly customizable tool that emulates real-world grooming implements and accessories – imagine real tools such as hair gel, scissors, brushes, and even blow dryers. With Barbershop/Wig, we are able to modify the hair as a whole, in clumps, or even a single strand. On this show, we could dial in the right density, distribution and feel for the different hair regions on Robert Downey Jr.’s face. Barbershop/Wig is so versatile that it is also used for peach fuzz (those very fine hairs on skin) and tiny strand textures on clothing, such as velvet. Barbershop/Wig was integral to making our digital Robert Downey Jr. believable. Years of accumulated experience and development meant we could deliver this effect with minimum fuss and maximum result, to the point of creating this realistic aesthetic. It was a pleasure for the Lighting and Compositing teams to pick up and run with such a well thought out and artistically great asset.
Although perhaps not one of the flashiest of VFX shots, there was one key shot of a stand-in moving quickly with Robert Downey Jr.’s digital head grafted on. This was very successful, I thought – a hard one in which to achieve 100% accuracy of detail, due to the profile nature of the performance and the subtle lighting and atmospheric conditions present in the plate.
The Hobbit: An Unexpected Journey
The Hobbit was peppered with many small and subtle, yet intricate, face, body and hair replacements. As always, we wanted the audience never to question or even see this work, so they would not be distracted from the story. Shooting was a mixture of location work and studio sets. When the crew enters Rivendell, we see a common challenge we would come across in lighting and compositing.
The camera was positioned at a low angle, which prevented us from keeping hair detail because the keying would have been impossible. We were fortunate that we could keep Martin Freeman’s face as it was shot in the plate, but we needed to augment his hair with a CG replacement due to the space lights behind his head overexposing all the fine hair detail. Replacing hair required a very accurate camera and match-move of Martin’s head, plus perfect lighting and integration to match the plate hair.
Our groom tools were very advanced at the time, built to hold up full-frame as we worked on the shot. We had a hero Bilbo digi-double, so we were confident the hair would work for us if we could track, light, render and composite it in. We ended up with a seamless transition that didn’t distract from Bilbo’s face and enabled us to see Rivendell in all its splendour through the character’s eyes. This was one of my favourite shots to work on.
The Hobbit trilogy was made up of a diverse ensemble of cast members. Their characters were called upon to handle a gamut of performances; these performances were contained but also physically demanding and, at times, dangerous in terms of stunts and horse riding work. In order to create a safe film set for the actors, stunt performers were used on screen. Like any ordinary stunt practice, they were dressed as the actor they were doubling for. The hair and make-up were designed to be as close to the character as possible – but even with this attempt, the likeness will never be 100%.
This is where we stepped in – in this example we can see Elrond, played by Hugo Weaving, whose face has been grafted onto the stunt performer riding a horse at high speed on the stage. Even though we have a robust asset building, animation, lighting and rendering pipeline, sometimes taking a simpler approach can achieve a great result in half the time. For this shot we took a still image of Hugo Weaving in character from another shot. The lighting and expressions were not a complete match, but with house-built colour matching and correction tools, we ‘re-photographed’ Hugo with the lighting from the stunt performer’s face. We also found a mismatch between the physical dimensions of the two, so we adjusted scales in X and Y in selective areas to better match the head, neck and shoulder areas. Motion blur, focus and sensor noise were crafted in to match the original plate, providing a seamless graft that audience members wouldn’t suspect.
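Weta’s house-built colour tools are far more sophisticated, but the basic idea of ‘re-photographing’ a still under the lighting of the target plate can be sketched as a per-channel statistics transfer. Everything below – the function name and the mean/standard-deviation approach – is an illustrative assumption, not Weta’s actual tooling:

```python
import numpy as np

def match_colour(source_face, target_face):
    """Shift the source still's per-channel mean and contrast to match
    the target plate's face region - a toy stand-in for the colour
    matching and correction step described above."""
    src = source_face.astype(np.float64)
    tgt = target_face.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        scale = t_std / s_std if s_std > 0 else 1.0
        # re-centre the source channel on the target's statistics
        out[..., c] = (src[..., c] - s_mean) * scale + t_mean
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

In production this would be followed by the selective X/Y scaling, motion blur, defocus and grain matching the passage mentions.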
Sean Walker, sequence VFX supervisor
Iron Man 3
For Iron Man 3 we had a couple of challenges when it came to face replacements. On top of a hectic schedule and a few script changes late into production, as Scott mentions, Robert Downey Jr. broke his ankle midway through shooting the third act battle. To his credit, it was a pretty cool stunt. You can also see the determination in his face as he powered through the pain when he returned to shooting. However, while usually only a few shots are needed for a face replacement to cover stunt work, here we needed to go a bit further. Shots that wouldn’t usually require a double now needed one, along with a face replacement. As we had many shots that involved a fully digital Tony Stark, we were able to help out by rendering and integrating a digital face onto a body double for the shots Robert wasn’t able to perform. As the double’s hair was noticeably different in certain shots, we often ended up fully replacing the head.
While we managed to keep the beginning of the stunt where Robert Downey Jr. broke his ankle (the jump itself), the camera angle that involved the landing could not be used. For that we filmed a stunt double and generated a face replacement – Scott goes into finer detail about this. I was glad we were able to keep at least half the stunt with Robert, just subtly adding in his likeness where needed. Using a face replacement as a subtle effect is super satisfying, but we also try to use as much of our actor’s performance as possible.
“Being able to generate realistic, fully digital faces that we have complete control over gives us incredible flexibility.” – Sean Walker
The other main challenge was a big script re-write that came late in the schedule that involved revealing Killian (Guy Pearce), still alive, and the Mandarin! Additional photography was shot with Guy on a greenscreen, and we were to integrate him into our environment. One of the big issues here was that Guy was committed to shooting another film at this point and had grown a considerable amount of facial hair. Effort was made to glue it down but it didn’t help very much in the end. We decided that as we already had a digital version of Killian, we would replace Guy entirely with a digital version, carefully matching his performance, and adding the ‘Extremis’ effect on top.
It was a beautiful effect, and it told the story Marvel wanted to tell. We got it done in a relatively short amount of time thanks to having amazing artists and the technology needed to support them. I also heard that Guy Pearce didn’t realise we had used the fully digital version of himself until he was told after it screened, which is a testament to what we accomplished.
Our creature work and digital double work is some of the best in the world. There are many techniques for creating face replacements, but most of them have shortcomings. Being able to generate realistic, fully digital faces that we have complete control over gives us incredible flexibility. One of the main reasons I feel we are such a great pairing with studios like Marvel is that we are able to help them make these kinds of changes without fear of losing the story that they really want to tell.
Fantastic Four
For Fantastic Four we were tasked with creating the digital effect for Reed Richards, a superhero with the ability to stretch and contort his body as if it were made of rubber. As much of Reed’s powers involved something that couldn’t be replicated practically, we created a fully digital double with all the technology and controls to manipulate his body like putty.
We wanted to keep as much of Miles Teller’s performance as possible, but as his body was performing very unnatural feats, we ended up tracking his face onto our digital body, much in the same way we’d replace a face on a stunt double. We still had Miles act out as much of the action as possible to give us the lighting and motion reference needed to integrate his digital body, as well as the facial performance that we would later integrate into the shot.
From ghastly monsters to realistic digital doubles, we have lots of experience in this area. This effect required something in the middle. We didn’t want to lose the humanity of Reed in the process, but still had to have him move and deform in some pretty horrific ways. Weta’s experience with both meant we had the technology and the artists to pull this off.
One of my favorite moments featuring face replacements in this project was the transition from Fernando to Reed. Reed being able to manipulate his body like rubber meant he was able to morph his face into the likeness of another person. This used to be done in 2D, by morphing two faces to line up with each other and then fading between the two. Here, we decided to have a fully digital 3D transition. We did this by scanning and creating digital versions of both actors, and then tracking these digital actors to both actors’ facial motion.
From there, we were able to transition from a live action Fernando to a fully digital version of himself, geometrically blend to the digital version of Reed, and then transition to Reed’s live action performance. Only the face from each actor was used, which we tracked onto our digital head before and after the morph. The rest of the head, hair and neck were entirely digital – you can see this in the images attached. This allowed us to control exactly how the transition would happen. We went through a few iterations where bones would pop in and out of place, but as the result was quite frightening, we ended up going with a more subtle effect.
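Because both digital heads were built to matching topology, the geometric blend at the heart of such a morph reduces to per-vertex interpolation. The sketch below is a generic illustration of that idea, not Weta’s pipeline; the smoothstep easing is an assumed nicety:

```python
import numpy as np

def blend_heads(verts_a, verts_b, t):
    """Blend two (N, 3) vertex arrays that share topology.
    t=0 gives head A, t=1 gives head B; smoothstep eases the timing."""
    assert verts_a.shape == verts_b.shape, "meshes must share topology"
    te = t * t * (3.0 - 2.0 * t)  # smoothstep easing of the morph weight
    return (1.0 - te) * verts_a + te * verts_b
```

Driving `t` over the shot then gives a controllable, fully 3D transition instead of a 2D cross-dissolve.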
Erik Winquist, VFX supervisor
A number of years ago, face replacements were predominantly used for dangerous stunts where the stunt person was moving quickly. This was often masked in motion blur, and the stunt person was a reasonably good match for the actor they were doubling. So face replacements were just that: we could take footage shot of the actor in question, or even photographs, and simply patch the face in Compositing. The motion blur and lighting often helped mask the trick. These days, it’s common to be asked to replace a stunt person’s whole head with that of a digital asset. And it’s no longer just fast moving or wide angle stunt shots. With increasing frequency, we’re finding ourselves replacing heads in close-ups or shots of actors delivering dialogue.
Rampage
The shots that Weta Digital delivered on Rampage were primarily Creature and FX work, but there were a handful of examples from our Chicago sequences at the end of the film where we needed to do head replacements (or complete digital doubles) for Dwayne Johnson’s character. Many of these were of the dangerous stunt variety. For example, his character was required to jump out a window, fall a few stories and tumble to a stop just inches from camera. The wide shot in this case was in slow motion and the final shot is a close-up, so fairly unforgiving territory.
But another example was just down to something as mundane as actor availability. In a one-off setup, his character needed to get out of a helicopter which had just landed in an intersection, exchange a line with Naomie Harris’s character, and run off down the street. This was a second unit setup which was shot in downtown Atlanta, but Dwayne had a full schedule shooting with main unit back at the studios, so the decision was made that his stunt double would perform the shot and we’d do a head replacement with Dwayne delivering the dialogue in post. The filmmakers later photographed Dwayne delivering that dialogue from an array of cameras for our reference, but because of lighting and the nature of the shot, it was ultimately more practical to replicate that performance with a digital asset.
I quite liked the above mentioned shot (where Johnson’s character gets out of the helicopter and runs down the street) because it’s just…not at all the kind of shot you’d expect to find a face replacement in. There was no dangerous stunt happening, so there’s no reason for anyone watching to expect an effect happening in front of their eyes. We strive for all of our work to be seamless and look like it was just captured by a camera there on the day, but with some of the work that we do, impossible things are happening on screen. Even when the animation is completely plausible and we nail the integration, something in the back of your mind still knows that the A-list actor didn’t really perform that – it would have been way too dangerous. So if we can execute a “mundane” shot like this (if you can call a Blackhawk helicopter landing in an intersection mundane) without the audience knowing any better, that’s a fun test to pass.
“We strive for all of our work to be seamless and look like it was just captured by a camera there on the day.” – Erik Winquist
Weta Digital was ideally positioned to take the approach we took on these shots because we already had a recent, high fidelity digital asset of Dwayne Johnson in our arsenal. A year or two earlier, Martin Hill and his team here had created this as part of the work we delivered on a comedy called Central Intelligence, in which we had to make a younger, heavier version of his character for some hilarious flashback scenes. Dwayne hadn’t visibly aged since that work was done, and our digital asset had a complete facial animation rig. Additionally, Central Intelligence and Rampage were both from the same studio, so obtaining permission to re-use that asset was pretty straightforward.
Because we had all of that momentum coming into the process, our asset dev was then about matching his character’s look in our scenes with regard to dirt, sweat, and blood that the make-up team had established in the live action footage. Having this asset available to us meant that we could make efficient use of it in cases where the budget might have otherwise required a plate-based approach, which always limits your chances for success.
Since Rampage, we’ve been able to further leverage that digital Dwayne Johnson asset on more recent projects like Jumanji: The Next Level and the upcoming Jungle Cruise. It’s the gift that keeps on giving.
Ken McGaugh, VFX supervisor
Valerian and the City of a Thousand Planets and Jumanji: The Next Level
I worked on some face replacement work in Valerian and Jumanji: The Next Level. For the scene in Valerian where we first meet Rihanna’s Bubble character, she is performing a shape-shifting cabaret dance. Many of the dance moves were actually performed by a professional dancer whose face we had to replace with Rihanna’s. For many of the action shots in Jumanji: The Next Level, Dwayne Johnson’s character Dr. Bravestone was shot with a stunt performer whose whole head would need to be replaced with a CG version of Dwayne Johnson’s head.
The biggest challenge in both circumstances was the difference in proportions between the stunt/dance performer and the actor. Everybody is different, and those differences become very apparent when there is a mismatch. Humans are very adept at noticing subtle mismatches, and minimising them often involves a lot of trial and error – something we face in cinema and post-production on a daily basis. Even when you get the proportions believable, it often doesn’t feel right because the performer’s body movements can be subtly different to the actor’s. There is no easy solution to that other than creative editing.
Weta Digital has a tremendous amount of experience doing digital facial performances and photorealistic lighting, shading, and integration. These are all the first requirements for doing believable face replacements. Beyond that, there is quite a bit of artistry required to overcome the uncanny valley, and that is where Weta’s vast experience specifically doing face replacements shines.
There is a shot in Jumanji: The Next Level where Dwayne Johnson’s stunt double is sprinting while the camera tracks back with him. His face is relatively stationary in frame through the whole shot, so there were no cheats that could be employed. Even with a very good element of Dwayne Johnson giving a facial performance in the same lighting, we had to resort to a full digital head replacement. It was a tedious exercise in getting the proportions to work believably, but in the end it looked great.
Martin Hill, VFX supervisor
Face replacement is a pretty broad term and can vary in technique and complexity. I’m going to look at five films I’ve supervised at Weta Digital, covering face replacements from simplest to most complex – Game of Thrones, The Hunger Games: Mockingjay – Part 2, Valerian, Central Intelligence and Furious 7.
Some of the most satisfying face replacements are the straightforward cases, like when a stunt performer is required because the actor can’t physically perform the action for the shot. With a bit of planning on set these can be done quite efficiently. Costume, hair and makeup can take you a long way towards finishing the shot, and with frenetic action and strategic camera placement, you can minimize and sometimes avoid VFX altogether. However, most of the time the director wants the actor’s face and performance in the shot, so we get to work because there’s no hiding it!
An effective method for simpler face replacements is to have the actor repeat the performance of the stunt performer in the set lighting, matching their face angle relative to the camera without performing the body move. Ideally this would be photographed with the face full frame, to capture more fidelity than in the hero plate. It’s rare to have the time on stage to get the performance timing correct, especially as you don’t know which take of the stunt will be the select, so the face performance is often recreated afterwards from multiple takes.
“Some of the most satisfying face replacements are the straightforward cases.” – Martin Hill
Taking the face performance and tracking it onto the body can be done either by match-moving the head with rough geometry and projecting the hero face performance on, or by using optical flow to stabilize both performances and applying the inverse stabilization from the stunt performance to the hero, in order to make it stick into place. This can be softened so you don’t apply too much of the high frequency details of the stunt performance, but maintain the overall head rotations, which is very useful especially when you’re trying to retain the hair from the stunt plate.
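Reduced to pure 2D translations, the stabilise-then-inverse-stabilise idea can be sketched as follows. This is a toy illustration under big assumptions (real tracks carry rotation, scale and perspective, and would come from optical flow or a match-move; the function name is made up):

```python
import numpy as np

def transfer_motion(hero_track, stunt_track, smooth_frames=0):
    """Per-frame (F, 2) face positions in, per-frame offsets out.
    Applying offset[f] to the stabilised hero face makes it follow the
    stunt performer's head. smooth_frames > 0 low-passes the stunt
    track so only the overall head motion is transferred, not its
    high-frequency detail (the 'softening' described above)."""
    hero = np.asarray(hero_track, dtype=np.float64)
    stunt = np.asarray(stunt_track, dtype=np.float64)
    if smooth_frames > 0:
        kernel = np.ones(smooth_frames) / smooth_frames
        stunt = np.stack(
            [np.convolve(stunt[:, i], kernel, mode="same") for i in range(2)],
            axis=1,
        )
    # stabilise the hero (subtract its own motion), then re-apply
    # the stunt performer's motion
    return stunt - hero
```

Keeping only the smoothed component of the stunt track is what lets the hero face inherit the overall head rotations while the plate hair stays usable.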
Game of Thrones
For a key sequence in Game of Thrones we had to go a bit further. Due to the stunt performance in the harness not quite having the energy the filmmakers wanted, we ended up replacing the body with full CG for this shot. However, we still used the face replacement technique here to retain Bella Ramsey’s facial performance, and to avoid building a full CG face puppet of her for one shot. Incidentally, we did a full CG takeover for Crum, the giant holding Lyanna, in the same way, projecting the plate onto the geometry of the CG.
The Hunger Games: Mockingjay – Part 2
Similar techniques were used for Jennifer Lawrence in The Hunger Games: Mockingjay – Part 2. Jennifer did a lot of her own stunts in the Lizard Mutant sequence, but some of the fight beats were too rough. After the stunt was shot with her double (Renee Moneymaker), Jennifer replayed the stunt with the stunt actor playing the Mutant, at 3/4 speed and with a bit less violence.
This gave us options whether to use the stunt plate or speed up Jennifer’s plate. In the end we went with the former, match-moving both plates and inverting the projection from the Jennifer plate and projecting onto the Renee plate. Some frames that were particularly sharp were then lifted directly and tracked in 2D, to keep some very clear photography of Jennifer in the final shot.
If you don’t have a bespoke head performance from the actor for a shot, it’s entirely possible to use a take or a still from a different shot or from reference photography, and project it on in the same way. This requires more grading to line up the lighting. It’s fantastic how efficient these techniques can be with a skilled compositor – in this case Tobias Weisner created the hybrid performance with a convincing face replacement relatively quickly, and it works completely in the frenetic sequence.
Valerian and the City of a Thousand Planets
Moving a step up in complexity – for the film Valerian and the City of a Thousand Planets – we had a different set of challenges with the face replacement for the cabaret scene where Rihanna’s character performs some acrobatics. Rihanna could not perform all of the dance and acrobatic moves, so Olympic gymnast Emilie Livingston was brought in. Instantly this is a more difficult sell: we’re not hiding in a fast-paced action scene but have stage lights and a very clear view of the performer’s face. The motions that Emilie brought were difficult for Rihanna to replicate the timing of, and the sharp stage lights meant reprojections with the head at a slightly different angle threw the lighting off.
This meant we knew we were going to need a full CG head replacement. To get all our face shapes, prior to shooting we scanned Rihanna at ICT on the Light Stage X for a full FACS set, which also involved extra poses not in the standard set. As the choreography hadn’t yet been created for the dances, we used poses from some of Rihanna’s music videos as inspiration – it was quite surreal in the Lightstage, showing Rihanna “Diamonds” back to her! Having these among the scans really helped the facial team nail the very specific looks that Rihanna gives during performances, rather than having to reconstruct them from base poses and reference.
Rihanna had many costume, hair and make-up changes, which meant that during shooting there were a number of full scans for costume texture and make-up in 4DMax’s photogrammetry scanning setup at Cité du Cinéma. Then we created the same number of textured and shaded models, each with a bespoke hair model. This is obviously more involved than reprojection methods, so for each set up we also filmed Rihanna in the same costume and make-up on set. Reprojection worked for some shots with less motion, and if it didn’t perfectly work then it still served as fantastic reference for the digital head rendering.
“It was quite surreal in the Lightstage, showing Rihanna ‘Diamonds’ back to her!” – Martin Hill
Technically, the challenge we faced was to ensure the head replacement didn’t look stiff; it needed to flow with the dancing. To make sure the neck/head match-moves were perfect, we used four operated witness cameras running at 48fps, and another four fixed machine vision cameras that also covered the hero camera’s position to validate the camera track. These match-moves were used for both the digital head and projection versions. As well as the head replacements, we also had to morph her from each costume into the next and then into a gelatinous amorphous alien creature, but that’s another story!
Central Intelligence
The show I supervised at Weta before Valerian was Central Intelligence, which involved a very different kind of choreography. We needed to turn Dwayne Johnson into a 400-pound high school kid and make him dance in the locker room showers. This had a number of challenges: the filmmakers had cast Sione Kelepi, a dancing sensation on Vine, as the body double. For this show we had a hybrid reprojection and CG approach.
Since we were so close up on the character, we knew reprojections weren’t going to cut it. Dwayne and Sione had very different head shapes – Sione’s was wider, Dwayne’s taller. Placing Dwayne’s face on Sione’s head gave the impression of either a very small face or no forehead. We sculpted our character Robbie, adding a lot of weight to the Dwayne scans while still trying to maintain Dwayne’s features. We then match-moved Sione’s head, re-projected the plate and warped it to our character Robbie’s head model constrained to the matchmove. This was a great saving, as it meant we could use all the plate hair and ears (which had water droplets in them, and would have been another task to create digitally), so all we needed to recreate in CG was the face.
The performance was partially based on Dwayne, who recorded the head performance with a face camera on, based on Sione’s timing, after the fact. As our Dwayne and Robbie models shared the same topology, we created a Dwayne digi-double head from scans, again at ICT’s Light Stage X, which we could validate against the reference, and then transferred it to the Robbie model. We also needed to do a lot of de-aging augmentation to the textures.
Dwayne and Sione’s characters had to be able to sing. My second surreal ICT moment was asking Dwayne to perform En Vogue’s “Never Gonna Get It” in the Light Stage X – he was a very good sport and did two takes of the whole song!
Furious 7
The most advanced face replacement work I’ve supervised was Furious 7, where we had the task of creating a digital Paul Walker for around 300 shots after he tragically passed away. The majority of the work was full 3D head replacement, but there were a few reprojection shots taking footage from other films and re-projecting it in. The last shot of Paul uses this technique, as director James Wan was adamant about getting as much of Paul into that shot as possible. Even that shot was around 70% CG for the head, as the source plate was shot on film, at night, with mercury vapour lights, which are very green and don’t leave a lot of colour complexity in the skin. It was at about half the resolution we needed and featured a significantly younger Paul.
For the rest of the shots, we had to create a digi-double head capable of a full performance, including dialogue. It was a particularly challenging CG build. Normally you have access to information like scans and texture reference from the actual actor, and none of that was available. One of the first things we did was contact the family – they were very amenable and helpful in the process. Paul’s brothers, Caleb and Cody, let us scan them and run them through the USC ICT Light Stage process. That gave us the next best thing in terms of their skin quality, their wrinkles, their pores, their skin tone. That was really useful to get something as close as we could get to Paul.
For the model build we took the reference we had, including some stills from the sixth film – these had been taken to build a stunt digital double asset, and we used them to get to the ground truth of Paul’s face in terms of its structure. Then we went through a process of augmenting the textures from Caleb and Cody, and building up the texture layers for our skin shading models, which had to be rewritten and improved for the show. This included adding vascular constriction: for example, when Paul is screwing up his brow, his procerus is firing and the blood rushes into some areas and out of others. The skin colour changes as he moves. That was a very important detail that really brought life to the face. We’d also add in all the delays for the blood rushing back in. Conversely, we added exertion, so he’d look more flushed during action sequences, and a tiredness dial for when that was appropriate.
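The blood-flow delay described here can be pictured as a simple asymmetric lag: blanching is effectively instant while a muscle fires, but the blood takes a moment to rush back in once it relaxes. The following is only a toy sketch of that general idea, not Weta’s actual shader code; the function name, the recovery constant and the single per-region “redness” value are all illustrative assumptions.

```python
def redness_over_time(activations, recovery=0.9):
    """Per-frame redness of a constricted skin region.

    activations: muscle activation per frame, each in [0, 1]
    recovery: how much of the previous redness is kept each frame,
              i.e. how slowly blood rushes back in (hypothetical constant).
    """
    redness = 1.0  # fully perfused at rest
    series = []
    for a in activations:
        target = 1.0 - a          # full activation blanches the region
        if target < redness:
            redness = target      # constriction is effectively instant
        else:
            # recovery lags: exponential approach back toward the target
            redness = recovery * redness + (1.0 - recovery) * target
        series.append(round(redness, 3))
    return series
```

With a pulse of activation, e.g. `redness_over_time([0, 1, 1, 0, 0, 0])`, the region blanches immediately on the active frames and then climbs back toward full redness over the following frames rather than snapping back, which is the delay effect the shading model had to capture. The exertion and tiredness “dials” would then be additional global offsets layered on top of a per-region signal like this.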
“We put a lot of work into, well, how would Paul playing that respond?” – Martin Hill
We then built up a reference library from all of Paul’s films from the last five years, and whatever else we could get access to – this included media and publicity photos, and footage from the studio that didn’t make it into the films. Then we would break it down into camera angles and lighting directions, creating a large navigable library of reference. It was a huge time saver. From that we could essentially validate our work by match-moving these various shots and building up a faux FACS session, where we could go through all of the poses and make sure all of our facial poses were on character. It was really important not to make anything up. Around 30 of these ‘shots’ were taken all the way through the pipeline, rendering and comp, to make sure our animation, face shapes and shading were completely true to the footage.
Paying such honest attention to matching Paul’s performance in character paid dividends later, as for some shots we used parts of these performances or poses directly. We found out quite a way into the production that we had underestimated this character step. Once we’d created a digital double that looked real and had Paul’s likeness and expressions, we realized that it didn’t necessarily act like the Brian O’Conner that Paul had played. We actually went back and removed face shapes and poses that had been taken from media footage and other films but were not in character.
Similar to the Rihanna shoot, we had multiple witness reference cameras to stay faithful to the motion and timing of the actors on set (sometimes played by Paul’s brothers, Caleb and Cody). One of the most important things is to keep the head and face motion natural to the body, since the two are so closely linked. It’s tempting, when you have complete control of the head, to change the timing of a head turn, but it quickly looks unnatural and you pick up on it. Even something as subtle as changing the timing of dialogue can look wrong and robotic, as you express and communicate with your whole body.
One of the things about Furious 7 was that we were doing these fully emoting digi-double heads, with dialogue. We knew we were going to be pushing boundaries, and we needed to evolve the tech a lot during the project. There were some shots where we shot safety versions, because when I was looking through the lens I could see these were very close-up shots of a very nuanced performance, and I thought we’d need a safety from a longer lens, wider, further back, or maybe a little bit more obscured in some way, just in case we couldn’t pull it off. What was interesting during that process was that, by the end of the development time on the show, we were using all the close-up plates on all the shots. It was just really pleasing to have the digi-double at a level where you wanted to take the hard option.
One of my favorite shots is the tower chase. James Wan really wanted this to be a moment where you saw in Paul’s eyes that he had decided this action-filled life was no longer for him and he wanted to call it a day. James wanted that from just a look. We put a lot of work into, well, how would Paul playing that respond? The look in there is quite subtle, but I think it’s really successful in showing a nuanced performance.