Vision vs The Vision: a VFX how-to

March 26, 2021

Digital Domain discusses the full-on Vision fight scenes in ‘WandaVision’.

This week at befores & afters we’ve been looking at some of the different VFX studios involved in Vision visual effects for WandaVision.

Another key studio contributing Vision shots was Digital Domain, which delivered dynamic scenes with both Vision and The Vision (or White Vision) in the final episode of series 1. Not only was the studio translating Paul Bettany’s live-action performance into a synthezoid by taking parts of his face and adding CG elements, it also had to do extensive full digital ‘human’ work.

Here, Digital Domain visual effects supervisor Marion Spates and several other members of the team outline the work involved. As a bonus, Digital Domain highlights their shots for the Agatha vs Wanda fight that also happens in the finale.

b&a: For the Visions battle, can you firstly outline the methodology you followed in crafting digi-double versions of Vision and The Vision for DD’s pipeline?

Marion Spates (visual effects supervisor): As Digital Domain’s VFX supervisor for the show, I initially spent a lot of time with the department heads focusing on the digital humans aspect, knowing we would create two Vision digi-doubles that would need to hold up in camera at very close proximity. We quickly learned we’d need quality on-set data to achieve the shots in the Vision battle.

© Marvel Studios 2021. All Rights Reserved.

Suzanne Foster, our VFX producer, worked with production on how we could capture a new data set while on set in Atlanta for the library sequence. Tara DeMarco, Marvel’s VFX supervisor on WandaVision, was very supportive in having us join her for the library portion of the Vision-on-Vision battle. Luckily, that shoot was weeks before Covid hit, so we were able to perform a traditional data acquisition while in Atlanta.

Digital Domain is known for how well we capture on-set data, and I was thrilled to have one of DD’s best, Doron Kipper, join me for the shoot in Atlanta, where we captured scans and high-resolution photography for all of the Visions: Vision, (White) Vision and all of the stuntmen who would play either Vision at any point in time.

Once back in LA, and after receiving the scans, we then started to rework Vision from top to bottom. Our facial team, headed by Ron Miller, determined Paul’s muzzle was not represented closely enough within the model, so with Tara’s approval we started to integrate Paul’s face more into the base Vision model. Since Vision is a synthezoid, we needed to be careful not to introduce too much of Paul’s head into Vision’s head.

Meanwhile, our model supervisor, Nelson Sousa, started to evaluate the body portions of the Visions. We were instructed to keep the MCU portion of the body because it was built to represent the MCU version of Paul. Once the MCU versions of both Visions were complete, our modeling team had to build six versions of the Visions for matchmove purposes. We needed all variations of the Visions available for any shot that would have Paul or a stunt double playing the role of Vision or (White) Vision. Our rigging department, led by Eric Tang, played a huge role in building rigs that allowed our integ and anim teams to swap character types on the fly as needed.

We started the development of our cape simulations early on. Once we received animation test files from our animation team, led by Frankie Stellato, our character FX team, led by Erik Ojong and Sunil Rawat, started to look at every possible scenario for the Visions. This early development really paid off in the end, ultimately giving us the ability to be ready for anything.

The most exciting exercise was helping Marvel create The Vision (White). For this task we called upon Chris Nichols, our senior character concept artist, who started to sculpt Marvel’s concept in 3D using Vision’s head as a starting point. Once we had buy-off from Marvel, we passed this data back to Nelson and the modeling department to finalize (White) Vision’s head model.

John Brennick (compositing lead): The lookdev and comp departments spent quite a few months prepping the assets to ensure that not only were the Visions looking like the Vision everyone has gotten to know over the years in the MCU, but that they also maintained Paul’s features as closely as possible.

Once we established a solid starting point, lookdev and comp worked closely together to balance the overall look for how Vision would sit in the various environments. Lighting would release a first version while comp spent time balancing out the shader properties, as a guide to identify which properties would work best in the final asset shaders. This gave comp a great starting point, so we didn’t have to push the CG too hard once shot production was moving pretty aggressively. Allowing the shaders to respond in a physically accurate way helped tremendously when assembling the comps as deadlines approached. With the asset feeling pretty well balanced at the base render, comp was able to build out a template that would incorporate all of the little things that make Vision the character he is.

Some of these setups would include adjusting panel grooves, applying the Mind Stone and the new White Vision stone treatments, as well as adding new details to the new White Vision asset to help differentiate his character from Red Vision. For this, we played into the digital aspect of the character and really tried to enhance details like subtle circuitry patterns that show up in the spec, an underlying circuitry pattern in the stone and in the eyes, and new LED lights on the side of the head and face. Once the look was established, comp built out a template for these treatments to maintain consistency across the board, not only for the digi-doubles but for the plate-integrated shots as well.

Marion Spates (visual effects supervisor): I also want to give props to DFX supervisor Matt Smith, additional CG supervisor Kazuki Takahashi, and Oliver Seemann under the leadership of Krista Mclean, our environment department supervisor.

b&a: For scenes where the characters are ‘floating’ or flying, what was DD’s methodology in balancing wire work/greenscreen plates with CG take-overs?

Erik Ojong (character FX (CFX) lead): On set, Paul Bettany and the stunt doubles never wore a cape, so we had to sim one every time it was visible in camera. This required a tight matchmove to the shoulder plates/cape holders. After the matchmove and animation were approved, CFX would run the cape sim using the body as a collider. For shots in the library where there was very little need for wind, the cape sim was sent to the farm and then adjusted as needed. In some cases we really needed the cape to look a specific way (usually when Vision was on the ground); to achieve the right look in those cases, we would adjust the direction of gravity or Vision’s starting pose before the shot even began. For the flying shots, the cloth attributes were adjusted depending on the shot requirements, with a focus on the timing, direction and speed of the wind on the cape.
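To make that idea concrete, here is a minimal, hypothetical Python sketch of how per-shot cape forces might be layered: a gravity direction that can be tilted for grounded poses, and a gusting wind whose direction, speed and timing are tuned per shot. The function name, parameters and numbers are illustrative assumptions, not Digital Domain’s actual solver setup; in production this kind of force layer would feed a full cloth solver rather than being applied to free points.

```python
# Conceptual sketch (not DD's actual setup): per-shot gravity and wind forces
# layered onto cloth points, the way a cape sim's look can be steered by
# retiming/redirecting wind, or by tilting gravity for a specific grounded pose.
import numpy as np

def shot_forces(points, t, gravity_dir=(0.0, -1.0, 0.0), gravity_mag=9.81,
                wind_dir=(1.0, 0.0, 0.2), wind_speed=0.0, gust_freq=0.7):
    """Return a per-point force array for one solver substep.

    gravity_dir -- can be tilted away from world -Y to 'pose' a grounded cape
    wind_dir    -- dominant wind direction for the shot (e.g. flight direction)
    wind_speed  -- near zero for the library interiors, higher for flying shots
    gust_freq   -- low-frequency gusting so the cape never settles into a loop
    """
    g = gravity_mag * np.asarray(gravity_dir) / np.linalg.norm(gravity_dir)
    w = np.asarray(wind_dir) / np.linalg.norm(wind_dir)
    gust = 0.5 * (1.0 + np.sin(2.0 * np.pi * gust_freq * t))  # 0..1 gusting
    wind = wind_speed * gust * w
    return np.tile(g + wind, (len(points), 1))                # same force per point here

# Example: a flying shot with strong head-on wind vs. a still library shot.
pts = np.zeros((100, 3))
flying = shot_forces(pts, t=1.2, wind_dir=(0, 0, -1), wind_speed=25.0)
library = shot_forces(pts, t=1.2, wind_speed=0.0)
```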

© Marvel Studios 2021. All Rights Reserved.

Vinh Nguyen (compositing lead): In the Agnes vs Wanda battle, we constantly pretended there was a third witch floating in the air, acting as the camera operator. In a stormy witch fight, it would be impossible for that camera witch to hold still, and the two fighting witches would have idle floating patterns up in the sky. To break up the wire work/greenscreen feel, it was key to have the witches float and the camera move on all three axes, rather than just in screen space, like a real camera would. And since the witches are high in the sky, when Wanda threw her blasts and Agnes absorbed them, the witches’ momentum would continue, to remove the feeling of being on the greenscreen stage floor.
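As a rough illustration of the ‘third witch’ camera idea, the sketch below (hypothetical, not DD’s comp template) adds a slow three-axis drift plus a decaying momentum kick after each blast, the kind of offset that could be added to a camera’s translation so it never feels locked off on the greenscreen stage. The frequencies, amplitudes and decay are made-up numbers for the example.

```python
# Minimal sketch of low-frequency three-axis camera float plus post-blast momentum.
import numpy as np

def camera_float(t, blast_times=(), seed=7):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, 3)
    freq = np.array([0.11, 0.07, 0.09])                 # slow, different per axis
    drift = 0.15 * np.sin(2 * np.pi * freq * t + phase)  # idle floating pattern
    kick = np.zeros(3)
    for bt in blast_times:                               # momentum continues after each hit
        if t >= bt:
            kick += np.array([0.0, 0.02, 0.3]) * np.exp(-(t - bt) / 1.5)
    return drift + kick                                  # xyz offset added to camera translate

# Four seconds of offsets at 24 fps, with blasts landing at 1.0 s and 2.5 s.
offsets = [camera_float(f / 24.0, blast_times=[1.0, 2.5]) for f in range(96)]
```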

Robert Frick (CG supervisor): For the most part (in the final witch battle) the wire work was fine for Scarlet Witch – but Agatha had issues sometimes where the dress would be caught up on the harness. We had to decide if it was easier to just completely replace her from the neck down, or try to fix the dress in the plate in comp. For flying shots, animation did a great job of adding additional motion to the wire work they were able to capture in camera, so in those cases, we just added a post transform to the plate (if it could be achieved in 2D), or matched the lighting and transitioned when it was least troublesome. Most of the time, we decided just to make it all CG since our asset looked great, our CFX team was killing it, and our lighters are fantastic. Adding float in 2D to FG plates where we were just putting in a sky behind them was instrumental in taking away the ‘Batman on a surfboard’ curse. Also, adding low FG clouds always helped sit the plates better in our environment.

We want to give props to anim supervisor Frankie Stellato, lookdev lead James Stuart, texture lead Nick Cosmi, comp lead Vinh Nguyen and lighting lead Olivier Van Zeveren, primarily for the final witch battle flying shots.

b&a: For Vision’s head, in particular, can you talk about DD’s approach in taking live-action plates and the performance of the actor and realizing the final CG elements and final head shots?

Frankie Stellato (animation supervisor): The best approach we found was for our animators to match Paul’s facial performance with our CG Vision as closely as we could. Once we had a CG version of Vision matching Paul’s performance, our comp team could then blend Paul’s eyes and lower face with our Vision asset into the shot.

John Brennick (compositing lead): Probably the most critical aspect of getting Vision’s and White Vision’s plate heads to integrate into the CG heads lay with the integ and anim departments. An incredible amount of time and focus was spent articulating as much of the mouth and jaw movement as possible. This gave comp a lot of room to work closely with the texturing department on custom mattes generated in CG, since they lined up with the performance so well. The use of RGB mattes, STMaps and XYZ passes was critical to applying grades to Paul’s face and helping darken or brighten the plate where needed. Additionally, since such a large amount of time was spent focusing on the likeness of the CG to the plate during asset development, comp was able to use the spec from the CG over the plate to help seam the CG in, avoiding any hard lines, as well as bringing the plate more into the fully CG environments.
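As a simplified illustration of the technique, the Python sketch below shows the general idea of warping a CG-rendered matte through an STMap into plate space and grading the plate only inside it. The image sizes, function names and the nearest-neighbor lookup are assumptions made for brevity; a production comp would do this with filtered lookups inside Nuke rather than raw NumPy.

```python
# Hedged sketch: pull a CG-derived matte onto the plate via an STMap, then grade
# the plate only where the matte is on.
import numpy as np

def apply_stmap(src, stmap):
    """Warp `src` through an STMap where stmap[..., 0:2] are normalized (s, t) coords."""
    xs = np.clip((stmap[..., 0] * (src.shape[1] - 1)).astype(int), 0, src.shape[1] - 1)
    ys = np.clip((stmap[..., 1] * (src.shape[0] - 1)).astype(int), 0, src.shape[0] - 1)
    return src[ys, xs]

def grade_through_matte(plate, matte, gain=0.8):
    """Darken (or brighten) the plate only inside the CG-derived matte."""
    m = matte[..., None]
    return plate * (1.0 - m) + plate * gain * m

# plate: HxWx3 float image; cg_matte: HxW matte rendered on the CG head;
# stmap: HxWx2 rendered from the same CG so the matte lands on the plate head.
plate = np.random.rand(270, 480, 3).astype(np.float32)
cg_matte = np.zeros((270, 480), np.float32); cg_matte[100:170, 200:280] = 1.0
stmap = np.dstack(np.meshgrid(np.linspace(0, 1, 480), np.linspace(0, 1, 270)))
warped_matte = apply_stmap(cg_matte, stmap)
out = grade_through_matte(plate, warped_matte, gain=0.85)
```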

Simon Twine (compositing supervisor): Paul’s ears were removed with paint in all the Vision shots. We had to tread a fine line between retaining Paul’s performance and making him look like a synthezoid. This meant retaining details around his eyes and nose, but smoothing some of that out and carefully blending it with aspects of the model. We had a lot of articulated roto to enable us to bring back specific areas of his face and blend them with our CG, and to allow us to add the subtle eye-rings around his pupils. A lot of time was spent trying to nail the blend between mask and skin, avoiding the ‘Batman mask’ feeling and making sure that the color and overall shininess of the skin matched Vision across the other episodes and stayed true to the established MCU look. We concentrated a lot on the T-section of the face, keeping as much of Paul’s performance from his eyes, nose and mouth as possible.

b&a: For White Vision, what are the challenges of realizing and rendering a largely white character, just in terms of integrating him into the scene?

James Stuart (lead lookdev artist): When approaching (White) Vision in particular, we knew it would be challenging to sell him in both a low-light environment (the library) and an exterior with strong sun, especially while keeping a relatively strong spec response. Once we were happy with diffuse and subsurface values that tied into the on-set paint work on Paul’s face, we introduced a finely detailed circuitry map to drive the specular. This allowed us to have a nice level of breakup in the spec and to achieve the strong reflections without the surface feeling too much like marble. We also introduced a ‘thin film interference’ layer in comp, which added a slight variation of colors at the glancing angles of his face to break up the overall white color.

Simon Twine (compositing supervisor): Comp/color-wise it was very tricky in some environments to have White Vision appear white rather than dishwater-dull grey. We took cues from the on-set costume, and one of the most important things we did in comp to help make WV’s color more complex was to bring in a layer of thin film interference. This is essentially the type of rainbow effect you see when you look at soap bubbles, and we usually use it for glass or water, but we wanted to add a hint of pearlescence to WV, and this DD tool allowed us to add that and dial it in comp.
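For readers curious about the underlying optics, here is a simplified two-beam thin-film interference model in Python: per-wavelength reflectance varies with viewing angle via the optical path difference 2·n·d·cosθ, which is what produces the subtle color shift at glancing angles. It is only a sketch of the physics, not DD’s proprietary comp tool, and the IOR, thickness and wavelengths chosen are illustrative.

```python
# Simplified two-beam thin-film interference: RGB tint as a function of view angle.
import numpy as np

def thin_film_tint(cos_view, film_ior=1.4, thickness_nm=380.0,
                   wavelengths_nm=(650.0, 550.0, 450.0)):
    """Return an RGB multiplier for a given view angle (cosine of angle from normal)."""
    cos_view = np.clip(cos_view, 1e-4, 1.0)
    sin_t = np.sqrt(max(0.0, 1.0 - cos_view**2)) / film_ior   # Snell's law into the film
    cos_t = np.sqrt(1.0 - sin_t**2)
    opd = 2.0 * film_ior * thickness_nm * cos_t               # optical path difference (nm)
    rgb = []
    for lam in wavelengths_nm:
        phase = 2.0 * np.pi * opd / lam + np.pi               # half-wave shift at first bounce
        rgb.append(0.5 * (1.0 + np.cos(phase)))               # normalized two-beam interference
    return np.array(rgb)

# Facing the camera vs. a glancing angle: the glancing sample shifts in hue.
print(thin_film_tint(1.0), thin_film_tint(0.2))
```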

b&a: For Agatha vs Scarlet Witch, what were some of the particular challenges here in terms of the mix of wire work flying and digi-double replication? What ‘behaviors’ did you make sure each character retained in the fight? Can you also discuss the cloth simulation side of the work in particular?

Erik Ojong (CFX lead): For Agatha, her dress was light and airy, so it was always blowing in the wind. Unfortunately, this meant parts of her shawl or sleeves would get caught on the wires holding her up. When this happened, we had to do a matchmove followed by a cloth sim to match the situation. After the approved sim was rendered, the comp team would choose which areas were best to blend.

© Marvel Studios 2021. All Rights Reserved.

Vinh Nguyen (compositing lead): We really wanted to emphasize that it was mostly a one-sided battle: Wanda launches a flurry of blasts towards Agnes, and Agnes deliberately absorbs all of Wanda’s magic. We wanted Wanda’s violent blasts to really pop and rattle the camera. Meanwhile, Agnes would lovingly take the violent pops and slowly absorb Wanda’s magic.

Robert Frick (CG supervisor): We had CFX artists and modelers constantly handing off WIP dress models for testing in CFX, allowing CFX to drive a big part of the decisions on how we modeled the dress, how the pleats were laid out, where cuts in the fabric were, and where to add extra geo when we were losing volume after running simulations. These early sims identified issues we were able to address in modeling before we went too far down a particular path.

Erik Ojong (CFX lead): For Scarlet Witch, the CFX team would take the model and animation into Houdini and create a rig using Numerion’s Carbon plug-in as the solver. For the soccer mom outfit, the CFX team would take the model into Houdini and break it down into a sim-friendly version, adjusting values and rest geometry to get the right wrinkles and movement. After the sim was approved, the renderable model would be wrapped to the simmed geo and passed to lighting.
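The ‘wrap’ step can be pictured with a minimal point-deform sketch like the one below: bind each renderable point to its nearest sim-cage point at rest, then reapply the cage’s post-sim offset so the detailed model follows the sim-friendly geometry. This is a deliberately naive illustration (rigid offsets only, brute-force nearest neighbor), not the studio’s actual rig.

```python
# Minimal "wrap"/point-deform sketch: renderable geo follows a low-res sim cage.
import numpy as np

def bind(render_rest, sim_rest):
    """For each renderable point, find the index of the nearest sim-cage point at rest."""
    d = np.linalg.norm(render_rest[:, None, :] - sim_rest[None, :, :], axis=-1)
    return d.argmin(axis=1)

def deform(render_rest, sim_rest, sim_posed, binding):
    """Reapply the cage's post-sim offset; a real wrap would also carry rotation."""
    offset = sim_posed[binding] - sim_rest[binding]
    return render_rest + offset

sim_rest = np.random.rand(50, 3)                              # low-res sim-friendly cage
render_rest = sim_rest.repeat(4, axis=0) + 0.01 * np.random.rand(200, 3)
binding = bind(render_rest, sim_rest)
sim_posed = sim_rest + np.array([0.0, 0.3, 0.0])              # pretend this came from the sim
render_posed = deform(render_rest, sim_rest, sim_posed, binding)
```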

The approach for Agatha’s dress had to be different because this was a costume that opened up and showed multiple layers. If we didn’t get the build correct, the final simulated costume would look different from the version shot on set. To help with this, on top of getting the costume patterns we received multiple reference photos of the dress in separated layers. We built a basic costume fanned out as much as possible, with none of the fine details like the pleats or wrinkles. From there, Sunil Rawat, who created most of the CFX rig builds on the show, ran a sim without wind to see if the digital costume matched the reference photos of the dress when it was worn. After this, Sunil would turn on the wind to see if the costume’s silhouette matched what had been filmed when Agatha was floating. The CFX and modeling teams worked closely on this asset, with several iterations made from the original model, which was built in Marvelous Designer, through to the final simmed costume.

Once we had a preliminary sim, the modeling and lookdev teams created the finer details such as the pleats and wrinkles. Meanwhile, the CFX team refined the cloth sim, trying to optimize the sim times while finalizing the simulation’s overall look and feel. The purple shawl was by far the trickiest aspect of this costume: it had a distinct look and shape that we needed to get right, and on top of that it was pleated, which meant it tended to fold along the length of the pleat rather than perpendicular to it. We tried simulating both pleated and unpleated sim geometry and found that while simming the pleated geo gave more accurate results, it was less stable and took a lot longer to sim. So the decision was made to sim the flat geometry, with a second sim that helped clean up any penetrations between the layers.

This was definitely one of the biggest costume sims DD has worked on in recent times, but due to the tight and constant communication between departments we were able to pull it off.

Robert Frick (CG supervisor): Props to CFX dev artist Sunil Rawat, who did a fantastic job under the leadership of Erik Ojong, as well as the modeler Rie Ito, under the supervision of modeling dept head Nelson Sousa.

b&a: How was a typical ‘blast’ handled from Agatha and Scarlet Witch in terms of the particle FX approach for their magic?

Jeremy Hampton (FX lead): At first, we devoted a lot of time to developing the looks for Wanda’s and Agatha’s unique magics. The blasts and absorption effects ended up being an extension of these looks.

The FX team would use low-res geometry that represented the location and speed of the blasts to drive volume and particle sims. A lot of attention was paid to the look of the advected particles, so FX used a surface tension model to ‘pull’ the particles together, giving a crisp, inky look. We also ran multiple variations of the volume sims, mixing their densities and colors for more layers in the magic.
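A toy version of that ‘pull together’ behavior might look like the following Python sketch: particles are advected by a driving blast velocity while a cohesion force draws each particle toward its local neighbor centroid, keeping the silhouette crisp and inky rather than diffusing. The radius, strengths and O(n²) neighbor search are illustrative assumptions, not the production Houdini setup.

```python
# Rough sketch of particle advection with a surface-tension-like cohesion pull.
import numpy as np

def step(p, v, dt=1 / 24, drive=(0.0, 0.0, 4.0), radius=0.3, cohesion=6.0):
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    near = (d < radius) & (d > 0)
    counts = np.maximum(near.sum(axis=1, keepdims=True), 1)
    centroid = (near[..., None] * p[None, :, :]).sum(axis=1) / counts
    pull = np.where(counts > 1, cohesion * (centroid - p), 0.0)   # pull toward neighbors
    v = v + dt * (np.asarray(drive) + pull)                        # blast direction + cohesion
    return p + dt * v, v

pos = np.random.randn(300, 3) * 0.2    # a puff of particles emitted by the blast
vel = np.zeros_like(pos)
for _ in range(48):                    # two seconds at 24 fps
    pos, vel = step(pos, vel)
```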

All these passes were wrangled by our amazing compositing team, using custom templates that allowed the looks to be matched across all the shots.

Marion Spates: Props here also to additional FX lead Nathaniel Usiak!

