Into the Looking Glass mask with MARZ

The 360 camera rig and CG/comp workflow

Many people may remember the stunning ink-blot Rorschach mask from Zack Snyder’s Watchmen film.

Well, it turns out that some of the same team who worked on Rorschach’s VFX were also part of the team from visual effects studio Monsters Aliens Robots Zombies (MARZ) behind the reflective Looking Glass mask in the recent Watchmen series. This is the mask worn by Tulsa Police detective Wade Tillman (Tim Blake Nelson) in the HBO show.

The process for crafting the mask itself and its reflections would involve a 360-degree camera head-rig worn by the actor, and a specific CG and compositing workflow to realize the final look. MARZ visual effects supervisor Nathan Larouche breaks it down in this befores & afters interview.

b&a: Can you outline the various tests you did for the mask look? Which one did the show’s makers respond to, and why?

Nathan Larouche (visual effects supervisor, MARZ): We spoke with the series’ senior VFX supervisor, Erik Henry, early on in the project’s development. We pitched the fact that we had worked on the original Rorschach mask for the movie, and that we’d love to use that experience to work on Rorschach again in the series. We thought it’d be a perfect fit because our artists have a lot of experience in character work; that’s what our studio specializes in.

It was a quick discussion. After about 20-30 minutes, Erik responded by saying Rorschach wouldn’t be in the show. But a few weeks later, he came back to us to talk about a different character and a different mask, known as the Looking Glass. The costuming department was having a hard time creating this specific piece; it was a reflective mask. They had tried a few materials, but it wasn’t possible for them to create a mask that was mirror-like and could be comfortably worn by the actor.

Look dev #1.
Look dev #2.
Look dev #3.

In the design phase, all we had to go on was the pitch of making a reflective mask. The first design was an experiment we thought was cool (very stylized, a bit cyberpunk, very geometric). We presented it to Erik and he said it didn’t really fit with the world that showrunner Damon Lindelof was creating.



We took a step back and thought more about how we would end up doing this mask if we were building it practically: take a mirror, shatter it, and stitch those shattered pieces back together. We ended up with a design that looked like a shattered mirror mapped onto someone’s face. The problem with that was that the reflections on the surface of the mask were super fragmented, like a mirror ball. But in the script, there were key moments when the audience needed to clearly see the reflections on the mask…so this second design didn’t work, either.

The third time around, we pored over the pitch and created, very simply, a reflective mask. No bells and whistles. We focused on one question: what would a reflective fabric look like? We simulated a realistic fabric balaclava, and in surfacing and texturing, applied a mirror-like material to it. When we showed that to Erik, he said it was definitely the direction to go in (both in terms of being believable and telling the story that the series needed and wanted to tell).

b&a: How was the actor filmed on set? What things helped in terms of look and feel of the mask and lighting reference? Can you talk in particular about the use of the 360 camera for reference and how you tested this setup to make sure it would work?

Nathan Larouche: We had about three months to prepare for the shoot of the pilot. We used this phase to test various technologies that would allow us to design a unique, fast and efficient pipeline for this project. During those months, a big focus was on how we would streamline the way the reflection footage was captured on set. The challenge was that production wasn’t going to give us time to shoot reflection footage with their cameras; it would have slowed down the shoot too much. Capturing the reflection data was fully our responsibility.

We experimented with different cameras and different capture methods, and ended up designing a head rig the actor could wear to capture the reflections needed to pull off the final effect. For all up-close shots, we had the actor wear a green mask and before each scene, we would place the camera rig on his head, and shoot the scene along with all the reflections. It was pretty liberating for production—we stayed out of their way as much as we could, and let them block the shots as they wanted to. It was rare for us to have to step in and ask for a change for technical reasons.

Throughout the shoot, the actor was wearing a mask, so he wasn’t pretending to interact with the fabric…to him, it was natural. Other performers interacting with him had to be guided to know where key reflections would end up being placed on the mask. We had made the decision early on that the most reflective and least distorted part of the mask was the forehead. We communicated that to the DP, director and actors, so anytime there was a visual gag where the actors were using the mask to tie their tie or pick their teeth, they would be looking at the forehead. So cameras and eye-lines would align.

VFX shot

In the three months of prep time before shooting the pilot, we went to William F. White and experimented with various 360 cameras. We wanted a system that was ‘set it and forget it,’ letting us hit record and make sure we were capturing everything required for post. In our case, we had two 360 cameras mounted on the front and back of the actor’s head.



One of the biggest challenges we faced wasn’t necessarily the quality or dynamic range of the video, but being able to stabilize it efficiently. At that point, a lot of 360 cameras required the footage to be processed through their own software, which ran a heavy calculation that analyzed the motion of the pixels and then used that data to stabilize the footage. The results were mediocre. The Rylo was the only camera that recorded the camera’s rotation information directly into the video file, letting us stabilize the footage perfectly after the fact, in order to reflect a stable element back onto a moving mask.
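To make that concrete, here is a minimal sketch (Python with NumPy, not MARZ’s actual tooling) of the idea behind rotation-metadata stabilization: rather than estimating motion from the pixels, each equirectangular frame is resampled using the rotation the camera recorded for it, so world-space directions stay put from frame to frame. The function and variable names are illustrative.

```python
# Minimal sketch of stabilizing a 360 frame from a recorded camera rotation.
# `frame` is an equirectangular image (H x W x 3); `R_cam_to_world` is the
# 3x3 rotation logged by the camera for that frame. Not production code.
import numpy as np

def dirs_from_equirect(h, w):
    """Unit view directions for every pixel of an H x W lat-long image."""
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi     # -pi..pi
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi     # +pi/2..-pi/2
    lon, lat = np.meshgrid(lon, lat)
    return np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)    # H x W x 3

def equirect_uv(dirs, h, w):
    """Pixel coordinates of the given unit directions in a lat-long image."""
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))
    x = ((lon + np.pi) / (2.0 * np.pi)) * w
    y = ((np.pi / 2.0 - lat) / np.pi) * h
    return x.astype(int) % w, np.clip(y.astype(int), 0, h - 1)

def stabilize_frame(frame, R_cam_to_world):
    """Resample one frame so world-space directions stay fixed on screen."""
    h, w = frame.shape[:2]
    world_dirs = dirs_from_equirect(h, w)
    # A world direction is seen by the camera along R^T * d, so sample there.
    cam_dirs = world_dirs @ R_cam_to_world          # (d @ R) == (R^T @ d)
    x, y = equirect_uv(cam_dirs, h, w)
    return frame[y, x]
```

Running this per frame, with the rotation pulled from the camera’s metadata track, yields a world-locked spherical plate that can then be reflected back onto the moving mask.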

After the pilot shoot, we also contacted Rylo to see if they would make changes to the camera’s firmware, mainly to allow us to remotely control the camera (there was an issue with the Bluetooth connection between the camera and its remote). We ended up creating our own remote, transmitter and receiver for the camera, essentially building a custom rig that allowed us to easily start and stop recording.

b&a: In terms of tackling the mask, what were some of the modelling considerations? How did you match the shape of the actor? How did you approach the fabric look and feel?

Nathan Larouche: During the pilot, we had a 30-minute session when we were able to work with SCANable to capture a 3D scan of the actor’s face and body. We scanned him with the mask on, and then with the mask and the camera rig combination. We also scanned a few different expressions on his face to help drive our facial animation in post.

The actual look on the surface of the mask is really showcased in the first episode during the interrogation sequence. There are moments that are super close to the mask, where 70% of the screen is the mask itself. Those are great moments to showcase the level of detail we were able to generate. We started with a pure chrome material, and then experimented with a lot of different fabric scans (denim, knitted yarn, etc.). We applied those displacement maps, keeping the shapes of natural fabric but rendering them to be very reflective. We noticed throughout the process that no single fabric worked every time, because fabric tends to make a surface rough, and rough surfaces aren’t reflective. Instead, we had to create a mask that had rougher parts and smoother parts.

The parts of the mask that were pulled really tight (the forehead, around his mouth and eyes) were very chrome-like and without much relief. For parts that were looser (like the seams around his head), we introduced displacement maps and weave patterns that were rougher. This combination gave the illusion of a more reflective material.

We also looked at other references like leather, and wear and tear that leather can gain (it can be shiny and reflective as it ages, but can also flake off and have scratches). In close-up shots, you can see those details. In the end, depending on how close the mask was to the camera, we changed its surface to keep its look consistent. Close-ups had subtle, fine details and wide shots had much larger, rougher damage points so that in a wide shot, the mask still looked slightly uneven.



We used a lot of different software packages—it was a soup of software. For creating the asset, we used Marvelous Designer to generate the base structure and the geometry of the mask. That geometry was then brought into ZBrush, where we then sculpted finer details and art directed folds and seams more easily. Once we had all of the folds and seams looking the way we wanted, we brought that geometry into Substance Painter (for texturing and shading, where we introduced damage and the chrome look). One really great advantage of this software is that you can simulate wear and tear based on time—let it rain on the mask for an hour and see what it looks like, versus five hours, etc.

We would then bring that asset into Maya, which was our animation package. We had an animatable version of the actor’s face, and the mask would be simulated on top of this face. We used Carbon’s cloth solver to simulate the mask on top of his face inside of Houdini. Those simulations were then exported and brought back into Maya to light. One of the last steps was to light and reflect all of the Rylo reflection footage back onto the simulated mask—we rendered this in Redshift, which allowed us to have fast turnarounds.

Most of the creative notes and feedback we got was based on the position of the reflected elements in the mask, so we needed a quick way to art direct and position the reflections. Redshift was a key tool there—it allowed us to have fast iterations for the artists and supervisors. We then composited all of those layers together inside of Nuke.

That’s not even taking into account the environments, which were reflected into the mask. We stitched all of the on-set photos together, and these spherical images were re-projected onto LIDAR data of all of the sets. These sets were then brought into our lighting department to reflect onto the mask. So the final result was a combination: we were reflecting LIDAR data and performances captured on the Rylo cameras together to pull off the final effect.

b&a: What were some of the specific tracking and match move challenges with the actor’s head?

Nathan Larouche: We needed to ensure that the tracking was spot on. We have one of the best layout leads in the world, Kenny Yong. All of our tracking was done in 3DEqualizer. We created undistorted plates that we incorporated into a new workflow that allowed us to map the breathing seen in some lenses as they rack focus. We then did camera and object solves using LIDAR and head scans of the actor. These tracks were then laid out relative to each other so our reflections would be accurate across an entire sequence.
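Lens breathing means a lens’s effective focal length (and distortion) shifts slightly as focus is pulled, which can throw off an otherwise solid track. As a rough illustration only, and not a description of MARZ’s actual workflow, one simple way to account for it is to calibrate the lens at a few focus distances and interpolate a per-frame value from the recorded focus pull; all of the numbers and names below are made up.

```python
# Hypothetical example of mapping lens breathing during a rack focus:
# calibrate focal length at a few focus distances, then interpolate per frame.
import numpy as np

# Made-up calibration: focus distance (m) -> measured effective focal length (mm)
focus_samples = np.array([0.6, 1.0, 2.0, 5.0, 1000.0])
focal_samples = np.array([36.2, 35.8, 35.4, 35.1, 35.0])

def focal_for_frame(focus_distance_m):
    """Effective focal length at a given focus distance, by interpolation."""
    return float(np.interp(focus_distance_m, focus_samples, focal_samples))

# Hypothetical per-frame focus pull recorded on set (e.g. from lens metadata)
focus_pull = np.linspace(5.0, 0.8, num=24)
animated_focal = [focal_for_frame(d) for d in focus_pull]
```

The animated focal value can then feed whatever undistortion or camera solve the shot needs, so the plate and the CG stay locked while focus racks.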

b&a: How was the on-set capture brought into your system and how did you manage the lighting pipeline?



Nathan Larouche: To set a virtual object into an environment, the best thing you can do is recreate the environment around it digitally and use that to light the CG object. You never show the environment you created, but you get realistic lighting and reflections from having that data in the scene. For every sequence, we ended up modeling low-res versions of the environment and re-projecting photos from the set back onto it. We would then do all of our camera layouts to that environment so animation had reference as to where the character was in that environment, and lighters had reference for what objects needed to be seen reflected on the mask.
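As a small illustration of the principle Larouche describes (not production code), reflecting a captured environment onto a CG surface boils down to mirroring the view direction about the surface normal and sampling the stitched lat-long image in that direction. Everything in the sketch below, from the names to the toy data, is an assumption for the example.

```python
# Sketch of the core lookup behind reflecting a captured environment onto a
# shaded point: reflect the view direction about the normal, then sample a
# lat-long (equirectangular) environment image in that direction.
import numpy as np

def reflect(view_dir, normal):
    """Mirror the (unit) view direction about the (unit) surface normal."""
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def sample_latlong(env, direction):
    """Nearest-neighbour lookup of a unit direction in an H x W x 3 lat-long image."""
    h, w = env.shape[:2]
    lon = np.arctan2(direction[0], direction[2])
    lat = np.arcsin(np.clip(direction[1], -1.0, 1.0))
    x = int(((lon + np.pi) / (2.0 * np.pi)) * w) % w
    y = min(h - 1, int(((np.pi / 2.0 - lat) / np.pi) * h))
    return env[y, x]

# Toy usage with a random "environment" and one shaded point on the mask
env = np.random.rand(512, 1024, 3).astype(np.float32)
view = np.array([0.0, 0.0, -1.0])                 # camera looking down -Z
normal = np.array([0.0, 0.3, 1.0])
normal /= np.linalg.norm(normal)
print(sample_latlong(env, reflect(view, normal)))
```

Rotating or offsetting the environment before that lookup is one simple way to nudge where a reflection lands on the surface, which is essentially the kind of art direction described next.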

We started by keeping everything very physically accurate, but quickly realized that it wouldn’t work for the final shots. We needed to break away from accuracy and art direct the placement of reflections on the mask. In some cases, this required us to rotate or skew the environment around the character to get the desired reflection.

During the interrogation room sequence in the first episode, for example, there’s a hero reflection on Looking Glass’s forehead of the suspect…who is also reflected multiple times across Looking Glass’s eyes. None of that is remotely close to being scientifically accurate, at all. We took the suspect’s footage from the Rylo and placed it onto cards in 3D space, and moved it around until it seemed to be believable. More importantly, it looked really cool. This was a fine line we had to walk—something that was believable, but also helped to tell the story visually.

In terms of the pipeline, it ended up being a whole lot of different reflection elements. And then in compositing, we were able to isolate these different areas of the mask and reveal different lighting passes. It allowed the compositor to art direct and come up with the final look of the shot. One of the key lighters on the show, Tony Linka, really ran with the reflections and had a big part to play in terms of what reflections were used and where they were placed on the mask. Reflecting moving objects is tricky and there’s a balance there. Tony was able to find the perfect balance of making something look cool and seem accurate.

b&a: In compositing, what was the approach you took to bringing everything you had together? What ended up making shots work versus ones that still needed work?

Nathan Larouche: For each sequence, we would isolate hero shots (where the mask was being used as a gag or where Looking Glass was a prominent character) and design the look of the mask in that shot first. We would then distribute that to the rest of the compositing team who would replicate that look.

The way the mask was treated in comp varied drastically depending on the environment. We gave compositors different passes with different roughness amounts—in very bright environments, the mask seems to read smoother and chrome-like, while in dark lighting, the exact same mask looks more rough and dirty. We gave compositors multiple ID passes of rough texture maps so they could increase or decrease the roughness of the mask in comp.
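To illustrate the kind of control Larouche is describing, here is a simplified sketch in NumPy rather than Nuke, with made-up pass and matte names: a compositor blends between a sharp chrome pass and a pre-blurred rough pass inside an ID matte to dial how rough a region of the mask reads, without sending the shot back to lighting.

```python
# Simplified stand-in for per-region roughness control in comp: blend toward a
# rough pass inside an ID matte. Pass and matte names are illustrative only.
import numpy as np

def adjust_roughness(chrome_pass, rough_pass, id_matte, amount):
    """Blend toward the rough pass inside the matte; amount runs 0 (chrome) to 1 (rough)."""
    weight = np.clip(id_matte * amount, 0.0, 1.0)[..., None]   # H x W -> H x W x 1
    return chrome_pass * (1.0 - weight) + rough_pass * weight

# Toy data standing in for rendered passes (H x W x 3) and an ID matte (H x W)
h, w = 270, 480
chrome_pass = np.random.rand(h, w, 3).astype(np.float32)
rough_pass  = np.random.rand(h, w, 3).astype(np.float32)
seam_matte  = np.zeros((h, w), dtype=np.float32)
seam_matte[:, :w // 4] = 1.0          # pretend the left quarter is the seam region

# Make the seam area read 70% rougher while the rest stays chrome
comp = adjust_roughness(chrome_pass, rough_pass, seam_matte, amount=0.7)
```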



What worked well was the hand-off from lighting to comp, whereby the lighters would be given a generic Looking Glass template and would input all of their lighting passes into that template. We would review that and pick which passes had interesting details in them. All of that happened in the lighting stage.

When we had an image that worked, we handed that to comp and the comp team took the laid out reflections and added all the believable details (roughness, damage) to integrate it into the shot. They would take cues from the footage in terms of chromatic aberration, different flare elements, etc to get that last level of polish on it. On complex shots where Looking Glass was interacting with the mask, lots of 2D work was required to rebuild finger details and reconstruct parts of the actor’s face. This type of work is extremely difficult and requires talented compositing artists to pull it off. Iyi Tubi, Perunika Yorgova, and Mitchell Beaton ended up working on some of the most challenging shots in the show and absolutely nailed it.

Below: the Intelligent Creatures reel for 2009’s Watchmen.

b&a: Can you give me a brief insight into the recent work MARZ has been up to?

Nathan Larouche: MARZ is a young company, focused exclusively on delivering the highest quality work for TV clients. In our first year, we worked on Watchmen (HBO), Living With Yourself (Netflix), The Umbrella Academy (Netflix), The Boys (Amazon), The Expanse S4 (Amazon) and various other shows. It has helped us build a lot of momentum.

Naturally at first, and now deliberately, we’re a shop that focuses on character work and effects that are tied to a character. Going into 2020, we’re working with various networks and production companies on really cool shows where we’re definitely continuing to head in the character direction, and pushing the boundaries of what’s possible to achieve on a TV timeline.
