Milk VFX breaks down a single scene from ‘Cursed’.
One of the sequences visual effects studio Milk VFX worked on for the Netflix series Cursed—based on the illustrated novel by Frank Miller and Tom Wheeler—sees the character Nimue (Katherine Langford) confronted in a forest by several wolves. Eventually, from her position atop a rock, she is able to use the Sword of Power to defeat them, depicted with a final stylized wolf head decapitation.
To get a sense of how that kind of visual effects sequence was achieved, befores & afters spoke to several members of the Milk VFX team. Working with overall visual effects supervisor Dave Houghton, they delivered 390 shots for the show, with the wolf pack fight just one portion. Here the team breaks down the wolf sequence shoot, CG build, animation, lighting and compositing stages, step by step.
1. Coming on board
Jenna Powell (visual effects producer): We were able to be involved in the concept stage of the wolves and therefore we were actually able to kind of influence how they were designed.
Ciaran Crowley (visual effects supervisor): We had done a style book of a lot of images and concepts of wolves. There were questions about whether they would be a Frank Miller-esque 300-style wolf or something a bit more abstract. Ultimately the team decided on a more realistic style of wolf, except they were made a bit bigger because the set piece was about six feet and the wolves needed to be able to just cling to that rock.
2. Shooting the scene
Ciaran Crowley: It was shot in the second week of the production over two days on a greenscreen stage. They had some dry runs with the stunties who were playing the wolves, and would do things like bite Katherine’s arm. On set they had rain and also a lighting rig that simulated lightning, which we recorded with some 360 degree cameras so that we could replicate that in 3D.
Once the shoot was done, the editorial team only had a few stunties in the shots, so we got a rough cut and we did some block animation and placed the wolves in positions with IDs on them in terms of black wolf and white wolf and the grays, and built a little bit of a sense of geography for the scene.
Chris Hutchison (animation supervisor): We got our tracking team, led by Amy Felce, to give us some very rough tracks to begin our early blocking animation with. This helped with the layout and positioning and made sure that the tempo and pacing of the sequence would flow and feel as energetic and ferocious as possible.
Sliding a basic rig around and putting a very early run and walk cycle on the creatures let us see what would and wouldn’t work cut- and pacing-wise, and allowed us to tweak and figure out the world space of the wolves, making sure that the whole sequence flowed and that the wolves wouldn’t disappear in one shot and suddenly appear in the next.
3. The build process
Adrian Williams (CG supervisor): Sam Lucas, our modeling supervisor, started to build our base model. We did one initial wolf first, and then we could do all our tests and work out the anatomy on that and translate it to the other wolves. Once the model was done, we handed things over to rigging supervisor Neil Roche who started to develop the muscle systems using Ziva.
Neil Roche (rigging supervisor): With Ziva, we’ve written a lot of tools to integrate the plugin into our pipeline. One of the tricky things was most of the tools we’d written were based on just having one asset in the scene and because we had five different wolves in most of the shots, it added a bit of complexity.
The other complicated part was the fact that the wolves had to be dry in some shots and wet in others, in terms of the groom. So we had to devise a way to pass the groom state attribute through all the shots. The grooms were all dynamic as well so we had to do an extra pass of dynamic hair on the groom before we applied the actual lookdev to it.
We do all our grooming in Yeti. Our lead groom artist Matt Bell did a dry groom and a wet groom and then the animators basically had to use an attribute on the rig which they could set to wet or dry. Then this attribute would be added to the Alembic cache. As the cache would go through the pipeline, we could analyze the Alembic file and we could return whether it was a wet or dry wolf. It would then find the correct files from Shotgun to use when it was doing the simulation of the groom for each shot.
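The wet/dry logic described above can be sketched in plain Python. This is a hypothetical illustration of the idea, not Milk’s actual pipeline code: the attribute that in production rides on the Alembic cache, and the file lookup that in production goes through Shotgun, are both stubbed with dictionaries, and all names are invented.

```python
# Illustrative sketch (hypothetical names): resolving which groom files to
# simulate based on a wet/dry attribute carried with the animation cache.
# Groom publishes per wolf, keyed by groom state (stand-in for a Shotgun query).
GROOM_FILES = {
    ("black_wolf", "dry"): "grooms/black_wolf_dry.yeti",
    ("black_wolf", "wet"): "grooms/black_wolf_wet.yeti",
    ("white_wolf", "dry"): "grooms/white_wolf_dry.yeti",
    ("white_wolf", "wet"): "grooms/white_wolf_wet.yeti",
}

def groom_for_cache(cache_attrs):
    """Return the groom file for one cached wolf.

    cache_attrs stands in for the user attributes read off the Alembic
    cache, e.g. {"asset": "black_wolf", "groom_state": "wet"}.
    """
    state = cache_attrs.get("groom_state", "dry")  # default to a dry wolf
    if state not in ("dry", "wet"):
        raise ValueError(f"unexpected groom state: {state!r}")
    return GROOM_FILES[(cache_attrs["asset"], state)]

# One shot may contain several wolves, each carrying its own state.
shot_caches = [
    {"asset": "black_wolf", "groom_state": "wet"},
    {"asset": "white_wolf", "groom_state": "dry"},
]
print([groom_for_cache(c) for c in shot_caches])
```

The point of the pattern is that downstream departments never set the state by hand: whatever the animators keyed on the rig travels with the cache, so the groom simulation stays automatic per shot.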
We’ve actually developed a proprietary tool called Moobot as a way for us to run our CFX and groom simulations on the renderfarm via Shotgun. It’s a very automated process. The animators publish their shots and then, just through the Shotgun browser, we can select an Alembic cache, run CFX and groom from a custom menu, and it’ll do all the procedures for you and then spit out an HD render of a groomed wolf, which we were using for QC or for bash comps.
4. Final wolf animation
Chris Hutchison: Once the client had signed off our early blocking / layout the official plates came in and we could get started on the full animation and tracking of the sequence.
We sourced hours of video reference and watched as many documentaries about wolves as we could, looking for snippets of live-action reference we could animate to, so the creatures would feel as realistic and believable as possible. In that reference we found some fascinating wolf behaviors and mannerisms that related to our shots, especially the behavioral patterns and group mentality of a wolf pack, which we used as a base to build a library of wolf animation for all of our shots.
This helped with consistency amongst the animators and saved time, since we could reuse certain aspects of our animation library, but it also helped us figure out the mechanics and explore the physicality of the wolves. It gave us a template to start from; we could then work the finer details and nuances of wolf behaviour into each shot as it progressed, making every shot unique and as threatening as possible.
Even after using real-world reference, Frank Miller emphasised that he wanted the wolves to be as intensely vicious and ferocious as possible, and wanted us to reference his wolf from 300, which keeps its head down in a strong, menacing and intimidating pose. So with that in mind we finessed our animation to give Frank the performance and look that he wanted. In short, the wolves needed to be big and bad and fuelled with a load of aggression and ferocity, which I think came across nicely in the final product.
5. The forest environment
Ciaran Crowley: The environment was based on the location where we were shooting a lot of the show, so we lidar’d that area and all the trees.
Adrian Williams: In CG, we started off with four variations of the pine trees, based on what was on set. We modeled those, then did lookdev inside of Maya and then handed the asset to FX and they gave us some ambient swaying and leaf and branch movement which was baked into the caches. Inside Houdini, we built up the environment with scattering tools. We had a lot of branches, twigs, rocks and other set dressing.
That whole environment was modular; we had all the modules in different areas inside of Houdini and we exported them out to Maya for lighting and rendering. We could render those out as a blanket for each shot and then comp could build the shots up that way. Environment supervisor Simon Wickers and his environment team did a 360 cyc for the very background of the CG forest.
6. Lighting and comp
Robin Cape (2D supervisor): The plates that were turned over were challenging. They were covered in rain and practical lighting effects. They had this lighting rig on set that was programmed to emulate lightning, so we had to work with that to start.
Adrian Williams: We got the lighting set-up from the set so we could mimic it inside of Maya. It was definitely tricky to match up, to animate lights triggering at the same time as the lightning on the plates. We figured out that we could have a four-directional light set-up. We initially set up a light rig where the artists were able to light each shot to a gray sphere, so they could light to one or two frames where the lighting was locked on at 100%. We then rendered all the lights together, so the wolves and environment were completely lit. Lead compositor Alvaro Cajal then developed this cool Nuke script, where artists could time the lightning flashes with the directional lights that were rendered.
Robin Cape: The majority of the dynamic lighting would be triggered by 2D artists in the comp using essentially what was a curve tool. Alvaro took the idea of a curve tool to capture that dynamic lighting and made some amazing setups for our artists. Here, the artist could choose a portion of a plate, track it in, and capture the curves, and apply that through the different passes.
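A minimal sketch of that idea in plain Python, with invented values rather than the actual Nuke setup: a per-frame brightness curve, sampled from a tracked region of the plate, weights how much of each rendered directional light pass is mixed over the base render, so the CG flashes fire in sync with the practical lightning.

```python
# Illustrative sketch (hypothetical values): curve-driven lightning relighting.

def flash_weight(curve, frame, lo=0.1, hi=0.9):
    """Normalise the plate brightness sampled at this frame to a 0-1 weight."""
    v = curve[frame]
    return max(0.0, min(1.0, (v - lo) / (hi - lo)))

def relight_pixel(base, light_passes, weights):
    """Additively mix the weighted directional light passes over the base pixel."""
    return base + sum(w * p for w, p in zip(weights, light_passes))

# Brightness curve captured from a tracked patch of the plate (one value per frame).
plate_curve = [0.1, 0.1, 0.95, 0.5, 0.1]

# Single-channel stand-ins for one pixel of the base render and one
# directional "lightning" light pass.
base_value, lightning_pass = 0.2, 0.6

flashed = [
    relight_pixel(base_value, [lightning_pass], [flash_weight(plate_curve, f)])
    for f in range(len(plate_curve))
]
print(flashed)  # dark frames stay near the base value; the flash frame jumps up
```

Because the weights come from the plate itself, the CG light passes inherit the exact timing and decay of the on-set rig rather than being keyframed by eye.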
For the rain itself, we had to make some decisions on what to keep from the plate, what to get rid of and how to blend in CG rain. There are some shots where essentially we just had to cover it up. We had to throw everything we had at it: 2D elements, particle effects in Nuke and real rain.
Interaction between the wolves and Nimue was tricky; there were a few tough shots where our proxy actors had to be cleaned out and then replaced with wolf jaws. There were a lot of work-arounds to make that work—just a lot of roto/paint and anything that helped make those bites look realistic.
Adrian Williams: Also, because it was a muddy environment and was raining, we got the FX artists (overseen by FX supervisor Dimitris Lekanis) to do essentially a displacement pass where the wolf would place their paw in the mud and give us an undulated surface area around the wolf’s paw. We gave that as an ambient occlusion or a black and white map to comp, rather than have it as a physical rendered piece of geometry, which means they could blend in how much they needed of it.
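As a rough sketch of why a black-and-white map is handy for comp, here is the blend on a single-channel pixel value in plain Python (hypothetical numbers, not production code): the compositor scales the FX occlusion map by an amount of their choosing instead of being stuck with a fully rendered piece of geometry.

```python
# Illustrative sketch (single-channel 0-1 pixel values, hypothetical numbers).

def apply_mud_map(plate, mud_map, amount):
    """Darken the plate by the FX occlusion map, scaled by a comp-chosen amount.

    plate:   background pixel value (0-1)
    mud_map: black-and-white displacement/occlusion value from FX
             (0-1, with 1 = fully occluded around the paw print)
    amount:  how much of the effect comp dials in (0-1)
    """
    return plate * (1.0 - amount * mud_map)

# The same pixel at full strength versus half strength:
full = apply_mud_map(0.8, 0.5, 1.0)  # 0.8 * (1 - 0.5)  = 0.4
half = apply_mud_map(0.8, 0.5, 0.5)  # 0.8 * (1 - 0.25) = 0.6
```

Shipping the interaction as a grade-able map like this keeps the creative control in 2D, where the strength of each paw print can be tuned per shot without a re-render.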