How Framestore used machine learning techniques in their comps on ‘The Matrix Resurrections’

A chat with 2D supervisor Patricia Llaguno.

Some of Framestore’s central scenes in The Matrix Resurrections take place in the third-act motorcycle chase, where Neo and Trinity try to avoid a ‘swarm’ of humans, helicopter attacks and more.

One particular moment sees the duo weave through several cars and crowds on the street while also dodging helicopter-fired missiles. Here, Framestore (VFX supervisor Graham Page) created CG versions of Neo and Trinity for key shots, generated missiles and enhanced practical explosions.

An interesting task also fell to the compositing team, led by 2D supervisor Patricia Llaguno: restoring additional range to the original photography in the bright, blown-out areas of the explosions where the cameras lost detail, i.e. where the highlights were clipped.

Llaguno talks to befores & afters about that specific work, done with Nuke's CopyCat toolset, and some other aspects of the chase scene from a compositing point of view.

b&a: What were the particular compositing challenges for that helicopter chase moment?

Patricia Llaguno: The helicopter attack beat comes at the tail end of the swarm attack sequence. The entire sequence was shot exposed for the dark streets and had very dynamic set lighting, with movie lamps often visible within each shot and lots of moving, localised practical smoke that the actors and practical vehicles were travelling and running through. So our first challenge was to paint out the movie lamps and replace them with a source that justified the lighting direction and shadows: often we added CG cars with their headlights in the right position, or extra street lamps when the sources were higher. We also had to make the lit-up smoke around the cleaned-up movie lights softer and more diffuse, which meant rebuilding it and often extending it over extras and the environment.

For the CG crowd enhancement work, this ties into how complex it then was to balance the CG lighting and to integrate the crowds into the plates. The lighting department did a superb job at giving us renders that reflected the very complex light sources in the scene, but there was still a lot of finessing and hand animation of individual lights for CG characters that had to happen in comp. The main challenge in this respect was integrating the CG into the photography smoke and getting the fog levels to read correctly across each CG crowd character on every frame. As you know, every shot in the sequence has got a fast-moving camera, following fast-moving cars and bikes and people running around everywhere (as well as dive bombing from buildings and hitting the road), so the task was not trivial!

By the time we reach the helicopter attack section, the camera sits back a bit to reveal the wider frame, with the helicopters firing missiles at Neo and Trinity on their bike. These shots posed a different challenge for us: the practical explosions were overexposed in the source photography, which was calibrated for the dark street and for Neo and Trin on their bike. The highlight tonal range was missing, with large portions of the explosions turned to flat grey areas with no detail.

This was not visible with the naked eye, but we anticipated it would be a problem later when the material went through an HDR grade after VFX was completed, so we offered to find a solution. We quickly realised that a manual approach of replacing the bad areas with 2D explosion elements, graded, warped and retimed to match the surrounding fire, would be too time consuming. This was partly down to the fact that the damaged areas of the image were not consistent over time and changed with the changes in light intensity coming from the explosions.
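Because the damaged areas shift from frame to frame with the light intensity, they have to be located procedurally rather than painted once. Purely as an illustration of the idea, and not Framestore's actual tooling, a NumPy sketch that flags clipped highlight pixels per frame might look like this (the function name and the 0.95 threshold are assumptions):

```python
import numpy as np

def clipped_highlight_mask(frame, threshold=0.95):
    """Flag pixels whose luminance sits at or near the sensor's
    clipping point, i.e. where highlight detail has been lost.

    frame: float32 array of shape (H, W, 3), linear light, 0..1+.
    Returns a boolean (H, W) mask of clipped pixels.
    """
    # Rec.709 luma weights; any pixel near full scale is suspect.
    weights = np.array([0.2126, 0.7152, 0.0722], dtype=frame.dtype)
    luma = frame @ weights
    return luma >= threshold
```

A mask like this would be recomputed on every frame, since the blown-out regions flare and shrink with each explosion.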

Wouter Gilsing, one of our three leads, offered to test a procedural approach using machine learning to patch the missing pixel information and restore highlight detail dynamically over large areas. And it worked really well as a solution.

The section where Neo and Trinity get thrown off their bike and are enveloped by the explosion had been shot in the studio as a greenscreen wire setup. That was all about successfully describing the action compositionally before they get ejected from the explosion, deploy the shield and land on a car, so it was a lot more of a creative look development task.

We had some CG explosions as a base, but we did a lot of dressing up with library elements in comp. We also made some Nuke particles for embers to add interest, movement and a range of scale within the frame. We were careful to embed the fire and embers and dress them around the edges of the characters, and to do all the heat haze and diffusion without compromising Neo and Trinity's faces. It's a balance to strike with shots of this nature: you want it to be optical and believable, but it has to be stylized enough that you can still respect the performance, not detract from the narrative and emotional power of the moment, and hopefully enhance it.

b&a: With the machine learning side of things, was it something done in Nuke?

Patricia Llaguno: Yes, we used CopyCat for it. We have some in house tools as well, but CopyCat resolved it for us in this instance.

b&a: Had you and your team been experimenting with CopyCat much prior to this film?

Patricia Llaguno: I personally hadn't. But as I mentioned, Wouter (Gilsing) jumped at the opportunity to test it as a solution, as he had been exploring possible applications beyond automated mattes, beauty work and face replacements. He collected a large dataset of in-house library elements of explosions that had been shot underexposed to retain the full range in the top-end highlights, and used that as the target; he then "broke" that dataset to mimic the problem footage and trained against it, and the result was great.
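To make the training-pair idea concrete: each clean, full-range element is degraded so it resembles the damaged plates, and the model learns to map the broken version back to the clean one. A minimal NumPy sketch, assuming a simple clip-to-flat-grey degradation (the actual damage Gilsing mimicked may have been more nuanced, and the clip point and grey value here are invented):

```python
import numpy as np

def break_element(clean, clip_point=0.9, grey=0.8):
    """Degrade a full-range explosion element so it mimics the
    damaged plates: everything above the clip point collapses to
    a flat grey value with no detail.

    clean: float32 (H, W, 3) linear image with intact highlights.
    Returns the degraded copy; (degraded, clean) then forms one
    training pair for a highlight-restoration model.
    """
    degraded = clean.copy()
    blown = clean.max(axis=-1) > clip_point  # any channel over the knee
    degraded[blown] = grey                   # detail replaced by flat grey
    return degraded
```

In a CopyCat-style workflow, the degraded frames would be fed in as the input sequence and the untouched originals as the ground truth.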

b&a: I also wanted to ask you about what I think is another key Framestore shot: it's the moment where they are on the bike and appear to be about to land on a car, and Neo does his force field thing and the car crumples. Can you talk about that particular shot from a comp point of view as well?

Patricia Llaguno: Well, technically, it's almost a full CG shot. We ended up keeping the background plate, which was the building at street level, although they had DMP on top of it. I think with shots like that, as always, the challenge for comp is to make it optical, to make it look real and also beautiful. There wasn't much that we actually added in terms of extra elements in comp. It was more about the grading and lensing, and all the convolves and glows coming from the explosions, to make it look photoreal and beautiful as an image. There is a lot of careful work, like dressing glints on the shattering glass or adding scuffs to the metal on the loose tyre, for example, that elevates the result.

b&a: I figured a lot of it was CG, but I felt like in that whole sequence you were doing things in multiple ways and that helped with integration.

Patricia Llaguno: Well, one thing we did do in general in the sequence is collaborate closely with FX to dev the shield look for Neo. We essentially lookdev'd the shield in comp, but because FX were doing the CG bullets, we had to find a way to collaborate so we could take their bullet velocity and match it to what we were doing.

Normally it would be maybe more down to lighting to do this. Lighting usually gets the FX sim and then does a bunch of lookdev themselves. For the shield we skipped lighting altogether, picked up the assets as point clouds from FX in Houdini, and found a way to bring them into Nuke and create collision maps that would drive sprites for a particle system in Nuke to create the shield ripples, giving them orientation so you could outline them. Dan (Glass) had a very specific idea about what the ripples should look like; he didn't want them to be watery. He wanted them to resonate with the classic Matrix ripple effects and bullet-time trails, but he didn't want to exactly replicate the same effect, so there was a bit of honing to be done.

And I think because of that, it was decided very early on that it was best for comp to do it (courtesy of Hannes Sap and Ivan Sorgente), to drive the look iterations more rapidly, rather than it being a lighting task requiring render time. It was, at its core, a distortion effect, and it needed to go through comp in any case in order to be presented or discussed. So we ended up coming up with a way of working with FX and finding a procedural way of doing this in comp that didn't mean we had to hand place and hand animate every single bit of texture and ripple per shot.
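The real pipeline moved Houdini point clouds into Nuke to drive sprites, which can't be reproduced outside those packages; but the core idea of turning FX points into a 2D map that seeds a particle system can be sketched standalone. A rough NumPy illustration, where every name, resolution and focal value is invented for the example:

```python
import numpy as np

def collision_map(points, width=64, height=64, focal=50.0):
    """Splat a 3D point cloud (e.g. bullet/impact positions exported
    from FX) into a 2D map whose non-zero pixels could seed sprite
    particles for shield ripples.

    points: (N, 3) array in camera space, +Z away from the camera.
    Returns a (height, width) float map of hit counts.
    """
    pts = np.asarray(points, dtype=np.float64)
    pts = pts[pts[:, 2] > 0]                 # keep points in front of camera
    # Simple pinhole projection into pixel coordinates.
    x = (pts[:, 0] / pts[:, 2]) * focal + width / 2
    y = (pts[:, 1] / pts[:, 2]) * focal + height / 2
    grid = np.zeros((height, width))
    inside = (x >= 0) & (x < width) & (y >= 0) & (y < height)
    # Accumulate hits; np.add.at handles repeated indices correctly.
    np.add.at(grid, (y[inside].astype(int), x[inside].astype(int)), 1.0)
    return grid
```

In the shoot pipeline, a map like this would be regenerated per frame from the FX sim, so the ripple sprites inherit the bullets' positions and timing automatically instead of being hand placed.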

b&a: I actually really love that shield effect because, I don’t know how to say this, it didn’t feel ‘magical’, but it probably easily could have. It was more subtle, in some ways.

Patricia Llaguno: We had to think, well, conceptually, what's behind it? It's just the 'perceptual' wobble in a computer simulation, right? So it has to have some simulation, but it has to be modeled on physical rules somehow. It was about finding that balance and making it a useful storytelling tool too. We had to make the action happening around the shield feel plausible as well, say the car collapsing or the bullets stopping or whatever it might be. So it had to be a subtle effect, just enough to lend solidity to the action. It was a fun asset to help create.
