Go behind the scenes with DNEG.
Timur Bekmambetov’s Mercy, starring Chris Pratt and Rebecca Ferguson, sees Pratt’s detective character needing to prove his innocence of murder to an AI judge. For several sequences, the production filmed on the LED wall at Stage 15 on the Amazon MGM Studios lot in Culver City. Later, DNEG would tackle a number of key scenes with visual effects. We chat to DNEG visual effects supervisors Chris Keller and Simon Maddison about the work.
b&a: This is a film that utilized shooting on the Volume. Can you talk about the process of taking this footage for various scenes and times where you needed to add to it (say, for backgrounds), augment it etc? What kind of cross-over could you employ with imagery crafted directly for shooting on the Volume? What were the challenges in VFX of dealing with that workflow?
Chris Keller: The volume played a monumental role in our VFX work. The entire courtroom was pre-built in Unreal Engine and projected all around Chris Pratt, and because the design and lighting were intentionally kept consistent between principal photography and final VFX, a lot of medium and close-up shots held up with no or only minimal touch-ups.
For wider shots, we typically replaced the volume imagery with our courtroom asset, keeping only Chris’s part of the chair from the original plate. Even in those cases, the volume still provided valuable lighting which facilitated integrating Chris into the CG environment.
The volume was especially important for animated screens. Any time you see screens swooshing across frame or popping up out of thin air, production was running temp animations on the volume during the shoot. The exact graphics and layout would evolve, of course, but we leaned heavily on the timing and intent in the original photography, so Chris still felt connected to our CG screens. Only a small number of shots ended up needing 2D relighting.
b&a: Can you break down your approach to some of the chase scenes involving vehicles around LA? What approach did production take to filming live-action, and where did you need to add in vehicles and enhancements?
Simon Maddison: The production was able to shoot a truck driving through the streets of LA in the very early morning, which often gave us a very good base to work from. But because the streets were empty, and because the sequences involved many collisions, a lot of the surrounding traffic had to be added in post to augment the action, including the police cars.

Other shots required a fully CG truck and traffic, crowds of pedestrians, and many of the homeless encampments. The director’s vision for the near future included a lot of these camps, so we built a library of digital doubles and tents, which were then added to most of the shots in post.
One particular shot, where the truck drives through a street parade, needed to be almost entirely CG, the base plate containing only the street and the truck. Everything else was added in VFX, including fleeing festival-goers, food trucks and a collision with a rather unfortunate ice cream truck.
b&a: For shots of the flying quadcopter, in particular, it seemed like there was some fun practical gimbal work done for some shots — how did DNEG take this further for wide shots and any enhancements for the close-up shots?
Simon Maddison: There was a gimbal rig with the quadcopter, shot in a volume, which we used for some shots, although even then the background still needed to be replaced. What it did give us was amazing reflections and light interaction on the pilot and her vehicle. These shots were primarily the dash cam attached to the bike, looking back at her face in close-up. Some shots were captured practically on a crane arm on location, with the rig later removed in VFX. Many of the mid and wide shots, particularly during the action sequences, were fully CG-generated using a digital bike asset and a digital double. For many of those shots, we incorporated the original spherical array captures filmed on the streets of LA as background elements, which helped enhance the level of detail and realism.

b&a: What went into the build of the AI judge’s courtroom?
Chris Keller: The main challenge in building the courtroom was on the creative side. Chris and the audience live in that space for most of the film, and the room had to carry the storytelling. It needed to feel convincingly futuristic, but not so over-the-top that it became distracting, and not so minimal that it felt sterile or empty.
Luckily, the rough design language was locked before the shoot. The room was essentially a dark, modern space with roots in a traditional courtroom layout. There are labs left and right, a data center beneath the floor, and a dedicated nook for the AI judge seated in front of a quantum computer. Those things would evolve but not change fundamentally.
Lighting was critical. We needed the room to stay dark for screen readability, but at the same time, Maddox (Rebecca Ferguson) and Chris still had to read as hero subjects. On top of that, we wanted the environment around them to have this eerie ambient glow, motivated by the data center below the frosted glass floor, the lit labs on either side, and the ceiling fixtures. This was a tricky balance. The quantum computer became the centerpiece of the space. The two vertical light columns in front of it were key, as they helped motivate the lighting on Maddox in a natural way.
After principal photography had wrapped, we ingested the Unreal courtroom asset from production, upres’d it, and made design refinements. We kept lookdev running in Unreal for a while because it gave us faster creative turnaround while the look was still evolving. Once we’d landed on the final design and lighting intent, we moved the asset over to RenderMan for final rendering.
b&a: What tools and techniques did you use to visualize the Municipal Cloud?
Chris Keller: While most of the virtual screens in ‘Mercy’ were achieved with a complex 2.5D Nuke setup, we knew right away that the municipal cloud with its thousands of screens required a full 3D approach.

DNEG’s Production VFX Supervisor, Axel Bonami, and editorial did a huge amount of work gathering enough material to fill the cloud: headshots, CCTV, documents, phone clips, bodycam, etc. Ingesting and organizing that volume of source material was handled by a separate unit supervised by Rosie Walker, using a combination of clever automations and good old manual labour.
Then we spent time figuring out the cloud’s design and behaviour. We looked at existing ways of visualizing complex databases, and we talked through what kind of data the municipal cloud would realistically contain and how it would sort that data. That led us to the following rule set: the cloud is organized into hierarchical clusters. The first level is always footage of a location (CCTV, phone, bodycam, etc.). Anything of interest inside that footage – a person, a vehicle, an object – would then spawn a second layer of smaller screens showing a more detailed report. Some of those second-tier screens would spawn a third layer again, like a passport record or a court document. The result was a fractal structure that can explode into the room while still feeling like it has logic.
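The three-tier rule set Keller describes can be expressed as a simple tree. Here is an illustrative sketch, not DNEG’s actual pipeline code, with all names and sample paths invented: location footage is always the root, anything of interest inside it spawns a second layer of report screens, and some of those spawn a third layer of records.

```python
# Hypothetical sketch of the cloud's hierarchical-cluster rule set.
# Tier 1: location footage; tier 2: reports for detected items;
# tier 3: records (e.g. a passport or court document) linked to a report.
from dataclasses import dataclass, field


@dataclass
class Screen:
    kind: str                      # e.g. "cctv", "person-report", "record"
    source: str                    # path or ID of the ingested clip/document
    children: list["Screen"] = field(default_factory=list)


def spawn_cluster(location_clip: str, detections: dict[str, list[str]]) -> Screen:
    """Build one cluster: the root is always location footage; each detected
    item spawns a tier-2 report, which may spawn tier-3 record screens."""
    root = Screen("cctv", location_clip)
    for item, records in detections.items():
        report = Screen(f"{item}-report", f"report/{item}")
        report.children = [Screen("record", r) for r in records]
        root.children.append(report)
    return root


cluster = spawn_cluster(
    "footage/cam_042.mov",
    {"person": ["passport/jane_doe.pdf"], "vehicle": []},
)
print(len(cluster.children))                    # 2 tier-2 report screens
print(cluster.children[0].children[0].source)   # passport/jane_doe.pdf
```

Because every node follows the same spawn rule, the structure recurses naturally, which is what gives the cloud its fractal feel while keeping a readable logic.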
Technically, we achieved the cloud by sorting the source elements into an actual database that we could load into Houdini, and then instance onto points. FX supervisor Marco Van Der Merwe built a procedural setup that gave us scale and speed, but still left room for art direction. We wanted the best and most relevant content to sit on the foreground screens, so we used hand-placed hero clusters to control those moments.
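The database-to-points step can be sketched in miniature. This is a hedged illustration of the general idea, not the show’s setup: source clips live in a database, and each tier is queried out and turned into one point per screen carrying the attributes (position, clip path, ID) that a Houdini copy-to-points network would instance screen geometry onto. All table and attribute names here are invented.

```python
# Minimal "database -> points -> instances" sketch (illustrative only).
import math
import random
import sqlite3

random.seed(7)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE clips (id INTEGER PRIMARY KEY, path TEXT, tier INTEGER)")
con.executemany(
    "INSERT INTO clips (path, tier) VALUES (?, ?)",
    [(f"clips/{i:04d}.mov", random.randint(1, 3)) for i in range(300)],
)


def points_for_tier(tier: int, radius: float) -> list[dict]:
    """Query one tier of clips and scatter a point per clip on a ring.
    Each dict mimics the per-point attributes an instancer would read."""
    rows = con.execute("SELECT id, path FROM clips WHERE tier = ?", (tier,)).fetchall()
    pts = []
    for i, (cid, path) in enumerate(rows):
        a = 2.0 * math.pi * i / max(len(rows), 1)
        pts.append({
            "P": (radius * math.cos(a), 0.0, radius * math.sin(a)),
            "clip": path,   # which source footage this screen plays
            "id": cid,
        })
    return pts


layers = {t: points_for_tier(t, radius=2.0 * t) for t in (1, 2, 3)}
print(sum(len(p) for p in layers.values()))  # 300 points total, one per clip
```

Driving the points procedurally from a database like this is what buys scale, while hand-placed hero clusters can simply override the generated positions for foreground moments.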
Finally, we needed it to match the sleek UI look we’d already lookdev’d in Nuke, so we developed RenderMan shaders to get the right screen feel in 3D: bevels, glassy diffusion, subtle reflections, and self-reflections. And as with the other screen work on the show, production doing temp graphics on the volume was a massive help. It gave us wonderfully complex practical lighting that made integrating the CG cloud much easier.



