‘OK, this looks very cool, but can you build it underwater?’

February 17, 2022

How ‘The Matrix Resurrections’ re-thought bullet time.

In 2019, Volucap CEO Sven Bliedung von der Heide was at his volumetric video capture outfit, based at Germany’s Studio Babelsberg, when Matrix director Lana Wachowski paid him and his colleagues a visit.

“Lana came in, and saw our very bright white capture stage, which already looked like a white room from the Matrix,” recalls Bliedung.

The director was there to inspect Volucap’s volumetric video capture set-up. “‘OK, this looks very cool, but can you build it underwater?’” Bliedung recalls Wachowski asking. “I said, ‘How much time do we have?’ And she said, ‘Maybe two or three months.’ I said, ‘Give us two days.’ And this was our first meeting!”

An exasperated but excited Bliedung pledged to investigate whether Volucap’s current volumetric video capture rig—which had only been in development since 2018—could be adapted to underwater housing and control. After two days, he reported to Wachowski that it could be, and so began an intensive testing process to turn what is usually a heavily controlled scanning process into one that would have many more variables once immersed in water.

A new bullet time?

What Lana Wachowski was, unsurprisingly, investigating with her desire to shoot volumetric capture underwater was a new way to capture actors—such as Keanu Reeves as Neo and Carrie-Anne Moss as Trinity—in the ‘spirit’ of bullet time, an effect so fondly remembered by audiences from 1999’s The Matrix.

The original bullet-time shooting set-up from ‘The Matrix’.

On that film, leaps and bounds were made by the visual effects team to shoot actors with a multi-camera rig array against greenscreen, insert them into photogrammetry-based ‘virtual’ backgrounds, and use interpolation techniques to manipulate speed and time. Indeed, few visual effects moments are as iconic as bullet time, arguably because the technology and the ideas were supported so strongly by the film’s narrative, including that the characters were within a giant simulation.

On the new film—The Matrix Resurrections—Wachowski was looking for a way to also communicate moments that were within the simulated world of the Matrix, but with a new ‘meta’ twist that the film would employ. “It was about the subjective experience of Neo,” identifies Matrix Resurrections visual effects supervisor Dan Glass. “It was about, ‘How do you convey that feeling of trying to resist, or fight back,’ which is what Neo needed to do. Underwater is great for that.”

The idea, then, was to film the actors underwater and capture them with the volumetric capture rig. This would provide, it was hoped, a photoreal 3D representation of the actor with that ‘underwater’ impact on their face, hair and clothing, which could then be virtually re-staged from any angle, with any camera movement. You would get all the ‘Keanu Reeves’ nuances, say, as moving footage, plus the slightly ethereal, watery, caustics-lit feeling that comes from simply being shot underwater.

How Volucap’s rig went into the deep

A note here: if you’ve seen The Matrix Resurrections, you might recognize the ‘bullet time-like’ scenes that appear in the final film. These are essentially the moments when The Analyst (Neil Patrick Harris) stops time. These were scenes where underwater volumetric capture was explored, but was ultimately not used, for reasons that will become apparent later. Ultimately, a range of techniques helped deliver what became known as ‘split-time’ for these Analyst moments, some inspired by the underwater test footage.

Some of the different camera rigs Volucap ultimately helped employ on ‘The Matrix Resurrections,’ including the underwater volumetric capture rig.

Still, Volucap did actually complete the monumental task of getting its underwater volumetric video capture rig to operate in a tank set-up at Studio Babelsberg following the original request from Wachowski. Its first challenge related to underwater housings for the cameras.

“Underwater housings for cameras of course exist,” notes Bliedung. “But then if you have multiple cameras, you need to synchronize them underwater, which off-the-shelf housings don’t really let you do. Also, we needed a ‘live’ signal, and only connectors that offered low resolution were available. We needed at least 4K, 10-bit, 60fps high-resolution previews from all those cameras. So we built our own housings, power supply, trigger system and cables.”
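As a rough, back-of-the-envelope illustration of why a low-bandwidth connector won’t do (the figures below are our own assumptions, not Volucap’s published numbers), an uncompressed 4K, 10-bit, 60fps preview stream works out to roughly 15 Gbit/s per camera:

```python
# Rough bandwidth estimate for one uncompressed 4K, 10-bit, 60fps preview
# stream (illustrative assumptions only, not Volucap's actual pipeline).
width, height = 3840, 2160      # UHD '4K' frame
bits_per_sample = 10            # 10-bit depth
samples_per_pixel = 3           # assuming a full RGB readout
fps = 60

bits_per_second = width * height * samples_per_pixel * bits_per_sample * fps
print(f"~{bits_per_second / 1e9:.1f} Gbit/s per camera")  # ~14.9 Gbit/s
```

Even heavily compressed, multiplying that across a synchronized multi-camera array quickly outgrows a low-resolution consumer connector, which is the constraint Bliedung describes.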

Alongside back-up systems in case a cable broke, or water got into a housing, Volucap raced to deliver an underwater volumetric video capture system in only around two-and-a-half months. At the end of 2019, the rig was ready for tests.

Those tests employed stand-ins, dressed in suits and clothing that would likely match the real actors, who were captured with Volucap’s rig in the seven-meter-deep tank. The rig was essentially immersed inside the tank and surrounded the stand-ins as they performed a scene.

One piece of test footage involved a stand-in reaching out to stop a slow-motion bullet (with bullet trails) from hitting another person—i.e. Neo attempting to stop the bullet fired by The Analyst at Trinity. For a test shot like that, Volucap captured the scene, then delivered layers, geometry, textures and so on. The final composite—with the characters placed in a warehouse environment—was crafted by One of Us, overseen by One of Us director and production side associate visual effects supervisor Tom Debenham, and, of course, by Dan Glass.

Part of the underwater test, with VFX executed by One of Us.

For Bliedung, the results of the tests were incredibly pleasing given the intense R&D and production schedule. Also, Volucap had delivered on the original idea: to capture actors with the intent of using their ‘real’ performances. “You can’t really fake water or underwater things in CG,” Bliedung says. “It always looks weird when you just move the hair and not the skin. The whole weight of the skin is completely different underwater, and you have a very different drag to the cloth as well.”

A re-think of technique, and story

The underwater volumetric capture was, of course, just a test. As promising as it appeared, there were a number of remaining challenges. Glass advises that it was expensive, for one. COVID-19 also made an impact (the film was shut down for a short time, and new COVID-safe approaches to shooting needed to be employed).

In addition, the initial intention was to heavily show Neo’s subjective experience in that ‘stopped time’ moment. Could the underwater volumetric capture go ‘close enough’ to the actor to show that? One hurdle was that, perhaps unsurprisingly, the underwater footage lost some resolution and focus, although Volucap had made many strides to combat this.

“The other thing is that the reduced gravity underwater can do weird things with people’s skin,” adds Glass. “It changes the way you look. And we could have gone to all that effort—shot your A-listers under the water—and then we would’ve potentially been asked to digitally fix the faces.”

Another challenge became what Glass describes as the evolution of Wachowski’s shooting style. “Her kind of filmmaking style now is almost polar opposite from where we were 20 years ago. Her storyboards, back then, literally could be used to figure out where bullet holes should be placed on walls, that’s how specific things were.”

“Nowadays, she is more about trying to set the stage, if you will, so that you have the location, you have the feeling of light, and the actors know what they’re supposed to be feeling in a scene. Complex technical set-ups obviously can completely stop the flow in those situations. So while we made those explorations, I think we wisely felt like that would impact too much of the way that Lana wanted to tell the whole story.”

Glass praises Volucap’s efforts on the underwater capture, and says that the work later done by One of Us and Framestore in the ‘split-time’ scenes (discussed below) still incorporated the feeling of being underwater. “We certainly looked back at underwater photography, the medium, in the way that caustics affect things. We teased that back into the material, even though it was shot dry. And then there was a major offshoot of this, which was some extra array-based photography, which we partnered with Volucap on.”

The advent of split-time

Those Analyst scenes—one, the cafe fight, and another the bullet firing and apple explosion moment in Tiffany’s (Carrie-Anne Moss) workshop—were ultimately achieved with a range of methods. While there appear to be ‘frozen time’ people in those scenes, Glass ended up calling the shots ‘split time’, since, technically, the people were not really frozen, and they tended to show different pieces of action happening at different frame rates or speeds.

Tom Debenham explains the workshop shots, which One of Us handled: “The Analyst can control time and he can make one person move in slow motion while he keeps moving at whatever speed he wants to. As the sequence progresses, a bullet is shot and, in slow motion, is destined for Tiffany’s head. Neo is slowly trying to reach out to stop it and save her, while The Analyst can control time and move faster. He places an apple in the bullet’s path, which smashes into pieces as the bullet pierces it–referencing both bullet time and Harold Edgerton’s famous photograph.”

“The real challenge, and also the most creative and interesting part,” continues One of Us visual effects supervisor Tyson Donnelly, “was that the brief was developing as the work was progressing. Instead of just executing shots, we were trying to solve the story, execute the visual effects, and create the look! We worked on it for a long time to develop the medium in a newer way. We asked ourselves many questions: ‘Are we meant to see it or aren’t we meant to see it, do you feel that it’s there?’ There were a lot of back and forths, and then there was a moment where we tried slow motion while everything else was moving around. We thought that was a good idea, so we tried to sell the effects at that point. But for some reason, we had to scrap it, figure it out, and then come back with a better idea. I mean, there was always an initial brief, but it could be taken 20 ways. We started in a completely different direction from where we ended up.”

How part of the workshop ‘split-time’ scene was filmed.

Ultimately, a stereo rig with cameras running at different speeds was utilized, and the footage ‘split-screened’. There were also CG additions in terms of CG hair, limbs, clothing and other objects to aid in the slow-mo look. The approach to shooting with a stereo rig came from a suggestion from Gareth Daley in the camera department, as Glass outlines. “He said, ‘Let’s use the stereo rig, two cameras, not offset as left and right eye, but actually aligned, but shooting different frame rates.’ Initially, Gareth was thinking 24fps for The Analyst, and then eight frames per second for the more motion-blurred parts. We ultimately settled on shooting at 120fps and eight. This meant we could create crisp slow-motion frames, and even go slower if we wanted. We could also rebuild The Analyst if we needed to, but we had the eight fps as a real photographic reference as well.”
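To make the arithmetic of that two-speed rig concrete, here is a minimal sketch (our own illustration, not production code from the show) of how the two aligned streams relate: at 120fps and 8fps, fifteen high-speed frames cover each slow frame, so the fast stream can always be retimed down to, or compared against, the real 8fps plate.

```python
# Illustrative only: frame correspondence between the two aligned cameras
# described above, one running at 120fps and one at 8fps.
FAST_FPS, SLOW_FPS = 120, 8
RATIO = FAST_FPS // SLOW_FPS        # 15 fast frames per slow frame

def fast_frames_for(slow_index: int) -> range:
    """High-speed frame indices that fall within one 8fps frame interval."""
    return range(slow_index * RATIO, (slow_index + 1) * RATIO)

print(list(fast_frames_for(0)))     # frames 0..14 of the 120fps stream
print(list(fast_frames_for(1)))     # frames 15..29
```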

One of Us manipulated and brought together the footage, and then orchestrated other elements such as bullets moving at their own frame rate, sparks in the workshop going at different rates, an apple exploding (based on Phantom camera reference), and Neo and Trinity going in slow-mo, while The Analyst needed to deliver dialogue at normal speed. “Putting together the different plates at different times was a very complicated process,” says Donnelly. “We had to film with an old-school technique where you have two cameras aligned in a stereo rig. One is filming in slow motion and the other is filming at normal speed. And then a lot of the other shots were filmed as multiple passes at different frame rates.”

VFX by One of Us.

To film the necessary plates for split-time, Glass had first considered capturing the actors and any props as separate greenscreen elements. “The difficulty with that is the kinds of interactions and the way that eye-lines work. It just felt ultimately better to pose the near-frozen people and have Neil basically deliver his dialogue in the scenes with the actors all appearing still, and then we go in and freeze them. Which is, more or less, what we ended up doing, where people are posing slowly, and we’re going in and fixing areas where they wobble.”

Adam Azmy, 2D supervisor at One of Us, explains his studio’s work on this aspect further: “The wonderful thing about this sequence was that it was all primarily created using plates, shot with mostly natural light and natively shot at the different frame rates we’d need to use across all the sequences. This was also the biggest challenge. Marrying all of these elements, which were shot mostly handheld, took an increasing amount of delicacy. We started with lots of broad smash comps across the sequences as a whole so that we could start the dialogue with editorial, who provided a great lineup edit to use. But unlike most shows, where the editorial reference is the bible, we quickly had to adjust our thinking to allow for some creative back and forth to make the sequence work as a whole. Some of the most complex moments involved the most delicate of interactions. A real-time character momentarily touching a frozen or slow-moving character on the face involved an extensive amount of cleanup, facial tracking, and patching to make the two worlds meet. Not only would the interaction need to be flawless, as they were often closeups, but the strong daylight obviously creates strong shadows, and the real-time shadows would also need to move across the slow-motion performance too.”

Azmy also notes the re-timing approach, given elements were shot at different frame rates, needed close attention. “We found that elements shot at 120fps gave us the perfect amount of frames to mimic the 8fps long-exposure look, using a number of retiming processes and shutter adjustments to create the correct motion blur and stuttering. Although, a sizable number of the shots that the effect is applied to for The Analyst are actually using the in-camera 8fps take of that performance. We would then often use the inverse, lining up and retiming the 120fps footage to bring back moments of clarity to The Analyst’s performance.”

“Things like the welding and plasma-cutting sparks also needed a 2D treatment,” adds Azmy, “as this was one of the key visual signifiers that time was being messed with. We went through a number of wedge tests and experiments, all using the 120fps shot sparks as a basis, to get the look that Lana was after. Something that felt slow, and looked somewhat like long-exposure light trails, but also a little wild and somewhat explosive at points, mixing in both 120fps and some real-time too. The 120fps footage being shot with a 360° shutter gave us the right ingredients to create long ghosted trails using a retime process. We were then able to use more or fewer frames to creatively expand or contract the length of the trails.”
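As a minimal sketch of the kind of retime Azmy describes (our own illustration, assuming frames are already loaded as float arrays, not One of Us’ actual compositing setup), averaging N consecutive 360°-shutter 120fps frames approximates one long exposure, and varying N lengthens or shortens the resulting trails:

```python
# Illustrative frame-averaging 'long exposure' retime, not production code.
import numpy as np

def long_exposure(frames: np.ndarray, start: int, n: int) -> np.ndarray:
    """Average n consecutive high-speed frames into one 'slow' frame.

    frames: array of shape (num_frames, height, width, channels), values in [0, 1].
    Because 360-degree-shutter 120fps plates have no temporal gaps between
    frames, the average reads as a single long exposure. A larger n stretches
    spark/light trails; a smaller n contracts them.
    """
    return frames[start:start + n].mean(axis=0)

# e.g. fifteen 120fps frames approximate a single 8fps exposure:
# slow_frame = long_exposure(clip, start=0, n=15)   # 'clip' is a hypothetical loaded array
```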

For the key bullet and exploding apple moment in the workshop, Donnelly mentions that, “we started in a completely different direction from where we ended up. We did around 20 different versions of how the bullet turns to a flower petal–‘Is it beautiful? Is it scary?’ It didn’t help that there were large periods between direction and feedback, which happened because we were shooting during COVID. It was hard to go back to it after a long time, not having any solid feedback, and then go in a different direction. That was what made it a bit tough. We were in a place where we had 10 tests in play and they were all cool, but we were asking ourselves which one we wanted to do properly. It was a long sequence made of 126 shots, and to keep changing across so many shots was hard. Also, coming back working from home was challenging because we couldn’t use big screens, only our home setups. That also fed into the challenge of it.”

“We had to deal mostly with photographic elements,” adds Debenham. “The apple was all CG, although an apple shot in slow motion was used as reference for the bullet. The trail of the bullet, the apple and the bullet itself were all CG, obviously. And because of the slow-motion aspect of things, and the fact that Neo should look like he is floating and suspended but he wasn’t, we did a lot of CG hair, so the hair looks like he was suspended, and his coat as well. There was a lot of work, and it was all very subtle actually.”

Earlier, for the cafe sequence, a different kind of confrontation scene, several people had to be in ‘frozen time’ in the air. “I think there’s three or four that were suspended on wires that just had to hold position,” discusses Glass. “And then there’s a few more digitally added in the air as well. Framestore worked on these shots. It’s kind of funny because that felt like a rather old-school approach, a sort of a mannequin challenge, if you will, where you’re just standing still. But it’s pretty effective.”

VFX by Framestore.

Camera arrays stay in the mix

While Volucap’s underwater volumetric rig was developed but not utilized in the film, aspects of the multiple perspectives that can be achieved with volumetric capture did make it in, including for what became known as the Red Pill effect. This was the representation of a drug or trance-like state of a character almost ‘glitching’ as the world of the Matrix shifts around them—One of Us produced the VFX shots for this effect.

Then, the Volucap team remained on hand during shooting at Studio Babelsberg to help with a number of key camera set-ups and technology, including mobile volumetric camera systems on tracks, cables and cranes, a handheld camera system that could shoot with eight cameras, and machine learning approaches to these volumetric video captures.

One particular sequence in which a custom-made Volucap volumetric capture rig came into play was in the cafe fight, for a moment in which Trinity and Bugs (Jessica Henwick) overlap. The rig and some AI software also developed by Volucap enabled VFX to spatially match separate performances and blend them together. “It’s this unique moment where the two people are connected in through the same jack into the Matrix,” says Glass. “So essentially, they’re avatars, coalescing and merging together.”

More camera rigs worked on by Volucap.

“That was definitely a struggle for a long time trying to figure out how to depict that,” continues Glass. “But then we said, wait a minute, we’ve got those arrays. We can do this as volumetric capture, with what was effectively a section of a sphere.”

The important thing to note here, though, was that this was not volumetric capture on a controlled stage; it was volumetric capture on set, where other ‘normal’ scenes were being filmed, too. So, Volucap helped deliver a smaller and more manageable camera array rig for the purpose.

“One of the big advantages was that even with a smaller volumetric rig which could travel with the actors, we were able to look behind the shoulder, which meant we could have a spatial representation of the whole space from a particular angle,” says Bliedung. “For us it became a research project of how portable can we make this rig, and what shots could it be used on to give VFX a new dimension of freedom to work with.”

For the merge, the two actors were filmed individually with the volumetric camera rig on set performing roughly the same moves. Volucap’s machine learning algorithm then went to work. “It could analyze the character, and build a 3D version,” details Glass. “But without any CG. There’s no formal kind of CG model built. We had CG models of the actors, but the process doesn’t require it. And there’s no kind of textures derived to be lit or rendered. It’s all photographic material. What you can do is then teach the computer to change the angle of the head or move them positionally in frame. And you can bring the two performances together, which is what we did.”

The result was that there would be passes that appear to be Bugs’ head on Trinity’s body, and vice versa—“The machine algorithm is basically working it out, and since it’s photographic, the lighting is 100% convincing,” notes Glass.

Says Bliedung, on the machine learning side: “We used different applications of artificial intelligence for those shots, with our own algorithms. In a traditional world, you would just use 3D scans of the head. You would re-animate them, do hair simulation, everything. But, especially here where you’re so close, and when it’s a fight scene, there’s always a risk it feels CG. That’s why we created our own software for this purpose, to run really high-resolution results and then be able to blend between those actors.”

Ultimately, BUF handled the final composites for the Bugs/Trinity merge after receiving various layers from Volucap. “I love those guys,” observes Glass. “You ring them up, and you’re like, ‘We don’t quite know what we want…’ And they’re like, ‘Perfect!’”

An example of other volumetric capture handled by Volucap for the film.

Certainly those merge shots and the original idea of the underwater volumetric capture are something of a throwback to the look and feel of bullet time from the original film. But shots like these appear only sparingly in The Matrix Resurrections, a deliberate choice by the filmmakers to go in a different direction. It’s a direction Glass is glad to have taken.

“It felt like everyone was expecting us to do something groundbreaking or new with the technology, so, yes, we did explore things like the underwater volumetric capture. But I think that it actually makes a lot of sense that we didn’t use it, that we didn’t go to some huge, impactful technology solution that could just about grind us to a halt to achieve it. That was just not what the style of this movie was for Lana, in the end.”

