Including major advancements in connecting real water to CG water. An excerpt from befores & afters magazine's Avatar: Fire and Ash issue.
Several sequences in Avatar: Fire and Ash feature the characters in water: in flowing rivers, at the riverbed, in a swamp, and in and on oceans. These scenes include Spider, which meant practical live-action water shooting environments were necessary. The innovation Wētā FX worked on was connecting that practical water with its CG simulations. “We got a little bit better at matching the CG water surface to what was shot,” outlines senior visual effects supervisor Joe Letteri from Wētā FX. “It was about updating our tools, learning what we knew about water and waves, figuring out what was done on the set and figuring out how to generate a smoother match.”
In the river
Scenes of Spider and members of the Sully family leaping into a jungle river and then being swept along by the rapids required both performance capture and live-action photography. For the underwater performance capture, the special effects team created a 250,000-gallon wave pool at Manhattan Beach Studios. The pool was outfitted with wave and current machines, including two 19,000-pound steel wedges driven by an 800-horsepower hydraulic system, able to simulate realistic ocean and river conditions. For the live-action component, production filmed Jack Champion accompanied by blue-suited stand-in performers. The tank was built in a ‘racetrack’ configuration, directing large volumes of water around a course so the actors could be carried along as if down a river.




“The set was amazing, built and dressed to be almost 100% camera-ready barring some bluescreen and rigging deep in the background,” details Wētā FX visual effects supervisor Sam Cole. “In the photography we had plants, log bridges and rocks, and a set fully detailed down to staining the water to mimic the tannins that would be present. The river eventually widens into an open sunlit area, with the kids clambering out. The calm shoreline exit shots were filmed on a partial studio-based set of the river and shore in New Zealand and handled in post by a team led by Sergei Nevshupov. Ultimately the ratio of plate-based to fully digital shots for the river scenes is about 50/50, with the plate shots still requiring some or all of the water to be replaced or augmented, plus set extension and character insertion.”
From the rapids plates, Wētā FX would look to retain as much of Spider as possible, as well as some practical water, then replace the stand-ins with Na’vi and craft a digital environment and water. Details Cole: “The blue-suited proxy performers in the plates were not a perfect proportional match for digital Na’vi, plus they needed their positions or movements modified, often creating large gaps and underlaps requiring CG water, plants and set replacement. Meanwhile, joining plate and CG water is unforgiving, especially so in native stereo. Water is reflecting and refracting, it entrains bubbles and foam and carries a history of anything that has occurred. Any modification or sim patching done to match the surface has to continue and must move coherently, or it stands out immediately.”
“In the jungle river,” adds Cole, “the challenges were amplified by the number of things interacting. In the words of our compositing supervisor Tim Walker, ‘Dang, everything is touching!’. The water is connected to and motivated by the characters, the characters are inside clumps of plants touching and manipulating vines, the vines are dangling and interacting with the water and jostling other plants. Essentially there are no clean split points and we had to be very strategic with knock-on effects of replacing or augmenting plants or the water surface, really threading the needle. A final cherry on the cake is that the practical camera housing itself has water dripping and sheeting off it, the distortion of which must be matched in stereo across the frame.”
To solve these issues for the jungle river, Wētā FX refined its approach to matching water simulations very closely to the practical photography. “Dotted around the set, we had many machine vision cameras in stereo pairs, with a wide baseline that we used to recover geometry from pixel-depth of our performers, environment and, in this case, the water,” explains Cole. “As it’s evaluated per frame, the stateless mesh can’t directly drive water but works well as a guide for gross motion and alignment.”
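The machine-vision setup Cole describes rests on standard stereo triangulation: with a known baseline and focal length, the per-pixel disparity between a camera pair converts directly to depth. A minimal sketch of that relation follows; the function and parameter names are illustrative, not Wētā FX's pipeline:

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Pinhole stereo depth: Z = f * B / d.

    A wider baseline (as on the set's machine-vision pairs) produces a
    larger disparity for the same point, which makes depth recovery at
    range less sensitive to pixel-level tracking noise.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point visible in both cameras")
    return focal_px * baseline_m / disparity_px
```

Evaluated per frame, a depth map like this yields the “stateless mesh” Cole mentions: a fresh surface each frame with no memory of the last, usable as a guide but not as a simulation state.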
“As a starting point,” continues Cole, “everything and everyone gets match-moved as tightly as possible, including the camera crew and any camera equipment that is interacting with the water. For integration shots we track ten to twenty extra locators that describe the water surface and are not too messy in their motion from splashing or reflections. The team targets bubbles for as many frames as they live, intersections with match-moved characters, the set or rigging. Using these tracks as oscillators, the FX team can decompose the water surface into a parametric representation that provides the initial force inputs to our high resolution simulations.”
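The oscillator decomposition Cole outlines can be approximated with a linear least-squares fit: each tracked locator's height over time is expressed as a sum of sinusoids at candidate frequencies, whose fitted amplitudes and phases then parameterize the surface motion. A minimal sketch under that assumption (NumPy; names and structure are illustrative):

```python
import numpy as np

def fit_oscillators(t, h, freqs):
    """Least-squares fit of tracked surface heights h(t) to a sum of
    sinusoids at the given candidate frequencies (rad/s).  Returns an
    amplitude and phase per frequency -- a parametric stand-in for the
    plate water that could seed force inputs to a simulation."""
    # Design matrix for h ~ sum_k a_k*sin(w_k t) + b_k*cos(w_k t)
    cols = []
    for w in freqs:
        cols.append(np.sin(w * t))
        cols.append(np.cos(w * t))
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    amps, phases = [], []
    for k in range(len(freqs)):
        a, b = coef[2 * k], coef[2 * k + 1]
        amps.append(np.hypot(a, b))      # amplitude of this oscillator
        phases.append(np.arctan2(b, a))  # phase offset
    return np.array(amps), np.array(phases)

def eval_oscillators(t, freqs, amps, phases):
    """Re-synthesise the surface height from the fitted oscillators."""
    return sum(a * np.sin(w * t + p) for w, a, p in zip(freqs, amps, phases))
```

Because the fit is linear in the sin/cos coefficients, it is cheap enough to run per locator, and the recovered parameters extrapolate smoothly past the tracked frames, which is what makes them usable as force inputs rather than raw constraints.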
Armed with this data, Wētā FX’s FX team then iterated on simulations, feeding the comp team with low fidelity renders until they were confident they had a simulation that matched the stereo plate. “With our ‘bulk’ simulation settings in the can,” says Cole, “we can kick off a hero simulation complete with bubbles, foam, aeration volume, splashes, ballistics with re-entry and high resolution thin film on our characters. As a final sweetener we do a fine capillary waves solve, used as a tiny sub-pixel displacement at render time—it captures a lot of high frequency detail.”





“Ocean water is rarely perfectly clear and we often use Jerlov water types as a starting point to derive how the chlorophyll, plankton or dissolved matter impacts the water volume appearance and map it to our shader parameters,” notes Cole. “In the case of tannin rich jungle water we made something a little custom. Lighting lead Ari Ross defined the absorption and scattering coefficients, sprinkling in a little suspended silt to the murk.”
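The volume look Cole describes comes down to per-channel extinction: absorption and scattering coefficients feed a Beer–Lambert falloff, so in tannin-rich water red and blue die off faster than green. A toy sketch with invented coefficients (the production values were defined by the lighting team, not reproduced here):

```python
import numpy as np

# Hypothetical per-channel coefficients (1/m) for a tannin-rich volume;
# a real setup would derive these from measured or Jerlov-type data.
ABSORPTION = np.array([0.45, 0.18, 0.35])  # R, G, B: red and blue absorbed fastest
SCATTERING = np.array([0.08, 0.08, 0.10])  # a little suspended silt adds scatter

def transmittance(depth_m):
    """Beer-Lambert transmittance through depth_m metres of water.
    Extinction is absorption plus out-scattering, per colour channel."""
    sigma_t = ABSORPTION + SCATTERING
    return np.exp(-sigma_t * depth_m)

def attenuate(rgb, depth_m):
    """Colour of light after travelling depth_m through the volume."""
    return np.asarray(rgb) * transmittance(depth_m)
```

With these made-up numbers, white light a few metres down shifts toward murky green, which is the qualitative behaviour the quote describes.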
Meanwhile, where water needed to be shown tumbling down rocks or crashing against walls, Wētā FX orchestrated the look so that bubbles coalesced into patches of foam, handled as extremely dense explicit geometry. “Job Guidos in research and Tane MacDonald in FX really pushed the limits on the new foam system,” advises Cole. “Looking down the length of a river increases the size of the simulation domain, as we’re seeing a lot of water, and that water requires a decent run-up time to get stable for realistic-looking foam patches to appear.”
For the whole stretch of the river, Wētā FX decided to orchestrate the water as one full-scale effects simulation. “For that to be successful,” suggests Wētā FX visual effects supervisor Sergei Nevshupov, “we first of all had to make sure that all the shots were calibrated. So, making the water level the same everywhere, making sure the water level in the plates was matched. The same went for the speed of the water; we had to find the right volume of water and create an overall river depth profile, so that the center of the river would flow faster and be more dangerous than the area where the kids were hitting the shore. From my own personal experience, too, I knew that with these kinds of whitewater rapids there would be a lot of huge boulders, so we scattered quite a few of them under water to interact with the current. There were also a lot of secondary simulations. Whenever water hits a big rock, we would create little splashes and some nice droplets so they would shine in the setting sun. We created a mist from the waterfall that would be carried over the water surface.”
Already challenged with integrating live-action Spider into the rapids scenes, Wētā FX further had to deal with his oxygen mask. “The mask would reflect the large light sources on set,” notes Nevshupov. “We had to break up those reflections and make sure that they looked like the sky passing through the canopy of the trees. We had one particularly interesting shot where the camera was following them as they go from the riverbank into the forest and up the hill. The choreography of the stand-in actors on the plate was different from the choreography that Jim ended up with on the template. Lo’ak was also crossing the path of Spider going through the part of the set that was practically shot. So, we had to replace all the background. We left just the upper part of Spider, to allow his CG legs to interact with the carefully built CG environment, and then we blended it back with the original plate as he climbed the hill, moving closer to the camera.”
In the ocean
The approach of matching plate water to CG water was also applied to shots of the characters in the ocean. An early scene in the film features Kiri (Sigourney Weaver) lamenting the fact that she is unable to connect to the Spirit Tree and Eywa. Here, Spider swims up to her. “The water there was partially live-action,” describes Cole. “It’s a plate of Jack Champion swimming up. It might be 1,000 or 1,200 frames. It was one of the first shots that we used to exercise our new water matching, because the rock that Kiri is sitting on needed to be extended and moved, which meant we needed to have plate water joining directly with CG water for about 1,000 frames in stereo.”
Using the oscillators method, artists tracked bubbles, intersections with scaffolding, witness cameras and Champion himself via the body match-move. “Then we could feed those oscillators into a new Stokes wave model and get a parametric representation of the water. That allowed us to drive the simulation.”
For another kind of ocean shot, in which Spider is learning to ride his Ilu, Wētā FX adopted the same techniques. Traditionally, the team would have rotoscoped the plate character off the live-action photography (of Champion on an Ilu buck), placed him on a card, animated the creature and then had the plate move in space relative to the motion of the creature.
On Fire and Ash, the VFX studio in fact did the inverse of that, outlines Wētā FX lead FX supervisor Nicholas Illingworth. “We actually extracted the wave motion from the actor. So we had the match-move of Jack that sat on a black ilu-esque buck that was buoyant. It would be moving up and down. We would extract a single oscillator from that position of Jack moving up and down the wave. We would push that into a Stokes wave to author those wide translations, which meant that we got a very accurate representation between the plate and this new water element. So, when we did add in the creature, the only parts that we were changing on the plate and Jack were essentially from the knee down. Anything that was under the water was CG. Everything else we were able to preserve from the plate.”
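The Stokes wave Illingworth mentions is a classical nonlinear extension of a sine wave: a second-order bound harmonic sharpens the crests and flattens the troughs, which reads as ocean swell rather than a pure sinusoid. A minimal second-order deep-water sketch, with the amplitude and phase standing in for the oscillator extracted from the buck motion (an illustration, not Wētā FX's solver):

```python
import numpy as np

GRAVITY = 9.81  # m/s^2

def stokes_surface(x, t, amp, wavelength, phase=0.0):
    """Second-order Stokes wave profile: a linear wave plus a bound
    harmonic at twice the frequency.  The harmonic adds to the crests
    and subtracts from the troughs, giving the characteristic sharp
    crest / flat trough asymmetry of real swell."""
    k = 2.0 * np.pi / wavelength          # wavenumber
    omega = np.sqrt(GRAVITY * k)          # deep-water dispersion relation
    theta = k * x - omega * t + phase
    return amp * np.cos(theta) + 0.5 * k * amp**2 * np.cos(2.0 * theta)
```

Driving `amp` and `phase` from a single tracked oscillator is enough to reproduce the broad vertical translation of the plate, which is why only the parts of Champion below the waterline needed replacing.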
“We extended that to multiple oscillators as well,” says Illingworth. “Here, you can look at your plate photography and track certain positions in the wave height, and we had Basler machine vision cameras for that, which let you form a stereo mesh. It sometimes came through with a lot of noise, which is why we would add in these multiple oscillators, but that got us a far closer water match to the plate than we ever had before using the more traditional methods.”
Of course, there are a lot more CG character and CG ocean shots in the film, such as those portraying Lo’ak’s journey on an Ilu into the deeper ocean in search of the outcast Tulkun Payakan. Lo’ak pushes through a storm and is shown navigating enormous waves. “The waves in that scene are truly gigantic,” observes Wētā FX visual effects supervisor Francois Sugny. “They were art directed to look dangerous and to really put the emphasis on the story of Lo’ak going through that crazy ocean to find Payakan.”
“There’s a shot from underwater looking up at him,” adds Sugny. “Jim’s comment on that shot was that he wanted to have that feeling of surf videos when you look up from underwater and you see the giant waves rolling along on top. That was challenging as well because it was at night. In the daytime reference videos of surfing, you really understand what’s going on because of the sky refractions through the weather and everything. But at night, that was much more challenging. We had to find ways of playing with the density of the volume and the volume itself to really sell that distance to the surface and also the actual curvature of the surface above us. And of course it was raining! And there is lightning. We don’t actually see that much in the shots, but there are rain drops and rain droplets hitting the surface everywhere that do leave tiny little bits of displacement, even crown splashes, from the rain.”
In the swamp
The characters also end up in a swamp environment, something quite different from the oceans and the flowing river, notes Nevshupov. “The difference here is that it was a relatively large body of water. Technically, when you have water that big, it always has some kind of waves, even if it appears still. Our first approach was to use Fast Fourier Transforms (FFTs), which we usually use for large bodies of water, to simulate how waves interact with each other and are disturbed by wind. Well, it didn’t work. We still used FFTs, but with a larger wavelength, lower amplitude and fewer spectral components.”
The environment surrounding the swamp required trees, dead trees falling into the water, leaves floating on the water, duckweed, and other debris. “The water was made to look almost completely flat at some points because the duckweed essentially suppresses the wave propagation,” says Nevshupov. “That’s what we tried to simulate. We created the areas with the duckweed and the masks that would suppress FFT at those points. That already created a realistic, nice looking stretch of the swamp’s surface. On top of that, we had characters that were walking through and they disturbed that water, so we had to work out how that surface would react, which was very different to open water. When characters move through the duckweed, it hits them, collides with them, and follows them in a swirly motion that creates a wake behind them.”
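The FFT-plus-mask approach Nevshupov describes can be sketched in miniature: synthesize a heightfield from a random-phase spectrum weighted toward long wavelengths, then attenuate it spatially wherever duckweed covers the surface. This is a toy stand-in for the production setup, with all parameters invented:

```python
import numpy as np

def fft_heightfield(n, rng, amplitude=1.0, falloff=2.5):
    """Tiny FFT-ocean-style heightfield: random phases shaped by a
    power-law spectrum so long wavelengths dominate (a crude stand-in
    for the narrower, lower-amplitude spectrum used for the swamp)."""
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    kmag = np.hypot(kx, ky)
    kmag[0, 0] = 1.0                      # avoid divide-by-zero at DC
    spectrum = amplitude / kmag**falloff  # more energy at low frequencies
    spectrum[0, 0] = 0.0                  # no net vertical displacement
    phases = np.exp(2j * np.pi * rng.random((n, n)))
    return np.real(np.fft.ifft2(spectrum * phases))

def damp_with_mask(height, duckweed_mask, strength=0.9):
    """Suppress wave height where duckweed covers the surface.
    Mask is 1.0 under full cover, 0.0 in open water."""
    return height * (1.0 - strength * duckweed_mask)
```

In production the suppression would feed back into the wave solve rather than simply scaling the output, but the masking idea, damping waves only where the duckweed sits, is the same.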
Simulation-wise, this made for heavy scenes. Every piece of duckweed was a plant, so Wētā FX relied on instancing methods to achieve these sims, as Nevshupov relates. “The challenge was the nice-looking, physically correct propagation of that wave, the dampening of that wave, the advection of the duckweed behind the character and the colliding of the duckweed with the characters—even some of the duckweed would stick to the surface of the character. Jason Lazaroff and team did great research and development on that, and we ended up with a nice setup that worked really well.”
At one point in the swamp, Spider is holding onto Jake’s shoulder as they pass under a tree vine resting in the water. “The tough thing here,” says Nevshupov, “was that the stand-in actor playing Jake had different proportions to the final Na’vi character, but they were in the same body of water. So, when we put the CG Jake in there, his shoulder was higher up. The two hanging vines were there, but Jake did not fit in. So we had to replace those vines, but match them exactly to the plate. We also couldn’t use our Spider digital double because it was so close on him. The real Spider, Jack Champion, had a lot of little details on him like water dripping, and we didn’t want to lose that. We ended up just replacing Spider’s hand from the shoulder down, and re-animating the hands so he could grab Jake’s shoulder. We left a little of the plate, but we also replaced a lot of vines and the moss with CG ones.”
At the riverbed
A later scene featuring Spider, Jake and Neytiri at the riverbed was brought together first as a performance capture shoot, and then on a constructed river set where the live-action Champion could be filmed.





“It obviously didn’t have the flowing water or the speed it needed,” advises Wētā FX visual effects supervisor Eric Saindon, “but we were able to get Jack standing on the edge of the water and bounce the light properly off it so we could get the proper caustics bouncing onto Jack. It really helped on the lighting side to integrate him. That’s the thing that works so great on these movies—because Jim knows what the lighting’s going to be, what the camera angles are going to be, ahead of time, when we actually get to the live-action, we can actually film it. We can get the DP to look at that as reference as the proper guide for it and get the light coming in from the right angle. We don’t have to go back later and mess with the lighting. We don’t have to mess with the image and make it smaller or deform it to fit into the screen, because we know what the shot’s going to be ahead of time. It really makes a huge difference with integration.”


