The rendering advancements that made for an extremely memorable character.
Industrial Light & Magic’s Davy Jones in the 2006 film Pirates of the Caribbean: Dead Man’s Chest is one of those characters that people still marvel at. Informed by the live-action performance of Bill Nighy, wearing one of ILM’s new iMocap suits, Jones was a fully CG character with incredible personality.
He’s also the subject of our newest Retro RenderMan article, where we’re featuring previously published Pixar materials on rendering. Read on below to find out how ILM utilized point-based approximate ambient occlusion and color bleeding, without ray tracing, in RenderMan for Davy Jones, combined with refinements of in-house subsurface scattering tools.
The illustrious history of subsurface scattering in production
ILM laid much of the early groundwork for production-ready subsurface scattering, building on the research of Henrik Wann Jensen. Jensen’s subsurface scattering model includes two scattering terms: single scattering and diffuse scattering.
Single scattering simulates the effect of light scattering exactly once inside the medium. Diffuse scattering computes the statistical dipole point-source diffusion approximation. Of the two, single scattering is the simpler and less computationally expensive, so it found its way into production first. The technique was implemented for use on digital doubles in Star Wars: Episode II, but those shots had already gone final and looked good enough without it. Consequently, single scattering made its proud debut on the next feature in the pipeline: Hulk. Because single scattering alone resembles a specular reflection, the scattering term was randomly jittered to mimic a more diffuse scattering look.
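For reference, the diffuse term of Jensen’s model evaluates the dipole diffusion approximation: a real point source placed just below the surface and a mirrored virtual source above it. A minimal Python sketch of the diffuse reflectance profile R_d(r) follows; the function name and the choice of single-channel (rather than RGB) coefficients are illustrative, not ILM’s actual DSO interface.

```python
import math

def dipole_Rd(r, sigma_s_prime, sigma_a, eta=1.3):
    """Jensen's dipole diffusion approximation: diffuse reflectance at
    distance r (mm) from the point of illumination, for reduced
    scattering coefficient sigma_s_prime and absorption sigma_a."""
    sigma_t_prime = sigma_s_prime + sigma_a
    alpha_prime = sigma_s_prime / sigma_t_prime          # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport coefficient
    # Internal (Fresnel) reflection term from the relative index of refraction.
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)
    z_r = 1.0 / sigma_t_prime          # depth of the real point source
    z_v = z_r * (1.0 + 4.0 * A / 3.0)  # height of the mirrored virtual source
    d_r = math.sqrt(r * r + z_r * z_r)
    d_v = math.sqrt(r * r + z_v * z_v)
    term = lambda z, d: z * (sigma_tr * d + 1.0) * math.exp(-sigma_tr * d) / d**3
    return alpha_prime / (4.0 * math.pi) * (term(z_r, d_r) + term(z_v, d_v))
```

The profile falls off rapidly with distance, which is what makes baking irradiance and gathering nearby contributions (as described later) practical.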
The first complete implementation of both single and diffuse scattering appeared in Dobby the house elf (Harry Potter and the Chamber of Secrets, 2002). This made famous the z-buffer technique developed by Hery, Letteri, and McGaugh, which won the Academy Award for Technical Achievement in 2003.
This technique worked well but had limitations. First, considerable bookkeeping was required to set up and track the requisite z-buffers: a single skin mesh, for example, requires its own z-buffer for each light, plus additional buffers for blocking geometry (like cartilage and clothing). Perhaps the greater limitation, given the proliferation of image-based lighting, was the requirement that all illumination contributing to the SSS come from depthmap-casting CG lights. This prohibited the use of ray-traced shadowing and global illumination techniques, which have become increasingly common for realistic lighting.
The solution to this problem was to write out a cache of micropolygon shading grids with irradiance information baked onto them, then run the subsurface scattering diffusion calculations on that cache. This approach leverages the automatic dicing of the REYES algorithm, allowing arbitrary illumination and shadowing to contribute to the irradiance. The technique debuted on Terminator 3 and was used as the main scattering technique on subsequent productions.
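Conceptually, the cached approach reduces to a gather: each cache point carries a position, the area of its micropolygon, and the irradiance shaded onto the REYES grid, and the scattered result at a shading point is a profile-weighted sum over the cloud. A simplified single-channel sketch, with hypothetical names and a stand-in falloff profile (a production implementation would use the dipole model and a hierarchical traversal rather than a linear loop):

```python
import math

def scatter_from_cache(x, cloud, Rd):
    """Gather subsurface scattering at surface position x from a baked
    irradiance cache. Each entry is (position, micropolygon area,
    irradiance); Rd is the diffusion profile as a function of distance."""
    total = 0.0
    for p, area, irradiance in cloud:
        r = math.dist(x, p)  # approximate scattering distance
        total += irradiance * Rd(r) * area
    return total

# Stand-in diffusion profile for illustration only:
Rd = lambda r: math.exp(-r) / math.pi

cloud = [((0.0, 0.0, 0.0), 0.01, 1.0),
         ((0.5, 0.0, 0.0), 0.01, 0.8)]
result = scatter_from_cache((0.1, 0.0, 0.0), cloud, Rd)
```

Because the cache is written once per frame, any illumination the renderer can shade — ray-traced shadows, image-based lighting, bounce light — contributes to the baked irradiance for free.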
In addition to the complexity of implementing the scattering algorithms, it is difficult to set the input parameters of the subsurface scattering calculation sensibly. Most other illumination calls return a float value with intuitive tuning parameters, and the result is easily modulated by multiplying it against a color map. The subsurface scattering calculations, however, return a color value and are driven by two unintuitive parameters: albedo and mean free path length. The question becomes how to author the correct input parameters in order to return a realistic scattered skin color. To address this, ILM developed something known as the “texture inversion trick”.
The inversion trick was developed to help integrate subsurface scattering into the shading pipeline with minimal disruption to the texture artists and the look development process. Since the texture artists were used to creating diffuse texture maps for standard BRDFs, the idea was to use these diffuse maps as the starting point for determining the SSS parameters. The ‘trick’ is to assume the diffuse map is already the result of a scattering calculation made under uniform lighting conditions, then invert that calculation and work backwards to the input parameters. Essentially, the technique “un-lights” the diffuse texture map and feeds the recovered parameters into the SSS calculations.
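One way to sketch the idea: Jensen’s model gives a closed-form total diffuse reflectance as a function of the reduced albedo, and that function is monotonic, so a painted diffuse value can be numerically inverted back to an albedo per channel. The sketch below uses a simple bisection and fixes the internal-reflection constant A to 1 for clarity; the function names are illustrative and this is not ILM’s actual implementation.

```python
import math

def total_diffuse_reflectance(alpha_p, A=1.0):
    """Jensen's total diffuse reflectance as a function of the reduced
    albedo alpha_p (A = 1 ignores internal reflection, for simplicity)."""
    s = math.sqrt(3.0 * (1.0 - alpha_p))
    return 0.5 * alpha_p * (1.0 + math.exp(-4.0 / 3.0 * A * s)) * math.exp(-s)

def invert_albedo(diffuse_value, A=1.0, iters=60):
    """'Un-light' one channel of a painted diffuse map: find the reduced
    albedo whose total diffuse reflectance matches the painted value.
    The reflectance is monotonic in the albedo, so bisection suffices."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if total_diffuse_reflectance(mid, A) < diffuse_value:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Running the inversion over every texel of the diffuse map yields an albedo map that, fed into the scattering calculation under uniform lighting, reproduces the colors the texture artists painted.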
The challenges of Davy Jones and crew
The enormous challenge facing ILM in creating Davy Jones and his crew dawned when the concept art of Aaron McBride and Crash McCreery showed up on the walls. The detail and complexity of the surfaces required a new approach to designing and rendering the characters. To realize the complex geometric detail, Pixologic’s ZBrush was integrated into the pipeline to provide high-resolution sculpting and displacement maps.
All this detail was not realized without cost: Davy Jones came back from the modeling department as a super-dense, highly displaced model, and new ground had to be broken to render the character to the required level of realism. Earlier characters – such as Dobby the house elf and the baby from Lemony Snicket – were successfully ray traced with full occlusion and secondary illumination (color bleeding) passes. Similar techniques were tested on Davy Jones’ super-dense mesh but proved prohibitively expensive for production use. Ray tracing the occlusion, for example, took upwards of 10 hours; the color bleeding failed to finish at all.
With impeccable timing, pioneering work on non-ray-traced approximate occlusion showed up in NVIDIA’s GPU Gems. Seeing the potential, ILM worked closely with Pixar’s RenderMan development team to get the techniques implemented in PRMan for use on Pirates. Tests using the new approximate point-based occlusion cut rendering times for occlusion from ten hours down to two. As a bonus, point-based color bleeding came essentially free once the illumination was baked into the point clouds; previously, color bleeding would have been impossible on such demanding shots. These techniques – combined with the artistry of the modeling, texturing and lighting departments – allowed Davy Jones, and crew, to be realized so successfully on screen.
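The core of point-based occlusion is to treat the baked point cloud as a set of oriented disks (surfels) and accumulate the solid angle each disk subtends at the shading point, instead of firing occlusion rays. A simplified single-level Python sketch follows, loosely in the spirit of the GPU Gems disk-to-disk form factor; the names are hypothetical, and a production implementation clusters distant points hierarchically rather than looping over all of them.

```python
import math

def point_based_occlusion(p, n, surfels):
    """Approximate ambient occlusion at point p with normal n by summing
    the projected solid angles of nearby surfels, each stored as
    (position, normal, area) baked from the micropolygon grids."""
    occlusion = 0.0
    for sp, sn, area in surfels:
        d = [sp[i] - p[i] for i in range(3)]
        r2 = sum(c * c for c in d)
        if r2 < 1e-8:
            continue  # skip the receiver's own surfel
        r = math.sqrt(r2)
        dirn = [c / r for c in d]
        cos_rx = max(0.0, sum(dirn[i] * n[i] for i in range(3)))    # receiver-side cosine
        cos_em = max(0.0, -sum(dirn[i] * sn[i] for i in range(3)))  # emitter faces the receiver
        # Approximate solid angle of a small oriented disk at distance r:
        occlusion += (area * cos_em * cos_rx) / (math.pi * r2 + area)
    return min(1.0, occlusion)
```

The same machinery yields color bleeding: weight each surfel’s baked radiance by the same geometric term and sum colors instead of coverage.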
Another challenge was creating fully CG eyes for Davy Jones. Initially, there was reluctance to create CG eyes for Davy and his crew: as a fallback, the actors’ real eyes were shot with black makeup and tracking markers, just in case the CG eyes failed to deliver a lifelike performance. Despite the trepidation, the Look Development team pushed ahead, treating the eyes like any other rendering challenge.
As a result, the eyes came to life using a standard set of rendering techniques. For most shots the cornea did not use any ray-traced reflections, relying instead on environment lights reflecting HDRI, attenuated by a prepass of reflection occlusion. The refractions in the cornea, however, were ray traced on every shot. The sclera (the white of the eye) used subsurface scattering with the colors pushed towards the red end of the spectrum, giving the compositing department room to dial back the saturation as necessary.
Uber-shading Davy Jones
The starting point for shading Davy Jones and his crew was an ubershader developed by Pat Myers. The ubershader included all the parameters for tuning the final look. This monolithic shader, although complete, was designed to shade only one type of material per primitive. In Pirates, however, the characters were going through the pipeline as subdivision surfaces with a heterogeneous mix of materials spawning from each mesh: algae, coral, barnacles, and the skin itself. Taken independently, the ubershader as written could be tuned to shade each of these material types.
However, rendering all these materials on a single sub-d mesh required modifications. The shader was rewritten to handle four material types: skin, clothing, barnacles, and seashells. The already-large parameter block was duplicated four times, so each material would have the same tuning parameters, and the guts of the shader were wrapped in a loop. The loop iterates over the material types, using image maps to blend between them and turning off the blocks of computation not relevant to a given material.
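The looped structure can be sketched as follows. This is a Python illustration of the control flow only (the real shader was RenderMan shading code); the function names, the per-material parameter dictionaries, and the mask lookups are all hypothetical stand-ins for painted blend maps.

```python
MATERIALS = ("skin", "clothing", "barnacles", "seashells")

def shade_one_material(params, s, t):
    """Placeholder for the full per-material computation (SSS, speculars,
    displacement response); here it just returns a base value."""
    return params["base"]

def shade(material_params, blend_maps, s, t):
    """Looped ubershader sketch: one duplicated parameter block per
    material type, weighted by a painted blend map. Blocks whose
    weight is zero at this shading point are skipped entirely."""
    color = 0.0
    for m in MATERIALS:
        weight = blend_maps[m](s, t)  # look up the painted mask at (s, t)
        if weight <= 0.0:
            continue  # turn off irrelevant computation for this material
        color += weight * shade_one_material(material_params[m], s, t)
    return color
```

Skipping zero-weight blocks matters: on a mesh where most of the surface is skin, the barnacle and seashell branches cost nothing except where their masks are painted in.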
Internally, the material includes a strong component of subsurface scattering computed by a proprietary DSO. In fact, the shading model relied solely on the results of the subsurface scattering for diffuse illumination; there were no Lambertian or other diffuse terms. As mentioned above, tuning subsurface scattering is complicated by the fact that one input parameter – the mean free path (a three-channel value in mm) – is unintuitive to paint into a map. Often it is simplest to keep these parameters constant, but using the albedo map inversion (as described above) guarantees coherent variation across the surface.
On top of the SSS were several layers of Cook-Torrance specular, one of them modulated for wetness (see implementing a custom BRDF). Finally, the already dense meshes were displaced with 32-bit displacement maps from ZBrush. Out the back of the rendering pipeline came roughly ten secondary outputs (AOVs), which were passed to the compositing department.
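A Cook-Torrance lobe combines a microfacet distribution, a geometric shadowing term, and a Fresnel factor. The sketch below shows one possible layering in Python – a broad dry lobe plus a tighter lobe scaled by a wetness mask – using the Beckmann distribution and the Schlick Fresnel approximation. The roughness and f0 values are illustrative guesses, not ILM’s production settings.

```python
import math

def cook_torrance_spec(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0):
    """One Cook-Torrance specular lobe: Beckmann distribution D,
    geometric shadowing/masking G, Schlick Fresnel F."""
    if n_dot_l <= 0.0 or n_dot_v <= 0.0:
        return 0.0
    m2 = roughness * roughness
    c2 = n_dot_h * n_dot_h
    D = math.exp((c2 - 1.0) / (m2 * c2)) / (math.pi * m2 * c2 * c2)
    G = min(1.0,
            2.0 * n_dot_h * n_dot_v / v_dot_h,
            2.0 * n_dot_h * n_dot_l / v_dot_h)
    F = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    return (D * G * F) / (4.0 * n_dot_l * n_dot_v)

def layered_specular(geo, wetness):
    """Broad dry-skin lobe plus a tight wet lobe scaled by a painted
    wetness mask (geo = (n.l, n.v, n.h, v.h); values are assumptions)."""
    return (cook_torrance_spec(*geo, roughness=0.4, f0=0.028)
            + wetness * cook_torrance_spec(*geo, roughness=0.1, f0=0.04))
```

Driving only the tight lobe with the wetness mask lets a single painted map push areas of the surface from matte skin toward a glistening, freshly-surfaced look.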
ILM on set
On set, ILM uses the industry-standard practice – which they helped pioneer – of capturing chrome and grey reference spheres and HDRI images. The minimum set of lighting reference is a grey sphere and a chrome sphere. Beyond this, time and access permitting, high-resolution HDR panoramas are captured using a Spheron camera.
The reference captured on set feeds directly into the lighting pipeline. Light placement and properties are dealt with on a shot-by-shot basis to best match the background plate, guided by the judgment of the lighting TDs and the effects supervisor. Image-based lighting (IBL) is now frequently used for lighting setups and integrates well with the new point-based subsurface scattering. Where IBL is used, the key light and secondary lighting are split: the key light is procedurally painted out of the HDRI and replaced with a CG light, while the remaining HDRI provides the secondary illumination, attenuated by an ambient occlusion pass. Replacing the key light with a CG light increases directability, giving maximum control over light placement and shadow casting.
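The key-light split can be illustrated with a toy version of the procedure: locate the brightest region of the environment map, measure its energy, and remove it, leaving the residual map for secondary illumination. The sketch below treats the HDRI as a simple 2D grid of luminance values with a fixed removal radius; a production tool would paint the light out procedurally and convert the texel position into a light direction and color.

```python
def split_key_light(hdri, radius=2):
    """Split the key light out of a latitude-longitude HDRI (here just a
    2D grid of luminance values). Returns the brightest texel position,
    the energy removed (to drive a stand-in CG key light), and the
    residual map used for secondary illumination."""
    rows, cols = len(hdri), len(hdri[0])
    ky, kx = max(((y, x) for y in range(rows) for x in range(cols)),
                 key=lambda p: hdri[p[0]][p[1]])
    energy = 0.0
    residual = [row[:] for row in hdri]  # copy; leave the capture intact
    for y in range(max(0, ky - radius), min(rows, ky + radius + 1)):
        for x in range(max(0, kx - radius), min(cols, kx + radius + 1)):
            energy += residual[y][x]
            residual[y][x] = 0.0
    return (ky, kx), energy, residual
```

Because the key is now an ordinary CG light, the lighting TD can reposition it or reshape its shadows without disturbing the captured ambient environment.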
The addition of point-based rendering effects to the pipeline allows efficient use of the captured set reference in production. Scenes that were previously unthinkable to render can now be rendered with all the bells and whistles – subsurface scattering, ambient occlusion, and color bleeding – without having to resort to ray tracing. In short, the emergence of point-based rendering techniques has opened up a world of possibilities for advanced production rendering.