The story behind RenderMan 25’s machine learning Denoiser

How the new denoiser works.

When Pixar Animation Studios released RenderMan 25 in April 2023, it included a completely new Denoiser that takes advantage of machine learning technology.

The Denoiser was developed by Disney Research in collaboration with Industrial Light & Magic, Pixar and Walt Disney Animation Studios. Pixar had already been using this technology internally since Toy Story 4 (released in 2019), and the Denoiser is now available to all RenderMan users.

Here’s an excerpt from the full story in issue #11 of befores & afters magazine.

Wait, what’s a Denoiser?

A denoiser reduces the amount of noise present in a rendered image, so that the image does not need to be rendered to full convergence, saving render time. The machine learning techniques in the RenderMan 25 Denoiser predict what the final image would have looked like had it been rendered without noise.
The result is a cleaner image that requires far fewer samples per pixel: you can render faster without a loss in image quality.
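The idea can be sketched in a few lines of Python. The real Denoiser is a trained neural network; here a simple box filter stands in for it, but the economics are the same: a cheap, low-sample render is noisy, and a denoiser recovers a much cleaner image from it. All names below are illustrative, not RenderMan API.

```python
import random
import statistics

def noisy_render(truth, spp, seed=0):
    """Simulate a Monte Carlo render: per-pixel noise shrinks as 1/sqrt(spp)."""
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, 0.5) / spp ** 0.5 for v in truth]

def box_denoise(img, radius=2):
    """Stand-in denoiser: average each pixel with its neighbours.
    (The actual Denoiser is a neural network trained on film imagery.)"""
    return [statistics.fmean(img[max(0, i - radius):i + radius + 1])
            for i in range(len(img))]

truth = [0.5] * 64                      # a flat 1D "image" as ground truth
low_spp = noisy_render(truth, spp=16)   # cheap, noisy render
cleaned = box_denoise(low_spp)

# Mean absolute error against ground truth: the denoised low-sample
# render lands much closer to the converged result than the raw one.
err = lambda img: statistics.fmean(abs(a - b) for a, b in zip(img, truth))
```

Even this naive filter cuts the error substantially; a learned denoiser does far better while preserving detail.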


Pixar originally adopted denoising technology once it had rolled out its original RIS path tracer several years ago. “Path tracers are inherently noisy,” notes Pixar marketing manager Dylan Sisson. “Every time you iterate on the image or the render iterates, it starts to converge. The last couple iterations to achieve full convergence can take as long as the rest of the render simply because the amount of samples required for each iteration grows exponentially.”
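The cost Sisson describes follows from Monte Carlo statistics: pixel noise falls roughly as one over the square root of the sample count, so each halving of the remaining noise quadruples the samples needed. A quick illustrative calculation (hypothetical helper, not part of RenderMan):

```python
def samples_needed(target_noise, sigma=1.0):
    """Samples required to bring a pixel's standard error below a target,
    given Monte Carlo error ~ sigma / sqrt(spp)."""
    return round((sigma / target_noise) ** 2)

for noise in (0.1, 0.05, 0.025, 0.0125):
    print(f"noise {noise}: {samples_needed(noise)} samples")
# Halving the noise each step costs 100 -> 400 -> 1600 -> 6400 samples:
# the last stretch toward full convergence dominates the render time.
```

This is why cutting off the render early and denoising the result is such a large win.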

For that original denoising tech, Pixar utilized a denoiser that Walt Disney Animation Studios had developed for its Hyperion path tracer on Big Hero 6. Pixar, of course, comes under the same technology umbrella as Walt Disney Animation Studios, and was able to use this previous denoiser first on Finding Dory (2016).

“There,” says Sisson, “instead of sending maybe 2,000 samples or 4,000 average samples per pixel, we were able to render with something like 512 average samples per pixel, a big savings at the time. That cut our render times down quite substantially. That allowed us to render images with fewer samples, which was faster, but also let us render more complexity. Images that we couldn’t actually render to full convergence, now were renderable because we could run the denoiser. It added breadth to the types of complexity that we could handle with refractions and all that kind of good stuff. That was great, and that worked really well for a while.”

The new Denoiser takes shape

Disney Research Zurich (another organisation in the Disney technology umbrella) then began looking at the problem of denoising and how they could apply machine learning to it. They developed a system that trains itself on movies from, say, Industrial Light & Magic and Pixar, as well as across all the Disney studios.

Sisson makes note of the fact that the Denoiser is not just trained to make things look ‘Pixar-like’. “It’s not like we’re going to train it on Pixar movies, and then everything you render is going to look like Toy Story 4. It’s smarter than that. Sometimes, we’ll feed it different case scenarios for different types of bokeh, just so it can learn what these different types of things look like, so we’ll render out thousands of images of an edge case and add that to the training set. The training data is broad and includes imagery from ILM.”

Pixar has used the same training set for all the movies that they’ve rendered since Toy Story 4, occasionally adding and augmenting to it. “We thought we were going to have to retrain it, but it turns out we never had to. It hasn’t needed it,” comments Sisson.

Importantly, in operation the Denoiser is temporally stable. “If you’re rendering an animation,” says Sisson, “it’ll look at the frame that you’re rendering, but it’ll also look two frames ahead and two frames behind to create that temporal coherence, so you don’t have that flickering from frame to frame.”
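That five-frame window can be sketched as follows (hypothetical helper names; the actual Denoiser feeds these frames to a neural network rather than blending them directly):

```python
def temporal_window(frames, i, radius=2):
    """Frames a temporal denoiser sees for frame i: the frame itself plus
    up to `radius` frames on either side, clamped at the sequence ends."""
    lo = max(0, i - radius)
    hi = min(len(frames), i + radius + 1)
    return frames[lo:hi]

shot = [f"frame_{n:04d}" for n in range(10)]
# Mid-sequence, frame 5 is denoised using frames 3 through 7; at the
# start and end of the shot the window clamps to what exists.
```

Using the same neighbourhood for adjacent frames is what keeps the denoised output from flickering.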

The result is a new Denoiser that improved performance dramatically, advises Sisson. “To compare it to our previous denoiser, the one that shipped in RenderMan 24: instead of sampling an average of 512 samples per pixel, we can now render with significantly fewer samples. So, with Lightyear, we averaged around 64 samples per pixel or lower. That was kind of astounding.”

Alisha Hawthorne from Lightyear, rendered at 4 samples per pixel (at left), and the result of the machine learning Denoiser. (Image courtesy Pixar)

“And then,” adds Sisson, “we also have a concept called pixel variance. Pixel variance is a setting that controls adaptive sampling and how aggressively noise is cleaned up. If you raise it to a lower-quality setting, you’re aggressively pruning the rays and accepting a higher amount of noise, which the new Denoiser does a great job of removing.”

“What that results in is the ability to use a lower-quality pixel variance, which reduces the frame render time by quite a bit. Just by changing our ‘normal’ pixel variance settings used with the new Denoiser, we can send fewer samples per pixel. We can get a 10-times speed-up right there, before we even run the Denoiser. So we’re getting the ability to render stuff that we couldn’t render before.”
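The interplay between pixel variance and sample counts can be sketched as a simple adaptive-sampling loop. This is a simplified illustrative model, not RenderMan's actual sampler: the renderer keeps adding samples to a pixel until the estimated error of its running mean drops below the pixel-variance threshold, so a looser (higher) threshold terminates far earlier.

```python
import random
import statistics

def adaptive_spp(pixel_variance, sigma=0.5, max_spp=4096, seed=0):
    """Samples taken before a pixel's standard error falls below the
    pixel-variance threshold (toy model of adaptive sampling)."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < max_spp:
        samples.append(rng.gauss(0.5, sigma))
        n = len(samples)
        # Standard error of the mean estimates the remaining pixel noise.
        if n >= 16 and statistics.stdev(samples) / n ** 0.5 < pixel_variance:
            break
    return len(samples)

# A looser pixel variance accepts more noise and stops far sooner;
# the Denoiser then removes that extra noise after the fact.
fast = adaptive_spp(pixel_variance=0.05)
slow = adaptive_spp(pixel_variance=0.005)
```

In this toy model the loose threshold stops around a hundred samples while the tight one exhausts the sample budget, which mirrors the speed-up Sisson describes.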

Sisson admits that the previous denoiser “fell down a bit” with hair or fine details, essentially small point particles or anything one pixel or subpixel in size. “The new Denoiser, however, handles fur and hair just fine. The better quality allows us to sample much less and still get a better result out of it. We’ve run some tests where we’re looking at output from different denoisers. Denoisers tend to blur things up and mush things like hair. With the new machine learning Denoiser, that’s not an issue anymore, and it’s temporally stable even for animation.”

Read the full story in issue #11.

