‘Stampede. In the gorge. Simba’s down there!’
In the 1990s, traditional 2D animation was in a major state of transition. Digital technologies were allowing animation studios, including Walt Disney Feature Animation, to do things in very different ways. Partly, that came about via Disney’s adoption of the Computer Animation Production System (CAPS), developed in conjunction with Pixar to aid in digital ink and paint, compositing, and production and library management. By the time The Lion King was released in 1994, CAPS had already been utilized on a number of films, including The Rescuers Down Under, Beauty and the Beast, and Aladdin.
But it wasn’t just CAPS that was changing the way Disney produced animation; computer graphics was also having a big influence. CG had played a role in Beauty and the Beast’s ballroom scene, and in the lion’s head entrance to the Cave of Wonders and aspects of the magic carpet ride in Aladdin. Then, in The Lion King, CG helped bring the dramatic and dynamic wildebeest sequence to life. There were so many animals in the scene that animating all of them by hand would have taken an enormous amount of time. But Disney developed a way to use CG for the wildebeest ‘crowd’ while also maintaining a cel-shaded look.
With Jon Favreau’s The Lion King – a completely computer-generated and photoreal film – about to be released, I went back in time with Scott Johnston, credited as ‘artistic supervisor: computer graphics imagery’ at Walt Disney Feature Animation on the 1994 movie, to find out how Disney was doing CG back then, and what was involved in making the wildebeest stampede (interestingly, the 2D animated The Lion King made it into the VFX Oscar bake-off for 1994, competing with such films as The Mask, True Lies and eventual winner Forrest Gump).
b&a: At the time, what were the big challenges in taking two-dimensional drawings of the wildebeest and turning them into 3D characters (while also ‘preserving’ the 2D look with things like inklines, etc.)?
Scott Johnston: The challenge in taking a two-dimensional drawing of a wildebeest and turning it into 3D wasn’t just in the design of the CG, but in creating a system where the wildebeest looked correct on traditionally drawn and painted backgrounds. The storyboards by Thom Enriquez were amazingly cinematic and created the visceral feel we needed for the sequence. We kept referring back to them to stay focused.
For the sequence to work properly, many of the layouts were large and complex, with unusual perspectives. We also needed to be mindful of the camera moves, so that the animals did not strobe or move counter-intuitively.
We had used ‘hidden line’ algorithms and cel-shading techniques at Disney prior to The Lion King, to produce CG with a drawn look-and-feel, but needed to adapt our tools to allow us to create subtle variations in the rendering to make each animal unique. Atmospheric effects between layers of wildebeest helped increase perceived depth and helped marry the characters to the environment.
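The cel-shading idea Johnston describes — flat bands of tone with small per-character variation — can be sketched in a few lines. This is a generic illustration, not Disney’s actual renderer; the band thresholds, tones, and the `offset` parameter are all hypothetical:

```python
# Minimal cel-shading sketch (illustrative only, not Disney's renderer):
# a continuous Lambertian diffuse term is quantized into flat bands,
# and a small per-animal 'offset' shifts the band thresholds so each
# animal reads as slightly unique.

def cel_shade(n_dot_l, bands=(0.25, 0.6), tones=(0.3, 0.65, 1.0), offset=0.0):
    """Map a diffuse term in [0, 1] to one of a few flat tones.

    bands  -- thresholds between shadow/mid/light regions
    tones  -- the flat values returned for each region
    offset -- per-character shift of the thresholds, for variation
    """
    for threshold, tone in zip(bands, tones):
        if n_dot_l < threshold + offset:
            return tone
    return tones[-1]

# The same lighting value lands in a different band for a varied animal:
# cel_shade(0.65) -> 1.0, but cel_shade(0.65, offset=0.1) -> 0.65
```

The point of the per-character offset is that two animals lit identically still pick up slightly different band boundaries, which breaks up the ‘cloned’ look across a large herd.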
b&a: What were the specific tools Disney was using at the time to do 3D modeling, textures, animation and rendering?
Scott Johnston: The modeling, rigging and animation of the wildebeest was done in Softimage. The rendering was done with RenderMan, and the compositing in CAPS.
b&a: Since there were so many wildebeest, what simulation tools allowed you to carry out crowd animation, and also ensure they didn’t collide with each other? How could you still art direct the movement?
Scott Johnston: There were no existing tools for simulation. There had been a few incidental ‘flocks’ in other features prior to the release of The Lion King, but nothing where an artist-controlled procedural ‘crowd simulation’ drove a substantial story point. Kiran Joshi, MJ Turner and I wrote our own software, including stitching libraries of motion to hook into one another, code to create and simulate the animals under the direction of ‘key’ animals whose trajectory was designed by the animators, physics-based modifications that were applied to each animal individually to help them look unique (leaning into curves, for example), and tools to render them with suitable variation.
Ruben Aquino provided reference 2D animation that was used as inspiration by Greg Griffith and Linda Bel. The paths for lead animals were designed by hand, and the simulated animals played ‘follow-the-leader.’ With multiple lead animals, the animators had a lot of control over the final look without having to over-animate.
Using the simulation also allowed us to iterate and make changes – something that would not have been possible had they been drawn by hand. The software also allowed for ‘triggers’ to change behavior, such as when they needed to transition from running to leaping over the edge of the hill. As the animals crossed a demarcation line drawn on the simulation field, they could transition into a different behavior. After the simulation had been computed, we could remove animals that behaved erratically. We had enough so they weren’t missed.
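The ‘follow-the-leader’ scheme with trigger lines that Johnston describes can be sketched roughly as below. This is a hypothetical reconstruction of the idea, not the studio’s actual software; the `Animal` class, the straight-line leader path, and the `cliff_y` trigger are all invented for illustration:

```python
import math
from dataclasses import dataclass

# Sketch of the 'follow-the-leader' crowd idea: hand-designed leader
# paths drive simulated followers, and a demarcation line on the
# simulation field triggers a behavior change (running -> leaping).
# All names and parameters here are illustrative assumptions.

@dataclass
class Animal:
    x: float
    y: float
    behavior: str = "run"
    speed: float = 1.0

def step(followers, leader_path, t, cliff_y, dt=0.1):
    """Advance each follower one step toward the leader's current
    position, switching behavior when it crosses the trigger line."""
    lx, ly = leader_path(t)
    for a in followers:
        # Crude steering rule: head toward the leader's position.
        dx, dy = lx - a.x, ly - a.y
        dist = math.hypot(dx, dy) or 1.0
        a.x += a.speed * dt * dx / dist
        a.y += a.speed * dt * dy / dist
        # Trigger: crossing the demarcation line changes behavior.
        if a.behavior == "run" and a.y >= cliff_y:
            a.behavior = "leap"

# Usage: a diagonal leader path and a small column of followers.
leader = lambda t: (t, t)
herd = [Animal(0.0, -i * 0.5) for i in range(5)]
for frame in range(200):
    step(herd, leader, frame * 0.1, cliff_y=5.0)
```

With multiple leader paths, each driving its own group of followers, an animator controls the broad shape of the stampede while the simulation fills in the individuals, which matches the iteration benefit Johnston describes: changing a leader’s path re-poses the whole group.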
b&a: I think it was mainly the wildebeest themselves that were CG but was there a push or a test or an attempt at all to make environments CG also?
Scott Johnston: Nope! For the ballroom in Beauty and the Beast, there was a specific drive to create an environment that enhanced the realism of the film for an emotional scene; we very consciously moved into and out of that environment and moment.
For The Lion King, the scene needed to play out in the world of the rest of the film. We needed to seamlessly add the stampede to the vision of Africa that Andy Gaskill designed. That said, we did use CG as reference for designing some other environments and aspects of the film. Some complex camera moves were blocked out – what would now be called ‘previs’.
b&a: How was the CG composited back then with any hand drawn/painted backgrounds and other animation?
Scott Johnston: Disney had its ink-and-paint and compositing system, CAPS, which was used to assemble the films. The introduction of digital compositing allowed the CG team to deliver renders into CAPS in the same way that the 2D artwork was delivered from the digital camera and ink-and-paint departments. When integration was needed between 2D and CG elements, we would use a pen-plotter to accurately print line-art versions of the CG for use by the 2D animators. This system had existed prior to digital ink-and-paint, as early as The Black Cauldron, to allow CG elements to be incorporated in non-digital 2D animation.
b&a: What are some of the scenes or shots in the rest of The Lion King that maybe people don’t realize were made possible with the help of CGI?
Scott Johnston: One nice example of a previs shot is the camera moving around Mufasa as he sits on the top of Pride Rock with Young Simba discussing their territory and legacy as kings. Having finished his work on Aladdin, Steve Goldberg jumped in and gave us reference for the 2D animators.
b&a: Can you give a sense of what it was like at Disney around that time, where CG was being used to help craft certain scenes in major 2D animated films? How did people feel about that? What were the possibilities? What were the limitations?
Scott Johnston: It was an exciting time to be at Disney. The storytelling, visual styling, and animation were all going through reinvention. Incorporating CG was another piece of that creative explosion. Some artists were wary of the tools, but within Feature Animation we were trying to augment the style of storytelling that we were doing. Disney was working with Pixar on Toy Story at the same time, which was the outlet for creating fully CG features. The speed of machines was vastly inferior to what we have now, so efficiency and optimization were an important part of being able to finish anything, and needed to be factored into what we could accomplish. However, those constraints added to the excitement of finding solutions that let us push boundaries.