And how ILM crafted the final intricate performances of Ollie, Rosy and Zozo.
In Lost Ollie, the live-action Netflix mini-series created by Shannon Tindle and directed by Peter Ramsey, based on William Joyce’s children’s book, a lost toy searches for the boy who lost him. Along the way, the toy, Ollie, makes two other toy friends, Rosy and Zozo.
These three toy characters would be brought to life entirely via CG animation for the show by Industrial Light & Magic. On set, a team of puppeteers used handmade puppet stand-ins for the toys to perform in scenes or act as reference. Earlier, a series of tests incorporating puppetry, different forms of motion capture, puppet augmentation and CG animation were also devised.
In this befores & afters interview, ILM visual effects supervisor Hayden Jones outlines more on the tests, and the final VFX process for the Lost Ollie characters.
b&a: It doesn’t feel like it when you’re watching it, but Lost Ollie, I imagine, was quite a big creature show in a way, wasn’t it?
Hayden Jones: Yes, I think there were something like 1,100 shots in total across the four episodes. That makes it such a logistical exercise, as well as a creative one, just to get that many shots through on what is a TV episodic budget. In creating the three main characters, we knew from the start that if we were to do it, we had to get the performances absolutely right. We had to get the integration and the look of the characters absolutely right because you had to believe they were real. And if that stopped at any point, then we’d break the magic spell of the whole show.
b&a: Did that start with any early discussions about whether any of the characters could be done with on-set puppetry as ‘in-camera’ shots, or just as reference, or some kind of mix of all that with CG animation?
Hayden Jones: Actually, Netflix commissioned a test. It was a 12- or 14-shot test that was shot in the Valley in Los Angeles. Our VFX producer, Stefan Drury, went out to supervise that for a day. It was apparently one of the hottest days of the year as well.
But the test was incredibly useful because we got to try out a number of different options. We tested out whether or not motion capture could help. We motion captured a standard performer, and we also tried to motion capture an oversized puppet, with the thought of keeping a ‘puppet’ feel throughout. Neither of those gave the right aesthetic or were what Shannon was looking for.
We also wanted to experiment with using a simulcam on set. We had thought, at one stage, ‘Maybe we can produce a library of animation and then we can stream that through a simulcam on set to aid bigger character moves through scenes?’ It was very obvious early on that that was not going to work on the time scale that we needed.
Another kind of test was puppet augmentation. This is where we’d get a puppet performance and then we’d remove rods and do either a warping approach to the mouth and the eyes or a semi-head replacement, while keeping the rest of the puppet. This did work incredibly well, but what we rapidly came up against was the shooting size of these puppets, which is very small, and that made it very difficult. Also, the shoot was quite quick. There wasn’t a lot of time for shooting, but quite a lot of footage to get through. So it rapidly became the norm that the puppets were there to time out the scene and to get us really good lighting reference. They were used for interaction with other characters, say human characters, as well.
b&a: How were those on-set puppets built?
Hayden Jones: Jim Henson’s Creature Shop built the puppets and did the on-set puppetry used in the show. They were really beautifully handcrafted. They were so beautifully done, in fact, that we really wanted to make sure that our digital versions were as accurate as possible. So we sent them off for scanning by Gentle Giant in LA, and then we got swatches of all of the fabrics used in each of the characters.
We actually ended up making a fabric scanning setup that we could each use from home, since COVID meant we were all in our own homes. We worked out a way of scanning the fabric at an incredibly high res because we knew these characters at 4K were going to be all over the screen. We were matching the puppets down to even having the very smallest flyaway fibers from the corduroy and cotton. That also then made them so beautiful to light. You always get these beautiful rim lights on them.
b&a: What kind of early explorations did you take in terms of character behavior and movement?
Hayden Jones: Well, firstly, with Ollie, he’s a really interesting character both from his limitations and his complications. The limitations were in the face. He has just two buttons for eyes and a very, very simple mouth. We did a number of tests to work out whether Ollie’s eyes should slightly tilt to show eye direction. And actually, that made Ollie look a little bit too alive and made him a little bit too active in appearance. So we went for static eyes.
However, we still needed to convey a lot of emotion, so we had some eyebrows which we designed with Shannon to feel ‘stop-motiony’. We’d even change the shape of them in between some cuts. We’d either use a fast piece of motion or a cut point to change the shape, so you are not really seeing them active all of the time. We’re trying to hide that as much as possible.
Then with the eyes, we knew we had to get the eyes into different shapes, but with them being rigid buttons, that’s tricky. So we came up with an idea of a cloth covering so that the cloth can just cover the lower and upper lids. That meant we could just slightly maneuver them as well which gave that feeling of eye direction.
The complication was always Ollie’s ears because they’re such an intrinsic part of his character. There were some amazing drawings that Shannon had done where Ollie is hugging his ears at moments where he’s really sad. We were looking at those going, ‘That’s a real challenge,’ because they were in shapes ears are not normally in. What we ended up doing was, rather than going for the normal pipeline where animators animate and then it goes to the creature team to do simulated elements on a character, we instead gave the animators access to simulation tools for the ears. We built a Houdini Engine plugin for Maya, so that the animators could animate and then hit simulation on the ears. This meant as they were iterating their animation, we could also iterate the ears.
b&a: What about Rosy? I love Rosy’s fur, but I can imagine it was a particular challenge.
Hayden Jones: Rosy’s fur was probably the toughest groom on the show. It was really specific. It had to feel natural. It had to feel like a well-loved, well-worn toy. It couldn’t feel too new. There was a lot of time and care put into weathering that. Plus, it had to transition through the show. We had to do variants where Rosy gets dirty or where Rosy gets wet.
There was a shot in episode 3 where Peter really wanted Rosy to hand Ollie the star and he wanted a close up on the hand. We were looking at it saying, ‘Well, how close is close up…?’ But in the end we were able to, at 4K, fill the frame with Rosy’s hand and it really looked beautiful and it showed all the love, care and attention that all of the artists poured into the characters.
b&a: Tell me also about Zozo and what you had to do to make this character possible?
Hayden Jones: The main challenge with Zozo was cloth simulation because he’s wearing a jacket and trousers. In fact, all the simulation was tough, since Ollie also had to be sim’d to give him his stuffy feel to get all those wrinkles and folds. Christian Waite, who was our CG supervisor, set up an auto-simulation pipeline where we tried to automate as much as possible so that we could at least get really, really good first passes of simulation through, whether that was Zozo’s clothing or Ollie or Rosy’s stuffy simulation. That was amazing. We barely touched them afterwards.
b&a: Let’s talk about how things worked on set. I’m guessing because of COVID you couldn’t be there yourself, but how did it work?
Hayden Jones: Robert Habros was our on-set supervisor. The main thing was to make sure we got enough data from the set. Lighting data was crucial and key to enabling us to reintegrate the characters. And because our characters are so mobile and moving through so much of the set, we ended up LiDAR’ing pretty much every single location that the characters were in. It was Industrial Pixel who took that on, along with set and prop scanning.
b&a: Did scale come into play in a big way in terms of VFX challenges?
Hayden Jones: Well, one thing was that the camera was often very low down, and the characters were normally quite close to camera. This meant the focal plane became a much bigger issue. We worked from a very early stage with Kim Miles, who was the DOP, who’d had some experience on shows like Welcome to Marwen where he was shooting in small areas with smaller characters.
One of the tricks we used was that, even though we were finishing at 2.35:1 for Netflix, we actually shot everything full aperture. This gave us a lot of padding, both at the top and bottom of frame, so that there was a margin of error. We could always reframe shots. We could always just slightly tilt the camera up or down 20%, which enabled us to get much, much more fine-tuned framings for when the characters were actually moving.
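As a rough illustration of that framing trick, the reframing margin can be estimated with simple aspect-ratio arithmetic. The 1.33:1 full-aperture ratio below is an assumption for illustration only; the production's actual capture format isn't specified in the interview.

```python
# Rough estimate of the vertical reframing padding gained by shooting
# full aperture but finishing at 2.35:1.
# NOTE: the 4:3 full-aperture ratio is an assumed value, not from the interview.

FULL_APERTURE = 4 / 3  # assumed capture aspect ratio (width / height)
DELIVERY = 2.35        # finishing aspect ratio (width / height)

def vertical_margin(capture_ar: float, delivery_ar: float) -> float:
    """Fraction of the captured frame height left over after the delivery
    crop, i.e. the combined top + bottom reframing padding."""
    # For a fixed frame width, height is inversely proportional to
    # the aspect ratio, so the crop keeps capture_ar / delivery_ar of it.
    return 1.0 - capture_ar / delivery_ar

total = vertical_margin(FULL_APERTURE, DELIVERY)
per_side = total / 2
print(f"total padding: {total:.1%}, per side: {per_side:.1%}")
```

Under that assumed ratio, roughly a fifth of the frame height is spare above and below the 2.35:1 extraction, which lines up with the ability to tilt the reframe up or down by around 20% that Jones describes.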
b&a: Although you didn’t go with that simulcam approach on set, was there time to do any animatics or previs for reference?
Hayden Jones: We did do previs for quite a lot of the action scenes, and that was really helpful, not only to focus on what was needed on set, but also to get the pace and the setting. Sometimes these action scenes were being split between full CG environments and real places, so it was helpful to start getting that marrying of the two. Then, obviously with Peter Ramsey as our director, storyboards were wonderful on this show.
b&a: There’s a few sequences where it’s raining and we see the characters in the rain but also wet afterwards. How did you solve this on set but also for the CG characters?
Hayden Jones: In episode 3, there were sequences where we’d put in rain bars and got a good amount of rain into the background. What we always tried to do was protect the area that the characters would be in just because we needed to have a little bit more control over those areas. Then we always knew that we were going to put the classic extra rain into the foreground so that we could really embed the characters into the scene.
We’d then need to simulate rain and raindrops in all their different forms. You’re getting little splashes, and those splashes, in comparison to our lead character sizes, are not insignificant. What I’m really proud of with the simulation teams that did all of that is that it really never draws your eye. You are still ultimately focused on performances, and the performances shine through. So even though we’ve done all this work to integrate the backgrounds and to make them feel as consistent as possible, actually you’re still focused on the performances, which was the major trick for the show.
b&a: There are a lot of action sequences in this show, but I have to say it never feels like it’s too ‘fantastical’ ever. It feels grounded. I mean, I guess that’s the goal of any visual effects shot, but I just wanted to get your final thoughts on that aspect of the show.
Hayden Jones: I think one of the things we were really careful about in the more action-based sequences is that there was always that level of reality. So even in one of the action sequences in episode 3 (the swan boat sequence) where the swan boat is gliding down the hill, any time you are on the swan boat, those are fully digital shots, but they are intercut with wides, which were shot with a real swan boat on location with our characters added in. There’s always this mixture of styles, which I think stops everybody focusing on one or the other, and it just allows you to blend these styles together and to create a really believable sequence.
It was the same in episode 2, where they are running after the train. There’s shots of a real train, there’s shots that are completely CG, and then inside the train itself, there was a real train carriage.
Zozo’s hideout was one that we always knew was going to have to be CG. And that was an incredible collaboration because that was designed by the production designer Greg Venturi. He designed an initial layout and then sourced a load of real objects, which Industrial Pixel scanned for us. We went into the ILM model archives and tried to find things that were appropriate, as well. And then we came back together and started working on the layout and it just rolled from there. The idea was to light the entire scene from a bowl of Christmas lights.
That’s the beautiful thing about this show. It was a huge collaboration, with production design and the DP and VFX, and of course with Peter and Shannon. The whole approach to the show was hugely inclusive, hugely collaborative. And really, the show could never have been made any other way. We really needed that feeling of inclusion.
I think because Peter and Shannon had come from an animation background rather than a VFX background, they’re used to having a lot more reviews with a lot more artists. And so we were able to do reviews with Shannon and Peter where almost all of the animation team and almost all of the artists were on the call at the same time, getting direct feedback from the directors. It not only gives you this feeling of collaboration, but it also really makes people have a sense of ownership for the show. And when people feel like they have ownership, that’s where the magic happens and you start getting everybody coming up with these amazing ideas that just make the show better and better.