Framestore’s Tim Webber on the lessons learned from his live-action/virtual production hybrid short, ‘FLITE’.
Having pioneered several major virtual production advancements with the 2013 film Gravity, including the use of LED panels for lighting and an extensive previs/techvis approach to filming, Tim Webber (now Chief Creative Officer at Framestore) has returned to the virtual production fold with his short film, FLITE.
The short stars Alba Baptista as Stevie, who escapes the clutches of Johnny (Gethin Anthony) via some nifty hoverboarding over a future-London-scape. These moments play out against the film's mysterious backdrop of 'memory investigations'.
Webber and his team at Framestore relied on a volume/LED wall and motion capture shoot to film FLITE, with Unreal Engine, integrated with the visual effects studio's own toolsets and workflows, FUSE (Framestore Unreal Shot Engine) and FPS (Framestore Pre-production Services), at the heart of what they hope will be a new pipeline for interactive virtual filmmaking.
In this excerpt from issue #12 of befores & afters magazine, he tells us what it was like to direct the short with this new approach, especially the idea of everything being live in Unreal Engine the whole time, as well as revealing what many have been asking in relation to the film: is the main character live-action or CG?
b&a: How did FLITE come about?
Tim Webber: I’d been wanting to make my own film for a long time. I wanted to make something in this sort of way, using virtual production at the heart of a whole project, taking a lot of what we learned from Gravity and taking it further. Epic had their MegaGrants, and we wanted to develop some tools to improve on the process of making a film inside Unreal Engine. All of these things came together, really.
b&a: One of the challenges with Unreal Engine is that it’s actually a lot of different things, isn’t it? We’ve seen it used very well with LED wall filmmaking and doing real-time rendering, but compared to say your classic, ‘Grab a camera, film something, add some CG imagery, do some compositing,’ versus use real-time tools to do these things, I think it’s still a different process. Was this film and the development of the FUSE tools about coming up with a different pipeline to work in real-time?
Tim Webber: I think that’s true. I think it’s hard to make stuff with Unreal at scale. It’s fine if you’re making it amongst a small team of people working closely together, but if you’re making it at scale, it needs a pipeline applied to it, and that was one of the things we wanted to do.
Not everyone has got the time to retrain in a new tool, and it can take a while for VFX artists to get their heads around because its base isn’t in filmmaking, it’s in games. It is incredibly powerful and can do a huge range of things, and that makes it complicated. So it needed a pipeline, it needed film-orientated tools added to it, and it needed tools that enabled people who weren’t working in Unreal to work with Unreal. That was part of what we did with FUSE: people could still get a lot of the benefits, still be part of an Unreal pipeline, without having to be terribly familiar with Unreal themselves.
We also needed to add some tools to allow for this different method of filmmaking. Not just because it’s Unreal, but because Unreal gives us the ability to switch around the creative flow in ways that are advantageous. We needed to make it work with that as well.
b&a: The final film is amazing in terms of the room that she goes into and then where she can fly around. I think the imagery is spectacular, but I still don’t really know, Tim, whether she and the characters are fully digital or whether they’re live-action. I really want to ask you that first.
Tim Webber: Yes, I mean, part of it is, I’m reluctant to tell people! I have been generally keeping it a little obfuscated, a little behind a smoke screen, just because I think if people are looking for joins, knowing that this bit is that and that bit is this, and then they’re looking for the join and they’re not watching the movie. I think part of what is successful is the way it all just melds together, and you don’t know what is what.
We’ve had a few screenings now and a lot of the questions were people asking, ‘Which bits have you filmed? Which bits haven’t you?’ And I quite like that. It’s like, they really can’t work it out. I mean largely, everything is CG, apart from the faces. I’d say that is the principal technique. But we do do different things at different moments.
b&a: So does that mean to capture the performances and to stage it, it was a motion capture performance type approach or was it something else?
Tim Webber: Well, it’s a combination. We motion captured and we filmed. And a key part of what we did was to do all of that at the same time, which was a little complicated to set up, but not too bad. You just have to make sure it’s all working together to get everything done at the same time.
One of the things that’s always important to me when you capture a performance is that it’s a single performance. Quite often, in big budget films even, they’ll capture a body performance, then they’ll go away and capture a facial performance, and then they’ll capture a vocal performance separately. That can make it hard work to bring these separate performances together, and people have to manipulate and change, and it still doesn’t quite feel like a natural in-the-moment performance. It’s very hard to make it feel totally in the moment. It’s hard to bring together those kinds of Frankenstein’d performances.
b&a: So you do have characters and you do have virtual sets. Clearly the city and the expanse of the city is completely virtual. However, I’m not 100% sure whether the interior of that loft is a partial set or a full physical set, but tell me about the methodology.
Tim Webber: It is 100% virtual. If we’d had a bigger budget, we might have built bits of it, I suppose, but there are big advantages to not doing that. It would be better for the actors, but we found other ways of making it a good experience for them. One of the big things we did was previs it extensively. And because of the FUSE pipeline, because everything is staying in Unreal throughout, you can put more effort into the previs because you’re not going to throw it away: it’s not wasted effort. You can get a version of the previs that is much more able to be judged as a proper film.
Of course, initially the big thing you don’t have is the proper performances of the actors, but we were able to use temporary performances by way of placeholders. It was very useful. You could make certain judgments about the film, how it’s going to work at that very early stage. For something like the chase sequence, that’s all critical. You have to have it incredibly well-planned. A long continuous chase sequence on Tower Bridge has to be immaculately planned and worked out.
At the same time, the way it works when they’re in the apartment is much freer. We did previs it. We could pre-light it. We could make judgments. I could learn how to get the camera to work in certain ways, so it was a very informative process. One of the big things is that you’re making all of your decisions with much more context than you’d normally have. So we’re lighting the scene with a character here and the background there, and we are seeing it through this angle and that angle and the other angle. We’re making decisions about it when we can see everything that we need to see.
What often happens on a bluescreen shoot is that you’re lighting the character and you’re trying to match it to a background that you haven’t totally decided on yet because you haven’t invested the time in the previs, and you can’t see the character with the background as easily as you’d like to be able to. I mean, you can if you’ve got a background, but it’s not as easy as it should be. Whereas we could do all that in the context of the scene, of the sequence even. We could actually light it so that when we’re on stage shooting the real performance, we’ve got all that information coming in.
In the chase sequence, that has to be pinned down and we have to stick to what we’ve planned. But in the apartment sequence, we have total freedom to change, and if the actor wants to do something different–as they should do, that’s part of performing–we can give them the freedom to move around. Because of the long continuous shots, we didn’t have as much freedom to move around as we should have had, but that’s for completely different reasons. It’s not because of the technology, it’s because of the long shots. You could do a close-up you weren’t thinking of. The actor could walk over there and you can see how it’s going to work. You might have to change your lighting, but you’ve still got much more context than you normally would have.
That’s why when we did bring together the live-action parts of their performance with the CG, everything just slotted together and it worked incredibly well. We could get rough material out much quicker than normal, so we could edit with good material. It worked very well from that point of view. There were a lot of advantages.
Read the FULL interview in the magazine, along with additional coverage of FLITE and other virtual production shorts.