This indie film with more than 500 VFX shots took more than 5 years to make

January 20, 2021

How warehouse spaces were transformed into the lush world of ‘The Wanting Mare’.

It’s fair to say that director Nicholas Ashe Bateman found himself a little in over his head when he jumped into making The Wanting Mare. The film, releasing February 5th, is a fantasy tale set in the world of Anmaere, but was mostly filmed in a warehouse in Paterson, New Jersey.

Shooting at that location, but realizing a more expansive setting, required extensive VFX intervention, to the tune of 70 per cent of the film – a significant undertaking for an independent production.

Ultimately, after five years, the film is about to be released, and now Bateman reveals to befores & afters just what went into the VFX work, including a detailed account of his matte painting, After Effects and Blender workflow. He also discusses the warehouse shoot and offers some tips for indie filmmakers who might be looking to make their own project.

b&a: It may sound like a silly question, but why did this film become such a large VFX film?

Nicholas Ashe Bateman: Not a silly question at all! I think we were just as surprised by it as anyone else. It is hard now to try and remember how much we really thought we were getting into, but it certainly doesn’t seem like we were anticipating what it became.

We shot the film in sections, and those sections felt as though they were on wildly different scales. Interestingly, the earliest stuff we shot (the first 35 minutes of the film) has the most practical work in it. We piled the small group of us into a van and went on a road trip along the east coast in the United States, and finished it up by heading to Nova Scotia, to an Airbnb that would become a crucial set.

We knew that there would eventually be a lot of visual effects, and I was personally pretty terrified about the immediate foreground challenges for a lot of those scenes. In many ways, we went up there to just get some shots where we were very clearly outside, and very clearly walking in real grass. My initial comfort zone was somewhere around keying a green screen, creating a matte painting, and 2d tracking in After Effects. That’s basically where we started.

From there, we were able to do more VFX work as our ability to composite some of this stuff advanced in the interim. Additionally, we were able to successfully raise more money, which gave us the highly coveted gift of ‘space.’ We rented a warehouse for two months, and used that as almost a community theater operation – building small sets and rearranging their pieces.

I believe it looks somewhat insane in the BTS footage – a handful of us making this whole movie in a big room – but at the time it felt surreal and incredible to have that much space that was just devoted to the film. We’d been shooting in literal closets and kitchens up until that point.

Once we started on the final 2/3rds of the film, we were basically giving in to the idea of not fighting the VFX ratio at all. This became the most important thing. We would all love to be able to do more things practically, but that was obviously impossible. I think it’s impossible for everyone except maybe a few filmmakers in the world. So, once we admitted that the choice was either holding to the idea of ‘practical’ or not attempting the ideas that were in our heads, the answer was easy.

I often think the praise of ‘practical’ over ‘vfx’ is a widely misused and reductive comparison; there is a negative connotation with ‘vfx’ that affects lower budget filmmakers in a way that doesn’t consider ‘what is the film you’d like to make?’ That’s the only question I’m interested in. I believe this myth is often perpetuated as a marketing tool by larger films (that still engage massive, insanely talented VFX departments with little credit) to appeal to people wanting to see ‘real films’, or to tap into a sense of nostalgia for how films were once made.

The hope is to be able to make these strange wonderful things that are in your head, and if you’re able to remove some of the embarrassment of starting on your long-awaited project with a 6×7’ green screen and a tiny consumer DSLR in a kitchen – so much more is achievable.

All that being said, the answer is also: no, we had no idea what we were getting into. It was horrifying.

b&a: Many of the VFX scenes you filmed seem to be set outdoors, but were filmed in a warehouse. Can you talk about what a typical bluescreen setup involved? What challenges do you feel you faced in lighting these outdoor scenes while filming indoors?

Nicholas Ashe Bateman: The indoor/outdoor work really evolved throughout the film. Being guilty of much of what I was saying above, I was hoping to do as much of the outside stuff ‘outside.’ Additionally, this indoor screen work was pretty much impossible to begin with, given how little space we had to work with. One of the key scenes in the first section of the film takes place on the tip of a massive rock jutting out of the surf in the middle of the night. We filmed that in a small storage closet, and didn’t really have much room to light the blue screen separately from the faces.

The director of photography/producer David A. Ross and gaffer/producer Z. Scott Schaefer deserve (obviously) all of the credit for this. David really preferred shooting in a controlled environment. Lighting things with our small crew and gear in real night conditions on the first section was so tough that he really wanted to be able to work more deliberately; he had bigger and more exciting ideas than whatever was possible with our five lights outside. I had to get rid of my own negative connotation in order to enable him to do this, and once I did I saw how correct he was. Additionally, once we were in that large warehouse space, we were able to bounce light a lot more easily, which removed some of the hotspot qualities of our earlier challenges. All of it started to flow a bit better.

The main thing that David did so masterfully – given that he has been part of the entire process – is that he made a layer of creative choices on top of his natural photographic wishes. So, in some of the scenes in the driveway, David is deciding where he would ideally like to expose the camera if he had an endless grip package, where the lights would be, and then creating some additional sense of ‘what would we be able to do’ if these lights were outside. I think he’d start to take away imaginary lights in his head and work his way back. We’re all just playing with this fun idea of ‘how can this look?’ If we shot a scene completely as we wished we’d be concerned, so sometimes we’d pick an imaginary constraint to keep it reined in.

I don’t know of too many other green screen shoots that are planning specifically for parts of the image to be blown out, or even for part of the image to look as though it’s been slightly underexposed and pushed. We’re really messing with the footage, and trying to get to a texture that is both authentic and expressive of the film. I think David and I are sort of obsessive about different slices of time and exposure in movies: “Oh, the few minutes of late blue light in There Will Be Blood – where was the camera exposed? Which sections are too hot? What’s the black level? How much detail in his clothing here? How hot is the eyelight?” This is the stuff that really moves us to express these slices, mixed into the endless decisions of your own that you have to make in filming it.

Additionally, we got really good at doing all of that while shooting a stop or two above the desired point, partly to accommodate the brightness of the room and the screens, but also to get the most out of our little Sony A7sII. This took years of trial and error, just compositing and reshooting and compositing again – trying to find that right balance of ‘loss of detail.’ The ‘offset’ setting in compositing exposure was certainly our friend.
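
To make the ‘offset’ point concrete: an exposure control measured in stops multiplies linear pixel values, so blacks stay put, while an offset adds a constant to every pixel and shifts the black level itself – useful for sinking footage that was deliberately shot a stop or two hot. A minimal numpy sketch of the distinction (illustrative only, not the film’s actual pipeline):

```python
import numpy as np

# Linear-light pixel values (0.0 = black, 1.0 = diffuse white)
plate = np.array([0.02, 0.18, 0.60, 0.95])

def exposure(img, stops):
    """Exposure in stops: multiplicative, leaves black at black."""
    return img * (2.0 ** stops)

def offset(img, amount):
    """Offset: additive, shifts the whole signal, black level included."""
    return img + amount

# Footage shot a stop over, brought back down in the composite...
print(exposure(plate, -1.0))                 # halves every value
# ...then a small negative offset to sink the black level itself
print(offset(exposure(plate, -1.0), -0.01))
```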

b&a: Can you break down a typical VFX shot (say, an actor filmed against blue and an environment placed behind them)? What was involved in crafting a matte painted environment? How did you approach compositing?

Nicholas Ashe Bateman: The majority of the shots follow a similar visual rule as the first season of The Mandalorian – trying to avoid feet at all costs! We have a lot of medium-wide shots that cut right around the knee, but without our wider aspect ratio it would maybe feel a little too small. So, we widened the frame a bit; this let us use more of the actual digital negative while also giving a strange mix of very close and very wide. I’m also really moved by a more golden-age Hollywood look, and I am just so in love with matte paintings – so I just naturally want to keep the horizon as low as possible, and move it into this Victor Fleming silhouette.

From there, the hope was to keep things slightly buzzed out of focus whenever possible. The film was shot on anamorphic lenses, so we were hoping for that additional veneer of softness to play around with. It was a rough trade though, as the distortion proved to be near impossible to track at times.

Additionally, we really had two types of camera moves within our wheelhouse – handheld shots with a stationary operator, or push/pull shots on a Movi Pro (in the final stages). This basically kept us to either simple 2d tracks or relatively simple 2.5d solves. The few shots in the movie that move beyond this took us literally years.

Otherwise, if we stuck to those two formats, we were able to keep all our assets as flat cards. The overwhelming majority of the movie is made up of matte paintings done in Photoshop, imported into After Effects, and then arranged in flat layers at different angles. Once this became the method, we figured out tons of ways to cheat it, cutting up our layers in Photoshop to make a very crude workable projection. Some of those layers (like ground layers) we could precompose, and then add stock footage of grass blowing, or birds, or blinking lights, and have working animated matte paintings.

All we had to do was copy and paste all the layers parented to the null, and we had what felt like a 3d asset in After Effects. In some instances, we could parent this 3d null to a 2d track null – without ever having to 3d solve the scene – and the whole arrangement would move and give us the appearance of full 3d parallax. There are an uncomfortable number of shots in the movie that we did this way, and then just tweaked the rotation of the null by eye to get it to look right.
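
The parallax cheat rests on simple pinhole-camera math: a flat card’s apparent screen shift is inversely proportional to its depth, so a single move on the parent null slides near cards a lot while distant mattes barely crawl. A toy Python sketch of the idea (the names and numbers are illustrative, not After Effects scripting):

```python
# Toy model of 2.5d parallax from flat cards. A pinhole camera with
# focal length f projects a point at depth z with scale f / z, so a
# lateral camera (or parent null) move dx appears on screen as
# dx * f / z: near cards slide a lot, far mattes barely move.

FOCAL = 50.0  # arbitrary focal length in scene units

def screen_shift(camera_dx, card_depth, focal=FOCAL):
    """Apparent horizontal shift of a flat card for a camera move dx."""
    return camera_dx * focal / card_depth

camera_move = 10.0  # in the real setup, this comes from the 2d track
for name, depth in [("grass card", 100.0),
                    ("mid buildings", 400.0),
                    ("horizon matte", 5000.0)]:
    print(f"{name:>14}: {screen_shift(camera_move, depth):6.2f} units")
```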

I really started in Photoshop, and am still most comfortable there (additionally as a concept designer), and the combination of matte elements was the most fun. We clearly had the photobashing approach for some mattes, as well as some plates we shot in Nova Scotia, but then we were also able to build rougher 3d models for some of the key sets we saw from multiple angles. From there, I would set 3-4 keyframes in Blender on the model to correspond to our needed angles, render out views with the lighting on it that David and I liked, and then matte paint/photobash over it. That gave us some actual reference for these things, and given that this was before Eevee or Unreal were so amazing, or before Megascans assets, this was pretty much all we could do. One frame would take hours to render. This whole process would be totally different if we started the movie now, mainly because it would be very hard to get me to leave Blender.
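
For anyone wanting to replicate that step today, here is a minimal Blender Python sketch of the idea: keyframe the camera at the handful of angles you need, then render one still per angle as a base to matte paint over. The object positions and file paths are placeholders, not values from the production:

```python
import bpy

scene = bpy.context.scene
cam = scene.camera  # assumes the scene already has an active camera

# Hypothetical camera positions matching the needed angles of the set
angles = {1: (0.0, -12.0, 2.0),
          2: (6.0, -9.0, 2.5),
          3: (-5.0, -10.0, 3.0)}

for frame, loc in angles.items():
    cam.location = loc
    cam.keyframe_insert(data_path="location", frame=frame)

# Render one still per keyframed angle as a matte-painting base
for frame in angles:
    scene.frame_set(frame)
    scene.render.filepath = f"//matte_base_{frame:02d}.png"  # next to the .blend
    bpy.ops.render.render(write_still=True)
```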

I love compositing in After Effects because I’m most comfortable painting in Photoshop. So, whereas a traditional method might be a 3d model, 3d camera solve, volumetric lighting and fog, we’d take these plain flat layers in Photoshop, and I’d use solid colors at different opacities, masked and at different blending modes, and create this feeling of environment right in After Effects. It made the projects absurdly complex, but it worked!
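
The blending-mode math behind that atmosphere trick is simple enough to show directly: multiply darkens, screen lightens and lifts blacks, and layer opacity mixes the result back over the plate. A small numpy sketch of the general technique (not the film’s project files):

```python
import numpy as np

def multiply(base, solid):
    """Darkens: handy for faking shadow or atmospheric density."""
    return base * solid

def screen(base, solid):
    """Lightens: handy for haze and lifted, milky blacks."""
    return 1.0 - (1.0 - base) * (1.0 - solid)

def over(base, blended, opacity):
    """Mix a blended result back over the base at a given opacity."""
    return (1.0 - opacity) * base + opacity * blended

plate = np.array([0.05, 0.30, 0.80])
fog = np.array([0.60, 0.60, 0.65])   # a flat, bluish-grey 'solid'

# A 25%-opacity screen layer reads as a thin wash of atmosphere
print(over(plate, screen(plate, fog), 0.25))
```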

b&a: You mention some shots were achieved in Blender – how did this approach differ from your normal AE workflow? What do you think about adopting any real-time shooting or game engine rendered environments in the future?

Nicholas Ashe Bateman: I am deeply obsessed with Blender. I love it. I did over 75 shots on The Green Knight this past summer and fall, a huge portion of which are matte paintings/digital sets, and I used Blender for the large majority of it. There are a few shots in the film that I did entirely in Eevee – which is just mind-blowing to me considering four years ago I had trouble rendering out a single frame guide for a matte painting in Cycles (even though modern Cycles and modern BSDF shaders are my preferred method now) – being able to see something as you work on it is so huge. I think this says way more about the technology than it does about me!

The past year on the film has been somewhat painful because I just had to fight the daily desire to redo the whole film again with how far we’d come. There are some shots in The Wanting Mare that are done using Element3D in After Effects (which I love, and was my first real 3d dip), but I don’t even think I have it installed anymore, so it felt strange delivering a movie with some elements using it. However, at the time, the shots would have otherwise been impossible. Thank you, Andrew Kramer!

The main shot in Blender is the opening shot of the film, which was a long, long process to figure out. Years ago, I figured that I’d need to make a full master 3d model of the city (in Blender), which became about as impossible and maddening as you’d expect. It also became a file that still won’t open. However, as time went on, I developed a flat, almost orthographic matte painting of the city (Whithren) from a top-down view.

After all of the insane attempts at making this shot, the one that worked was just projecting that texture over some basic geometry in Blender, turning that matte into a displacement map to get some additional minuscule height difference, and then removing any texture effects at all – just the straight matte plugged into the material output. From there, I just painted about a dozen different cloud arrangements in Photoshop, imported those layers as alpha layers, and duplicated them countless times. I did have some slight reflective qualities on these cloud textures reacting to an HDRI, just to get some slight color difference depending on their angle.
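
As a rough Blender Python sketch of that kind of setup (every name, value and file path here is hypothetical): subdivide a plane, drive a Displace modifier with the matte, and wire the same image straight into an Emission shader so no scene lighting alters the painted values:

```python
import bpy

# Hypothetical matte painting file; '//' means relative to the .blend
img = bpy.data.images.load("//whithren_matte.png")

# Ground plane, subdivided enough for the displacement to read
bpy.ops.mesh.primitive_plane_add(size=100.0)
city = bpy.context.active_object
subsurf = city.modifiers.new("Subdiv", type='SUBSURF')
subsurf.subdivision_type = 'SIMPLE'
subsurf.levels = subsurf.render_levels = 6

# Drive a Displace modifier with the matte for minuscule height changes
tex = bpy.data.textures.new("CityHeight", type='IMAGE')
tex.image = img
disp = city.modifiers.new("Displace", type='DISPLACE')
disp.texture = tex
disp.strength = 0.5

# Shadeless material: the matte goes straight to an Emission shader,
# so scene lighting never alters the painted values
mat = bpy.data.materials.new("CityMatte")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()
tex_node = nodes.new('ShaderNodeTexImage')
tex_node.image = img
emit = nodes.new('ShaderNodeEmission')
out = nodes.new('ShaderNodeOutputMaterial')
links.new(tex_node.outputs['Color'], emit.inputs['Color'])
links.new(emit.outputs['Emission'], out.inputs['Surface'])
city.data.materials.append(mat)
```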

Once the camera path was set, I was able to arrange those cloud layers so that they were always viewed from the correct angle by the camera, even keyframing some movement on them to give the appearance that the clouds are moving as the camera flies through them.
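
Bateman describes arranging and keyframing those cards by hand; one way to automate the ‘always facing the camera’ part in Blender is a Track To constraint per card. A small sketch, assuming cloud cards named cloud_01, cloud_02 and so on already exist in the scene:

```python
import bpy

cam = bpy.context.scene.camera

# Assumes the cloud alpha cards are named cloud_01, cloud_02, ...
for obj in bpy.data.objects:
    if obj.name.startswith("cloud_"):
        con = obj.constraints.new(type='TRACK_TO')
        con.target = cam
        con.track_axis = 'TRACK_Z'  # the card's normal points at the camera
        con.up_axis = 'UP_Y'
```
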
From there, I couldn’t resist going back this summer and fixing a few shots using Megascans (like the cliff diving shot). The water planes in that shot (and throughout the entire film) are achieved by projecting repeatable video tiles of water that have been stabilized from stock footage, and then creating a roughness map from the video for some shimmer. Lots of ActionVFX splashes and fog as well!
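
The interview doesn’t detail how that roughness map was built, but one plausible version is to remap each stabilized frame’s luminance into a bounded roughness range, so bright ripple crests get tighter, shinier highlights. A speculative numpy sketch:

```python
import numpy as np

def roughness_from_frame(rgb):
    """rgb: float array (H, W, 3) in 0..1. Returns a 0..1 roughness map."""
    # Rec. 709 luma as a cheap brightness estimate
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    # Bright ripple crests -> lower roughness (tighter, shinier highlights)
    rough = 1.0 - luma
    # Keep values in a plausible band so the water never goes mirror-flat
    return 0.25 + 0.6 * np.clip(rough, 0.0, 1.0)

frame = np.random.rand(4, 4, 3)  # stand-in for one stabilized water tile
print(roughness_from_frame(frame).round(2))
```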

In terms of future real-time work, we’re very interested in it. I think the next great frontier for me is a digital character (not necessarily a human one), and that’s something that I’ve been concepting and working on in preparation for what we’re hoping to make next. I imagine that will send us even further down the rabbit hole with a lot of this. I am equally obsessed with the work Ian Hubert is doing in Blender with mocap (and basically everything else). I subscribe to his Patreon and am constantly watching and learning. The community around this is so much bigger than when we started only five or six years ago, and I’m really hoping to keep following it along. It’s so very exciting; I’m learning every day. I think a lot of us are children of DVD special features, just trying to figure out a way to realize our own hopes for these things.

b&a: Because the project was a lengthy one, what do you feel in particular you learned or improved on as time went by? What would have been your wishlist early on in production, knowing what you know now?

Nicholas Ashe Bateman: I wish we knew everything! It’s really hard to express how rough this stuff looked for a very long time, even so much as deeply affecting the editing and review process of the film. For example, we obviously didn’t have the ability to previs any of the work, and it could be years until we’d get a shot to a point that it was ‘final’, which made it impossible to really show the movie to anyone without them thinking we were insane.

The options were: watch this strange thing of people in a warehouse, or watch these terrible vfx shots that really distract from a movie trying very hard to be a ‘serious drama.’ It was endlessly unsolvable. However, when you’re compositing every shot in the movie, you’re really getting to the bottom of what you like visually on an almost instinctual level. The amount of time David, Zack and I spent deliberating on how bright the night skies should be is comical. How blue? How much detail? No stars? How much cloud cover? How much atmosphere? How much difference in the black levels between me and this building far behind me? These things really affect the movie on a large scale, and while those choices may seem apparent now, it really is so hard to answer all of them when you start with nothing.

I think it’s clear that we all wish Blender had advanced to where it is now (or that I knew how to use it then), but I’m also happy we had to go through this last gasp of non-real-time workflows. I think our improvement was very large in a technical sense, but the real work is how much you can picture it in your eye: the compositing, the levels, the detail, getting used to noticing these details in other films and real life, and mixing that with whatever standard or hue of realism you’re shooting for. Not all VFX needs to just be ‘realism’ – that seems so boring. David and I were constantly saying that you wouldn’t hire a DP who said he was going to shoot everything to look ‘real’; the perspective is important. Cassandra Baker (production designer) taught us that. She’d be getting texts and pictures from us every night for years. It’s just painting. Everything is painting.

b&a: One thing that seems quite amazing is your nimble crew behind the VFX work (including yourself). How did you communicate during post, and how did you stay motivated?

Nicholas Ashe Bateman: It was very easy to stay in communication because we all lived in the same office space, basically. Zack Schaefer was the one to really take up the call (with no prior vfx knowledge), move into the office that I was living in, and just try and work our way through this immovable mound of work. I taught him some of the (limited) things that I knew, but every day we were waking up, making coffee, watching YouTube videos, and just trying to make it work. Zack taught himself whatever he needed to know at his desk, and was really undeterred by how crazy it all was. The power of YouTube tutorials is really hard to overestimate. Zack still has a pile of knowledge that I haven’t learned yet, so we’re still sending things back and forth on current work.

From there, Ger Condez would come stay, and David would also be in every day (overseeing everything, giving notes, and working on sound). If the movie succeeds in being a cohesive and somewhat new visual experience, it will be a testament to how dedicated these people were to the film.

I am constantly talking about how hard and sometimes damaging making movies for little money can be, and the credit most often belongs to the people who aren’t being interviewed. Without them, I’d still be sitting in the office rotoscoping my own hair on a shot that was untrackable.

Additionally, we utilized the great resource of some websites like PeoplePerHour and things like that, and found an amazing guy named Talha Rana and his incredible company (TalhaVFX) that handled a huge amount of the rotoscoping of the movie. He built his own small team, and they worked on the movie for almost a year as well. I am so grateful to them, because that is the true work hurdle that you can’t overcome. It was quite a treat for us to get pictures from them huddled around their computers, and reply with pictures of us huddled around ours. It’s a real circus mentality sometimes.

It has been one of the great joys of my life, a real true terrifying adventure. I can’t wait for people to watch the movie.

The Wanting Mare releases in select cinemas and on VOD February 5th. You can currently pre-order the film on Apple TV. Follow the official Twitter account for more info. A 35-minute making-of film will also be released on Feb 5.

