Behind the scenes of the still-in-progress animated short ‘Retour à Hairy Hill’, and the tech and cloud rendering that’s making it possible.
When E.D. Films co-founder Daniel Gies sought to make a personal animated film featuring characters ‘made’ of paper, he didn’t quite realize what he was getting himself into.
That’s because, as you’ll see in this befores & afters interview about Gies’ animated short Retour à Hairy Hill, crafting CG characters with a suitable paper look–with all its unique texture and folding qualities–proved a tough prospect.
Gies and his small team were able to overcome these many challenges by implementing a mix of Maya, Houdini and Unreal Engine approaches, and by relying on cloud rendering in Redshift through Conductor’s deep integration with Maya. Find out more below.
b&a: I think that when someone first sees what you’re doing with this film, there’s immediately a, ‘Oh, what is this?’ response, because of the look of it. I actually feel like that with many of E.D. Films’ projects.
Daniel Gies (director and co-founder, E.D. Films): Part of the driving force of all of the projects we’ve done has been trying to capture something that’s more handmade. I think I was in year four or five when I wanted to be able to take watercolor paintings and move through them like they did with Disney films. I was looking at technology as more facilitating an independent project versus trying to make it do what Pixar did, because that felt impossible. I’ve never been that interested in making the technology shine. I wanted to try to make it invisible, and I’ve been trying to do that for probably 15 years now.
b&a: I have to admit when I watched what you’ve released for Retour à Hairy Hill, I didn’t 100% know if it was some kind of combination of real paper stop-motion and CG, or just CG. How did you accomplish this?
Daniel Gies: The whole project is predominantly digital. It’s keyframe animation done by Agora Studio. There are no actual paper puppets at all. I did build paper puppets first to see how they would work, because I love stop-motion. I love the feeling of paper. I also come from a background of drawing on paper. I found that when I draw with ink and paper, I tend to be less ‘controlling’ of the medium. But then, when I draw digitally, I have a tendency to fix things too much or to get caught up in the ability to fix and correct everything. That can create a rigidness in the design and the aesthetic. When you do stuff with pen and ink and paper, you make a mistake and then you work towards that mistake, because you can’t erase it. And so, I wanted to capture that and keep that.
In the film, all of the textures of the characters started as hand-drawn ink outlines, and all the trees and other pieces are hand-painted in various ways. Some of them are digitally painted, but it all started out from a very physical place; that’s where the paper puppets came from, almost like 2D digital cut-out animation.

b&a: How did that ‘physical’ side translate to what you were doing here?
Daniel Gies: Well, this needed to be 3D, but one thing I’ve always hated in 3D is UV mapping. If you’re trying to draw on a 3D object, there’s something about it that’s just really not that easy. When we first started on this, Substance was not what it is now, where it has a lot of procedural elements to help you do that work.
Now, one of the things of course with 3D objects is that the UV map is just a flattened version of the object. I thought maybe I could just draw faces squashed out, and then curl them in the computer, or maybe cut them out, and curl them and fold them. One day I built a paper puppet with this idea of what a UV-mapped face would look like. I just started drawing it freehand, and then I started cutting out this face and seeing what you could do with a piece of paper that had a face drawn on it.
I think because I’d done so much 3D work, I was able to visualize what a paper maquette could do without having to measure and build it first–I could almost deconstruct it in my head. That turned into, ‘Wow, I could do this in the computer and actually animate them!’ I do love stop-motion, but stop-motion is so, so difficult and tedious, and you can’t fix it easily. I knew that I would never be able to animate it properly if I did stop-motion.
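For readers curious to try the ‘draw it flat, then curl it in the computer’ idea, here is a minimal sketch in Maya Python: a flat plane textured with a scanned drawing, then curled with a bend deformer. The file path, names and values are illustrative placeholders, not production assets.

```python
# Maya Python (cmds): a rough sketch of 'draw the face flat, then curl it'.
# The texture path and names are placeholders, not production assets.
import maya.cmds as cmds

# A flat 'sheet' standing in for the UV-flattened face drawing.
sheet = cmds.polyPlane(name='paperFace', width=10, height=10,
                       subdivisionsX=40, subdivisionsY=40)[0]

# Simple shader carrying the scanned ink drawing.
shader = cmds.shadingNode('lambert', asShader=True, name='paperFace_mat')
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
               name='paperFace_matSG')
cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')

ink = cmds.shadingNode('file', asTexture=True, name='inkScan')
cmds.setAttr(ink + '.fileTextureName', '/path/to/scanned_face.png', type='string')
cmds.connectAttr(ink + '.outColor', shader + '.color')
cmds.sets(sheet, edit=True, forceElement=sg)

# 'Curl' the flat drawing with a nonlinear bend deformer.
bend_node, bend_handle = cmds.nonLinear(sheet, type='bend', curvature=120)
cmds.setAttr(bend_handle + '.rotateZ', 90)  # orient the bend across the sheet
```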
b&a: What were the challenges of translating this paper look and feel to something that could work in Maya or Houdini?
Daniel Gies: I was not expecting that to be so difficult. The texturing was super hard, and simulating the paper was a big nightmare. That was probably the biggest problem we ran into through the whole thing. Originally, I thought, ‘Okay, if we animate this in paper, physics can take care of most of the animation.’
In fact, I was originally going to do all the animation myself, and we were also going to use VR to animate the characters, because we’d done these tests where puppeteering in VR worked really well; it gave the paper puppets a bit of a janky feel. But the thing I didn’t realize would be so difficult was getting the computer to simulate paper. The computer’s really good at simulating cotton t-shirts, but it sucks at simulating paper!

b&a: Why is that?
Daniel Gies: I think it has a lot to do with the fact that paper isn’t stretchy at all. It has a rigidity to it. It almost has a unidirectional rigidity, where if you curl paper in a horizontal way, you can’t curl it the other way. It almost decides it’s going to fold in one direction, and then it reinforces itself along that direction and can’t collapse on the other side, because it’s not linked. It doesn’t have a chain link kind of thing like cloth does. Also, the way that paper gets crumpled and then holds its form after you crumple it, but then slightly expands, is a nightmare!
Our main character had all these little accordion wings. I could do that really easily with my paper maquette by making all those mini-folds and then fan it out. But the computer hated it. The wings would flop everywhere. You have to teach it not to bend along the edge. It wants to bend equally everywhere.
b&a: Was your solution shot-by-shot fixes or something larger than that?
Daniel Gies: With the paper, we figured out a way to do it in Maya, but we found that Maya’s cloth was only great if the characters weren’t moving quickly. So, we mainly used Vellum in Houdini. We found that if the mesh was really low resolution, it actually behaved more like paper. If you had a high resolution mesh, the simulation tended to go in a bunch of different directions.
The rigs were also challenging. They were built so that they had these hanging strips of paper holding them together. In some shots, you can actually see through them and see the paper that holds the character together. Sometimes that was 16 pieces of ‘cloth’ all hitting and rubbing up against each other.
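On the simulation side, here is a hedged sketch of what nudging Maya’s nCloth towards paper-like behaviour can look like: a deliberately coarse sheet made nearly inextensible in-plane. The values are generic starting points rather than the production settings, and as Gies notes, most of the film’s paper was ultimately simulated with Vellum in Houdini.

```python
# Maya Python (cmds/MEL): a hedged sketch of pushing nCloth towards paper.
# Values are illustrative starting points, not the production settings.
import maya.cmds as cmds
import maya.mel as mel

# A deliberately coarse sheet: low-resolution meshes behaved more like paper.
sheet = cmds.polyPlane(name='paperSheet', width=10, height=10,
                       subdivisionsX=12, subdivisionsY=12)[0]

cmds.select(sheet, replace=True)
cloth_shape = mel.eval('createNCloth 0')[0]  # turn the sheet into an nCloth object

# Paper barely stretches or compresses in-plane, so crank those resistances...
cmds.setAttr(cloth_shape + '.stretchResistance', 200)
cmds.setAttr(cloth_shape + '.compressionResistance', 100)
# ...and give it enough bend resistance to hold creases rather than drape like cotton.
cmds.setAttr(cloth_shape + '.bendResistance', 5)
```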
b&a: What about the textures and final look for that paper quality? How hard was that to pull off?
Daniel Gies: I did some experiments first to see how light would go through paper. I used an animation light table, scanned or photographed the paper, and then used that as a transmission texture. One particular thing we noticed was in relation to ink on paper: if you draw on one side of the paper and you put light behind it, that drawing shows up on the other side. That was something we wanted to re-create in rendering.
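A minimal sketch of that backlit-ink idea, using Maya’s built-in standardSurface shader as a stand-in. The production look was built for Redshift, which has its own equivalent transmission controls; the file path and values here are illustrative only.

```python
# Maya Python (cmds): 'ink shows through backlit paper' with standardSurface
# as a stand-in shader. Paths and values are illustrative only.
import maya.cmds as cmds

shader = cmds.shadingNode('standardSurface', asShader=True, name='paper_mat')
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name='paper_matSG')
cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')

# The ink drawing, scanned or photographed on a light table.
ink = cmds.shadingNode('file', asTexture=True, name='inkScan')
cmds.setAttr(ink + '.fileTextureName', '/path/to/ink_on_paper.png', type='string')

# The drawing is visible from the front...
cmds.connectAttr(ink + '.outColor', shader + '.baseColor')
# ...and tints light passing through, so it reads from the back when lit from behind.
cmds.connectAttr(ink + '.outColor', shader + '.transmissionColor')
cmds.setAttr(shader + '.transmission', 0.6)  # how much light gets through the sheet
cmds.setAttr(shader + '.thinWalled', 1)      # treat the paper as a thin surface
```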
The biggest thing relating to the final look, and which nearly destroyed my life, was that the paper had to have thickness to it. It made the whole pipeline so complicated, which I never ever expected. We usually use displacement maps to help create complex surfaces without having to model them. However, once you have some thickness, you can’t use displacement maps anymore, because displacement only calculates based on the surface normal. So if you add them to paper with thickness, the paper gets ‘fat’ on every side, including that super sharp edge where the paper should be flat. You end up getting this rounded, mushroom bubble-looking thing.
As soon as we used displacement maps, the characters looked like they were made of a thick leather, because all of a sudden they were getting puffy. We tried all these techniques to try to get it to work, but we ended up having to make a really high resolution paper character that I sculpted all the folds and the dimples of the paper into, and then that had to be wrapped onto the low resolution cloth-simulated mesh.
I couldn’t find any tutorials or anything on how to do this, and then I just watched a documentary on Frozen about the cloth, and there it was! You can’t sleep at night, you’re stressed out, and it’s one goddamn check box that no one talks about! You basically link your character to the original animated mesh to make it work.
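The wrap step Gies describes, in its most generic form, might look something like this in Maya Python. The object names are placeholders, and recent Maya versions also offer a Proximity Wrap deformer as an alternative.

```python
# Maya Python (cmds): driving a high-resolution sculpt (folds and dimples baked in)
# with the low-resolution cloth-simulated mesh via a wrap deformer.
# Object names are placeholders for whatever is in your scene.
import maya.cmds as cmds

hi_res = 'paperCharacter_sculpt_hi'  # detailed render mesh with sculpted folds
lo_res = 'paperCharacter_sim_lo'     # coarse mesh coming out of the cloth sim

# Wrap deformer: select the mesh to be deformed first, then the influence mesh.
cmds.select(hi_res, replace=True)
cmds.select(lo_res, add=True)
cmds.CreateWrap()  # runtime command behind the Deform > Wrap menu item
```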

b&a: You mentioned before you only had a few computers at your studio. So how are you approaching rendering? How did Conductor and a cloud approach come into all of this?
Daniel Gies: Well, here’s the thing. We previously did a ‘Monster Fish’ short for National Geographic. We were rendering with mental ray at the time. There was this one shot of the fish and we were running really late on the project–it was just two of us working on it, and we only had two computers. We had this shot we had to render of the fish swimming across the scene. It was going to take 72 hours to render on our machines, and I was freaking out. There was a cloud rendering service I’d heard of–before I knew about Conductor–and I bought some render time. But it took me forever to set up the service. It wasn’t easy to authenticate anything.
I rendered the shot and it cost me like $7,000. There was actually a cloth glitch that happened on the back of the fish, but you couldn’t see it in the final shot. Thank God we didn’t need to render it twice, because I did not have the time or money. In the end, I found it hard to set up, and it broke, and I never went back to cloud after that.
Then, I was working with someone else and he came into the studio to see what we were doing and suggested we really needed to get back into cloud rendering. He put me in touch with Conductor. He said, ‘This is a really good artist tool for cloud rendering.’
I was really sceptical, but I was able to just click and download a little app that would interface directly with Maya. I could open the Conductor app in Maya, and directly from Maya I could set up my render, tell it what render engine I was using, which was Redshift, and just press go. There was even a tool to double-check your scene to make sure there was nothing you were missing. It made sure all your files were linked, that everything was in the right place and that there was nothing wrong.

b&a: And were you basically doing this on your own?
Daniel Gies: Yes, that was huge for me because I don’t have a render wrangler. There’s just two or three of us. Usually it’s just one or two people working on this project at any one time. So, there’s not that head space to tackle that kind of complexity.
So then I pressed go, and by default Conductor sets up a render of three frames: the beginning of the shot, the middle of the shot and the end of the shot. It says, ‘You need to check this before you press go.’ It just feels like it’s the first time that technology is speaking to artists who don’t really have the head space. The other ones I’ve used don’t do that. It’s really all on you to manage what you’re doing.
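That first-middle-last ‘scout’ idea is also easy to replicate as a sanity check on any render setup. A trivial sketch, not Conductor’s implementation:

```python
# A generic 'scout render' helper: pick the first, middle and last frames of a
# shot as a cheap sanity check before committing the full range to the farm.
def scout_frames(start, end):
    return sorted({start, (start + end) // 2, end})

print(scout_frames(1001, 1120))  # -> [1001, 1060, 1120]
```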
b&a: What about once you’ve sent it to the cloud, how do you then handle the next steps?
Daniel Gies: It’s very easy to go online and check your account, and see the state of your shot. You can watch the frames being rendered. It’s a super straightforward interface. Literally, it renders, you can see if there’s a mistake, it tells you if a shot failed. It’s really easy to set up. And then, downloading your footage is super easy, because there’s a desktop companion app that you open. You just click download, and it sends it directly to your Maya folder, just as if you rendered it on your own computer. I really feel like they’re focused on the user experience a lot more, and I was actually able to utilize cloud for the first time and not be completely stressed out about it.

b&a: What’s next for this project? What are the next steps?
Daniel Gies: One thing we’ve also been doing is using Unreal Engine. That’s because there are two parts to the film. First there’s the interiors, which have a very stop-motion and hand-painted feeling to them. Then we have the second half of the film, when the main character leaves the house, and most of that was being done in the game engine. Probably about 30% of the film will be in the game engine and then 70% will be rendered in Redshift, mixed with hand paintings. It’s a huge collage of techniques, at the end of the day.
Brought to you by Conductor:
This article is part of the befores & afters VFX Insight series. If you’d like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here.