Behind the scenes of Dissós, with director Steven Elford.
If I tried to describe what happens in Dissós, the short film made by Steven Elford as part of Epic Games’ Unreal Fellowship, I might just spoil the surprises. So it’s best to watch it yourself, below.
Suffice to say, Elford, who is VP, creative technology and R&D at Mainframe Studios in Vancouver, takes advantage of a mix of real-time tools, with Reallusion's Character Creator and iClone used alongside Unreal Engine, to craft the clever and eerie CG short.
Here he tells befores & afters about the process within the Fellowship to make Dissós, how Character Creator and iClone were utilized for the main character, and about his own rich history in model making, puppetry fabrication and CG animation.
b&a: Tell me about the idea behind Dissós and what you set out to achieve in the Unreal Fellowship?
Steven Elford: In the past, at Mainframe Studios, I had the opportunity to work on a few Unreal projects, one of which was the reboot of ReBoot, but I was never as hands-on as I would have liked. The Unreal Fellowship was my chance to open up Unreal and actually use it, see if I really understood how it worked, and test my skills across the board, to see where I would fall down.
At the first Unreal Fellowship kick-off meeting, they announced that the theme would be ‘Duality’. I knew that I wanted to try to make something in the horror/mystery genre and I liked the idea of keeping the location to one place. A house in the woods seemed like a good choice. That was my starting point.

b&a: How did you approach the writing, concept, and planning side of the short, in terms of scripting, boards and any early concept work?
Steven Elford: Once I had the rough idea in my head, I wrote down the scene and some key shot ideas on Post-it notes so I could move them around to see how they played out. I pitched versions of the short to some close friends and changed things until I had a story that I thought would work. From there I treated the production as if it were a live-action short. I made a list of what I needed (a car, a house, props, etc.), then 'scouted' the Unreal Marketplace, found what I thought would work, and bought them.
I could have gone down a rabbit hole of building everything I needed but I wanted to stay focused on storytelling and wanted to use my time wisely. From there I made a few adjustments to the house set in Unreal and exported out a low-res version to bring into Storyboarder, a free storyboarding tool that was introduced to me by the Fellowship. Then it was a case of blocking out the film, adding some sounds, and seeing what worked. This was all done in week one.

b&a: Can you talk about the tools you settled on to help make it?
Steven Elford: Here are the main tools I used:
Storyboarder – It was free and looked powerful, so I thought I would give it a go. It worked great and I would recommend it; you can get away without being great at drawing, as it has dummies you can use and real-world-scale cameras.
Unreal Engine 4.27 – This is what the Unreal Fellowship Storytelling course was based on. Unreal 5 was still in its early days.
Character Creator – The course was only 5 weeks long. I needed a fully rigged character, so I narrowed my choices to building one myself, then texturing and rigging it; using Unreal's MetaHumans, which at the time were still fairly new and had limited clothing options; or using Character Creator 4, which had just come out. I had used CC3 a couple of months before, so I was familiar with how it worked.
For animation, I used a mixture of motion capture from sites like ActorCore and Mixamo, along with some keyframed animation. I brought the same low-res set that I used for storyboarding into iClone 8 (it had just been released) and manipulated the mocap data. I also did this directly in Unreal.
I used Premiere and Soundly for the final edit.
b&a: How did you go about making your main character in CC4? What were some of the key aspects of that tool that helped you achieve something that must have been a tight turnaround?
Steven Elford: I had used Character Creator before so I knew what I was doing and built my guy in about half a day. It’s a fun process and very powerful. I love the fact that you can push and tweak the model without breaking the rig and textures. You can also use built-in animations to see how they look when moving. I then spent another day making him look like he had had a rough time, adding mud, dirt and scratches.

b&a: One of the fascinating aspects is the ‘wet’ sheen your main character had to have on his face — what were the challenges of achieving this, and what different options did you consider?
Steven Elford: The wet look was in fact quite easy. When building the character in Character Creator, I added some sweat to his skin to make him look wet. Then in Unreal I used the Dynamic Weather pack for the rain and thunderstorm. In this pack there is a material for making things look wet. I applied this to the character and then slowly dialed it down so he looked like he was drying out as he made his way through the house.
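The drying-out effect Elford describes boils down to animating a wetness value from 1 to 0 over the character's walk through the house. A minimal sketch of that ramp, in plain C++ with illustrative names (not Elford's actual setup; in Unreal the returned value would typically be pushed to the wet material each tick via `UMaterialInstanceDynamic::SetScalarParameterValue`):

```cpp
#include <algorithm>

// Returns a 0..1 "wetness" value that decays linearly from fully wet
// at ElapsedSeconds = 0 to fully dry at ElapsedSeconds = DryDuration.
// DryDuration is a hypothetical tuning value, e.g. the length of the
// character's walk through the house.
float WetnessAt(float ElapsedSeconds, float DryDuration)
{
    if (DryDuration <= 0.f) return 0.f;
    const float T = ElapsedSeconds / DryDuration;
    return std::clamp(1.f - T, 0.f, 1.f);
}
```

A non-linear curve (ease-out, for instance) could be swapped in if the linear fade reads as too mechanical on screen.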
b&a: How did performance capture work for Dissós, in terms of both body and facial capture?
Steven Elford: For body capture, on the Fellowship course, we had the chance to direct someone at a mocap studio to capture about 30 seconds' worth of data. I used this time for the 'rolling over, getting up' shots.
The rest was done using a mixture of motion capture from sites like ActorCore and Mixamo along with some keyframed animation.
Then as I mentioned, I brought the same low-res set that I used for storyboarding into iClone 8. If I couldn’t make it work I cut around it or moved the camera a bit. Because there’s no rendering time needed, I was moving the camera and tweaking shots right up until the 11th hour.
For facial capture, I used every technique that iClone offered me for the facial animation. I don’t have an iPhone so I used my webcam. I used the puppet tool, hand animation and expressions. Once I was happy with the shot I exported the animation to Unreal and then if needed would keyframe extra animation on top. I did this for the shots where he squinted and his pupils dilated (not that I think anyone noticed).

b&a: There’s some very interesting virtual cinematography going on in the short, especially POV shots and the ‘exploring’ of dark spaces and corridors – how did you approach this?
Steven Elford: I tried to treat it like a live-action shoot. I dressed the set with things that I thought might be interesting if we caught a glimpse of them, like the creepy paintings on the walls. I lit the house by placing the candles in areas where I knew he would be walking, so we could see his face and he would cast shadows on the walls, then slowly adding and moving them once he was in the shot. Less is more.
The lighting was dynamic so every time I did a render pass it would look different. I got lucky on most of the shots, not many re-renders due to bad lightning strikes. The POV shots were done by attaching a light to the camera with a slight offset so it was the correct distance to look like it was in his hand. I then animated the camera path to look around the room and then added secondary animation to the flashlight to lead the motion. Once that was done, a little bit of camera shake on top to sell it.
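The flashlight rig Elford describes (light parented to the camera at a hand's-length offset, with secondary animation so the beam leads the camera move) can be sketched as sampling the camera's animation slightly ahead in time. A plain C++ illustration with hypothetical names and values, not the actual Unreal setup, where the light would be an attached spotlight component:

```cpp
#include <functional>

// Camera yaw over time (in degrees), supplied by the camera's
// animation curve. The curve itself is whatever the shot uses.
using YawCurve = std::function<float(float)>;

// The flashlight "leads" the motion: its aim is the camera's yaw
// sampled LeadSeconds ahead, so the beam arrives at a target just
// before the framing does. LeadSeconds is an illustrative tuning
// value, not one from the film.
float FlashlightYaw(const YawCurve& CameraYaw, float Time, float LeadSeconds)
{
    return CameraYaw(Time + LeadSeconds);
}
```

Camera shake would then be layered on the camera alone, on top of this, so the light's lead survives the shake.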
b&a: Can you give me a brief history of how you got into the world of CG, including that transition from doing practical puppet fab work?
Steven Elford: I grew up on '70s and '80s TV shows and movies, where everything was 'practical', not a CGI shot to be seen. 'How can I do that?' I wondered. I went to Falmouth Art College to figure things out, and after two years of drawing, painting, and sculpting I decided 'making' was what I was good at, so I went to Kent Institute of Art and Design to study model making and design.

While there I built up my portfolio by doing freelance work building props, models, and puppets for TV commercials, stage shows, exhibitions, and the like. After graduation, I worked at various companies, mostly in animation, which led me to work on puppets for Mars Attacks!, and chickens for the Aardman movie Chicken Run.
While working at Aardman I saw a couple of people doing cool things on a computer and I asked them what it was all about. I built a machine and taught myself (there was no other way back then, I’m old). Aardman gave me the opportunity to move from practical set building and puppet work to working in computer animation which I did with them for about three years.
I then moved to Vanguard Animation where I was a layout artist on Valiant, the first full-length computer animated feature film made in the UK and produced at Ealing Studios, one of the oldest film studios in England. In 2005 I relocated from the UK to work as a lighting technical director on Disney’s The Wild for Toronto-based C.O.R.E. Digital Entertainment. I continued to work with C.O.R.E until 2006, serving as a lighting technical director and compositor on The Ant Bully.
From 2006 to 2008, I worked as the layout supervisor and head of VFX for Vanguard Animation while working on the feature film Space Chimps in Vancouver.
I joined Mainframe after three years with Nitrogen Studios, serving first as layout supervisor on the live-action CG hybrid version of the iconic preschool property Thomas & Friends, and then as its CG supervisor from 2008 through 2011. I’ve been at Mainframe Studios now for 12 years, starting as a CG supervisor, with my current position being VP creative technology and R&D.

Brought to you by Reallusion:
This article is part of the befores & afters VFX Insight series. If you’d like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here.