Han Yang on finding new 3D workflows for ‘The Lander’

The Lander

Practical lessons on moving to real-time rendering.

#3dartistsrock week is brought to you by RebusFarm.

When 3D artist Han Yang posted work-in-progress stills and then a trailer for his personal project, The Lander, he received much acclaim for the look of this Predator-inspired short. But, in addition, he captivated viewers by explaining his mix of V-Ray and Unreal Engine for lighting and rendering the film.

Han has also been revealing his process for making The Lander, including on the platform Yiihuu.com. It’s a great example of the kind of sharing that happens in the 3D community. I decided to ask Han about what led him to make The Lander, what he learnt from doing it (especially the hard lessons), and what he’s up to next.

b&a: What inspired you to make The Lander? In particular, how did experimenting with V-Ray and Unreal Engine lead you to the project?

Han Yang: I was a long-time fan of Blur Studio’s cinematic trailers. I was always amazed by their style of visual storytelling. Their short films were packed with intense action sequences and dynamic cinematography. I’m also a huge sci-fi fan, so I decided to make a short film to study and practice my camera language, and to push myself in all aspects of the production to see how far I could go. The storyline of The Lander is pretty simple: an alien soldier lands on a planet to investigate a spaceship crash site, and gets attacked by a creature on his way to the wreckage. The story allowed me to showcase a good mix of cool spaceship flying shots and intense creature melee combat.

V-Ray has always been my go-to renderer and I’ve been using it for almost 10 years. The render quality is just amazing, but really time-consuming. The first storyboard included about 60-ish shots, so I had to find a much faster way to render without compromising the quality too much. I was looking at UE’s demo scene for A Boy and His Kite and was really amazed by the great render quality in real-time, especially the environment and foliage. I started digging into a couple of demo files and playing around with the lighting. It was just so easy to get a great-looking environment render, so I decided to use V-Ray for character/FG elements, and UE for background, atmosphere and environment. Without a doubt, I wouldn’t have been able to finish this project if I hadn’t used UE in my workflow.


b&a: What did you need to learn about UE and real-time to make the short possible? What was your overall workflow for the short (and what were the main tools you ended up using)?

Han Yang: The first two weeks in UE were just painful. I was learning a new piece of software from scratch, and the whole mindset was just totally different. I had a hard time wrapping my head around UE’s timeline; it was just nothing like Maya. Because I use UE mostly for rendering purposes, I had to teach myself about shading, lighting, animation and some particle FX in UE. One good thing is that I didn’t need to worry about performance and frame rate, which saved me a lot of time I would otherwise have spent optimizing the assets.

Lighting in UE was so much fun, and you get the result basically instantly. It was so much faster to get the first-pass lighting setup done and start rendering out shots for comp tests. Coming from an offline renderer background, I’ve always relied heavily on comp for the atmosphere and other volumetric effects. UE basically took care of these in-engine, and even the depth of field and motion blur looked really promising.

Sequencer was another big part I had to dive really deep into: things like synchronizing animations for different actors, triggering particles at a specific time, matching the camera animation in Maya and so on. For some shots I even ended up animating the characters or tweaking the camera in Sequencer.



The workflow was pretty standard, I would say, for the most part. I had two sets of lookdev for every asset, one for V-Ray and one for UE, with animation always done in Maya. I did use the iPhone’s ARKit to help me with previs in UE to block out shots quickly. It was almost like doing location scouting on set with a virtual camera, through your iPhone. Most of the environment was built in UE with assets from the Marketplace. I also had proxies of those environments in Maya to help me with animation and shadow casting for rendering.

After the animation was done, the painful process of bringing everything into UE began. The rigs had to be UE-friendly (no deformers other than blendshapes), the camera had to match, everything had to be baked, etc. All that small stuff. If a shot had very little character work, or the character was not in closeup, I would just render it out straight from UE. Otherwise I would do the background lighting in UE and then match the character lighting in V-Ray, then add some dust and particles in comp to blend the character into the background. It was almost like doing live-action compositing, except the plate is from Unreal. With the help of real-time rendering, I could basically go from animation to compositing within hours. At the peak, I was able to do 2-3 shots a day from animation to final render.
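The core of that blending step is the standard premultiplied "over" merge: the V-Ray character occludes the UE plate by its alpha. Han did this in comp rather than in code, but the operation itself can be sketched in a few lines of NumPy (the array values here are made-up test pixels, not anything from the film):

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Premultiplied 'over' merge: foreground occludes background by its alpha.

    fg_rgb is assumed to be premultiplied by fg_alpha, which is the
    standard convention for CG renders out of V-Ray.
    """
    a = fg_alpha[..., np.newaxis]       # broadcast alpha across RGB channels
    return fg_rgb + bg_rgb * (1.0 - a)

# Tiny 1x2 image: left pixel fully covered by the FG, right pixel empty.
fg = np.array([[[0.8, 0.1, 0.1], [0.0, 0.0, 0.0]]])   # premultiplied RGB
alpha = np.array([[1.0, 0.0]])                         # FG coverage
bg = np.array([[[0.2, 0.2, 0.9], [0.2, 0.2, 0.9]]])   # the UE "plate"

comp = over(fg, alpha, bg)
# Left pixel keeps the FG colour; right pixel shows the plate through.
```

This is the same math a Merge (over) node performs in Nuke; the dust and particle elements Han mentions would simply be additional layers in the same stack.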

b&a: What did you find were the toughest parts of trying out a new real-time workflow, including pushing data between applications?

Han Yang: Pushing data between Maya and UE was just not that streamlined, especially for rigs that were not designed or optimized for Unreal. At the time, UE’s support for Alembic caches was very unstable, so I had a hard time pushing my characters from Maya to UE smoothly. Although UE offered great render quality for the most part, it still had some disadvantages when it came to skin, fur and character rendering. AOVs were another big challenge. I think without customized shaders and engine tweaks, UE still cannot output renders with an alpha channel, and Z-depth is hit or miss if transparent materials are involved. The limitations with render passes left very little room in compositing.
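When a renderer cannot output an alpha channel, one classic workaround (not something Han says he used, just an illustration of the problem) is difference matting: render the element twice, once over black and once over white, and recover the alpha from the difference. Over black you get the premultiplied foreground F; over white you get F + (1 − α), so α = 1 − (white − black):

```python
import numpy as np

def recover_alpha(over_black, over_white):
    """Recover alpha from renders over black and over white backgrounds.

    over_black = F (premultiplied foreground)
    over_white = F + (1 - alpha) * 1
    => alpha = 1 - (over_white - over_black), averaged over RGB.
    """
    return 1.0 - (over_white - over_black).mean(axis=-1)

# Synthetic single-pixel check with a known alpha of 0.25.
true_alpha = 0.25
fg_premult = np.array([[[0.1, 0.2, 0.05]]])
over_black = fg_premult
over_white = fg_premult + (1.0 - true_alpha)

alpha = recover_alpha(over_black, over_white)
```

It costs a second render per frame, which is cheap in a real-time engine, though it only works for shots where the lighting doesn't change between the two passes.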

Shading was another big challenge, I would say. Again, compared to a V-Ray shading workflow, the UE shading network is just a totally different world, one with a lot of math involved. Thankfully, Epic provides a lot of really good learning examples, and I was able to study their shaders and recycle the parts that I needed for my project. Despite all these challenges, I still think the fast render times can give indie artists a huge advantage in their own productions.


b&a: For those who might not already know, what is Yiihuu? How does the tutorial that you made work for viewers?

Han Yang: Yiihuu is one of the biggest CG and art tutorial platforms in China, and it offers high-quality content for both Chinese and international audiences. Most of the tutors on Yiihuu are industry veterans who have worked at top studios like Blizzard, Naughty Dog and ILM. Their tutorials cover all aspects of production for both games and vfx. I have been following their content for a long time, and they are particularly strong in asset and shading tutorials. I found a couple of really great UE tutorials on Yiihuu too, covering lighting and game design. A really great website for artists.

I launched the tutorial with them last August and I’m really grateful for the platform they offered. My tutorial was a bit different because it covered pretty much the whole production, more like a generalist tutorial. Basically, I took 3 shots from my film and broke them down step by step, from rigging and animation all the way to rendering out of UE and finishing the compositing in Nuke. My goal was to help others make their own CG animation or personal work by utilizing tools that can boost their productivity. I’ve seen a lot of great animations and renders from the students following the tutorial, and it’s amazing to see the work they’ve done in UE. I’m really glad my tutorial brought something new into their workflow.


b&a: Can you talk a little about your own career in CG/VFX? What’s next from you?

Han Yang: I have been in the industry for about 7 years, and I started as an animator. I always wanted to do more than animation, and even make my own film, so I never stopped making personal work outside animation. For most of my career I worked as an animator or layout artist, so I’m pretty comfortable with cameras and animation. I think that helped me a lot in making The Lander, especially in achieving a cinematic feeling through camera language. Currently I’m the CG supervisor at IGG Canada. We have a small, elite team producing cinematic trailers for games, and short films. Having worked in the VFX industry for almost 5 years, I wanted to explore different opportunities that are a bit more challenging creatively. Now I have more creative freedom and we are trying to push the quality of game cinematic trailers. We are looking to release our latest trailer in September, can’t wait to show you guys what we’ve been cooking!

I’m also working on a new personal film that I hope to finish by the end of this year. This time I’m pushing my animation skills further, with more complicated camera work. Facial capture and performance are going to play an important part in the new film too. I’m also pushing the render quality in UE, even for close-up shots. It’ll have a sequence with a lot of fast-paced close combat. And I’m not skipping cloth sim this time, haha.


Film credits

Creator/director: Han Yang – www.han-yang.ca

Character Assets:
Ben Erdt – artstation.com/benerdt
Vick Gaza – artstation.com/vickgaza

Rigging: Perry Leijten – perryleijten.com/

Music Production: Shaobo Zhang
Sound FX & mixing: Lynn Lai, Cedar Zhao

This week at befores & afters is #3dartistsrock week, diving deep into a different 3D artist each day to reveal their work and their process.
