Step-by-step: how to make your own VFX-heavy music video

Behind the scenes of Ivan Dorn’s ‘Wasted’ promo with director and VFX specialist Denys Shchukin.

A goal of many visual effects artists is to be able to use the skills learned working on complex feature films, TV shows or games to make their own content, say, a short film or even a feature. Certainly, the path to creating any content can be tough, particularly because of time and budget. However, with tools becoming more accessible, and projects able to be realized across borders, there is now more and more scope for realizing personal projects.

That’s one reason why Vancouver-based Denys Shchukin, who has worked at many studios including Framestore and Image Engine, decided to take on an adventurous individual project. He scouted around for music to match the imagery he wanted to create, and came upon the song ‘Wasted’ by Ukrainian singer Ivan Dorn. Shchukin would ultimately pitch the singer successfully on a music video featuring a completely CG human character performing a number of magic-power-like moves.

Here’s how Shchukin came to direct, produce and write the music video, and contribute heavily to the VFX, along with other key collaborators. befores & afters asked him to break down the process, step-by-step, from initial idea to the final frames.

1. The origins of the idea

I decided that I needed to create a ‘masterpiece’ on my own. Originally I thought of making some sort of FX/CG mini reel/story, but that very quickly transformed into the idea of creating a full CG music video. Funnily enough, my first few years in the VFX industry were spent in music video production/post-production.

An evolution of Dorn’s character in the music video.

Usually, artists might make a project and then look for suitable music. I decided to do it in reverse order. Because of my past experience in dancing and choreography, I decided to find a proper music composition first, and then let the music itself inspire me and trigger visuals and rhythm in my imagination.

At the time, the new album from Ivan Dorn (one of my favorite musicians), OTD, was released. Three tracks really stuck in my head – ‘Wasted’, ‘Collaba’, and ‘Such a Bad Surprise’.

Cloth sims would become a significant part of realizing the character.

After I found contacts for Ivan, we arranged a chat and I expressed to him how much I liked his song, what ideas I had in my mind, and that it would be full CG. Ivan liked my style and creative way of thinking and shared my excitement about this project, so we decided to start the ball rolling. A few days later I presented the script for the music video. Ivan approved it without any comments or changes and we moved to imagery.



2. Planning it out

Considering that the whole project was supposed to be 3D from the beginning, I decided to skip traditional concepts, sketches and storyboards. Instead, I made stills of all 140 shots in 3D.

A final still from the music video.

Later on we edited the stills to the music, and it became completely clear which direction we were going in terms of editing, rhythm, camera angles, layout, and logistics inside the scene. After the base concept was established, we started on animation, render and shading tests and project scheduling, and, on the side, we started to assemble a ‘modular’ team.

3. Motion capture

Valentine Ushakova from Digital Cinema Ukraine helped us with the mocap session. They used a Vicon T160 set-up and 52 markers for the shoot. Ivan did every move himself, which made the motion capture even more realistic.

A still from the mocap session.

Unfortunately I wasn’t able to attend the mocap session, since it was happening on the other side of the globe, but I directed it remotely via video calls and constant updates from the b-roll camera. We booked two eight-hour shifts and I was online the whole time. At the end of the first day I reviewed everything and made a wish list of comments and improvements for the second day.

I was very lucky to have our project animation supervisor, Slava Lisovsky, at the mocap session. We were speaking 24/7 to make sure we had an identical idea of what we were trying to achieve and how it was going to look.

Ivan Dorn covered in tracking dots for a facial capture session.

We had planned the whole sequence before the mocap shoot, and at the end of the day we had two types of animation clips: story-driven animation and a dancing library. Crazy shots like flying, falling and underwater swimming were animated with classic keyframe animation, frame by frame.

4. Digital human build

The first stage of the build was done using hundreds of reference photos of Ivan in a T-pose, other essential poses and detailed close-up photos of Ivan’s forehead, ears, arms, palms, etc.



Previs head modeling.

We also used a separate, huge set of static photos of Ivan in different poses and with different emotions, plus two video recordings of Ivan singing and performing emotions and expressions without markers. Then we did the same thing again with markers on.

The build approach was more or less traditional, except that we had no option to use a full body 3D scanner. We did do a very rough scan using photogrammetry, and then finalized the model in ZBrush.

Dorn’s CG model takes shape.

The head model for previs purposes was made by Andrey Ryzhov. The final versions of the head and body were made by Alexey Rodenko from KB Studio. Facial rigging and animation were done by Slava Lisovsky.

5. Effects sims

For the CG hair and clothing, Igor Velichko carried out tailoring in Marvelous Designer, after which we used two versions of the clothes: one for rendering and one for simulation (lower resolution and without thickness). For the simulation we used Vellum in Houdini.

Dorn’s hair and cloth sims set-up.

After that, the motion was transferred to a denser, high-res geo with thickness for rendering, using a custom asset. Grooming was created in Houdini using the Hair and Fur system, and Vellum was also used to simulate the hair. Since we had a lot of shots, we used Python for automation; this made it possible to simulate each shot in one pass just by choosing a shot number.
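The automation script itself isn’t shown here, but a minimal sketch of that kind of per-shot Vellum caching in Houdini’s Python module (hou) might look like the following, where the node path, output pattern and shot ranges are assumptions for illustration rather than the production setup:

```python
import hou  # Houdini's Python module; run inside a Houdini session or hython

# Hypothetical per-shot frame ranges; a real pipeline would read these
# from its shot database rather than hard-coding them.
SHOT_RANGES = {101: (1001, 1120), 102: (1001, 1085)}

def simulate_shot(shot):
    """Cache the Vellum solve for one shot in a single pass."""
    start, end = SHOT_RANGES[shot]

    # Geometry ROP in /out that writes the sim to disk
    # (node path and output pattern are assumptions).
    rop = hou.node('/out/vellum_cache')
    rop.parm('sopoutput').set(
        '$HIP/sim/shot_{}/cloth.$F4.bgeo.sc'.format(shot))
    rop.render(frame_range=(start, end))

simulate_shot(101)
```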

For the more magical effects, such as the energy rays, energy hits and teleportations, we tried to make them attractive and appealing not through complexity or the sheer number of elements, but rather through how their light, color, shape and motion integrated into the shots.

Lightning effects were a big part of the FX sims.

I had had experience with fire simulations before, so I decided to do these effects myself. Volumetrics are always the trickiest part. I used a parallel simulation strategy (not a distributed one): our final fire was split into 22 independent containers which could run on the farm in parallel or sequentially. The benefits of that approach were that each container simulation didn’t take much time, and separate parts could be adjusted individually.
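As a sketch of how such a container split can be driven, each container can be launched as its own hython process, or submitted as a separate farm job. The 22-way split is from the production, but the script and flag names here are hypothetical:

```python
import subprocess

NUM_CONTAINERS = 22  # the production's fire was split 22 ways
procs = []
for i in range(NUM_CONTAINERS):
    # Each hython process simulates one container of the final fire;
    # 'sim_fire.py' is a hypothetical per-container sim script.
    procs.append(subprocess.Popen(
        ['hython', 'sim_fire.py', '--container', str(i)]))

# Wait for all containers locally; on a farm these would simply be
# independent jobs instead.
for p in procs:
    p.wait()
```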



6. Rendering

We chose Redshift for rendering. One of the reasons is that it has very good integration with Houdini. I figured that if I wanted to keep my micro-pipeline as small as possible, I needed to be able to do the job of multiple departments in the same DCC, so Houdini was a no-brainer for me. The Redshift integration is close to perfect; it meant I could work in a traditional way with a procedural, node-based approach.
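As an illustration of how tight that integration is, driving a Redshift ROP from Houdini’s Python shell takes only a few lines. The node type and camera parameter names below follow the Redshift-for-Houdini plugin, but treat them, and the scene paths, as assumptions that may vary between plugin versions:

```python
import hou  # requires a Houdini session with the Redshift plugin loaded

# Reuse the ROP if it exists, otherwise create it in /out.
out = hou.node('/out')
rs = out.node('wasted_beauty') or out.createNode('Redshift_ROP', 'wasted_beauty')

# Point the ROP at the shot camera and render a frame range to beauty
# ('RS_renderCamera' and '/obj/shot_cam' are assumed names).
rs.parm('RS_renderCamera').set('/obj/shot_cam')
rs.render(frame_range=(1001, 1120))
```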

Rendering was handled in Redshift.

Another big and important factor that led me towards Redshift was its scalability. Redshift is a GPU renderer, so in order to multiply render power I could just buy more video cards, plug them into my system, and add 100-plus per cent render speed with each additional GPU. With CPU-based renderers, I would need to assemble more workstations, which is much more costly.

Redshift also has great camera settings, with custom depth of field, bokeh, lens distortion and photographic exposure via PostFX. Later on, I polished my settings and was able to render final-quality frames in about 30 to 45 minutes each.

7. No compositing

The whole project was in a compact pipeline with a limited number of artists. Rendering in AOVs, managing versions, dealing with multiple interdependent layers—these are good and smart ways to go on big feature film projects, especially when you’re mixing practical 2D plates with various 3D layers from multiple departments.

A final electric shot.

In our case, however, where everything needed to be fully 3D/CG, all the images were delivered from the same DCC with the same render engine. Considering the speed of the Redshift renderer, and the dynamic nature of the changes, it was easier to re-render a shot or sequence directly to beauty without an extra step.

Another reason for rendering directly to beauty: I was using Redshift PostFX, which you could obviously apply later in the post process, but it was much easier to be able to see the final image right in the viewport.

8. Working modularly

Overall, the modular or distributed approach worked just great for me, considering all the pros and cons. In the early stages, while we were doing head and body modeling, layout, etc., there was no need for 24/7 communication. A few emails, group chats and group video calls a few times a week were just enough.



A screenshot from the animation stage of production.

Later on, when we started intense blocking of all 142 shots with animation supervisor Slava Lisovsky, we jumped into Shotgun and managed the middle part of the project there. Then, after animation was almost done and the project was already in the middle of shading, lighting and lookdev, I was taking care of that stage by myself, so not much external communication was needed.
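Shotgun (now ShotGrid) exposes a Python API for exactly this kind of remote version tracking. A minimal sketch of pulling the versions awaiting review might look like the following, where the server URL, script credentials and project id are placeholders:

```python
import shotgun_api3

# Connect with script credentials (placeholders, not the real project's).
sg = shotgun_api3.Shotgun('https://mystudio.shotgunstudio.com',
                          script_name='review_bot',
                          api_key='SCRIPT_KEY')

# Find all versions in the project still marked 'rev' (pending review).
versions = sg.find(
    'Version',
    filters=[['project', 'is', {'type': 'Project', 'id': 1}],
             ['sg_status_list', 'is', 'rev']],
    fields=['code', 'user', 'sg_status_list'])

for v in versions:
    print(v['code'], v['sg_status_list'])
```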

I used an external render farm called ForRender, run by Ruslan Imanov and Roman Rudiuk, who provided 24/7 support over Skype. I was also able to log into the farm machines to see what was happening, or to fix things myself.

Dorn’s character in CG form.

And at the very end of the project, the Abracadabra FX team joined us for the last few weeks to help with FX creation, generation and population. They had their own project management system; I was given my own credentials for it, and all versions were also sent to me via a Telegram bot channel. That was very convenient: you wake up in the morning and you have a ready-to-go playlist right on your phone.
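The production bot itself isn’t described beyond that, but pushing a new version into a Telegram channel takes only one call to the Bot API’s sendVideo method. A minimal sketch, with the token, channel and file names as placeholders:

```python
import requests

TOKEN = 'BOT_TOKEN'            # placeholder bot token
CHAT_ID = '@wasted_dailies'    # hypothetical review channel

def send_version(path, caption):
    """Post a rendered version to the channel via the Telegram Bot API."""
    url = 'https://api.telegram.org/bot{}/sendVideo'.format(TOKEN)
    with open(path, 'rb') as f:
        requests.post(url,
                      data={'chat_id': CHAT_ID, 'caption': caption},
                      files={'video': f})

send_version('renders/sh042_v012.mp4', 'sh042 v012 - FX teleport')
```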

Considering that I needed to ramp up and down extremely quickly, the modular/distributed approach was the only way to go. Shotgun is also perfect for this kind of collaboration, where artists and management are not necessarily in the same place.

Credits

Director / Producer / Scriptwriter – Denys Shchukin
VFX / CG / FX Supervisor – Denys Shchukin
Animation Supervisor – Slava Lisovsky
Camera Animation – Slava Lisovsky
Digital Grooming and Tailoring – Igor Velichko
Cloth and Hair Simulation – Igor Velichko

Head and Body created by “KB Studio”:
Kate Bekasova
Alexey Rodenko
Render Farm “FORRENDER”:
Ruslan Imanov
Roman Rudiuk

Motion Capture “Digital Cinema Ukraine”:
Valentine Ushakova



FX by ABRAKAДАБРА
Petr Kuznetsov
Nikolay Prudov
Roman Bazhura
Ilya Lindberg
Andrey Shvetsov
Leonid Panov

Additional FX: Alexander Kratinov

Previz Head Modeling: Andrey Ryzhov

Additional CG-Artists:
Oleksandr Nepomniashchy
Denys Leontyev
