A cloud rendering explainer for people who want to know more.
In visual effects, ‘cloud rendering’ has been a buzzword for a couple of years now. But what does it really mean, and what’s involved in starting to render your project in the cloud?
I asked Conductor Technologies president and CEO Mac Moore to break down some of the key concepts of cloud rendering, including why you’d consider choosing it over an on-premises renderfarm to begin with, whether you really have to understand anything technical, and what it takes to jump into rendering in the cloud.
Why choose cloud rendering at all?
In short, the reason for choosing cloud rendering is the flexibility and elasticity of compute power, using it only when you need it. We’ve run loads of ROI calculations, and for a studio to financially benefit from on-prem equipment, it would need to maintain 75% utilization of total capacity at all times. In practice, this rarely happens, because project-driven fluctuations in demand are a poor match for a static compute environment.
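To make that utilization math concrete, here is a minimal sketch of the effective cost per *used* core-hour on a fixed-size farm. Every number in it (farm size, monthly cost, cloud rate) is a hypothetical assumption for illustration, not an actual Conductor or cloud-provider price:

```python
# Hypothetical comparison of on-prem vs. cloud cost per used core-hour.
# All figures below are illustrative assumptions, not real rates.

ONPREM_MONTHLY_COST = 10_000.0   # amortized hardware, power, admin (assumed)
ONPREM_CORES = 512               # farm size (assumed)
CLOUD_RATE = 0.036               # $ per core-hour (assumed)
HOURS_PER_MONTH = 730

def onprem_cost_per_used_core_hour(utilization: float) -> float:
    """Effective cost of each core-hour actually consumed on-prem."""
    used_core_hours = ONPREM_CORES * HOURS_PER_MONTH * utilization
    return ONPREM_MONTHLY_COST / used_core_hours

for u in (0.25, 0.50, 0.75):
    print(f"{u:.0%} utilization: "
          f"${onprem_cost_per_used_core_hour(u):.4f}/core-hour "
          f"(cloud: ${CLOUD_RATE}/core-hour)")
```

With these assumed numbers the on-prem farm only beats the cloud rate near the 75% utilization mark; at lower utilization, every idle hour is baked into the cost of the hours that do get used.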
Even in ‘steady-state’ animation projects, where rendering occurs daily, the reality is that renders are submitted infrequently during that day, and the remaining time, on-premises equipment sits idle. This idle time translates to sunk cost that could be used elsewhere.
There is also a top-line revenue consideration, where we’ve seen studios take on more work (or more iterations) with cloud rendering, because they know they can allocate temporary render capacity for each project, as needed. This process also much better aligns the studio’s project-based financials to the overall compute cost.
With cloud rendering, you only use it when you need it
The biggest mental hurdle to overcome centers on the idea that cloud should be leveraged dynamically, only as render demand warrants, not in an IT-centric forecasting model. By this I mean that cloud resources should spin up immediately when a shot is submitted for rendering, and not a moment before. They should also spin down with equal urgency.
We see many (larger) studios looking at project demand forecasts, comparing them to on-prem capacity, and then allocating cloud resources before they’re actually needed, essentially extending their on-prem equipment as if it were a long-term colocation facility. This is a very pricey endeavor, and the main reason cloud initially got a reputation for being expensive.
OK, I want to try cloud rendering, what infrastructure is necessary?
To get started, very little is required to get moving with Conductor. Integrations with various DCCs allow for automatic dependency scanning and uploading to the cloud. From there, we handle the rest: spinning up one machine per frame in the scene, rendering the job, and sending the resulting frames back to the local environment. These days, with the abundance of multi-gigabit connections, uploading really isn’t a large consideration.
Once a studio gets beyond initial testing and needs more advanced integration, we see them setting up dedicated servers for uploading and downloading of scene files and resulting images, respectively. This removes the burden from the artists’ workstations and frees them up to focus on the shot, rather than submission. For larger studios, we also work with them on hybrid workflows, integrating with their on-prem managers like Deadline and Tractor.
Going from ‘I’d like to try cloud rendering’ to ‘signing up’
Signing up is very simple. From our website, www.conductortech.com, artists/studios can click the “Get Started” button at the top and navigate through some simple signup questions: email, account name, project, etc. No payment information is required at first, as they can evaluate the service free of charge up to $100 for the first month.
Once they set up the account, they receive a confirmation email with the Conductor plugin installer and how-to documentation. Installing Conductor will drop a submission button into all of our supported DCCs automatically. From there, an artist simply needs to open the scene and submit.
The setup process
As mentioned above, once installed, there will be a Conductor ribbon along the top of the supported DCC. When the scene is open, an artist will click the Conductor submitter icon, which will open up a dialogue box for submission. This will give them options on frame range, cloud instance type, layers to render, etc. The artist will then navigate to their Conductor Web Dashboard to monitor their progress.
We also give an option to simply get an email notification once the render is complete, if they want to continue working on other things. Iterating on the shot after initial render, Conductor will look at the files in the scene and only upload changed items (by performing an MD5 hash). This alleviates redundant uploading and speeds up the iterative process. A great example of this is a Maya to Nuke workflow, where the generated Maya output is already on Conductor, so the subsequent composite submission only needs to verify the files rather than uploading all the previous work.
More advanced workflows can also be accomplished for items like custom plugins. Custom shader, crowd, or hair libraries, for instance, can be uploaded along with the scene and run on the cloud VM. The only thing to note with items like this is that we’re running a Linux backend, so even if the artist is on Windows, we’d need Linux builds of those libraries for use on Conductor. For some off-the-shelf plugins, like Yeti, this is handled automatically for you.
Cores and costs – here’s what that’s all about
Conductor started on Google Cloud Platform, but we’ve just launched production availability of our multi-cloud offering, expanding into Amazon Web Services. Artists can now run standard and low-cost (preemptible/spot) instances on both cloud providers, as cost/preference warrants. Selections range anywhere from 1- and 2-core instances, for simple work, to 160-core instances with over 3TB of RAM, for supporting the most complex scenes.
Balancing cost and time is a question we get often, as some render farms charge premiums for higher priority, due to their own fixed resources. With Conductor using public cloud, availability is never an issue, so we simply charge per core per hour. The benefit there, with render engines scaling in performance nicely along with more cores, is that larger machines don’t necessarily cost more money for the shot. They simply get the work done faster.
Quick example: if a shot renders in 1 hour on a 32-core machine, it will likely run 30 minutes on a 64-core machine, so the cost is the same (32 × 1 = 64 × 0.5). The other benefit of larger machines finishing sooner is that if a studio is leveraging low-cost preemptible or spot instances, the likelihood of being “preempted” is lower. A quick educational note for those not aware: low-cost cloud instances are temporarily available, “sub-leased” resources that can be reclaimed at any time if another cloud customer is willing to pay the standard price. These instances are key to getting good economics out of the cloud.
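The arithmetic in that example is just core-hours times a flat rate, which a tiny sketch makes explicit. The price per core-hour below is an assumed, illustrative figure, not an actual Conductor rate:

```python
# Per-core-hour billing: shot cost depends on core-hours, not machine size.
RATE_PER_CORE_HOUR = 0.04  # assumed illustrative price, not an actual rate

def shot_cost(cores: int, hours: float) -> float:
    """Cost of one shot rendered on a single machine."""
    return cores * hours * RATE_PER_CORE_HOUR

small = shot_cost(32, 1.0)   # 32 cores for 1 hour  -> 32 core-hours
large = shot_cost(64, 0.5)   # 64 cores for 30 min  -> 32 core-hours
assert small == large        # same cost, half the wall-clock time
```

The shorter wall-clock run on the bigger machine also shrinks the window during which a preemptible or spot instance can be reclaimed mid-render.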
From a cost-structure standpoint, Conductor is an all-inclusive pricing model. We have metered per-minute pricing with our software and compute partners, so when an artist or studio submits work, they don’t need to contend with license entitlements, cloud subscriptions and storage across providers, etc., it’s all managed by Conductor. If, however, a major studio has special pricing entitlements with partners, we can work within those requirements, as well.
Security – it’s a big deal
Conductor was born out of a production studio, so security was imperative in order to work on large Tier 1 projects. In product development, we worked with a third-party security firm to perform frequent audits of our software stack to ensure discovery and remediation of any potential vulnerabilities. As the industry matures in this space and moves towards standards, however, we’re working with the new Trusted Partner Network (TPN), and as their coverage expands to include cloud-based applications like Conductor, we’ll get that certification, as well.
Here’s where you’ve already seen cloud rendering in action
Some quick feature examples where Conductor has been used are Blade Runner 2049, Welcome to Marwen, Transformers: The Last Knight, Pirates of the Caribbean: Dead Men Tell No Tales, Stranger Things, Game of Thrones, and the latest Hellboy.
We have had hundreds of short form projects on the platform, most notably from Stockholm-based FABLEfx, who won several awards on their recent ‘Dancing on Ice’ project.
We also have some unique use-cases where customers are using the auto-scaling capability of Conductor to run highly parallel workloads for things such as automatic image customization, simulations, as well as data generation for VR and image recognition applications.
To better summarize the overall scale of use, Conductor has run almost 300 million core-hours of work to date, across a large variety of customer profiles, from small freelancers to large enterprises. You can see more examples of where Conductor’s been used at https://www.conductortech.com/blog.