A breakdown of the fun ways you can already use CopyCat in Nuke

How to use the new machine learning tools to aid in correcting out-of-focus shots, removing markers, creating garbage mattes and doing beauty work.

With its Nuke 13.0 release, Foundry introduced a new suite of machine learning tools into its compositing software. One of them was CopyCat, a plug-in that lets artists train neural networks to create custom effects.

Want to know some of the best ways to use CopyCat for your custom effects? You’ve come to the right place with this overview of CopyCat use-cases, from correcting out-of-focus shots to beauty work. Plus, we take a look at the key differences between how machine learning (ML) is traditionally used in visual effects and how CopyCat approaches it.

How traditional ML in VFX works

Machine learning tools in VFX are generally pre-defined. As a developer, you come up with an idea, do a cost-benefit analysis, and present it to your client; after their feedback, you develop a tool that solves a particular problem.

For example, designing a neural network that detects humans in an image requires a dataset of images of humans and their corresponding manually created masks. Over hundreds of thousands of training steps, the network goes from outputting random garbage to, hopefully, reasonable-quality masks of humans.

This pre-trained network is then passed on to artists who run it on their footage. As the developer doesn’t know what footage the network will be used on, they make it as generic as possible.

Humans come in all shapes and sizes, with different clothes or none at all; they can fill a frame, or there can be hundreds of tiny figures in the distance, and the network needs to work in all of these cases. To do that, it has to be trained on a huge dataset (tens of thousands of images), because it needs a semantic level of knowledge, not just the ability to recognise colours and textures. This takes a lot of time, from days to weeks, and you might need to do it a few times.

Most of the time, you end up with an unconfigurable black box: sometimes it works, sometimes it doesn’t, and it’s hard to know why and even harder to explain to customers. The results might not be perfect, and in VFX they need to be perfect; good enough won’t cut it.

CopyCat in action.

CopyCat and ML

With the CopyCat Nuke plug-in, an artist trains a neural network specifically for the shot or set of shots they are working on. What it does is not pre-defined, not pre-trained, and not generic.

It doesn’t require a vast dataset. The artist gives CopyCat just a small number of before-and-after images for the effect they want to produce and hits the start training button, and CopyCat learns to replicate the transformation from one to the other. The trained model is exported as a .cat file, which can be loaded by another Nuke node, called Inference, to apply the effect to the rest of the sequence, or even to multiple similar sequences.
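The data side of that workflow can be sketched in plain Python. This is only an illustrative sketch: the directory layout, file-naming pattern and helper function are assumptions for illustration, not part of Nuke’s API; inside Nuke, the CopyCat node reads the before and after frames through its own inputs and writes the .cat file itself.

```python
# Illustrative sketch only: organising a small before/after training set
# for a CopyCat-style workflow. Paths and names below are assumptions.

def make_training_pairs(before_pattern, after_pattern, frames):
    """Pair each hand-finished 'after' frame with its original 'before' frame."""
    return [(before_pattern % f, after_pattern % f) for f in frames]

# A handful of frames is enough: CopyCat learns the before -> after transform.
training_frames = [1001, 1020, 1044, 1067, 1090]
pairs = make_training_pairs("plate/before.%04d.exr",
                            "plate/after.%04d.exr",
                            training_frames)

# The trained model is exported as a .cat file, which an Inference node
# then applies to the remaining frames of the sequence.
model_path = "models/custom_effect.cat"
```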

If the results are not quite right, the artist can rework or replace the initial frames and retrain the model. It’s controlled by the artist, and the results are only as good as the data they put in. This is because we don’t want to replace artists with ML; we want to assist and empower them to be creative.

CopyCat: how it can be used

We’re putting machine learning into artists’ hands and letting their imagination lead. Here are some examples of what artists can do with CopyCat.

Successfully tackling garbage matting for complex shots

CopyCat can be used to create a garbage matte in a complex shot such as this one: a person running against a moving background. The girl in this shot is also wearing a set of headphones, and it’s unlikely any pre-trained network would pick up on that, so with traditional ML models you’d still have to do a lot of manual rotoscoping. With CopyCat, the artist tells the network exactly what they want it to do.

Successfully tackling beauty work in CopyCat

Another area where CopyCat is very useful is specific beauty work. For example, in the shot below we have a complex elbow bruise, and the lighting changes from frame to frame. This example used only six training frames covering just the patch around the elbow, and the result was applied to a few hundred frames.

Or, check out the shot breakdown below, part of a longer video on ML in Nuke. At around 39 seconds in, it deals with cleaning up a bruise on the actor’s face: two frames painted out by hand are replicated throughout the whole sequence. Bruise removal is unlikely to be the sort of generic effect any developer would create; even though clean-up is pretty common, bruise removal is a very specific case of it.

What if you wanted to do beard removal? In this example, the beard was removed by hand in 11 frames, and the result was then applied to a shot hundreds of frames long. This effect would be pretty time-consuming to do entirely manually.

Successfully correcting out-of-focus shots

Out-of-focus shots and deblurring are another big challenge for VFX artists. With CopyCat, this was solved by finding similar frames that were in focus and training the model to replicate their sharpness on the out-of-focus frames. It was done by matching 11 sample crops from the out-of-focus frame with 11 corresponding crops from two in-focus frames.

Successfully removing markers with CopyCat

CopyCat can also help with prep work before the main effect is applied. Shots change all the time as the client changes their mind. Pre-trained models help, but you need specific training to solve a particular problem, and this is where CopyCat gets laser-focused on one particular shot and effect.

In the shot shown in the video below, there are a few layers of photos that need to be comped, and every scene has variations in lighting, shadows, and the perspective of the tracking markers. These markers need to be removed from all shots.

Traditionally, you would manually clean the tracking markers one by one, keeping the lighting consistent so the result stays realistic. This takes time; VFX artist Thiago Porto, who runs the demo in this video, estimates it at days of work. With CopyCat, the same task is done very fast: you choose five frames with as much variation as possible, clean them up, and apply the result to the rest of the scene without worrying about cropping, coding or adjusting the image.
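As a rough illustration of spreading a handful of training frames across a shot, here is a minimal Python sketch. The function name and frame numbers are hypothetical; in practice the artist picks frames by eye to capture the most variation in lighting and perspective.

```python
def pick_training_frames(first, last, count):
    """Pick `count` frames evenly spaced across the range [first, last]."""
    if count <= 1:
        return [first]
    step = (last - first) / (count - 1)
    return [round(first + i * step) for i in range(count)]

# e.g. five candidate frames across a 101-frame shot
frames = pick_training_frames(1001, 1101, 5)  # [1001, 1026, 1051, 1076, 1101]
```

Evenly spaced frames are only a starting point; if the lighting or perspective shifts suddenly mid-shot, swapping one of these picks for a frame near the change gives the network more of the variation it needs.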

For more on CopyCat in Nuke check out Foundry’s website.

Brought to you by Foundry:
This article is part of the befores & afters VFX Insight series. If you’d like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here.