A journey in machine learning, with Foundry’s Johanna Barbier

Barbier is part of Foundry’s upcoming ‘Voices Behind the Tech’ series.

Johanna Barbier is a Senior Research Engineer, Machine Learning, at Foundry. She has worked on some of the newest tech in Foundry products like Nuke, including the ML Server and CopyCat.

This week, Foundry is launching a series of interviews called Voices Behind the Tech featuring women who work across different departments at the company, including in R&D, product design, product management and more.

Barbier is featured in the series, and right here, befores & afters got the chance to talk to her specifically about her own journey from study to landing a job at Foundry in machine learning. She also gives some great insights into the importance of mentorship along the way.

Johanna Barbier.

b&a: Looking at what you’ve done so far in your career, it feels like it’s such a nice intersection of art and technology. How important was that to you as you worked out what you wanted to do?

Johanna Barbier: That’s a very good point. That’s actually something that was very important for me, having a mix of art and science in my career. I am originally from France. I have a math and physics background and I gradually specialized my education towards computer graphics and machine learning. And the reason for that is that I was always very interested in art and science, in movies. I really wanted to find a place which would bridge that gap, that would bring art and science together. The visual effects industry was perfect for that.

Then, following that education, I had some great mentors. During my master’s, I aimed my projects towards visual effects. For instance, my master’s thesis was on automatic color segmentation for video. My professor Aljosa Smolic put me in contact with Dan Ring, who is Head of Research at Foundry. I was based in Dublin at that time so I was able to meet Dan, and we had a first conversation and it just went really well. So, that’s where it started.

b&a: Because you had a mathematics and physics background, had you done some machine learning in your studies as well?

Johanna Barbier: Definitely. Originally it was mainly math and physics and some computer science. Then I actively went towards computer science. So, I have one master’s in computer science from France and then I came to Dublin to do a master’s in computer graphics and machine learning.

b&a: Was machine learning the main thing you started looking into when you started working at Foundry?

Johanna Barbier: Yes, I actually joined as a research engineer in machine learning. I think I was very lucky because I started at Foundry while the efforts in machine learning were starting as well. I’ve been at Foundry for just over three years now, and since the start, I was able to participate in the evolution of our machine learning technologies. Early on, I worked on the ML Server, and then more recently on CopyCat and CatFileCreator. All the things I’ve been working with have been connected to machine learning.

CopyCat training on more than 4 channels for style transfer and mask segmentation. This is a new feature in Nuke 13.2.

b&a: Machine learning is something that keeps getting talked about in visual effects and filmmaking. But I’m curious, even when you first started doing it, did you find that visual effects people didn’t quite get how machine learning could be useful? I’m sure Foundry did, but I’m curious if you felt like the VFX industry was like, ‘How can this be used?’

Johanna Barbier: When I started at Foundry, there was already a lot of excitement around machine learning in research and academia, but also more and more in visual effects, too. To put this into some context, machine learning in research has been booming for the last 10 years, and when I started three years ago, there were already many academic papers with fantastic results.

Honestly, I think AI and machine learning are powerful and are here to stay. Because there was a lot of discussion around it, I found that most people in visual effects were already aware of the potential of machine learning but it’s true that there was some uncertainty as to exactly what form it would take.

b&a: I saw you talk at DigiPro a couple of years ago and I remember that at the SIGGRAPH conference around then that everyone was talking about artificial intelligence and machine learning. You were just new to Foundry, but what did you take away from that early conference experience?

Johanna Barbier: That was really an amazing experience for me. I was quite new to the industry when that happened. It was the moment where machine learning was booming, so it was a great opportunity to share that there was this new platform we had created to experiment with machine learning in post-production software. The paper we presented at DigiPro introduced Foundry’s open-source ML Server. At the time, there was no way to just play and experiment with machine learning in post-production software. The ML Server was created to solve that problem: it is an open-source client-server system which enables artists to add and train new models and then use them inside of Nuke.

The DigiPro conference was followed by the SIGGRAPH week, and during that time a lot of people from different companies came to me and were eager to talk about machine learning. Some were saying it was time for it to come to our industry. There was a lot of excitement about the technology.

Nuke Inference for makeup editing using a face parsing model created with the CatFileCreator node (face parsing model from here).

b&a: As someone in the research group at Foundry, what would you say to someone who is in visual effects, but maybe hasn’t actually got their head around all these machine learning aspects just yet?

Johanna Barbier: At Foundry, our vision is to put machine learning into the hands of artists. We do not want to be the gatekeepers of this technology but instead enable artists to have full technical and creative control over their shots. One important aspect of this is to make our AI tools simple and easy to use, even for someone with little machine learning experience.

One of the things which is sometimes problematic with machine learning is that you have a model, it takes an input and gives an output, but you don’t have much control over it. With tools such as the CopyCat node released last year in Nuke 13, we want to give more control to the artists so they can train and tweak a model using their own data, for their own custom effects. What CopyCat does is learn to replicate any image-to-image effect from a small number of frames and then apply it to the rest of the sequence.

For instance, you would apply your effect on, let’s say, four frames, and then you would feed these frames to the network, with and without the effect. You let it train for some time, and then you get a model that you can apply to the rest of the sequence. The idea here is to reduce the time spent on tedious and repetitive tasks, and leave more time for creativity.
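The idea of learning an image-to-image effect from a handful of example frames can be illustrated with a toy sketch. This is not Foundry’s implementation (CopyCat trains a neural network); here the “effect” is assumed to be a simple per-pixel grade, out = gain × in + lift, fitted by least squares from four example frames and then applied to the rest of the sequence:

```python
# Toy illustration of the CopyCat workflow: learn an effect from a few
# paired (before, after) frames, then apply it to unseen frames.
# Frames are flat lists of grayscale pixel values in [0, 1].

def fit_grade(inputs, outputs):
    """Least-squares fit of gain and lift from paired frames."""
    xs = [p for frame in inputs for p in frame]
    ys = [p for frame in outputs for p in frame]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    gain = cov / var
    lift = mean_y - gain * mean_x
    return gain, lift

def apply_grade(frame, gain, lift):
    return [gain * p + lift for p in frame]

# Four "training" frames, graded with a known effect (gain=1.2, lift=0.05).
before = [[0.1, 0.4, 0.8], [0.2, 0.5, 0.9],
          [0.0, 0.3, 0.6], [0.25, 0.55, 0.75]]
after = [[1.2 * p + 0.05 for p in f] for f in before]

# "Train" on the four pairs, then apply to the rest of the sequence.
gain, lift = fit_grade(before, after)
rest_of_sequence = [[0.15, 0.45, 0.85]]
graded = [apply_grade(f, gain, lift) for f in rest_of_sequence]
```

The structure mirrors the workflow described above: a few frames with and without the effect go in, a fitted model comes out, and that model is then applied to every remaining frame.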

b&a: Actually, I wanted to ask you about something to do with CopyCat in particular. I recently did a story on Matrix Resurrections, and Framestore had done some cool CopyCat stuff in terms of compositing and color correction. As a researcher who’s been part of it, to then see it used in the wild, what’s that feeling like?

Johanna Barbier: It’s really exciting and inspiring. When we created CopyCat, we really wanted to leave the creativity to the artist, so it is really fantastic to see it being used in the wild in new and innovative ways. We’ve seen applications where artists have mapped the 3D mesh of a face to a rendered face, and then trained a model to learn the transformation from the 3D face animation to the rendered face. That’s a really smart way of using this.

I heard about CopyCat being used in Matrix Resurrections and other movies. When you hear about ways it has been used that we didn’t expect, it’s really motivating. It really means that we managed to reach people and that it’s used to create shots that probably weren’t possible before.

How Framestore used machine learning techniques in their comps on ‘The Matrix Resurrections’

b&a: Now that you’ve worked in that field for a little while and you’ve seen people using CopyCat in different ways, do you have a view right now about how AI and machine learning might continue to be used in new ways and different ways in visual effects?

Johanna Barbier: That’s a great question. First, I believe machine learning will keep on being used in visual effects. It’s really here to stay. I think it will continue to be used to accelerate and help artists solve repetitive or tedious tasks, such as matting, keying or rotoscoping, and could also offer defocus, denoise or deblur solutions that weren’t possible before. Then, an important next step in machine learning would be towards faster, and eventually real-time, model training and inference. Increasing the training speed is one of the things we are working on at the moment to improve CopyCat, by allowing multi-GPU training and having a better optimized training pipeline.

There is another tool in Nuke that I haven’t talked about yet, and I believe it will allow artists to use AI in new and interesting ways. It’s called the CatFileCreator, and it’s been available since Nuke 13.1. It offers a way for people to add their own models inside of Nuke. Imagine you find a great deblur model online or you have trained your custom model to do human segmentation. This node gives a framework to go from this model, to a cat file you can use directly in Nuke through the Inference node.
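CatFileCreator works from serialized TorchScript models. As a rough sketch of the first step of that pipeline, exporting a PyTorch model to the .pt file the node ingests might look like the following (the `Deblur` module here is a hypothetical stand-in, not a real deblur network, and the exact input/output conventions a model must follow are documented by Foundry):

```python
import torch
import torch.nn as nn

class Deblur(nn.Module):
    """Hypothetical stand-in model; a real deblur network would go here."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        # Residual correction, clamped to a valid image range.
        return torch.clamp(x + self.conv(x), 0.0, 1.0)

model = Deblur().eval()
scripted = torch.jit.script(model)  # serialize to TorchScript
scripted.save("deblur.pt")          # CatFileCreator wraps a file like this into a .cat
```

From there, the .cat file produced by CatFileCreator is what the Inference node loads to run the model on images inside Nuke.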

Right now, CopyCat offers some specific image-to-image models. But CatFileCreator allows you to use different types of models, such as generative models (GANs) that could be used for inpainting or upscaling. There are a lot of different possibilities there.

b&a: I feel like you’ve managed to really combine your interests so well in your job. How have you continued to keep that relevant as an engineer and working in Foundry and doing the machine learning side of things?

Johanna Barbier: It is really important for me to see that our research has a real impact, for instance by seeing how our tools are used by artists. I really enjoy meetings with customers, artists and VFX studios, as it keeps the work real and close to the artists.

I also think it’s essential to keep learning. That’s true on the scientific side, where it’s very important to keep up with the state of the art and be aware of the latest technologies, the latest machine learning papers. But it’s also true on the artistic side. For instance, we have a mentorship programme at Foundry, where people with special skills within the company mentor other people. I did a 6-month compositing mentorship with my colleague Luca Prestini who is a Machine Learning Data Specialist in our AI research team.

It was a great experience for me where I learned a lot about the techniques and challenges of compositing. There is nothing better than doing a comp or rotoscoping to understand the challenges and the time that it takes. Luca was really great at sharing his knowledge throughout the mentoring.

b&a: Finally, speaking of mentors, you mentioned how Dan Ring was an early mentor for you. Who did you personally have as mentors, say, at university and then at Foundry, and what kind of advice did they tend to give you as you started, especially in working out where to go and what to do?

Johanna Barbier: That’s a great question. It’s true that when you are still in university, it can be difficult to know where to go or what to do next. In my case, I really wanted to find a place between art and science, but that still left a broad range of jobs and industries. That could mean animation studios, visual effects studios or streaming platforms among others. So, what was really key for me in university were my professors and supervisors. My master’s supervisor at Trinity College in Dublin, Aljosa Smolic, was the one who introduced me to Dan Ring, the Head of Research at Foundry.

I had actually met Dan Ring beforehand, while working on a virtual reality project during an internship in Dublin. I was working on a computer graphics and virtual reality application for archaeology. We met at the 3Dcamp Dublin & Irish VR meetup, where Dan was doing a presentation about Foundry and I was presenting MAAP Annotate, a VR application for annotating megalithic art for science and archaeology.

When you’re at university, I think it’s really important to take all the opportunities to go to meetup places, and to present your work. It will give you visibility and it will be a great way to meet the people who are interested in the same things you are.

To see Johanna’s interview visit Foundry’s website and keep an eye on social media for the #VoicesBehindTheTech tag.

Brought to you by Foundry:
This article is part of the befores & afters VFX Insight series. If you’d like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here.
