Highlights of some of the major VFX and animation sessions
For the first time, I’ll be heading to NVIDIA’s GPU Technology Conference (GTC). It takes place in San Jose from March 22 to 26.
If you’re already familiar with NVIDIA’s GPU offerings, then it will come as no surprise that GTC is heavy on real-time, AI, and deep learning.
There’s a whole lot on offer in many other fields too, but I wanted to get a preview of the event for those in the VFX and animation space. So I asked NVIDIA GM of Media & Entertainment Richard Kerris to run down what he thought some of the GTC program must-sees would be. Here are his three highlights.
From StoryTELLING to StoryLIVING: My Journey to a Galaxy Far, Far Away
– Vicki Dobbs Beck, Executive in Charge, ILMxLAB
Vicki and I worked together for years when I was at Lucasfilm, so I’m a bit partial, but what they’re doing in immersive storytelling is pretty amazing. The idea of VR and AR emerging as a viable medium for entertainment – we’re there, and we’re seeing a lot happen on Oculus and other headsets. I think they understand the medium better than most, so they play to it and create for it, meaning they don’t try to make everything photoreal; instead they place you in an environment with familiar characters.
The Future of GPU Rendering: Real-Time Raytracing, Holographic Displays, and Light Field Media
– Jules Urbach, CEO, OTOY Inc
Jules Urbach is going to be talking about GPU rendering and real-time ray tracing, holographic displays, and what’s happening with the idea of creating a holodeck. That’s been an ongoing project at OTOY, including work with Light Field Lab, which has a working prototype of a true holographic display. You really should see it – it’s an incredible thing. They’ll show you things you’d swear are right there; you’ll reach out to grab them and they’re not.
Photorealistic, Real-Time, Digital Humans: From Our TED Talk to Now
– Doug Roble, Senior Director of Software R&D, Digital Domain
Doug Roble will be giving an update on DigiDoug [which he originally presented in a TED Talk, performing in an Xsens suit while a real-time digital avatar of himself was projected on stage]. It’s always fun to see. I keep wondering when we’re going to get there and everybody’s just going to be presented digitally!
Those are just some of the highlights Kerris specifically mentioned, and there are more sessions still to be announced. Other talks that look like do-not-miss sessions include:
Creating In-Camera VFX with Real-Time Workflows
– David Morin, Head of Los Angeles Lab, Epic Games
Facial Scanning for Virtual Avatars On the Fly
– Kalle Bladin, Research Programmer, USC Institute for Creative Technologies
– Yajie Zhao, Research Associate, USC Institute for Creative Technologies
Studio Workflows with Omniverse: From Virtual Production to Shipping Titles
– Kevin Margo, Creative Director, NVIDIA
– Omer Shapira, Engineer, Artist, NVIDIA
– Damien Fagnou, Senior Director, Software, NVIDIA
Kerris also hinted to me that ILM visual effects supervisor Pablo Helman will delve into the VFX of The Irishman, for which a new camera rig and software were developed to make the characters look younger without facial tracking markers. Here, Quadro RTX cards were used for real-time manipulation of the digital puppetry ILM created, and GPU rendering was relied upon for preview scenes. Plus, a machine learning approach – called Face Finder – was employed to build a library of target images of the actors.
I’m looking forward to heading over for GTC, and I hope I can say hi to any readers who are also planning to attend. Check out the full conference website at https://www.nvidia.com/en-us/gtc/ for more details.