The tech behind Baobab Studios’ real-time platform for making VR narratives

It was used for the studio’s latest interactive piece, Baba Yaga.

I’ve been lucky enough to see and cover just about every release from interactive animation outfit Baobab Studios over the studio’s short history. Each time, I’ve seen a leap ahead in immersive storytelling methods and in the technology used.

For their latest narrative, Baba Yaga, directed by Eric Darnell and co-directed by Mathias Chelebourg and now available on Oculus Quest, Baobab Studios continued to innovate. The studio took full advantage of its proprietary real-time Storyteller Platform to build the narrative, utilized new AI and machine learning techniques, and crafted the experience with new headsets and hand controllers in mind.

Here, Baobab Studios CTO/co-founder Larry Cutler, who is also one of the executive producers of Baba Yaga, answered befores & afters’ questions about the new release, going into detail on the new tech.

b&a: What has Baobab as a studio and as filmmakers learnt so far about how viewers ‘take in’ a VR project, and did that impact the ‘way’ you told this story or how you immersed viewers into it?

Larry Cutler: Baobab celebrated its five-year anniversary in 2020. During this journey, we have learned to take advantage of VR’s superpower, immersion, by placing you, the viewer, directly inside the story. With each project, we have experimented with giving you a role to play and empowering you to participate in the narrative. Because everything in VR is rendered in real time, the characters can acknowledge your presence and respond to you based on your actions. We can empower you to develop meaningful relationships with the characters where you actually matter in the story. Baba Yaga marks our studio’s sixth interactive narrative, and it represents the next step forward in terms of interactivity, VR storytelling, and making your role in the story consequential.

Baobab Studios notes, in relation to the Magda character: “With our Real-Time Storyteller Platform we created our first human character, with the challenge of needing her to feel present at all times, from her hair and skin to her clothing and facial expressions. That presence is important in virtual reality because of how close you are to Magda, but it is difficult to achieve in real time on a mobile headset rendering at 72 frames per second in each eye. This scene was one of the most crucial to get right, as it is the first time the viewer interacts with their sister.”
“Texture and surfacing of hair, skin, and clothing are important details to consider in VR, since the viewer can be standing close to the characters, and can take time to note how hair falls, or how fabric looks and moves. We chose to hand-animate Magda’s clothing to give it a nice tactile feel and help build the bond with your sister. Magda is a complex character who, while brave, is also a young girl who must overcome her fears to face Baba Yaga.”
“The final render of Magda handing you, the viewer, the lantern, thereby asking you to accompany her. With our Storyteller Platform we set the tone through Magda’s positioning, theatrical lighting, and her complex facial animation, creating an emotional connection between you and your sister. The lantern helps establish that bond and makes Magda’s journey your journey.”

b&a: Baobab crafted human characters here – what extra artistic and technical challenges did ‘humans’ bring?

Larry Cutler: Baba Yaga is the first VR project where we have tackled real-time human characters. Through our past lives creating CG films at places like Pixar and DreamWorks, we know that audiences are quite adept at detecting when human CG or AI characters don’t look right, since we carry with us a lifetime of experience interacting with humans. This often leads to the uncanny valley problem, which is only exacerbated in VR. Bringing Baba Yaga’s characters to life posed numerous challenges for our engineering and animation teams, who needed to capture incredible nuance and emotional detail while rendering in real time at 72 fps on a mobile VR headset like the Oculus Quest.

To create the human characters, our engineering team pushed our novel character technology framework as part of our real-time Storyteller platform. This enables our artists to build feature film quality human character rigs and performances that are nuanced and convincing but that also render in real time at 72 fps in a game engine. With Storyteller, our human character performances can be both highly expressive and highly interactive based on the action you take. This was a critical innovation needed for our artistic team.

But most importantly, our human characters like Magda were brought to life by great creative artistry, animation, and storytelling. Magda is a three-dimensional little person who epitomizes bravery. Eric Darnell (writer/director) kept reminding our animation team that “Bravery is not the lack of fear. Bravery is being afraid and doing it anyway.” Each animator kept that in mind while animating Magda.

We were incredibly lucky to have our characters brought to life by such a powerful all-female voice cast for Baba Yaga. Kate Winslet plays the title character, Daisy Ridley plays your sister Magda, Jennifer Hudson portrays the voices of the Forest, and Glenn Close is your mother, The Chief of the village. Jennifer Hudson is also an executive producer on the project. Each of these amazing actresses added incredible depth and gravitas in bringing the characters and the story to life.

b&a: Can you talk about the work behind your in-house system called Storyteller? What is that exactly, how was it built and what does it enable you to do?

Larry Cutler: Our proprietary Real-time Storyteller Platform is a comprehensive and pioneering toolset for creating real-time animation at film quality, both to author interactive VR narratives where you, the audience, matter to the story, and to re-invent traditional animated film production. The key motivation and innovation of the system is its ability to deliver feature-film fidelity results that are rendered in real time.

The Storyteller platform empowers the viewer to interact with characters in interactive VR stories. Not only can these characters capture the illusion of life, but they can respond to user movement and action in real time. Storyteller ultimately makes the viewer matter by allowing characters to acknowledge that the viewer exists in their world and by giving the viewer the opportunity to build deep, meaningful connections with them.

We initially began development on Storyteller five years ago for our first VR project, Invasion!, creating one of the first, if not the first, empathetic real-time VR characters. Invasion! showcased the promise of VR to the industry, becoming the top-downloaded VR experience across all platforms and winning the Emmy Award for Interactive Media in 2017. The Storyteller platform has become the foundation for each of Baobab’s six real-time animation projects, culminating in Baba Yaga. At its core, Storyteller enables artists, animators, and filmmakers to bring endearing stories to life in real time, without having to sacrifice our high standards for animation.

Magda early concepts.

b&a: Is AI and machine learning entering the process at Baobab at all – can you talk about how it does right now and what you might be looking to do with this in the future?

Larry Cutler: In Baba Yaga, AI and interactivity play a critical role in making you matter. Baba Yaga and our more recent interactive projects have a non-linear story structure that can vary widely depending upon the actions viewers take. This approach allows for flexible, rich interactions, where the characters and the environmental elements can always be “alive” and reacting to your decisions and participation.

Our Storyteller platform enables our team to author non-linear VR animated content, so that artists and engineers can easily encode interactive designs in the game engine. Storyteller allows artists to encode the story structure into hierarchical behavior trees, which act as the brain at the top level of the experience: they monitor game state, evaluate trigger conditions, and orchestrate story sections and individual moments so that they happen at exactly the right time. Behavior trees have been used extensively in robotics and game development, and are well suited to this problem space.
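Behavior trees of this kind are easy to sketch in a few lines. Below is a minimal, illustrative example — not Baobab’s actual implementation — of how a story beat can be gated on game state; the `lantern_taken` flag and the beat name are hypothetical, borrowed from the lantern hand-off described in the captions above.

```python
# Minimal behavior-tree sketch: a sequence node runs its children in
# order and stops at the first non-success; a condition node checks the
# game state; an action node fires a story beat once.
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2

class Sequence:
    """Runs children in order; fails on the first child that fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) != Status.SUCCESS:
                return Status.FAILURE
        return Status.SUCCESS

class Condition:
    """Succeeds when a predicate over the game state holds."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, state):
        return Status.SUCCESS if self.predicate(state) else Status.FAILURE

class Action:
    """Fires a story beat exactly once, then keeps reporting success."""
    def __init__(self, name):
        self.name, self.done = name, False
    def tick(self, state):
        if not self.done:
            print(f"play story moment: {self.name}")
            self.done = True
        return Status.SUCCESS

# Story section: once the viewer has taken the lantern, Magda leads on.
story = Sequence(
    Condition(lambda s: s.get("lantern_taken", False)),
    Action("magda_leads_into_forest"),
)

state = {"lantern_taken": False}
story.tick(state)             # FAILURE: still waiting on the viewer
state["lantern_taken"] = True
story.tick(state)             # SUCCESS: the beat triggers
```

Ticked once per frame, a tree like this is what lets a moment wait indefinitely on the viewer and then fire the instant the triggering condition becomes true.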

In Baba Yaga and our previous projects Bonfire and Asteroids!, the characters have AI brains that generate autonomous behavior based on your actions. A key innovation in our character AI system is that it generates procedural motion that blends seamlessly with our hand-crafted, straight-ahead animation, and is hopefully indistinguishable from it. Storyteller and our character AI allow you to deepen your relationship with Magda over the course of the experience, as well as with other creatures (e.g. little baby chompy plants) and the rainforest itself.
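Here is one common way that kind of blend is done in game engines — a hedged sketch, not Baobab’s code: a procedural “look at the viewer” target is mixed with the authored pose by a weight that ramps over several frames, so the procedural motion eases in instead of popping. All names and values are invented for illustration; a production rig would blend per-joint quaternions rather than two angles.

```python
# Blend a procedural look-at head pose with an authored keyframe pose.
import math

def look_at_angles(head_pos, viewer_pos):
    """Procedural target: yaw/pitch (radians) to face the viewer."""
    dx = viewer_pos[0] - head_pos[0]
    dy = viewer_pos[1] - head_pos[1]
    dz = viewer_pos[2] - head_pos[2]
    yaw = math.atan2(dx, dz)
    pitch = math.atan2(dy, math.hypot(dx, dz))
    return yaw, pitch

def blend(authored, procedural, weight):
    """Linear blend; the weight ramps up over a few frames so the
    procedural motion eases in seamlessly rather than snapping."""
    return tuple(a * (1.0 - weight) + p * weight
                 for a, p in zip(authored, procedural))

authored_pose = (0.10, -0.05)                       # from keyframed animation
procedural_pose = look_at_angles((0, 1.5, 0), (0.5, 1.6, 2.0))
for frame in range(5):
    w = min(1.0, frame / 4)                         # ease in over 5 frames
    print(blend(authored_pose, procedural_pose, w))
```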

Machine Learning is already transforming various parts of the CG animation and rendering pipeline. We see an incredibly bright future where ML will be intertwined in creating real-time AI characters and generating procedural animated performances that capture the same fidelity of emotion as in our hand-animated characters.

b&a: Tell me about how any advancements in the headsets and hand controllers (tracking) were utilized for this project?

Larry Cutler: Our early projects were designed to run on headsets such as the Oculus Rift and PSVR, which are powered by high-end gaming PCs. The Oculus Quest, released in 2019, was an inflection point for the VR industry because it is the first consumer-focused, stand-alone VR headset, where users are completely mobile and untethered from a computer. Our last project, Bonfire, was a launch title for the Oculus Quest.

Rendering a VR experience in real time on the Quest’s mobile chipset means that we have, approximately, an order of magnitude less compute power than on the high-end headsets. Our production designers developed a hand-crafted stagecraft design for Baba Yaga that embraces the render constraints inherent in mobile VR. Over the course of production we tackled numerous optimization challenges, such as rendering complex human characters with hair and cloth in full fidelity, and creating a stylized theatrical lighting aesthetic that required a variety of dynamic lighting setups.
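To put that constraint in numbers, a quick back-of-the-envelope calculation (assuming the 72 fps target cited earlier in this interview):

```python
# Per-frame time budgets: the 72 fps VR target versus film's 24 fps,
# where a single frame can take hours to render offline. Everything --
# animation, character AI, and both eye views -- must fit the VR budget.
for label, fps in [("Quest VR target", 72), ("film playback", 24)]:
    print(f"{label}: {1000.0 / fps:.1f} ms per frame")
# Quest VR target: 13.9 ms per frame
# film playback: 41.7 ms per frame
```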

Hand controllers have been the primary input device in VR for the past four years. In 2020, the Oculus Quest introduced hand tracking, where the headset tracks your hand and finger positions without the use of any controller hardware. Hand tracking gives us the opportunity to make VR experiences even more natural and immersive: users don’t have to hold controllers, and they can see their individual fingers moving rather than only overall hand motion.
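As an illustration of what finger-level input makes possible, here is a hypothetical pinch-to-grab check — not the Oculus API or Baobab’s code — that treats a grab as the thumb tip and index tip coming within a small threshold. The joint positions and the threshold are invented stand-ins for data a hand-tracking runtime would supply each frame.

```python
# Detect a pinch gesture from tracked fingertip positions (meters).
import math

PINCH_THRESHOLD_M = 0.02   # 2 cm, an assumed value

def is_pinching(thumb_tip, index_tip):
    """True when the fingertips are close enough to count as a grab
    (e.g. taking the lantern from Magda)."""
    return math.dist(thumb_tip, index_tip) < PINCH_THRESHOLD_M

print(is_pinching((0.00, 1.20, 0.30), (0.01, 1.21, 0.30)))   # True
print(is_pinching((0.00, 1.20, 0.30), (0.08, 1.25, 0.35)))   # False
```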

We dove full steam into creating Baba Yaga so that the interactivity works seamlessly with hand tracking throughout. We first prototyped a core mechanic that worked intuitively with hand tracking. Once we proved out that hand-tracked core mechanic, we began applying it to our other interactions. Baba Yaga works with both controllers and hand tracking, but early feedback from the Venice Film Festival and from viewers is that people actually prefer the experience with hand tracking. We are very excited about the potential for hand tracking to push the level of immersion, especially as the technology becomes more mainstream.

A screenshot from the narrative.

b&a: Anything else from a technical point of view that Baobab was doing differently this time around?

Larry Cutler: To embrace the stagecraft design, we both begin and end Baba Yaga with a 3D pop-up book. VR gave us the opportunity to turn a traditional pop-up into an immersive storybook where the pop-up theatre takes place all around you. We did an early prototype that looked great, and we figured this would be one of the most technically simple scenes in the production, since rendering 2D sprites is highly optimized. Boy were we wrong! Getting the artistry, animation, and depth cues just right was quite challenging. But the end result captures something we have not seen before in VR.
