A motion captured dancer and Cinema 4D artistry made these projections possible.
If you’re lucky enough to have been in Sydney right around now, then you’ll probably have encountered Vivid, an event that incorporates lights, music and other festivities around the city. One of the drawcard attractions each year is a projection of animated content onto the Sydney Opera House sails.
This year those ‘Lighting of the Sails’ projections were realized as Austral Flora Ballet by Andrew Thomas Huang, with the aid of a motion-captured dancer. The filmmaker worked with creative studio BEMO, which animated a selection of native Australian flora ‘characters’ using Cinema 4D.
befores & afters chatted to BEMO’s executive producer Brandon Hirzel, and to motion designer and director Brandon Parvini, about creating the 15 minutes of animated content for projection mapping onto the Opera House’s iconic sails.
b&a: Can you talk a little about how you collaborated with Andrew Thomas Huang? What was his brief to you and how did you refine designs for the project?
Brandon Hirzel (executive producer, BEMO): Andrew approached us with this project and we were honored to receive this task. He described his vision of using 5 native Australasian flowers as the core elements of design. He went on to describe collaborating with dancer Genna Moroni and choreographer Toogie Barcelo and capturing their movements on a mocap stage.
From there he asked us to develop characters based on the core flowers he landed on and connect them to the dancers’ movement from the motion capture session. The overall structure was there – now it was on us to get into the details of the look and feel of the piece, as well as the rise and fall of a 12-minute animated ballet of generated abstract floral dancers.
b&a: What are some of the first logistical issues that have to be solved when working on motion graphics for a building projection, such as designing for the right space, and what extra ones were present to deal with the particular shapes of the Opera House sails?
Brandon Hirzel: When we first started to plan out the production of this we had to be very smart about how we built our team. We have had experience with projection mapping and building live immersive content in the past. We are also experienced with complex 3D workflows and motion capture rigs, but marrying the two together at this scale was new territory for us.
It was clear from the start that Brandon Parvini was going to take the lead on the project as his skillset was most advanced in both areas and he loves a good challenge. Once we started to look at the shape of the building as our canvas it became clear that we had a bit of a puzzle to put together. Our environments were built to loosely mimic the shape of the architecture, but that was not enough to sell the piece as a whole; we needed our characters to be involved in the same way.
As a foundation, we collaborated with Rouge mocap studio to come up with a system to have a live avatar composited over the shape of the Opera House in realtime as Genna went through her movements. This previs helped to ensure that the tone was right in terms of the core ingredients we would be building from. From there we sculpted the character design to be conducive to our oddly shaped canvas.
[perfectpullquote align=”right” bordertop=”false” cite=”” link=”” color=”” class=”” size=””]’We had to put out nearly 30,000 frames of 4k content.'[/perfectpullquote]
Even with the intentional elements from our mocap session, though, it became quickly evident that we would need to do more to activate this unique canvas than mirror its form through a single take of mocap. This gave rise to a pivot in the project where we leaned into a more detailed edit for each dance built from the core dance sequence we had created from the capture session – cuing specific moments and arrangements to line up on each ‘Sail’ of the Opera House. While it did create a new scope of work, the results were immediately telling as the piece now had a new life and flow to it.
All of this relied on content, though. We had to put out nearly 30,000 frames of 4k content in order to have enough material to work with in this new edited approach. That of course meant we needed to optimize our builds heavily. Each character was essentially a moving simulation, so making the rigs durable enough to withstand that amount of content being passed through them was a bit of an undertaking on its own. Then we would need to lock down those sims to keep renders reliable, as we were leaning on the Redshift and Octane GPU render engines to keep the time per frame down.
The point here is that if you have a live sim running during a render and something fails, you need to be able to pick back up where you left off rather than start again. That’s not usually an issue for standard character workflows – but in this case we had dynamic simulations whose results, in turn, drove other elements of the character, which were of course dynamic simulations as well.
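The resume problem described here can be sketched as a frame-level disk cache: each stage’s sim state is baked to disk as it is computed, so a crashed render picks up from the last baked frame instead of resimulating from scratch. A minimal, hypothetical sketch – the function names and the toy `sim` callback are illustrative, not BEMO’s actual pipeline:

```python
import os
import pickle
import tempfile

def bake_stage(stage_name, frames, simulate, cache_dir):
    """Bake one simulation stage frame by frame, caching each frame's state
    to disk so a failed render can resume from the last completed frame."""
    os.makedirs(cache_dir, exist_ok=True)
    state = None
    results = []
    for frame in range(frames):
        path = os.path.join(cache_dir, f"{stage_name}_{frame:05d}.pkl")
        if os.path.exists(path):
            # Frame already baked on a previous run: reuse it.
            with open(path, "rb") as f:
                state = pickle.load(f)
        else:
            # Bake this frame (sims depend on the previous frame's state)
            # and cache the result immediately.
            state = simulate(frame, state)
            with open(path, "wb") as f:
                pickle.dump(state, f)
        results.append(state)
    return results

# Toy stand-in for a dynamic sim: each frame builds on the last.
def sim(frame, state):
    return (0 if state is None else state) + frame

cache = tempfile.mkdtemp()
first = bake_stage("hair", 10, sim, cache)   # bakes all 10 frames to disk

def never_called(frame, state):
    raise RuntimeError("a resumed bake should read the cache, not resimulate")

second = bake_stage("hair", 10, never_called, cache)  # resumes purely from cache
```

Because every frame is cached as it is produced, the second call never touches the simulation callback at all, which is exactly the property you want when a GPU render dies overnight.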
Design-wise, we knew from the start we would need bold colors and both high- and low-frequency shapes and movement styles, in order to keep the final look from being too noisy and to keep it clearly legible on the Opera House. This presented a mix of guidelines and restrictions we tried to operate under during the design and refinement process of the characters themselves.
b&a: How was the flower animation driven by motion capture? Can you talk about the mocap shoot? What data was obtained and how that was used in the actual motion?
Brandon Parvini (design and technical director): Going into the shoot we tried to set the table with as many production-ready elements as we could, giving ourselves a toolkit of information to build our plan of attack from. Due to the scheduling of the project, the mocap shoot needed to be one of the first things that happened, which was great as it gave us a lot of time to work with the actual animation, but at the same time left us having to think on our feet when we arrived on set.
Because we were so early in the conversation on so many fronts when the shoot took place, we needed to approach the capture process as having our choreographer and dancer help create a motion toolkit for each of the flowers — different motion styles, different tones in movement, so it didn’t feel monotone when it came to each character’s movement style. We had done a couple of motion tests off mocap we had on hand at the studio to start getting our heads going on what we thought would work and what we thought we might need.
[perfectpullquote align=”right” bordertop=”false” cite=”” link=”” color=”” class=”” size=””]’We used a mix of influencers from the original base mesh of the female form.'[/perfectpullquote]
Once we finished the shoot session we made our selects, and while we waited for the mocap to be fully cleaned and prepped, we set about developing each of the characters. For the characters’ actions, we used a mix of influencers from the original base mesh of the female form (skinned geometry) we had sculpted for the mocap previs, and generative builds off the skeleton system itself. This allowed us to have elements that at times reflected the human form and dancer, and at other times more abstract elements that simply reflected the movement style.
We worked from a set of delivered FBX files for each of the selects. Once we had those in hand, and our total running time was becoming clearer, we went about creating a single motion clip for each of the floral dancers. Ranging from 2 min to 5 min in length for each character, we blended and edited together the motions of all the various FBX deliverables to create a single piece of mocap that would serve as the core dance sequence for each flower.
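Blending many selects into one long clip comes down to crossfading overlapping frame ranges. A toy sketch of that idea on a single animation channel – real FBX blending happens per joint, with proper rotation interpolation; the helper below is purely illustrative:

```python
def crossfade_clips(clip_a, clip_b, overlap):
    """Join two mocap clips (lists of per-frame float values, e.g. one joint
    channel) into one clip, linearly crossfading the last `overlap` frames
    of clip_a with the first `overlap` frames of clip_b."""
    assert 0 < overlap <= len(clip_a) and overlap <= len(clip_b)
    head = clip_a[:len(clip_a) - overlap]
    blend = []
    for i in range(overlap):
        t = (i + 1) / (overlap + 1)            # ramps from 0 toward 1 across the overlap
        a = clip_a[len(clip_a) - overlap + i]  # outgoing clip's frame
        b = clip_b[i]                          # incoming clip's frame
        blend.append((1 - t) * a + t * b)
    tail = clip_b[overlap:]
    return head + blend + tail

# Two flat "poses" joined with a 2-frame crossfade.
blended = crossfade_clips([0.0] * 4, [1.0] * 4, overlap=2)
```

Chaining this helper across all the selects in order would yield the kind of single continuous core dance sequence described above, at the cost of `overlap` frames per join.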
We knew early on that simply having a rigid character purely reflect what the dancer had done on stage would be leaving opportunity at the door, so to speak. So much of our process was creating elements and character designs that would allow for a good amount of secondary and reactionary motion for the characters, as well as ways to reinterpret the mocap itself: ‘what if the legs bent the other way?’, ‘can we play with the proportions of the character?’
We came up with some wild results along the way, some horrifying, some beautiful. But there was a balance and reverence that we wanted to keep with the original dance itself; we didn’t want to bury it too deeply under our work. The initial poetic tone hit by Andy, Toogie, and Genna had to be kept intact. Much of this project was about balance.
b&a: Can you talk about your specific workflow in C4D for generating flower imagery and motion? The imagery feels so fluid and almost procedural – what tools and techniques allowed you to get that feeling?
Brandon Parvini: We thought we would be in multiple DCC packages when we started the project (ZBrush, Houdini, MotionBuilder, etc) but during our testing, we found that having to round-trip all of these elements back and forth was going to cause delay and lag in the creative process of refining the look of the characters. This wasn’t your usual character dev where you have a sculpt of a character, spin it around, make sure it looks engaging and move forward.
The characters were much more gestural in nature, in that much of their presence came from their motion itself. So it wasn’t so much about the character sculpt’s personality; it was about getting that personality from how it moved. We would be doing much of our work developing the characters on active mocap files in order to see how they felt. We knew that with our timeline and team size we didn’t have time for hand keyframing – we needed to develop elegant systems that would carry the load for these long sequences.
[perfectpullquote align=”right” bordertop=”false” cite=”” link=”” color=”” class=”” size=””]’It wasn’t so much about the character sculpt’s personality, it was about getting that personality from how it moved.'[/perfectpullquote]
This is where Cinema 4D’s robust toolset came into play so deeply for us. The rigs were very procedural in nature, but they were deep in regards to the steps of locking down what were essentially cascading simulations for each character. In one example, you would use C4D’s MoGraph tools to clone elements onto a volume of the female character we had; then, off that same character, we would attach a hair system to her skin to create petals or leaves; then off some of those elements we would attach secondary branches and elements, maybe even a tertiary set of elements off of those.
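That cascading structure – clones on a volume, hair petals on the skin, branches off the petals – is essentially recursive scattering: each generation of elements becomes the parent points for the next. A hypothetical stand-alone sketch of the cascade (plain Python, not C4D’s API; the counts and spreads are invented):

```python
import random

def grow(points, count_per_point, spread, depth):
    """Cascading generative build: scatter child elements around each parent
    point, then recurse so secondaries ride on primaries and tertiaries on
    secondaries -- mirroring clones -> hair petals -> branch chains off one rig."""
    if depth == 0:
        return []
    children = []
    for (x, y, z) in points:
        for _ in range(count_per_point):
            children.append((x + random.uniform(-spread, spread),
                             y + random.uniform(-spread, spread),
                             z + random.uniform(-spread, spread)))
    # Each generation spreads less than its parent, like finer branches.
    return children + grow(children, count_per_point, spread * 0.5, depth - 1)

random.seed(1)
joints = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # stand-in for mocap skeleton joints
elements = grow(joints, 3, 0.4, 3)            # 3 generations: 6 + 18 + 54 elements
```

The element count multiplies at every generation (2 joints become 78 elements after three levels here), which is exactly why round-tripping each level to another package would have been so cumbersome.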
You can see how this could get very cumbersome if we had to send to another package and keep round-tripping like that. Thankfully, C4D has implemented new Alembic workflows that became instant must-haves for us. When looking at optimizing the simulation and overall file overhead (a high-poly simulation for 3+ minutes of content would be a data bomb on any render), we found great wins in being able to, rather than cache full geometry for a character, ask Cinema to simply sim and bake out a set of splines that we would instance geometry onto at render time, in turn reducing our overall file size and data amount when going to render.
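The win from caching splines instead of full geometry is easy to put rough numbers on. A back-of-the-envelope sketch with hypothetical counts (4,500 frames is 3 minutes at 25 fps; the petal, vertex, and spline-point counts are invented for illustration only):

```python
def cache_size_bytes(frames, items, floats_per_item, bytes_per_float=4):
    """Rough per-sequence cache size: frames x items x floats per item."""
    return frames * items * floats_per_item * bytes_per_float

frames = 4500   # 3 minutes at 25 fps
petals = 2000   # hypothetical element count on one character

# Full geometry cache: every petal meshed at ~500 vertices, xyz per vertex.
full_geo = cache_size_bytes(frames, petals, 500 * 3)

# Spline cache: every petal reduced to a 10-point guide spline, xyz per
# point; the petal mesh is instanced onto the spline at render time.
splines = cache_size_bytes(frames, petals, 10 * 3)

ratio = full_geo / splines   # 500 verts vs 10 spline points -> 50x smaller
```

On these assumed numbers the spline cache is 50 times smaller (about 1 GB instead of 54 GB), which is the difference between a render farm choking on I/O and not noticing the cache at all.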
Cinema 4D gave us so many ways to think differently about how to reinterpret motion and rigging. It was both liberating and a little terrifying, as you realized you had moments where it felt like you could iterate forever and still find gold. But with C4D, the ability to quickly dev a half-baked notion midday and see within an hour or two if it was going to work was so critical for us. This wasn’t a large team by any means, so you had to own a lot in each build: modeling, optimizing, rigging, texturing and renderability would all have to be handled by a single person per character. So it had to be as easy as possible to do, and most other software wouldn’t really allow for this kind of approach.
[perfectpullquote align=”right” bordertop=”false” cite=”” link=”” color=”” class=”” size=””]’Cinema 4D gave us so many ways to think differently about how to reinterpret motion and rigging.'[/perfectpullquote]
We definitely pushed the software to breaking points at times, but it took quite a bit to do, and usually at those moments it would become clear to us that C4D had 5 other ways for us to do the same thing using faster methods. One thing that began to show up over the refinement process was that not everything needed to be fully simulated. You could have a core piece of info that was a simulation, but once you had that, you could lean on Cinema’s new Fields system and MoGraph systems to clone onto that data and essentially fake the remaining motion, and the eye wouldn’t really be able to tell the difference between the two.
For the Kingsmalli Eucalyptus character, for example, he’s draped in tendrils and ropes all over the place. It was a simulation nightmare, but with Cinema’s Fields system we were able, in his case, to droop the cables between two fully art-directable points and give them a touch of delay, and presto: no sim, fully art-directable interactions that we could now lean on the MoGraph systems to help drive the look and feel. This was a massive boon for us, as we had a moment where a single character’s simulations were becoming an overnight affair. With these kinds of ‘cheats’ we could usually get a full bake-down of a new mocap and rig setting done in about 10 minutes. We were in a constant state of looking at all of the inbuilt effector, deformer, and MoGraph systems and asking ourselves: how can we achieve a look we are having to sim with as little simulation as possible, or without simulation at all?
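The ‘droop plus delay’ cheat has two ingredients: a sag curve hung between two art-directed endpoints, and reading the endpoint animation a few frames late for follow-through. A minimal illustrative sketch of both ideas in 2D – not the actual Fields setup, and the parabolic sag is a stand-in for whatever falloff the artists dialed in:

```python
def drooped_cable(p0, p1, droop, samples=9):
    """Points along a cable hung between endpoints p0 and p1 ((x, y) pairs),
    faked with a parabolic sag instead of a rope simulation. `droop` is the
    extra drop at the midpoint."""
    pts = []
    for i in range(samples):
        t = i / (samples - 1)
        x = p0[0] + t * (p1[0] - p0[0])
        # 4t(1-t) peaks at 1 when t = 0.5, so max sag lands at the midpoint.
        y = p0[1] + t * (p1[1] - p0[1]) - droop * 4 * t * (1 - t)
        pts.append((x, y))
    return pts

def delayed(value_per_frame, delay_frames):
    """Cheap 'delay' effect: each frame reads the value from a few frames
    earlier, giving follow-through without any dynamics."""
    return [value_per_frame[max(0, f - delay_frames)]
            for f in range(len(value_per_frame))]

# A cable slung between two animated attachment points on the character.
cable = drooped_cable((0.0, 3.0), (8.0, 3.0), droop=1.5)
```

Because both functions are pure lookups on the endpoint animation, re-baking after a mocap or rig change is near-instant, which matches the roughly 10-minute turnaround described above versus an overnight simulation.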
b&a: How did you ‘test’ your work, such as on any kind of proxy of the sails, and what things did you perhaps have to adapt to and change to make the animation work?
Brandon Parvini: Once we started kicking out content, we were lucky enough to have a scale model of the Opera House furnished to us by the team at TDC in Sydney. After a few tweaks to the UVs for our purposes, we had a pretty clean round-trip system where we could kick out any content at a 4k x 2k (or 2:1) ratio and have it mapped nicely to the sails in 3D, so we could previs how it would look and feel on the sails.
Mind you, nothing is like being there, but at least it allowed us to look more clinically at the content for any gaffes or areas we would need to tweak before delivery. It also really helped everyone when we output on the sails themselves, as it took a lot of the mystery out of the previs. Everything became much more real when you weren’t looking at flat content with a mask of the sails simply on it.
[perfectpullquote align=”right” bordertop=”false” cite=”” link=”” color=”” class=”” size=””]’Seeing it on the Opera House sails really hit home that this isn’t simply a unique shaped canvas.'[/perfectpullquote]
As mentioned previously, we had a pretty key pivot in the project workflow when we realized there would need to be a more edit-based, per-sail approach to the piece. Seeing the previs was key to this. When you were just looking at a masked output, you innately ingested the material like you would any piece of 2:1 or 16:9 content, and were annoyed you had a mask on it.
Seeing it on the Opera House sails really hit home that this isn’t simply a unique shaped canvas; it’s 5 canvases that at times can be viewed as one, but always needed to be respected as 5 unique elements. So we set about creating an edit that would bounce between a singular and a split view of the canvas, allowing a sort of tension to be created when all five sails acted as one, and a lively nature when each sail could go off on its own.
In some ways, there was a fundamental musical element to how everything in the piece was designed. It wasn’t initially intentional, but it was fitting in some way. Every aspect of the piece came down to rhythms: from the characters to the edit to the colors, all of it had to have a flow.