So far in befores & afters’ Body of Work series, we’ve looked at creatures, robots, fluid sims and face replacements. Now we’re diving into crowds with Rodeo FX, and the particular challenges the studio faced on Now You See Me, Game of Thrones and Fantastic Beasts: The Crimes of Grindelwald.
Hear from key crew members at Rodeo FX as they explore how a mix of proprietary and off-the-shelf tools was used to achieve the complex crowd scenes on those shows, which range from stadium audiences to street scenes and vast armies.
Now You See Me
Ara Khanikian, visual effects supervisor, Rodeo FX: For Now You See Me, one of the many scenes we worked on was the MGM Grand scene, where the Four Horsemen do a magic show in a sold-out arena in Las Vegas and the grand finale sees millions of euro bills falling from the ceiling into the hands of the thousands of spectators. Our mandate for this scene was to create a photoreal crowd to fill the entire arena, add CG props and giant screens to the stage, and finally simulate millions of euro bills being ejected from the AC ducts in the rafters of the arena.
We were provided plates of the main protagonists on a custom-built stage in the arena. There were a handful of extras placed strategically close to the stage, and the seats were all empty. Depending on the framing of the shots, sometimes the extras close to the stage were actually enough, but most of the time they occupied a very small section of the crowd, which meant we needed to add a lot of CG people around the stage on top of filling all the empty seats.
Production provided us with a large number of crowd tiles seated in sections. These were heavily used for the simpler shots with minimal camera movement. The rest of the time, we relied on CG people and actual people shot against greenscreen to populate all these shots.
We knew that the lighting was going to be very dynamic in this sequence, with a lot of animated lights, follow spots, and moving cameras. Crowd tiles seated in sections were shot with multiple cameras to get the different angles that would be required, i.e. crowd tiles facing the camera, tiles at 45 degrees, the camera tilted up looking at the crowd, and so on.
We shot the crowd tiles with a flat lighting setup in the arena, and all the plates with dynamic, animated light rigs. The lights were illuminating the empty seats, but it was easy to ‘extract’ these lighting changes from the plates and apply them to the crowd tiles when they were used. This setup and approach worked very well for all the shots with a fairly static camera, but we all knew that CG and 2.5D approaches would be necessary for the more complex shots (which were most shots!).
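The interview doesn’t spell out how the lighting was extracted, but one common Nuke approach, sketched below with assumed node names, is to divide the dynamically lit empty-seat plate by a neutrally lit reference to isolate the lighting change, then multiply that ratio over the flat-lit tile:

```python
# Hedged sketch: transfer the plate's lighting changes onto a flat-lit tile.
# Node names are illustrative, not from the actual show setup.
import nuke

lit = nuke.toNode('plate_lit_seats')       # empty seats, animated light rig
neutral = nuke.toNode('plate_flat_seats')  # same seats, flat lighting
tile = nuke.toNode('crowd_tile')           # crowd tile shot flat-lit

# Lighting ratio = lit / neutral (A/B), then multiply it over the tile.
ratio = nuke.nodes.Merge2(inputs=[neutral, lit], operation='divide')
relit = nuke.nodes.Merge2(inputs=[tile, ratio], operation='multiply')
```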
To create the CG crowd, we thought about how many of them we would need, with how many variations, and more specifically what resolution or level of detail we would need them to have. Based on the shots, and the very dynamic lighting rigs, we decided that having them at a lower resolution would be more than sufficient. We explored different methodologies and ended up using a very cost-effective ‘off-the-shelf’ hardware and software system.
We bought a Kinect camera that we used for 3D scanning, as well as three PS3 cameras to do mocap. We set up a small section of our shooting stage to handle this. Basically, Rodeo FX employees would walk in, get 3D scanned, get photographed for textures, then walk a couple of feet and perform in the makeshift mocap area with the PS3 cameras. It would take under 30 minutes to capture everything we needed per ‘actor’. This data acquisition allowed us to easily create the CG crowd we needed for the shots.
We decided to use the CG crowd whenever we needed the crowd to be in a standing position, such as around the stage. The CG crowd was also used in sections of the stands where a great deal of perspective change due to camera movement was expected. Background crowds that had minimal perspective shifts were created with the crowd tile elements, and everything in between was created with individual sprite elements.
To create the individual sprite elements for the 2.5D approach, we used the same set of Rodeo ‘actors’, but this time we shot them with our RED Epic against greenscreen. We asked every person to perform a specific set of motions, each for a specific amount of time. So, for example, we would ask someone to sit idle for 15 seconds, then start looking interested for 15 seconds, then start smiling, pointing at the stage and pretending to look at people around them for 15 seconds, then clap, then stand up and cheer and clap, then look up, and finally pretend to grab money falling from the ceiling. Having everyone do the same actions, individually, at set time intervals allowed us to create a huge database that we could easily search ‘by action’.
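To make that idea concrete, here is a minimal sketch of how such a timed protocol maps an action name to a frame range in every clip. The 24 fps rate, the uniform 15-second blocks and the action names are illustrative assumptions, not Rodeo’s actual values:

```python
# Because every performer ran the same actions at set intervals, an action
# maps directly to a frame range that is identical in every sprite clip.
FPS = 24
BLOCK_SECONDS = 15
ACTIONS = ['sit_idle', 'look_interested', 'point_at_stage',
           'clap', 'stand_and_cheer', 'look_up', 'grab_money']

def action_frame_range(action):
    """Return the (first, last) frame of an action within any sprite clip."""
    start = ACTIONS.index(action) * BLOCK_SECONDS * FPS + 1
    return start, start + BLOCK_SECONDS * FPS - 1

print(action_frame_range('clap'))  # (1081, 1440)
```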
We pre-keyed each performance and ingested the clips into Shotgun. Meanwhile, we had the lidar of the MGM Grand cleaned up and simplified for use in Nuke. We identified each section of seating and proceeded to create a set of custom tools that would fetch the pre-keyed crowd sprites and populate a section of the stands.
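For illustration, a query against that kind of ingested database might look like the following, using the standard Shotgun (now ShotGrid/Flow Production Tracking) Python API. The site URL, credentials, entity type, tag name and fields are assumptions rather than Rodeo’s actual schema:

```python
# Hedged sketch: search the ingested sprite clips 'by action' via tag.
import shotgun_api3

sg = shotgun_api3.Shotgun('https://studio.shotgunstudio.com',
                          script_name='crowd_tools', api_key='...')

clap_sprites = sg.find('Version',
                       filters=[['tags', 'name_is', 'clap']],
                       fields=['code', 'sg_path_to_movie'])
```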
Patrick David (digital compositor): In terms of those custom tools, we already had that system of crowd sprites that were strategically shot and labelled by the camera angle they were shot from, so I took it upon myself to try a quick test using sprites on cards and show Ara, as I thought it might work really well for this sequence.
Through some simple Python scripting in Nuke, I was able to generate a single row of cards along the X axis and then replicate that row several times along the Z axis. Using the very helpful Nuke node called ‘CrossTalkGeo’, you can apply a positional lookup curve to a group of cards/geometry based on their position along each axis. So it was surprisingly easy to create different seating rakes and match them to the seating in the MGM Grand.
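A minimal sketch of that card-grid idea in Nuke Python might look like this; the spacings, file paths and pool of sprite variations are assumed values, and a real setup would also run the rows through CrossTalkGeo to match the seating rake:

```python
# Minimal sketch: build a grid of sprite cards in Nuke, one card per seat.
import nuke

ROWS, COLS = 10, 10
SEAT_SPACING = 0.6  # assumed distance between seats (scene units)
ROW_DEPTH = 0.9     # assumed distance between rows

for row in range(ROWS):
    for col in range(COLS):
        variation = (row * COLS + col) % 20  # cycle through 20 sprite clips
        read = nuke.nodes.Read(file='sprites/front/actor_%02d.####.exr' % variation)
        card = nuke.nodes.Card(inputs=[read])
        card['translate'].setValue([col * SEAT_SPACING, 0.0, -row * ROW_DEPTH])
```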
The first test was a simple section of 10 rows of 10 sprites shot from the front, with a simple camera orbiting around them. This first test allowed us to gauge how many degrees of rotation the sprites would hold up for. Luckily, the crowd stays quite dark in the sequence, so we could get away with quite a lot of off-axis movement. It worked surprisingly well, and this basic technique was developed into a master setup for the entire arena, all achieved in Nuke.
We were provided top-down architectural plans of the seating arrangement in the arena, and using some basic Python and CrossTalkGeo we were able to re-create each section so we had a sprite in every seat. We also had lidar that we lined up with this overhead plan so we could make sure every sprite was perfectly aligned with every seat. Due to memory constraints in Nuke (this was nearly 10 years ago, after all), the arena had to be split into 22 sections so that loading thousands of sprites at once wouldn’t crash the system. Once that worked, the compositor on each shot could easily load the sections they needed, render them out individually through the matchmove camera and merge them in the comp. It worked really well!
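A hedged sketch of that per-section workflow, assuming each section was published as a single Group node in its own .nk file; the node names, section numbers and frame range are illustrative:

```python
# Hedged sketch: render only the seating sections visible in a shot through
# the matchmove camera, one section at a time to keep memory in check.
import nuke

camera = nuke.toNode('matchmove_camera')
for i in (3, 4, 7):  # the sections needed for this shot
    section = nuke.nodePaste('arena_sections/section_%02d.nk' % i)
    render = nuke.nodes.ScanlineRender(inputs=[None, section, camera])
    write = nuke.nodes.Write(inputs=[render],
                             file='renders/section_%02d.####.exr' % i)
    nuke.execute(write, 1001, 1100)  # assumed frame range
```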
Among the other tools we wrote to work with the system was a randomization function, where the compositor could randomly re-assign new sprites to any selected system of cards. That meant that if the front angle didn’t work well for a section (if, for example, the camera was behind the crowd), you could swap that system to only use sprites shot from behind or from a 3/4 angle. I seem to remember we also had interesting filters, like ‘remove anyone with a hat’, all based on the naming tags of the sprite files.
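A sketch of what such a randomization tool might look like, with an illustrative folder layout and naming tags:

```python
# Hedged sketch: re-assign random sprites to the selected cards, filtered by
# shooting angle and by naming tags ('hat' here is illustrative).
import glob
import random
import nuke

def randomize_sprites(angle='back', exclude_tags=('hat',)):
    pool = [f for f in glob.glob('sprites/%s/*.mov' % angle)
            if not any(tag in f for tag in exclude_tags)]
    for card in nuke.selectedNodes():
        if not card.Class().startswith('Card'):
            continue
        read = card.input(0)
        if read is not None and read.Class() == 'Read':
            read['file'].setValue(random.choice(pool))
```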
Certain shots required specific actions, as the crowd was sometimes sitting quietly and other times standing up and applauding the magicians on stage, so we had those different actions as one long clip in each sprite element. The compositor could select any section and retime the sprite actions to choose whether the crowd was sitting quietly, clapping, or standing and cheering. Randomized time-offset functions were also available, letting the compositor offset the timing of the individual sprites within a section to break up any repetitive actions.
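A sketch of the time-offset idea, inserting a randomized TimeOffset between each sprite Read and its card; the plus or minus one-second range at 24 fps is an assumption:

```python
# Hedged sketch: break up lockstep motion by offsetting each sprite in time.
import random
import nuke

for card in nuke.selectedNodes():
    if not card.Class().startswith('Card'):
        continue
    read = card.input(0)
    offset = nuke.nodes.TimeOffset(inputs=[read])
    offset['time_offset'].setValue(random.randint(-24, 24))  # +/- 1s at 24 fps
    card.setInput(0, offset)
```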
The cherry on top to make the crowd look more dynamic was to add some sweeping spotlights/volume lights to the arena. We used Nuke’s particle system to generate volume lights throughout the arena, and applied a basic cycling sine curve to animate them sweeping over the crowd so that they behaved similarly in all the shots. In place of the sprites, we placed flat planes in the arena and rendered out a matte of the spotlight contact area, which we were able to use in compositing to relight our sprite crowd elements in 2D. As Louis Leterrier, the director, was a big fan of anamorphic lens flares, we generated some procedural flares whenever the source point of the volume light overlapped with the cone of the volume, which meant that the light was pointing right into the lens and should generate a flare.
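The sweeping animation could be as simple as a cycling sine expression on each light’s rotation; for instance (node name, amplitude and period are illustrative):

```python
# Illustrative sketch: a cycling sine expression driving a light's Y rotation
# so it sweeps over the crowd the same way in every shot.
import nuke

light = nuke.toNode('sweep_light_01')  # assumed node name
light['rotate'].setExpression('25 * sin(frame / 30)', 1)  # channel 1 = Y
```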
It was really fun to have such a specific challenge and to build artist-friendly tools to achieve it. I think the use of the sprite technique, along with the CG crowd we used for some close-up areas and the on-set crowd that was in some plates, really combined to make a result that still stands up to scrutiny to this day!
Ara Khanikian: Towards the end of the scene, the magicians pull their heist and distribute millions of euros to the full house by forcing them out of the air ventilation ducts. The euro bills were obviously all CG simulations in these shots. Lighting was the trick once again, as we pushed the energy of the light rig animations to its limit and had a huge number of lights moving, flickering, and changing intensity throughout the shots.
All in all, it was a very clever way of creating a cost-effective crowd system, and one which, in hindsight, would still work well in a COVID-19 world where shooting crowds is a huge challenge!
Game of Thrones
Matthew Rouleau, visual effects supervisor: Our crowd work for Game of Thrones started on season 4. We had a pretty simple crowd sequence in which the Unsullied were lined up outside the gates of Meereen, preparing an attack. It was simple animation loops, all handled within Softimage back in the day.
At the time we were a small team of artists at Rodeo and we didn’t really have a crowd system to speak of. After season 4, as the show requested more and more crowd work, we ramped up our own crowd pipeline accordingly. We did a pitch in season 5 for a battle sequence and pulled off a pretty convincing crowd sim with horses and soldiers, but were unfortunately not awarded the sequence. Still, the test was used in multiple pitches for other sequences and shows for a few years.
Season 6 saw our most ambitious crowd work, with multiple crowds of soldiers, horses and prisoners walking and running around complex terrain. At this point we had pretty much settled on the software and techniques to accomplish crowd work and were ready for more complex things.
For season 7, we ramped up once again and had to generate crowd sims for full-on sea battles, with hundreds of digital actors sword fighting on hundreds of boats. In the same season, we created the crowds of White Walkers and zombies, as well as the zombie horses walking up to, and ultimately past, the ice wall that blows up.
Our crowd sims kind of matured with the show, as it was our main project that needed crowd work, so we tried a lot of different tools throughout the seasons. Crowds were first handled in Softimage, using ICE for simple behaviours. Then, as we moved on to Maya due to Autodesk shutting down Softimage, we had to pick up other industry tools to generate crowds. We used Miarmy for season 5, then Golaem for seasons 6 and 7. It’s at that point that we also decided to integrate Houdini as a crowd tool. It was very beta when we first started using it; SideFX had only recently released its crowd tools and we wanted to give them a try. We ended up adopting Houdini for quite a few shows from then on.
For animation loops, we always relied on in-house mocap sessions, even going so far as visiting a local trampoline center and jumping around everywhere for more complex movements. We often had to have soldiers fall and flip over, so we did those ourselves. That was awesome. MotionBuilder was used for cleanup, as well as Maya. Then we would make sure to push everything into Arnold for rendering; that’s been our standard for the longest time at Rodeo.
The show was pretty groundbreaking as far as visual effects for television, so I remember us starting every season with a lot of excitement, but also a feeling of, ‘Oh man, we’re going to have to be creative to pull this off!’ And our crowd work was a good example of this: as the demands grew more complex with each season, it really pushed us to develop new techniques. And as a team we always pulled through. A lot of the knowledge we gained is still with us in our pipeline today.
Fantastic Beasts: The Crimes of Grindelwald
Fabrice Vienne, CG supervisor: We used digital crowds in several sequences for Fantastic Beasts: The Crimes of Grindelwald, but mainly in the Paris extension shots. We also used them in the British Ministry of Magic sequence and in the flashback of the storm on the open sea.
In the streets of Paris, we first created the extensions of the environment, onto which we added our crowd. Thanks to Golaem, we were able to use its behaviors so that people naturally avoid each other when they meet on a sidewalk or at an intersection.
For the part featuring the British Ministry of Magic, we used a more traditional animation approach due to the more specific placement of the crowd. Most of it was done in Maya, with another part in Houdini for the people in each office. We used different tools to manage the animation. First, for the street crowd animation, we used mocap from a Mixamo collection, adapting it to our models and to the type of walk matching the plate. For this transfer we used MotionBuilder, which makes it easy to retarget and match proportions between the mocap collection and our assets.
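A minimal MotionBuilder sketch of that retargeting step might look like the following, assuming both the Mixamo source and the show rig are already characterized in their files; the file names and character load order are illustrative:

```python
# Hedged sketch: drive a show rig with Mixamo mocap in MotionBuilder.
from pyfbsdk import FBApplication, FBSystem, FBCharacterInputType

app = FBApplication()
app.FileOpen(r'rigs/crowd_walker.fbx')   # target show rig (loaded first)
app.FileMerge(r'mocap/mixamo_walk.fbx')  # merge the Mixamo source second

scene = FBSystem().Scene
target, source = scene.Characters[0], scene.Characters[1]

# Retarget: the show rig's character follows the Mixamo character's motion.
target.InputCharacter = source
target.InputType = FBCharacterInputType.kFBCharacterInputCharacter
target.ActiveInput = True
```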
We built our assets in Maya as part of a classic pipeline and published all the animation clips needed. After that, we could start to build the setup with the Golaem crowd simulation software integrated into Maya. This program allowed us to create a multitude of variations and to mix the different animation clips together.