A breakdown of Father John Misty’s single ‘Funny Girl’.
Music videos can be fascinating visual effects projects.
They often provide creators with a chance to do something very different with VFX, away from the commercial realities of, well, commercials, and without the necessary narrative structures of films or TV shows.
Such was the case for the music video for ‘Funny Girl’, by Father John Misty (aka Joshua Michael Tillman), from the album Chloë and The Next 20th Century.
Producer and director Nicholas Ashe Bateman and director of photography David A. Ross collaborated on the video, which makes for a fantastical Wizard of Oz interpretation, complete with jellyfish.
The pair had recently made the independent feature, The Wanting Mare, and Bateman, under his Maere Studios banner, had also delivered visual effects for David Lowery’s The Green Knight.
For ‘Funny Girl’, Ross and Bateman decided to work heavily in Blender, with Ross, in particular, jumping headfirst into CG lighting directly in the open source 3D software. The final result is extensive 3D imagery along with matte paintings. In a visual breakdown for befores & afters magazine, Bateman explained how they made the music video. This is an excerpt.
This is the first example of the in-scene matte paintings. The digital set was built first, according to rough plans and references from actual studio soundstages of the period, then a digital ‘matte’ was painted in Photoshop and ‘placed’ on the in-scene walls.
At this point, DP David A. Ross lit the set (including this matte wall), and brought in the elements of the ‘spotlight’ which became one of our guiding motifs. This gave us our final look before rendering out that whole image.
Finally, we again turned this into another matte, painting over this render and lit set (with its interior matte on the wall) to add detail and some style.
We tried almost a dozen different attempts at building a nice 3D jellyfish, but the majority of them just didn’t look quite right. Eventually, we modeled the jelly on a reference of macro cell photography, and instead of any fluid simulations we ran passes of cloth simulations over it. The final model had four different ‘wave’ modifiers and relatively basic keyframe animation, after which we’d run the cloth simulation.
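The stacked waves described above can be sketched in plain Python. This is only an illustration of the underlying math, not the team's actual setup (they used four Wave modifiers in Blender's UI, followed by a cloth simulation); the wave settings below are hypothetical placeholders.

```python
import math

def wave_offset(x, y, t, amplitude, wavelength, speed):
    """One sinusoidal displacement, in the spirit of Blender's Wave
    modifier: a wave travelling radially across the surface over time."""
    d = math.hypot(x, y)  # distance from the wave's origin
    return amplitude * math.sin(2 * math.pi * (d - speed * t) / wavelength)

def bell_displacement(x, y, t, waves):
    """Sum several stacked waves, as the video's jellyfish bell did
    with its four Wave modifiers."""
    return sum(wave_offset(x, y, t, a, wl, s) for a, wl, s in waves)

# Four hypothetical (amplitude, wavelength, speed) settings.
waves = [(0.10, 2.0, 0.5), (0.05, 1.1, 0.8),
         (0.03, 0.7, 1.3), (0.02, 0.4, 2.1)]

z = bell_displacement(0.5, 0.3, 1.0, waves)
```

Because the waves have different wavelengths and speeds, their sum drifts in and out of phase, which is what gives a surface that soft, non-repeating undulation before the cloth pass smooths it further.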
The hope was almost to get the jelly to look like it was moving in a big flowing dress, ideally creating all these beautiful frames as it’s turning and looking around. This was a big pull from the Sesame Street school of ‘sympathetic motion’: almost giving it hair that would lend it movement and expression separate from whatever we put into it.
From there, we got to one of our most detailed 3D set builds: the farm. David Ross took diagrams of the set and designed his own lighting setups as though it were a large physical space. One of the most wonderful surprises we found in working this way was all the little tricks of live photography we wound up including: flagging off our spotlights, using white bounce cards around the camera, and really playing with the light as much as possible.
David wanted to keep the spotlight motif throughout, which required each of our spotlights to have those black cards around them to block off any stray light. You can really see this in the shot of the farm, where there are a few angled spots to get just the right rim light David wanted. However, we ran into an issue: the softening of the light wasn’t dramatic enough, and the black cards made too sharp a cut.
The final idea was to make a variation of black gradient cards in Photoshop—just as we would on set—and import them as our black flags, essentially letting us paint how steep the falloff should be from the lights. This would pretty much be the equivalent of different meshes or diffusion on a practical set.
It meant we had heavy spotted gradients, smooth fine gradients, checker boxes, and all of that went into David’s digital gaffing setup. We wound up using them throughout the video, and really trying to make as many strong motivated lighting choices as we could in the render, before getting to the comp stage.
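A gradient flag of the kind described above is easy to sketch procedurally. The team painted theirs in Photoshop and imported them as textures on the blocking cards; the version below is just an illustrative stand-in in plain Python, and the `steepness` parameter is our own shorthand for "how steep the falloff should be."

```python
def gradient_flag(width, height, steepness=2.0):
    """Build a grayscale 'flag' card: fully blocking (0.0) at the
    corners, fully transparent (1.0) toward the centre. Raising
    `steepness` makes the falloff harder, like swapping in a
    different painted gradient or diffusion on a practical set."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    max_d = (cx ** 2 + cy ** 2) ** 0.5
    card = []
    for yy in range(height):
        row = []
        for xx in range(width):
            d = ((xx - cx) ** 2 + (yy - cy) ** 2) ** 0.5
            row.append(max(0.0, 1.0 - d / max_d) ** steepness)
        card.append(row)
    return card

flag = gradient_flag(64, 64, steepness=3.0)
```

Mapped onto a plane parented in front of a light, a card like this shapes the beam's edge the way a net or meshed flag would on a real stage.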
This was also before Blender included light groups, so aside from some basic render layers, we were essentially rendering out the final shots right from Blender. In terms of the camera movement, we spent a massive amount of time trying to match that very specific look of old studio crane arms, with all the bouncing and bobbing as they begin to move and wind to a halt.
Most of it wound up being modifiers on the camera head, but also measuring out the camera moves so they’d stay within the movement of the arm; we found that the convincing movement wasn’t as much the camera tilt noise, but the range of motion of the arm.
Once that seemed believable, it really cemented the scale of the whole set. Of course, as we went through the doorway to the next section, we took a lot of those restrictions off.
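The arm-limited move with a settle at the end can be sketched as a simple easing function. This is our own hypothetical reconstruction of the feel being described, not the team's actual camera rig (which used modifiers on the camera in Blender); every parameter name and value here is an assumption.

```python
import math

def crane_pan(t, start, end, duration, arm_min, arm_max,
              bounce_amp=0.02, settle_freq=2.5, damping=4.0):
    """Pan angle (radians) at time t: a smoothstep ease between two
    angles, a damped bob after the arm stops, and a hard clamp to the
    arm's physical range of motion."""
    u = min(max(t / duration, 0.0), 1.0)
    eased = u * u * (3.0 - 2.0 * u)      # smoothstep: soft start, winds to a halt
    angle = start + (end - start) * eased
    if t > duration:                     # bounce and settle after the stop
        dt = t - duration
        angle += bounce_amp * math.exp(-damping * dt) \
                 * math.sin(2.0 * math.pi * settle_freq * dt)
    # Never exceed what the arm could physically reach.
    return min(max(angle, arm_min), arm_max)
```

The clamp is the key idea from the passage above: the convincing part is less the tilt noise than keeping every move inside the arm's plausible range, so the overshoot at the halt stays small and mechanical rather than hand-held.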
Read the rest of the article in issue #6.