Inside the meticulous planning on ‘Game of Thrones’ that brought Emilia Clarke, Kit Harington and a CG dragon together for the throne room finale.

#makingtheshot is brought to you by ActionVFX.
Having fully established itself as a game-changer in the world of visual effects for television, HBO’s Game of Thrones saved some of its most ambitious, yet still subtle, effects for the final episode of season 8. Here, Jon Snow confronts Daenerys in the Iron Throne room, and kills her. Drogon melts the throne and then carries Daenerys’ body delicately away.
It’s a confined moment that needed to involve a digital Drogon, real and CG fire elements and the interaction of all those things with live-action actors. For that reason, the show’s makers elected to bring VR into the mix for planning out the action, and then, on set, utilized a number of virtual production techniques to shoot the plates. The final shots involved intricate CG dragon performance and equally intricate fire sims.
befores & afters talked to key members of the Game of Thrones production team behind the visualization and execution of the throne room sequence.

Scroll down to see updated pics for the ‘throne lava’ part of the scene.
Planning with VR
Steve Kullback (visual effects producer, Game of Thrones): This was the first time we had used a VR setup for the show. The Third Floor created a set piece based on the model provided by the art department, and had it functioning in a real-time engine so that you could put the goggles on and then, almost with a laser pointer device through the VR, point to any part of the set, click the button and be there, then turn around, set your camera position, and then those positions could be saved.

The virtual scout employed by The Third Floor allowed DOP Jonathan Freeman to explore sets with virtual cameras using VR goggles. From that scout, photoboards (like this one) were captured that helped to refine the shot list and inform set design, visual effects and filming plans.
Joe Bauer (visual effects supervisor, Game of Thrones): The end result was incredibly useful – the DOP Jonathan Freeman and the directors David Benioff and D. B. Weiss were able to generate a mini-library of camera views for the different moments in the scene. So by the time we took it over and started working with Pixomondo, who was the lead dragon animator, and Scanline, which did the throne melting, there’d already been a lot of R&D for the look and the angles and the visual approach.
Adam Kiriloff (senior real-time technical artist, The Third Floor): I was stationed in the show’s art department in Belfast for four months. Assisted by an asset builder to speed up the 3D modeling process, I would receive SketchUp models from concept artists, hi-res models from the visual effects team, elevations from the architects and even hand-drawn sketches from set decorators. First we would ingest the digital assets, matching real-world scale and doing a first optimization pass.
Once the digital 3D assets were homogenized, I would ingest them into Unreal Engine. From there, I would create environment materials in Substance Designer, paint bespoke props in Substance Painter, develop particle effects and set up lighting and atmospheric effects like fog and ash in Unreal Engine. In a few instances, I used photogrammetry to capture props like the Iron Throne and the mural in the throne room.
Having then recreated a rudimentary version of the set in Unreal, we could scout it in an HTC Vive head-mounted display using our VR prototype, now called Pathfinder, which was developed in house at The Third Floor, also in Unreal Engine. Often we needed to turn around complete sets in a matter of days, sometimes hours, so we kept the models, textures and lighting to a basic quality level. Our scouting tool was purpose-built to be quick to learn and intuitive to use yet have a toolset we could expand and customize depending on user requirements.

The Third Floor’s virtual camera tool became known as Pathfinder.
One of the key features is the ‘virtual lens.’ It’s essentially a virtual screen attached to the controller in VR that you can hold up to plan shots within the virtual environment. The virtual lens mimics real-life camera settings. Lens configurations can be set in advance based on the director of photography’s request and lens swapping in VR is as easy as scrolling through your lens selection.
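As an aside, the optics behind a virtual lens like this are straightforward: field of view is derived from focal length and sensor width, just as on a physical camera. Below is a minimal Python sketch of that relationship; the sensor size and prime set are illustrative assumptions, not The Third Floor’s actual presets or Pathfinder code.

```python
import math

# Illustrative sensor width and prime set; a real setup would match the production camera package.
SENSOR_WIDTH_MM = 36.0
LENS_KIT_MM = [18, 25, 35, 50, 75, 100]

def horizontal_fov_degrees(focal_length_mm, sensor_width_mm=SENSOR_WIDTH_MM):
    """Horizontal field of view of a pinhole camera for a given focal length and sensor width."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# 'Scrolling' through lenses in VR amounts to swapping which focal length drives the virtual camera.
for focal in LENS_KIT_MM:
    print(f"{focal}mm -> {horizontal_fov_degrees(focal):.1f} deg horizontal FOV")
```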
Jonathan Freeman made extensive use of all functions in the toolset we developed, planning shots in virtual reality, switching lenses, taking screenshots and producing ‘photoboards.’ The photoboards were rough storyboards consisting of frames (screenshots, sometimes annotated) that Jonathan had skillfully captured in VR during his scouting sessions.
Group sessions often took place where one or more of the directors would individually use the VR system, and the art department would gather around a large spectator TV we had set up and discuss what the director was planning visually. Our scout tool has 3D annotation tools, a laser pointer and several different measuring devices, which complement and help facilitate group discussions.
Jonathan and I had two concurrent processes for capturing his shots; one was using the VR tool. However, the VR tool was never meant to record complex camera movements, so in some cases we used animated virtual cameras in the scenes, and also did chess-piece animation with the characters. I was able to work closely with Jonathan throughout as the virtual camera shots were planned. The throne room scene ultimately had over 100 virtual cameras covering the various angles Jonathan was testing. Jonathan is a true perfectionist and left no potential shot position unexplored.

Previs frame by The Third Floor.
Michelle Blok (previs supervisor, The Third Floor): Then, to previs the scene, our work was informed by animatics prepared by Jonathan, Adam and the virtual camera team. Many variations of shots and angles were explored during the process. One big challenge of this sequence was the inclusion of Drogon in the very confined space of the throne room. The set, or more accurately, the set destruction, needed to be designed based on the area the dragon would take up and how much space he needed to enter and exit the room.
Once all the shots were completed in previs, we stitched all the dragon animation together, creating a continuous animation that lasted for the entire length of the scene. This ensured not only that the action was consistent from shot to shot but that the dragon’s placement in the scene was also consistent.
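To illustrate the bookkeeping such a stitch involves, here is a hedged Python sketch that maps per-shot frame ranges onto one continuous master timeline; the shot names and ranges are invented, and this is only the general idea rather than The Third Floor’s pipeline.

```python
# Invented shot list: (shot name, first local frame, last local frame).
SHOTS = [
    ("throne_010", 1001, 1096),
    ("throne_020", 1001, 1150),
    ("throne_030", 1001, 1072),
]

def build_master_timeline(shots, master_start=1001):
    """Return, per shot, the offset that shifts its local frames into a continuous master range."""
    offsets = {}
    cursor = master_start
    for name, first, last in shots:
        offsets[name] = cursor - first
        cursor += (last - first) + 1
    return offsets

# With these offsets applied, the dragon animation plays back as one unbroken performance,
# so its placement stays consistent from cut to cut.
print(build_master_timeline(SHOTS))
```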
Shooting with a simulcam
Steve Kullback: Another of the technologies we used, something we had done elsewhere in the series but not very much of, was the simulcam. Here the dragon had to interact so precisely with our character, and our character with the dragon. We pre-animated the dragons as we often do, but this time we brought that animation into the simulcam so that we could use Ncam – a real-time 3D motion-tracking device – so that the camera operators could have the CG character, Drogon, visible to them. And we could all see how the camera angles would move in concert with the movement of the dragon, and the operators could follow.
Joe Bauer: The simulcam was great for this because it was such an intimate scene. We had previs, and had worked out the angles over a number of iterations. But with simulcam, the camera operators were able to add movements, or Jonathan was able to adjust movements or reframe, with all the characters in the scene represented for blocking. Traditionally, if you don’t have anything representing the dragon, then the operator, left with nothing else, will frame up the person that they do have. And then you may discover that you don’t have the space for the dragon or you’ve made a bad composition. So this really allowed everyone to react in real time to the subtleties of performance while still framing both characters correctly.

Plates from the live-action shoot.
Casey Schatz (virtual production/motion control supervisor, The Third Floor): Given what was meant to happen in the scene, we knew that regardless of how good the animation and rendering of the dragon was, none of it would matter if the photographed footage didn’t show a true connection between the creature, which was of course CG, and the real actor, Kit Harington, playing Jon Snow. It was a moment of Jon and Drogon really looking each other in the eye, so looking in the right direction across the length of the action was vital. We needed a way to have the two characters ‘be’ in the same room together, in the same physical space.
It was a consistent thought for everyone on this scene that we had to have an accurate visual to follow for the dragon’s action. A first idea was to use a Spydercam remote-operated cablecam system, virtually programmed as The Third Floor had done for dragon flight and fire shoots, to create a ‘flying eyeline.’ That approach ultimately wasn’t doable for the throne room work due to schedule – though it does work nicely for animated eyelines, as we found when implementing it on a subsequent project – so we developed another method that was based around using a real-time animation composite to inform live-operated eyeline pole cues.
The unique way we used the live composite is what I’d call ‘techvis meets simulcam’ and it’s a bit different from the standard way of having the imagery just appear for reference on a display. It was based not just on having an Ncam system doing a digital overlay onto the shooting set but also took full advantage of accurate dragon animation from Pixomondo together with technical layouts and measurements that had been worked out by The Third Floor’s Michelle Blok and her previs and techvis team.
The Ncam generates a point cloud, and we aligned that to a lidar scan of the throne room set. Michelle Blok and her artists had used the same lidar when blocking the dragon performance, so the framing and relationships were accurate to the size and space of the set. Within its pipeline and rigs, Pixomondo had then produced the final dragon animation. We brought an approved version of this into MotionBuilder for use in Ncam for the live comps. The live composite didn’t need highly detailed renders of the dragon image, but we did need very high spatial accuracy, and the files from Pixo were dead on.
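As a rough illustration of what aligning a tracking point cloud to a lidar scan means in practice, here is a small Python sketch using the open-source Open3D library’s ICP registration. The file names are hypothetical and this is not Ncam’s internal solver, just the general idea of locking a camera-tracking coordinate system to a surveyed set.

```python
import numpy as np
import open3d as o3d

# Hypothetical exports: a sparse tracking point cloud and the surveyed lidar scan of the set.
tracking_points = o3d.io.read_point_cloud("ncam_feature_points.ply")
lidar_scan = o3d.io.read_point_cloud("throne_room_lidar.ply")

# Point-to-point ICP refines an initial guess (identity here; a real workflow would seed it
# from surveyed markers) into the rigid transform that locks tracking space to set space.
result = o3d.pipelines.registration.registration_icp(
    tracking_points, lidar_scan,
    max_correspondence_distance=0.05,   # metres; tolerance depends on scan noise
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print(result.transformation)  # 4x4 matrix mapping the tracking coordinate system onto the lidar/set
```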

Ncam and Unreal Engine were tools used on set to marry the pre-animated Drogon with the live action plate for real-time viewing.
Working from the super-detailed technical mockups we had already produced for all the throne room dragon shots, based on Michelle’s previs, I put marks on the physical floor of the set that outlined the start and end coordinates. That determined the linear path the eyeline needed to take. Determining the height was next. We gave the eyeline pole operator a video display playing the real-time dragon animation composite and we flipped the image on the device so the execution of the cue movements would be more intuitive. Watching him interact with the display, we realized that the height was easy to judge simply by working within the framing on the monitor. So, between the markers, the mirror image and the bounds defined by the height of the monitor, it was possible to have the eyeline pole operator puppeteer the eyeline that Kit needed to look at.
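The geometry behind those cues is essentially a linear interpolation between the surveyed floor marks, driven by the animation’s timing. Below is a hedged Python sketch of that idea; the coordinates and frame range are invented for illustration, and the real cueing was done by the operator watching the live composite rather than by code like this.

```python
# Invented set-space coordinates (metres) for the start and end of the dragon's eye path,
# and an invented frame range for the cue.
EYE_START = (2.0, 1.6, 4.5)   # x, height, z
EYE_END   = (7.5, 3.2, 9.0)
CUE_FIRST_FRAME, CUE_LAST_FRAME = 1001, 1120

def eyeline_target(frame):
    """Linear interpolation of the eyeline position along the marked path for a given frame."""
    t = (frame - CUE_FIRST_FRAME) / (CUE_LAST_FRAME - CUE_FIRST_FRAME)
    t = max(0.0, min(1.0, t))  # hold at the marks outside the cue window
    return tuple(a + t * (b - a) for a, b in zip(EYE_START, EYE_END))

print(eyeline_target(1060))  # roughly the midpoint of the path
```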
So, in our case, the Ncam use was actually two-fold. It provided the usual capability of showing an important CG element in camera to help with timing and composition, and the Ncam output of the running animation informed the operation of the on-set eyeline pole.
I would give the operator a start, middle and end mark, and the timing in between came from the dragon animation. When he saw the dragon move, that was his cue to move. It was a really flexible approach that gave Dan and David, who were directing this episode, the freedom to pause and say, ‘Ok, Kit, you’re scared. Step back a little or lean more this way.’
For them, it was the first time they were able to ‘see’ the dragon on the set – the final animated dragon really through the lens. The workflow also made it possible to capture the shots more quickly, and the element of guesswork about the eyeline guides went away. If something missed the mark, we knew it right away and we would re-do the shot then and there. It wasn’t getting all the way to post to discover that this or that was off, and having to go in and do fixes. That was a big help for the pipeline.

A final shot from the sequence.
We brought assets into the Ncam system via Autodesk MotionBuilder. We imported a lidar of the throne room set – which was in the same world-space as Michelle’s previs. Then we had the animated dragon that had been further evolved from the original blocking within Pixomondo’s pipeline. They provided skeleton animation baked down into an FBX.
We had a simplified fire cone to show the direction of Drogon’s blasts and their rough size. The fire was in the same layer as the dragon, with mathematically defined starts and stops, so we could see that the direction made sense and motivated where the lights were.
When the cone came on, the Ncam was used to trigger practical orange lights on the set to illuminate the real shot as we were filming. We were able to precisely time illuminating the physical actor, set and props, etc. to our digital dragon fire strafes. The Ncam work was supported by Kirsten Sweete and Devin Lyons from VER.
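In effect, the practical lights were keyed to the same timeline as the fire cone. Here is a small Python sketch of that kind of cueing logic, with invented frame windows; the actual on-set triggering ran through Ncam and the lighting setup, not code like this.

```python
# Invented frame windows during which Drogon's fire cone is 'on' in the pre-animation.
FIRE_STRAFES = [(1180, 1235), (1300, 1342)]

def interactive_lights_on(frame, strafes=FIRE_STRAFES):
    """True while the current animation frame falls inside any fire-strafe window."""
    return any(start <= frame <= end for start, end in strafes)

# This boolean would drive the cue for the orange interactive lights on the physical set.
for frame in (1175, 1200, 1250, 1320):
    print(frame, "ON" if interactive_lights_on(frame) else "off")
```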
The arrival of Drogon
Joe Bauer: We’d had, by then, a certain amount of previs that had gone back and forth with Dan, David and Jonathan. So we were able to provide Pixomondo with a pretty good performance path before they even started with their pre-animation. Beyond that it was just getting the mood of the animation right.

Drogon surveys the scene.
For example, Drogon doesn’t charge into the scene like a mad bull. The idea was, because it was going to be snowing, and there would be a depth of snow buildup into the distance, that he would appear almost like a mirage, and then come forward and resolve, both in focus and in visibility through the snow, but not overtake the scene when he shows up, because the big act has just happened. So we had to add him without stomping on the moment.
And then, it was a lot of direct guidance from Dan and David about, well, how surprised is he meant to be? How anthropomorphic is he? What are his emotional beats? Generally, we will be pretty subtle with the emotions of the dragons, and lean more towards the lizardness. So, you know, it might be 80% lizard, and 20% readable emotion. We got a bit more emotion, obviously, into this moment. But we kind of went back and forth a few times, as far as anger versus him just trying to understand what has happened.
What’s so nice about the sequence is when he shows up, he doesn’t know what’s happened, but he knows it’s not good, which is why he nudges Daenerys, comes to a conclusion, and you think that he’s just going to chomp Jon. But it’s more of just a mournful moment than striking out in anger.
A fiery few moments
Joe Bauer: For Drogon’s fire breath, there was really only interactive light on set. We shot some fire as a reference, but there wasn’t anything fire-wise that was used on camera. There were some pipes of gas in and around the destroyed room, but that again was really just for reference.


What we did do after the shoot was use motion control and the pre-animated dragon performance to film flame-thrower elements. We had a half-scale plaster buck of the throne, in three configurations, based on how much of it had melted at that point.
Steve Kullback: It was a Bolt Jr. High-Speed Cinebot, because it needed to whip around pretty quickly. Then we also had breakaway wall sections with a fireproof foam that were used to represent the fire interacting with the back wall as it was broken away. This was overseen by special effects supervisor Sam Conway.

Fire plate.
Casey Schatz: We needed the fire to move at a speed that was faster than could be done by putting a flamethrower on a Technodolly, as we had done on the show previously. Eric Carney, one of The Third Floor founders and a key collaborator on Thrones since season 4, did say, ‘The Third Floor, we will put a flamethrower on anything…’
So, this season we had live pyro on the Bolt to capture elements for the throne room and other sequences, like the Jon oner shot in Episode 3. The virtual programming for the Bolt was based on meticulous calculations from The Third Floor’s techvis, which drew from the team’s original previs. Artists painstakingly worked through the whole series of burns to detail what happened in each pass and what directions and lengths were needed.

The Third Floor’s techvis (upper panels, lower left panel) is seen through the Technodolly camera at 1/3rd scale with a live composite of the robotic flame thrower and Pixomondo’s set comp (lower right).
Mohsen Mousavi (visual effects supervisor, Scanline VFX): We started on the burning throne shots in March 2018, when we did some tests. Originally they wanted it to be one continuous event, but editorially it was sort of impossible, so we ended up doing a simulation for every shot from scratch to get the look and the mood and the flow of the shot going properly. We started to do some development with Flowline and we ended up doing a combination of Flowline and Houdini; this was one of the first times that we actually used Houdini at Scanline to do some very highly viscous fluids.


We got a few references from production, but interestingly, we searched everywhere and could not really find any good reference material. It was hard to get something that was not CG. The way Joe works is that he doesn’t get excited about a good-looking effect. It didn’t matter if this was the most advanced simulation that we had ever done, or if it took three weeks to simulate or two weeks to render; if it didn’t look real to him, he wouldn’t get excited about it.


So we had to prove that this had something to do with the reality of the event. We found as many references as we could of melting steel and melting iron, and how that would look in an overcast setting, which is how it was lit on set. What we were trying not to do was over-process it and make it look like a fantastical effect.
We were working on the sims and exploring ideas, and we came up with these witness camera shots to show Joe how this looked from other angles. A lot of these witness cameras we were just presenting as tests, and they became actual shots. I believe originally maybe it was three or four shots, but it ended up being 14 or 15 shots, and a lot of those shots were basically cameras that were just designed to show the effects from a witness camera.
Once we were done with all of that work, once Joe was happy with the overall flow and overall speed, and once we’d explored all the options and got a look signed off from the producers/directors and Joe, then that was techvis’d for them, so that they could shoot the interactive fire with the maquette and motion control.
Once we got that back, since the actual maquette obviously wasn’t melting, the shots at the very beginning of the sequence didn’t show much of the melting – it sort of warped out – but down the road we had to augment it to show the interaction and to show that the flames were properly wrapping around the throne. And that was done using a lot of Flowline simulations to enhance and augment the practical fire that we had received.


Then, we spent a lot of time figuring out the look of the lava, the look of the melted stone. I think it went through 20 different versions of the look – more lava-ish, darker or brighter, a bit more structure, less structure. We got our dragon animations for the throne sequence from Pixomondo. Very early on, we went through it shot by shot and we talked through how we could approach this to make sure that down the road everything stuck together.
The methodology that Joe had was that the moco fire, the fire that they would shoot, would be the bible. Basically, the idea was that once that’s shot, we were going to track that fire in 2D into the set-up. The camera that they used for that fire becomes the master camera of the shot, and we were not going to move around and mess around with the fire.


So Pixomondo took that fire, and they added the dragon head exactly where the flamethrower was. And we took the tip of the fire and made sure that the throne was interacting with that fire. Then we basically merged the two things together, the dragon from Pixomondo and our melting throne, and for 99% of the shots, it lined up.
Drogon and Daenerys
Joe Bauer: Emilia was there lying on the ground and when Drogon had to roll her over, it was Rowley Irlam and the stunt team, who had a wire rig that was fastened to a pretty heavy pole arm with its counterweight. So the wire was placed so that when they lifted the arm it would appear that she was getting nudged. And all we had to do, really, was remove the wire.

Drogon readies himself to carry Daenerys away.
And then there was a similar thing for lifting her up. The trick there was figuring out what part of Drogon’s giant claw would grab her. First of all, how would he get anything underneath her without digging into the floor? And second, what is her body position as she’s being lifted up, so that it makes sense with the toe? So we worked with Pixomondo, and then made the configurations by using Emilia’s body scan.
Steve Kullback: Ultimately, the considerations for the performance that were important to Dan and David were that they wanted it to be very, very gentle when she was picked up. She’s almost beautiful in the way that she’s lifted up and cared for. So these were considerations that were important to them. And it wasn’t about a gimmick, it wasn’t about the performance of the dragon taking centre stage, but just to help lift her out in that way.
Follow along during this special weekly series, #makingtheshot, to see how more individual shots and sequences from several films and TV shows were pulled off.