Making the koids in ‘The Peripheral’ involved some ingenious deepfake techniques



One of the computer game sim sequences also relied on AI approaches.

There’s some astonishing imagery in Prime Video’s The Peripheral, the adaptation of William Gibson’s sci-fi novel about a world that uses VR to step into alternate–and future–realities.

This included the futuristic koids (which used some AI deepfake techniques when face projections were required), gaming sims entered into via VR by the characters (which also used some AI tools), views of a future London after the cataclysmic ‘Jackpot’ event, and special invisibility effects.

Spearheading the effort to craft that imagery were visual effects supervisor Jay Worth (also a producer on the series) and visual effects supervisor/producer Mark Spatny. Here they tell befores & afters about some key VFX challenges on the show.

b&a: I was really taken by the London skyline and street-level scenes in the show. What was your overarching approach on these?

Jay Worth: We started talking about this show years ago with Jonah [Jonathan Nolan] and Lisa [Joy], and Vincenzo Natali, and the whole team. Part of what we really wanted to do was to keep some of this classic London feel to it, while giving it this futurism that also fit into the narrative of the show, which was after the Jackpot, so it’s not like they would’ve needed to build new things.

So it was fun to build a future version of a world that wasn’t tall, skinny glass buildings, and metal, and things like that, but to keep this classical tone throughout – especially in terms of the air scrubbers, which were the large neo-classical statues circling London. Oh my goodness, Ian, you have no idea how many versions of air scrubbers we looked at. First we had organic things, then we had classical things, but they were always big.



But it was about trying to capture something iconic that was interesting and served the story, but really more about creating this visual landscape. The team at Tendril did some initial concept work way back in the day to help give us the starting point for a lot of this stuff. And then the team at BlueBolt was able to take those ideas and run with them. And then it was trying to figure out what these street-level things would look like, and how we’d tie all those pieces together. I’ll let Mark jump in on that one.

Mark Spatny: One of the things that was really important to the story is that we reveal later that London isn’t really what it looks like. That the view of London that we are seeing is really generated by augmented reality in each person’s implant to make it look nicer than it is, because 90% of the world’s population has been killed over the last 70 years through all these different disasters. And there’s been no reason to rebuild everything.

So they wanted to see a version of London that’s really the classical, nostalgic version of what everybody imagined London should have been like. So we went into the show saying, aside from those giant air scrubbers, which serve an important practical purpose of cleaning out a lot of the pollution and radiation, we wanted to keep London as it is, with just little touches of futurism. So some of the streets are glass, because they’re actually meant to be solar panels instead of regular streets.

Those were done simply by brute-force roto. And we had multiple vendors doing that, just taking those streets and painting out all the traffic lines on them, and signage, and everything. And adding those reflections from the surrounding buildings. And then the light-up strips that travel along as the vehicles move. But for the most part, we wanted it to look like London today, with just a little bit of extra technology.

b&a: I particularly love those glass streets and the arrows, because I feel like I want to go to London and see those for real now. They just were so well integrated.

Jay Worth: Well, we always just try and make up rules for things, and Jonah came up with the idea of: what if those are actually for pedestrians? You don’t quite know what they are when you’re watching them, but then when you start feeling that they’re there, it does feel like there’s some method to the madness. They stay in front of the vehicles at the approximate stopping distance. So it’s always fun to try and ground it, even if it’s just made up for our own reasons. I’ve always found it helps when doing world building to give the world rules that make the visuals feel more grounded. So as vehicles are accelerating or decelerating and things like that, you can feel that they’re there for a reason, even if you don’t exactly know what, from an exposition standpoint.



Mark Spatny: Yeah, they’re supposed to communicate to pedestrians the safe areas that you can cross. If the arrows are right there, or have gone past you, you’re going to get hit. And it’s not obvious, because we don’t change speeds a lot in the show, but as the Burton peripheral is driving the motorcycle around London and slowing down around corners, the arrows get closer to the motorcycle, and then as he gains speed, they get farther away to indicate that. And that’s just a rule Jay came up with to say, well, if we’re going to have these, what’s a functional use for them? How do we make it feel like it’s serving a purpose?
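Neither supervisor spells out the maths, but the rule as described – arrows leading the vehicle by roughly its stopping distance – maps onto the standard braking-distance formula. Here is a minimal sketch of that rule; the reaction time and deceleration constants are generic illustrative figures, not values from the show:

```python
# Sketch: place guide arrows ahead of a vehicle at its approximate
# stopping distance, per the rule described above. The deceleration
# and reaction-time values are generic road-safety figures, not
# anything taken from the production.

def arrow_lead_distance(speed_mps: float,
                        reaction_time_s: float = 1.0,
                        decel_mps2: float = 7.0) -> float:
    """Distance covered while reacting, plus braking distance:
    d = v * t_react + v^2 / (2 * a)."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2.0 * decel_mps2)

# As the vehicle slows for a corner the arrows close in; as it gains
# speed they pull away, matching the behaviour Spatny describes.
for speed_kmh in (20, 50, 80):
    v = speed_kmh / 3.6  # km/h -> m/s
    print(f"{speed_kmh:>3} km/h -> arrows ~{arrow_lead_distance(v):.1f} m ahead")
```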

b&a: I love the appearance of the future droids, constables and butlers–the koids–what was your methodology for shooting live action with those, and how did you approach augmentation of them?

Jay Worth: Well, when we initially shot it was literally going to be a person with this white mask on, and we were really concerned that it was going to look too much just like a person in a mask. So we did a bunch of versions where we made the neck slimmer, made the head larger. And after we had shot everything, Jonah gave us a call, and he was like, guys, you kind of knew this day was coming. It doesn’t feel like a Gibson novel. It doesn’t feel like our world, it doesn’t feel futuristic enough, it just didn’t quite land the way we wanted it to land.

So every single thing with the koids–the triangles, the monofilaments, these hollowed-out heads, how they get formed–none of that was planned on the day. It all was retrofitted to this idea that Jonah had. And all he came to us with was: I want it to be elegant, a hollow head made out of triangles.

Mark Spatny: And it wasn’t even that at first. At first, the direction was that it needed to feel like it could be both functional and artistic, which is why we got to the level of putting projections on them. So when koids are the servers at the palace, you’ve got butterflies flying on them, and when they are policemen, you can see the UK police checkerboard pattern on their faces, and later they adopt the faces of other people.

And like Jay said, that wasn’t the original plan going in. So we didn’t even have tracking markers on the masks that the actors were wearing. They were just going to be white masks. We were just going to paint out the eyes, originally, because they had eyeholes, so the performers could see.



Jay Worth: This was done by Refuge. They really delivered, oh my goodness, so many things this season for us. Half the time they were working without set data, or without a lot of the things you might normally have if you’d been planning on a full 3D head replacement from the start. But they were able to deliver above and beyond in so many ways.

Mark Spatny: And really they pushed a lot of technology for us. In the show, at different points, you have to see that there are faces projected on these koids. We didn’t have any of that footage of the actors speaking, because they were just going to be the white helmets. And so, Refuge really helped us plan a system for getting that effect. We ended up doing volumetric video capture with our actors to shoot that, and make 3D models.

Then, because of the resolution of the volumetric capture, the result was really video game quality, not close-up-in-a-TV-show quality. Refuge took that footage and figured out a way to then deepfake it, using other footage of the actors from the show as reference, to make the volumetric video 3D asset look more like them.

Jay Worth: By the way, we are not talking about a lot of footage to train on – it was like 12 shots. It’s not like we had hours of dailies to train the model on. It was just a situation of, let’s just see if this is going to work. And that’s what I love about Refuge: we have a conversation and they’re like, I’ve got an idea, I’ve got an idea, I’ve got an idea. And we were able to bring back the skin texture of our actor, and the eyes and nuance of his face. And we actually put the deepfake on top of our volumetric capture. So we pushed that technology even further by taking the deepfake and using it in the mix of the volumetric capture that we did for that performance.
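Refuge hasn’t published its pipeline, but the layering Worth describes – a deepfaked face composited back over the volumetric render – resembles a straightforward masked blend. A rough sketch of that final step in OpenCV, where every input file and the matte are hypothetical stand-ins:

```python
# Sketch: composite a deepfaked face render over a volumetric-capture
# frame using a soft face matte. All input files here are hypothetical;
# Refuge's actual pipeline is not public.
import cv2
import numpy as np

volumetric = cv2.imread("volumetric_frame.png").astype(np.float32) / 255.0
deepfake = cv2.imread("deepfake_face.png").astype(np.float32) / 255.0
face_matte = cv2.imread("face_matte.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Soften the matte edge so the swapped-in skin texture blends into the
# lower-resolution volumetric geometry rather than cutting hard.
face_matte = cv2.GaussianBlur(face_matte, (31, 31), 0)
alpha = face_matte[..., None]  # broadcast single channel to RGB

comp = deepfake * alpha + volumetric * (1.0 - alpha)
cv2.imwrite("composite.png", (comp * 255).astype(np.uint8))
```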

b&a: Can we talk about the ‘invisibility’ effect, in particular, where the officer finds the cup of coffee and then is searching for the car and the car door. How did you do that?

Jay Worth: We didn’t want to go ‘full Predator’, but we liked the mirage and outline vibe of that, and I wanted something that felt like flipping mirrors. We always want to build on the technology in our world building. That one actually was done by the team at FutureWorks. And what made this finally work was when we decided to break the car into separate pieces – so it revealed the car in different ways based on the disparate pieces – and that was what finally helped it not feel like a traditional wipe and more like a cool piece of technology.



Mark Spatny: Yes, and the way that we shot that was just having a real car there. So they actually knocked the coffee cup off the car, and it was hand roto to cut out the coffee cup, and then put a clean plate behind it. And when the sheriff gets hit by the invisible car, it’s actually a stunt guy rolling over a car that’s driving through – a real car-hit stunt – and we just painted out the car, and added that little bit of ‘Predator machine’ to it as it’s going over.
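Spatny doesn’t detail how the ‘Predator machine’ distortion was built, but a common way to fake that kind of refraction shimmer is to displace the clean plate by animated noise inside the car matte. A purely illustrative sketch along those lines – not FutureWorks’ actual setup:

```python
# Sketch: a Predator-style refraction shimmer, displacing the clean
# plate by smoothed noise inside the invisible-car matte. File names
# and parameters are illustrative only.
import cv2
import numpy as np

plate = cv2.imread("clean_plate.png").astype(np.float32)
matte = cv2.imread("car_matte.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
h, w = matte.shape

rng = np.random.default_rng(seed=7)  # vary the seed per frame to animate
noise = rng.standard_normal((h, w, 2)).astype(np.float32)
noise = cv2.GaussianBlur(noise, (0, 0), sigmaX=15)  # low-frequency wobble

strength = 8.0  # maximum displacement in pixels
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
map_x = xs + noise[..., 0] * strength * matte  # displace only inside the matte
map_y = ys + noise[..., 1] * strength * matte

shimmer = cv2.remap(plate, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                    borderMode=cv2.BORDER_REFLECT)
cv2.imwrite("shimmer_frame.png", shimmer.astype(np.uint8))
```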

b&a: One other thing I was interested in – it must have been in the first two episodes – is where they go into the sim, and it’s this video game, which has a really nice look and feel to it. It wasn’t quite reality, it wasn’t quite a video game. I’m really curious about how you did that, if you did anything special to it.

Jay Worth: You’re going to love this one. So once again, we had all these conversations about how we were going to make it different. And at the beginning, when things were tight, we’re like, oh, all we can maybe do is a color correction. And we get into it, and, of course, we all agree it doesn’t feel different enough. That one, we sent off to this amazing artist named Gilles Augustijnen at Kontrol Media in Belgium, who did all the Jackpot sequence. And we asked him, can you do anything with this? Here are our reference images of other video games that we like.

He then did a deep dive, totally on his own. He took those images and, using AI, created a new and interesting look. It basically created a unique patina over the image that gave it this kind of hand-drawn hard edge. And it looks a lot like the references we sent of the video game, which was just a brilliant way to make it come across as a new video game.

Because, as we know – and I’m sure Mark can speak to it, too – anytime a script says, ‘and they created a video game,’ I’m always like, oh good lord, how am I going to make this different?

Mark Spatny: And again, it was an AI that was trained with video games. Gilles created the process with the hope of having it look like Unreal MetaHumans, and then put that color effect over the top of the original footage. So it’s amazing what we’re able to do with AI now – just train it to do different things that people hadn’t thought of before.
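Neither supervisor names the underlying model, so any code can only guess at the general shape: a per-frame image-to-image pass that restyles plates toward game-engine reference. In the sketch below, the network is an untrained placeholder standing in for the trained tool:

```python
# Sketch: restyle live-action frames toward a "video game" look by
# running an image-to-image network frame by frame. GameLookNet is an
# untrained placeholder; the actual model Gilles Augustijnen built is
# not public.
import glob
import torch
import torch.nn as nn
from torchvision.io import read_image, write_png
from torchvision.transforms.functional import convert_image_dtype

class GameLookNet(nn.Module):
    """Placeholder for a trained restyling network."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.body(x)

model = GameLookNet().eval()
# In practice you would load trained weights here, e.g.:
# model.load_state_dict(torch.load("game_look.ckpt"))

with torch.no_grad():
    for path in sorted(glob.glob("plates/*.png")):
        frame = convert_image_dtype(read_image(path), torch.float32)  # C,H,W in [0,1]
        styled = model(frame.unsqueeze(0)).squeeze(0).clamp(0.0, 1.0)
        write_png((styled * 255).to(torch.uint8), path.replace("plates/", "styled/"))
```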



b&a: Finally, one thing I’ve been keen to quickly talk about is that you made this during the pandemic. What were some of the big challenges, from both of your points of view, in making that possible during this time?

Mark Spatny: The big challenge for us as a VFX team was working at home. We ended up being in a hybrid mode. Our VFX editor worked in the office. I would work at home in the morning, and then I would go in for dailies in the afternoon, so I could look at a good monitor and not be watching shots on streaming video. I could see the higher quality. But I think from the aspect of managing the project, it wasn’t really that different for us, because we’re just dealing with a bunch of different vendors, just like we would from our offices on a show before COVID.

It was a little trickier in that my coordinators and I were not in the office. Jay and I were very seldom together. So everything was conversations between us on Zoom and phone calls. But managing the show was the same as ever, I think.

Jay Worth: I think the only other one for me was just the shooting of it, because we shot it so long ago, more at the height of COVID – not getting to travel, not getting to go there for a little bit – which meant that I felt a little bit more of the challenges from a production aspect. I honestly think production is harder on the COVID side, with all those different ramifications, than our part of it. I feel more for production a lot of the time than for post.

Mark Spatny: Normally on a show like this I would try to go at least for the first six weeks of shooting, to make sure that it was all going smoothly and that we had a plan, and stay with production until post really ramped up, and we started to do temps and deliver shots. But we couldn’t do that this time. I couldn’t go to London, because of the travel restrictions.

So I guess the one real inconvenience was getting texts at four in the morning from the set crew going, “Here’s what they want to do, how do we handle it? What limitations do you want to give us? What should we tell the director?” And so that was a little painful, those early morning wake-up calls. And the other thing that we had to do was get LIDAR scans of sets. And because of the COVID bubbles, we couldn’t bring an outside crew periodically in and out to scan our sets, because there had to be a regimen of testing before anybody could go on set.



So we ended up buying a LIDAR scanner and training our crew how to do that. So our VFX data wranglers also ended up scanning all of our sets. And honestly, I would do that from now on. It was just so great. I mean, there’s an initial cost of the scanner, which is a pretty hefty price tag, but not really in the scheme of the whole visual effects budget of the show. So having our wranglers take on this extra duty of scanning the sets when they’re not shooting – it worked out really, really well for us.
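Spatny doesn’t describe the post-scan step, but a typical in-house workflow would register the individual LIDAR scans into one set model. A hedged sketch with Open3D, where the file paths, voxel size, and ICP threshold are all illustrative:

```python
# Sketch: merge two LIDAR scans of a set into one point cloud with
# Open3D. Paths, voxel size, and ICP threshold are illustrative; this
# is a generic workflow, not the show's actual pipeline.
import open3d as o3d

scan_a = o3d.io.read_point_cloud("set_scan_a.ply")
scan_b = o3d.io.read_point_cloud("set_scan_b.ply")

# Downsample so registration is faster and less noise-sensitive.
a_down = scan_a.voxel_down_sample(voxel_size=0.02)
b_down = scan_b.voxel_down_sample(voxel_size=0.02)

# Refine alignment with ICP (assumes the scans start roughly aligned,
# e.g. from scanner positioning metadata).
result = o3d.pipelines.registration.registration_icp(
    b_down, a_down, max_correspondence_distance=0.05)

# Apply the refined transform to scan B and concatenate the clouds.
merged = a_down + b_down.transform(result.transformation)
o3d.io.write_point_cloud("set_merged.ply", merged)
```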

