The Arnold Schwarzenegger film foresaw both the use of digital VFX and machine learning. An excerpt from issue 3 of befores & afters magazine.
Before Glen Powell’s The Running Man in 2025, there was Arnold Schwarzenegger’s The Running Man from 1987, directed by Paul Michael Glaser. Both deal with the idea of deep fakes. Today, of course, that’s something that can be achieved on film with machine learning techniques. But in 1987, it was an effect that had to be crafted before digital visual effects technology even existed.
Remember, we’re not just talking about how to actually do a face swap on screen with the tech of the day, we’re talking about what tech was used to show it being done as part of the story. In Glaser’s film, the deep fake moments come as the broadcasters behind a deadly television show stage a battle of re-edited stand-ins. They literally 3D map the face of their ‘runner’, in this case Arnold Schwarzenegger’s Ben Richards character, onto another face.
This is all shown happening in real-time in a broadcast control room and even utilizes VFX-related terms like ‘digital matte tracking’. Since this visual effects / deep fake technology obviously wasn’t possible at the time, the filmmakers needed a way both to show it on the screens with a 3D/wireframe/digital look, and to actually produce the imagery for that look. So they turned to the film’s visual effects supervisor Robert Grasmere for a solution, which actually began with the need to portray a ‘Wanted’ poster of Richards.
“We were starting to do some computer graphics, and they needed these ‘Wanted’ posters, so we dreamed up a 3D version,” details Grasmere. “I was working with a company called Video Image, doing mostly video effects—this was Greg McMurry, Rhonda Gunner, John Wash, Richard Hollander and others. They later started another visual effects company called VIFX. Anyway, in order to do this 3D ‘Wanted’ poster, we needed a 3D image of Arnold.”
In a technique somewhat reminiscent of Triple-I’s approach to ‘digitizing’ Susan Dey for Looker, the team at Video Image brought Schwarzenegger in one weekend during filming and photographed him on a regular office chair. “We marked off on the circular bar at the bottom of the chair about 360 little arrow points,” describes Grasmere. “We sat him in the chair and put a camera on a tripod and then we turned him a point, shot a picture, turned him, turned him, etc. It gave us 360 degrees of original photos of Arnold.”

Video Image then combined these separate still photographs as frames on video which, when run together, appeared to be Richards’ head spinning. When the production later needed to showcase the fake face swapping, a slight variation on the 3D-like Schwarzenegger head could be employed.
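The mechanics of that spinning head are simple to sketch: one still per mark on the chair, then each video frame maps to the nearest still by angle. The snippet below is a hypothetical reconstruction, not Video Image’s actual pipeline; the 24 fps rate comes from the article, while the revolution time is an assumption.

```python
# Hypothetical sketch of sequencing ~360 turntable stills (one per mark on
# the chair) into a spinning-head clip. Not the original Video Image process.
def spin_frames(num_stills=360, fps=24, seconds_per_rev=15):
    """For each output video frame, pick the index of the still to display,
    mapping frame time to the nearest capture angle."""
    total_frames = fps * seconds_per_rev  # frames in one full revolution
    return [round(i * num_stills / total_frames) % num_stills
            for i in range(total_frames)]

frames = spin_frames()  # with these numbers, one still per frame in order
```

With 360 stills and a 15-second revolution at 24 fps, the mapping is one-to-one; a faster spin would simply skip stills, a slower one would hold each still across frames.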
“For those shots of the wireframe and the face swapping,” says Grasmere, “it was based on an idea that we had pitched to production of how we could visualize that. We came up with the idea that you could put the two characters together and you ‘swap’ the wireframes. It might not have really worked, technically, but we knew the audience was going to understand what was going on. In fact, after you’ve sold them on that and you’ve cheated who this person is, the audience believes it, and then we just go back to filming it for real.”
“I mean, today you can extract a topo in 3D,” adds Grasmere. “But then, you couldn’t. So Video Image would draw the wireframes. They could create a mesh and then distort it and shape it. It was really a hand-done graphic of the terrain map of his face based on all those photos we had taken. To have it go on and on over time was a very laborious process that today would take seconds.”
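The “terrain map” Grasmere describes is essentially a flat wireframe grid whose points are pushed out by a height value at each position. A minimal sketch of that idea, assuming a made-up bump function standing in for the hand-derived facial depths:

```python
# Minimal sketch of a "terrain map" mesh: a flat grid distorted by a height
# function. The bump below is a hypothetical stand-in for the depths the
# artists judged by hand from the reference photos.
import math

def make_grid(rows, cols):
    """Flat wireframe mesh: each vertex is (x, y, z=0)."""
    return [[(float(x), float(y), 0.0) for x in range(cols)] for y in range(rows)]

def displace(grid, height):
    """Shape the mesh by assigning each vertex a z from the height function."""
    return [[(x, y, height(x, y)) for (x, y, _) in row] for row in grid]

# Hypothetical Gaussian bump centered on the grid (a 'nose', say).
bump = lambda x, y: math.exp(-((x - 4.0) ** 2 + (y - 4.0) ** 2) / 8.0)

mesh = displace(make_grid(9, 9), bump)  # peak height 1.0 at the center
```

In 1987 each such mesh and its distortions were drawn and animated by hand over many frames; as Grasmere notes, today the equivalent capture takes seconds.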
As for the VFX-specific terminology mentioned in the actual scenes, “we made those terms up,” declares Grasmere. “We literally wrote those terms. They were not given to us by production. All those things came in with the lingo of what we were pitching and when we did the graphics.”
Another aspect of the work, relates Grasmere, is that it all had to be pre-made and then played back on set via a 24-frame-per-second video machine. “Back then, video machines were all 30 frames. But the ones we used in the movie business were all taken apart and modified to play back at 24 fps. A special sync generator was created between the video playback machine and the camera to run in sync so that when you looked at a TV in a film, you didn’t get that line coming up through it and so it didn’t look all fuzzy.”
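The arithmetic behind that roll bar is straightforward: an unmodified 30 fps monitor slips a quarter of a video frame against every 24 fps film exposure, so the scan line drifts through the picture. The sketch below works that out; the rates are the standard ones, but the drift calculation is our illustration, not a quote from the production.

```python
# Sketch of why an unmodified 30 fps monitor "rolls" when filmed at 24 fps,
# and why a modified, sync-locked 24 fps deck doesn't. Illustrative only.
from fractions import Fraction

def phase_drift(video_fps=30, film_fps=24, film_frames=24):
    """Fraction of a video frame by which each successive film exposure
    slips against the video scan, accumulated over one second of film."""
    per_frame = Fraction(video_fps, film_fps) % 1  # 30/24 -> 1/4 frame slip
    return [(i * per_frame) % 1 for i in range(film_frames)]

rolling = phase_drift()               # unmodified deck: 0, 1/4, 1/2, 3/4, ...
locked = phase_drift(video_fps=24)    # modified 24 fps deck: no slip at all
```

With the deck modified to 24 fps and genlocked to the camera shutter, the slip per frame is zero, so every exposure catches the scan at the same phase and the screen photographs cleanly.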

Although these face replacement scenes were effectively hand-drawn and animated, Grasmere comments that even then there was a feeling that computer graphics and digital techniques would ultimately be used to handle the whole process.
“Back then, no computer was doing it. We had to go, ‘We need to do face replacement, how do we do it?’ So that’s why we had to go with the idea of, how can you create a 3D image as a graphic? Hence, spinning Arnold in 360 and turning that into a terrain map type of an image. And then, doing the terrain map mesh over a face. It’s so funny that later on, all those polygons and fractals which we only represented in the shots would become the way all 3D objects are actually built. And who knew that deep fakes, which is really what these shots are, would be so big right now.”


