Well, kinda, with the RTX-Powered Apollo 11 Demo at SIGGRAPH 2019, and some nifty A.I.
At SIGGRAPH 2019 in LA, one of the most fun demos can be found at the NVIDIA booth, where participants pretend to take a walk on the moon.
I hopped on the set – the site of the Apollo 11 moon landing – and saw myself photo-realistically live-streamed as an astronaut into a scene involving the lunar lander on the moon’s surface.
I could bounce around, and see myself, in a full spacesuit, do the same moves.
What was actually happening was that a single camera was capturing my poses and matching these to a rendered CG astronaut. The final scene was accomplished with real-time ray tracing.
Pose estimation tech courtesy of NVIDIA Research meant that only that single camera was required (i.e. no mocap, no multi-camera setup, and no depth sensors).
That pose estimation tech made it possible to reconstruct my 3D body motion and position from a single 2D video feed, “with Tensor Cores in RTX GPUs speeding up the AI inference to understand the person’s movements,” as NVIDIA advises in this blog post about the demo. The post also states: “That information is then translated and sent to the Omniverse renderer to match the precise movements to the 3D astronaut.”
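For readers curious how that pipeline hangs together conceptually, here’s a rough sketch of the flow NVIDIA describes: a single 2D feed goes through AI pose inference, the 2D keypoints are lifted to 3D, and the result is retargeted onto the rendered astronaut. Everything below (function names, keypoint format, the stand-in inference step) is a hypothetical illustration, not NVIDIA’s actual code:

```python
# Illustrative sketch of a single-camera pose pipeline.
# The real demo runs a neural network on Tensor Cores; here the
# inference and lifting steps are stubbed out with placeholder values.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Keypoint2D = Tuple[float, float]        # (x, y) in normalized image coords
Joint3D = Tuple[float, float, float]    # (x, y, z) in character space

@dataclass
class AstronautRig:
    """Minimal stand-in for the rigged CG astronaut the renderer draws."""
    joints: List[Joint3D]

def estimate_pose_2d(frame: Optional[object]) -> List[Keypoint2D]:
    """Stand-in for the AI inference step (the part accelerated by
    Tensor Cores in the real demo). Returns fixed keypoints here."""
    return [(0.5, 0.2), (0.5, 0.5), (0.4, 0.8), (0.6, 0.8)]  # head, hips, feet

def lift_to_3d(keypoints: List[Keypoint2D]) -> List[Joint3D]:
    """Stand-in for 2D-to-3D lifting; assigns a constant depth of 0.0."""
    return [(x, y, 0.0) for (x, y) in keypoints]

def retarget(joints_3d: List[Joint3D], rig: AstronautRig) -> AstronautRig:
    """Copy the estimated 3D joints onto the astronaut rig, which the
    renderer would then draw with real-time ray tracing."""
    return AstronautRig(joints=list(joints_3d))

def process_frame(frame: Optional[object], rig: AstronautRig) -> AstronautRig:
    """One frame through the whole pipeline: camera -> pose -> 3D -> rig."""
    return retarget(lift_to_3d(estimate_pose_2d(frame)), rig)
```

In the actual demo each step runs per frame in real time, with the pose network replacing the stubbed `estimate_pose_2d` above.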
It’s part of a whole series of demos NVIDIA has going at SIGGRAPH 2019 – you can check them out at booths 1303 and 1313.
This week at befores & afters is #realtimewrapup week, with reports on real-time tech and virtual production, direct from SIGGRAPH 2019.