Wrapping around the UV map in 80 frames


January 26, 2021

UV mapping concepts and history you might not know about, from RE:Vision Effects’ Pierre Jasmin.

Need to get your head around UV maps? Want to dive into more of the technical details of UV mapping? Pierre Jasmin from RE:Vision Effects has you covered in this befores & afters breakdown.

RE:Vision Effects is a specialist VFX software developer that offers, among many other tools, a product called RE:Map, which handles UV and displacement mapping. There’s also RE:Lens—it provides the option to export UV maps instead of directly warping images.

Reliance on various forms of texture mapping has become popular in television motion graphics pipelines, and it is very important for achieving very fast, photo-realistic rendering from modest geometry (via different polygon reduction methods).

Below, check out this in-depth take on UV mapping from Jasmin—including a few surprising facts from CG history—that will help you understand more on what ‘UVs’ mean and where to use them in your visual effects work.

What is a UV map?

A UV map is a form of 2D spatial LUT describing how a texture should be indexed. Let’s start with the simplest example: a rectangle has xy(z) coordinates (i.e. it is defined by 4 vertices). To the vertices one can attach color information (RGBA) and UV coordinates (UV). One can attach tons of other things these days; here we focus on just one extra data component, UV. The UV map is rasterized into an image (from the vertex description, a full image is filled). We now often call such extra image channels AOVs (Arbitrary Output Variables), and these are commonly used in compositing to complement the base RGBA image (i.e. the beauty pass).

Theoretically, this, above, is a default UV map that does nothing: if the UV image has the same dimensions as the texture, image in = image out.
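To make the LUT idea concrete, here is a minimal Python/NumPy sketch (not RE:Map’s code, and using unfiltered nearest-neighbour sampling purely for brevity) of a UV map applied as a 2D lookup table, including the identity case where image in equals image out:

```python
import numpy as np

def apply_uv_map(texture, uv):
    """Sample `texture` (H x W x C) at the normalized coordinates stored in
    `uv` (h x w x 2, values in 0..1). Nearest-neighbour sampling for brevity;
    a real tool would filter (bilinear, mip mapping, etc.)."""
    tex_h, tex_w = texture.shape[:2]
    # Convert normalized UV to integer pixel indices into the texture.
    x = np.clip((uv[..., 0] * (tex_w - 1)).round().astype(int), 0, tex_w - 1)
    y = np.clip((uv[..., 1] * (tex_h - 1)).round().astype(int), 0, tex_h - 1)
    return texture[y, x]

# Identity UV map: each output pixel indexes the same place in the texture,
# so if the UV image matches the texture size, image in == image out.
h, w = 480, 640
v, u = np.mgrid[0:h, 0:w].astype(np.float64)
identity_uv = np.dstack([u / (w - 1), v / (h - 1)])

texture = np.random.rand(h, w, 3)          # stand-in for the source image
assert np.allclose(apply_uv_map(texture, identity_uv), texture)
```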

 

Above is a simple indexing transform: here we rotate 180 degrees and scale X by -1 (flip and flop, i.e. mirror). We can concatenate a set of transforms without ever resampling the color image, for a more precise rendition. RE:Map, for example, has a plug-in called Inverse UV that turns this transformed UV back to the original, when symmetry is possible. That symmetry is what undistort/redistort workflows rely on, for example to integrate a tracking process that assumes rectilinear images.
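Continuing the sketch above, this is how a couple of symmetric index transforms (rotate 180, mirror X) can be concatenated on the UV map itself so the color image is only resampled once at the end. The function names are illustrative, not an actual Inverse UV implementation:

```python
def rotate_180(uv):
    # Rotating the indexing 180 degrees: both coordinates are mirrored.
    return 1.0 - uv

def flip_x(uv):
    out = uv.copy()
    out[..., 0] = 1.0 - out[..., 0]   # mirror U only
    return out

# Concatenate the transforms on the UV map itself; the color image is only
# resampled once, at the very end, which preserves precision.
uv = flip_x(rotate_180(identity_uv))        # from the previous sketch
result = apply_uv_map(texture, uv)

# An "Inverse UV"-style step here just applies the same transforms again to
# get back the original indexing (possible because these particular ops are
# their own inverses and commute).
restored = apply_uv_map(result, flip_x(rotate_180(identity_uv)))
```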

So, this map defines coordinates for indexing into another image (the texture, hence texture mapping). A texture is a primitive in graphics hardware distinct from other kinds of image buffers. In hardware it is characterized by special fast sampling units (e.g. to perform bilinear interpolation) and is the basis for accelerating many image processing functions, which is why frameworks like OpenCL and CUDA are popular acceleration methods.

Simple example, above: mapping one image on 6 faces of a cube in post.

Representing Transformations

Some simpler transformations (e.g. corner pinning) can be represented parametrically. For a corner pin, only 4 points need to be defined; there is no need to explicitly generate a UV image (although you can), as the indexing into the texture can easily be generated on the fly by a simple algorithm (sketched just below).
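As an illustration of the parametric route, here is a hedged sketch: from 4 source and 4 destination points we can solve a 3x3 projective transform and compute texture indices on the fly, with no UV image ever rasterized. The helper names are hypothetical:

```python
import numpy as np

def corner_pin_homography(src, dst):
    """Solve the 3x3 projective transform mapping 4 src corners to 4 dst corners."""
    A, b = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# 4 points are all we need to store; UV-like indices are generated on the fly.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]                  # unit square (texture space)
dst = [(0.1, 0.0), (0.9, 0.1), (1.0, 0.95), (0.0, 1.0)] # pinned corners (output space)
H_inv = np.linalg.inv(corner_pin_homography(src, dst))  # output -> texture

def texture_coord(px, py):
    """For an output pixel (in 0..1 space), return where to sample the texture."""
    x, y, w = H_inv @ np.array([px, py, 1.0])
    return x / w, y / w

print(texture_coord(0.5, 0.5))   # lands roughly in the middle of the texture
```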

Similarly, simple distortions can often be represented with a few distortion coefficients: a 1D distortion table for radially symmetric distortion (the simplest being barrel and pincushion), or a 2D set of coefficients such as the Cooke /i3 lens protocol defines, to accommodate the fact that large pieces of glass deviate from the lens math model in an asymmetric fashion (hand polishing can only be so precise). Of course, these ‘parametric’ values can also be rasterized at a particular resolution into a UV map.
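For example, a one-coefficient radial model (a deliberately simplified stand-in, not the Cooke protocol) can be baked into a UV map at a chosen resolution along these lines:

```python
import numpy as np

def radial_distortion_uv(width, height, k1):
    """Rasterize a one-coefficient radial distortion into a UV map.
    The sign convention of k1 (barrel vs pincushion) varies between tools."""
    v, u = np.mgrid[0:height, 0:width].astype(np.float64)
    # Normalized coordinates centred on the optical axis, range -1..1.
    x = 2.0 * u / (width - 1) - 1.0
    y = 2.0 * v / (height - 1) - 1.0
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2             # simple r^2 radial term
    xd, yd = x * scale, y * scale
    # Back to the normalized 0..1 UV convention used by the texture lookup.
    return np.dstack([(xd + 1.0) / 2.0, (yd + 1.0) / 2.0])

uv_map = radial_distortion_uv(1920, 1080, k1=-0.08)   # baked at a chosen resolution
```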

Above, Karta, by Andrew Hazelden, is one example of a pretty elaborate set of projections/transformations, in his case done directly via Lua scripting in Fusion.

Historically (say, as per Pixar RenderMan v1, for example), we used the term UV for direct texture mapping, and the term ST to express indirections (i.e. ST mapping is to sample an ST map to get a new index value into the UV, to then sample the actual texture image). If you only have one such image, UV and ST mapping are the same thing, from which you do direct texture mapping. We will refine the distinction a bit later, but an application like Nuke calls such a node STMap and it takes a UV map as input, and all 3D rendering software refers only to UV maps. That ST mapping itself may also be parametric. Transformations can be concatenated (and thus accumulated into a new map instead of resampling the image multiple times), with some limits we address later.
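In code terms, the indirection is just one extra lookup. A rough sketch, again with unfiltered nearest sampling and illustrative array names:

```python
import numpy as np

def sample(img, coords):
    """Nearest-neighbour lookup of normalized coords (h x w x 2) into img."""
    h, w = img.shape[:2]
    x = np.clip((coords[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
    y = np.clip((coords[..., 1] * (h - 1)).round().astype(int), 0, h - 1)
    return img[y, x]

texture = np.random.rand(512, 512, 3)
uv_map  = np.random.rand(270, 480, 2)   # direct indexing into the texture
st_map  = np.random.rand(270, 480, 2)   # indexing into the UV map itself

direct   = sample(texture, uv_map)                  # UV / direct texture mapping
indirect = sample(texture, sample(uv_map, st_map))  # ST mapping: one extra hop
```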

Techno-culture wayback tidbit: some of the initial hardware texture mapping came from the Quantel Mirage (before the Quantel Harry) and the Ampex ADO DVE. (From: Ampex ADO-1000 Digital Video Effects)

The internet is full of fun videos; check out this one that Stefan Sargent produced for Ampex. In 2021 it’s almost hilarious.

Of cultural interest for someone interested in commercial production in the 1980s, there was also the Bosch FGS-4000 3D animation system. Some people, like Daniel Leduc at Studio Morin Heights in Quebec, were able to get corner points from the Bosch system to drive the ADO and essentially texture map video onto flat geometry, an effect everyone saw ad nauseam in commercials in the ’80s. Yes, the same Daniel who later co-founded Hybride Technologies (now part of Ubisoft, and a player in the movie visual effects space).

Of course, on general-purpose computers, texture mapping was initially popularized by Silicon Graphics.

This pic, above, is a wayback reference from Paul Haeberli, one of the co-authors of the texture map functions in OpenGL (initially developed as a standard at SGI). Source.

To be complete historically, we should note the ‘pyramidal parametrics’ mapping done by Alvy Ray Smith in one of the original computer animation movies, Sunstone by Ed Emshwiller (see above). It’s mentioned as it preceded Digital Video Effects systems.

Inside a material/shading rendering graph, shaders have access to UV mappings, and shaders can be complex; they can even include z displacement on the mesh itself, in that sense altering the UV mapping post-geometry. We use the expression procedural texture mapping when using UV indices to sample a function instead of another image.
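A tiny sketch of procedural texture mapping in that sense, with the classic checkerboard as the function being sampled by the UV indices:

```python
import numpy as np

def checkerboard(u, v, tiles=8):
    """Procedural texture: a function of (u, v) instead of an image lookup."""
    return ((np.floor(u * tiles) + np.floor(v * tiles)) % 2).astype(np.float64)

# Evaluate the procedure at whatever UV coordinates the shader hands us.
v, u = np.mgrid[0:256, 0:256] / 255.0
pattern = checkerboard(u, v)          # 256 x 256 grayscale checker
```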

So, let’s come back to our simple inverse corner pinning (planar warping). This can be described as transposing UV from input space (the texture space) to an output space. The 4 points (a quadrilateral) can be used to change the coordinates to another spatial domain, transposing 0 to 1 to other values that sample the texture with the projection baked in.

An early reference for this method is Paul Heckbert’s Master’s thesis, which proves the mathematical equivalence.

Above, the simplest form of ST mapping would be a crop: 0 to 1 becomes a new 0 to 1 with a different texture size.

UV domain of definition (or reference)

A transformation encoded as UV coordinates has a spatial domain of definition that includes an orientation, which we call forward or inverse, e.g. corner mapping versus inverse corner mapping from the same 4 points. Direct mapping (texture mapping) is thus a spatial look-up into another image: a UV value represents a pixel value to sample (sub-sample) in the texture. There is a tight binding between the UV image’s 0 to 1 normalized representation and the width × height aspect ratio of the texture.

Direct UV mapping has become a workflow in many motion graphics departments as it allows you to create a base 3D rendered scene and continuously re-render it, as in sports programming: change team logos, scores, time of game, etc. when making a game announcement spot or animated lower thirds, or even for replays during intermission if you have a couple of minutes to render a new video.

Above, flat shaded tubular object texture mapped (result and source UV map)

Here’s a nice tutorial that is a good example of how UV map workflows are used in television production / motion graphics these days. It works because it allows the 3D department to make a single animation with “replaceable” elements. The tutorial is from Eran Stern, using a 3D animation done by Promotheus.

Other mappings

Note, to be complete here without becoming too dense: for procedural objects, there is a class of mapping sometimes referred to as UVW mapping (as 3ds Max has always called it), used for 3D solid procedural textures (XYZ), and a variation, UV+depth (imagine a 2D Earth map with terrain elevation at each pixel of the texture). Another form is called volume texture mapping, which approximates voxel (3D pixel) coloring via 2D slices from which values are interpolated.

There is also a form of mapping referred to as reflection mapping (using surface-normal based indexing), and environment mapping / image-based lighting (using the same representations as we use in 360 imaging). This form of environment mapping we first saw from Ned Greene at NYIT (Three Worlds, a 1983 animation). The movie, which I believe I initially saw in 1984 (?), impressed me a lot back then; it is built around a Moebius strip seamlessly joining 3 environments through a camera path travelling across the strip.

Source.

And some environment mapping techniques were later popularized particularly by Paul Debevec (light probes, HDR Shop, etc.).

We refer to Debevec’s historical notes about reflection mapping. The above image relates to light probes converted to the vertical cross cube format using HDR Shop.

Finally, we note that 3D paint programs internally do inverse perspective projection so you can look at what you are doing through a perspective camera but edit the texture space. There were three to four 3D paint programs commercialized in the 1990s, and new ones have since taken over. Essentially, a user points somewhere in screen space; the object hit has UV indices there, which are used to locate the paint brush operating on the texture, which is then reprojected as a texture map onto the model in an interactive loop. For example, see this Mudbox setup.
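One way to picture that loop, as a hedged sketch: the pick returns barycentric weights on the hit triangle, which interpolate the triangle’s vertex UVs to the texel the brush should write to. The helper below is hypothetical, not how any particular 3D paint package is implemented:

```python
import numpy as np

def hit_to_texel(bary, tri_uv, tex_w, tex_h):
    """Interpolate a triangle's vertex UVs at barycentric weights `bary`
    and return the texel the brush should write to."""
    u, v = np.asarray(bary) @ np.asarray(tri_uv)     # weighted sum of the 3 UVs
    return int(round(u * (tex_w - 1))), int(round(v * (tex_h - 1)))

# Example: a pick lands near the middle of a triangle whose UVs span a patch.
tri_uv = [(0.10, 0.20), (0.30, 0.20), (0.20, 0.40)]
print(hit_to_texel((1/3, 1/3, 1/3), tri_uv, 2048, 2048))
```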

Distortion or displacement maps

Some applications, in particular 3D camera trackers, can ingest a lens distortion model as a UV map and/or export a UV map representing the effective per-pixel displacement. The RE:Lens stabilizer allows the same. Of importance here for vocabulary sanity across applications: to the best of my knowledge, at some point PFTrack (The Pixel Farm) started to call such maps ST maps. All fine, but a characteristic of such maps is that they can have values outside the 0 to 1 range (a video does not have pixels outside the 0 to 1 range, although the 3D scene can), and a stabilization process, for example, can create space outside that range.

Such ST maps are also different from how we do inverse UV mapping (which also assumes UV in the 0 to 1 range, further complicating vocabulary synchronization). What we can do with these mappings is simply convert them to a forward displacement map format (the same format as forward motion vectors, i.e. look up where a pixel goes instead of where it comes from) and use a forward displacement mapping tool on the converted mapping. In 2D it’s a simple arithmetic conversion, a function of the location of the value in image space versus the value stored in the ST map. This can then be pushed to a displacement mapping tool like RE:Map Displace (see below).
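A sketch of that arithmetic conversion (conventions for pixel centres and for forward versus inverse orientation vary between applications, so treat this only as the shape of the computation):

```python
import numpy as np

def st_to_displacement(st_map, width, height):
    """Convert an ST/UV style map (normalized, possibly outside 0..1) into a
    per-pixel displacement in pixels, motion-vector style."""
    v, u = np.mgrid[0:height, 0:width].astype(np.float64)
    dx = st_map[..., 0] * (width - 1) - u     # where it samples minus where it sits
    dy = st_map[..., 1] * (height - 1) - v
    return np.dstack([dx, dy])

displacement = st_to_displacement(np.random.rand(1080, 1920, 2), 1920, 1080)
```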

UV unwrapping

You can also inverse/unproject the UV (inverse UV mapping) and generate what we call an unwrapping, so you can paint on or edit/image-process the image if you want (texture prep). You then have warped images, so mileage varies in that space; often, painting directly on the projected view makes more intuitive sense. We recreated this issue with 360 video more recently, and another common indirection is creating something like a cube mapping (6 faces) so you at least have a rectilinear view of what you are editing.

We won’t be discussing stitching here, but for camera arrays and projector arrays (as used in dome theaters, which may have multiple projectors to fill the dome), UV based workflows are common. You can use a UV based workflow with PTGui, for example, rendering UV maps; since the projector layout is always the same, a reusable UV layout with a blend (alpha channel) layout can be applied to all video.

The two images, above, showcase the California Academy of Sciences’ Morrison Dome theater: the hemispherical surface of the dome is covered by the warped and edge-blended output of six Sony 4K projectors, five around the circumference of the dome and one covering the zenith.

Above, a face mapped on UV.

Pre-filling holes simplifies things when the base pass has holes (often issues with edge alignment). The UV remapping can then retake the original alpha while also having the relevant color in place.

This technique (‘scattered data interpolation’) was originally developed at the Apple Technology Group by Peter Litwinowicz (in Apple’s animation system, Inkwell) and Lance Williams, and independently by Thad Beier at PDI (in their morpher; remember the Michael Jackson video).

UV filtering

The individual mesh points are moved (things deform), so now we have triangles that shrink and others that expand (the surface area of a triangle is smaller or larger in texture space).

The most popular technique developed for filtering is mip mapping. See the original Lance Williams paper.

This picture from the article is one of the first pictures I could find that expresses replacing polygonal density with texture, hinting at polygon reduction via high resolution textures.

The basic idea here is that you can take an image and transpose it into a set of images of different resolutions, a pyramid. Then, depending on how much the texture is squeezed/contracted (the area of the triangle after deformation or projection is smaller), you simply sample a different resolution of the pyramid. This allows you, per pixel when rendering, to avoid sampling a super-large area to render a tiny patch. It’s also possible to use the same logic to adaptively sharpen a texture where it expands. The next level above that, now that recent GPU hardware has large VRAM capacity available, is to simply double the texture’s base resolution using AI super-resolution resizing.
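A simplified sketch of the pyramid idea, with plain box filtering and a toy level-selection rule (real renderers use more careful footprint estimates):

```python
import numpy as np

def build_mip_pyramid(texture):
    """Box-filter the texture down toward 1x1, halving each level."""
    levels = [texture.astype(np.float64)]
    while min(levels[-1].shape[:2]) > 1:
        t = levels[-1]
        h, w = (t.shape[0] // 2) * 2, (t.shape[1] // 2) * 2
        t = t[:h, :w]
        levels.append((t[0::2, 0::2] + t[1::2, 0::2] +
                       t[0::2, 1::2] + t[1::2, 1::2]) / 4.0)
    return levels

def mip_level(footprint_px):
    """Pick a level from how many texels one output pixel covers across."""
    return max(0.0, np.log2(max(footprint_px, 1e-6)))

pyramid = build_mip_pyramid(np.random.rand(512, 512, 3))
level = mip_level(footprint_px=4.0)     # a pixel covering ~4 texels -> level 2
```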

Left, no filtering (just sampling nearest point in texture) versus 60% mip-mapping (right).


Above, RE:Vision’s RE:Map UV plug-in screenshot.

Other methods to handle extreme angles, such as ‘anisotropic’ sampling, have been developed over time but are not often used, as they require a lot more texture memory and usually end up affecting just a few pixels in the far-away background. For casual applications as used in motion graphics, if mip mapping is a slider (as texture is typically applied per element), this is usually sufficient and easier to set. Aside from that, the basics have not dramatically changed since.

Note that the adaptive sharpening slider (perhaps unique to RE:Map) is a contemporary implementation of what Heckbert described as a space-variant filter.

UV discontinuities

Some UV maps are continuous (without hard cuts); these, in 32-bit floating point format, are pretty robust to a chain of UV transformations and resampling staying in the UV domain.

However, UV hard edges are tougher. We refer to discontinuities to express the case of texture maps that have seams; these often require additional handling in a game or animation production pipeline. Creating seamless texture maps remains a specialized skill in 2021, and the technically specialized texture artist in the computer animation process is an unsung hero.

Above: Texture seam. Note discontinuity between arm and shoulder, head and body.

RE:Map allows you to special-case UV edges so color is not pulled from the wrong face during filtering. It gives users control over that, as it’s a compromise that can vary with image content.

We should mention two more texture mapping concepts, the sprite sheet (grid) and the texture atlas (UV layouts), where textures also have an ID (in a layout format with associated offset values).

Above: ‘Imposter Sprites’ from Unreal Engine documentation. (picture of normals at right, but same idea).

Mari (Foundry) has popularized layout optimization as ‘UDIM’ (a grid indexing system). The idea is to maximize the density of individual textures in the smallest possible texture footprint, packing UV ‘islands’ as tightly as possible. See here.
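The grid indexing itself is simple arithmetic; a small sketch of the usual UDIM numbering (edge and clamping conventions vary between applications):

```python
import math

def udim_tile(u, v):
    """Return the UDIM tile number for a UV coordinate (1001 = u,v in [0,1))."""
    return 1001 + int(math.floor(u)) + 10 * int(math.floor(v))

def tile_local_uv(u, v):
    """UV relative to its tile's own 0..1 range."""
    return u - math.floor(u), v - math.floor(v)

print(udim_tile(0.25, 0.75))   # 1001
print(udim_tile(1.50, 0.25))   # 1002: one step in U
print(udim_tile(0.50, 1.25))   # 1011: one row of ten tiles up in V
```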

Picture from Houdini doc (tightly packed UV islands).

Projection mapping, lens shaders

There is a special type of shader known as a lens shader that can be used to do image-plane-level distortions. This is supported via an input UV map in camera settings, allowing you, in a ray tracer, to essentially bend the initial ray of each sample, per sample. See here.
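As a rough picture of what such a remap does in a ray tracer, here is a toy pinhole camera where a UV remap perturbs where on the image plane each primary ray looks through. This is an illustration only, not Arnold’s or any renderer’s actual API:

```python
import numpy as np

def primary_ray(px, py, width, height, uv_remap, fov_deg=60.0):
    """Generate a pinhole camera ray, letting a UV remap bend where on the
    image plane the sample actually looks through."""
    u, v = px / (width - 1), py / (height - 1)
    u, v = uv_remap(u, v)                       # the "lens shader" step
    aspect = width / height
    tan_half = np.tan(np.radians(fov_deg) / 2.0)
    x = (2.0 * u - 1.0) * tan_half * aspect
    y = (1.0 - 2.0 * v) * tan_half
    d = np.array([x, y, -1.0])
    return d / np.linalg.norm(d)                # ray direction from the camera origin

# A tiny barrel-style remap standing in for the per-sample distortion.
def barrel(u, v, k1=-0.05):
    x, y = 2 * u - 1, 2 * v - 1
    s = 1.0 + k1 * (x * x + y * y)
    return (x * s + 1) / 2, (y * s + 1) / 2

ray = primary_ray(960, 540, 1920, 1080, barrel)
```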

Above: Arnold renderer has a Remap Camera setting in Maya and 3ds Max – pic shows effect of Arnold UV remap

Above: RE:Lens exports ‘Lens’ distortions to be applied as UV map.

Actual lens shaders (as shading language code) are somewhat renderer specific, particularly as you increase the amount of optical properties supported, and it’s thus often much simpler to pass UV maps around than having to deal with individual, complicated, huge shader assets (sometimes proprietary) for things like that. For example, some Pixar movies like Toy Story 4 have such a lens distortion model in their rendering pipeline.

We use the expression projection mapping when we go through a ‘camera projection’ process. It’s not a new idea: if you (or your uncle) used an animation camera stand in your youth, these had a way to put an exposed film roll in the camera and project it frame by frame onto the animation plane so one could ‘rotoscope’. For example, you pegged some acetate cels, projected the film frame by frame, and traced outlines. The cel animation could then be recorded on high-contrast film for optical printer compositing.

Projection mapping has become popular in video arts, where the video is warped and matted to account for the surface it is projected on. It’s used in live performance, where it can integrate some idea of the geometry onto which it is projected.

Above: Philippe Bergeron’s Paintscaping.

For more information about RE:Vision Effects products head to their website.

Brought to you by Re:Vision Effects:
This article is part of the befores & afters VFX Insight series. If you’d like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here.

