‘Imagine Tilt Brush, which allows you to paint in 3D space, but for hair.’
I’ve been following the work of Hao Li, a USC/ICT researcher and the CEO and co-founder of Pinscreen, for several years now. He and his collaborators have been pushing the art of CG avatars and CG hairstyles in big ways over that time, and now they’re bringing some of it all together for a Real-Time Live! demo at SIGGRAPH 2019.
The demo is a VR hair salon, where people can use a VR hand-controller as a brush to draw hairstrips. The result – aided by A.I. – is an animatable hairstrip in minutes. The Real-Time Live! presentation is related to a UIST paper (Jun Xing, Koki Nagano, Weikai Chen, Haotian Xu, Li-Yi Wei, Yajie Zhao, Jingwan Lu, Byungmoon Kim and Hao Li, Proceedings of the 32nd ACM Symposium on User Interface Software and Technology, October 2019 – UIST 2019).
Here’s Hao’s special preview for befores & afters readers.
b&a: What will your team be demo’ing at Real-Time Live?
Hao Li: This year we will be showing something completely new! We will demonstrate a new VR tool that allows people to create high-quality and highly complex polycard hair models in minutes. Imagine Tilt Brush, which allows you to paint in 3D space, but for hair. You can adjust the width of your brush and draw hairstrips efficiently and intuitively. Instead of simply drawing strokes in space, the system creates an animatable hairstrip from the scalp of an avatar and ensures that the curves are smooth and plausible.
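As a rough illustration of the idea, a raw controller path could be smoothed and offset into an even-width strip. This is a minimal sketch, not the authors’ implementation: it assumes simple Chaikin corner-cutting for smoothing and a fixed side vector for the strip offset (a real tool would orient each quad along the scalp and head surface).

```python
# Hypothetical sketch: turn a jittery 3D controller stroke into the two
# vertex rails of an even-width "polycard" quad strip.
import numpy as np

def smooth_stroke(points, iterations=2):
    """Smooth a controller path with Chaikin corner-cutting,
    keeping the stroke's endpoints fixed."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]   # point 1/4 along each segment
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]   # point 3/4 along each segment
        new = np.empty((2 * len(q), 3))
        new[0::2], new[1::2] = q, r            # interleave the cut points
        pts = np.vstack([pts[:1], new, pts[-1:]])
    return pts

def stroke_to_strip(points, width=0.02, side=(1.0, 0.0, 0.0)):
    """Offset the smoothed centreline by half the brush width on each
    side, producing left/right rails of a quad strip (polycard)."""
    centre = smooth_stroke(points)
    offset = 0.5 * width * np.asarray(side, dtype=float)
    return centre - offset, centre + offset
```

The fixed `side` vector is the big simplification here; the point is only that a sparse, noisy stroke becomes a dense, smooth strip ready to texture as hair.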
But that’s not all: the system has a sophisticated AI that analyzes your strokes and tries to predict what type of hair you are trying to produce. For instance, if you paint a curve behind the head, it may infer, based on a large database of reference hairstyles, that you are trying to create a ponytail. We developed a suggestive system that autocompletes hairstyles based on your stroke history.
The user can select the hair regions they like, or erase those that are irrelevant. This autocomplete mechanism allows us to very quickly create high-quality hair models, recombine them into new ones, and even produce styles that don’t exist in the database. The system also comes with a set of practical UI components such as symmetric painting, hair deformations, and various styling tools.
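One way to picture the suggestive matching step: describe each stroke with a fixed-length descriptor and retrieve the nearest reference style. This is a hypothetical sketch, not Pinscreen’s system (which uses deep neural network inference); the style names and descriptor are invented for illustration.

```python
# Hypothetical sketch of suggestive retrieval: resample a stroke to a
# fixed-length descriptor, then find the nearest reference hairstyle.
import numpy as np

def stroke_descriptor(points, samples=8):
    """Resample a 3D stroke to `samples` points by arc length and
    flatten it into a comparable feature vector."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]                                  # normalised arc length
    u = np.linspace(0.0, 1.0, samples)
    resampled = np.column_stack(
        [np.interp(u, t, pts[:, k]) for k in range(3)])
    return resampled.ravel()

def suggest_style(stroke, database):
    """Return the name of the database style whose descriptor is
    nearest (L2 distance) to the user's stroke descriptor."""
    d = stroke_descriptor(stroke)
    return min(database, key=lambda name: np.linalg.norm(database[name] - d))
```

A real system would match against many strokes and full hairstyle structure, but the retrieve-then-let-the-user-accept-or-erase loop described above is the same shape.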
Our demo will consist of drawing a few challenging hairstyles from scratch within the six minutes allocated on stage, showcasing a high-speed hairstyling procedure for our Pinscreen avatars, performed by a novice user.
b&a: Where did the artist-curated hair data set come from? How is that inputted?
Hao Li: We actually started with the hairstyles from our existing Pinscreen database and augmented those with new hairstyles created using our VR tool.
b&a: Why did you decide to make this a ‘VR’ thing? Is it something that would also work without the immersive approach? What extra abilities does VR give you?
Hao Li: The main author, Dr. Jun Xing, was working on a general immersive 3D modeling tool using VR, and we looked at the pros and cons of such a system. For man-made objects, certain shapes are very difficult to achieve when drawing freely in 3D space, for example perfectly straight lines or other exact measurements. For shapes such as hair, however, it is tedious to produce a large number of curves in space using 2D input devices, yet exact curve positions are not critical for the overall hairstyle.
It became clear to us that using VR for hair modeling would be a clear benefit. We further found that many hairstyles have similar base components (fringe, parting to the left or right, etc.). Hence, there is an opportunity to develop a suggestive system that can try to predict these base hair structures and speed up the hair modeling process.
The main advantage of using VR here is the ability to indicate, in a natural way, the type of hair curves we are trying to achieve – whether they are curly or straight, and how they curve around the head. VR obviously gives us a natural way to inspect our creation, much like a hairstylist is able to see a client while cutting their hair. In addition, we can arbitrarily adjust the size of the head and hair, add new hair, and undo changes. None of this is possible in a physical setting.
b&a: How would you say your past research in digital hair, neural networks and faces has led to this ability to do quick hair modelling?
Hao Li: Our work in the past has mostly focused on creating hair models automatically, especially from a single input image. We did have some work where user strokes in 2D are used to guide the digitization process. We laid the groundwork for hair model retrieval and deep neural network-based inference, which allowed us to develop an effective suggestive hairstyle modeling tool. Combined with Jun’s expertise in UX design and interaction, we were able to create this novel real-time hair modeling paradigm.
b&a: What’s the toughest part of making this ‘work’ for Real-Time Live?
Hao Li: We want to wow the audience and make sure they feel the ease and fun of creating such complex hairstyles. When someone uses this tech it often feels very natural, and the desired hair models are often created so quickly that a viewer can’t really follow what is going on. We will focus on going through the individual features carefully, making sure the audience experiences the creation process as much as possible from the perspective of the user.
b&a: Where do you see this being used the most – games/VFX/mobile?
Hao Li: Most AAA video games, VR applications and mobile games with 3D avatars nowadays use polycards for hairstyles, both for efficiency reasons and for the flexibility to represent a wide range of hairstyles. For hero characters, some studios spend months creating a single high-quality polycard hair model (e.g., Uncharted 4). Our tool can create models of similar quality in a few minutes. I’m confident our tool will be of high interest, at least for secondary characters or for prototyping purposes.
For VFX, our VR hairstyle modeling tool could be of interest for virtual production settings, but for offline visual effects, hair strands are still going to be the method of choice for a while.
‘VR Hair Salon for Avatars’ Contributors
Hao Li – Pinscreen, USC/ICT
Jun Xing – miHoYo Inc.
Koki Nagano – Pinscreen
Liwen Hu – Pinscreen
Li-Yi Wei – Adobe Research
Find out what else is happening at Real-Time Live! at SIGGRAPH here.