Unconstrained Facial Expression Transfer using Style-based Generator

12 Dec 2019  ·  Chao Yang, Ser-Nam Lim

Facial expression transfer and reenactment have been important research problems given their applications in face editing, image manipulation, and fabricated video generation. We present a novel method for image-based facial expression transfer, leveraging the recent style-based GAN, which has been shown to be highly effective at creating realistic-looking images. Given two face images, our method can create plausible results that combine the appearance of one image with the expression of the other. To achieve this, we first propose an optimization procedure based on StyleGAN to infer a hierarchical style vector from an image that disentangles different attributes of the face. We further introduce a linear combination scheme that fuses the style vectors of the two given images and generates a new face combining the expression and appearance of the inputs. Our method produces high-quality syntheses with accurate facial reenactment. Unlike many existing methods, we do not rely on geometry annotations, and our approach can be applied to unconstrained facial images of arbitrary identities without retraining, making it feasible to generate large-scale expression-transferred results.
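The abstract describes two technical steps: an optimization-based inversion that recovers a hierarchical (per-layer) style code from a face image, and a per-layer linear combination that fuses two such codes. Below is a minimal PyTorch sketch of both steps, assuming a hypothetical pretrained StyleGAN generator `G` that maps an 18x512 W+ latent to an image and a perceptual loss such as LPIPS; the loss weighting, layer count, and mixing weights are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def invert(G, perceptual_loss, target, num_layers=18, dim=512,
           steps=1000, lr=0.01):
    """Recover a hierarchical W+ style code such that G(w) matches `target`.

    `G` and `perceptual_loss` are stand-ins for a pretrained StyleGAN
    generator and an LPIPS-style metric; in practice `w` would typically
    be initialized from the generator's average latent rather than zeros.
    """
    w = torch.zeros(1, num_layers, dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        image = G(w)
        # Pixel-wise plus perceptual reconstruction loss (illustrative weighting).
        loss = F.mse_loss(image, target) + perceptual_loss(image, target)
        loss.backward()
        opt.step()
    return w.detach()

def fuse(w_appearance, w_expression, alpha):
    """Linearly combine two W+ codes layer by layer.

    `alpha` holds one weight per style layer: layers weighted toward 1
    take the expression source, layers toward 0 keep the appearance
    source. The paper's actual per-layer weights are not reproduced here.
    """
    a = alpha.view(1, -1, 1)  # broadcast each layer weight over its 512-dim style
    return a * w_expression + (1.0 - a) * w_appearance
```

A fused code, e.g. `fuse(invert(G, lpips, img_a), invert(G, lpips, img_b), alpha)`, would then be passed back through `G` to render the face that combines the appearance of `img_a` with the expression of `img_b`.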
