TediGAN: Text-Guided Diverse Face Image Generation and Manipulation

CVPR 2021 · Weihao Xia, Yujiu Yang, Jing-Hao Xue, Baoyuan Wu

In this work, we propose TediGAN, a novel framework for multi-modal image generation and manipulation with textual descriptions. The proposed method consists of three components: a StyleGAN inversion module, visual-linguistic similarity learning, and instance-level optimization. The inversion module maps real images to the latent space of a well-trained StyleGAN. The visual-linguistic similarity module learns text-image matching by mapping the image and text into a common embedding space. The instance-level optimization preserves identity during manipulation. Our model can produce diverse and high-quality images at an unprecedented resolution of 1024×1024. Using a control mechanism based on style mixing, TediGAN inherently supports image synthesis with multi-modal inputs, such as sketches or semantic labels, with or without instance guidance. To facilitate text-guided multi-modal synthesis, we propose Multi-Modal CelebA-HQ, a large-scale dataset consisting of real face images and corresponding semantic segmentation maps, sketches, and textual descriptions. Extensive experiments on the introduced dataset demonstrate the superior performance of our proposed method. Code and data are available at https://github.com/weihaox/TediGAN.
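The abstract describes the three components only at a high level. The sketch below illustrates one way they could fit together as a latent-optimization loop in PyTorch: an image is (conceptually) inverted into a StyleGAN latent code, a text encoder maps the description into a common embedding space, and the latent is refined with a visual-linguistic similarity term plus an instance-level term for identity preservation. Every module, dimension, and loss weight here is an illustrative placeholder, not the authors' released implementation; see the linked repository for the real code.

```python
# Hypothetical sketch of a TediGAN-style optimization loop.
# All modules below are stand-in placeholders (random projections),
# NOT the pretrained StyleGAN / encoders from the official repo.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, EMBED_DIM = 512, 256

class DummyGenerator(nn.Module):
    """Stand-in for a pretrained StyleGAN generator G: w -> image."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM, 3 * 64 * 64)
    def forward(self, w):
        return torch.tanh(self.fc(w)).view(-1, 3, 64, 64)

class DummyImageEncoder(nn.Module):
    """Stand-in visual encoder: image -> common embedding space."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(3 * 64 * 64, EMBED_DIM)
    def forward(self, img):
        return F.normalize(self.fc(img.flatten(1)), dim=-1)

class DummyTextEncoder(nn.Module):
    """Stand-in text encoder: token ids -> common embedding space."""
    def __init__(self, vocab=1000):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, EMBED_DIM)
    def forward(self, tokens):
        return F.normalize(self.emb(tokens), dim=-1)

def manipulate(real_img, tokens, steps=100, lam_sim=1.0, lam_id=10.0):
    G, E_img, E_txt = DummyGenerator(), DummyImageEncoder(), DummyTextEncoder()
    # 1) "Inversion": start from a latent code for the real image
    #    (random here; the paper trains an encoder into StyleGAN's latent space).
    w = torch.randn(real_img.size(0), LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([w], lr=0.01)
    t_emb = E_txt(tokens).detach()
    for _ in range(steps):
        fake = G(w)
        # 2) Visual-linguistic similarity in the common embedding space.
        sim_loss = 1.0 - F.cosine_similarity(E_img(fake), t_emb).mean()
        # 3) Instance-level term keeping the result close to the input image
        #    (identity preservation; the paper uses stronger perceptual/ID losses).
        id_loss = F.mse_loss(fake, real_img)
        loss = lam_sim * sim_loss + lam_id * id_loss
        opt.zero_grad(); loss.backward(); opt.step()
    return G(w).detach()

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64) * 2 - 1          # dummy "real" image
    toks = torch.randint(0, 1000, (1, 8))            # dummy tokenized caption
    print(manipulate(img, toks, steps=5).shape)      # torch.Size([1, 3, 64, 64])
```

The style-mixing control mentioned in the abstract would replace only a subset of the latent's layer-wise codes with text-driven ones, which this simplified single-vector sketch does not model.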


Datasets


Introduced in the Paper:

Multi-Modal CelebA-HQ

Used in the Paper:

CelebA-HQ
Results

Task: Text-to-Image Generation
Dataset: Multi-Modal-CelebA-HQ
Model: TediGAN-A

Metric    Value    Global Rank
FID       106.37   #6
LPIPS     0.456    #1
Acc       18.4     #2
Real      22.6     #1

Methods