Analyzing and Improving the Image Quality of StyleGAN

The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.
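The path length regularizer mentioned above penalizes the generator when the image-space change produced by a latent-space step deviates from a constant magnitude, encouraging a well-conditioned Jacobian. The sketch below illustrates the idea on a toy linear generator G(w) = A·w, where the Jacobian is simply A; the penalty compares ||J_wᵀ y|| for random image-space directions y against a running target `a`. The linear generator, the EMA decay of 0.01, and all variable names are illustrative assumptions, not the paper's implementation (which backpropagates a Jacobian–vector product through the full generator with lazy regularization).

```python
import numpy as np

rng = np.random.default_rng(0)

def path_length_penalty(jacobian_t_y, a, decay=0.01):
    # jacobian_t_y: J_w^T y for a batch, shape (batch, latent_dim).
    # a: running target for the mean path length (exponential moving average).
    lengths = np.linalg.norm(jacobian_t_y, axis=1)     # per-sample path lengths
    penalty = np.mean((lengths - a) ** 2)              # squared deviation from target
    a_new = (1.0 - decay) * a + decay * lengths.mean() # update the EMA target
    return penalty, a_new

# Toy linear "generator" G(w) = A @ w, so J_w = A and J_w^T y = A^T y.
latent_dim, image_dim, batch = 8, 16, 4
A = rng.normal(size=(image_dim, latent_dim))
# Random image-space directions, scaled so the expected norm is O(1).
y = rng.normal(size=(batch, image_dim)) / np.sqrt(image_dim)
jty = y @ A  # shape (batch, latent_dim), equals (A^T y) row-wise

a = 0.0
penalty, a = path_length_penalty(jty, a)
```

In training, `penalty` would be added (with a weight) to the generator loss, so minimizing it pushes all latent directions toward producing image changes of similar magnitude; this is the "good conditioning" that also makes the generator easier to invert.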

CVPR 2020
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Conditional Image Generation | ArtBench-10 (32x32) | StyleGAN2 | FID | 4.491 | # 4 |
| Image Generation | FFHQ 1024 x 1024 | StyleGAN2 | FID | 2.84 | # 6 |
| Image Generation | LSUN Car 256 x 256 | StyleGAN2 | FID | 2.32 | # 1 |
| Image Generation | LSUN Car 512 x 384 | StyleGAN2 | FID | 2.32 | # 2 |
| Image Generation | LSUN Cat 256 x 256 | StyleGAN2 | FID | 6.93 | # 5 |
| Image Generation | LSUN Cat 256 x 256 | StyleGAN2 | Clean-FID (trainfull) | 6.97 ± 0.16 | # 2 |
| Image Generation | LSUN Churches 256 x 256 | StyleGAN2 | FID | 3.86 | # 9 |
| Image Generation | LSUN Churches 256 x 256 | StyleGAN2 | Clean-FID (trainfull) | 4.28 ± 0.03 | # 2 |
| Image Generation | LSUN Horse 256 x 256 | StyleGAN2 | FID | 3.43 | # 4 |
| Image Generation | LSUN Horse 256 x 256 | StyleGAN2 | Clean-FID (trainfull) | 4.06 ± 0.03 | # 2 |
