Training Deep Generative Models via Auxiliary Supervised Learning

29 Sep 2021 · Yijia Zheng, Yixuan Qiu

Deep generative modeling has long been viewed as a challenging unsupervised learning problem, partly due to the lack of labels and the high dimensionality of the data. Although various latent variable models have been proposed to tackle these difficulties, the latent variable typically serves only as a device to model the observed data, and is averaged out during training. In this article, we show that by introducing a properly pre-trained encoder, the latent variable can play a more important role: it decomposes a deep generative model into a supervised learning problem and a much simpler unsupervised learning task. With this new training method, which we call the auxiliary supervised learning (ASL) framework, deep generative models can benefit from the enormous success of deep supervised learning and representation learning techniques. Through evaluations on various synthetic and real data sets, we demonstrate that ASL is a stable, efficient, and accurate training framework for deep generative models.
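
The abstract gives no implementation details, but the following minimal PyTorch sketch illustrates one plausible reading of the ASL decomposition: a fixed, pre-trained encoder produces latent codes, a decoder is trained by supervised regression to reconstruct the data from those codes, and a simple density model is then fit to the codes. The network shapes, the MSE reconstruction loss, and the diagonal Gaussian latent model are all illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of the ASL idea: supervised decoder training plus a
# simple unsupervised latent model. All architectural choices are assumptions.
import torch
import torch.nn as nn

dim_x, dim_z = 784, 32

# Pre-trained encoder (assumed given; here a stand-in network), frozen during training.
encoder = nn.Sequential(nn.Linear(dim_x, 256), nn.ReLU(), nn.Linear(256, dim_z))
for p in encoder.parameters():
    p.requires_grad_(False)

decoder = nn.Sequential(nn.Linear(dim_z, 256), nn.ReLU(), nn.Linear(256, dim_x))
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

x = torch.rand(512, dim_x)  # placeholder data batch

# Part 1: the supervised learning problem -- regress x on its latent code z = E(x).
for _ in range(100):
    z = encoder(x)
    loss = nn.functional.mse_loss(decoder(z), x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Part 2: the much simpler unsupervised task -- model the latent distribution,
# here with a diagonal Gaussian fit to the codes (an illustrative assumption).
with torch.no_grad():
    z = encoder(x)
    mu, sigma = z.mean(0), z.std(0)

# Generation: sample from the latent model and decode.
z_new = mu + sigma * torch.randn(16, dim_z)
x_new = decoder(z_new)
```

Under this reading, the heavy lifting is moved into the supervised regression step, while the unsupervised component only has to model the low-dimensional latent codes.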
