no code implementations • 13 Dec 2023 • Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik
In distributed training, communication often emerges as a bottleneck.
no code implementations • 25 Feb 2019 • Shunkang Zhang, Yuan Gao, Yuling Jiao, Jin Liu, Yang Wang, Can Yang
To address the challenges in learning deep generative models (e.g., the blurriness of variational auto-encoders and the instability of training generative adversarial networks), we propose a novel deep generative model, named Wasserstein-Wasserstein auto-encoders (WWAE).
1 code implementation • 24 Jan 2019 • Yuan Gao, Yuling Jiao, Yang Wang, Yao Wang, Can Yang, Shunkang Zhang
We propose a general framework to learn deep generative models via Variational Gradient Flow (VGrow) on probability spaces.