
Decoupled Diffusion Models: Simultaneous Image to Zero and Zero to Noise

We propose decoupled diffusion models (DDMs) for high-quality (un)conditioned image generation in fewer than 10 function evaluations. In a nutshell, DDMs decouple the forward image-to-noise mapping into an \textit{image-to-zero} mapping and a \textit{zero-to-noise} mapping. Under this framework, we mathematically derive 1) the training objectives and 2) the reverse-time sampling formula, based on an analytic transition probability that models the image-to-zero transition. The former enables DDMs to learn the noise and image components simultaneously, which simplifies learning. Importantly, because the \textit{zero-to-image} sampling function is analytic, DDMs can avoid ordinary differential equation-based accelerators and instead naturally perform sampling with an arbitrary step size. Under few-function-evaluation setups, DDMs experimentally yield highly competitive performance compared with the state of the art in 1) unconditioned image generation, \textit{e.g.}, on CIFAR-10 and CelebA-HQ-256, and 2) image-conditioned downstream tasks such as super-resolution, saliency detection, edge detection, and image inpainting.
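To make the decoupling concrete, the following is a minimal NumPy sketch of how such a forward process and few-step sampler could be structured, assuming a simple linear image-to-zero schedule and a unit-variance zero-to-noise schedule. The schedules, the `dummy_model` stand-in, and the sampler below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def forward_noising(x0, t, rng):
    """Decoupled forward process at continuous time t in [0, 1]:
    the image component decays toward zero while Gaussian noise grows.
    The linear schedule here is an illustrative assumption."""
    eps = rng.standard_normal(x0.shape)
    xt = (1.0 - t) * x0 + np.sqrt(t) * eps
    return xt, eps

def sample(model, shape, num_steps=5, rng=None):
    """Few-step sampler: because the transition above is analytic in the
    predicted image and noise components, we can jump between arbitrary
    time points instead of integrating an ODE with many small steps."""
    rng = rng or np.random.default_rng(0)
    xt = rng.standard_normal(shape)            # start from pure noise at t = 1
    times = np.linspace(1.0, 0.0, num_steps + 1)
    for t, t_next in zip(times[:-1], times[1:]):
        x0_hat, eps_hat = model(xt, t)         # network predicts both components
        # Analytic transition directly to the earlier time t_next.
        xt = (1.0 - t_next) * x0_hat + np.sqrt(t_next) * eps_hat
    return xt

def dummy_model(xt, t):
    # Placeholder: treats the input as pure noise and predicts a zero image.
    # A trained DDM network would predict both components from (xt, t).
    return np.zeros_like(xt), xt

if __name__ == "__main__":
    out = sample(dummy_model, shape=(1, 3, 32, 32), num_steps=5)
    print(out.shape)  # (1, 3, 32, 32)
```

With a trained network in place of `dummy_model`, the same loop runs with any `num_steps`, which is what allows sampling with an arbitrary step size in the few-evaluation regime described above.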
