no code implementations • 1 Apr 2024 • Siddhant Jain, Daniel Watson, Eric Tabellion, Aleksander Hołyński, Ben Poole, Janne Kontkanen
We present VIDIM, a generative model for video interpolation, which creates short videos given a start and end frame.
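As a rough illustration of endpoint-conditioned sampling (a generic sketch, not VIDIM's actual architecture; `denoiser`, the linear noise schedule, and the clamping heuristic are all hypothetical stand-ins):

```python
import numpy as np

def interpolate_video(denoiser, start, end, n_frames=9, n_steps=256, seed=0):
    """Ancestral diffusion sampling of a short video whose first and last
    frames are clamped to the given endpoints at every step."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, n_steps)            # linear noise schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)

    x = rng.standard_normal((n_frames, *start.shape))   # start from pure noise
    for t in reversed(range(n_steps)):
        eps = denoiser(x, t)                            # predicted noise, same shape as x
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
        x[0], x[-1] = start, end                        # re-impose the known endpoints
    return x
```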
no code implementations • 5 Dec 2023 • Rundi Wu, Ben Mildenhall, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, Aleksander Holynski
3D reconstruction methods such as Neural Radiance Fields (NeRFs) excel at rendering photorealistic novel views of complex scenes.
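For context, the core of NeRF rendering is volume rendering along camera rays; a minimal sketch of the standard quadrature follows (the MLP that produces per-sample densities and colors is omitted):

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite per-sample (density, color) pairs along one ray into a pixel.

    densities: (N,) non-negative volume densities at the ray samples
    colors:    (N, 3) RGB values at the samples
    deltas:    (N,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)                      # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # transmittance to each sample
    weights = trans * alphas                                        # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                  # (3,) pixel color
```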
no code implementations • 15 Feb 2023 • Hshmat Sahak, Daniel Watson, Chitwan Saharia, David Fleet
Diffusion models have shown promising results on single-image super-resolution and other image-to-image translation tasks.
no code implementations • 6 Oct 2022 • Daniel Watson, William Chan, Ricardo Martin-Brualla, Jonathan Ho, Andrea Tagliasacchi, Mohammad Norouzi
We demonstrate that stochastic conditioning significantly improves 3D consistency over a naive sampler for an image-to-image diffusion model that conditions on a single fixed view.
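A minimal sketch of the idea under stated assumptions (the pose-conditional `denoiser` signature and the schedule are hypothetical; the key point is that the conditioning view is re-drawn at every denoising step rather than fixed once):

```python
import numpy as np

def sample_novel_view(denoiser, views, poses, target_pose, n_steps=256, seed=0):
    """Ancestral sampling where each step conditions on a randomly chosen
    known view, pulling the sample toward consistency with all of them."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)

    x = rng.standard_normal(views[0].shape)             # start from pure noise
    for t in reversed(range(n_steps)):
        k = rng.integers(len(views))                    # stochastic conditioning
        eps = denoiser(x, t, views[k], poses[k], target_pose)
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x
```

Replacing `views[k]` with one fixed view recovers the naive sampler the abstract compares against.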
no code implementations • 11 Feb 2022 • Daniel Watson, William Chan, Jonathan Ho, Mohammad Norouzi
We introduce Differentiable Diffusion Sampler Search (DDSS): a method that optimizes fast samplers for any pre-trained diffusion model by differentiating through sample quality scores.
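A sketch of the search loop under stated assumptions: the pretrained `denoiser` stays frozen, only a small set of per-step sampler coefficients is trainable, and `quality_loss` is a hypothetical differentiable stand-in for the paper's sample-quality score. Because the sampling noise enters via reparameterization, gradients flow through the entire short chain:

```python
import torch

def ddss_search(denoiser, quality_loss, img_shape, n_fast_steps=8,
                n_iters=1000, batch=16, lr=1e-3):
    # Trainable degrees of freedom: per-step (scale, eps-weight, noise) coefficients.
    coeffs = torch.nn.Parameter(torch.zeros(n_fast_steps, 3))
    opt = torch.optim.Adam([coeffs], lr=lr)
    for _ in range(n_iters):
        x = torch.randn(batch, *img_shape)              # reparameterized initial noise
        for t in reversed(range(n_fast_steps)):
            a, b, s = torch.sigmoid(coeffs[t]).unbind() # keep coefficients bounded
            eps = denoiser(x, t)                        # frozen pretrained model
            x = a * x - b * eps + s * torch.randn_like(x)
        loss = quality_loss(x)                          # differentiable quality score
        opt.zero_grad()
        loss.backward()                                 # backprop through the sampler
        opt.step()
    return coeffs.detach()
```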
no code implementations • ICLR 2022 • Daniel Watson, William Chan, Jonathan Ho, Mohammad Norouzi
We propose Generalized Gaussian Diffusion Processes (GGDP), a family of non-Markovian samplers for diffusion models, and show how to improve the samples of pre-trained denoising diffusion probabilistic models (DDPMs) by optimizing the degrees of freedom of the GGDP sampler family with respect to a perceptual loss.
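To make "degrees of freedom" concrete: the DDIM family, a special case of such non-Markovian samplers, leaves the per-step noise scale sigma_t free, with sigma_t = 0 giving a deterministic sampler; a GGDP-style search treats coefficients like this as tunable. A standalone sketch of one such update (not the paper's exact parameterization):

```python
import numpy as np

def non_markovian_step(x_t, eps, t, alpha_bar, sigma_t, rng):
    """One DDIM-family update from step t to t-1 with free noise scale sigma_t."""
    x0_hat = (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
    direction = np.sqrt(max(1.0 - alpha_bar[t - 1] - sigma_t**2, 0.0)) * eps
    return (np.sqrt(alpha_bar[t - 1]) * x0_hat
            + direction
            + sigma_t * rng.standard_normal(x_t.shape))
```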
no code implementations • 7 Jun 2021 • Daniel Watson, Jonathan Ho, Mohammad Norouzi, William Chan
Key advantages of DDPMs include ease of training, in contrast to generative adversarial networks, and speed of generation, in contrast to autoregressive models.
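To put the speed comparison in concrete terms (a back-of-the-envelope illustration, not a figure from the paper): an autoregressive model must make one network call per output dimension, e.g. 64 x 64 x 3 = 12,288 sequential calls for a small RGB image, whereas a DDPM makes one call per denoising step, say 1,000 calls in total, regardless of resolution, with every pixel updated in parallel at each step.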
no code implementations • EMNLP 2018 • Daniel Watson, Nasser Zalmout, Nizar Habash
We show that providing the model with word-level features bridges the gap, enabling the neural network approach to achieve a state-of-the-art F1 score on a standard Arabic error correction shared-task dataset.
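A minimal sketch of the general mechanism (hypothetical names and dimensions; the paper's exact feature set differs): each character embedding is concatenated with an embedding of its enclosing word before entering the sequence model, so the character-level model sees word-level context.

```python
import torch
import torch.nn as nn

class CharPlusWordFeatures(nn.Module):
    """Concatenate word-level feature embeddings onto character embeddings."""
    def __init__(self, n_chars, n_words, char_dim=64, word_dim=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.word_emb = nn.Embedding(n_words, word_dim)

    def forward(self, char_ids, word_ids):
        # char_ids, word_ids: (batch, seq_len); word_ids[b, j] identifies the
        # word containing character j, so word context repeats across its chars.
        return torch.cat([self.char_emb(char_ids), self.word_emb(word_ids)], dim=-1)
```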