no code implementations • 2 Feb 2024 • Tuan Anh Le, Pavel Sountsov, Matthew D. Hoffman, Ben Lee, Brian Patton, Rif A. Saurous
How do we infer a 3D scene from a single image in the presence of corruptions like rain, snow or fog?
no code implementations • NeurIPS 2023 • Du Phan, Matthew D. Hoffman, David Dohan, Sholto Douglas, Tuan Anh Le, Aaron Parisi, Pavel Sountsov, Charles Sutton, Sharad Vikram, Rif A. Saurous
Large language models (LLMs) solve problems more accurately and interpretably when instructed to work out the answer step by step using a "chain-of-thought" (CoT) prompt.
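As an illustration of what such a prompt looks like, here is a generic zero-shot chain-of-thought wrapper (an illustrative sketch, not the paper's exact prompt):

```python
def cot_prompt(question: str) -> str:
    """Wrap a question in a zero-shot chain-of-thought prompt.

    The trailing instruction cues the model to emit intermediate
    reasoning steps before its final answer.
    """
    return (
        f"Q: {question}\n"
        "A: Let's think step by step."
    )

prompt = cot_prompt(
    "If a train travels 60 miles in 1.5 hours, what is its average speed?"
)
```

The model's completion then contains the worked reasoning, from which the final answer is extracted.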
no code implementations • 27 Oct 2022 • Matthew D. Hoffman, Tuan Anh Le, Pavel Sountsov, Christopher Suter, Ben Lee, Vikash K. Mansinghka, Rif A. Saurous
The problem of inferring object shape from a single 2D image is underconstrained.
no code implementations • ICLR 2022 • Erik Nijkamp, Ruiqi Gao, Pavel Sountsov, Srinivas Vasudevan, Bo Pang, Song-Chun Zhu, Ying Nian Wu
However, MCMC sampling of EBMs in high-dimensional data space generally does not mix, because the energy function, which is usually parameterized by a deep network, is highly multi-modal in the data space.
no code implementations • 12 Jun 2020 • Erik Nijkamp, Ruiqi Gao, Pavel Sountsov, Srinivas Vasudevan, Bo Pang, Song-Chun Zhu, Ying Nian Wu
Learning an energy-based model (EBM) requires MCMC sampling from the learned model as an inner loop of the learning algorithm.
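A minimal sketch of this inner-loop structure, using short-run Langevin dynamics on a toy one-parameter energy family (the quadratic energy, step sizes, and learning rate are illustrative assumptions, not the paper's setup):

```python
import numpy as np


def langevin_sample(grad_energy, x0, step_size=0.05, n_steps=200, rng=None):
    """Short-run Langevin MCMC: x <- x - (s/2) * grad E(x) + sqrt(s) * noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = (x - 0.5 * step_size * grad_energy(x)
             + np.sqrt(step_size) * rng.standard_normal(x.shape))
    return x


def ebm_learning_step(theta, data, rng, lr=0.1):
    """One maximum-likelihood update for the toy family E_theta(x) = 0.5*theta*x^2."""
    grad_energy = lambda x: theta * x       # dE/dx for the current parameters
    dE_dtheta = lambda x: 0.5 * x ** 2      # dE/dtheta
    # Inner loop: MCMC draws approximate samples from the current model.
    x_model = langevin_sample(grad_energy, rng.standard_normal(data.shape), rng=rng)
    # Gradient of the log-likelihood: E_model[dE/dtheta] - E_data[dE/dtheta].
    theta += lr * (dE_dtheta(x_model).mean() - dE_dtheta(data).mean())
    return theta


rng = np.random.default_rng(1)
# Data drawn from the model with theta = 2, i.e. N(0, 1/2).
data = rng.normal(0.0, 1.0 / np.sqrt(2.0), size=500)
theta = 1.0
for _ in range(200):
    theta = ebm_learning_step(theta, data, rng)
```

The learned parameter drifts toward the data-generating value; the expense and poor mixing of this inner MCMC loop in high dimensions is exactly the difficulty the papers above address.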
no code implementations • 4 Feb 2020 • Junpeng Lao, Christopher Suter, Ian Langmore, Cyril Chimisov, Ashish Saxena, Pavel Sountsov, Dave Moore, Rif A. Saurous, Matthew D. Hoffman, Joshua V. Dillon
Markov chain Monte Carlo (MCMC) is widely regarded as one of the most important algorithms of the 20th century.
1 code implementation • 9 Mar 2019 • Matthew Hoffman, Pavel Sountsov, Joshua V. Dillon, Ian Langmore, Dustin Tran, Srinivas Vasudevan
Hamiltonian Monte Carlo is a powerful algorithm for sampling from difficult-to-normalize posterior distributions.
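The core of the algorithm fits in a few lines; the following is a minimal NumPy sketch (the step size, path length, and standard-Gaussian target are arbitrary demonstration choices, not anything from the paper):

```python
import numpy as np


def leapfrog(q, p, grad_logp, step_size, n_steps):
    """Leapfrog integration of Hamiltonian dynamics for H = -log p(q) + p.p/2."""
    q, p = q.copy(), p.copy()
    p += 0.5 * step_size * grad_logp(q)          # initial half step for momentum
    for _ in range(n_steps - 1):
        q += step_size * p
        p += step_size * grad_logp(q)
    q += step_size * p
    p += 0.5 * step_size * grad_logp(q)          # final half step for momentum
    return q, p


def hmc(logp, grad_logp, q0, n_samples=2000, step_size=0.2, n_steps=10, seed=0):
    """Hamiltonian Monte Carlo with a Metropolis correction."""
    rng = np.random.default_rng(seed)
    q = np.asarray(q0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(q.shape)          # resample momentum
        h_old = -logp(q) + 0.5 * p @ p
        q_new, p_new = leapfrog(q, p, grad_logp, step_size, n_steps)
        h_new = -logp(q_new) + 0.5 * p_new @ p_new
        if np.log(rng.random()) < h_old - h_new:  # accept/reject on energy error
            q = q_new
        samples.append(q.copy())
    return np.array(samples)


# Sample from a standard 2D Gaussian as a toy target.
logp = lambda q: -0.5 * q @ q
grad_logp = lambda q: -q
samples = hmc(logp, grad_logp, np.zeros(2))
```

The paper above ("NeuTra") preconditions exactly this kind of sampler with a learned neural transport map to handle posteriors whose geometry makes naive HMC mix poorly.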
no code implementations • EMNLP 2016 • Pavel Sountsov, Sunita Sarawagi
Encoder-decoder networks are popular for modeling sequences probabilistically in many applications.