no code implementations • 21 Aug 2023 • Jessica Hullman, Ari Holtzman, Andrew Gelman
In this essay, we focus on an unresolved tension when we bring this dilemma to bear in the context of generative AI: are we looking for proof that generated media reflects something about the conditions that created it or some eternal human essence?
1 code implementation • 8 Feb 2023 • Han Guo, Philip Greengard, Hongyi Wang, Andrew Gelman, Yoon Kim, Eric P. Xing
A recent alternative formulation instead treats federated learning as a distributed inference problem, where the goal is to infer a global posterior from partitioned client data (Al-Shedivat et al., 2021).
no code implementations • 12 Mar 2022 • Jessica Hullman, Sayash Kapoor, Priyanka Nanayakkara, Andrew Gelman, Arvind Narayanan
We conclude by discussing risks that arise when sources of errors are misdiagnosed and the need to acknowledge the role of human inductive biases in learning and reform.
no code implementations • 5 Dec 2021 • Tamara Broderick, Andrew Gelman, Rachael Meager, Anna L. Smith, Tian Zheng
Probabilistic machine learning increasingly informs critical decisions in medicine, economics, politics, and beyond.
5 code implementations • 9 Aug 2021 • Lu Zhang, Bob Carpenter, Andrew Gelman, Aki Vehtari
Pathfinder returns draws from the approximation with the lowest estimated Kullback-Leibler (KL) divergence to the true posterior.
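The selection step described here can be sketched in a few lines. Everything below is a toy stand-in, not Pathfinder itself: a hypothetical unnormalized target `log_p` and a hand-picked list of candidate Gaussians, with the winner chosen by Monte Carlo ELBO, since maximizing the ELBO minimizes KL divergence to the posterior up to the unknown normalizing constant.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(z):
    # Hypothetical unnormalized log target: N(3, 1) up to a constant
    return -0.5 * (z - 3.0) ** 2

def elbo(mu, sigma, n=4000):
    # Monte Carlo ELBO of q = N(mu, sigma^2) against log_p;
    # maximizing the ELBO minimizes KL(q || posterior) up to a constant
    z = rng.normal(mu, sigma, size=n)
    log_q = -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)
    return np.mean(log_p(z) - log_q)

# Hand-picked candidate approximations (in Pathfinder these would come
# from points along a quasi-Newton optimization path)
candidates = [(0.0, 1.0), (1.5, 1.0), (3.0, 1.0)]
best = max(candidates, key=lambda c: elbo(*c))
```

In the actual algorithm the candidates come from the optimization trajectory; here they are fixed by hand to isolate the lowest-KL selection step.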
1 code implementation • 22 Jan 2021 • Yuling Yao, Gregor Pirš, Aki Vehtari, Andrew Gelman
We show that stacking is most effective when model predictive performance is heterogeneous in inputs, and we can further improve the stacked mixture with a hierarchical model.
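A minimal sketch of why input-heterogeneous performance matters (toy models and data, not the paper's hierarchical stacking model): when each model predicts well in a different region of input space, weights that depend on the input beat the best single simplex weight.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: the truth switches regimes with the input x
x = rng.uniform(-2, 2, size=500)
y = np.where(x < 0, -1.0, 1.0) + rng.normal(0, 0.5, size=500)

def normal_pdf(y, mu, sigma=0.5):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Model 1 predicts mu = -1 everywhere; model 2 predicts mu = +1 everywhere
p1, p2 = normal_pdf(y, -1.0), normal_pdf(y, 1.0)

# Global stacking: one weight for all inputs (grid search on the simplex)
grid = np.linspace(0, 1, 101)
global_score = max(np.mean(np.log(w * p1 + (1 - w) * p2)) for w in grid)

# Input-dependent weights: a step function in x, standing in for the
# hierarchical model over stacking weights
w_x = np.where(x < 0, 1.0, 0.0)
pointwise_score = np.mean(np.log(w_x * p1 + (1 - w_x) * p2))
```

The gap between the two log scores is the headroom that input-dependent stacking targets.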
1 code implementation • 30 Nov 2020 • Andrew Gelman, Aki Vehtari
We review the most important statistical ideas of the past half century, which we categorize as: counterfactual causal inference, bootstrapping and simulation-based inference, overparameterized models and regularization, Bayesian multilevel models, generic computation algorithms, adaptive decision analysis, robust inference, and exploratory data analysis.
Causal Inference • Methodology

1 code implementation • 1 Sep 2020 • Yuling Yao, Collin Cademartori, Aki Vehtari, Andrew Gelman
The normalizing constant plays an important role in Bayesian computation, and there is a large literature on methods for computing or approximating normalizing constants that cannot be evaluated in closed form.
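The simplest member of that literature is plain importance sampling of the normalizing constant, sketched below with a toy target whose constant is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)

def q_tilde(z):
    # Unnormalized density exp(-z^2 / 2); true constant is sqrt(2*pi)
    return np.exp(-0.5 * z ** 2)

# Proposal g with a known, closed-form density
sigma_g = 1.5
z = rng.normal(0.0, sigma_g, size=200_000)
g = np.exp(-0.5 * (z / sigma_g) ** 2) / (sigma_g * np.sqrt(2 * np.pi))

# Importance-sampling estimate: Z = E_g[q_tilde(z) / g(z)]
Z_hat = np.mean(q_tilde(z) / g)
```

This baseline degrades quickly in high dimensions or with a poorly matched proposal, which is what motivates the more elaborate estimators this literature studies.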
Computation • Methodology
1 code implementation • 22 Jun 2020 • Yuling Yao, Aki Vehtari, Andrew Gelman
When working with multimodal Bayesian posterior distributions, Markov chain Monte Carlo (MCMC) algorithms have difficulty moving between modes, and default variational or mode-based approximate inferences will understate posterior uncertainty.
2 code implementations • 19 Aug 2019 • Yuxiang Gao, Lauren Kennedy, Daniel Simpson, Andrew Gelman
A central theme in the field of survey statistics is estimating population-level quantities through data coming from potentially non-representative samples of the population.
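Poststratification, the core correction used in this line of work, can be shown with a deterministic two-cell toy example (all numbers hypothetical): reweight within-cell estimates by known population shares rather than by sample counts.

```python
import numpy as np

# Toy poststratification: two demographic cells with known population shares
pop_share = np.array([0.7, 0.3])     # population proportion in each cell
cell_mean = np.array([0.40, 0.80])   # estimated outcome within each cell

# The sample over-represents cell 2, so the raw sample mean is biased
sample_n = np.array([100, 300])
raw_mean = np.sum(sample_n * cell_mean) / sample_n.sum()

# Poststratified estimate reweights the cells to their population shares
post_mean = np.sum(pop_share * cell_mean)
```

Here the raw mean is 0.70 while the poststratified estimate is 0.52; the multilevel-regression half of MRP is about stabilizing the within-cell estimates when cells are sparse.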
Methodology
2 code implementations • 19 Mar 2019 • Aki Vehtari, Andrew Gelman, Daniel Simpson, Bob Carpenter, Paul-Christian Bürkner
In this paper we show that the convergence diagnostic $\widehat{R}$ of Gelman and Rubin (1992) has serious flaws.
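For reference, a minimal implementation of the split-$\widehat{R}$ that the paper starts from (synthetic chains; the paper's proposed fix additionally rank-normalizes and folds the draws before this computation):

```python
import numpy as np

def split_rhat(chains):
    # chains: shape (n_chains, n_draws); split each chain in half first
    n = chains.shape[1] // 2
    halves = np.concatenate([chains[:, :n], chains[:, n:2 * n]], axis=0)
    means = halves.mean(axis=1)
    B = n * means.var(ddof=1)               # between-chain variance
    W = halves.var(axis=1, ddof=1).mean()   # within-chain variance
    var_plus = (n - 1) / n * W + B / n      # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(3)
mixed = rng.normal(0, 1, size=(4, 1000))                # well-mixed chains
stuck = mixed + np.array([[0.0], [0.0], [0.0], [5.0]])  # one chain far away
```

The stuck chain inflates the between-chain variance B, pushing $\widehat{R}$ well above 1; the flaws the paper documents concern heavy tails and variance-only nonconvergence, which this plain version misses.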
Computation • Methodology
6 code implementations • 18 Apr 2018 • Sean Talts, Michael Betancourt, Daniel Simpson, Aki Vehtari, Andrew Gelman
Verifying the correctness of Bayesian computation is challenging.
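The simulation-based calibration loop can be sketched on a conjugate toy model where the posterior is exact, so the rank statistics should be uniform (the model, number of ranks, and bin count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Conjugate toy model: theta ~ N(0, 1), y | theta ~ N(theta, 1);
# the exact posterior is N(y/2, 1/2), so SBC ranks should be uniform
L, n_sims = 99, 2000
ranks = np.empty(n_sims, dtype=int)
for i in range(n_sims):
    theta = rng.normal(0, 1)                        # draw from the prior
    y = rng.normal(theta, 1)                        # simulate data
    post = rng.normal(y / 2, np.sqrt(0.5), size=L)  # exact posterior draws
    ranks[i] = np.sum(post < theta)                 # rank statistic in 0..L

# Uniformity check: chi-square statistic over 10 rank bins
counts, _ = np.histogram(ranks, bins=10, range=(0, L + 1))
chi2 = np.sum((counts - n_sims / 10) ** 2 / (n_sims / 10))
```

A broken sampler in place of the exact posterior draws would show up as a skewed or U-shaped rank histogram, i.e. a large chi-square statistic.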
Methodology
1 code implementation • ICML 2018 • Yuling Yao, Aki Vehtari, Daniel Simpson, Andrew Gelman
While it's always possible to compute a variational approximation to a posterior distribution, it can be difficult to discover problems with this approximation.
2 code implementations • 5 Sep 2017 • Jonah Gabry, Daniel Simpson, Aki Vehtari, Michael Betancourt, Andrew Gelman
Bayesian data analysis is about more than just computing a posterior distribution, and Bayesian visualization is about more than trace plots of Markov chains.
Methodology • Applications
2 code implementations • 6 Apr 2017 • Yuling Yao, Aki Vehtari, Daniel Simpson, Andrew Gelman
The widely recommended procedure of Bayesian model averaging is flawed in the M-open setting in which the true data-generating process is not one of the candidate models being fit.
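The M-open failure mode can be seen in a toy example (candidate models and data-generating process are hypothetical): the data come from a 50/50 mixture that neither candidate matches, and stacking, sketched here as a grid search over the simplex of the mean log score, keeps a genuine mixture, whereas BMA would asymptotically put all its weight on one model.

```python
import numpy as np

rng = np.random.default_rng(5)

# M-open toy: data are a 50/50 mixture of N(-2, 1) and N(2, 1),
# but the candidate models are the single Gaussians N(-2, 1) and N(2, 1)
y = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])

def lpd(y, mu):
    return -0.5 * (y - mu) ** 2 - 0.5 * np.log(2 * np.pi)

p1, p2 = np.exp(lpd(y, -2.0)), np.exp(lpd(y, 2.0))

# Stacking: choose the simplex weight maximizing the mean log score
grid = np.linspace(0, 1, 201)
scores = [np.mean(np.log(w * p1 + (1 - w) * p2 + 1e-300)) for w in grid]
w_stack = grid[int(np.argmax(scores))]
```

The stacked weight lands near 0.5 and the stacked log score is far better than either single model's, which is the predictive case for stacking over BMA in the M-open setting.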
Methodology • Computation
4 code implementations • 2 Mar 2016 • Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, David M. Blei
Probabilistic modeling is iterative.
7 code implementations • 16 Jul 2015 • Aki Vehtari, Andrew Gelman, Jonah Gabry
Leave-one-out cross-validation (LOO) and the widely applicable information criterion (WAIC) are methods for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model using the log-likelihood evaluated at the posterior simulations of the parameter values.
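Given an S × N matrix of pointwise log-likelihood values at the posterior simulations, WAIC is a few lines of array arithmetic; the matrix below is simulated from a stand-in normal-mean model, not taken from a real fit.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-in posterior: normal mean model, N data points, S posterior draws
S, N = 4000, 50
y = rng.normal(0.3, 1, N)
theta = rng.normal(y.mean(), 1 / np.sqrt(N), S)

# S x N matrix of pointwise log-likelihood at the posterior simulations
log_lik = -0.5 * (y[None, :] - theta[:, None]) ** 2 - 0.5 * np.log(2 * np.pi)

# lppd: log pointwise predictive density (stable log-mean-exp over draws)
m = log_lik.max(axis=0)
lppd = np.sum(np.log(np.mean(np.exp(log_lik - m), axis=0)) + m)

# p_waic: summed posterior variances of the pointwise log-likelihood,
# acting as the effective number of parameters
p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
elpd_waic = lppd - p_waic
```

PSIS-LOO works from the same log-likelihood matrix but importance-weights each point's draws instead of applying the p_waic correction.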
Computation • Methodology
9 code implementations • 9 Jul 2015 • Aki Vehtari, Daniel Simpson, Andrew Gelman, Yuling Yao, Jonah Gabry
Importance weighting is a general way to adjust Monte Carlo integration to account for draws from the wrong distribution, but the resulting estimate can be highly variable when the importance ratios have a heavy right tail.
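The heavy right tail is easy to reproduce: with a proposal narrower than the target, the importance ratios have a polynomial tail. Below, a simple Hill-type estimate of the tail shape k stands in for the paper's generalized Pareto fit (the estimator and cutoff are illustrative choices, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(7)

# Draws from the wrong distribution: proposal N(0, 1), target N(0, 1.5^2).
# The importance ratios then have a heavy polynomial right tail.
z = rng.normal(0, 1, size=50_000)
log_r = (-0.5 * (z / 1.5) ** 2 - np.log(1.5)) - (-0.5 * z ** 2)
r = np.exp(log_r - log_r.max())  # normalize for numerical comfort

# Hill-type estimate of the tail shape k from the top M ratios
# (an illustrative stand-in for the paper's generalized Pareto fit)
M = int(3 * np.sqrt(len(r)))
tail = np.sort(r)[-M:]
k_hat = np.mean(np.log(tail[1:] / tail[0]))
```

The paper's rule of thumb is that estimates become unreliable once the fitted shape exceeds about 0.7; Pareto smoothing then replaces the largest raw ratios with expected order statistics of the fitted generalized Pareto distribution rather than truncating them.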
no code implementations • NeurIPS 2015 • Alp Kucukelbir, Rajesh Ranganath, Andrew Gelman, David M. Blei
With ADVI we can use variational inference on any model we write in Stan.
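The core ADVI move, a Gaussian approximation optimized by stochastic gradient ascent on the ELBO via the reparameterization trick, fits in a few lines. The target, step size, and sample count below are toy assumptions, and real ADVI additionally transforms constrained parameters to the real line before fitting.

```python
import numpy as np

rng = np.random.default_rng(9)

def grad_log_p(z):
    # Toy unnormalized target: N(2, 1), so grad log p(z) = -(z - 2)
    return -(z - 2.0)

# Gaussian approximation q = N(mu, exp(omega)^2), updated by stochastic
# gradient ascent on the ELBO with the reparameterization z = mu + exp(omega)*eps
mu, omega = 0.0, 0.0
lr, n_mc = 0.05, 32
for _ in range(500):
    eps = rng.normal(size=n_mc)
    z = mu + np.exp(omega) * eps
    g = grad_log_p(z)
    mu += lr * np.mean(g)
    # the entropy of q contributes +1 to the omega gradient
    omega += lr * (np.mean(g * eps * np.exp(omega)) + 1.0)
```

Because only `grad_log_p` is model-specific, the same loop applies to any differentiable log density, which is what lets Stan run it on arbitrary user-written models.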
2 code implementations • 16 Dec 2014 • Aki Vehtari, Andrew Gelman, Tuomas Sivula, Pasi Jylänki, Dustin Tran, Swupnil Sahai, Paul Blomstedt, John P. Cunningham, David Schiminovich, Christian Robert
A common divide-and-conquer approach for Bayesian computation with big data is to partition the data, perform local inference for each piece separately, and combine the results to obtain a global posterior approximation.
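The simplest instance of this scheme is exact for a normal model with a flat prior: Gaussian subposteriors combine by adding precisions and precision-weighting means. The data split below is a toy, and the paper's expectation propagation approach iterates this kind of combination rather than doing it once.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy partition: normal data with known unit variance, split over 4 machines
theta_true = 1.3
shards = [rng.normal(theta_true, 1, size=250) for _ in range(4)]

# Local inference: with a flat prior, each shard's subposterior for the mean
# is N(shard mean, 1/n_shard), stored here as (mean, precision)
sub = [(s.mean(), len(s)) for s in shards]

# Combine by multiplying the Gaussian subposteriors:
# precisions add, and means combine precision-weighted
prec = sum(p for _, p in sub)
mu = sum(m * p for m, p in sub) / prec
```

In this conjugate toy case the combined posterior matches the full-data posterior exactly; the interesting (and harder) cases are non-Gaussian subposteriors, which is where the EP framing earns its keep.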
8 code implementations • 18 Nov 2011 • Matthew D. Hoffman, Andrew Gelman
Hamiltonian Monte Carlo (HMC) is a Markov chain Monte Carlo (MCMC) algorithm that avoids the random walk behavior and sensitivity to correlated parameters that plague many MCMC methods by taking a series of steps informed by first-order gradient information.
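The "series of steps informed by first-order gradient information" is the leapfrog integrator, sketched here on a standard-normal target (the step size and trajectory length are arbitrary choices; NUTS's contribution is choosing the trajectory length adaptively):

```python
# Leapfrog integrator on a standard-normal target, where the potential
# is U(q) = q^2 / 2 and its gradient is simply q
def grad_U(q):
    return q

def leapfrog(q, p, eps, n_steps):
    # Simulate Hamiltonian dynamics: half momentum step, alternating
    # full position/momentum steps, closing half momentum step
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

# The Hamiltonian H = U(q) + p^2/2 is nearly conserved for small step sizes
q0, p0 = 1.0, 0.5
q1, p1 = leapfrog(q0, p0, eps=0.01, n_steps=100)
H0 = 0.5 * q0 ** 2 + 0.5 * p0 ** 2
H1 = 0.5 * q1 ** 2 + 0.5 * p1 ** 2
```

Near-conservation of H is what keeps the Metropolis acceptance probability high even for long, gradient-guided trajectories.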