no code implementations • 24 Feb 2023 • Randal Douc, Sylvain Le Corff
This paper introduces a general framework for iterative optimization algorithms and establishes, under general assumptions, that their convergence is asymptotically geometric.
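As a minimal illustration of geometric convergence (not the paper's framework), gradient descent on a strongly convex quadratic shrinks the error by a constant factor at every iteration; all names below are assumptions chosen for the sketch.

```python
# Illustrative sketch only: gradient descent on f(x) = x^2.
# For this objective, x_{k+1} = x_k - step * f'(x_k) = (1 - 2*step) * x_k,
# so the error contracts by the constant factor |1 - 2*step| per step,
# which is exactly geometric convergence.
def gradient_descent(x0, step, n_iters):
    """Run n_iters steps of x <- x - step * f'(x), with f(x) = x^2."""
    x = x0
    trajectory = [x]
    for _ in range(n_iters):
        x = x - step * 2.0 * x  # f'(x) = 2x
        trajectory.append(x)
    return trajectory

traj = gradient_descent(x0=1.0, step=0.4, n_iters=10)
# Successive error ratios |x_{k+1}| / |x_k| are all (close to) 0.2,
# i.e. the contraction factor |1 - 2 * 0.4|.
ratios = [abs(b) / abs(a) for a, b in zip(traj, traj[1:])]
print(ratios[:3])
```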
no code implementations • NeurIPS 2021 • Kamélia Daudel, Randal Douc
This paper focuses on $\alpha$-divergence minimisation methods for Variational Inference.
no code implementations • 9 Mar 2021 • Kamélia Daudel, Randal Douc, François Roueff
In this paper, we introduce a novel family of iterative algorithms which carry out $\alpha$-divergence minimisation in a Variational Inference context.
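For context (this is the standard quantity, not the paper's iterative scheme), one common convention for the $\alpha$-divergence between two discrete distributions is $D_\alpha(p\,\|\,q) = \frac{1}{\alpha(\alpha-1)}\big(\sum_i p_i^\alpha q_i^{1-\alpha} - 1\big)$; Variational Inference methods of this kind seek to minimise it over a variational family. A small sketch under that convention:

```python
# Hedged sketch: alpha-divergence between two discrete distributions,
# using the convention D_alpha(p || q) = (sum_i p_i^a * q_i^(1-a) - 1) / (a*(a-1)).
# It is nonnegative and equals zero iff p == q (for alpha not in {0, 1}).
def alpha_divergence(p, q, alpha):
    if alpha in (0.0, 1.0):
        raise ValueError("alpha in {0, 1} corresponds to KL limits; handle separately")
    s = sum(pi ** alpha * qi ** (1.0 - alpha) for pi, qi in zip(p, q))
    return (s - 1.0) / (alpha * (alpha - 1.0))

p = [0.2, 0.3, 0.5]
q = [0.3, 0.3, 0.4]
print(alpha_divergence(p, q, alpha=0.5))  # strictly positive since p != q
print(alpha_divergence(p, p, alpha=0.5))  # zero when the distributions match
```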
3 code implementations • 9 Jul 2020 • Mathieu Gerber, Randal Douc
We introduce G-PFSO, a new online algorithm for expected log-likelihood maximization in situations where the objective function is multi-modal and/or has saddle points.
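As a generic point of comparison (this is plain Robbins-Monro stochastic gradient ascent, not the G-PFSO algorithm), online expected log-likelihood maximization updates the parameter from one observation at a time; here we fit the mean of a unit-variance Gaussian from a simulated stream, with all constants chosen for the sketch.

```python
import random

# Generic illustration: maximise E[log p_theta(Y)] online by stochastic
# gradient ascent. For Y ~ N(theta, 1), the per-observation score is
# d/dtheta log N(y; theta, 1) = y - theta.
random.seed(0)
true_mean = 2.0   # assumed data-generating mean (illustrative)
theta = 0.0       # initial parameter estimate

for t in range(1, 5001):
    y = random.gauss(true_mean, 1.0)   # one new observation arrives
    grad = y - theta                   # score of the Gaussian log-likelihood
    theta += grad / t                  # Robbins-Monro step size 1/t

# With step size 1/t this recursion is exactly the running sample mean,
# so theta converges to true_mean as observations accumulate.
print(theta)
```

Note that such plain stochastic gradient schemes can stall at saddle points or local modes of a multi-modal objective, which is the regime the entry above targets.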