2 code implementations • 6 Jan 2022 • Louis Faury, Marc Abeille, Kwang-Sung Jun, Clément Calauzènes
Logistic Bandits have recently undergone careful scrutiny by virtue of their combined theoretical and practical relevance.
no code implementations • 9 Mar 2021 • Louis Faury, Yoan Russac, Marc Abeille, Clément Calauzènes
Generalized Linear Bandits (GLBs) are powerful extensions to the Linear Bandit (LB) setting, broadening the benefits of reward parametrization beyond linearity.
no code implementations • 13 Nov 2020 • Otmane Sakhi, Louis Faury, Flavian Vasile
Our approach relies on the construction of asymptotic confidence intervals for offline contextual bandits through the DRO framework.
no code implementations • 2 Nov 2020 • Yoan Russac, Louis Faury, Olivier Cappé, Aurélien Garivier
Contextual sequential decision problems with categorical or numerical observations are ubiquitous and Generalized Linear Bandits (GLB) offer a solid theoretical framework to address them.
no code implementations • 23 Oct 2020 • Marc Abeille, Louis Faury, Clément Calauzènes
Faury et al. (2020) showed that the learning-theoretic difficulties of Logistic Bandits can be embodied by a large (sometimes prohibitively so) problem-dependent constant $\kappa$, which characterizes the magnitude of the reward's non-linearity.
no code implementations • ICML 2020 • Louis Faury, Marc Abeille, Clément Calauzènes, Olivier Fercoq
For logistic bandits, the frequentist regret guarantees of existing algorithms are $\tilde{\mathcal{O}}(\kappa \sqrt{T})$, where $\kappa$ is a problem-dependent constant.
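As an illustration of the constant $\kappa$, the sketch below follows one common definition from this line of work: $\kappa = \sup_x 1/\dot\mu(x^\top\theta_\star)$, where $\mu$ is the logistic link and $\dot\mu = \mu(1-\mu)$. The function names and the toy arm set are hypothetical; this is a minimal sketch, not the paper's implementation.

```python
import numpy as np

def mu(z):
    # logistic link function
    return 1.0 / (1.0 + np.exp(-z))

def mu_dot(z):
    # derivative of the logistic link: mu(z) * (1 - mu(z))
    return mu(z) * (1.0 - mu(z))

def kappa(arms, theta):
    # kappa = sup over the arm set of 1 / mu_dot(x^T theta):
    # large when some arm's logit sits far out in the flat tail of the sigmoid
    logits = arms @ theta
    return float((1.0 / mu_dot(logits)).max())

# toy arm set: three unit vectors in R^2 (illustrative only)
arms = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
print(kappa(arms, np.array([3.0, 0.0])))   # kappa grows fast with the norm of theta
print(kappa(arms, np.zeros(2)))            # at theta = 0, mu_dot = 1/4 everywhere, so kappa = 4
```

Because $\dot\mu$ decays exponentially in $|x^\top\theta_\star|$, $\kappa$ can be exponentially large in the norm of $\theta_\star$, which is why a $\kappa\sqrt{T}$ regret dependence is considered costly.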
no code implementations • 14 Jun 2019 • Louis Faury, Ugo Tanielian, Flavian Vasile, Elena Smirnova, Elvis Dohmatob
This manuscript introduces the idea of using Distributionally Robust Optimization (DRO) for the Counterfactual Risk Minimization (CRM) problem.
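For context, a minimal sketch of the inverse-propensity-scoring (IPS) counterfactual risk estimate that CRM builds on, with the sample-variance penalty of standard CRM; the paper's contribution is to replace such penalties with a DRO construction. The function name, the clipping constant, and the toy numbers are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def ips_risk(costs, logged_propensities, new_propensities, clip=10.0):
    # importance weights of the new policy w.r.t. the logging policy, clipped
    # to control variance (a common CRM heuristic)
    w = np.minimum(new_propensities / logged_propensities, clip)
    values = w * costs
    risk = float(values.mean())
    # sample-variance penalty in the style of standard CRM; the DRO
    # approach discussed in the paper replaces this kind of penalty
    penalty = float(np.sqrt(values.var(ddof=1) / len(values)))
    return risk, risk + penalty

# toy logged data: binary costs, uniform logging policy over two actions
costs = np.array([1.0, 0.0, 1.0, 0.0])
logged = np.full(4, 0.5)
risk, upper = ips_risk(costs, logged, new_propensities=np.full(4, 0.5))
print(risk, upper)  # evaluating the logging policy itself: risk is the plain mean cost
```

The penalized value acts as a pessimistic estimate of the new policy's risk, which is the role the DRO confidence intervals play in the paper.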
no code implementations • 31 Jan 2019 • Louis Faury, Clément Calauzènes, Olivier Fercoq, Syrine Krichen
Evolutionary Strategies (ES) are a popular family of black-box zeroth-order optimization algorithms which rely on search distributions to efficiently optimize a large variety of objective functions.
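The core ES mechanism can be sketched as follows: perturb the current point with samples from a Gaussian search distribution, score the perturbations with black-box function evaluations only, and move against a Monte-Carlo gradient estimate. This is a generic textbook-style sketch under assumed hyperparameters, not the algorithm studied in the paper.

```python
import numpy as np

def es_minimize(f, x0, sigma=0.1, lr=0.05, pop=50, iters=300, seed=0):
    """Gaussian-smoothing ES: minimize f using only zeroth-order evaluations."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        # sample perturbation directions from the search distribution
        eps = rng.standard_normal((pop, x.size))
        fitness = np.array([f(x + sigma * e) for e in eps])
        # subtract the mean as a baseline to reduce estimator variance
        fitness -= fitness.mean()
        # Monte-Carlo estimate of the smoothed gradient
        grad = (eps.T @ fitness) / (pop * sigma)
        x -= lr * grad
    return x

# usage on a toy quadratic (illustrative objective)
target = np.array([1.0, -2.0])
f = lambda v: float(((v - target) ** 2).sum())
x_opt = es_minimize(f, np.zeros(2))
```

Only function values are used, never gradients, which is what makes ES applicable to the broad class of black-box objectives the abstract mentions.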
no code implementations • 22 May 2018 • Louis Faury, Flavian Vasile, Clément Calauzènes, Olivier Fercoq
The aim of global optimization is to find the global optimum of arbitrary classes of functions, possibly highly multimodal ones.
no code implementations • 22 Jan 2018 • Louis Faury, Flavian Vasile
Learning to optimize, the idea that optimization algorithms for a numerical criterion can themselves be learned from data, has recently been at the heart of a growing number of research efforts.