no code implementations • 8 Feb 2024 • Pierre Marion, Anna Korba, Peter Bartlett, Mathieu Blondel, Valentin De Bortoli, Arnaud Doucet, Felipe Llinares-López, Courtney Paquette, Quentin Berthet
We present a new algorithm to optimize distributions defined implicitly by parameterized stochastic diffusions.
no code implementations • 5 Feb 2024 • Tianlin Liu, Shangmin Guo, Leonardo Bianco, Daniele Calandriello, Quentin Berthet, Felipe Llinares, Jessica Hoffmann, Lucas Dixon, Michal Valko, Mathieu Blondel
Aligning language models with human preferences is crucial for reducing errors and biases in these models.
no code implementations • 18 Nov 2022 • Marin Ballu, Quentin Berthet
Optimal transport is an important tool in machine learning, making it possible to capture geometric properties of the data through a linear program over transport polytopes.
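As a minimal illustration of this linear-program view (not the method of the paper above), the snippet below computes an approximate optimal coupling between two discrete distributions with Sinkhorn iterations, a standard entropic relaxation of the OT linear program over the transport polytope {P ≥ 0 : P1 = a, Pᵀ1 = b}. The distributions, cost matrix, and regularization strength are illustrative choices.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    # Entropic regularization smooths the OT linear program;
    # Sinkhorn iterations alternately rescale rows and columns
    # of the Gibbs kernel K so the coupling matches both marginals.
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
P = sinkhorn(a, b, C)
# With this cost, mass stays on the diagonal of the coupling.
```

As eps → 0 the entropic solution approaches a vertex of the transport polytope, i.e. an exact LP solution.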
1 code implementation • 10 Nov 2022 • Lawrence Stewart, Francis Bach, Quentin Berthet, Jean-Philippe Vert
Neural networks can be trained to solve regression problems by using gradient-based methods to minimize the square loss.
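The training setup described in this snippet can be sketched in a few lines: a tiny one-hidden-layer network fitted by full-batch gradient descent on the square loss, with backpropagation written out by hand. The architecture, data, and learning rate are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = X @ np.array([1.0, -2.0]) + 0.5        # simple regression target

# One hidden layer with tanh activation, trained on the square loss.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8,))
b2 = 0.0
lr = 0.05
for _ in range(2000):
    A1 = X @ W1 + b1
    H = np.tanh(A1)                        # (64, 8) hidden activations
    pred = H @ W2 + b2                     # (64,) network outputs
    err = pred - y
    loss = (err ** 2).mean()               # square loss
    # Backpropagation: gradients of the mean squared error.
    g_pred = 2 * err / len(y)
    g_W2 = H.T @ g_pred
    g_b2 = g_pred.sum()
    g_A1 = np.outer(g_pred, W2) * (1 - H ** 2)   # tanh'(A1) = 1 - tanh(A1)^2
    g_W1 = X.T @ g_A1
    g_b1 = g_A1.sum(axis=0)
    # Gradient descent step.
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2
```

After training, the loss is far below its initial value, which is the basic phenomenon the paper's analysis starts from.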
1 code implementation • 25 May 2022 • Benjamin Dubois-Taine, Francis Bach, Quentin Berthet, Adrien Taylor
We consider the problem of minimizing the sum of two convex functions.
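A classical baseline for this composite problem (one smooth term plus one nonsmooth term, here the lasso) is the proximal gradient method, sketched below; this is a standard algorithm for illustration, not the method proposed in the paper. The problem sizes and regularization strength are arbitrary.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_gradient(A, b, lam, n_iter=500):
    # Minimizes f(x) + g(x) with f(x) = ||Ax - b||^2 / 2 (smooth)
    # and g(x) = lam * ||x||_1 (nonsmooth): gradient step on f,
    # then proximal (soft-thresholding) step on g.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.0, 1.5]
b = A @ x_true
x_hat = prox_gradient(A, b, lam=0.1)
```

With noiseless data and light regularization, the iterate recovers the sparse ground truth up to a small shrinkage bias.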
no code implementations • 6 Jan 2022 • Artem R. Muliukov, Laurent Rodriguez, Benoit Miramond, Lyes Khacef, Joachim Schmidt, Quentin Berthet, Andres Upegui
This work also demonstrates the distributed and scalable nature of the model through both simulation results and hardware execution on a dedicated FPGA-based platform named SCALP (Self-configurable 3D Cellular Adaptive Platform).
1 code implementation • NeurIPS 2021 • Mathieu Blondel, Quentin Berthet, Marco Cuturi, Roy Frostig, Stephan Hoyer, Felipe Llinares-López, Fabian Pedregosa, Jean-Philippe Vert
In this paper, we propose automatic implicit differentiation, an efficient and modular approach for implicit differentiation of optimization problems.
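The core idea of implicit differentiation can be shown on a toy case (a ridge-regularized least squares, an illustrative choice, not an example from the paper): instead of differentiating through solver iterations, one differentiates the optimality condition of the solution via the implicit function theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

def solve(theta):
    # x*(theta) minimizes ||Ax - b||^2 / 2 + theta * ||x||^2 / 2,
    # so it satisfies F(x, theta) = (A^T A + theta I) x - A^T b = 0.
    return np.linalg.solve(A.T @ A + theta * np.eye(5), A.T @ b)

theta = 0.3
x_star = solve(theta)
# Implicit function theorem on F(x, theta) = 0:
# dx*/dtheta = -(dF/dx)^{-1} (dF/dtheta) = -(A^T A + theta I)^{-1} x*.
dx_implicit = -np.linalg.solve(A.T @ A + theta * np.eye(5), x_star)
# Finite-difference check of the implicit derivative.
eps = 1e-6
dx_fd = (solve(theta + eps) - solve(theta - eps)) / (2 * eps)
```

The two derivatives agree to numerical precision; the paper's contribution is to automate and modularize exactly this construction for general optimization problems.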
no code implementations • 17 Mar 2021 • Andrew N Carr, Quentin Berthet, Mathieu Blondel, Olivier Teboul, Neil Zeghidour
Second, we show that inverting permutations is a meaningful pretext task for learning audio representations in an unsupervised fashion.
no code implementations • 1 Jan 2021 • Andrew N Carr, Quentin Berthet, Mathieu Blondel, Olivier Teboul, Neil Zeghidour
In particular, we also improve music understanding by reordering spectrogram patches in the frequency space, as well as video classification by reordering frames along the time axis.
no code implementations • NeurIPS 2020 • Quentin Berthet, Mathieu Blondel, Olivier Teboul, Marco Cuturi, Jean-Philippe Vert, Francis Bach
Machine learning pipelines often rely on optimization procedures to make discrete decisions (e.g., sorting, picking closest neighbors, or shortest paths).
1 code implementation • 26 Apr 2020 • Marco Cuturi, Olivier Teboul, Quentin Berthet, Arnaud Doucet, Jean-Philippe Vert
Our goal in this paper is to propose new group testing algorithms that can operate in a noisy setting (tests can be mistaken) and decide adaptively (based on past results) which groups to test next, with the aim of converging to a good detection as quickly, and with as few tests, as possible.
2 code implementations • ICML 2020 • Mathieu Blondel, Olivier Teboul, Quentin Berthet, Josip Djolonga
While numerous works have proposed differentiable proxies to sorting and ranking, they do not achieve the $O(n \log n)$ time complexity one would expect from sorting and ranking operations.
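For concreteness, here is one of the simple differentiable proxies the sentence above alludes to: an O(n²) pairwise soft rank built from sigmoids. This is a generic relaxation for illustration only; the paper's contribution is precisely to replace such quadratic-time constructions with O(n log n) operators.

```python
import numpy as np

def soft_rank(x, tau=0.1):
    # Pairwise relaxation of the rank (1-indexed):
    # rank_i ≈ 0.5 + sum_j sigmoid((x_i - x_j) / tau).
    # As tau -> 0 this recovers the hard ranks, but the cost is O(n^2).
    diff = (x[:, None] - x[None, :]) / tau
    return 0.5 + (1.0 / (1.0 + np.exp(-diff))).sum(axis=1)

x = np.array([0.3, -1.0, 2.0])
ranks = soft_rank(x, tau=0.01)   # close to the hard ranks [2, 1, 3]
```

Unlike `np.argsort`, this map is differentiable in `x`, which is what allows ranking layers to be trained end to end; the O(n²) pairwise matrix is the bottleneck the paper removes.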
no code implementations • ICML 2020 • Marin Ballu, Quentin Berthet, Francis Bach
We show that this algorithm can be extended to other tasks, including estimation of Wasserstein barycenters.
3 code implementations • 20 Feb 2020 • Quentin Berthet, Mathieu Blondel, Olivier Teboul, Marco Cuturi, Jean-Philippe Vert, Francis Bach
Machine learning pipelines often rely on optimization procedures to make discrete decisions (e.g., sorting, picking closest neighbors, or shortest paths).
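The perturbation idea behind smoothing such discrete decisions can be sketched in a few lines: averaging the argmax over random noise turns a piecewise-constant map into a smooth one. This Monte Carlo sketch only illustrates the smoothed forward pass under assumed Gaussian noise; the paper develops the full framework, including gradient estimators for end-to-end learning.

```python
import numpy as np

def perturbed_argmax(theta, eps=0.5, n_samples=10000, seed=0):
    # Monte Carlo estimate of E[one_hot(argmax(theta + eps * Z))]
    # with Z standard Gaussian: a smoothed version of the discrete
    # argmax, whose output varies continuously with theta.
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=(n_samples, theta.size))
    idx = np.argmax(theta + eps * Z, axis=1)
    return np.bincount(idx, minlength=theta.size) / n_samples

theta = np.array([1.0, 0.5, -0.2])
p = perturbed_argmax(theta)   # a probability vector, highest mass on index 0
```

A hard argmax would return the same one-hot vector for a wide range of `theta`; the perturbed version responds smoothly to changes in the scores, which is what makes it usable inside a learning pipeline.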
no code implementations • 11 Oct 2018 • Xavier Fontaine, Quentin Berthet, Vianney Perchet
We consider the stochastic contextual bandit problem with additional regularization.
no code implementations • 6 Aug 2018 • Quentin Berthet, Varun Kanade
We study the problem of hypothesis testing between two discrete distributions, where we only have access to samples after the action of a known reversible Markov chain, playing the role of noise.
3 code implementations • 29 May 2018 • Edouard Grave, Armand Joulin, Quentin Berthet
A library for Multilingual Unsupervised or Supervised word Embeddings
no code implementations • 19 Mar 2018 • Nicolai Baldin, Quentin Berthet
We consider the problem of link prediction, based on partial observation of a large network, and on side information associated to its vertices.
no code implementations • NeurIPS 2017 • Quentin Berthet, Vianney Perchet
We consider the problem of bandit optimization, inspired by stochastic optimization and online learning problems with bandit feedback.
no code implementations • NeurIPS 2016 • Tengyao Wang, Quentin Berthet, Yaniv Plan
The restricted isometry property (RIP) for design matrices gives guarantees for optimal recovery in sparse linear models.
no code implementations • 21 Feb 2015 • Quentin Berthet, Jordan S. Ellenberg
We describe the properties of random instances of flat satisfiability, as well as the optimal rates of detection of the associated hypothesis testing problem.
no code implementations • 22 Aug 2014 • Tengyao Wang, Quentin Berthet, Richard J. Samworth
In this paper, we show that, under a widely believed assumption from computational complexity theory, there is a fundamental trade-off between statistical and computational performance in this problem.
no code implementations • 3 Apr 2013 • Quentin Berthet, Philippe Rigollet
In the context of sparse principal component detection, we bring evidence towards the existence of a statistical price to pay for computational efficiency.
no code implementations • 23 Feb 2012 • Quentin Berthet, Philippe Rigollet
We perform a finite sample analysis of the detection levels for sparse principal components of a high-dimensional covariance matrix.