no code implementations • 3 Jun 2023 • Arya Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov
The first algorithm uses a gradient estimator based on randomization over the $\ell_2$ sphere due to Bach and Perchet (2016).
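A minimal sketch of such an ℓ2-sphere randomized gradient estimator (the function `l2_sphere_grad_estimate` and the toy quadratic are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def l2_sphere_grad_estimate(f, x, h=1e-3, rng=None):
    """Two-point gradient estimate with randomization over the unit l2 sphere:
    (d / 2h) * (f(x + h*zeta) - f(x - h*zeta)) * zeta, zeta uniform on the sphere."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)          # normalized Gaussian: uniform on the l2 sphere
    return d / (2 * h) * (f(x + h * zeta) - f(x - h * zeta)) * zeta

# for a quadratic, averaging many estimates recovers the exact gradient
f = lambda v: 0.5 * v @ v                 # gradient of f at x is x itself
x = np.array([1.0, -2.0, 3.0])
est = np.mean([l2_sphere_grad_estimate(f, x, rng=i) for i in range(20000)], axis=0)
```

Each estimate is unbiased for smooth `f` up to a bias of order `h`, which is why averaging over many random directions concentrates around the true gradient.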
no code implementations • 31 May 2023 • Evgenii Chzhen, Christophe Giraud, Gilles Stoltz
We consider the problem of minimizing a convex function over a closed convex set, with Projected Gradient Descent (PGD).
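The PGD template alternates a gradient step with a Euclidean projection; a minimal sketch (the unit-ball constraint and quadratic objective are illustrative choices, not from the paper):

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, step, n_iters=100):
    """PGD: take a gradient step, then project back onto the feasible set."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = project(x - step * grad(x))
    return x

# minimize ||x - c||^2 over the unit l2 ball (projection = radial shrinkage)
c = np.array([3.0, 4.0])
grad = lambda x: 2 * (x - c)
project = lambda x: x / max(1.0, np.linalg.norm(x))
x_star = projected_gradient_descent(grad, project, np.zeros(2), step=0.25, n_iters=200)
# the minimizer over the ball is the boundary point c / ||c||
```

For a smooth convex objective, any step size below the inverse gradient-Lipschitz constant guarantees convergence; here `step=0.25` satisfies that bound.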
no code implementations • 22 May 2023 • Evgenii Chzhen, Sholom Schechtman
The core idea is first instantiated on the problem of minimizing sums of convex and Lipschitz functions and is then extended to the smooth case via variance reduction.
no code implementations • 1 Sep 2022 • Solenne Gaucher, Nicolas Schreuder, Evgenii Chzhen
In the awareness framework, akin to the classical unconstrained classification case, we show that maximizing accuracy under this fairness constraint is equivalent to solving a corresponding regression problem followed by thresholding at level $1/2$.
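The plug-in recipe behind this equivalence, estimate a regression function and threshold it at 1/2, can be illustrated on synthetic data. This sketch omits the fairness constraint and the paper's specific regression target; the binned estimator and data-generating model are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(-1, 1, size=n)
eta = 1 / (1 + np.exp(-3 * x))                    # true regression function P(Y=1 | X=x)
y = (rng.uniform(size=n) < eta).astype(int)

# crude regression estimate by binning, then plug-in thresholding at level 1/2
bins = np.linspace(-1, 1, 21)
idx = np.clip(np.digitize(x, bins) - 1, 0, 19)
eta_hat = np.array([y[idx == b].mean() for b in range(20)])

def predict(xq):
    b = np.clip(np.digitize(xq, bins) - 1, 0, 19)
    return (eta_hat[b] >= 0.5).astype(int)
```

Thresholding the regression estimate at 1/2 is exactly the Bayes-optimal rule for unconstrained binary classification, which is the baseline the fairness-constrained result parallels.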
no code implementations • 27 May 2022 • Arya Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov
We present a novel gradient estimator based on two function evaluations and randomization on the $\ell_1$-sphere.
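One plausible form of such a two-point, ℓ1-randomized estimator is sketched below (the sampling scheme, normalized exponentials with random signs for a uniform draw on the ℓ1 sphere, and the sign correction are our reading, not a verbatim reproduction of the paper's estimator):

```python
import numpy as np

def l1_sphere_grad_estimate(f, x, h=1e-3, rng=None):
    """Two-point gradient estimate with randomization on the unit l1 sphere:
    (d / 2h) * (f(x + h*zeta) - f(x - h*zeta)) * sign(zeta)."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    e = rng.exponential(size=d)
    signs = rng.choice([-1.0, 1.0], size=d)
    zeta = signs * e / e.sum()            # uniform on the unit l1 sphere
    return d / (2 * h) * (f(x + h * zeta) - f(x - h * zeta)) * np.sign(zeta)

# unbiased for linear functions: averaging many estimates recovers the slope vector
a = np.array([1.0, -2.0, 3.0])
f = lambda v: a @ v
est = np.mean([l1_sphere_grad_estimate(f, np.zeros(3), rng=i) for i in range(20000)],
              axis=0)
```

The sign correction is what makes the estimator unbiased here: on the ℓ1 sphere, E[d · ζᵢ · sign(ζᵢ)] = 1 while cross terms vanish by symmetry.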
no code implementations • NeurIPS 2021 • Evgenii Chzhen, Christophe Giraud, Gilles Stoltz
We provide a setting and a general approach to fair online learning with stochastic sensitive and non-sensitive contexts.
1 code implementation • 24 Feb 2021 • Nicolas Schreuder, Evgenii Chzhen
Building on this result, we propose a post-processing classification algorithm, which is able to modify any off-the-shelf score-based classifier using only an unlabeled sample.
no code implementations • 24 Feb 2021 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Titouan Lorieul
The multi-class classification problem is among the most popular and well-studied statistical frameworks.
no code implementations • NeurIPS 2020 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, Massimiliano Pontil
We study the problem of learning an optimal regression function subject to a fairness constraint.
no code implementations • 13 Nov 2020 • Evgenii Chzhen, Nicolas Schreuder
We provide a non-trivial example of a prediction $x \to f(x)$ which satisfies two common group-fairness notions: Demographic Parity \begin{align} (f(X) \mid S = 1) &\stackrel{d}{=} (f(X) \mid S = 2) \end{align} and Equal Group-Wise Risks \begin{align} \mathbb{E}[(f^*(X) - f(X))^2 \mid S = 1] = \mathbb{E}[(f^*(X) - f(X))^2 \mid S = 2]. \end{align}
no code implementations • NeurIPS 2020 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, Massimiliano Pontil
It demands the distribution of the predicted output to be independent of the sensitive attribute.
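One way to quantify how far a predictor is from this independence requirement is the Kolmogorov-Smirnov distance between the group-wise prediction distributions; the function `demographic_parity_gap` below is an illustrative diagnostic we introduce, not a quantity from the paper:

```python
import numpy as np

def demographic_parity_gap(preds, s):
    """Kolmogorov-Smirnov distance between the prediction distributions
    of two groups; zero means exact demographic parity."""
    p1, p2 = np.sort(preds[s == 1]), np.sort(preds[s == 2])
    grid = np.concatenate([p1, p2])
    cdf = lambda sample, t: np.searchsorted(sample, t, side="right") / sample.size
    return np.max(np.abs(cdf(p1, grid) - cdf(p2, grid)))

rng = np.random.default_rng(1)
s = np.repeat([1, 2], 2000)
fair_preds = rng.normal(size=4000)                       # same law in both groups
unfair_preds = np.where(s == 1, rng.normal(size=4000),
                        rng.normal(loc=2.0, size=4000))  # group 2 shifted
gap_fair = demographic_parity_gap(fair_preds, s)
gap_unfair = demographic_parity_gap(unfair_preds, s)
```

When the two groups share the same prediction law the empirical gap is near zero (fluctuating at rate $1/\sqrt{n}$); a systematic shift between groups drives it toward the true distributional distance.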
1 code implementation • NeurIPS 2019 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, Massimiliano Pontil
We study the problem of fair binary classification using the notion of Equal Opportunity.
no code implementations • 14 Mar 2017 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Joseph Salmon
Modern multi-label problems are typically large-scale in terms of the number of observations, features, and labels, and the number of labels can even be comparable to the number of observations.