no code implementations • NeurIPS 2023 • Parikshit Gopalan, Michael P. Kim, Omer Reingold
We establish an equivalence between swap agnostic learning and swap variants of omniprediction and multicalibration.
no code implementations • 16 Oct 2022 • Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, Udi Wieder
This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration.
no code implementations • 4 Oct 2022 • Michael P. Kim, Juan C. Perdomo
This performative prediction setting raises new challenges for learning "optimal" decision rules.
1 code implementation • 23 Jun 2022 • Moritz Hardt, Michael P. Kim
When does a machine learning model predict the future of individuals and when does it recite patterns that predate the individuals?
no code implementations • 14 Apr 2022 • Shafi Goldwasser, Michael P. Kim, Vinod Vaikuntanathan, Or Zamir
Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks.
no code implementations • 2 Mar 2022 • Parikshit Gopalan, Michael P. Kim, Mihir Singhal, Shengjia Zhao
This stringent notion -- that predictions be well-calibrated across a rich class of intersecting subpopulations -- provides its strong guarantees at a cost: the computational and sample complexities of learning multicalibrated predictors are high, and grow exponentially with the number of class labels.
no code implementations • NeurIPS 2021 • Shengjia Zhao, Michael P. Kim, Roshni Sahoo, Tengyu Ma, Stefano Ermon
In this work, we introduce a new notion -- \emph{decision calibration} -- that requires the predicted distribution and true distribution to be ``indistinguishable'' to a set of downstream decision-makers.
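A minimal sketch of the idea (an illustrative assumption, not the paper's construction): for binary outcomes and a finite set of downstream decision-makers, one can audit whether each decision-maker's expected loss computed from the predictions matches the loss actually incurred on the true labels. The function and loss functions below are hypothetical names introduced for illustration.

```python
import numpy as np

# Illustrative audit: predictions are "decision calibrated" for these
# decision-makers if the loss each one *anticipates* from the predicted
# probabilities matches the loss they *realize* on the true labels.
def decision_calibration_gaps(preds, labels, loss_fns):
    """loss_fns: dict name -> loss(action, outcome) for actions in {0, 1}.
    Each decision-maker plays the Bayes-optimal action against preds."""
    gaps = {}
    for name, loss in loss_fns.items():
        # Predicted expected loss of each action, per example.
        exp_loss = lambda a: preds * loss(a, 1) + (1 - preds) * loss(a, 0)
        actions = (exp_loss(1) < exp_loss(0)).astype(int)
        anticipated = np.where(actions == 1, exp_loss(1), exp_loss(0)).mean()
        realized = np.mean([loss(a, yv) for a, yv in zip(actions, labels)])
        gaps[name] = abs(anticipated - realized)
    return gaps

rng = np.random.default_rng(1)
p = rng.uniform(size=20000)
y = (rng.uniform(size=20000) < p).astype(float)  # labels drawn from p
# Two decision-makers with different asymmetric losses over a 0/1 action.
losses = {
    "cautious": lambda a, yv: 5.0 * (a == 0) * yv + 1.0 * (a == 1) * (1 - yv),
    "lenient":  lambda a, yv: 1.0 * (a == 0) * yv + 5.0 * (a == 1) * (1 - yv),
}
print(decision_calibration_gaps(p, y, losses))
```

Because the synthetic labels here are drawn exactly from the predicted probabilities, both gaps come out near zero; a miscalibrated predictor would produce a noticeable gap for at least one decision-maker.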
no code implementations • 26 Nov 2020 • Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, Gal Yona
Prediction algorithms assign numbers to individuals that are popularly understood as individual "probabilities" -- what is the probability of 5-year survival after cancer diagnosis?
no code implementations • ICML 2020 • Amirata Ghorbani, Michael P. Kim, James Zou
The Shapley value is a classic notion from game theory, historically used to quantify the contributions of individuals within groups, and more recently applied to assign values to data points when training machine learning models.
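For concreteness, here is a small sketch of the classic game-theoretic definition (not the paper's data-valuation algorithm): each player's Shapley value is their marginal contribution to a coalition, averaged over all coalitions with the standard combinatorial weights.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a small cooperative game.
    value: function mapping a frozenset coalition to its worth."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                S = frozenset(coalition)
                # Weight |S|!(n-|S|-1)!/n! times p's marginal contribution.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(S | {p}) - value(S))
        phi[p] = total
    return phi

# Toy additive game: each player's worth is fixed, so the Shapley value
# recovers each player's individual contribution.
worth = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(worth[q] for q in S)
print(shapley_values(["a", "b", "c"], v))
```

The exact computation enumerates all $2^{n-1}$ coalitions per player, which is why practical data-valuation methods rely on sampling-based approximations.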
no code implementations • 22 Apr 2019 • Sumegha Garg, Michael P. Kim, Omer Reingold
As algorithmic prediction systems have become widespread, fears that these systems may inadvertently discriminate against members of underrepresented populations have grown.
no code implementations • 3 Apr 2019 • Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, Gal Yona
We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness.
1 code implementation • 31 May 2018 • Michael P. Kim, Amirata Ghorbani, James Zou
Prediction systems are successfully deployed in applications ranging from disease diagnosis, to predicting creditworthiness, to image recognition.
no code implementations • NeurIPS 2018 • Michael P. Kim, Omer Reingold, Guy N. Rothblum
We study the problem of fair classification within the versatile framework of Dwork et al. [ITCS '12], which assumes the existence of a metric that measures similarity between pairs of individuals.
1 code implementation • 22 Nov 2017 • Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, Guy N. Rothblum
We develop and study multicalibration -- a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data.
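A minimal sketch of auditing for (approximate) multicalibration, assuming binary outcomes and subgroups given as boolean masks (the function name and tolerance are illustrative, not from the paper): within each subgroup and each prediction-value bucket, the average prediction should match the empirical outcome rate.

```python
import numpy as np

# Illustrative audit: flag (group, bucket) pairs where the predictor's
# average prediction deviates from the empirical outcome rate by more
# than alpha -- a finite-sample proxy for multicalibration violations.
def multicalibration_violations(preds, labels, groups, n_buckets=10, alpha=0.1):
    """groups: dict name -> boolean mask over examples."""
    violations = []
    buckets = np.minimum((preds * n_buckets).astype(int), n_buckets - 1)
    for name, mask in groups.items():
        for b in range(n_buckets):
            idx = mask & (buckets == b)
            if not idx.any():
                continue
            gap = abs(labels[idx].mean() - preds[idx].mean())
            if gap > alpha:
                violations.append((name, b, gap))
    return violations

rng = np.random.default_rng(0)
p = rng.uniform(size=20000)
y = (rng.uniform(size=20000) < p).astype(float)  # labels drawn from p
groups = {"all": np.ones(20000, dtype=bool),
          "even_index": np.arange(20000) % 2 == 0}
print(multicalibration_violations(p, y, groups))
```

Since the synthetic labels are drawn from the predictions themselves, this audit finds no violations; an anti-calibrated predictor (e.g. `1 - p`) would fail in most buckets for every group.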