no code implementations • ICML 2020 • Amanda Bower, Laura Balzano
Finally, we demonstrate the strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the United States.
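As a rough illustration of maximum likelihood estimation from pairwise comparison data (using the classic Bradley–Terry model as a stand-in, not the paper's own preference model), one can fit latent item scores by gradient ascent on the comparison log-likelihood:

```python
import numpy as np

# Hypothetical sketch: MLE for a Bradley-Terry pairwise-comparison model,
# a simpler relative of the paper's model, fit by gradient ascent.
rng = np.random.default_rng(0)
true_scores = np.array([2.0, 1.0, 0.0, -1.0])  # latent item qualities

# Simulate comparisons: (i, j, y) with y = 1 if item i beats item j.
pairs = [(i, j) for i in range(4) for j in range(4) if i != j]
data = []
for _ in range(2000):
    i, j = pairs[rng.integers(len(pairs))]
    p = 1.0 / (1.0 + np.exp(-(true_scores[i] - true_scores[j])))
    data.append((i, j, rng.random() < p))

# Gradient ascent on the Bradley-Terry log-likelihood.
theta = np.zeros(4)
for _ in range(500):
    grad = np.zeros(4)
    for i, j, y in data:
        p = 1.0 / (1.0 + np.exp(-(theta[i] - theta[j])))
        grad[i] += y - p
        grad[j] -= y - p
    theta += 0.01 * grad / len(data)
theta -= theta.mean()  # scores are identifiable only up to a shift

print(np.argsort(-theta))  # estimated ranking, best item first
```

With enough simulated comparisons, the estimated ordering recovers the ordering of the true scores.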
no code implementations • 12 Sep 2022 • Amanda Bower, Kristian Lum, Tomo Lazovich, Kyra Yee, Luca Belli
Traditionally, recommender systems operate by returning a user a set of items, ranked in order of estimated relevance to that user.
1 code implementation • 11 May 2022 • Kristian Lum, Yunfeng Zhang, Amanda Bower
When a model's performance differs across socially or culturally relevant groups (such as race, gender, or the intersections of many such groups), it is often called "biased."
no code implementations • 3 Feb 2022 • Tomo Lazovich, Luca Belli, Aaron Gonzales, Amanda Bower, Uthaipon Tantipongpipat, Kristian Lum, Ferenc Huszar, Rumman Chowdhury
We show that we can use these metrics to identify content suggestion algorithms that contribute more strongly to skewed outcomes between users.
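One simple way to quantify skew in outcomes across users (the paper's specific metrics may differ; this is only an illustrative stand-in) is an inequality measure such as the Gini coefficient of the per-user engagement distribution:

```python
import numpy as np

# Illustrative skew metric: Gini coefficient of nonnegative per-user
# outcomes (0 = perfectly equal, values near 1 = highly concentrated).
def gini(x):
    """Gini coefficient of a nonnegative 1-D array."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    total = x.sum()
    # Standard formula: G = (2 * sum_i i * x_(i)) / (n * total) - (n + 1) / n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))   # equal outcomes across users
print(gini([0, 0, 0, 10]))  # all engagement concentrated on one user
```

Comparing such a metric across candidate content suggestion algorithms gives a concrete handle on which ones contribute more to skewed outcomes.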
no code implementations • 19 Mar 2021 • Amanda Bower, Hamid Eftekhari, Mikhail Yurochkin, Yuekai Sun
We develop an algorithm to train individually fair learning-to-rank (LTR) models.
no code implementations • ICLR 2021 • Amanda Bower, Hamid Eftekhari, Mikhail Yurochkin, Yuekai Sun
We develop an algorithm to train individually fair learning-to-rank (LTR) models.
1 code implementation • 22 Feb 2020 • Amanda Bower, Laura Balzano
Finally, we demonstrate strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the US.
2 code implementations • ICLR 2020 • Mikhail Yurochkin, Amanda Bower, Yuekai Sun
We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs.
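The invariance notion above can be checked directly: a model is fair in this sense if its outputs barely move when inputs are perturbed along sensitive directions. A minimal sketch (not the paper's training algorithm; the sensitive direction and the linear models here are assumptions for illustration):

```python
import numpy as np

# Hedged sketch: measure how much a linear model's outputs change under
# perturbations of the inputs along a hypothetical "sensitive" direction.
# A model whose weight vector is orthogonal to that direction is
# invariant to such perturbations.
rng = np.random.default_rng(1)
sensitive_dir = np.array([1.0, 0.0, 0.0])  # assumed sensitive subspace

X = rng.normal(size=(100, 3))
w_fair = np.array([0.0, 1.0, -0.5])    # orthogonal to sensitive_dir
w_unfair = np.array([2.0, 1.0, -0.5])  # leaks the sensitive feature

def max_sensitive_shift(w, X, direction, eps=1.0):
    """Largest output change when inputs move eps along the direction."""
    return np.max(np.abs((X + eps * direction) @ w - X @ w))

print(max_sensitive_shift(w_fair, X, sensitive_dir))    # invariant
print(max_sensitive_shift(w_unfair, X, sensitive_dir))  # shifts by ~eps * (direction @ w)
```

A fairness-aware training procedure would penalize or constrain this worst-case shift while fitting the model.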
no code implementations • 3 Jul 2017 • Amanda Bower, Sarah N. Kitchen, Laura Niss, Martin J. Strauss, Alexander Vargas, Suresh Venkatasubramanian
This work facilitates ensuring fairness of machine learning in the real world by decoupling fairness considerations in compound decisions.