no code implementations • 18 Dec 2023 • Lucas Rosenblatt, Julia Stoyanovich, Christopher Musco
Our theoretical results center on the private mean estimation problem; empirically, we run extensive experiments on private data synthesis to demonstrate the effectiveness of stratification across a variety of private mechanisms.
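To make the setting concrete, here is a minimal sketch of stratified differentially private mean estimation — not the paper's exact estimator — assuming values clipped to [0, 1] and publicly known stratum sizes. Each record belongs to exactly one stratum, so noising every stratum sum with Laplace(1/ε) satisfies ε-DP by parallel composition:

```python
import numpy as np

def stratified_private_mean(strata, epsilon, rng):
    """Stratified DP mean: values in [0, 1], stratum sizes treated as public.

    Each record lies in exactly one stratum, so adding Laplace(1/epsilon)
    noise to every stratum sum is epsilon-DP by parallel composition.
    """
    n_total = sum(len(s) for s in strata)
    estimate = 0.0
    for values in strata:
        clipped = np.clip(values, 0.0, 1.0)      # bound per-record sensitivity to 1
        noisy_sum = clipped.sum() + rng.laplace(scale=1.0 / epsilon)
        stratum_mean = noisy_sum / len(values)   # stratum size assumed public
        estimate += (len(values) / n_total) * stratum_mean
    return estimate

rng = np.random.default_rng(0)
strata = [rng.uniform(0.2, 0.4, size=5000), rng.uniform(0.6, 0.8, size=5000)]
est = stratified_private_mean(strata, epsilon=1.0, rng=rng)
```

With strata this large, the estimate lands close to the true population mean of 0.5; stratifying first lets the noise be added to homogeneous subgroups rather than one heterogeneous pool.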
1 code implementation • 12 Dec 2023 • R. Teal Witter, Lucas Rosenblatt
In order to simulate the impact of opening streets, we first compare models for predicting vehicle collisions given network and temporal data.
1 code implementation • 1 Oct 2023 • Lucas Rosenblatt, Bin Han, Erin Posthumus, Theresa Crimmins, Bill Howe
An invasive species of grass known as "buffelgrass" contributes to severe wildfires and biodiversity loss in the Southwest United States.
no code implementations • 13 Feb 2023 • Andrew Bell, Lucius Bynum, Nazarii Drushchak, Tetiana Herasymova, Lucas Rosenblatt, Julia Stoyanovich
The "impossibility theorem" -- which is considered foundational in algorithmic fairness literature -- asserts that there must be trade-offs between common notions of fairness and performance when fitting statistical models, except in two special cases: when the prevalence of the outcome being predicted is equal across groups, or when a perfectly accurate predictor is used.
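A quick numeric check of the equal-prevalence special case, using the standard identity FPR = p/(1-p) · (1-PPV)/PPV · (1-FNR), which follows from the definition of PPV. The group prevalences and target rates below are illustrative, not values from the paper:

```python
def required_fpr(prevalence, ppv, fnr):
    """False positive rate forced by fixing PPV and FNR at a given prevalence.

    Derived from PPV = TPR*p / (TPR*p + FPR*(1-p)) with TPR = 1 - FNR.
    """
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * (1 - fnr)

# Two groups with different base rates but identical PPV and FNR:
fpr_a = required_fpr(prevalence=0.3, ppv=0.8, fnr=0.2)
fpr_b = required_fpr(prevalence=0.5, ppv=0.8, fnr=0.2)
# fpr_a != fpr_b: unequal prevalence forces unequal error rates,
# so calibration-style and error-rate fairness criteria must conflict.
```

When the prevalences are equal, the same formula returns the same FPR for both groups, which is exactly the theorem's first exception.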
1 code implementation • 5 Jan 2023 • Lorena Piedras, Lucas Rosenblatt, Julia Wilkins
Detecting "toxic" language in internet content is a pressing social and technical challenge.
no code implementations • 7 Aug 2022 • Lucas Rosenblatt, R. Teal Witter
Making fair decisions is crucial to ethically implementing machine learning algorithms in social settings.
no code implementations • 27 Apr 2022 • Lucas Rosenblatt, Joshua Allen, Julia Stoyanovich
Our methods are based on the insights that feature importance can inform how privacy budget is allocated, and, further, that per-group feature importance and fairness-related performance objectives can be incorporated in the allocation.
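One way to read that insight as code — a hypothetical sketch, where the proportional-allocation rule and function names are illustrative rather than the paper's exact method — is to split a total ε across per-feature counting queries in proportion to feature importance:

```python
import numpy as np

def allocate_budget(importances, total_epsilon):
    """Split total_epsilon across features in proportion to importance."""
    imp = np.asarray(importances, dtype=float)
    return total_epsilon * imp / imp.sum()

def noisy_marginals(counts_per_feature, epsilons, rng):
    """Release each feature's histogram under its allotted budget.

    Counting queries have sensitivity 1, so Laplace(1/eps_i) noise
    suffices; more important features get more budget, hence less noise.
    Every record contributes to every marginal, so by sequential
    composition the total cost is sum(epsilons) = total_epsilon.
    """
    return [counts + rng.laplace(scale=1.0 / eps, size=counts.shape)
            for counts, eps in zip(counts_per_feature, epsilons)]

rng = np.random.default_rng(0)
eps = allocate_budget([0.6, 0.3, 0.1], total_epsilon=1.0)
marginals = noisy_marginals([np.array([40.0, 60.0]),
                             np.array([70.0, 30.0]),
                             np.array([50.0, 50.0])], eps, rng)
```

Per-group importances or fairness objectives could enter the same way, by reweighting the allocation vector before normalizing.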
1 code implementation • 11 Nov 2020 • Lucas Rosenblatt, Xiaoyan Liu, Samira Pouyanfar, Eduardo de Leon, Anuj Desai, Joshua Allen
Differentially private data synthesis protects personal details from exposure, and allows for the training of differentially private machine learning models on privately generated datasets.
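As a toy illustration of that pipeline — not the paper's synthesizer — a Laplace-noised histogram over a single categorical column yields an ε-DP sampling distribution from which synthetic records can be drawn:

```python
import numpy as np

def dp_histogram_synth(data, categories, epsilon, n_synth, rng):
    """epsilon-DP synthesis for one categorical column.

    Noisy counts (sensitivity 1 per record) are clipped at zero and
    normalized into a distribution for sampling synthetic records.
    """
    counts = np.array([(data == c).sum() for c in categories], dtype=float)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=len(categories))
    probs = np.clip(noisy, 0.0, None)
    probs = probs / probs.sum()
    return rng.choice(categories, size=n_synth, p=probs)

rng = np.random.default_rng(0)
data = np.array(["a"] * 700 + ["b"] * 300)
synth = dp_histogram_synth(data, ["a", "b"], epsilon=1.0, n_synth=1000, rng=rng)
```

Because post-processing preserves differential privacy, any model trained on `synth` inherits the same ε guarantee; real synthesizers extend this idea to joint distributions over many columns.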