no code implementations • 27 May 2024 • Marcel Hussing, Michael Kearns, Aaron Roth, Sikata Bela Sengupta, Jessica Sorrell
Reinforcement learning (RL) in large or infinite state spaces is notoriously challenging, both theoretically (where worst-case sample and computational complexities must scale with state space cardinality) and experimentally (where function approximation and policy gradient techniques often scale poorly and suffer from instability and high variance).
no code implementations • 22 Mar 2023 • Mark Bun, Marco Gaboardi, Max Hopkins, Russell Impagliazzo, Rex Lei, Toniann Pitassi, Satchit Sivakumar, Jessica Sorrell
In particular, we give sample-efficient algorithmic reductions between perfect generalization, approximate differential privacy, and replicability for a broad class of statistical problems.
1 code implementation • 31 Jan 2023 • Ira Globus-Harris, Declan Harrison, Michael Kearns, Aaron Roth, Jessica Sorrell
Using this characterization, we give an exceedingly simple algorithm that can be analyzed both as a boosting algorithm for regression and as a multicalibration algorithm for a class H, and that makes use only of a standard squared-error regression oracle for H. We give a weak learning assumption on H that ensures convergence to Bayes optimality without the need for any realizability assumptions -- giving us an agnostic boosting algorithm for regression.
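As a rough illustration of boosting for regression with only a squared-error oracle, the sketch below repeatedly calls the oracle on the current residuals and sums the resulting predictors. This is a generic residual-fitting loop, not the paper's algorithm; the oracle interface, step size `eta`, and round count are assumptions for the example.

```python
import numpy as np

def residual_boost(X, y, fit_oracle, rounds=50, eta=0.5):
    """Boost a squared-error regression oracle by residual fitting.

    fit_oracle(X, r) returns a predictor h (callable on X) trained to
    minimize squared error against targets r. Each round we fit the
    oracle to the current residuals and add a damped copy of its
    predictions to the running ensemble.
    """
    models = []
    pred = np.zeros(len(y))
    for _ in range(rounds):
        h = fit_oracle(X, y - pred)   # oracle call on residuals
        models.append(h)
        pred += eta * h(X)
    return lambda Xq: eta * sum(h(Xq) for h in models)
```

With even a trivial oracle that predicts the mean of its targets, the ensemble's predictions converge to the mean of `y`, illustrating how weak regressors compound across rounds.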
no code implementations • 20 Jan 2022 • Russell Impagliazzo, Rex Lei, Toniann Pitassi, Jessica Sorrell
We introduce the notion of a reproducible algorithm in the context of learning.
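To make the notion concrete: a reproducible algorithm, run twice on two fresh samples from the same distribution but with the same internal randomness, should return the identical output with high probability. A standard device for achieving this is randomized rounding to a shared grid; the sketch below applies it to mean estimation. The function name and grid width are illustrative choices, not from the paper.

```python
import numpy as np

def reproducible_mean(samples, rng, grid_width=0.1):
    """Estimate the mean, then round to a randomly offset grid.

    The grid offset is drawn from `rng`, so two runs sharing the same
    random seed use the same grid. If the two empirical means land in
    the same grid cell (which happens with high probability when the
    sample means are close), both runs return the exact same value.
    Accuracy degrades by at most `grid_width` in exchange.
    """
    offset = rng.uniform(0.0, grid_width)   # shared randomness across runs
    est = np.mean(samples)
    return offset + grid_width * np.floor((est - offset) / grid_width)
```

Two executions with the same seed but different (similarly distributed) samples typically produce bit-identical outputs, which is exactly the reproducibility property.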
no code implementations • 14 Jun 2021 • Ilias Diakonikolas, Russell Impagliazzo, Daniel Kane, Rex Lei, Jessica Sorrell, Christos Tzamos
Our upper and lower bounds characterize the complexity of boosting in the distribution-independent PAC model with Massart noise.
no code implementations • 4 Feb 2020 • Mark Bun, Marco Leandro Carmosino, Jessica Sorrell
To demonstrate our framework, we use it to construct noise-tolerant and private PAC learners for large-margin halfspaces whose sample complexity does not depend on the dimension.