1 code implementation • NeurIPS 2023 • Giannis Daras, Kulin Shah, Yuval Dagan, Aravind Gollakota, Alexandros G. Dimakis, Adam Klivans
We present the first diffusion-based framework that can learn an unknown distribution using only highly corrupted samples.
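To make this concrete, here is a speculative sketch (our illustration, not the paper's method) of how a generative model might be trained on masked samples without ever seeing clean data: hide a few extra pixels at training time and supervise the model on originally observed pixels it can no longer see. The `model(x, mask)` interface and all names are hypothetical.

```python
import torch

def masked_training_loss(model, x_corrupted, mask, extra_mask_prob=0.1):
    """Illustrative loss for learning from masked samples (hypothetical,
    not the paper's objective). `x_corrupted` is observed only where
    `mask == 1`; no clean image is ever needed."""
    # Further corrupt: randomly hide a fraction of the observed pixels.
    extra = (torch.rand_like(mask) > extra_mask_prob).float()
    further_mask = mask * extra
    x_input = x_corrupted * further_mask

    # The model sees the doubly-masked input plus the surviving mask.
    x_pred = model(x_input, further_mask)

    # Supervise only on pixels observed originally but hidden from the
    # model: it must predict beyond its input, which forces it to learn
    # structure of the underlying distribution.
    target = mask * (1.0 - extra)
    num = ((x_pred - x_corrupted) ** 2 * target).sum()
    return num / target.sum().clamp(min=1.0)

# Usage with a stand-in model that simply echoes its masked input:
x = torch.rand(8, 3, 32, 32)
mask = (torch.rand_like(x) > 0.5).float()
echo = lambda inp, m: inp
print(masked_training_loss(echo, x * mask, mask).item())
```

The design point is that the reconstruction target is itself a corrupted sample, so no clean image ever enters the loss.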
no code implementations • 28 Feb 2023 • Aravind Gollakota, Adam R. Klivans, Konstantinos Stavropoulos, Arsen Vasilyan
Prior work on testable learning ignores the labels in the training set and checks that the empirical moments of the covariates are close to the moments of the base distribution.
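As a toy version of such a label-oblivious check (our own sketch, not the tester from any of these papers), one can compare low-degree empirical moments of each coordinate against the moments of a standard Gaussian:

```python
import numpy as np

def gaussian_moment(k):
    """E[Z^k] for Z ~ N(0, 1): zero for odd k, (k-1)!! for even k."""
    if k % 2:
        return 0.0
    out = 1.0
    for j in range(k - 1, 0, -2):
        out *= j
    return out

def moment_tester(X, max_degree=4, tol=0.1):
    """Accept iff every low-degree per-coordinate moment of the
    covariates X (an (n, d) array) is close to the Gaussian moment.
    A real tester would also check mixed monomials; this is a toy."""
    for k in range(1, max_degree + 1):
        gap = np.abs((X ** k).mean(axis=0) - gaussian_moment(k))
        if gap.max() > tol:
            return False   # reject: covariates look non-Gaussian
    return True            # accept: moments match up to tol

# Genuine Gaussian covariates pass; a mean-shifted distribution fails.
rng = np.random.default_rng(0)
print(moment_tester(rng.standard_normal((50_000, 5))))        # True
print(moment_tester(rng.standard_normal((50_000, 5)) + 0.5))  # False
```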
no code implementations • 23 Nov 2022 • Aravind Gollakota, Adam R. Klivans, Pravesh K. Kothari
A remarkable recent paper by Rubinfeld and Vasilyan (2022) initiated the study of testable learning, where the goal is to replace hard-to-verify distributional assumptions (such as Gaussianity) with efficiently testable ones and to require that the learner succeed whenever the unknown distribution passes the corresponding test.
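Schematically, the tester-learner contract reads as follows; this is our paraphrase of the definition, with hypothetical function names:

```python
def testable_learn(samples, tester, learner):
    """Tester-learner pair in the sense sketched above (illustrative).

    Completeness: if the samples truly come from the base distribution
    (e.g., a Gaussian), `tester` must accept with high probability.
    Soundness: whenever `tester` accepts, the hypothesis returned by
    `learner` must meet the error guarantee, with no further
    distributional assumption on the data."""
    if not tester(samples):   # efficiently checkable surrogate test
        return None           # refuse to output a hypothesis
    return learner(samples)   # guarantee must hold on acceptance
```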
no code implementations • 10 Feb 2022 • Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka
We give superpolynomial statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model.
no code implementations • 9 Feb 2021 • Aravind Gollakota, Daniel Liang
Our results position the problem of learning stabilizer states as a natural quantum analogue of the classical problem of learning parities: easy in the noiseless setting, but seemingly intractable even with simple forms of noise.
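The classical half of this analogy is standard: with noiseless labels, a parity over GF(2) is recovered by Gaussian elimination. A self-contained sketch (our illustration, not code from the paper):

```python
import numpy as np

def learn_parity_noiseless(X, y):
    """Recover a parity consistent with noiseless labels by Gaussian
    elimination over GF(2). X is an (n, d) 0/1 matrix of examples and
    y the 0/1 labels; returns s with (X @ s) % 2 == y."""
    A = np.concatenate([X % 2, (y % 2).reshape(-1, 1)], axis=1).astype(np.int64)
    n, d = X.shape
    pivot_cols, r = [], 0
    for c in range(d):
        hits = np.nonzero(A[r:, c])[0]
        if hits.size == 0:
            continue                                  # free column
        A[[r, r + hits[0]]] = A[[r + hits[0], r]]     # bring a pivot row up
        for i in range(n):
            if i != r and A[i, c]:
                A[i] ^= A[r]                          # XOR = addition mod 2
        pivot_cols.append(c)
        r += 1
        if r == n:
            break
    # Noiseless labels guarantee consistency; set free variables to 0.
    s = np.zeros(d, dtype=np.int64)
    for row, c in enumerate(pivot_cols):
        s[c] = A[row, -1]
    return s

# Usage: with enough random examples, s labels every example correctly.
rng = np.random.default_rng(1)
d, n = 20, 200
secret = rng.integers(0, 2, size=d)
X = rng.integers(0, 2, size=(n, d))
y = (X @ secret) % 2
s = learn_parity_noiseless(X, y)
assert np.array_equal((X @ s) % 2, y)
```

With noise on the labels, this linear-algebraic route breaks down, which is exactly the contrast the abstract draws.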
no code implementations • 22 Oct 2020 • Aravind Gollakota, Sushrut Karmalkar, Adam Klivans
Generalizing a beautiful work of Malach and Shalev-Shwartz (2022) that gave tight correlational SQ (CSQ) lower bounds for learning DNF formulas, we give new proofs that lower bounds on the threshold or approximate degree of any function class directly imply CSQ lower bounds for PAC or agnostic learning, respectively.
no code implementations • NeurIPS 2020 • Surbhi Goel, Aravind Gollakota, Adam Klivans
We give the first statistical-query lower bounds for agnostically learning any non-polynomial activation with respect to Gaussian marginals (e.g., ReLU, sigmoid, sign).
no code implementations • ICML 2020 • Surbhi Goel, Aravind Gollakota, Zhihan Jin, Sushrut Karmalkar, Adam Klivans
Our lower bounds hold for broad classes of activations including ReLU and sigmoid.