no code implementations • 1 Jan 2021 • Rares C Cristian, Max Dabagia, Christos Papadimitriou, Santosh Vempala
Here we hypothesize that (a) brains employ synaptic plasticity rules that serve as proxies for gradient descent (GD); (b) these rules can themselves be learned by GD on the rule parameters; and (c) this process may be a missing ingredient for the development of ANNs that generalize well and are robust to adversarial perturbations.
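As a minimal illustration of hypothesis (b), the sketch below meta-learns a two-parameter Hebbian-style plasticity rule by running gradient descent on the rule parameters themselves (outer loop) while the rule trains a linear unit (inner loop). The rule's functional form, the toy task, and all constants are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_train(theta, X, y, steps=15):
    """Run a local, parameterized plasticity rule on a linear unit.

    Hypothetical rule (a Hebbian-style proxy for gradient descent):
        dw = theta[0] * err * pre  +  theta[1] * post * pre
    where pre = x, post = x @ w, and err = y - post are all locally
    available at the synapse.  Returns the final task loss.
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        for x, t in zip(X, y):
            post = x @ w
            w += theta[0] * (t - post) * x + theta[1] * post * x
    return float(np.mean((X @ w - y) ** 2))

def meta_gradient(theta, X, y, eps=1e-3):
    """d(loss)/d(theta) by central differences: 'GD on the rule parameters'."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta); e[i] = eps
        g[i] = (inner_train(theta + e, X, y) - inner_train(theta - e, X, y)) / (2 * eps)
    return g

# Toy regression task; rows normalized so the inner dynamics stay stable.
X = rng.normal(size=(40, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = X @ rng.normal(size=5)

theta = np.array([0.05, 0.05])                    # initial rule parameters
for _ in range(60):
    theta = np.clip(theta - 0.05 * meta_gradient(theta, X, y), -1.0, 1.0)

print("meta-learned rule:", theta)        # error term grows, Hebbian term shrinks:
print("loss:", inner_train(theta, X, y))  # the outer loop rediscovers the delta rule
```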
2 code implementations • 17 Jun 2020 • Mehrdad Ghadiri, Samira Samadi, Santosh Vempala
We show that the popular k-means clustering algorithm (Lloyd's heuristic), used across a variety of scientific data, can result in outcomes that are unfavorable to subgroups of data (e.g., demographic groups).
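A quick way to see the phenomenon: run plain Lloyd's k-means on data containing a large tight group and a small spread-out group, and compare the average clustering cost each group incurs. The synthetic groups and parameters below are invented; this demonstrates the problem the paper addresses, not its fair-clustering solution.

```python
import numpy as np

rng = np.random.default_rng(1)

def lloyd_kmeans(X, k, iters=50):
    """Plain Lloyd's heuristic: alternate assignment and centroid steps."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def group_cost(X, centers):
    """Average squared distance of a group's points to their nearest center."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).mean())

# Two synthetic 'demographic' groups: a large tight one and a small spread one.
A = rng.normal(0.0, 0.5, size=(900, 2))          # majority group
B = rng.normal(4.0, 1.5, size=(100, 2))          # minority group
centers, _ = lloyd_kmeans(np.vstack([A, B]), k=4)

# The k-means objective is dominated by the majority, so the minority group
# typically ends up with a much higher average cost.
print("cost for group A:", group_cost(A, centers))
print("cost for group B:", group_cost(B, centers))
```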
no code implementations • 26 Nov 2019 • He Jia, Santosh Vempala
We give an efficient algorithm for robustly clustering a mixture of two arbitrary Gaussians, a central open problem in the theory of computationally efficient robust estimation, assuming only that the means of the component Gaussians are well-separated or their covariances are well-separated.
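The paper's algorithm is considerably more involved; the toy sketch below only illustrates the easy, mean-separated case with benign (not worst-case) corruption: prune obvious outliers, then split along the top principal direction. All parameters and the corruption pattern are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, eta = 500, 10, 0.05

# Mixture of two mean-separated Gaussians, plus an eta fraction of
# adversarial points planted far away.
mu = np.zeros(d); mu[0] = 6.0
X = np.vstack([
    rng.normal(0,  1, size=(n, d)),
    rng.normal(0,  1, size=(n, d)) + mu,
    rng.normal(50, 1, size=(int(eta * 2 * n), d)),   # corruption
])

# Step 1 (naive pruning): drop points far from the coordinate-wise median,
# a crude stand-in for the paper's robust-estimation machinery.
med = np.median(X, axis=0)
dist = np.linalg.norm(X - med, axis=1)
keep = X[dist <= np.quantile(dist, 0.9)]

# Step 2: with well-separated means, the top principal direction of the
# pruned data aligns with the difference of the means; split on it.
Y = keep - keep.mean(axis=0)
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
proj = Y @ Vt[0]
labels = (proj > np.median(proj)).astype(int)

print("cluster sizes:", np.bincount(labels))
```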
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Sruthi Gorantla, Anand Louis, Christos H. Papadimitriou, Santosh Vempala, Naganand Yadati
Artificial neural networks (ANNs) lack biological plausibility, chiefly because backpropagation requires a form of synaptic plasticity (precise changes to synaptic weights informed by neural events occurring downstream in the neural circuit) that is profoundly incompatible with the current understanding of the animal brain.
2 code implementations • NeurIPS 2019 • Uthaipon Tantipongpipat, Samira Samadi, Mohit Singh, Jamie Morgenstern, Santosh Vempala
Our main result is an exact polynomial-time algorithm for the two-criterion dimensionality reduction problem when the two criteria are increasing concave functions.
1 code implementation • NeurIPS 2018 • Samira Samadi, Uthaipon Tantipongpipat, Jamie Morgenstern, Mohit Singh, Santosh Vempala
This motivates our study of dimensionality reduction techniques which maintain similar fidelity for A and B.
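To see why such techniques are needed, the sketch below runs vanilla PCA on pooled data from two synthetic groups whose variance lives in different coordinates and compares the per-group reconstruction error; the minority group typically fares much worse. The data model is invented, and this shows the failure mode, not the paper's fair dimensionality reduction algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def pca_projector(X, r):
    """Rank-r projector from the top right singular vectors of X."""
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    V = Vt[:r].T
    return V @ V.T

def avg_recon_error(X, P):
    """Average squared reconstruction error of a group under projector P."""
    Xc = X - X.mean(axis=0)
    return float(np.mean(np.linalg.norm(Xc - Xc @ P, axis=1) ** 2))

# Group A varies mostly along the first coordinates, group B along the last.
d = 20
A = rng.normal(size=(800, d)) * np.linspace(3, 1, d)     # majority
B = rng.normal(size=(200, d)) * np.linspace(1, 3, d)     # minority

P = pca_projector(np.vstack([A, B]), r=5)   # vanilla PCA on the pooled data
print("error on A:", avg_recon_error(A, P))
print("error on B:", avg_recon_error(B, P))  # typically much larger
```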
no code implementations • NeurIPS 2018 • Nima Anari, Constantinos Daskalakis, Wolfgang Maass, Christos H. Papadimitriou, Amin Saberi, Santosh Vempala
We give an application to recovering assemblies of neurons.
no code implementations • 7 May 2018 • Santosh Vempala, John Wilmes
We give an agnostic learning guarantee for GD: starting from a randomly initialized network, it converges in mean squared loss to the minimum error (in $2$-norm) of the best approximation of the target function using a polynomial of degree at most $k$.
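A small empirical companion to this guarantee, with invented architecture and hyperparameters: train a randomly initialized one-hidden-layer ReLU net by plain gradient descent on a 1-D target and compare its mean squared error against the best degree-$k$ polynomial fit obtained by least squares.

```python
import numpy as np

rng = np.random.default_rng(4)

# Target function and data (1-D so the degree-k comparison is easy to read).
f = lambda x: np.sin(3 * x)
X = rng.uniform(-1, 1, size=(400, 1))
y = f(X[:, 0])

# Randomly initialized one-hidden-layer net trained by full-batch GD.
m, lr = 200, 0.05
W = rng.normal(size=(1, m)); b = rng.normal(size=m)
a = rng.normal(size=m) / np.sqrt(m)
for _ in range(3000):
    H = np.maximum(X @ W + b, 0.0)               # ReLU features, shape (n, m)
    err = H @ a - y
    grad_a = H.T @ err / len(X)                  # GD on the output layer...
    G = (err[:, None] * (H > 0)) * a             # ...and on the hidden layer
    grad_W = X.T @ G / len(X)
    grad_b = G.mean(axis=0)
    a -= lr * grad_a; W -= lr * grad_W; b -= lr * grad_b

net_mse = float(np.mean((np.maximum(X @ W + b, 0.0) @ a - y) ** 2))

# Benchmark: the best degree-k polynomial approximation (least squares).
k = 5
coeffs = np.polyfit(X[:, 0], y, k)
poly_mse = float(np.mean((np.polyval(coeffs, X[:, 0]) - y) ** 2))

print(f"GD-trained net MSE:       {net_mse:.5f}")
print(f"best degree-{k} poly MSE: {poly_mse:.5f}")
```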
2 code implementations • 11 Oct 2017 • Laurent Heirendt, Sylvain Arreckx, Thomas Pfau, Sebastián N. Mendoza, Anne Richelle, Almut Heinken, Hulda S. Haraldsdóttir, Jacek Wachowiak, Sarah M. Keating, Vanja Vlasov, Stefania Magnusdóttir, Chiam Yu Ng, German Preciat, Alise Žagare, Siu H. J. Chan, Maike K. Aurich, Catherine M. Clancy, Jennifer Modamio, John T. Sauls, Alberto Noronha, Aarash Bordbar, Benjamin Cousins, Diana C. El Assal, Luis V. Valcarcel, Iñigo Apaolaza, Susan Ghaderi, Masoud Ahookhosh, Marouen Ben Guebila, Andrejs Kostromins, Nicolas Sompairac, Hoai M. Le, Ding Ma, Yuekai Sun, Lin Wang, James T. Yurkovich, Miguel A. P. Oliveira, Phan T. Vuong, Lemmer P. El Assal, Inna Kuperstein, Andrei Zinovyev, H. Scott Hinton, William A. Bryant, Francisco J. Aragón Artacho, Francisco J. Planes, Egils Stalidzans, Alejandro Maass, Santosh Vempala, Michael Hucka, Michael A. Saunders, Costas D. Maranas, Nathan E. Lewis, Thomas Sauter, Bernhard Ø. Palsson, Ines Thiele, Ronan M. T. Fleming
This protocol can be adapted for the generation and analysis of a constraint-based model in a wide variety of molecular systems biology scenarios.
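The protocol itself is built on the COBRA Toolbox in MATLAB; purely as an illustration of the core computation (flux balance analysis posed as a linear program), here is a miniature Python version on an invented three-reaction network.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions):
# R1: uptake -> A,  R2: A -> B,  R3: B -> biomass (the objective reaction).
S = np.array([
    [1, -1,  0],    # metabolite A
    [0,  1, -1],    # metabolite B
])
lb = [0, 0, 0]
ub = [10, 8, 12]    # R2's capacity (8) is the bottleneck

# FBA: maximize flux through R3 subject to steady state S v = 0 and bounds.
c = [0, 0, -1]      # linprog minimizes, so negate the objective
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=list(zip(lb, ub)))
print("optimal fluxes:", res.x)      # -> [8, 8, 8], limited by R2
print("biomass flux:", -res.fun)
```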
no code implementations • NeurIPS 2017 • Le Song, Santosh Vempala, John Wilmes, Bo Xie
Moreover, this hard family of functions is realizable with a small (sublinear in dimension) number of activation units in the single hidden layer.
no code implementations • 12 Aug 2016 • Ravi Kannan, Santosh Vempala
We give a polynomial-time algorithm to identify all the hidden hubs with high probability for $k \ge n^{0.5-\delta}$ for some $\delta > 0$, when $\sigma_1^2 > 2\sigma_0^2$.
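As a back-of-the-envelope illustration of the planted model (as we read the abstract: $k$ hidden rows each carrying $k$ higher-variance entries), the sketch below scores rows by their number of large entries. Note that this naive counter needs a much larger variance ratio than the paper's $\sigma_1^2 > 2\sigma_0^2$ threshold; the detector and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 2000, 60                  # k ~ n^0.55, in the regime the paper targets
sigma0, sigma1 = 1.0, 3.0        # generous ratio so the naive counter works

# Planted model: k hidden 'hub' rows each carry k higher-variance entries;
# every other entry is N(0, sigma0^2).
A = rng.normal(0, sigma0, size=(n, n))
hubs = rng.choice(n, size=k, replace=False)
for i in hubs:
    cols = rng.choice(n, size=k, replace=False)
    A[i, cols] = rng.normal(0, sigma1, size=k)

# Naive detector: count large entries per row; hub rows exceed the
# threshold more often because of their inflated-variance entries.
t = 2.5 * sigma0
scores = (np.abs(A) > t).sum(axis=1)
found = np.argsort(scores)[-k:]

print("hubs recovered:", len(set(found) & set(hubs)), "of", k)
```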
2 code implementations • 24 Apr 2016 • Kevin A. Lai, Anup B. Rao, Santosh Vempala
We consider the problem of estimating the mean and covariance of a distribution from iid samples in $\mathbb{R}^n$, in the presence of an $\eta$ fraction of malicious noise; this is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type.
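To see what is at stake, the sketch below corrupts an $\eta$ fraction of Gaussian samples and compares the naive empirical mean against a crude iterative-pruning baseline. The pruning heuristic is invented for illustration and only handles obvious outliers; it is not the paper's agnostic estimator.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, eta = 1000, 20, 0.1

# Samples from N(0, I) with an eta fraction replaced by malicious points.
X = rng.normal(size=(n, d))
X[: int(eta * n)] = 10.0                          # adversary's choice

def pruned_mean(X, eta, rounds=10):
    """Crude baseline: repeatedly drop the points farthest from the current
    mean estimate, then re-average (slightly over-pruning so all planted
    points go)."""
    Y = X.copy()
    for _ in range(rounds):
        mu = Y.mean(axis=0)
        dist = np.linalg.norm(Y - mu, axis=1)
        Y = Y[dist <= np.quantile(dist, 1 - 1.5 * eta / rounds)]
    return Y.mean(axis=0)

true_mean = np.zeros(d)
print("error of naive mean: ", np.linalg.norm(X.mean(axis=0) - true_mean))
print("error of pruned mean:", np.linalg.norm(pruned_mean(X, eta) - true_mean))
```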
no code implementations • 26 Feb 2016 • Christos Papadimitriou, Samantha Petti, Santosh Vempala
We study the rate of convergence, finding that while linear convergence to the correct function can be achieved for any threshold using a fixed set of primitives, for quadratic convergence, the size of the primitives must grow as the threshold approaches 0 or 1.
no code implementations • 30 Dec 2015 • Vitaly Feldman, Cristobal Guzman, Santosh Vempala
Stochastic convex optimization, where the objective is the expectation of a random convex function, is an important and widely used method with numerous applications in machine learning, statistics, operations research and other areas.
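For concreteness, a minimal instance of the setting: minimize $F(w) = \mathbb{E}[f(w;\xi)]$, where each draw of $\xi$ supplies one random convex function (here a random least-squares term), via stochastic gradient descent. The data model and step sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 10
w_star = rng.normal(size=d)

def sample_loss_grad(w):
    """Draw one random convex function f(w; xi) = (x.w - y)^2 / 2 and
    return its gradient; the true objective is F(w) = E[f(w; xi)]."""
    x = rng.normal(size=d)
    y = x @ w_star + 0.1 * rng.normal()
    return (x @ w - y) * x

# Stochastic gradient descent with the usual 1/sqrt(t) step size.
w = np.zeros(d)
for t in range(1, 20001):
    w -= (0.1 / np.sqrt(t)) * sample_loss_grad(w)

print("distance to minimizer:", np.linalg.norm(w - w_star))
```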
no code implementations • 6 Nov 2014 • Maria-Florina Balcan, Avrim Blum, Santosh Vempala
Specifically, we consider the problem of learning many different target functions over time, that share certain commonalities that are initially unknown to the learning algorithm.
1 code implementation • 25 Jun 2013 • Navin Goyal, Santosh Vempala, Ying Xiao
Fourier PCA is Principal Component Analysis of a matrix obtained from higher-order derivatives of the logarithm of the Fourier transform of a distribution. We make this method algorithmic by developing a tensor decomposition method for a pair of tensors sharing the same vectors in their rank-$1$ decompositions.
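A bare-bones sketch of the idea on a 2-D ICA instance with an (assumed) orthogonal mixing matrix: numerically form the Hessian of the log empirical characteristic function at a generic point and read off its eigenvectors, which recover the mixing directions up to sign and permutation. The general non-orthogonal case needs the paper's tensor decomposition of fourth derivatives; everything below is an invented toy.

```python
import numpy as np

rng = np.random.default_rng(8)

# 2-D ICA instance: x = A s with independent non-Gaussian (uniform) sources
# and, for this toy, an orthogonal mixing matrix A.
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = rng.uniform(-1, 1, size=(200000, 2))
X = S @ A.T

def log_cf(u):
    """log of the empirical characteristic function E[exp(i u.x)]
    (real for symmetric sources, so we keep the real part)."""
    return np.log(np.mean(np.exp(1j * X @ u))).real

def hessian(f, u, h=1e-2):
    """Finite-difference Hessian: the 'higher-order derivatives' in miniature."""
    d = len(u)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * h, np.eye(d)[j] * h
            H[i, j] = (f(u + ei + ej) - f(u + ei - ej)
                       - f(u - ei + ej) + f(u - ei - ej)) / (4 * h * h)
    return H

# 'PCA' of the Hessian of the log-CF at a generic nonzero point u: for
# orthogonal A its eigenvectors align with the columns of A.
u = np.array([0.9, 0.4])
_, vecs = np.linalg.eigh(hessian(log_cf, u))
print("recovered directions (up to sign/order):\n", vecs)
print("true mixing matrix A:\n", A)
```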