no code implementations • 15 Nov 2023 • Misha Ivkov, Tselil Schramm
Approximate message passing (AMP) is a family of iterative algorithms that generalize matrix power iteration.
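As a rough illustration of the power-iteration connection (a minimal sketch, not code from the paper), here is one standard AMP scheme for a symmetric $n \times n$ matrix with a generic entrywise denoiser $f$; the tanh denoiser in the comment is one illustrative choice for a $\pm 1$ prior:

```python
import numpy as np

def amp(Y, f, f_prime, x0, iters=20):
    """Generic AMP iteration for a symmetric n x n matrix Y: a power-iteration
    step through an entrywise denoiser f, minus the Onsager correction term
    that distinguishes AMP from plain matrix power iteration."""
    x_prev = np.zeros_like(x0)
    x = x0.copy()
    for _ in range(iters):
        b = np.mean(f_prime(x))           # Onsager coefficient
        x_new = Y @ f(x) - b * f(x_prev)  # corrected power step
        x_prev, x = x, x_new
    return x

# Illustrative denoiser pair for a +/-1 signal prior:
# amp(Y, np.tanh, lambda z: 1 - np.tanh(z)**2, x0)
```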
no code implementations • 29 Apr 2023 • Shuangping Li, Tselil Schramm
Gaussian mixture block models are distributions over graphs that strive to model modern networks: to generate a graph from such a model, we associate each vertex $i$ with a latent feature vector $u_i \in \mathbb{R}^d$ sampled from a mixture of Gaussians, and we add edge $(i, j)$ if and only if the feature vectors are sufficiently similar, in that $\langle u_i, u_j \rangle \ge \tau$ for a pre-specified threshold $\tau$.
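The generative process above translates directly into code; a minimal sketch (the uniform mixing weights, spherical covariance, and parameter names are illustrative choices, not from the paper):

```python
import numpy as np

def gmb_graph(n, d, centers, tau, sigma=1.0, rng=None):
    """Sample a graph from a Gaussian mixture block model: each vertex i gets
    a latent vector u_i drawn from a uniform mixture of Gaussians
    N(center, sigma^2 I), and edge (i, j) is present iff <u_i, u_j> >= tau."""
    rng = np.random.default_rng(rng)
    k = len(centers)
    labels = rng.integers(k, size=n)                       # mixture component per vertex
    U = np.array(centers)[labels] + sigma * rng.standard_normal((n, d))
    G = (U @ U.T >= tau)                                   # threshold inner products
    np.fill_diagonal(G, False)                             # no self-loops
    return G, labels

# Example: two well-separated components in R^5.
# G, labels = gmb_graph(200, 5, [np.zeros(5), 2 * np.ones(5)], tau=1.0)
```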
no code implementations • 19 May 2022 • Afonso S. Bandeira, Ahmed El Alaoui, Samuel B. Hopkins, Tselil Schramm, Alexander S. Wein, Ilias Zadik
We define a free-energy based criterion for hardness and formally connect it to the well-established notion of low-degree hardness for a broad class of statistical problems, namely all Gaussian additive models and certain models with a sparse planted signal.
no code implementations • 5 Mar 2022 • Samuel B. Hopkins, Tselil Schramm, Jonathan Shi
We give a spectral algorithm for decomposing overcomplete order-4 tensors, so long as their components satisfy an algebraic non-degeneracy condition that holds for nearly all (all but an algebraic set of measure $0$) tensors over $(\mathbb{R}^d)^{\otimes 4}$ with rank $n \le d^2$.
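To fix ideas, a small numerical check (an illustrative setup, not the paper's decomposition algorithm): the $d^2 \times d^2$ flattening of $T = \sum_i a_i^{\otimes 4}$ has rank $n$ for generic components whenever $n \le d^2$, even in the overcomplete regime $n > d$, and its top eigenspace is the natural starting point for spectral recovery:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 20                       # overcomplete: n > d, but n <= d^2
A = rng.standard_normal((d, n))    # columns a_i are the components

# Order-4 tensor T = sum_i a_i^{(x)4}, stored via its d^2 x d^2 flattening
# M = sum_i (a_i (x) a_i)(a_i (x) a_i)^T.
M = sum(np.outer(np.outer(A[:, i], A[:, i]).ravel(),
                 np.outer(A[:, i], A[:, i]).ravel()) for i in range(n))

# For generic components the flattening has rank exactly n, and its top-n
# eigenspace is span{a_i (x) a_i}.
eigvals = np.linalg.eigvalsh(M)
print(np.sum(eigvals > 1e-8))      # prints n (generically)
```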
no code implementations • NeurIPS 2021 • Arun Jambulapati, Jerry Li, Tselil Schramm, Kevin Tian
For the general case of smooth GLMs (e.g., logistic regression), we show that the robust gradient descent framework of Prasad et al.
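A minimal sketch of the robust-gradient-descent template for logistic regression, using a coordinate-wise median as a simple stand-in for a proper robust mean estimator (the aggregator, step size, and function names here are illustrative, not the paper's):

```python
import numpy as np

def robust_logistic_gd(X, y, steps=200, lr=0.1):
    """Gradient descent for logistic regression where the per-sample
    gradients are combined with a robust aggregator (coordinate-wise
    median here) instead of the outlier-sensitive mean."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grads = (p - y)[:, None] * X       # per-sample gradients, n x d
        g = np.median(grads, axis=0)       # robust aggregation step
        w -= lr * g
    return w
```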
no code implementations • 17 Feb 2021 • Ronen Eldan, Dan Mikulincer, Tselil Schramm
We study the extent to which wide neural networks may be approximated by Gaussian processes when initialized with random weights.
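A quick way to see the phenomenon empirically (an illustrative experiment, not from the paper): sample the output of many randomly initialized wide one-hidden-layer ReLU networks at a fixed input; over the random draws, the output distribution should look approximately Gaussian:

```python
import numpy as np

rng = np.random.default_rng(1)
d, width, trials = 10, 4096, 2000
x = rng.standard_normal(d) / np.sqrt(d)   # fixed unit-scale input

outs = []
for _ in range(trials):
    # One-hidden-layer ReLU network with 1/sqrt(width) output scaling.
    W = rng.standard_normal((width, d))
    v = rng.standard_normal(width)
    outs.append(v @ np.maximum(W @ x, 0.0) / np.sqrt(width))

outs = np.array(outs)
# As the width grows, the output over random initializations approaches a
# centered Gaussian; compare empirical moments against Gaussian predictions.
print(outs.mean(), outs.std())
```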
no code implementations • 13 Sep 2020 • Matthew Brennan, Guy Bresler, Samuel B. Hopkins, Jerry Li, Tselil Schramm
Researchers currently use a number of approaches to predict and substantiate information-computation gaps in high-dimensional statistical estimation problems.
no code implementations • 5 Aug 2020 • Tselil Schramm, Alexander S. Wein
One fundamental goal of high-dimensional statistics is to detect or recover planted structure (such as a low-rank matrix) hidden in noisy data.
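A canonical toy instance of such a detection problem (illustrative, not from the paper) is the spiked Wigner model, where a planted rank-one structure becomes spectrally detectable once the signal-to-noise ratio crosses the BBP threshold:

```python
import numpy as np

def top_eig_detect(n, snr, rng=None):
    """Detect a rank-one spike in a Wigner matrix via the top eigenvalue.
    Under Y = (snr/n) * v v^T + W / sqrt(n), the bulk spectrum ends at 2,
    and for snr > 1 the top eigenvalue separates to snr + 1/snr > 2."""
    rng = np.random.default_rng(rng)
    v = rng.choice([-1.0, 1.0], size=n)       # planted +/-1 spike
    W = rng.standard_normal((n, n))
    W = (W + W.T) / np.sqrt(2)                # symmetric noise, unit variance
    Y = (snr / n) * np.outer(v, v) + W / np.sqrt(n)
    return np.linalg.eigvalsh(Y)[-1]          # > 2 signals a planted spike
```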
no code implementations • 30 Jul 2018 • Prasad Raghavendra, Tselil Schramm, David Steurer
On one hand, there is a growing body of work utilizing sum-of-squares proofs for recovering solutions to polynomial systems when the system is feasible.
no code implementations • NeurIPS 2019 • Boaz Barak, Chi-Ning Chou, Zhixian Lei, Tselil Schramm, Yueqi Sheng
Specifically, for every $\gamma>0$, we give an $n^{O(\log n)}$-time algorithm that, given a pair of $\gamma$-correlated $G(n, p)$ graphs $G_0, G_1$ with average degree between $n^{\varepsilon}$ and $n^{1/153}$ for $\varepsilon = o(1)$, recovers the "ground truth" permutation $\pi\in S_n$ that matches the vertices of $G_0$ to the vertices of $G_1$ in the way that minimizes the number of mismatched edges.
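One standard way to generate such a correlated pair (a sketch; the edge-subsampling parameterization is an illustrative stand-in for the paper's exact $\gamma$-correlated model) is to subsample a common parent graph twice and hide a permutation:

```python
import numpy as np

def correlated_pair(n, p, s, rng=None):
    """Sample correlated Erdos-Renyi graphs: parent ~ G(n, p), each child
    keeps each parent edge independently with probability s (so each child
    is marginally G(n, p*s)), and the second child's vertex labels are
    scrambled by a hidden permutation pi."""
    rng = np.random.default_rng(rng)
    parent = np.triu(rng.random((n, n)) < p, k=1)   # upper-triangular edge set
    G0 = parent & np.triu(rng.random((n, n)) < s, k=1)
    G1 = parent & np.triu(rng.random((n, n)) < s, k=1)
    G0, G1 = G0 | G0.T, G1 | G1.T                   # symmetrize adjacency
    pi = rng.permutation(n)
    return G0, G1[np.ix_(pi, pi)], pi
```

In the sparse regime the retention probability $s$ plays roughly the role of the correlation parameter $\gamma$.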
no code implementations • 27 Jun 2017 • Tselil Schramm, David Steurer
We develop fast spectral algorithms for tensor decomposition that match the robustness guarantees of the best known polynomial-time algorithms for this problem based on the sum-of-squares (SOS) semidefinite programming hierarchy.
no code implementations • 8 Dec 2015 • Samuel B. Hopkins, Tselil Schramm, Jonathan Shi, David Steurer
For tensor decomposition, we give an algorithm with running time close to linear in the input size (with exponent $\approx 1.086$) that approximately recovers a component of a random 3-tensor over $\mathbb{R}^n$ of rank up to $\tilde \Omega(n^{4/3})$.
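The simplest baseline that such fast algorithms refine is tensor power iteration, the order-3 analogue of matrix power iteration; a minimal sketch (illustrative, not the paper's algorithm):

```python
import numpy as np

def tensor_power_iter(T, iters=100, rng=None):
    """Approximately recover one component of a symmetric 3-tensor via
    tensor power iteration: repeatedly set x <- T(x, x, .) and normalize,
    the direct analogue of matrix power iteration."""
    rng = np.random.default_rng(rng)
    n = T.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', T, x, x)   # contract T against x twice
        x /= np.linalg.norm(x)
    return x
```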
no code implementations • 9 Jun 2015 • Tselil Schramm, Benjamin Weitz
We apply our tensor completion algorithm to the problem of learning mixtures of product distributions over the hypercube, obtaining new algorithmic results.
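The connection, sketched here under illustrative parameters rather than the paper's notation: the third moments of a mixture of $k$ product distributions form a rank-$k$ tensor whose off-diagonal entries are directly observable from samples, while entries with repeated indices are not (since $x_i^2 = x_i$ over the hypercube), so recovering the components amounts to completing the tensor's missing diagonal:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bits, k = 6, 2
w = np.ones(k) / k                 # mixing weights
P = rng.random((k, n_bits))        # P[c, j] = Pr[bit j = 1 | component c]

# Rank-k third-moment tensor: for distinct i, j, k the mixture satisfies
# E[x_i x_j x_k] = sum_c w_c P[c,i] P[c,j] P[c,k]; entries with repeated
# indices do not obey this formula, hence the tensor completion problem.
T = np.einsum('c,ci,cj,ck->ijk', w, P, P, P)
```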