no code implementations • 26 May 2023 • Gavin Zhang, Hong-Ming Chiu, Richard Y. Zhang
Recently, the technique of preconditioning was shown to be highly effective at accelerating the local convergence of non-convex gradient descent when the measurements are noiseless.
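A minimal sketch of the preconditioning idea described in the abstract, under assumed details: a least-squares matrix-sensing loss f(X) = 0.5/m * ||A(XX^T) - b||^2 and a right-preconditioner (X^T X + lam*I)^{-1}. This is an illustration of the technique, not necessarily the authors' exact algorithm.

```python
import numpy as np

def preconditioned_gd(A_mats, b, n, r, X0=None, lam=1e-6, eta=0.2, iters=300, seed=0):
    """Preconditioned gradient descent on f(X) = 0.5/m * ||<A_i, XX^T> - b_i||^2."""
    rng = np.random.default_rng(seed)
    m = len(A_mats)
    X = X0.copy() if X0 is not None else 0.1 * rng.standard_normal((n, r))
    for _ in range(iters):
        M = X @ X.T
        resid = np.array([np.tensordot(Ai, M) for Ai in A_mats]) - b
        # Gradient of f w.r.t. X: (2/m) * A^*(resid) @ X, where A^* is the adjoint map.
        grad = (2.0 / m) * sum(ri * Ai for ri, Ai in zip(resid, A_mats)) @ X
        P = np.linalg.inv(X.T @ X + lam * np.eye(r))   # right-preconditioner
        X -= eta * grad @ P
    return X

# Toy usage: noiseless rank-1 recovery from m random symmetric measurements,
# started near the ground truth (the "local convergence" regime).
n, r, m = 20, 1, 200
rng = np.random.default_rng(1)
X_star = rng.standard_normal((n, r))
A_mats = [(G + G.T) / 2 for G in (rng.standard_normal((n, n)) for _ in range(m))]
b = np.array([np.tensordot(Ai, X_star @ X_star.T) for Ai in A_mats])
X_hat = preconditioned_gd(A_mats, b, n, r, X0=X_star + 0.1 * rng.standard_normal((n, r)))
print(np.linalg.norm(X_hat @ X_hat.T - X_star @ X_star.T))
```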
no code implementations • 23 Oct 2022 • Geyu Liang, Gavin Zhang, Salar Fattahi, Richard Y. Zhang
This paper focuses on the complete dictionary learning problem, where the goal is to reparametrize a set of given signals as linear combinations of atoms from a learned dictionary.
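A toy illustration of the data model behind this problem (an assumed standard setup, not the paper's method): in complete dictionary learning the dictionary is square, and each signal is a sparse combination of its atoms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 16, 200, 3                              # signal dim, #signals, sparsity level
D, _ = np.linalg.qr(rng.standard_normal((n, n)))  # square ("complete") orthonormal dictionary
S = np.zeros((n, p))
for j in range(p):
    support = rng.choice(n, size=k, replace=False)
    S[support, j] = rng.standard_normal(k)        # k-sparse coefficient vector per signal
Y = D @ S                                         # observed signals

# With the true dictionary in hand, every signal is exactly reparametrized
# by its sparse code; here D is orthonormal, so D^T inverts it.
S_rec = D.T @ Y
print(np.allclose(S_rec, S))                      # True
```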
1 code implementation • 24 Aug 2022 • Gavin Zhang, Hong-Ming Chiu, Richard Y. Zhang
The matrix completion problem seeks to recover a $d\times d$ ground truth matrix of low rank $r\ll d$ from observations of its individual elements.
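A minimal baseline sketch of this observation model (an assumed formulation, not necessarily the paper's algorithm): fit a rank-r factorization XX^T to the observed entries by gradient descent on the squared error over the sampling mask.

```python
import numpy as np

def complete(M_obs, mask, r, eta=0.005, iters=4000, seed=0):
    """Minimize 0.5*||mask * (XX^T - M_obs)||_F^2 over the d x r factor X."""
    d = M_obs.shape[0]
    rng = np.random.default_rng(seed)
    X = 0.1 * rng.standard_normal((d, r))
    for _ in range(iters):
        R = mask * (X @ X.T - M_obs)      # residual on observed entries only
        X -= eta * 2.0 * R @ X            # gradient w.r.t. X (R is symmetric here)
    return X @ X.T

# Toy usage: rank-2 symmetric ground truth, roughly 60% of entries observed.
d, r = 30, 2
rng = np.random.default_rng(1)
U = rng.standard_normal((d, r))
M_star = U @ U.T
mask = (rng.random((d, d)) < 0.6).astype(float)
mask = np.triu(mask) + np.triu(mask, 1).T          # symmetric observation pattern
M_hat = complete(mask * M_star, mask, r)
print(np.linalg.norm(M_hat - M_star) / np.linalg.norm(M_star))
```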
no code implementations • 7 Jun 2022 • Gavin Zhang, Salar Fattahi, Richard Y. Zhang
We consider using gradient descent to minimize the nonconvex function $f(X)=\phi(XX^{T})$ over an $n\times r$ factor matrix $X$, in which $\phi$ is an underlying smooth convex cost function defined over $n\times n$ matrices.
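A small sketch of this setup with an assumed concrete instance of the smooth convex cost, phi(M) = 0.5*||M - M_star||_F^2; by the chain rule, the gradient of f(X) = phi(XX^T) is (grad phi(XX^T) + grad phi(XX^T)^T) X.

```python
import numpy as np

def factored_gd(grad_phi, n, r, eta=0.005, iters=3000, seed=0):
    """Gradient descent on f(X) = phi(XX^T) over the n x r factor matrix X."""
    rng = np.random.default_rng(seed)
    X = 0.1 * rng.standard_normal((n, r))
    for _ in range(iters):
        G = grad_phi(X @ X.T)             # gradient of phi at M = XX^T
        X -= eta * (G + G.T) @ X          # chain rule through M = XX^T
    return X

# Assumed instance: phi(M) = 0.5*||M - M_star||_F^2 with a rank-r ground truth.
n, r = 25, 3
rng = np.random.default_rng(2)
Z = rng.standard_normal((n, r))
M_star = Z @ Z.T
X_hat = factored_gd(lambda M: M - M_star, n, r)
print(np.linalg.norm(X_hat @ X_hat.T - M_star) / np.linalg.norm(M_star))
```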
no code implementations • NeurIPS 2020 • Gavin Zhang, Richard Y. Zhang
Optimizing the threshold over regions of the landscape, we see that for initial points around the ground truth, a linear improvement in the quality of the initial guess amounts to a constant factor improvement in the sample complexity.