Can Learning Be Explained By Local Optimality In Low-rank Matrix Recovery?

21 Feb 2023 · Jianhao Ma, Salar Fattahi

We explore the local landscape of low-rank matrix recovery, aiming to reconstruct a $d_1\times d_2$ matrix with rank $r$ from $m$ linear measurements, some potentially noisy. When the true rank is unknown, overestimation is common, yielding an over-parameterized model with rank $k\geq r$. Recent findings suggest that first-order methods with the robust $\ell_1$-loss can recover the true low-rank solution even when the rank is overestimated and measurements are noisy, implying that true solutions might emerge as local or global minima. Our paper challenges this notion, demonstrating that, under mild conditions, true solutions instead manifest as \textit{strict saddle points}. We study two categories of low-rank matrix recovery, matrix completion and matrix sensing, both with the robust $\ell_1$-loss. For matrix sensing, we uncover two critical transitions. In the regime $\max\{d_1,d_2\}r\lesssim m\lesssim \max\{d_1,d_2\}k$, none of the true solutions are local or global minima, but some become strict saddle points. Once $m$ surpasses $\max\{d_1,d_2\}k$, all true solutions become global minima. In matrix completion, even with slight rank overestimation and mild noise, true solutions emerge either as non-critical points or as strict saddle points.
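To make the setup concrete, here is a minimal sketch of the over-parameterized $\ell_1$-loss matrix sensing problem described above, written in Python/NumPy. The dimensions, outlier fraction, initialization scale, step-size schedule, and the plain subgradient-descent loop are all illustrative assumptions for this sketch, not the paper's exact formulation or experiments.

```python
import numpy as np

# Hypothetical dimensions (assumptions, not from the paper):
# true rank r, over-parameterized rank k >= r, m measurements.
d1, d2, r, k, m = 30, 30, 2, 5, 600

rng = np.random.default_rng(0)

# Ground-truth rank-r matrix M* = U* V*^T.
U_star = rng.standard_normal((d1, r))
V_star = rng.standard_normal((d2, r))
M_star = U_star @ V_star.T

# Gaussian sensing matrices A_i and measurements y_i = <A_i, M*>,
# with a fraction of measurements corrupted by gross noise.
A = rng.standard_normal((m, d1, d2))
y = np.einsum('mij,ij->m', A, M_star)
corrupt = rng.random(m) < 0.1          # ~10% outliers (assumption)
y[corrupt] += 10.0 * rng.standard_normal(corrupt.sum())

# Over-parameterized factorization X = U V^T with k >= r.
U = 0.1 * rng.standard_normal((d1, k))
V = 0.1 * rng.standard_normal((d2, k))

def l1_loss(U, V):
    """Robust l1-loss: (1/m) * sum_i |<A_i, U V^T> - y_i|."""
    resid = np.einsum('mij,ij->m', A, U @ V.T) - y
    return np.abs(resid).mean()

# Plain subgradient descent with a decaying step size: a generic
# first-order method on the l1-loss, not necessarily the exact
# algorithm analyzed in the paper.
for t in range(2000):
    resid = np.einsum('mij,ij->m', A, U @ V.T) - y
    G = np.einsum('m,mij->ij', np.sign(resid), A) / m  # subgradient wrt X
    step = 0.1 / np.sqrt(t + 1)
    U, V = U - step * (G @ V), V - step * (G.T @ U)    # chain rule through X = U V^T

print(f"l1 loss: {l1_loss(U, V):.4f}")
print(f"recovery error: {np.linalg.norm(U @ V.T - M_star) / np.linalg.norm(M_star):.4f}")
```

With these (assumed) numbers, $m = 600$ exceeds $\max\{d_1,d_2\}k = 150$, i.e., the regime in which the abstract states all true solutions are global minima; shrinking $m$ toward $\max\{d_1,d_2\}r = 60$ would move into the regime where true solutions are at best strict saddle points.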
