no code implementations • 1 Sep 2023 • Leyang Zhang, Yaoyu Zhang, Tao Luo
Under mild assumptions, we investigate the structure of the loss landscape of two-layer neural networks near global minima, determine the set of parameters that give perfect generalization, and fully characterize the gradient flows around it.
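The setting can be made concrete with a small numerical experiment. Below is a minimal sketch, not the paper's construction: it simulates gradient flow (via Euler steps) for a two-layer tanh network on a teacher-student regression task, where the teacher guarantees that a zero-loss global minimum exists. All names, sizes, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: gradient flow for a two-layer tanh network on a
# teacher-student task. Sizes and step counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 2, 4, 64                      # input dim, hidden width, sample size
X = rng.standard_normal((n, d))

# A teacher network defines the target, so a zero-loss global minimum exists.
W_t = rng.standard_normal((m, d))
a_t = rng.standard_normal(m)
y = np.tanh(X @ W_t.T) @ a_t

W = rng.standard_normal((m, d)) * 0.5   # student parameters
a = rng.standard_normal(m) * 0.5

eta = 1e-2                              # Euler step size for the flow
for step in range(20000):
    H = np.tanh(X @ W.T)                # hidden activations, shape (n, m)
    r = H @ a - y                       # residuals
    loss = 0.5 * np.mean(r ** 2)
    grad_a = H.T @ r / n
    grad_W = ((r[:, None] * (1 - H ** 2)) * a).T @ X / n
    a -= eta * grad_a
    W -= eta * grad_W

# Loss typically approaches zero when the flow reaches a global minimum.
print(f"final loss: {loss:.2e}")
```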
no code implementations • 18 Jul 2023 • Yaoyu Zhang, Zhongwang Zhang, Leyang Zhang, Zhiwei Bai, Tao Luo, Zhi-Qin John Xu
We propose an optimistic estimate to evaluate the best possible fitting performance of nonlinear models.
no code implementations • 21 Nov 2022 • Yaoyu Zhang, Zhongwang Zhang, Leyang Zhang, Zhiwei Bai, Tao Luo, Zhi-Qin John Xu
Based on these results, the model rank of a target function predicts the minimal training data size required for its successful recovery.
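A hedged analogy for this rank-to-sample-size threshold (not the paper's definition of model rank): for a linear model, recovery of the target succeeds once the sample size reaches the number of effective parameters. The sketch below illustrates the threshold; names and sizes are illustrative assumptions.

```python
# Illustrative analogy: recovery of a linear target succeeds once the
# sample size n reaches the parameter count p, which plays the role of
# "rank" in this toy setting.
import numpy as np

rng = np.random.default_rng(1)
p = 10                                  # number of parameters ("rank" here)
w_true = rng.standard_normal(p)

for n in (5, 10, 20):                   # below, at, and above the threshold
    X = rng.standard_normal((n, p))
    y = X @ w_true
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    err = np.linalg.norm(w_hat - w_true)
    print(f"n={n:2d}  recovery error={err:.2e}")
# Expect large error for n < p and near-zero error once n >= p.
```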
no code implementations • 28 Jan 2022 • Leyang Zhang, Zhi-Qin John Xu, Tao Luo, Yaoyu Zhang
In recent years, understanding the implicit regularization of neural networks (NNs) has become a central task in deep learning theory.
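A classic textbook example of implicit regularization (assumed here for illustration, not taken from this paper): gradient descent on over-parameterized linear least squares, initialized at zero, implicitly converges to the minimum-norm interpolating solution even though no explicit regularizer is used.

```python
# Implicit regularization in over-parameterized least squares: plain
# gradient descent from zero recovers the minimum-norm solution.
import numpy as np

rng = np.random.default_rng(2)
n, p = 5, 20                            # fewer samples than parameters
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

w = np.zeros(p)                         # zero init keeps w in the row space of X
for _ in range(50000):
    w -= 1e-2 * X.T @ (X @ w - y) / n   # plain gradient descent

w_min_norm, *_ = np.linalg.lstsq(X, y, rcond=None)  # min-norm interpolant
print(np.linalg.norm(w - w_min_norm))   # near zero: the implicit bias
```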