no code implementations • 18 May 2023 • Brandon Livio Annesi, Clarissa Lauditi, Carlo Lucibello, Enrico M. Malatesta, Gabriele Perugini, Fabrizio Pittorino, Luca Saglietti
Empirical studies on the landscape of neural networks have shown that low-energy configurations are often found in complex connected structures, where zero-energy paths between pairs of distant solutions can be constructed.
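The idea of connectivity between distant solutions can be illustrated with a toy experiment: train two perceptrons on the same separable data from different initializations and measure the training error along the straight segment between them. All names here (the `energy` and `perceptron` helpers, the data sizes) are illustrative choices, not the paper's setup; a naive linear path is only the simplest probe, whereas the structures studied in the paper are generally curved.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 50, 40
X = rng.standard_normal((P, N))
w_teacher = rng.standard_normal(N)   # planted teacher guarantees separability
y = np.sign(X @ w_teacher)

def energy(w):
    # number of misclassified patterns (zero for a solution)
    return int(np.sum(np.sign(X @ w) != y))

def perceptron(seed, max_updates=100_000):
    # classic perceptron updates until zero training error
    r = np.random.default_rng(seed)
    w = r.standard_normal(N)
    for _ in range(max_updates):
        errs = np.sign(X @ w) != y
        if not errs.any():
            break
        i = r.choice(np.flatnonzero(errs))
        w += y[i] * X[i]
    return w

w1, w2 = perceptron(1), perceptron(2)
# energy profile along the straight segment between the two solutions
path = [energy((1 - t) * w1 + t * w2) for t in np.linspace(0, 1, 21)]
print(path)
```

Nonzero entries in the middle of `path` would signal an energy barrier on the straight line; the connected structures described above correspond to the possibility of going around such barriers along more elaborate paths.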
no code implementations • 26 Apr 2023 • Carlo Baldassi, Enrico M. Malatesta, Gabriele Perugini, Riccardo Zecchina
We analyze the geometry of the landscape of solutions in both models and find important similarities and differences.
no code implementations • 29 Mar 2023 • Matteo Negri, Clarissa Lauditi, Gabriele Perugini, Carlo Lucibello, Enrico Malatesta
The Hopfield model is a paradigmatic model of neural networks that has been analyzed for many decades in the statistical physics, neuroscience, and machine learning communities.
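The basic retrieval mechanism of the Hopfield model can be sketched in a few lines: patterns are stored in a Hebbian coupling matrix, and zero-temperature dynamics cleans up a corrupted cue. The sizes, the synchronous update rule, and the 10% corruption level below are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5  # neurons and stored patterns; P/N is well below capacity (~0.14)
xi = rng.choice([-1, 1], size=(P, N)).astype(float)

# Hebbian couplings J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, zero self-couplings
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)

def recall(s, steps=20):
    # synchronous zero-temperature dynamics: s <- sign(J s)
    for _ in range(steps):
        s = np.where(J @ s >= 0, 1.0, -1.0)
    return s

# corrupt 10% of the first pattern and let the dynamics clean it up
cue = xi[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1
out = recall(cue)
print(np.mean(out == xi[0]))  # fraction of correctly recovered spins
```

Below capacity the stored patterns are (near-)fixed points of the dynamics, so the corrupted cue flows back to the stored pattern; much of the classic statistical-physics analysis concerns how this breaks down as P/N grows.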
no code implementations • 7 Feb 2022 • Fabrizio Pittorino, Antonio Ferraro, Gabriele Perugini, Christoph Feinauer, Carlo Baldassi, Riccardo Zecchina
This lets us derive a meaningful notion of the flatness of minimizers and of the geodesic paths connecting them.
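A generic intuition for flatness (not the symmetry-aware definition derived in the paper) is that a flat minimizer suffers a small loss increase under random perturbations, while a sharp one does not. The toy loss and the Gaussian-perturbation proxy below are illustrative assumptions:

```python
import numpy as np

# toy loss with a sharp minimum near x = -2 and a flat one near x = +2
def loss(x):
    return np.minimum(50 * (x + 2) ** 2, 0.5 * (x - 2) ** 2)

def flatness_proxy(x0, sigma=0.1, n=10_000, seed=0):
    # average loss increase under Gaussian perturbations of the minimizer
    rng = np.random.default_rng(seed)
    return float(np.mean(loss(x0 + sigma * rng.standard_normal(n)) - loss(x0)))

sharp = flatness_proxy(-2.0)
flat = flatness_proxy(2.0)
print(sharp, flat)  # the sharp minimum shows a much larger average increase
```

A key caveat, and part of the motivation for a symmetry-corrected notion: in neural networks, rescaling symmetries can change such naive perturbation measures without changing the function the network computes.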
no code implementations • 27 Oct 2021 • Carlo Lucibello, Fabrizio Pittorino, Gabriele Perugini, Riccardo Zecchina
Message-passing algorithms based on the Belief Propagation (BP) equations constitute a well-known distributed computational scheme.
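A minimal concrete instance of the BP scheme, under assumptions of my own choosing (a small Ising chain with arbitrary couplings and fields): sum-product messages are passed along the chain in both directions, and since a chain is a tree, the resulting single-spin marginals are exact, which a brute-force enumeration confirms.

```python
import numpy as np
from itertools import product

# small Ising chain: s_i in {-1,+1}, weight exp(sum_i J_i s_i s_{i+1} + sum_i h_i s_i)
n = 4
J = np.array([0.5, -0.3, 0.8])
h = np.array([0.2, 0.0, -0.4, 0.1])
states = np.array([-1.0, 1.0])

def psi(i, s, t):  # pairwise factor between spins i and i+1
    return np.exp(J[i] * s * t)

def phi(i, s):     # local field factor on spin i
    return np.exp(h[i] * s)

# forward and backward sum-product messages along the chain
fwd = [np.ones(2)]
for i in range(n - 1):
    m = np.array([sum(fwd[-1][a] * phi(i, states[a]) * psi(i, states[a], t)
                      for a in range(2)) for t in states])
    fwd.append(m / m.sum())
bwd = [np.ones(2)]
for i in range(n - 2, -1, -1):
    m = np.array([sum(bwd[-1][b] * phi(i + 1, states[b]) * psi(i, s, states[b])
                      for b in range(2)) for s in states])
    bwd.append(m / m.sum())
bwd = bwd[::-1]  # bwd[i] is now the message into spin i from the right

# BP marginal at spin i: (left message) * (local factor) * (right message)
bp = np.array([fwd[i] * np.array([phi(i, s) for s in states]) * bwd[i]
               for i in range(n)])
bp /= bp.sum(axis=1, keepdims=True)

# brute-force marginals by enumerating all 2^n configurations
Z, marg = 0.0, np.zeros((n, 2))
for cfg in product([-1.0, 1.0], repeat=n):
    w = np.exp(sum(J[i] * cfg[i] * cfg[i + 1] for i in range(n - 1))
               + sum(h[i] * cfg[i] for i in range(n)))
    Z += w
    for i in range(n):
        marg[i, int(cfg[i] > 0)] += w
marg /= Z
print(np.max(np.abs(bp - marg)))  # ~0: BP is exact on a tree
```

The distributed character of the scheme is visible here: each message depends only on the neighboring message and local factors, which is what makes BP attractive as a decentralized computation on large graphs (where, on loopy graphs, it becomes approximate).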
no code implementations • 1 Oct 2021 • Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, Rosalba Pacelli, Gabriele Perugini, Riccardo Zecchina
Current deep neural networks are highly overparameterized (up to billions of connection weights) and nonlinear.
no code implementations • 2 Jul 2021 • Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, Gabriele Perugini, Riccardo Zecchina
The success of deep learning has revealed the application potential of neural networks across the sciences and opened up fundamental theoretical problems.
1 code implementation • ICLR 2021 • Fabrizio Pittorino, Carlo Lucibello, Christoph Feinauer, Gabriele Perugini, Carlo Baldassi, Elizaveta Demyanenko, Riccardo Zecchina
The properties of flat minima in the empirical risk landscape of neural networks have been debated for some time.