no code implementations • 18 May 2023 • Brandon Livio Annesi, Clarissa Lauditi, Carlo Lucibello, Enrico M. Malatesta, Gabriele Perugini, Fabrizio Pittorino, Luca Saglietti
Empirical studies on the landscape of neural networks have shown that low-energy configurations are often found in complex connected structures, where zero-energy paths between pairs of distant solutions can be constructed.
no code implementations • 29 Mar 2023 • Matteo Negri, Clarissa Lauditi, Gabriele Perugini, Carlo Lucibello, Enrico Malatesta
The Hopfield model is a paradigmatic model of neural networks that has been analyzed for many decades in the statistical physics, neuroscience, and machine learning communities.
no code implementations • 6 Feb 2023 • Miguel Ibáñez-Berganza, Carlo Lucibello, Francesca Santucci, Tommaso Gili, Andrea Gabrielli
We observe that the so-called Optimal Rotationally Invariant Estimator, based on Random Matrix Theory, leads to a significantly lower distance from the true precision matrix in synthetic data and a higher test likelihood in natural fMRI data.
no code implementations • 22 Aug 2022 • Miguel Ibáñez-Berganza, Carlo Lucibello, Luca Mariani, Giovanni Pezzulo
Processing faces accurately and efficiently is a key capability of humans and other animals that engage in sophisticated social tasks.
no code implementations • 27 Oct 2021 • Carlo Lucibello, Fabrizio Pittorino, Gabriele Perugini, Riccardo Zecchina
Message-passing algorithms based on the Belief Propagation (BP) equations constitute a well-known distributed computational scheme.
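To make the scheme concrete, here is a minimal sum-product Belief Propagation sketch on a toy setup (a chain of binary spins with pairwise couplings and local fields); this is an illustrative assumption, not the algorithm studied in the paper.

```python
import numpy as np

# Sum-product BP on a chain of N spins s_i in {-1,+1}, with energy
# E(s) = -sum_i J[i] s_i s_{i+1} - sum_i h[i] s_i (toy model, assumed here).
def bp_chain_marginals(J, h, beta=1.0):
    N = len(h)
    s = np.array([-1.0, 1.0])
    # forward messages: m_fwd[i] is the message arriving at site i from the left
    m_fwd = np.ones((N, 2))
    for i in range(1, N):
        w = np.exp(beta * h[i - 1] * s) * m_fwd[i - 1]   # absorb field at site i-1
        kernel = np.exp(beta * J[i - 1] * np.outer(s, s))  # pairwise factor
        m = w @ kernel
        m_fwd[i] = m / m.sum()
    # backward messages: m_bwd[i] arrives at site i from the right
    m_bwd = np.ones((N, 2))
    for i in range(N - 2, -1, -1):
        w = np.exp(beta * h[i + 1] * s) * m_bwd[i + 1]
        kernel = np.exp(beta * J[i] * np.outer(s, s))
        m = kernel @ w
        m_bwd[i] = m / m.sum()
    # single-site marginals: product of incoming messages and the local field
    b = m_fwd * m_bwd * np.exp(beta * np.outer(h, s))
    return b / b.sum(axis=1, keepdims=True)
```

On a chain the messages are exact, so with zero couplings each marginal reduces to the local Boltzmann factor of its field; the same update structure generalizes to arbitrary factor graphs, where BP becomes approximate.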
no code implementations • ICLR Workshop EBM 2021 • Christoph Feinauer, Carlo Lucibello
Pairwise models like the Ising model or the generalized Potts model have found many successful applications in fields like physics, biology, and economics.
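For reference, a pairwise model assigns an energy built only from single-site and two-site terms; a minimal Ising-model sketch (an illustrative toy, not the paper's model) looks like:

```python
import numpy as np

# Energy of a pairwise Ising model, E(s) = -1/2 sum_{ij} J_ij s_i s_j - sum_i h_i s_i,
# for spins s_i in {-1,+1}. J is assumed symmetric with zero diagonal;
# the factor 1/2 avoids double-counting each pair.
def ising_energy(s, J, h):
    return -0.5 * s @ J @ s - h @ s
```

The generalized Potts model replaces the binary spins with variables taking q states and the product `s_i s_j` with a general coupling matrix per pair, but the pairwise structure of the energy is the same.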
1 code implementation • ICLR 2021 • Fabrizio Pittorino, Carlo Lucibello, Christoph Feinauer, Gabriele Perugini, Carlo Baldassi, Elizaveta Demyanenko, Riccardo Zecchina
The properties of flat minima in the empirical risk landscape of neural networks have been debated for some time.
no code implementations • 15 Nov 2019 • Carlo Baldassi, Riccardo Della Vecchia, Carlo Lucibello, Riccardo Zecchina
The geometrical features of the (non-convex) loss landscape of neural network models are crucial in ensuring successful optimization and, most importantly, the capability to generalize well.
no code implementations • 13 May 2019 • Luca Saglietti, Yue M. Lu, Carlo Lucibello
In Generalized Linear Estimation (GLE) problems, we seek to estimate a signal that is observed through a linear transform followed by a component-wise, possibly nonlinear and noisy, channel.
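The generative model behind a GLE problem can be sketched as follows (the dimensions, the Gaussian sensing matrix, and the sign channel are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# GLE observation model: y = phi(A x) + noise, where x is the unknown signal,
# A a known linear transform, and phi a component-wise (possibly nonlinear) channel.
rng = np.random.default_rng(0)
n, m = 50, 100                                   # signal dimension, number of observations
x = rng.standard_normal(n)                       # signal to be estimated
A = rng.standard_normal((m, n)) / np.sqrt(n)     # linear transform
z = A @ x                                        # component-wise channel input
y = np.sign(z) + 0.1 * rng.standard_normal(m)    # nonlinear channel (here: sign) plus noise
```

Choosing `phi` as the identity with Gaussian noise recovers standard linear regression, while `sign` gives a noisy one-bit (perceptron-like) channel.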
no code implementations • ICLR 2020 • George Stamatescu, Federica Gerace, Carlo Lucibello, Ian Fuss, Langford B. White
Moreover, we predict theoretically and confirm numerically, that common weight initialisation schemes used in standard continuous networks, when applied to the mean values of the stochastic binary weights, yield poor training performance.
no code implementations • 26 Oct 2017 • Carlo Baldassi, Federica Gerace, Hilbert J. Kappen, Carlo Lucibello, Luca Saglietti, Enzo Tartaglione, Riccardo Zecchina
Stochasticity and limited precision of synaptic weights in neural network models are key aspects of both biological and hardware modeling of learning processes.
no code implementations • 20 May 2016 • Carlo Baldassi, Christian Borgs, Jennifer Chayes, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina
We define a novel measure, which we call the "robust ensemble" (RE), which suppresses trapping by isolated configurations and amplifies the role of these dense regions.
no code implementations • 12 Feb 2016 • Carlo Baldassi, Federica Gerace, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina
Learning in neural networks poses peculiar challenges when using discretized rather than continuous synaptic states.
no code implementations • 18 Nov 2015 • Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina
We introduce a novel Entropy-driven Monte Carlo (EdMC) strategy to efficiently sample solutions of random Constraint Satisfaction Problems (CSPs).
no code implementations • 18 Sep 2015 • Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina
We also show that the dense regions are surprisingly accessible by simple learning protocols, and that these synaptic configurations are robust to perturbations and generalize better than typical solutions.