Search Results for author: Emanuele Zangrando

Found 3 papers, 1 paper with code

Neural Rank Collapse: Weight Decay and Small Within-Class Variability Yield Low-Rank Bias

no code implementations • 6 Feb 2024 • Emanuele Zangrando, Piero Deidda, Simone Brugiapaglia, Nicola Guglielmi, Francesco Tudisco

Recent work in deep learning has shown strong empirical and theoretical evidence of an implicit low-rank bias: weight matrices in deep networks tend to be approximately low-rank, and removing relatively small singular values, either during training or from available trained models, can significantly reduce model size while maintaining or even improving model performance.
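To make the singular-value removal concrete, here is a minimal sketch of truncated-SVD compression of a single weight matrix. It is an illustration of the general idea described above, not the paper's method; the function name `truncate_singular_values` and the energy threshold of 0.95 are illustrative choices, assuming only NumPy.

```python
import numpy as np

def truncate_singular_values(W, energy=0.95):
    """Replace W by its best low-rank approximation retaining a given
    fraction of the spectral energy (sum of squared singular values).
    The `energy` threshold is an illustrative hyperparameter."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1  # smallest rank reaching the threshold
    # Reassemble only the leading r singular triplets.
    return (U[:, :r] * s[:r]) @ Vt[:r], r

rng = np.random.default_rng(0)
# A synthetic "weight matrix": approximately rank 10 plus small noise.
W = rng.standard_normal((256, 10)) @ rng.standard_normal((10, 256))
W += 0.01 * rng.standard_normal((256, 256))
W_low, r = truncate_singular_values(W)
print(r, np.linalg.norm(W - W_low) / np.linalg.norm(W))
```

Storing the two factors costs (256 + 256) * r numbers instead of 256 * 256, which is the size reduction the abstract refers to when the retained rank r is small.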

Rank-adaptive spectral pruning of convolutional layers during training

no code implementations • 30 May 2023 • Emanuele Zangrando, Steffen Schotthöfer, Gianluca Ceruti, Jonas Kusch, Francesco Tudisco

The computing cost and memory demands of deep learning pipelines have grown rapidly in recent years, and a variety of pruning techniques have consequently been developed to reduce the number of model parameters.

Low-rank lottery tickets: finding efficient low-rank neural networks via matrix differential equations

4 code implementations • 26 May 2022 • Steffen Schotthöfer, Emanuele Zangrando, Jonas Kusch, Gianluca Ceruti, Francesco Tudisco

The main idea is to restrict the weight matrices to a low-rank manifold and to update the low-rank factors rather than the full matrix during training.
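As a rough sketch of that idea, the layer below parameterizes its weight as W = U S Vᵀ and registers only the small factors as trainable parameters, so the full matrix is never materialized. This is an assumed PyTorch illustration of training low-rank factors in general; the class name `LowRankLinear` and the initialization are hypothetical, and the paper's actual method additionally uses a dynamical low-rank integrator with rank adaptation, which is not reproduced here.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Linear layer whose weight is constrained to the rank-r manifold
    as W = U @ S @ V.T; only the factors U, S, V are trained, so the
    full (out_f x in_f) matrix is never stored as a parameter."""
    def __init__(self, in_f, out_f, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_f, rank) / out_f**0.5)
        self.S = nn.Parameter(torch.eye(rank))
        self.V = nn.Parameter(torch.randn(in_f, rank) / in_f**0.5)
        self.bias = nn.Parameter(torch.zeros(out_f))

    def forward(self, x):
        # Apply V^T, then S, then U: cost O((in_f + out_f) * rank)
        # per sample instead of O(in_f * out_f).
        return (x @ self.V) @ self.S.T @ self.U.T + self.bias

layer = LowRankLinear(784, 256, rank=16)
y = layer(torch.randn(32, 784))
print(y.shape)  # torch.Size([32, 256])
```

An optimizer stepping over `layer.parameters()` then updates only the low-rank factors, which is the training regime the abstract describes.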
