no code implementations • 27 Apr 2024 • Victor Quétu, Zhu Liao, Enzo Tartaglione
While deep neural networks are highly effective at solving complex tasks, large pre-trained models are commonly employed even for considerably simpler downstream tasks, which do not necessarily require a large model's complexity.
no code implementations • 24 Apr 2024 • Zhu Liao, Victor Quétu, Van-Tam Nguyen, Enzo Tartaglione
While deep neural networks are highly effective at solving complex tasks, their computational demands can hinder their usefulness in real-time applications and on resource-constrained systems.
no code implementations • 31 Aug 2023 • Victor Quétu, Marta Milovanović
Finding the optimal size of deep learning models is critical for energy-efficient deployment and has broad practical impact.
1 code implementation • 12 Aug 2023 • Zhu Liao, Victor Quétu, Van-Tam Nguyen, Enzo Tartaglione
Pruning is a widely used technique for reducing the size of deep neural networks while maintaining their performance.
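For readers unfamiliar with pruning, the sketch below shows a generic form of unstructured magnitude pruning in PyTorch: weights whose magnitude falls below a sparsity-dependent threshold are zeroed out. This is only an illustrative example of the general technique; it is not the specific method proposed in this paper, and the `magnitude_prune` helper and the chosen sparsity level are assumptions for the example.

```python
# Minimal sketch of unstructured magnitude pruning (generic illustration only;
# not the method introduced in this paper).
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights of every Linear/Conv2d layer."""
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            weight = module.weight.data
            # Threshold chosen so that a `sparsity` fraction of weights is removed.
            threshold = torch.quantile(weight.abs().flatten(), sparsity)
            mask = (weight.abs() > threshold).float()
            module.weight.data.mul_(mask)

# Example usage on a small hypothetical network.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, sparsity=0.5)
```

In practice, pruning is usually followed by fine-tuning so the remaining weights can recover any lost accuracy.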
1 code implementation • 26 Jul 2023 • Victor Quétu, Marta Milovanovic, Enzo Tartaglione
Vision transformers (ViT) have been of broad interest in recent theoretical and empirical works.
1 code implementation • 2 Mar 2023 • Victor Quétu, Enzo Tartaglione
Second, we introduce an entropy measure that provides more insight into the emergence of this phenomenon and enables the use of traditional stopping criteria.
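As background, an entropy measure over a discrete distribution is typically computed as the Shannon entropy H(p) = -Σ_i p_i log p_i. The snippet below is a minimal, generic sketch of that computation; the exact entropy measure introduced in the paper may be defined differently, and the function name and example inputs here are assumptions for illustration.

```python
# Generic Shannon entropy of a discrete distribution (illustrative only; the
# paper's specific entropy measure may differ in its exact definition).
import torch

def shannon_entropy(probs: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """H(p) = -sum_i p_i * log(p_i), computed over the last dimension."""
    probs = probs.clamp_min(eps)  # avoid log(0)
    return -(probs * probs.log()).sum(dim=-1)

# Example: entropy of per-sample class probabilities.
p = torch.softmax(torch.randn(4, 10), dim=-1)
print(shannon_entropy(p))
```

A low entropy indicates a highly concentrated distribution, which is the kind of signal such a measure can expose as a stopping criterion during training.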
no code implementations • 26 Feb 2023 • Victor Quétu, Enzo Tartaglione
Very recently, an unexpected phenomenon, the "double descent", has caught the attention of the deep learning community.