Search Results for author: Victor Quétu

Found 7 papers, 3 papers with code

The Simpler The Better: An Entropy-Based Importance Metric To Reduce Neural Networks' Depth

no code implementations • 27 Apr 2024 • Victor Quétu, Zhu Liao, Enzo Tartaglione

While deep neural networks are highly effective at solving complex tasks, large pre-trained models are commonly employed even for considerably simpler downstream tasks, which do not necessarily require a large model's complexity.

Image Classification

NEPENTHE: Entropy-Based Pruning as a Neural Network Depth's Reducer

no code implementations • 24 Apr 2024 • Zhu Liao, Victor Quétu, Van-Tam Nguyen, Enzo Tartaglione

While deep neural networks are highly effective at solving complex tasks, their computational demands can hinder their usefulness in real-time applications and on resource-limited systems.

The Quest of Finding the Antidote to Sparse Double Descent

no code implementations • 31 Aug 2023 • Victor Quétu, Marta Milovanović

For energy-efficient deployment, finding the optimal size of deep learning models is very important and has a broad impact.

Image Classification

Can Unstructured Pruning Reduce the Depth in Deep Neural Networks?

1 code implementation • 12 Aug 2023 • Zhu Liao, Victor Quétu, Van-Tam Nguyen, Enzo Tartaglione

Pruning is a widely used technique for reducing the size of deep neural networks while maintaining their performance.
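To make the idea concrete, here is a minimal sketch of unstructured magnitude pruning — the generic technique the sentence above refers to, not necessarily the specific method used in this paper. The function name and the NumPy-based formulation are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (illustrative
    unstructured pruning sketch, not the paper's exact method)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 90% of a random weight matrix
W = np.random.randn(64, 64)
W_pruned = magnitude_prune(W, sparsity=0.9)
```

After pruning, roughly 90% of the entries are exactly zero while the largest-magnitude weights, which contribute most to the layer's output, are kept.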

Sparse Double Descent in Vision Transformers: real or phantom threat?

1 code implementation • 26 Jul 2023 • Victor Quétu, Marta Milovanovic, Enzo Tartaglione

Vision transformers (ViT) have been of broad interest in recent theoretical and empirical works.

Inductive Bias

DSD²: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?

1 code implementation • 2 Mar 2023 • Victor Quétu, Enzo Tartaglione

Second, we introduce an entropy measure that provides more insight into the emergence of this phenomenon and enables the use of traditional stopping criteria.
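One common way to build such an entropy measure is to look at how predictably each neuron is "on" or "off" across a batch of post-ReLU activations; the sketch below is a hypothetical illustration of that idea, not the paper's exact formulation. The function name and the binary-entropy definition are assumptions.

```python
import numpy as np

def neuron_state_entropy(activations):
    """Average binary entropy of each neuron's on/off state across a
    batch of post-ReLU activations (hypothetical illustration of an
    entropy-based layer metric, not the paper's exact measure)."""
    p_on = (activations > 0).mean(axis=0)      # fraction of inputs activating each neuron
    p_on = np.clip(p_on, 1e-12, 1 - 1e-12)     # avoid log(0)
    h = -(p_on * np.log2(p_on) + (1 - p_on) * np.log2(1 - p_on))
    return h.mean()                            # in [0, 1] bits per neuron

# Example: a fake batch of post-ReLU activations (256 inputs, 128 neurons)
acts = np.maximum(np.random.randn(256, 128), 0)
score = neuron_state_entropy(acts)
```

A layer whose neurons fire for about half the inputs scores near 1 bit, while a layer whose neurons are always on (or always off) scores near 0 — a natural quantity to monitor as a stopping criterion.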

Can we avoid Double Descent in Deep Neural Networks?

no code implementations • 26 Feb 2023 • Victor Quétu, Enzo Tartaglione

Very recently, an unexpected phenomenon, the "double descent", has caught the attention of the deep learning community.
