no code implementations • 13 Mar 2024 • Siqi Li, Jun Chen, Jingyang Xiang, Chengrui Zhu, Yong Liu
AutoDFP assesses the similarity of channels for each layer and provides this information to the reinforcement learning agent, guiding the pruning and reconstruction process of the network.
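A minimal sketch of how the per-layer channel similarity fed to the agent might be computed, assuming cosine similarity between flattened filter weights (the paper's exact measure and state encoding may differ):

```python
import torch
import torch.nn.functional as F

def channel_similarity(weight: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarity between output channels of a conv layer.

    weight: (C_out, C_in, kH, kW). Returns a (C_out, C_out) matrix that
    could serve as part of the state observed by the pruning agent.
    """
    flat = weight.flatten(start_dim=1)   # (C_out, C_in*kH*kW)
    flat = F.normalize(flat, dim=1)      # unit-norm rows
    return flat @ flat.t()               # cosine similarity matrix

# e.g. a 3x3 conv with 32 input and 64 output channels
sim = channel_similarity(torch.randn(64, 32, 3, 3))
print(sim.shape)  # torch.Size([64, 64])
```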
no code implementations • 17 Dec 2023 • Jingyang Xiang, Zhuangzhi Chen, Jianbiao Mei, Siqi Li, Jun Chen, Yong Liu
In this paper, we propose to mitigate this gap by learning consistent representation for soft filter pruning, dubbed CR-SFP.
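As a rough illustration of what a consistency objective could look like, the sketch below aligns the output distributions of the dense network and its soft-pruned counterpart with a symmetric KL term; this loss is an assumption, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_dense: torch.Tensor,
                     logits_pruned: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between dense and soft-pruned predictions,
    encouraging the two networks to learn consistent representations."""
    p = F.log_softmax(logits_dense, dim=1)
    q = F.log_softmax(logits_pruned, dim=1)
    return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                  + F.kl_div(q, p, log_target=True, reduction="batchmean"))

# hypothetical usage with a batch of 8 samples and 10 classes
loss = consistency_loss(torch.randn(8, 10), torch.randn(8, 10))
```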
1 code implementation • 12 Dec 2023 • Jingyang Xiang, Siqi Li, JunHao Chen, Zhuangzhi Chen, Tianxin Huang, Linpeng Peng, Yong Liu
Meanwhile, a sparsity strategy that gradually increases the percentage of N:M weight blocks is applied, which allows the network to heal from the pruning-induced damage progressively.
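A sketch of such a gradual strategy, assuming a cubic ramp for the fraction of constrained blocks and a standard magnitude-based N:M mask (both are illustrative assumptions; the paper's schedule may differ):

```python
import torch

def nm_block_fraction(step: int, total_steps: int, final: float = 1.0) -> float:
    """Fraction of weight blocks constrained to the N:M pattern at `step`,
    ramping up smoothly so the network can heal progressively."""
    t = min(step / total_steps, 1.0)
    return final * (1.0 - (1.0 - t) ** 3)

def nm_mask(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Keep the n largest-magnitude weights in each consecutive group of m
    (weight.numel() must be divisible by m)."""
    groups = weight.reshape(-1, m)
    idx = groups.abs().topk(n, dim=1).indices
    mask = torch.zeros_like(groups).scatter_(1, idx, 1.0)
    return mask.reshape(weight.shape)
```

During training, `nm_block_fraction` would decide what share of the blocks receive the 2:4 mask at each step, with the remainder left dense until later in the schedule.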
1 code implementation • 10 Oct 2023 • Jingyang Xiang, Siqi Li, Jun Chen, Shipeng Bai, Yukai Ma, Guang Dai, Yong Liu
To overcome them, this paper proposes a novel Soft Uniform Block Pruning (SUBP) approach to train a uniform 1×N sparse structured network from scratch.
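An illustrative magnitude-based 1×N block mask; grouping n consecutive output channels at the same column into one block is an assumption for this sketch, and SUBP's soft, from-scratch training loop is not reproduced here:

```python
import torch

def one_by_n_mask(weight: torch.Tensor, n: int = 4,
                  block_sparsity: float = 0.5) -> torch.Tensor:
    """Magnitude-based 1xN block mask for a conv weight.

    The weight is viewed as (C_out, C_in*kH*kW); n consecutive output
    channels at the same column form one block, and the lowest-L1 blocks
    are pruned. C_out must be divisible by n.
    """
    c_out = weight.shape[0]
    w2d = weight.reshape(c_out, -1)              # (C_out, K)
    blocks = w2d.reshape(c_out // n, n, -1)      # (C_out/n, n, K)
    scores = blocks.abs().sum(dim=1)             # L1 norm of each block
    k = int(scores.numel() * block_sparsity)
    if k == 0:
        return torch.ones_like(weight)
    thresh = scores.flatten().kthvalue(k).values
    keep = (scores > thresh).float()             # (C_out/n, K)
    return keep.unsqueeze(1).expand(-1, n, -1).reshape(weight.shape)
```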
no code implementations • 28 Oct 2021 • Zhuangzhi Chen, Jingyang Xiang, Yao Lu, Qi Xuan, Xiaoniu Yang
In this paper, we study the graph structure of the neural network and propose regular graph based pruning (RGP) to perform one-shot neural network pruning.
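To make the idea concrete, here is a toy construction of a linear-layer mask whose bipartite connectivity graph is regular, i.e. every output unit keeps the same number of input edges; RGP's actual graph generation and pruning criterion are more involved than this sketch:

```python
import torch

def regular_layer_mask(out_features: int, in_features: int,
                       degree: int) -> torch.Tensor:
    """Sparse mask for a linear layer with a regular bipartite graph.

    Every output unit keeps exactly `degree` inputs, spread in a
    circulant pattern so inputs are used evenly. A simple construction
    for illustration only.
    """
    mask = torch.zeros(out_features, in_features)
    stride = max(in_features // degree, 1)
    for i in range(out_features):
        for j in range(degree):
            mask[i, (i + j * stride) % in_features] = 1.0
    return mask

# e.g. keep 16 of 512 inputs per unit (~3% of the connections)
mask = regular_layer_mask(512, 512, degree=16)
```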
no code implementations • 9 Jul 2021 • Zuohui Chen, Renxuan Wang, Jingyang Xiang, Yue Yu, Xin Xia, Shouling Ji, Qi Xuan, Xiaoniu Yang
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial samples, whose detection is crucial for the wide deployment of these models.
no code implementations • ICML Workshop AML 2021 • Zuohui Chen, Renxuan Wang, Yao Lu, Jingyang Xiang, Qi Xuan
Experiments on CIFAR10 and SVHN show that the FLOPs and size of our generated model are only 24.46% and 4.86% of the original model.
no code implementations • 28 Oct 2020 • Zhuangzhi Chen, Hui Cui, Jingyang Xiang, Kunfeng Qiu, Liang Huang, Shilian Zheng, Shichuan Chen, Qi Xuan, Xiaoniu Yang
More interestingly, our proposed models perform extremely well in small-sample learning, when only a small training dataset is provided.