no code implementations • 24 Nov 2023 • Zimian Wei, Hengyue Pan, Lujun Li, Peijie Dong, Zhiliang Tian, Xin Niu, Dongsheng Li
In this paper, for the first time, we investigate how to search in a training-free manner with the help of teacher models and devise an effective Training-free ViT (TVT) search framework.
1 code implementation • ICCV 2023 • Peijie Dong, Lujun Li, Zimian Wei, Xin Niu, Zhiliang Tian, Hengyue Pan
In particular, we devise an elaborate search space involving the existing proxies and perform an evolution search to discover the best-correlated MQ proxy.
no code implementations • 22 May 2023 • Haoqi Zheng, Qihuang Zhong, Liang Ding, Zhiliang Tian, Xin Niu, Dongsheng Li, DaCheng Tao
However, most mixup methods do not account for the varying degree of learning difficulty across training stages and generate new samples with one-hot labels, resulting in model overconfidence.
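As background for this abstract, a minimal sketch of standard mixup interpolation (the baseline the entry critiques): inputs and labels are combined convexly, which naturally yields soft labels rather than one-hot ones. The function name and default hyperparameters here are illustrative, not from the paper.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, num_classes=10, rng=None):
    """Vanilla mixup: convex combination of two samples and their labels.

    The mixing weight lam is drawn from a Beta(alpha, alpha) distribution;
    the resulting label is soft, which helps against overconfidence.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    # Soft label: a convex combination of the two one-hot vectors.
    y = lam * np.eye(num_classes)[y1] + (1.0 - lam) * np.eye(num_classes)[y2]
    return x, y
```

Methods that instead assign a single hard (one-hot) label to the mixed sample lose this calibration benefit, which is the overconfidence issue the abstract points to.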
no code implementations • 24 Jan 2023 • Peijie Dong, Xin Niu, Zhiliang Tian, Lujun Li, Xiaodong Wang, Zimian Wei, Hengyue Pan, Dongsheng Li
Practical networks for edge devices adopt shallow depth and small convolutional kernels to save memory and computational cost, which leads to a restricted receptive field.
1 code implementation • 24 Jan 2023 • Peijie Dong, Xin Niu, Lujun Li, Zhiliang Tian, Xiaodong Wang, Zimian Wei, Hengyue Pan, Dongsheng Li
In this paper, we propose Ranking Distillation one-shot NAS (RD-NAS) to enhance ranking consistency, which utilizes zero-cost proxies as the cheap teacher and adopts the margin ranking loss to distill the ranking knowledge.
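The margin ranking loss mentioned above can be illustrated with a small sketch: for every pair of architectures, the student's score gap must agree with the teacher's (zero-cost proxy) ordering by at least a margin. This is a generic pairwise formulation for illustration, not RD-NAS's exact training objective.

```python
def margin_ranking_loss(s_student, s_teacher, margin=0.1):
    """Pairwise margin ranking loss over architecture scores.

    For each pair (i, j), y = +1 if the teacher ranks i above j, else -1.
    The hinge penalizes the student whenever its score gap violates the
    teacher's ordering by more than the margin.
    """
    loss, n = 0.0, 0
    for i in range(len(s_student)):
        for j in range(i + 1, len(s_student)):
            y = 1.0 if s_teacher[i] > s_teacher[j] else -1.0
            loss += max(0.0, -y * (s_student[i] - s_student[j]) + margin)
            n += 1
    return loss / max(n, 1)
```

When the student's scores already respect the teacher's ordering with gaps larger than the margin, the loss is zero; reversed orderings are penalized in proportion to how badly each pair is violated.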
no code implementations • 28 Dec 2022 • Zimian Wei, Hengyue Pan, Xin Niu, Dongsheng Li
OVO samples sub-nets for both teacher and student networks for better distillation results.
no code implementations • 16 Sep 2022 • Zimian Wei, Hengyue Pan, Lujun Li, Menglong Lu, Xin Niu, Peijie Dong, Dongsheng Li
Vision transformers have shown excellent performance in computer vision tasks.
3 code implementations • 23 Aug 2022 • Ren Yang, Radu Timofte, Qi Zhang, Lin Zhang, Fanglong Liu, Dongliang He, Fu Li, He Zheng, Weihang Yuan, Pavel Ostyakov, Dmitry Vyal, Magauiya Zhussip, Xueyi Zou, Youliang Yan, Lei LI, Jingzhu Tang, Ming Chen, Shijie Zhao, Yu Zhu, Xiaoran Qin, Chenghua Li, Cong Leng, Jian Cheng, Claudio Rota, Marco Buzzelli, Simone Bianco, Raimondo Schettini, Dafeng Zhang, Feiyu Huang, Shizhuo Liu, Xiaobing Wang, Zhezhu Jin, Bingchen Li, Xin Li, Mingxi Li, Ding Liu, Wenbin Zou, Peijie Dong, Tian Ye, Yunchen Zhang, Ming Tan, Xin Niu, Mustafa Ayazoglu, Marcos Conde, Ui-Jin Choi, Zhuang Jia, Tianyu Xu, Yijian Zhang, Mao Ye, Dengyan Luo, Xiaofeng Pan, Liuhan Peng
The homepage of this challenge is at https://github.com/RenYang-home/AIM22_CompressSR.
1 code implementation • 27 Jun 2022 • Peijie Dong, Xin Niu, Lujun Li, Linzhen Xie, Wenbin Zou, Tian Ye, Zimian Wei, Hengyue Pan
In this paper, we present Prior-Guided One-shot NAS (PGONAS) to strengthen the ranking correlation of supernets.
2 code implementations • 14 Apr 2022 • Hengyue Pan, Yixin Chen, Xin Niu, Wenbo Zhou, Dongsheng Li
The key motivation of this research is that, based on the Cross-Correlation Theorem, image convolution can be replaced with straightforward element-wise multiplication in the frequency domain, which significantly reduces the computational complexity.
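The frequency-domain idea can be sketched in a few lines: transforming both operands with the FFT turns (circular) convolution into pointwise multiplication. This is a generic illustration of the theorem, not the paper's actual layer implementation, and it shows the convolution-theorem form (cross-correlation differs only by a complex conjugate on the kernel's spectrum).

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2D convolution computed in the frequency domain.

    By the convolution theorem, FFT(image) * FFT(kernel) equals the FFT of
    their circular convolution, so one element-wise product replaces the
    O(H*W*k*k) spatial sliding-window computation.
    """
    h, w = image.shape
    img_f = np.fft.fft2(image)
    ker_f = np.fft.fft2(kernel, s=(h, w))  # zero-pad kernel to image size
    return np.real(np.fft.ifft2(img_f * ker_f))
```

The element-wise product costs O(HW) per channel (plus the FFTs at O(HW log HW)), versus O(HW k^2) for direct spatial convolution, which is where the savings come from for larger kernels.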
no code implementations • 8 Mar 2022 • Zimian Wei, Hengyue Pan, Lujun Li, Menglong Lu, Xin Niu, Peijie Dong, Dongsheng Li
Neural architecture search (NAS) has brought significant progress in recent image recognition tasks.
no code implementations • 29 May 2020 • Hengyue Pan, Xin Niu, Rongchun Li, Siqi Shen, Yong Dou
Instead, we propose a novel method to encode all background objects in each image by using one fixed-size vector (i.e., an FBE vector).
no code implementations • 28 Aug 2019 • Yanan Wang, Tong Xu, Xin Niu, Chang Tan, Enhong Chen, Hui Xiong
Moreover, based on the temporally-dependent traffic information, we design a Graph Neural Network based model to represent relationships among multiple traffic lights, and the decision for each traffic light is made in a distributed manner via deep Q-learning.
no code implementations • 16 Nov 2018 • Hengyue Pan, Hui Jiang, Xin Niu, Yong Dou
Most previous methods mainly drop features from the input data and hidden layers, as in Dropout, Cutout, and DropBlock.
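Of the feature-dropping baselines named above, Cutout is the simplest to sketch: a random square patch of the input image is zeroed out. This is a minimal illustrative version (function name and defaults are mine, not the cited papers').

```python
import numpy as np

def cutout(image, size=8, rng=None):
    """Cutout-style augmentation: zero a random square patch of the image.

    The patch is centered at a uniformly random pixel and clipped to the
    image boundary, so patches near the border are partially cut off.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    cy, cx = int(rng.integers(h)), int(rng.integers(w))
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = image.copy()  # leave the caller's array untouched
    out[y0:y1, x0:x1] = 0.0
    return out
```

Dropout and DropBlock apply the same "remove features" idea to hidden activations instead of raw pixels, at the unit and contiguous-block level respectively.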