no code implementations • 22 Mar 2022 • Shixiao Fan, Xuan Cheng, Xiaomin Wang, Chun Yang, Pan Deng, Minghui Liu, Jiali Deng, Ming Liu
Recently, researchers have shown increased interest in online knowledge distillation.
no code implementations • 2 Dec 2021 • Tianshu Xie, Xuan Cheng, Minghui Liu, Jiali Deng, Xiaomin Wang, Ming Liu
In this paper, we observe that the reduced image retains relatively complete shape semantics but loses much of its texture information.
no code implementations • 18 Jul 2021 • Tianshu Xie, Xuan Cheng, Xiaomin Wang, Minghui Liu, Jiali Deng, Ming Liu
In this paper, we propose a novel training strategy for convolutional neural networks (CNNs), named Feature Mining, which aims to strengthen the network's learning of local features.
no code implementations • 12 Jun 2021 • Xuan Cheng, Tianshu Xie, Xiaomin Wang, Jiali Deng, Minghui Liu, Ming Liu
Regularization and data augmentation methods are widely used and have become increasingly indispensable in deep learning training.
no code implementations • 8 Jun 2021 • Xuan Cheng, Tianshu Xie, Xiaomin Wang, Jiali Deng, Minghui Liu, Ming Liu
The promising performance of CNNs often overshadows the need to examine whether they are working in the way we actually intend.
no code implementations • 29 Mar 2021 • Tianshu Xie, Minghui Liu, Jiali Deng, Xuan Cheng, Xiaomin Wang, Ming Liu
In convolutional neural networks (CNNs), dropout does not work well because dropped information is not entirely obscured in convolutional layers, where features are spatially correlated.
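The failure mode described above is commonly addressed by dropping whole feature-map channels rather than individual units, so that spatially neighboring activations cannot reconstruct the dropped information. The sketch below illustrates that channel-wise (spatial) dropout idea in NumPy; it is a generic illustration, not the method proposed in this paper.

```python
import numpy as np

def spatial_dropout(x, p=0.2, training=True):
    """Channel-wise (spatial) dropout sketch.

    x: feature maps of shape (batch, channels, height, width).
    Drops entire channels with probability p, so spatially
    correlated neighbors cannot leak the dropped information,
    and rescales kept channels to preserve the expected value.
    """
    if not training or p == 0.0:
        return x
    # One Bernoulli mask entry per (sample, channel), broadcast
    # over the full height x width of each feature map.
    keep = (np.random.rand(x.shape[0], x.shape[1], 1, 1) >= p)
    return x * keep / (1.0 - p)
```

At inference time (`training=False`) the input passes through unchanged, matching the usual inverted-dropout convention.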
no code implementations • 29 Mar 2021 • Xuan Cheng, Tianshu Xie, Xiaomin Wang, Qifeng Weng, Minghui Liu, Jiali Deng, Ming Liu
In this paper, we propose Selective Output Smoothing Regularization, a novel regularization method for training convolutional neural networks (CNNs).
1 code implementation • 9 Mar 2021 • Tianshu Xie, Xuan Cheng, Minghui Liu, Jiali Deng, Xiaomin Wang, Ming Liu
In this paper, we propose a novel data augmentation strategy named Cut-Thumbnail, which aims to improve the shape bias of the network.
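Based on the snippet above and the companion observation that a reduced image keeps shape semantics while losing texture, a Cut-Thumbnail-style augmentation can be sketched as: shrink the image to a thumbnail and paste it over a random patch of the original. The function name, thumbnail size, and nearest-neighbor resize below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cut_thumbnail(image, thumb_size=56):
    """Illustrative Cut-Thumbnail-style augmentation.

    image: array of shape (H, W, C). The image is downsampled to a
    thumb_size x thumb_size thumbnail (keeping shape semantics,
    discarding fine texture) and pasted over a random patch of the
    original image.
    """
    h, w = image.shape[:2]
    # Naive nearest-neighbor downsample via index selection.
    rows = np.arange(thumb_size) * h // thumb_size
    cols = np.arange(thumb_size) * w // thumb_size
    thumb = image[rows][:, cols]
    # Choose a random top-left corner for the paste location.
    y = np.random.randint(0, h - thumb_size + 1)
    x = np.random.randint(0, w - thumb_size + 1)
    out = image.copy()
    out[y:y + thumb_size, x:x + thumb_size] = thumb
    return out
```

In practice such an augmentation would be applied per sample during training, alongside standard transforms like random cropping and flipping.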