Connection Reduction of DenseNet for Image Recognition

2 Aug 2022  ·  Rui-Yang Ju, Jen-Shiun Chiang, Chih-Chia Chen, Yu-Shian Lin ·

Convolutional Neural Networks (CNNs) increase depth by stacking convolutional layers, and deeper network models perform better in image recognition. However, empirical research shows that simply stacking convolutional layers does not make the network train better; skip connections (residual learning) can improve model performance. For the image classification task, models with globally densely connected architectures perform well on large datasets such as ImageNet, but are not suitable for small datasets such as CIFAR-10 and SVHN. Unlike dense connections, we propose two new algorithms for connecting layers. The baseline is a densely connected network, and the networks connected by the two new algorithms are named ShortNet1 and ShortNet2, respectively. Experimental results for image classification on CIFAR-10 and SVHN show that ShortNet1 has a 5% lower test error rate and 25% faster inference time than the baseline, while ShortNet2 speeds up inference time by 40% with a smaller loss in test accuracy. Code and pre-trained models are available at https://github.com/RuiyangJu/Connection_Reduction.
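The densely connected baseline the abstract refers to can be sketched in a few lines: each layer receives the concatenation of all preceding layers' outputs, so a block with L layers has L(L+1)/2 direct connections; the proposed ShortNet algorithms prune some of these connections. The sketch below is a toy, framework-free illustration of the dense pattern only, assuming simple linear-plus-ReLU "layers"; the actual ShortNet1/ShortNet2 connection rules are in the linked repository and are not reproduced here.

```python
import numpy as np

def dense_block(x, num_layers=4, growth=2, seed=0):
    """Toy dense block (DenseNet-style connectivity).

    Each "layer" here is just a random linear map + ReLU, applied to the
    concatenation of ALL previous feature vectors. This illustrates why a
    dense block with L layers has L*(L+1)/2 direct connections -- the
    pattern the paper's ShortNet variants reduce. Hypothetical sketch,
    not the paper's implementation.
    """
    rng = np.random.default_rng(seed)
    feats = [x]                               # every earlier output is kept
    for _ in range(num_layers):
        inp = np.concatenate(feats)           # dense connection: reuse everything
        w = rng.standard_normal((growth, inp.size))
        feats.append(np.maximum(w @ inp, 0))  # stand-in for a conv layer
    return np.concatenate(feats)              # final output also concatenates all

out = dense_block(np.ones(4))
# With input size 4, growth 2, and 4 layers, the output has 4 + 4*2 = 12 features.
```

Because each new layer sees every earlier feature map, parameter count and inference time grow with the square of the block depth, which is the cost the connection-reduction schemes target.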


Results from the Paper


Ranked #3 on Image Classification on SVHN (Percentage correct metric)

| Task                 | Dataset  | Model        | Metric Name        | Metric Value | Global Rank |
|----------------------|----------|--------------|--------------------|--------------|-------------|
| Image Classification | CIFAR-10 | ShortNet1-53 | Accuracy           | 86.64        | #9          |
|                      |          |              | PARAMS             | 2.16M        | #188        |
|                      |          |              | Top-1 Accuracy     | 86.64        | #33         |
|                      |          |              | Parameters         | 2.16M        | #14         |
| Image Classification | SVHN     | ShortNet2-43 | Percentage correct | 94.52        | #3          |
