Aggregated Residual Transformations for Deep Neural Networks

We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.
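The core idea above — a block that aggregates C transformations of the same topology, where C is the cardinality — can be sketched with plain linear maps in place of convolutions. The snippet below is an illustrative toy (all dimensions and weights are made up, not from the paper): each branch projects the input to a low dimension d, transforms it, and projects back, and the C branch outputs are summed onto an identity shortcut. It also shows why the aggregation can be implemented as a single "grouped" transformation with a block-diagonal middle matrix, which is how ResNeXt blocks are commonly realized with grouped convolutions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, C = 16, 4, 8  # input width, per-branch bottleneck width, cardinality

# Per-branch parameters (hypothetical values, for illustration only):
# reduce to d, transform within d, expand back to D.
W_in  = [rng.standard_normal((d, D)) for _ in range(C)]
W_mid = [rng.standard_normal((d, d)) for _ in range(C)]
W_out = [rng.standard_normal((D, d)) for _ in range(C)]

def aggregated_block(x):
    """Split-transform-merge: y = x + sum_i T_i(x), with identity shortcut."""
    y = sum(Wo @ (Wm @ (Wi @ x)) for Wi, Wm, Wo in zip(W_in, W_mid, W_out))
    return x + y

# Equivalent "grouped" view: stack the branch projections into one wide map
# and make the middle transform block-diagonal. Same computation, one kernel.
Win_full  = np.vstack(W_in)                   # (C*d, D)
Wmid_full = np.zeros((C * d, C * d))
for i, Wm in enumerate(W_mid):
    Wmid_full[i*d:(i+1)*d, i*d:(i+1)*d] = Wm  # block-diagonal = "grouped"
Wout_full = np.hstack(W_out)                  # (D, C*d)

def grouped_block(x):
    return x + Wout_full @ (Wmid_full @ (Win_full @ x))

x = rng.standard_normal(D)
assert np.allclose(aggregated_block(x), grouped_block(x))
```

Increasing C while shrinking d keeps the parameter count roughly fixed, which is the "maintained complexity" comparison the abstract refers to.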

PDF Abstract · CVPR 2017

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Image Classification | GasHisSDB | ResNeXt-50-32x4d | Accuracy | 98.59 | # 3 |
| | | | Precision | 99.94 | # 3 |
| | | | F1-Score | 99.25 | # 3 |
| Image Classification | ImageNet | ResNeXt-101 64x4 | Top 1 Accuracy | 80.9% | # 618 |
| | | | Number of params | 83.6M | # 812 |
| | | | GFLOPs | 31.5 | # 397 |
| Domain Generalization | VizWiz-Classification | ResNeXt-101 32x16d | Accuracy - All Images | 51.7 | # 3 |
| | | | Accuracy - Corrupted Images | 48.1 | # 2 |
| | | | Accuracy - Clean Images | 54.8 | # 3 |

Methods