Self-labelling via simultaneous clustering and representation learning

Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks. However, doing so naively leads to ill-posed learning problems with degenerate solutions. In this paper, we propose a novel and principled learning formulation that addresses these issues. The method is obtained by maximizing the information between labels and input data indices. We show that this criterion extends standard cross-entropy minimization to an optimal transport problem, which we solve efficiently for millions of input images and thousands of labels using a fast variant of the Sinkhorn-Knopp algorithm. The resulting method is able to self-label visual data so as to train highly competitive image representations without manual labels. Our method achieves state-of-the-art representation learning performance for AlexNet and ResNet-50 on SVHN, CIFAR-10, CIFAR-100 and ImageNet, and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline. Code and models are available.
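
To make the optimal-transport step concrete, below is a minimal NumPy sketch of the balanced label assignment the abstract describes: the network's softmax outputs are rebalanced by Sinkhorn-Knopp row/column scaling so that every image receives a label and every cluster has (approximately) equal size. The function name `sinkhorn_labels`, the toy inputs, and the fixed `lam`/`n_iters` values are illustrative assumptions, not the paper's actual implementation, which uses a fast GPU-friendly variant and alternates this assignment step with ordinary cross-entropy training of the network.

```python
import numpy as np

def sinkhorn_labels(P, lam=25.0, n_iters=100, eps=1e-12):
    """Balanced soft pseudo-labels via Sinkhorn-Knopp (hypothetical sketch).

    P       : (N, K) softmax probabilities for N images over K clusters.
    lam     : sharpness of the entropic regularization (larger = harder labels).
    Returns : (N, K) assignment matrix Q whose rows sum to 1/N and whose
              columns sum to 1/K, i.e. clusters of roughly equal size.
    """
    N, K = P.shape
    # Rescale each row by its max before exponentiating to limit underflow;
    # a production version would work entirely in log space.
    Q = (P / P.max(axis=1, keepdims=True)) ** lam + eps
    r = np.full(N, 1.0 / N)  # row marginals: every image is labelled exactly once
    c = np.full(K, 1.0 / K)  # column marginals: equally sized clusters
    for _ in range(n_iters):
        Q *= (r / Q.sum(axis=1))[:, None]  # scale rows onto the row constraints
        Q *= (c / Q.sum(axis=0))[None, :]  # scale columns onto the column constraints
    return Q

# Toy usage: Dirichlet samples stand in for the current network's outputs.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(10), size=1000)
Q = sinkhorn_labels(P)
labels = Q.argmax(axis=1)  # hard pseudo-labels for the next training round
```

The alternating row/column scaling is the classic Sinkhorn-Knopp matrix-balancing iteration; the equal-size marginal constraints are what rule out the degenerate solution in which all images collapse into a single cluster.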

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Clustering | ImageNet | SeLa | NMI | 66.4 | # 10 |
| Image Clustering | ImageNet | SeLa | Accuracy | - | # 11 |
| Self-Supervised Image Classification | ImageNet | SeLa | Top 1 Accuracy | 61.5% | # 112 |
| Self-Supervised Image Classification | ImageNet | SeLa | Top 5 Accuracy | 84.0% | # 30 |
| Self-Supervised Image Classification | ImageNet | SeLa (AlexNet) | Top 1 Accuracy | 48.4% | # 123 |
| Contrastive Learning | imagenet-1k | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | # 8 |
