DINO as a von Mises-Fisher mixture model

Self-distillation methods using Siamese networks are popular for self-supervised pre-training. DINO is one such method, based on a cross-entropy loss between $K$-dimensional probability vectors obtained by applying a softmax function to the dot product between representations and learnt prototypes. Because the learned representations are $L^2$-normalized, we show that DINO and its derivatives, such as iBOT, can be interpreted as a mixture model of von Mises-Fisher components. Under this interpretation, DINO assumes equal precision for all components when the prototypes are also $L^2$-normalized. Using this insight, we propose DINO-vMF, which adds the appropriate normalization constants when computing the cluster assignment probabilities. Unlike DINO, DINO-vMF remains stable even for the larger ViT-Base model with unnormalized prototypes. We show that the added flexibility of the mixture model yields better image representations: the DINO-vMF pre-trained model consistently outperforms DINO on a range of downstream tasks. We obtain similar improvements for iBOT-vMF over iBOT, demonstrating that the proposed modification is also relevant for other methods derived from DINO.
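To make the vMF-mixture view concrete, the sketch below computes cluster-assignment probabilities from $L^2$-normalized representations and unnormalized prototypes, treating each prototype's norm as the concentration of its component and adding the corresponding log normalization constant to the logits. This is a minimal illustration of the idea only, not the paper's implementation: the function names are hypothetical, mixing weights are assumed uniform, and details of the actual DINO-vMF training objective (temperature scaling, centering, and the teacher-student setup) are omitted.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.special import ive  # exponentially scaled modified Bessel function of the first kind


def log_vmf_normalizer(kappa, d):
    """log C_d(kappa) for a von Mises-Fisher distribution on the unit sphere in R^d,
    where C_d(kappa) = kappa^(d/2 - 1) / ((2*pi)^(d/2) * I_{d/2-1}(kappa)).
    Uses the scaled Bessel function: log I_v(kappa) = log(ive(v, kappa)) + kappa."""
    v = d / 2.0 - 1.0
    kappa = np.asarray(kappa, dtype=np.float64)
    log_bessel = np.log(ive(v, kappa)) + kappa
    return v * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi) - log_bessel


def vmf_mixture_probs(z, prototypes):
    """Posterior cluster-assignment probabilities under a vMF mixture with uniform weights.

    z:          (B, D) L2-normalized representations
    prototypes: (K, D) unnormalized prototypes; writing w_k = kappa_k * mu_k, the norm
                ||w_k|| acts as the per-component concentration (assumption of this sketch)
    """
    d = z.shape[1]
    kappa = prototypes.norm(dim=1)  # (K,) per-component concentrations
    log_c = torch.as_tensor(
        log_vmf_normalizer(kappa.detach().cpu().numpy(), d),
        dtype=z.dtype, device=z.device,
    )
    # The log-density of component k at z is log C_d(kappa_k) + w_k^T z; a softmax over k
    # turns these into responsibilities. The additive log C_d terms are the
    # "appropriate normalization constants" that a plain softmax over dot products drops.
    logits = z @ prototypes.t() + log_c
    return F.softmax(logits, dim=-1)


# Toy usage: 8 representations of dimension 64, 16 prototypes.
z = F.normalize(torch.randn(8, 64), dim=1)
prototypes = torch.randn(16, 64)
probs = vmf_mixture_probs(z, prototypes)  # (8, 16), each row sums to 1
```

In this view, the only change relative to the plain softmax over dot products is the additive $\log C_d(\lVert w_k \rVert)$ term, which lets components with different concentrations compete on an equal footing.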

PDF Abstract · ICLR 2023

Tasks
Self-Supervised Image Classification

Datasets
ImageNet

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Self-Supervised Image Classification | ImageNet | DINO-vMF (ViT-S/16) | Top 1 Accuracy | 77.0% | #53 |
| Self-Supervised Image Classification | ImageNet | DINO-vMF (ViT-S/16) | Number of Params | 21M | #77 |
| Self-Supervised Image Classification | ImageNet | DINO-vMF (ViT-B/16) | Top 1 Accuracy | 78.8% | #40 |
| Self-Supervised Image Classification | ImageNet | DINO-vMF (ViT-B/16) | Number of Params | 85M | #38 |
| Self-Supervised Image Classification | ImageNet | iBOT-vMF (ViT-B/16) | Top 1 Accuracy | 80.3% | #24 |
| Self-Supervised Image Classification | ImageNet | iBOT-vMF (ViT-B/16) | Number of Params | 85M | #38 |
