
Take More Positives: An Empirical Study of Contrastive Learning in Unsupervised Person Re-Identification

Unsupervised person re-identification (re-ID) aims to close the performance gap to supervised methods. These methods build reliable relationships between data points while learning representations. However, we empirically show that their success stems not only from their label generation mechanisms but also from their unexplored designs. By studying two unsupervised person re-ID methods in a cross-method way, we point out that a hard negative problem is handled implicitly by their designs of data augmentation and PK sampler, respectively. In this paper, we find another simple solution to this problem, i.e., taking more positives during training, while generating pseudo-labels and updating the model in an iterative manner. Based on our findings, we propose a contrastive learning method without a memory bank for unsupervised person re-ID. Our method works well on benchmark datasets and outperforms state-of-the-art methods. Code will be made available.
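The abstract does not spell out the loss, but a batch-level contrastive objective that treats every sample sharing a pseudo-label as a positive (and needs no memory bank) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, temperature value, and handling of anchors without positives are all assumptions.

```python
import torch
import torch.nn.functional as F


def multi_positive_contrastive_loss(features, pseudo_labels, temperature=0.07):
    """Sketch of a multi-positive batch contrastive loss (no memory bank).

    features:      (N, D) embeddings of the current mini-batch.
    pseudo_labels: (N,)   cluster assignments from the label-generation step.
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature           # (N, N) similarity matrix

    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)
    pos_mask = pos_mask & ~self_mask                       # positives: same pseudo-label, not self

    # Softmax over all other samples in the batch only (no external memory).
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-likelihood over each anchor's positives; anchors without
    # positives in the batch contribute zero loss (assumption for this sketch).
    num_pos = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / num_pos
    return loss.mean()
```

Compared with a single-positive InfoNCE loss, pulling in all same-pseudo-label samples of the batch increases the number of positives per anchor, which is one way to read "taking more positives" without relying on a memory bank.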
