Fine-Grained Re-Identification

26 Nov 2020 · Priyank Pathak

Research on re-identification (ReID) is gaining momentum in computer vision due to its many use cases and zero-shot learning nature. This paper proposes a computationally efficient fine-grained ReID model, FGReID, which is among the first models to unify image and video ReID while keeping the number of training parameters minimal. FGReID takes advantage of video-based pre-training and spatial feature attention to improve performance on both video and image ReID tasks. FGReID achieves state-of-the-art (SOTA) results on the MARS, iLIDS-VID, and PRID-2011 video person ReID benchmarks. Eliminating temporal pooling yields an image ReID model that surpasses SOTA on the CUHK01 and Market1501 image person ReID benchmarks. FGReID also achieves near-SOTA performance on the vehicle ReID dataset VeRi, demonstrating its ability to generalize. Additionally, we conduct an ablation study analyzing the key features influencing model performance on ReID tasks. Finally, we discuss the moral dilemmas related to ReID tasks, including the potential for misuse. Code for this work is publicly available at https://github.com/ppriyank/Fine-grained-ReIdentification.
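To illustrate the idea of unifying video and image ReID by making temporal pooling optional, the sketch below shows a generic spatial-attention head in PyTorch. This is not the authors' implementation; the module name, shapes, and hyperparameters (e.g. feat_dim) are assumptions for illustration. The same head produces a clip-level descriptor when temporal pooling is enabled (video ReID) and a frame-level descriptor when it is disabled (image ReID, T = 1).

```python
# Hypothetical sketch, not the FGReID code: a spatial-attention ReID head
# that works on video clips (B, T, C, H, W) or single images (T = 1).
import torch
import torch.nn as nn


class SpatialAttentionHead(nn.Module):
    def __init__(self, in_channels: int, feat_dim: int = 512):
        super().__init__()
        # 1x1 conv produces one attention logit per spatial location
        self.attn = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.embed = nn.Linear(in_channels, feat_dim)

    def forward(self, feats: torch.Tensor, temporal_pool: bool = True) -> torch.Tensor:
        # feats: backbone feature maps of shape (B, T, C, H, W); T = 1 for images
        b, t, c, h, w = feats.shape
        x = feats.reshape(b * t, c, h, w)
        # Softmax over the H*W grid gives per-frame spatial attention weights
        weights = torch.softmax(self.attn(x).reshape(b * t, h * w), dim=-1)
        # Attention-weighted sum over spatial locations -> one vector per frame
        pooled = (x.reshape(b * t, c, h * w) * weights.unsqueeze(1)).sum(dim=-1)
        frame_emb = self.embed(pooled).reshape(b, t, -1)
        if temporal_pool:
            return frame_emb.mean(dim=1)   # clip-level descriptor (video ReID)
        return frame_emb.squeeze(1)        # frame-level descriptor (image ReID)
```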

Task                       Dataset    Model    Metric    Value   Global Rank
Person Re-Identification   iLIDS-VID  FGReID   Rank-1    91.5    #3
Person Re-Identification   iLIDS-VID  FGReID   Rank-5    99.2    #2
Person Re-Identification   iLIDS-VID  FGReID   Rank-10   99.8    #1
Person Re-Identification   iLIDS-VID  FGReID   Rank-20   100     #1
Person Re-Identification   MARS       FGReID   mAP       86.2    #4
Person Re-Identification   MARS       FGReID   Rank-1    89.6    #6
Person Re-Identification   MARS       FGReID   Rank-5    97.0    #2
Person Re-Identification   MARS       FGReID   Rank-20   98.8    #1
Person Re-Identification   PRID2011   FGReID   Rank-1    96.1    #2
Person Re-Identification   PRID2011   FGReID   Rank-5    99.1    #2
Person Re-Identification   PRID2011   FGReID   Rank-10   99.9    #1
Person Re-Identification   PRID2011   FGReID   Rank-20   100     #1
