Auto-view contrastive learning for few-shot image recognition

1 Jan 2021  ·  Xu Luo, Yuxuan Chen, Liangjian Wen, Lili Pan, Zenglin Xu

Few-shot learning aims to recognize new classes from only a few annotated instances per category. Recently, metric-based meta-learning approaches have shown superior performance in tackling few-shot learning problems. Despite their success, existing metric-based few-shot approaches often fail to push fine-grained sub-categories apart in the embedding space when no fine-grained labels are available. This can lead to poor generalization to fine-grained sub-categories and hinder model interpretation. To alleviate this problem, we introduce a contrastive loss into few-shot classification to learn the latent fine-grained structure of the embedding space. Furthermore, to overcome the drawback of the random image transformations used in current contrastive learning, which can produce noisy and inaccurate image pairs (i.e., views), we develop a learning-to-learn algorithm that automatically generates different views of the same image. Extensive experiments on standard few-shot learning benchmarks and on few-shot fine-grained image classification demonstrate the superiority of our method.
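
For intuition, below is a minimal sketch of a generic NT-Xent/InfoNCE-style contrastive loss over paired view embeddings, the kind of objective the abstract refers to. It is not the paper's exact formulation, and it does not include the proposed learned view generation; the function name, temperature value, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.5):
    """Generic NT-Xent-style contrastive loss (illustrative sketch).

    z1[i] and z2[i] are embeddings of two views of the same image;
    all other pairs in the batch serve as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                    # (2N, d)
    sim = z @ z.t() / temperature                     # scaled cosine similarities
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))        # exclude self-similarity
    # the positive for row i is the other view of the same image
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage sketch: embeddings of two augmented views produced by a few-shot backbone.
z1 = torch.randn(32, 128)
z2 = torch.randn(32, 128)
loss = contrastive_loss(z1, z2)
```

In the paper's setting, this kind of auxiliary objective would be combined with the metric-based few-shot classification loss, with the two views produced by the learned view-generation procedure rather than by fixed random transformations.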
