Learning a Mixture of Granularity-Specific Experts for Fine-Grained Categorization

We aim to divide the problem space of fine-grained recognition into specific regions. To achieve this, we develop a unified framework based on a mixture of experts. Because the data available for fine-grained recognition is limited, it is not feasible to learn diverse experts through a data-division strategy. To tackle this problem, we promote diversity among the experts by combining a gradually-enhanced expert learning strategy with a Kullback-Leibler (KL) divergence based constraint. The strategy trains each new expert on the dataset with prior knowledge from the former experts and adds it to the model sequentially, while the constraint forces the experts to produce diverse prediction distributions. Together, these drive the experts to learn the task from different aspects, making each one specialized in a different subspace of the problem. Experiments show that the resulting model improves classification performance and achieves state-of-the-art results on several fine-grained benchmark datasets.
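To make the diversity constraint concrete, the sketch below shows one plausible form of a KL divergence based penalty between expert prediction distributions, written in PyTorch. The abstract does not give the exact formulation, so the negated-pairwise-KL form, the `detach` on the other expert's distribution, and the `lambda_div` weighting are illustrative assumptions, not the authors' method.

```python
import torch.nn.functional as F

def kl_diversity_penalty(expert_logits):
    """Penalty that is low when experts disagree.

    expert_logits: list of [batch, num_classes] tensors, one per expert.
    Minimizing the returned scalar maximizes the average pairwise KL
    divergence between expert prediction distributions, pushing the
    experts toward diverse predictions. This negated-KL form is an
    assumption for illustration, not the paper's exact loss.
    """
    penalty = expert_logits[0].new_zeros(())
    num_pairs = 0
    for i in range(len(expert_logits)):
        for j in range(len(expert_logits)):
            if i == j:
                continue
            log_p = F.log_softmax(expert_logits[i], dim=1)
            # Treat the other expert's distribution as a fixed target.
            q = F.softmax(expert_logits[j], dim=1).detach()
            # F.kl_div(log_p, q) computes KL(q || p).
            penalty = penalty - F.kl_div(log_p, q, reduction="batchmean")
            num_pairs += 1
    return penalty / max(num_pairs, 1)

# Hypothetical training step: `experts`, `features`, `labels`, and
# `lambda_div` are placeholder names, not from the paper.
# logits = [expert(features) for expert in experts]
# loss = sum(F.cross_entropy(l, labels) for l in logits)
# loss = loss + lambda_div * kl_diversity_penalty(logits)
```

Under this reading, the gradually-enhanced strategy would amount to freezing the already-trained experts and optimizing only the newest one against the combined classification-plus-diversity objective.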


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
| --- | --- | --- | --- | --- | --- | --- |
| Fine-Grained Image Classification | CUB-200-2011 | MGE-CNN | Accuracy | 89.4% | #34 | |
| Fine-Grained Image Classification | NABirds | MGE-CNN | Accuracy | 88.6% | #14 | Yes |
