Improving Prototypical Part Networks with Reward Reweighing, Reselection, and Retraining

8 Jul 2023 · Robin Netzorg, Jiaxun Li, Bin Yu

In recent years, work has gone into developing deep interpretable methods for image classification that clearly attribute a model's output to specific features of the data. One such method is the Prototypical Part Network (ProtoPNet), which attempts to classify images based on meaningful parts of the input. While this method yields interpretable classifications, it often learns to classify from spurious or inconsistent parts of the image. Hoping to remedy this, we take inspiration from recent developments in Reinforcement Learning with Human Feedback (RLHF) to fine-tune these prototypes. By collecting human annotations of prototype quality on a 1-5 scale on the CUB-200-2011 dataset, we construct a reward model that learns human preferences and identifies non-spurious prototypes. In place of a full RL update, we propose the Reweighed, Reselected, and Retrained Prototypical Part Network (R3-ProtoPNet), which adds three additional steps to the ProtoPNet training loop. The first two steps are reward-based reweighting and reselection, which align prototypes with human feedback. The final step is retraining to realign the model's features with the updated prototypes. We find that R3-ProtoPNet improves the overall meaningfulness of the prototypes and maintains or improves individual model performance. When multiple trained R3-ProtoPNets are incorporated into an ensemble, we find increases in both interpretability and predictive performance.
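No implementation has been released, but the reward-based reweighting and reselection steps can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: `reward_model` is assumed to score an image together with a prototype's activation map (approximating the 1-5 human ratings), and the names `push_forward`, `prototype_vectors`, `last_layer`, and `reselect_threshold` are stand-ins for whatever the actual implementation uses.

```python
import torch

@torch.no_grad()
def reweigh_and_reselect(protopnet, reward_model, loader, reselect_threshold=0.3):
    """Hypothetical sketch: estimate a per-prototype reward, down-weight
    low-reward prototypes, and flag the worst for reselection."""
    num_protos = protopnet.prototype_vectors.shape[0]
    rewards = torch.zeros(num_protos)
    counts = torch.zeros(num_protos)

    for images, _ in loader:
        # Assumed helper returning activation maps of shape
        # (batch, num_prototypes, H, W), one map per prototype per image.
        act_maps = protopnet.push_forward(images)
        for j in range(num_protos):
            # Assumed reward model: scores each image/activation-map pair
            # with a scalar approximating the human 1-5 ratings.
            rewards[j] += reward_model(images, act_maps[:, j]).sum()
            counts[j] += images.shape[0]

    avg_reward = rewards / counts.clamp(min=1)

    # Reweighting: scale each prototype's contribution to the class logits
    # by its estimated reward, down-weighting spurious prototypes.
    protopnet.last_layer.weight.data *= avg_reward.unsqueeze(0)

    # Reselection: return indices of low-reward prototypes to be
    # re-initialized from higher-reward image patches (handled elsewhere).
    return (avg_reward < reselect_threshold).nonzero(as_tuple=True)[0]
```

Under this reading, scaling the final-layer weights by average reward suppresses low-quality prototypes, while the returned indices mark prototypes to replace before the retraining step realigns the backbone features with the updated prototypes.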

