no code implementations • 25 Oct 2023 • Romain Xu-Darme, Jenny Benois-Pineau, Romain Giot, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset, Alexey Zhukov
In the field of Explainable AI, multiple evaluation metrics have been proposed to assess the quality of explanation methods w.r.t.
no code implementations • 25 Oct 2023 • Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset
In this work, we perform an analysis of the visualisation methods implemented in ProtoPNet and ProtoTree, two self-explaining visual classifiers based on prototypes.
no code implementations • 20 Jan 2023 • Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset
In this work, we perform an in-depth analysis of the visualisation methods implemented in two popular self-explaining models for visual classification based on prototypes: ProtoPNet and ProtoTree.
no code implementations • 27 Jun 2022 • Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset
We apply our method to two public fine-grained datasets (Caltech-UCSD Birds 200 and Stanford Cars) and show that our detectors can consistently highlight parts of the object while providing a good measure of confidence in their prediction.
2 code implementations • ICLR 2018 • Anuvabh Dutt, Denis Pellerin, Georges Quénot
We refer to this branched architecture as "coupled ensembles".