no code implementations • 25 Oct 2023 • Romain Xu-Darme, Jenny Benois-Pineau, Romain Giot, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset, Alexey Zhukov
In the field of Explainable AI, multiple evaluation metrics have been proposed in order to assess the quality of explanation methods w.r.t.
no code implementations • 25 Oct 2023 • Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset
In this work, we perform an analysis of the visualisation methods implemented in ProtoPNet and ProtoTree, two self-explaining visual classifiers based on prototypes.
no code implementations • 24 Oct 2023 • Romain Xu-Darme, Julien Girard-Satabin, Darryl Hond, Gabriele Incorvaia, Zakaria Chihani
In this work, we propose CODE, an extension of existing work from the field of explainable AI that identifies class-specific recurring patterns to build a robust Out-of-Distribution (OoD) detection method for visual classifiers.
Out-of-Distribution (OoD) Detection
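The entry above describes detecting OoD inputs by matching samples against class-specific recurring patterns. As a rough illustration only (the function names, cosine scoring, and pattern representation here are assumptions for the sketch, not the CODE paper's actual method), one could score a feature vector by its best similarity to any known class pattern:

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def ood_score(features, class_patterns):
    # Hypothetical scoring rule: a sample that matches no known
    # class-specific pattern gets a high score (likely OoD).
    return 1.0 - max(cosine(features, p) for p in class_patterns)

# Toy usage: two class patterns in a 2-D feature space.
patterns = [[1.0, 0.0], [0.0, 1.0]]
in_dist = ood_score([0.9, 0.1], patterns)   # close to a known pattern
far_ood = ood_score([-1.0, -1.0], patterns) # matches neither pattern
assert in_dist < far_ood
```

A real detector would extract `features` from a trained visual classifier and learn the patterns per class; the threshold separating in- from out-of-distribution would be calibrated on held-out data.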
no code implementations • 24 Jan 2023 • Romain Xu-Darme, Julien Girard-Satabin, Darryl Hond, Gabriele Incorvaia, Zakaria Chihani
Out-of-distribution (OoD) detection for data-based programs is a goal of paramount importance.
Out-of-Distribution (OoD) Detection
no code implementations • 20 Jan 2023 • Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset
In this work, we perform an in-depth analysis of the visualisation methods implemented in two popular self-explaining models for visual classification based on prototypes: ProtoPNet and ProtoTree.
no code implementations • 27 Jun 2022 • Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset
We apply our method to two public fine-grained datasets (Caltech-UCSD Birds 200 and Stanford Cars) and show that our detectors can consistently highlight parts of the object while providing a good measure of confidence in their predictions.