no code implementations • 15 Feb 2024 • Chris Hamblin, Thomas Fel, Srijani Saha, Talia Konkle, George Alvarez
Most research has primarily centered around attribution methods, which provide explanations in the form of heatmaps, showing where the model directs its attention for a given feature.
no code implementations • 24 Oct 2023 • Katherine L. Hermann, Hossein Mobahi, Thomas Fel, Michael C. Mozer
Deep-learning models can extract a rich assortment of features from data.
no code implementations • 18 Jul 2023 • Sabine Muzellec, Thomas Fel, Victor Boutin, Léo Andéol, Rufin VanRullen, Thomas Serre
Attribution methods correspond to a class of explainability methods (XAI) that aim to assess how individual inputs contribute to a model's decision-making process.
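As a concrete illustration of this idea (a generic occlusion-style sketch, not the method developed in the paper above), one can mask one input region at a time and record how the model's score changes; the toy linear model, patch size, and random image below are placeholders.

```python
import numpy as np

def occlusion_attribution(model, image, patch=8, baseline=0.0):
    """Score each patch by how much the model's output drops when it is masked."""
    h, w = image.shape
    reference = model(image)                        # scalar score of interest
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heatmap[i // patch, j // patch] = reference - model(occluded)
    return heatmap

# Toy usage: a linear "model" on a random 32x32 grayscale image.
rng = np.random.default_rng(0)
weights = rng.normal(size=(32, 32))
model = lambda x: float((x * weights).sum())
print(occlusion_attribution(model, rng.normal(size=(32, 32))))  # 4x4 patch heatmap
```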
1 code implementation • 11 Jun 2023 • Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, Julien Colin, Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre
However, the widespread adoption of feature visualization has been limited by a reliance on tricks to generate interpretable images and by the difficulty of scaling it to deeper neural networks.
no code implementations • 5 Jun 2023 • Drew Linsley, Pinyuan Feng, Thibaut Boissin, Alekh Karkada Ashok, Thomas Fel, Stephanie Olaiya, Thomas Serre
Harmonized DNNs achieve the best of both worlds and experience attacks that are detectable and affect features that humans find diagnostic for recognition, meaning that attacks on these models are more likely to be rendered ineffective by inducing similar effects on human perception.
1 code implementation • 11 May 2023 • Fanny Jourdan, Agustin Picard, Thomas Fel, Laurent Risser, Jean Michel Loubes, Nicholas Asher
COCKATIEL is a novel, post-hoc, concept-based, model-agnostic XAI technique that generates meaningful explanations from the last layer of a neural network trained on an NLP classification task: it uses Non-Negative Matrix Factorization (NMF) to discover the concepts the model leverages to make predictions, and Sensitivity Analysis to accurately estimate the importance of each of these concepts for the model.
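The NMF step can be pictured with a small sketch (an illustration of the general recipe, not the authors' implementation): factor a matrix of non-negative last-layer activations into concept directions and per-example concept coefficients. The shapes, the number of concepts, and the random activations below are placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF

# A (n_samples, n_features) matrix of non-negative last-layer activations,
# e.g. post-ReLU features from the penultimate layer (random stand-in here).
rng = np.random.default_rng(0)
activations = np.abs(rng.normal(size=(200, 768)))

n_concepts = 10
nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500, random_state=0)
U = nmf.fit_transform(activations)   # (200, 10): concept presence per example
W = nmf.components_                  # (10, 768): concept directions in feature space

# Each row of W is a "concept"; U[i, k] says how strongly concept k appears in example i.
# A sensitivity analysis can then rank the concepts by their effect on the model output.
print(U.shape, W.shape)
```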
no code implementations • 12 Apr 2023 • Léo Andéol, Thomas Fel, Florence De Grancey, Luca Mossina
Deploying deep learning models in real-world certified systems requires the ability to provide confidence estimates that accurately reflect their uncertainty.
1 code implementation • 27 Jan 2023 • Victor Boutin, Thomas Fel, Lakshya Singhal, Rishav Mukherji, Akash Nagaraj, Julien Colin, Thomas Serre
An important milestone for AI is the development of algorithms that can produce drawings that are indistinguishable from those of humans.
no code implementations • 26 Jan 2023 • Léo Andéol, Thomas Fel, Florence De Grancey, Luca Mossina
We present an application of conformal prediction, a form of uncertainty quantification with guarantees, to the detection of railway signals.
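The guarantee behind conformal prediction can be sketched in a few lines (a generic split-conformal example, not the railway-signal detection pipeline itself): calibrate a threshold on held-out nonconformity scores so that prediction sets contain the true label with probability at least 1 - alpha. The beta-distributed calibration scores and the four-class test point below are synthetic stand-ins.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """Quantile of calibration nonconformity scores giving 1 - alpha coverage."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n          # finite-sample correction
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

# Toy calibration: nonconformity = 1 - softmax probability of the true class.
rng = np.random.default_rng(0)
cal_scores = 1.0 - rng.beta(5, 2, size=500)          # stand-in for 1 - p_true
tau = conformal_threshold(cal_scores, alpha=0.1)

# At test time, keep every class whose nonconformity score stays below tau.
test_probs = rng.dirichlet(np.ones(4))
prediction_set = [c for c, p in enumerate(test_probs) if 1.0 - p <= tau]
print(tau, prediction_set)
```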
1 code implementation • CVPR 2023 • Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
However, recent research has exposed the limited practical value of these methods, attributed in part to their narrow focus on the most prominent regions of an image -- revealing "where" the model looks, but failing to elucidate "what" the model sees in those areas.
3 code implementations • 8 Nov 2022 • Thomas Fel, Ivan Felipe, Drew Linsley, Thomas Serre
Across 84 different DNNs trained on ImageNet and three independent datasets measuring the "where" and the "how" of human visual strategies for object recognition on those images, we find a systematic trade-off between DNN categorization accuracy and alignment with those human strategies.
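Alignment of this kind is typically scored as a rank correlation between a model's feature-importance map and a human attention map for the same image; a minimal sketch below, with random arrays standing in for both maps.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
model_saliency = rng.random((28, 28))   # placeholder model importance map
human_map = rng.random((28, 28))        # placeholder human attention map (e.g. clicks)

# Spearman rank correlation between the two flattened maps.
rho, _ = spearmanr(model_saliency.ravel(), human_map.ravel())
print(f"rank-correlation alignment: {rho:.3f}")
```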
1 code implementation • 17 Aug 2022 • Mohit Vaishnav, Thomas Fel, Iván Felipe Rodríguez, Thomas Serre
Vision transformers are now the de facto choice for image classification tasks.
Ranked #1 on Fine-Grained Image Classification (Herbarium 2022)
no code implementations • NeurIPS 2023 • Mathieu Serrurier, Franck Mamalet, Thomas Fel, Louis Béthune, Thibaut Boissin
Input gradients have a pivotal role in a variety of applications, including adversarial attack algorithms for evaluating model robustness, explainable AI techniques for generating Saliency Maps, and counterfactual explanations. However, Saliency Maps generated by traditional neural networks are often noisy and provide limited insights.
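For reference, a minimal input-gradient saliency map in PyTorch; the untrained toy network and random input are placeholders and are unrelated to the 1-Lipschitz networks studied in the paper.

```python
import torch
import torch.nn as nn

# Tiny stand-in classifier; in practice this would be a trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

x = torch.randn(1, 3, 32, 32, requires_grad=True)
score = model(x)[0].max()            # score of the top class
score.backward()

# Saliency map: magnitude of the class-score gradient w.r.t. each pixel.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)   # (32, 32)
print(saliency.shape)
```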
1 code implementation • 13 Jun 2022 • Paul Novello, Thomas Fel, David Vigouroux
HSIC measures the dependence between regions of an input image and the output of a model based on kernel embeddings of distributions.
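Concretely, HSIC is a kernel statistic that vanishes (in expectation) when two variables are independent; the toy estimator below flags a region mask that drives the output while ignoring one that does not. The synthetic masks and scores are placeholders, not the paper's sampling scheme.

```python
import numpy as np

def rbf_kernel(v, sigma=1.0):
    """Gram matrix of a 1-D sample under an RBF kernel."""
    d = v[:, None] - v[None, :]
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC between two 1-D samples."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_kernel(x, sigma), rbf_kernel(y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy setup: the output depends on region 0's mask but not region 1's.
rng = np.random.default_rng(0)
masks = rng.integers(0, 2, size=(500, 2)).astype(float)   # per-region on/off masks
outputs = 2.0 * masks[:, 0] + 0.1 * rng.normal(size=500)  # stand-in model scores

print(hsic(masks[:, 0], outputs), hsic(masks[:, 1], outputs))  # first is much larger
```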
1 code implementation • 9 Jun 2022 • Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poche, Justin Plakoo, Remi Cadene, Mathieu Chalvidal, Julien Colin, Thibaut Boissin, Louis Bethune, Agustin Picard, Claire Nicodeme, Laurent Gardes, Gregory Flandin, Thomas Serre
Today's most advanced machine-learning models are hardly scrutable.
no code implementations • CVPR 2023 • Thomas Fel, Melanie Ducoffe, David Vigouroux, Remi Cadene, Mikael Capelle, Claire Nicodeme, Thomas Serre
A variety of methods have been proposed to try to explain how deep neural networks make their decisions.
1 code implementation • 6 Dec 2021 • Julien Colin, Thomas Fel, Remi Cadene, Thomas Serre
A multitude of explainability methods and associated fidelity performance metrics have been proposed to help better understand how modern AI systems make decisions.
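One common family of fidelity metrics (a generic deletion-style check, sketched here; not necessarily the exact metrics analyzed in the paper) removes pixels in decreasing order of attributed importance and tracks how fast the model's score drops; a faithful explanation should drive the score down quickly. The linear toy model and its exact gradient-times-input attribution below are placeholders.

```python
import numpy as np

def deletion_curve(model, image, attribution, steps=10, baseline=0.0):
    """Model score as the most-important pixels are progressively removed."""
    order = np.argsort(attribution.ravel())[::-1]        # most important first
    flat = image.ravel().copy()
    scores = [model(flat.reshape(image.shape))]
    per_step = len(order) // steps
    for s in range(steps):
        flat[order[s * per_step:(s + 1) * per_step]] = baseline
        scores.append(model(flat.reshape(image.shape)))
    return np.array(scores)

# Toy check with a linear model, whose exact attribution is weights * input.
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16))
model = lambda x: float((x * w).sum())
img = rng.normal(size=(16, 16))
print(deletion_curve(model, img, attribution=w * img))
```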
1 code implementation • NeurIPS 2021 • Thomas Fel, Remi Cadene, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre
We describe a novel attribution method which is grounded in Sensitivity Analysis and uses Sobol indices.
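Sobol indices quantify how much of the output variance each input (or input region) accounts for; below is a generic Jansen-style estimator of total-order indices on a toy function, an assumption-laden sketch that does not reproduce the paper's masked-region, quasi-Monte Carlo setup.

```python
import numpy as np

def total_sobol_indices(f, n_vars, n_samples=2048, rng=None):
    """Jansen estimator of total-order Sobol indices for a scalar function f."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n_samples, n_vars))
    B = rng.random((n_samples, n_vars))
    fA = np.apply_along_axis(f, 1, A)
    var = fA.var()
    totals = np.empty(n_vars)
    for i in range(n_vars):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # re-sample only variable i
        fABi = np.apply_along_axis(f, 1, ABi)
        totals[i] = 0.5 * np.mean((fA - fABi) ** 2) / var
    return totals

# Toy function: variable 0 dominates the output variance, variable 2 is inert.
f = lambda x: 4.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
print(total_sobol_indices(f, n_vars=3))   # roughly [0.94, 0.06, 0.0]
```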
no code implementations • 7 Sep 2020 • Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre
A plethora of methods have been proposed to explain how deep neural networks reach their decisions, but comparatively little effort has been made to ensure that the explanations these methods produce are objectively relevant.