1 code implementation • 22 Jun 2023 • Benedict Clark, Rick Wilming, Stefan Haufe
The field of 'explainable' artificial intelligence (XAI) has produced highly cited methods that seek to make the decisions of complex machine learning (ML) models 'understandable' to humans, for example by attributing 'importance' scores to input features.
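To make "attributing importance scores to input features" concrete, here is a minimal sketch of one popular attribution rule, gradient × input, applied to a linear model. The weights and inputs are illustrative values, not from any of the papers listed here; for a linear model the gradient with respect to each feature is simply its weight.

```python
# For a linear model f(x) = w.x + b, the gradient w.r.t. feature i is w_i,
# so the "gradient x input" rule assigns feature i the score w_i * x_i.
w = [0.5, -2.0, 0.0]   # learned weights (illustrative values)
b = 0.1
x = [1.0, 0.3, 4.2]    # one input sample

prediction = sum(wi * xi for wi, xi in zip(w, x)) + b
scores = [wi * xi for wi, xi in zip(w, x)]
# The scores sum to prediction - b, giving a per-feature decomposition
# of the model output for this sample.
```

A feature with zero weight (here the third one) receives zero importance regardless of its value, which matches the intuition that the model ignores it.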
1 code implementation • 21 Jun 2023 • Marta Oliveira, Rick Wilming, Benedict Clark, Céline Budding, Fabian Eitel, Kerstin Ritter, Stefan Haufe
Here, we propose a benchmark dataset that allows for quantifying explanation performance in a realistic magnetic resonance imaging (MRI) classification task.
Explainable Artificial Intelligence (XAI) +2
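A benchmark with known ground truth makes "explanation performance" measurable. The paper's actual metric is not given in this snippet; as an assumed, illustrative stand-in, the sketch below scores an attribution map by top-k precision against a ground-truth mask of truly informative pixels.

```python
# Hypothetical example: an explanation assigns a score to each of 6
# pixels; ground truth says pixels 0 and 3 carry the class signal.
scores = [0.9, 0.1, 0.2, 0.7, 0.05, 0.3]
truth = {0, 3}
k = len(truth)

# Rank pixels by attributed importance and keep the top-k.
top_k = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# Fraction of the top-k attributed pixels that are truly informative.
precision = len(set(top_k) & truth) / k
```

Here the two highest-scoring pixels are exactly the informative ones, so the explanation earns a precision of 1.0; an explanation that highlighted uninformative pixels would score lower.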
no code implementations • 2 Jun 2023 • Rick Wilming, Leo Kieslich, Benedict Clark, Stefan Haufe
In recent years, the community of 'explainable artificial intelligence' (XAI) has created a vast body of methods to bridge a perceived gap between model 'complexity' and 'interpretability'.
1 code implementation • 14 Nov 2021 • Rick Wilming, Céline Budding, Klaus-Robert Müller, Stefan Haufe
It has been demonstrated that some saliency methods can highlight features that have no statistical association with the prediction target (suppressor variables).
Explainable Artificial Intelligence (XAI) • Feature Importance
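The suppressor-variable effect mentioned above can be reproduced in a few lines. The simulation below is an illustrative construction, not the paper's experimental setup: a target equals a signal, one feature mixes signal with shared noise, and a second feature contains only that noise. The second feature has essentially zero correlation with the target, yet least squares gives it a large weight, because subtracting it cancels the noise.

```python
import random
import statistics

random.seed(0)
n = 20000
s = [random.gauss(0, 1) for _ in range(n)]   # signal driving the target
d = [random.gauss(0, 1) for _ in range(n)]   # shared noise ("distractor")
x1 = [si + di for si, di in zip(s, d)]       # feature 1: signal + noise
x2 = list(d)                                 # feature 2: noise only (suppressor)
y = list(s)                                  # prediction target

def pearson(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# The suppressor has (essentially) no statistical association with y ...
r = pearson(x2, y)

# ... yet ordinary least squares on (x1, x2) assigns it a large negative
# weight; solve the 2x2 normal equations directly (no intercept, as the
# data are zero-mean by construction).
A = sum(v * v for v in x1)
B = sum(u * v for u, v in zip(x1, x2))
C = sum(v * v for v in x2)
p = sum(u * v for u, v in zip(x1, y))
q = sum(u * v for u, v in zip(x2, y))
det = A * C - B * B
w1 = (C * p - B * q) / det   # close to +1
w2 = (A * q - B * p) / det   # close to -1, despite corr(x2, y) ~ 0
```

A saliency method that mirrors the model weights would mark x2 as highly important even though it carries no information about the target on its own, which is exactly the failure mode the paper probes.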