no code implementations • 16 Apr 2024 • Christian Tinauer, Anna Damulina, Maximilian Sackl, Martin Soellradl, Reduan Achtibat, Maximilian Dreyer, Frederik Pahde, Sebastian Lapuschkin, Reinhold Schmidt, Stefan Ropele, Wojciech Samek, Christian Langkammer
Using quantitative R2* maps, we separated Alzheimer's patients (n=117) from normal controls (n=219) with a convolutional neural network, systematically investigated the learned concepts using Concept Relevance Propagation, and compared these results to a conventional region-of-interest-based analysis.
no code implementations • 15 Apr 2024 • Dilyara Bareeva, Maximilian Dreyer, Frederik Pahde, Wojciech Samek, Sebastian Lapuschkin
Deep Neural Networks are prone to learning and relying on spurious correlations in the training data, which, for high-risk applications, can have fatal consequences.
1 code implementation • 18 Aug 2023 • Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin
Deep Neural Networks are prone to learning spurious correlations embedded in the training data, leading to potentially biased predictions.
1 code implementation • 22 Mar 2023 • Frederik Pahde, Maximilian Dreyer, Wojciech Samek, Sebastian Lapuschkin
To tackle this problem, we propose Reveal to Revise (R2R), a framework entailing the entire eXplainable Artificial Intelligence (XAI) life cycle, enabling practitioners to iteratively identify, mitigate, and (re-)evaluate spurious model behavior with a minimal amount of human interaction.
no code implementations • 30 Nov 2022 • Frederik Pahde, Galip Ümit Yolcu, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin
We further suggest an XAI evaluation framework with which we quantify and compare the effects of model canonization for various XAI methods in image classification tasks on the Pascal-VOC and ILSVRC2017 datasets, as well as for Visual Question Answering using CLEVR-XAI.
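Model canonization, as referenced above, restructures a network into a functionally identical but attribution-friendlier form, e.g. by folding batch normalization into the preceding linear or convolutional layer. A minimal sketch with synthetic parameters (all names and values here are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear layer followed by batch normalization.
d_in, d_out = 8, 4
W = rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out)

# BatchNorm running statistics and affine parameters.
mean = rng.normal(size=d_out)
var = rng.uniform(0.5, 2.0, size=d_out)
gamma = rng.normal(size=d_out)
beta = rng.normal(size=d_out)
eps = 1e-5

# Fold BN into the linear layer: W' = W * gamma/sqrt(var+eps),
# b' = (b - mean) * gamma/sqrt(var+eps) + beta.
scale = gamma / np.sqrt(var + eps)
W_fused = W * scale[:, None]
b_fused = (b - mean) * scale + beta

x = rng.normal(size=d_in)
original = gamma * ((W @ x + b) - mean) / np.sqrt(var + eps) + beta
fused = W_fused @ x + b_fused
print(np.allclose(original, fused))  # True: the fused layer reproduces BN(linear(x))
```

The fused network computes the same function, but explanation methods that propagate relevance layer by layer no longer have to handle the BN layer separately.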
no code implementations • 7 Feb 2022 • Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
With a growing interest in understanding neural network prediction strategies, Concept Activation Vectors (CAVs) have emerged as a popular tool for modeling human-understandable concepts in the latent space.
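The idea behind CAVs can be illustrated in a few lines: fit a linear classifier that separates activations of concept examples from those of random counterexamples; the (normalized) weight vector is the concept's direction in latent space. A hedged sketch on synthetic activations (the data and the chosen classifier are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "layer activations": concept examples are shifted along a
# hidden direction; random counterexamples are not.
d = 16
true_dir = np.zeros(d)
true_dir[0] = 1.0
concept_acts = rng.normal(size=(200, d)) + 3.0 * true_dir
random_acts = rng.normal(size=(200, d))

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 200 + [0] * 200)

# The CAV is the unit normal of a linear boundary separating concept
# from non-concept activations.
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

print(float(cav @ true_dir))  # close to 1: the CAV aligns with the planted concept direction
```

In practice the activations would come from a chosen layer of the trained network, and the model's sensitivity to the concept is then measured via directional derivatives along the CAV.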
no code implementations • 17 Nov 2020 • Frederik Pahde, Mihai Puscas, Tassilo Klein, Moin Nabi
Although providing exceptional results for many computer vision tasks, state-of-the-art deep learning algorithms catastrophically struggle in low data scenarios.
no code implementations • 4 Jan 2019 • Frederik Pahde, Mihai Puscas, Jannik Wolff, Tassilo Klein, Nicu Sebe, Moin Nabi
Since the advent of deep learning, neural networks have demonstrated remarkable results in many visual recognition tasks, constantly pushing the limits.
no code implementations • 22 Nov 2018 • Frederik Pahde, Oleksiy Ostapenko, Patrick Jähnichen, Tassilo Klein, Moin Nabi
State-of-the-art deep learning algorithms yield remarkable results in many visual recognition tasks.
no code implementations • 13 Jun 2018 • Frederik Pahde, Patrick Jähnichen, Tassilo Klein, Moin Nabi
State-of-the-art deep learning algorithms generally require large amounts of data for model training.