no code implementations • 25 Mar 2024 • Georgii Mikriukov, Gesina Schwalbe, Franz Motzkus, Korinna Bade
Adversarial attacks (AAs) pose a significant threat to the reliability and robustness of deep neural networks.
no code implementations • 24 Nov 2023 • Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade
The latter, though, is of particular interest for debugging, such as finding and understanding outliers, learned notions of sub-concepts, and concept confusion.
Explainable Artificial Intelligence (XAI)
no code implementations • 8 Sep 2023 • Youssef Shoeb, Robin Chan, Gesina Schwalbe, Azarm Nowzard, Fatma Güney, Hanno Gottschalk
In this work, we extend beyond identifying OoD road obstacles in video streams and offer a comprehensive approach to extracting sequences of OoD road obstacles using text queries, thereby proposing a way to curate a collection of OoD data for subsequent analysis.
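To make the text-query retrieval step concrete, below is a minimal sketch of ranking detected obstacle crops against a free-form text query with CLIP image-text embeddings. The model choice and the helper `rank_by_query` are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: retrieve OoD road-obstacle crops matching a text query via
# CLIP embeddings (an assumed retrieval backbone for illustration only).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_by_query(crops: list, query: str) -> torch.Tensor:
    """Return indices of obstacle crops sorted by similarity to the query."""
    inputs = processor(text=[query], images=crops, return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image.squeeze(1)  # (n_crops,)
    return sims.argsort(descending=True)

# Hypothetical usage on crops of detected OoD obstacles from a video stream:
# order = rank_by_query(obstacle_crops, "a lost cargo box on the road")
```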
no code implementations • 30 Apr 2023 • Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade
These allow insights both into the flow and likeness of semantic information within CNN layers and into the degree of its similarity between different network architectures.
Explainable Artificial Intelligence (XAI) +1
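As an illustration of how such layer-wise likeness can be quantified, the following sketch uses linear Centered Kernel Alignment (CKA), a standard representational similarity measure assumed here for demonstration; the paper's own concept-based comparison may differ.

```python
# Minimal sketch: compare semantic information between two layers (or two
# architectures) with linear CKA on their activations for the same inputs.
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between activation matrices of shape (n_samples, n_features)."""
    x = x - x.mean(axis=0)  # center each feature
    y = y - y.mean(axis=0)
    dot = np.linalg.norm(x.T @ y) ** 2                       # ||X^T Y||_F^2
    norm = np.linalg.norm(x.T @ x) * np.linalg.norm(y.T @ y)
    return dot / norm

# Hypothetical usage: flattened activations of two layers on 256 shared inputs.
acts_a = np.random.randn(256, 512)
acts_b = np.random.randn(256, 1024)
print(f"CKA similarity: {linear_cka(acts_a, acts_b):.3f}")
```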
no code implementations • 28 Apr 2023 • Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade
The guiding use-case is a post-hoc explainability framework for object detection (OD) CNNs, to which existing concept analysis (CA) methods are successfully adapted.
Dimensionality Reduction • Explainable Artificial Intelligence (XAI) +4
no code implementations • 10 May 2022 • Julian Wörmann, Daniel Bogdoll, Christian Brunner, Etienne Bührle, Han Chen, Evaristus Fuh Chuo, Kostadin Cvejoski, Ludger van Elst, Philip Gottschall, Stefan Griesche, Christian Hellert, Christian Hesels, Sebastian Houben, Tim Joseph, Niklas Keil, Johann Kelsch, Mert Keser, Hendrik Königshof, Erwin Kraft, Leonie Kreuser, Kevin Krone, Tobias Latka, Denny Mattern, Stefan Matthes, Franz Motzkus, Mohsin Munir, Moritz Nekolla, Adrian Paschke, Stefan Pilar von Pilchau, Maximilian Alexander Pintz, Tianming Qiu, Faraz Qureishi, Syed Tahseen Raza Rizvi, Jörg Reichardt, Laura von Rueden, Alexander Sagel, Diogo Sasdelli, Tobias Scholl, Gerhard Schunk, Gesina Schwalbe, Hao Shen, Youssef Shoeb, Hendrik Stapelbroek, Vera Stehr, Gurucharan Srinivas, Anh Tuan Tran, Abhishek Vivekanandan, Ya Wang, Florian Wasserrab, Tino Werner, Christian Wirth, Stefan Zwicklbauer
The availability of representative datasets is an essential prerequisite for many successful artificial intelligence and machine learning models.
no code implementations • 25 Mar 2022 • Gesina Schwalbe
The research field of concept (embedding) analysis (CA) tackles this problem: CA aims to find global, assessable associations of human-interpretable semantic concepts (e.g., eye, bearded) with internal representations of a DNN.
Explainable Artificial Intelligence (XAI) +2
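As a rough illustration of the core idea, CA methods in the style of concept activation vectors associate a concept with a layer by fitting a linear model on that layer's activations. A minimal sketch, assuming PyTorch and pre-extracted activations (the surveyed methods differ in detail):

```python
# Minimal sketch of concept (embedding) analysis via a linear probe; the
# setup below is a hypothetical illustration, not any single method's API.
import torch
import torch.nn as nn

def fit_concept_vector(acts: torch.Tensor, labels: torch.Tensor,
                       epochs: int = 100, lr: float = 1e-2) -> nn.Linear:
    """Associate a semantic concept (e.g., 'eye') with a DNN layer by training
    a linear classifier on that layer's activations (n_samples, n_features)."""
    probe = nn.Linear(acts.shape[1], 1)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(probe(acts).squeeze(1), labels.float())
        loss.backward()
        opt.step()
    return probe  # probe.weight points along the concept's latent direction

# Hypothetical usage: activations at one layer for images with/without 'eye'.
acts = torch.randn(200, 512)
labels = torch.randint(0, 2, (200,))
concept_probe = fit_concept_vector(acts, labels)
```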
no code implementations • 3 Jan 2022 • Gesina Schwalbe, Christian Wirth, Ute Schmid
In this work, we present a simple yet effective approach to verify that a CNN complies with symbolic predicate logic rules which relate visual concepts.
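A minimal sketch of what such a compliance check can look like, assuming per-sample concept scores in [0, 1] (e.g., from concept probes) and the Łukasiewicz fuzzy implication; the t-norm logic is an illustrative choice, not necessarily the exact formalization used.

```python
# Minimal sketch: fuzzy evaluation of the rule 'eye(x) -> face(x)' on
# concept scores; assumes scores in [0, 1] from hypothetical concept probes.
import torch

def rule_compliance(eye_score: torch.Tensor, face_score: torch.Tensor) -> torch.Tensor:
    """Per-sample truth value of 'eye(x) -> face(x)' under the Lukasiewicz
    implication a -> b := min(1, 1 - a + b)."""
    return torch.clamp(1.0 - eye_score + face_score, max=1.0)

# Hypothetical usage: concept scores for a batch of eight inputs.
eye = torch.rand(8)
face = torch.rand(8)
print("mean rule compliance:", rule_compliance(eye, face).mean().item())
```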
1 code implementation • 16 May 2021 • Johannes Rabold, Gesina Schwalbe, Ute Schmid
We show that our explanation is faithful to the original black-box model.
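Faithfulness (fidelity) of a surrogate explanation is commonly measured as the agreement between the surrogate's and the black box's predictions on held-out data; a minimal sketch with hypothetical prediction arrays, not the paper's exact evaluation protocol:

```python
# Minimal sketch: fidelity of an explanation as prediction agreement between
# the surrogate model (the explanation) and the original black-box model.
import numpy as np

def fidelity(black_box_preds: np.ndarray, surrogate_preds: np.ndarray) -> float:
    """Fraction of samples on which surrogate and black box agree."""
    return float(np.mean(black_box_preds == surrogate_preds))

# Hypothetical usage with predicted class labels on a small test set:
bb = np.array([1, 0, 1, 1, 0])
sg = np.array([1, 0, 1, 0, 0])
print(f"fidelity: {fidelity(bb, sg):.2f}")  # 0.80
```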
no code implementations • 15 May 2021 • Gesina Schwalbe, Bettina Finzel
With the number of XAI methods growing rapidly, researchers and practitioners alike need a taxonomy of methods: to grasp the breadth of the topic, to compare methods, and to select the right XAI method based on the traits required by a specific use-case context.
Explainable Artificial Intelligence (XAI)
no code implementations • 14 May 2021 • Gesina Schwalbe
One approach to this is concept analysis, which aims to establish a mapping between the internal representation of a DNN and intuitive semantic concepts.
no code implementations • 29 Apr 2021 • Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, Patrick Feifel, Tim Fingscheidt, Sujan Sai Gannamaneni, Seyed Eghbal Ghobadi, Ahmed Hammam, Anselm Haselhoff, Felix Hauser, Christian Heinzemann, Marco Hoffmann, Nikhil Kapoor, Falk Kappel, Marvin Klingner, Jan Kronenberger, Fabian Küppers, Jonas Löhdefink, Michael Mlynarski, Michael Mock, Firas Mualla, Svetlana Pavlitskaya, Maximilian Poretschkin, Alexander Pohl, Varun Ravi-Kumar, Julia Rosenzweig, Matthias Rottmann, Stefan Rüping, Timo Sämann, Jan David Schneider, Elena Schulz, Gesina Schwalbe, Joachim Sicking, Toshika Srivastava, Serin Varghese, Michael Weber, Sebastian Wirkert, Tim Wirtz, Matthias Woehrle
Our paper addresses both machine learning experts and safety engineers: the former may profit from the broad range of machine learning topics covered and from the discussions of limitations of recent methods.