no code implementations • 19 Mar 2024 • Kacper Sokol, Julia E. Vogt
Despite significant progress, evaluation of explainable artificial intelligence remains elusive and challenging.
1 code implementation • 8 Mar 2024 • Thomas M. Sutter, Yang Meng, Norbert Fortin, Julia E. Vogt, Stephan Mandt
Such architectures impose hard constraints on the model.
no code implementations • 24 Jan 2024 • Ričards Marcinkevičs, Sonia Laguna, Moritz Vandenhirtz, Julia E. Vogt
Recently, interpretable machine learning has re-explored concept bottleneck models (CBMs), which first predict high-level concepts from the raw features and then predict the target variable from those concepts.
1 code implementation • 24 Jan 2024 • Mike Laszkiewicz, Imant Daunhawer, Julia E. Vogt, Asja Fischer, Johannes Lederer
Recent years have witnessed a rapid development of deep generative models for creating synthetic media, such as images and videos.
1 code implementation • 25 Oct 2023 • Claudio Fanconi, Moritz Vandenhirtz, Severin Husmann, Julia E. Vogt
Prototype learning, a popular machine learning method designed for inherently interpretable decisions, leverages similarities to learned prototypes for classifying new data.
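As a minimal, illustrative sketch of the general idea (not the paper's architecture): a prototype-based classifier assigns a new point the label of its most similar learned prototype. The prototypes, labels, and similarity choice below are all made up for illustration.

```python
import numpy as np

def classify_by_prototypes(x, prototypes, labels):
    """Assign x the label of its most similar prototype (cosine similarity)."""
    sims = prototypes @ x / (np.linalg.norm(prototypes, axis=1) * np.linalg.norm(x))
    return labels[int(np.argmax(sims))]

# Two illustrative prototypes, one per class.
prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array(["cat", "dog"])
print(classify_by_prototypes(np.array([0.9, 0.1]), prototypes, labels))  # → cat
```

Because the decision reduces to "closest prototype wins", each prediction comes with a built-in explanation: the prototype it resembles most.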
1 code implementation • 16 Oct 2023 • Paweł Czyż, Frederic Grabowski, Julia E. Vogt, Niko Beerenwinkel, Alexander Marx
Mutual information quantifies the dependence between two random variables and remains invariant under diffeomorphisms.
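The invariance property can be checked exactly in the discrete analogue, where any bijective relabeling of outcomes (the discrete counterpart of a diffeomorphism) leaves mutual information unchanged. A small illustrative sketch:

```python
import numpy as np

def mutual_information(joint):
    """MI (in nats) of a discrete joint distribution given as a 2-D array."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
mi = mutual_information(joint)
# A bijective relabeling of X (here, a row permutation) leaves MI unchanged,
# mirroring invariance under diffeomorphisms in the continuous case.
mi_perm = mutual_information(joint[::-1])
assert np.isclose(mi, mi_perm)
```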
1 code implementation • 7 Sep 2023 • Ece Ozkan, Thomas M. Sutter, Yurong Hu, Sebastian Balzer, Julia E. Vogt
Early detection of cardiac dysfunction through routine screening is vital for diagnosing cardiovascular diseases.
2 code implementations • NeurIPS 2023 • Paweł Czyż, Frederic Grabowski, Julia E. Vogt, Niko Beerenwinkel, Alexander Marx
Mutual information is a general statistical dependency measure which has found applications in representation learning, causality, domain generalization and computational biology.
no code implementations • 4 Jun 2023 • Kacper Sokol, Julia E. Vogt
Ante-hoc interpretability has become the holy grail of explainable artificial intelligence for high-stakes domains such as healthcare; however, this notion is elusive, lacks a widely accepted definition and depends on the operational context.
1 code implementation • 31 May 2023 • Moritz Vandenhirtz, Laura Manduchi, Ričards Marcinkevičs, Julia E. Vogt
We propose Signal is Harder (SiH), a variational-autoencoder-based method that simultaneously trains a biased and unbiased classifier using a novel, disentangling reweighting scheme inspired by the focal loss.
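The focal loss referenced here is a standard component (Lin et al., 2017) that down-weights easy, confidently classified examples; the binary version below is a generic illustration, not the paper's reweighting scheme itself.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss: -(1 - p_t)^gamma * log(p_t), where p_t is the
    predicted probability of the true class (Lin et al., 2017)."""
    p_t = np.where(y == 1, p, 1.0 - p)
    return -((1.0 - p_t) ** gamma) * np.log(p_t)

# An easy, confident example contributes far less loss than a hard one,
# which is what makes the loss useful for emphasizing hard samples.
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.30]), np.array([1]))
assert easy < hard
```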
1 code implementation • NeurIPS 2023 • Thomas M. Sutter, Alain Ryser, Joram Liebeskind, Julia E. Vogt
Partitioning a set of elements into an unknown number of mutually exclusive subsets is essential in many machine learning problems.
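For background only: a classical (non-differentiable) way to place a distribution over partitions with an unknown number of blocks is the Chinese restaurant process. The sketch below illustrates that setting, not the differentiable method the paper introduces.

```python
import random

def crp_partition(n, alpha=1.0, seed=0):
    """Sample a random partition of n elements via the Chinese restaurant
    process: element i joins an existing block with probability proportional
    to its size, or opens a new block with probability proportional to alpha."""
    rng = random.Random(seed)
    blocks = []
    for i in range(n):
        weights = [len(b) for b in blocks] + [alpha]
        choice = rng.choices(range(len(weights)), weights=weights)[0]
        if choice == len(blocks):
            blocks.append([i])   # open a new block
        else:
            blocks[choice].append(i)
    return blocks

parts = crp_partition(10)
# Blocks are mutually exclusive and jointly cover all 10 elements.
assert sorted(i for b in parts for i in b) == list(range(10))
```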
1 code implementation • 16 Mar 2023 • Imant Daunhawer, Alice Bizeul, Emanuele Palumbo, Alexander Marx, Julia E. Vogt
Our work generalizes previous identifiability results by redefining the generative process in terms of distinct mechanisms with modality-specific latent variables.
1 code implementation • 28 Feb 2023 • Ričards Marcinkevičs, Patricia Reis Wolfertstetter, Ugne Klimiene, Kieran Chin-Cheong, Alyssia Paschke, Julia Zerres, Markus Denzinger, David Niederberger, Sven Wellmann, Ece Ozkan, Christian Knorr, Julia E. Vogt
Appendicitis is among the most frequent reasons for pediatric abdominal surgeries.
no code implementations • 23 Dec 2022 • Ričards Marcinkevičs, Ece Ozkan, Julia E. Vogt
Many modern research fields increasingly rely on collecting and analysing massive, often unstructured, and unwieldy datasets.
1 code implementation • 13 Oct 2022 • Alexander Immer, Christoph Schultheiss, Julia E. Vogt, Bernhard Schölkopf, Peter Bühlmann, Alexander Marx
We study the class of location-scale or heteroscedastic noise models (LSNMs), in which the effect $Y$ can be written as a function of the cause $X$ and a noise source $N$ independent of $X$, which may be scaled by a positive function $g$ over the cause, i.e., $Y = f(X) + g(X)N$.
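The generative model is easy to simulate directly; the particular $f$ and $g$ below are illustrative choices, with $g$ strictly positive as the model requires.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices of f and g; g must be strictly positive.
f = lambda x: 2.0 * x
g = lambda x: 0.5 + x ** 2

x = rng.normal(size=1000)
n = rng.normal(size=1000)     # noise source N, independent of X
y = f(x) + g(x) * n           # LSNM: Y = f(X) + g(X) N

# In the correct causal direction, standardized residuals recover the
# noise exactly, which is what cause-effect inference methods exploit.
residuals = (y - f(x)) / g(x)
assert np.allclose(residuals, n)
```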
1 code implementation • 26 Jul 2022 • Ričards Marcinkevičs, Ece Ozkan, Julia E. Vogt
In addition, we compare several intra- and post-processing approaches applied to debiasing deep chest X-ray classifiers.
1 code implementation • 30 Jun 2022 • Alain Ryser, Laura Manduchi, Fabian Laumer, Holger Michel, Sven Wellmann, Julia E. Vogt
The introduced method takes advantage of the periodic nature of the heart cycle to learn three variants of a variational latent trajectory model (TVAE).
no code implementations • 17 Jun 2022 • Yuge Shi, Imant Daunhawer, Julia E. Vogt, Philip H. S. Torr, Amartya Sanyal
As such, there is a lack of insight into the robustness of representations learned by unsupervised methods, such as self-supervised learning (SSL) and auto-encoder based algorithms (AE), under distribution shift.
1 code implementation • 3 Mar 2022 • Thomas M. Sutter, Laura Manduchi, Alain Ryser, Julia E. Vogt
We introduce reparameterizable gradients to learn the importance between groups and highlight the advantage of explicitly learning the size of subsets in two typical applications: weakly-supervised learning and clustering.
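A standard building block for reparameterizable gradients over discrete choices is the Gumbel-softmax relaxation (Jang et al., 2017); the sketch below shows that generic technique only, not the paper's specific construction for subset sizes.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Relaxed categorical sample, differentiable in the logits
    (Gumbel-softmax, Jang et al., 2017)."""
    rng = rng or np.random.default_rng(0)
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel noise
    z = (logits + g) / tau
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

# A near-one-hot but differentiable sample over three categories.
sample = gumbel_softmax(np.array([2.0, 0.5, 0.1]))
assert np.isclose(sample.sum(), 1.0)
```

Lower temperatures `tau` push the sample closer to a hard one-hot choice while keeping gradients well-defined.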
no code implementations • NeurIPS Workshop ICBINB 2021 • Imant Daunhawer, Thomas M. Sutter, Kieran Chin-Cheong, Emanuele Palumbo, Julia E. Vogt
Multimodal variational autoencoders (VAEs) have shown promise as efficient generative models for weakly-supervised data.
1 code implementation • NeurIPS 2021 • Laura Manduchi, Kieran Chin-Cheong, Holger Michel, Sven Wellmann, Julia E. Vogt
Constrained clustering has gained significant attention in the field of machine learning as it can leverage prior information on a growing amount of only partially labeled data.
1 code implementation • ICLR 2022 • Laura Manduchi, Ričards Marcinkevičs, Michela C. Massi, Thomas Weikert, Alexander Sauter, Verena Gotta, Timothy Müller, Flavio Vasella, Marian C. Neidert, Marc Pfister, Bram Stieltjes, Julia E. Vogt
In this work, we study the problem of clustering survival data, a challenging and so far under-explored task.
1 code implementation • ICLR 2021 • Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt
Multiple data types naturally co-occur when describing real-world phenomena and learning from them is a long-standing goal in machine learning research.
1 code implementation • ICLR 2021 • Ričards Marcinkevičs, Julia E. Vogt
Exploratory analysis of time series data can yield a better understanding of complex dynamical systems.
no code implementations • 3 Dec 2020 • Ričards Marcinkevičs, Julia E. Vogt
In this review, we examine the problem of designing interpretable and explainable machine learning models.
1 code implementation • NeurIPS 2020 • Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt
Learning from different data types is a long-standing goal in machine learning research, as multiple information sources co-occur when describing natural phenomena.
no code implementations • 5 Jun 2020 • Kieran Chin-Cheong, Thomas Sutter, Julia E. Vogt
In this work, we explore using Generative Adversarial Networks to generate synthetic, heterogeneous EHRs with the goal of using these synthetic records in place of existing data sets for downstream classification tasks.
no code implementations • 29 Apr 2019 • Stefan G. Stark, Stephanie L. Hyland, Melanie F. Pradier, Kjong Lehmann, Andreas Wicki, Fernando Perez Cruz, Julia E. Vogt, Gunnar Rätsch
To demonstrate the utility of our approach, we perform an association study of clinical features with somatic mutation profiles from 4,007 cancer patients and their tumors.
no code implementations • 14 Apr 2015 • Julia E. Vogt, Marius Kloft, Stefan Stark, Sudhir S. Raman, Sandhya Prabhakaran, Volker Roth, Gunnar Rätsch
We present a novel probabilistic clustering model for objects that are represented via pairwise distances and observed at different time points.