no code implementations • NeurIPS 2023 • Roland S. Zimmermann, Thomas Klein, Wieland Brendel
We use a psychophysical paradigm to quantify one form of mechanistic interpretability for a diverse suite of nine models and find no scaling effect for interpretability, neither with model size nor with dataset size.
1 code implementation • 7 Jun 2023 • Robert Geirhos, Roland S. Zimmermann, Blair Bilodeau, Wieland Brendel, Been Kim
Today, visualization methods form the foundation of our knowledge about the internal workings of neural networks, serving as one form of mechanistic interpretability.
no code implementations • 30 May 2023 • Roland S. Zimmermann, Sjoerd van Steenkiste, Mehdi S. M. Sajjadi, Thomas Kipf, Klaus Greff
Self-supervised methods for learning object-centric representations have recently been applied successfully to various datasets.
no code implementations • 23 May 2023 • Jack Brady, Roland S. Zimmermann, Yash Sharma, Bernhard Schölkopf, Julius von Kügelgen, Wieland Brendel
Under this generative process, we prove that the ground-truth object representations can be identified by an invertible and compositional inference model, even in the presence of dependencies between objects.
no code implementations • 28 Jun 2022 • Roland S. Zimmermann, Wieland Brendel, Florian Tramer, Nicholas Carlini
Hundreds of defenses have been proposed to make deep neural networks robust against minimal (adversarial) input perturbations.
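The minimal (adversarial) input perturbations these defenses target can be illustrated with a one-step FGSM-style attack on a toy linear classifier. The weights, inputs, and `fgsm_perturb` helper below are illustrative sketches, not the paper's evaluation protocol:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier: p(y=1 | x) = sigmoid(w.x + b)
w = np.array([2.0, -1.0])
b = 0.1

def fgsm_perturb(x, y, eps):
    """One-step FGSM: move x a small step in the direction that
    increases the binary cross-entropy loss."""
    p = sigmoid(w @ x + b)
    # Gradient of the loss w.r.t. the input x is (p - y) * w
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.5, 0.5])          # correctly classified as y = 1
x_adv = fgsm_perturb(x, y=1.0, eps=0.1)
# The perturbed input is assigned a lower probability of the true class.
```

Defenses are then evaluated by how much such worst-case perturbations can degrade accuracy within a fixed budget `eps`.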
no code implementations • 1 Oct 2021 • Roland S. Zimmermann, Lukas Schott, Yang Song, Benjamin A. Dunn, David A. Klindt
In this work, we investigate score-based generative models as classifiers for natural images.
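The generative-classifier idea behind this line of work can be sketched in a toy analytic setting: fit one density model per class and classify via Bayes' rule. A real score-based model estimates the gradient of the log-density with a neural network; the hypothetical 1-D Gaussian class conditionals below stand in for those learned models:

```python
import numpy as np

# One (analytic) density model per class; a score-based model would
# instead learn grad_x log p(x | y) and recover log-likelihoods from it.
means = {0: -2.0, 1: 2.0}
sigma = 1.0

def log_density(x, mu):
    """Log-density of a 1-D Gaussian with mean mu and std sigma."""
    return -0.5 * ((x - mu) / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma ** 2)

def classify(x):
    # Bayes' rule with a uniform class prior: argmax_y log p(x | y)
    return max(means, key=lambda y: log_density(x, means[y]))
```

The classification rule is identical for learned score-based models; only the way the class-conditional likelihood is obtained changes.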
1 code implementation • NeurIPS 2021 • Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel
A precise understanding of why units in an artificial network respond to certain stimuli would constitute a big step towards explainable artificial intelligence.
1 code implementation • 17 Feb 2021 • Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, Wieland Brendel
Contrastive learning has recently seen tremendous success in self-supervised learning.
Ranked #1 on Disentanglement on KITTI-Masks
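Contrastive objectives of the kind studied here are typically variants of the InfoNCE loss, which pulls embeddings of positive pairs together while pushing apart other samples in the batch. A minimal NumPy sketch (function name and temperature are illustrative, not the paper's exact setup):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss for a batch of positive pairs (z1[i], z2[i]);
    the remaining batch entries act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives lie on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Aligned pairs yield a much smaller loss than mismatched pairs.
```

The theoretical results in this entry concern what such objectives can provably recover about the data-generating factors, not the loss implementation itself.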
1 code implementation • 23 Oct 2020 • Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Even if only a single reference image is given, synthetic images provide less information than natural images ($65\pm5\%$ vs. $73\pm4\%$).
3 code implementations • ECCV 2020 • Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.
no code implementations • 1 Jul 2019 • Roland S. Zimmermann
A recent paper by Liu et al. combines the topics of adversarial training and Bayesian Neural Networks (BNN) and suggests that adversarially trained BNNs are more robust against adversarial attacks than their non-Bayesian counterparts.
no code implementations • 16 Apr 2019 • Borja Fernandez-Gauna, Manuel Graña, Roland S. Zimmermann
We present Simion Zoo, a Reinforcement Learning (RL) workbench that provides a complete set of tools to design, run, and analyze RL control applications, both statistically and visually.
2 code implementations • 19 Sep 2018 • Roland S. Zimmermann, Julien N. Siems
We present an auxiliary task to Mask R-CNN, an instance segmentation network, which leads to faster training of the mask head.