no code implementations • 25 Apr 2024 • Niclas Popp, Jan Hendrik Metzen, Matthias Hein
Multi-modal foundation models such as CLIP have showcased impressive zero-shot capabilities.
no code implementations • 10 Apr 2024 • Valentyn Boreiko, Matthias Hein, Jan Hendrik Metzen
Our approach, BEV2EGO, allows for a realistic generation of the complete scene with road-contingent control that maps 2D bird's-eye view (BEV) scene configurations to a first-person view (EGO).
no code implementations • 28 Sep 2023 • Jan Hendrik Metzen, Piyapat Saranrittichai, Chaithanya Kumar Mummadi
We show that AutoCLIP consistently outperforms baselines across a broad range of vision-language models, datasets, and prompt templates, by up to 3 percentage points in accuracy.
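The idea of auto-tuning a zero-shot classifier can be sketched in a few lines. This is only an illustrative sketch, not the paper's exact method: the function name, the softmax-over-templates weighting, and the `temp` parameter are assumptions; the core idea shown is weighting prompt templates per image instead of averaging them uniformly.

```python
import numpy as np

def weighted_zero_shot_logits(image_emb, class_template_embs, temp=0.1):
    """Zero-shot classification where prompt templates are re-weighted
    per image, rather than averaged uniformly.

    image_emb:            (d,) L2-normalized image embedding
    class_template_embs:  (num_classes, num_templates, d) L2-normalized
                          text embeddings, one per (class, template) pair
    """
    # Similarity of the image to every (class, template) pair
    sims = class_template_embs @ image_emb          # (C, T)
    # Per-image template weights: templates that match this image well
    # (averaged over classes) get more influence
    template_scores = sims.mean(axis=0)             # (T,)
    weights = np.exp(template_scores / temp)
    weights /= weights.sum()
    # Weighted (instead of uniform) average of per-template similarities
    return sims @ weights                           # (C,)
```

Because the weights depend on the image embedding only, this adds negligible cost at inference time and needs no labels.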
no code implementations • 23 Sep 2023 • Valentyn Boreiko, Matthias Hein, Jan Hendrik Metzen
Moreover, our framework introduces an evaluation setting that can serve as a benchmark for similar pipelines.
no code implementations • ICCV 2023 • Jan Hendrik Metzen, Robin Hutmacher, N. Grace Hua, Valentyn Boreiko, Dan Zhang
Despite excellent average-case performance of many image classifiers, their performance can substantially deteriorate on semantically coherent subgroups of the data that were under-represented in the training data.
no code implementations • 13 Sep 2022 • Maksym Yatsura, Kaspar Sakmann, N. Grace Hua, Matthias Hein, Jan Hendrik Metzen
Adversarial patch attacks are an emerging security threat for real-world deep learning applications.
no code implementations • CVPR 2022 • Giulio Lovisotto, Nicole Finnie, Mauricio Munoz, Chaithanya Kumar Mummadi, Jan Hendrik Metzen
Neural architectures based on attention such as vision transformers are revolutionizing image recognition.
no code implementations • 15 Feb 2022 • Thomas Elsken, Arber Zela, Jan Hendrik Metzen, Benedikt Staffler, Thomas Brox, Abhinav Valada, Frank Hutter
The success of deep learning in recent years has led to a rising demand for neural network architecture engineering.
1 code implementation • NeurIPS 2021 • Maksym Yatsura, Jan Hendrik Metzen, Matthias Hein
We demonstrate that plugging the learned controller into the attack consistently improves its black-box robustness estimate in different query regimes by up to 20% for a wide range of different models with black-box access.
no code implementations • 8 Jul 2021 • Thomas Elsken, Benedikt Staffler, Arber Zela, Jan Hendrik Metzen, Frank Hutter
While neural architecture search methods have been successful in previous years and led to new state-of-the-art performance on various problems, they have also been criticized for being unstable, highly sensitive to their hyperparameters, and often no better than random search.
no code implementations • NeurIPS 2021 • Chaithanya Kumar Mummadi, Robin Hutmacher, Kilian Rambach, Evgeny Levinkov, Thomas Brox, Jan Hendrik Metzen
This paper focuses on the fully test-time adaptation setting, where only unlabeled data from the target distribution is required.
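A minimal sketch of the fully test-time adaptation setting: a pre-trained model adapts a small set of parameters on an unlabeled target batch by making its own predictions more confident. This uses plain entropy minimization (a common test-time adaptation baseline) with finite-difference gradients on toy parameters; the paper's actual objective and optimizer differ, and the function names here are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def tta_entropy_step(logits_fn, params, x, lr=0.1, eps=1e-4):
    """One test-time adaptation step: lower the mean prediction entropy
    on an unlabeled batch x by a finite-difference gradient-descent step
    on a small vector of adaptation parameters. No labels are used."""
    base = entropy(softmax(logits_fn(params, x))).mean()
    grads = np.zeros_like(params)
    for i in range(len(params)):
        p = params.copy()
        p[i] += eps
        grads[i] = (entropy(softmax(logits_fn(p, x))).mean() - base) / eps
    return params - lr * grads, base
```

In practice one would adapt e.g. normalization-layer parameters with backprop; the finite-difference loop just keeps the sketch model-agnostic.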
no code implementations • ICLR 2021 • Chaithanya Kumar Mummadi, Ranjitha Subramaniam, Robin Hutmacher, Julien Vitay, Volker Fischer, Jan Hendrik Metzen
We conclude that the data augmentation caused by style variation accounts for the improved corruption robustness, while the increased shape bias is only a byproduct.
no code implementations • ICLR 2021 • Jan Hendrik Metzen, Maksym Yatsura
Adversarial patches pose a realistic threat model for physical world attacks on autonomous systems via their perception component.
1 code implementation • 27 Jan 2021 • Jan Hendrik Metzen, Nicole Finnie, Robin Hutmacher
However, tailoring adversarial training to universal patches is computationally expensive since the optimal universal patch depends on the model weights which change during training.
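Why the optimal universal patch depends on the model weights becomes clear from how such a patch is found: it is optimized against the model over many images, so every weight update in principle invalidates it. The sketch below, with illustrative names and finite-difference gradients (usable only at toy sizes), shows one shared patch being optimized to shrink the classification margin across a whole batch; it is not the paper's training procedure.

```python
import numpy as np

def apply_patch(images, patch, loc):
    """Paste the same patch into every image at location loc=(y, x)."""
    out = images.copy()
    y, x = loc
    h, w = patch.shape
    out[:, y:y+h, x:x+w] = patch
    return out

def universal_patch_attack(images, labels, predict, patch_shape,
                           loc=(0, 0), steps=50, lr=0.5):
    """Optimize ONE patch that degrades accuracy on ALL images.
    `predict(imgs) -> (n, C) logits`."""
    patch = np.full(patch_shape, 0.5)        # start from mid-gray
    idx = np.arange(len(labels))

    def mean_margin(p):
        logits = predict(apply_patch(images, p, loc))
        correct = logits[idx, labels]
        rest = logits.copy()
        rest[idx, labels] = -np.inf
        return (correct - rest.max(axis=1)).mean()  # attack minimizes this

    eps = 1e-3
    for _ in range(steps):
        base = mean_margin(patch)
        grad = np.zeros_like(patch)
        for pix in np.ndindex(*patch.shape):  # finite differences: toy only
            p = patch.copy()
            p[pix] += eps
            grad[pix] = (mean_margin(p) - base) / eps
        patch = np.clip(patch - lr * grad, 0.0, 1.0)
    return patch
```

Note that `predict` is fixed inside the loop; under adversarial training the weights change every step, which is exactly what makes tailoring training to universal patches expensive.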
no code implementations • ECCV 2020 • Christoph Kamann, Burkhard Güssefeld, Robin Hutmacher, Jan Hendrik Metzen, Carsten Rother
Across our 16 different types of image corruption and 5 different network backbones, our approach outperforms training on clean data in 74% of cases.
no code implementations • 3 Oct 2020 • Sadaf Gulshad, Jan Hendrik Metzen, Arnold Smeulders
In this paper we aim to explore the general robustness of neural network classifiers by utilizing adversarial as well as natural perturbations.
2 code implementations • CVPR 2020 • Thomas Elsken, Benedikt Staffler, Jan Hendrik Metzen, Frank Hutter
The recent progress in neural architecture search (NAS) has allowed scaling the automated design of neural architectures to real-world domains, such as object detection and semantic segmentation.
1 code implementation • 15 Oct 2019 • Sadaf Gulshad, Zeynep Akata, Jan Hendrik Metzen, Arnold Smeulders
We study the changes in attributes for clean as well as adversarial images in both standard and adversarially robust networks.
1 code implementation • 17 Apr 2019 • Sadaf Gulshad, Jan Hendrik Metzen, Arnold Smeulders, Zeynep Akata
The vulnerability of deep computer vision systems to imperceptible, carefully crafted noise has raised questions regarding the robustness of their decisions.
no code implementations • ICCV 2019 • Chaithanya Kumar Mummadi, Thomas Brox, Jan Hendrik Metzen
Classifiers such as deep neural networks have been shown to be vulnerable to adversarial perturbations on problems with high-dimensional input space.
1 code implementation • 16 Aug 2018 • Thomas Elsken, Jan Hendrik Metzen, Frank Hutter
Deep Learning has enabled remarkable progress in recent years on a variety of tasks, such as image recognition, speech recognition, and machine translation.
4 code implementations • NeurIPS 2018 • Eric Wong, Frank R. Schmidt, Jan Hendrik Metzen, J. Zico Kolter
Recent work has developed methods for learning deep network classifiers that are provably robust to norm-bounded adversarial perturbation; however, these methods are currently only possible for relatively small feedforward networks.
no code implementations • ICLR 2019 • Thomas Elsken, Jan Hendrik Metzen, Frank Hutter
Neural Architecture Search aims at automatically finding neural architectures that are competitive with architectures designed by human experts.
no code implementations • ICLR 2018 • Jan Hendrik Metzen
While adversarial training improves the robustness of classifiers against such adversarial perturbations, it leaves classifiers sensitive to them on a non-negligible fraction of the inputs.
no code implementations • ICCV 2017 • Jan Hendrik Metzen, Mummadi Chaithanya Kumar, Thomas Brox, Volker Fischer
We show empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs.
no code implementations • 3 Mar 2017 • Volker Fischer, Mummadi Chaithanya Kumar, Jan Hendrik Metzen, Thomas Brox
Machine learning methods in general, and Deep Neural Networks in particular, have been shown to be vulnerable to adversarial perturbations.
1 code implementation • 14 Feb 2017 • Jan Hendrik Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff
In this work, we propose to augment deep neural networks with a small "detector" subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations.
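The detector idea can be sketched with a toy stand-in: a tiny logistic-regression "detector" trained to separate feature vectors of genuine inputs from those of perturbed inputs. In the paper the detector is a subnetwork operating on intermediate activations of the classifier; the synthetic features and function names below are assumptions for illustration only.

```python
import numpy as np

def train_detector(feats_clean, feats_adv, lr=0.5, epochs=200):
    """Train a logistic-regression detector: label 0 for features of
    genuine inputs, label 1 for features of adversarially perturbed ones."""
    X = np.vstack([feats_clean, feats_adv])
    y = np.concatenate([np.zeros(len(feats_clean)), np.ones(len(feats_adv))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        g = p - y                               # gradient of log-loss w.r.t. logit
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def detect(w, b, feats):
    """True where an input is flagged as adversarial."""
    return (feats @ w + b) > 0.0
```

The appeal of the approach is that the main classifier stays untouched: the detector is a small add-on that can veto suspicious inputs at inference time.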
1 code implementation • 2 Feb 2016 • Jan Hendrik Metzen
We propose minimum regret search (MRS), a novel acquisition function for Bayesian optimization.
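A heavily simplified Monte-Carlo illustration of the idea behind a regret-based acquisition function: query the candidate whose (fantasized) observation most reduces the expected simple regret of the final recommendation. This assumes an independent Gaussian posterior over a discrete candidate set and one-step lookahead; MRS as proposed in the paper works with a full GP posterior and differs in its exact formulation.

```python
import numpy as np

def expected_regret(mu, sigma, rng, n=500):
    """MC estimate of the simple regret of recommending argmax(mu), under
    an independent-Gaussian posterior over candidate function values."""
    f = rng.normal(mu, sigma, size=(n, len(mu)))
    return (f.max(axis=1) - f[:, np.argmax(mu)]).mean()

def regret_based_acquisition(mu, sigma, noise=0.1, n_fantasy=20, seed=0):
    """For each candidate: fantasize observing it, apply a conjugate
    Gaussian update at that point only, and measure the resulting
    expected regret. Query the candidate that lowers it most."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(mu))
    for i in range(len(mu)):
        vals = []
        for _ in range(n_fantasy):
            y = rng.normal(mu[i], sigma[i])          # fantasized observation
            prec = 1 / sigma[i] ** 2 + 1 / noise ** 2
            mu2, sig2 = mu.copy(), sigma.copy()
            mu2[i] = (mu[i] / sigma[i] ** 2 + y / noise ** 2) / prec
            sig2[i] = np.sqrt(1 / prec)
            vals.append(expected_regret(mu2, sig2, rng))
        scores[i] = np.mean(vals)
    return int(np.argmin(scores))
```

Unlike entropy- or improvement-based criteria, the score here directly targets the quantity one ultimately cares about: the regret of the point the optimizer will recommend.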
no code implementations • 13 Nov 2015 • Jan Hendrik Metzen
Contextual policy search allows adapting robotic movement primitives to different situations.