no code implementations • 29 Apr 2024 • Khaled Saab, Tao Tu, Wei-Hung Weng, Ryutaro Tanno, David Stutz, Ellery Wulczyn, Fan Zhang, Tim Strother, Chunjong Park, Elahe Vedadi, Juanma Zambrano Chaves, Szu-Yeu Hu, Mike Schaekermann, Aishwarya Kamath, Yong Cheng, David G. T. Barrett, Cathy Cheung, Basil Mustafa, Anil Palepu, Daniel McDuff, Le Hou, Tomer Golany, Luyang Liu, Jean-Baptiste Alayrac, Neil Houlsby, Nenad Tomasev, Jan Freyberg, Charles Lau, Jonas Kemp, Jeremy Lai, Shekoofeh Azizi, Kimberly Kanada, SiWai Man, Kavita Kulkarni, Ruoxi Sun, Siamak Shakeri, Luheng He, Ben Caine, Albert Webson, Natasha Latysheva, Melvin Johnson, Philip Mansfield, Jian Lu, Ehud Rivlin, Jesper Anderson, Bradley Green, Renee Wong, Jonathan Krause, Jonathon Shlens, Ewa Dominowska, S. M. Ali Eslami, Katherine Chou, Claire Cui, Oriol Vinyals, Koray Kavukcuoglu, James Manyika, Jeff Dean, Demis Hassabis, Yossi Matias, Dale Webster, Joelle Barral, Greg Corrado, Christopher Semturs, S. Sara Mahdavi, Juraj Gottweis, Alan Karthikesalingam, Vivek Natarajan
We evaluate Med-Gemini on 14 medical benchmarks, establishing new state-of-the-art (SoTA) performance on 10 of them and surpassing the GPT-4 model family on every benchmark where a direct comparison is viable, often by a wide margin.
1 code implementation • 16 Feb 2024 • Alireza Javanmardi, David Stutz, Eyke Hüllermeier
Credal sets are sets of probability distributions that are considered as candidates for an imprecisely known ground-truth distribution.
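The definition above can be made concrete with a toy sketch (the three candidate distributions and the lower/upper probability computation are illustrative, not from the paper):

```python
import numpy as np

# A toy credal set: a finite set of candidate distributions over three
# classes, each considered a plausible ground truth.
credal_set = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
])

# Lower and upper probabilities: the tightest per-class bounds that the
# credal set implies for the unknown ground-truth distribution.
lower = credal_set.min(axis=0)
upper = credal_set.max(axis=0)
```

Here any class probability is only known up to the interval `[lower, upper]`, which is what makes the ground truth "imprecisely known".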
1 code implementation • 12 Sep 2023 • Max Losch, David Stutz, Bernt Schiele, Mario Fritz
In this paper, we propose a Calibrated Lipschitz-Margin Loss (CLL) that addresses this issue and improves certified robustness by tackling two problems: Firstly, commonly used margin losses do not adjust their penalties to the shrinking output distribution caused by minimizing the Lipschitz constant $K$.
2 code implementations • 21 Aug 2023 • Leonard Berrada, Soham De, Judy Hanwen Shen, Jamie Hayes, Robert Stanforth, David Stutz, Pushmeet Kohli, Samuel L. Smith, Borja Balle
The poor performance of classifiers trained with DP has prevented the widespread adoption of privacy preserving machine learning in industry.
2 code implementations • 18 Jul 2023 • David Stutz, Abhijit Guha Roy, Tatiana Matejovicova, Patricia Strachan, Ali Taylan Cemgil, Arnaud Doucet
However, in many real-world scenarios, the labels $Y_1, \ldots, Y_n$ are obtained by aggregating expert opinions using a voting procedure, resulting in a one-hot distribution $\mathbb{P}_{\text{vote}}^{Y|X}$.
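A minimal sketch of why voting yields a one-hot distribution (the function name and label encoding are illustrative, not the paper's code):

```python
import numpy as np
from collections import Counter

def vote_aggregate(expert_labels, num_classes):
    """Majority vote over expert opinions: all probability mass
    collapses onto the winning class, discarding any disagreement
    among the experts."""
    winner = Counter(expert_labels).most_common(1)[0][0]
    p = np.zeros(num_classes)
    p[winner] = 1.0
    return p
```

Even when experts disagree, e.g. labels `[2, 2, 0]`, the aggregated distribution places all its mass on the majority class, which is exactly the information loss the paper's statistical aggregation framework avoids.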
1 code implementation • 5 Jul 2023 • David Stutz, Ali Taylan Cemgil, Abhijit Guha Roy, Tatiana Matejovicova, Melih Barsbey, Patricia Strachan, Mike Schaekermann, Jan Freyberg, Rajeev Rikhye, Beverly Freeman, Javier Perez Matos, Umesh Telang, Dale R. Webster, YuAn Liu, Greg S. Corrado, Yossi Matias, Pushmeet Kohli, Yun Liu, Arnaud Doucet, Alan Karthikesalingam
In contrast, we propose a framework where aggregation is done using a statistical model.
1 code implementation • ICCV 2023 • Yong Guo, David Stutz, Bernt Schiele
Interestingly, we observe that the attention mechanism of ViTs tends to rely on a few important tokens, a phenomenon we call token overfocusing.
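One simple way to quantify this tendency is to measure how much attention mass each query places on its single most-attended token; this proxy is illustrative and not necessarily the paper's exact metric:

```python
import numpy as np

def attention_concentration(attn):
    """Proxy for token overfocusing: the attention weight each query
    assigns to its most-attended token, averaged over queries.
    Rows of `attn` are attention distributions (they sum to 1);
    values near 1 mean the mechanism relies on very few tokens."""
    return float(attn.max(axis=-1).mean())

uniform = np.full((4, 4), 0.25)  # attention spread evenly over 4 tokens
focused = np.eye(4)              # each query fixates on a single token
```

Under this proxy, uniform attention scores 0.25 while fully overfocused attention scores 1.0.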
1 code implementation • CVPR 2023 • Yong Guo, David Stutz, Bernt Schiele
Despite their success, vision transformers still remain vulnerable to image corruptions, such as noise or blur.
no code implementations • 26 Apr 2022 • Nils Philipp Walter, David Stutz, Bernt Schiele
In order to shed light on the role of BN in adversarial training, we investigate to what extent the expressiveness of BN can be used to robustify fragile features in comparison to random features.
1 code implementation • 30 Jan 2022 • Yong Guo, David Stutz, Bernt Schiele
We show that EWS greatly improves both robustness against corrupted images as well as accuracy on clean data.
2 code implementations • ICLR 2022 • David Stutz, Krishnamurthy Dvijotham, Ali Taylan Cemgil, Arnaud Doucet
However, using CP as a separate processing step after training prevents the underlying model from adapting to the prediction of confidence sets.
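For context, the post-hoc baseline being contrasted here is standard split conformal prediction, sketched below (the function name and the choice of nonconformity score are illustrative assumptions):

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction as a separate post-training step:
    calibrate a score threshold on held-out data, then include every
    class whose score clears it."""
    n = len(cal_labels)
    # Nonconformity score: one minus the predicted probability of the
    # true class on the calibration set.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return [set(np.where(1.0 - p <= q)[0]) for p in test_probs]
```

Because the threshold `q` is computed only after training, the underlying model never receives feedback about the confidence sets it induces, which is the limitation the paper addresses.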
no code implementations • ICML Workshop AML 2021 • Iryna Korshunova, David Stutz, Alexander A. Alemi, Olivia Wiles, Sven Gowal
We study the adversarial robustness of information bottleneck models for classification.
1 code implementation • 16 Apr 2021 • David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
Moreover, we present a novel adversarial bit error attack and obtain robustness against both targeted and untargeted bit-level attacks.
no code implementations • ICCV 2021 • David Stutz, Matthias Hein, Bernt Schiele
To this end, we propose average- and worst-case metrics to measure flatness in the robust loss landscape and show a correlation between good robust generalization and flatness.
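The two kinds of metrics can be sketched as the mean and maximum loss increase over a bounded set of weight perturbations (random search stands in here for the inner maximization; the function name, sampling scheme, and parameters are illustrative):

```python
import numpy as np

def flatness(loss_fn, w, radius, trials=200, seed=0):
    """Average- and worst-case flatness sketches: the mean and the
    maximum increase in loss when the weights w are perturbed by
    random directions of fixed norm `radius`."""
    rng = np.random.default_rng(seed)
    base = loss_fn(w)
    increases = []
    for _ in range(trials):
        d = rng.normal(size=w.shape)
        d *= radius / np.linalg.norm(d)       # project onto the sphere
        increases.append(loss_fn(w + d) - base)
    return float(np.mean(increases)), float(np.max(increases))
```

Small values of both quantities indicate a flat (robust) loss landscape around `w`; a large worst-case value flags sharp directions that average-case sampling can miss.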
1 code implementation • 24 Jun 2020 • David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
Low-voltage operation of DNN accelerators can further reduce energy consumption significantly; however, it causes bit-level failures in the memory storing the quantized DNN weights.
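Such bit-level failures can be simulated by flipping each bit of the quantized weights independently with some probability; this sketch (function name and fault model are illustrative assumptions) shows the idea on uint8 weights:

```python
import numpy as np

def inject_bit_errors(qweights, p, seed=0):
    """Simulate low-voltage memory faults: flip each bit of the
    quantized (uint8) weights independently with probability p."""
    rng = np.random.default_rng(seed)
    bits = np.unpackbits(qweights)
    flips = (rng.random(bits.shape) < p).astype(np.uint8)
    return np.packbits(bits ^ flips)
```

With `p = 0` the weights pass through unchanged, while `p = 1` complements every bit; realistic low-voltage fault rates sit far closer to zero.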
1 code implementation • 5 May 2020 • Sukrut Rao, David Stutz, Bernt Schiele
Then, we apply adversarial training on these location-optimized adversarial patches and demonstrate significantly improved robustness on CIFAR10 and GTSRB.
3 code implementations • ICML 2020 • David Stutz, Matthias Hein, Bernt Schiele
Our confidence-calibrated adversarial training (CCAT) tackles this problem by biasing the model towards low confidence predictions on adversarial examples.
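The bias toward low confidence can be sketched as a target distribution that interpolates between the true label and the uniform distribution, decaying toward uniform as the perturbation grows (the function name and the hyperparameters `rho` and `eps` are illustrative, not the paper's exact values):

```python
import numpy as np

def ccat_target(one_hot, delta_norm, eps, rho=10.0):
    """Confidence-calibrated target: convex combination of the true
    one-hot label and the uniform distribution, weighted so that
    larger perturbations push the target toward uniform (i.e., low
    confidence)."""
    k = one_hot.shape[0]
    lam = (1.0 - min(1.0, delta_norm / eps)) ** rho
    return lam * one_hot + (1.0 - lam) * np.full(k, 1.0 / k)
```

An unperturbed input keeps its one-hot target, while a perturbation at the full budget `eps` is trained toward the uniform distribution, so adversarial examples can be rejected by thresholding confidence.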
no code implementations • 25 Sep 2019 • David Stutz, Matthias Hein, Bernt Schiele
Adversarial training is the standard to train models robust against adversarial examples.
2 code implementations • CVPR 2019 • David Stutz, Matthias Hein, Bernt Schiele
A recent hypothesis even states that models cannot be both robust and accurate, i.e., that adversarial robustness and generalization are conflicting goals.
1 code implementation • CVPR 2018 • David Stutz, Andreas Geiger
Learning-based approaches, in contrast, avoid the expensive optimization step and instead directly predict the complete shape from the incomplete observations using deep neural networks.
4 code implementations • 18 May 2018 • David Stutz, Andreas Geiger
We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics.
2 code implementations • 6 Dec 2016 • David Stutz, Alexander Hermans, Bastian Leibe
As such, and due to their quick adoption in a wide range of applications, appropriate benchmarks are crucial for algorithm selection and comparison.