no code implementations • 29 Nov 2023 • Shashank Agnihotri, Julia Grabinski, Margret Keuper
While aliasing and artifacts during downsampling can be reduced by blurring feature maps, upsampling instead requires fine details to emerge.
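As a minimal sketch of the blur-then-downsample idea mentioned above (a generic anti-aliased downsampling scheme, not necessarily the authors' exact method), a feature map can be low-pass filtered with a small binomial kernel before strided subsampling. The function name and kernel choice here are illustrative assumptions.

```python
import numpy as np

def blur_downsample(x, stride=2):
    """Hypothetical sketch: blur a 2D feature map with a 3x3 binomial
    kernel, then subsample with the given stride to reduce aliasing."""
    k = np.outer([1, 2, 1], [1, 2, 1]) / 16.0  # 3x3 binomial low-pass kernel
    h, w = x.shape
    padded = np.pad(x, 1, mode="edge")         # pad so output keeps input size
    blurred = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return blurred[::stride, ::stride]         # strided subsampling

fm = np.arange(16, dtype=float).reshape(4, 4)
out = blur_downsample(fm)
print(out.shape)  # (2, 2)
```

The blur removes high frequencies that would otherwise alias when every second sample is discarded.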
no code implementations • 25 Jul 2023 • Shashank Agnihotri, Kanchana Vaishnavi Gandikota, Julia Grabinski, Paramanand Chandramouli, Margret Keuper
We consider the recently proposed Restormer model, as well as NAFNet and the "Baseline network", which are both simplified versions of the Restormer.
no code implementations • 19 Jul 2023 • Julia Grabinski, Janis Keuper, Margret Keuper
Motivated by the recent trend towards the usage of larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be.
1 code implementation • 19 Jul 2023 • Julia Grabinski, Janis Keuper, Margret Keuper
Convolutional neural networks encode images through a sequence of convolutions, normalizations and non-linearities as well as downsampling operations into potentially strong semantic embeddings.
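The encoding sequence described above can be sketched as one encoder stage: convolution, followed by normalization, a non-linearity, and strided downsampling. This is a toy NumPy illustration of the generic pattern, with hypothetical function names; it is not the architecture from the paper.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' cross-correlation of a 2D map with a 2D kernel."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def encode_block(x, k):
    """One CNN encoder stage: convolution -> normalization -> ReLU
    -> stride-2 downsampling (illustrative sketch)."""
    y = conv2d_valid(x, k)
    y = (y - y.mean()) / (y.std() + 1e-5)  # simple feature normalization
    y = np.maximum(y, 0.0)                 # ReLU non-linearity
    return y[::2, ::2]                     # stride-2 downsampling

x = np.random.rand(8, 8)
k = np.ones((3, 3)) / 9.0
z = encode_block(x, k)
print(z.shape)  # (3, 3)
```

Stacking such stages shrinks spatial resolution while building increasingly semantic embeddings.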
1 code implementation • 12 Oct 2022 • Julia Grabinski, Paul Gavrikov, Janis Keuper, Margret Keuper
Further, our analysis of robust models shows that not only adversarial training (AT) but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences.
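Prediction confidence is commonly measured as the maximum softmax probability of the output logits. The sketch below illustrates that generic proxy; it is an assumption for illustration, not necessarily the exact confidence measure used in the paper.

```python
import numpy as np

def confidence(logits):
    """Maximum softmax probability as a generic proxy for a model's
    prediction confidence (illustrative, not the paper's metric)."""
    e = np.exp(logits - logits.max())  # numerically stable softmax
    p = e / e.sum()
    return p.max()

c = confidence(np.array([3.0, 1.0, 0.2]))
print(round(c, 3))
```

Comparing such confidences between correctly and incorrectly classified (or perturbed) inputs is one standard way to study how architectural choices affect calibration.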
1 code implementation • 1 Apr 2022 • Julia Grabinski, Steffen Jung, Janis Keuper, Margret Keuper
Over the last years, Convolutional Neural Networks (CNNs) have been the dominating neural architecture in a wide range of computer vision tasks.
no code implementations • AAAI Workshop AdvML 2022 • Julia Grabinski, Janis Keuper, Margret Keuper
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness.