Search Results for author: Julia Grabinski

Found 7 papers, 3 papers with code

Improving Stability during Upsampling -- on the Importance of Spatial Context

no code implementations • 29 Nov 2023 • Shashank Agnihotri, Julia Grabinski, Margret Keuper

While aliasing and artifacts can be reduced by blurring feature maps during downsampling, the emergence of fine details is crucial during upsampling.

Tasks: Disparity Estimation, Image Classification, +3
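The abstract snippet above alludes to a standard anti-aliasing trick: blur feature maps before subsampling so that high frequencies cannot alias. A minimal sketch of such blur-pooling (assuming PyTorch; the binomial kernel and function name are illustrative, not this paper's implementation):

```python
import torch
import torch.nn.functional as F

def blur_downsample(x, stride=2):
    """Blur feature maps with a fixed 3x3 binomial kernel, then subsample.

    Illustrative anti-aliasing sketch, not the paper's implementation.
    """
    k = torch.tensor([1.0, 2.0, 1.0])
    kernel = torch.outer(k, k)                   # 3x3 binomial blur
    kernel = (kernel / kernel.sum()).view(1, 1, 3, 3)
    c = x.shape[1]
    weight = kernel.repeat(c, 1, 1, 1)           # one depthwise filter per channel
    return F.conv2d(x, weight, stride=stride, padding=1, groups=c)

x = torch.randn(1, 64, 32, 32)
print(blur_downsample(x).shape)  # torch.Size([1, 64, 16, 16])
```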

As large as it gets: Learning infinitely large Filters via Neural Implicit Functions in the Fourier Domain

no code implementations • 19 Jul 2023 • Julia Grabinski, Janis Keuper, Margret Keuper

Motivated by the recent trend toward larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be.

Tasks: Image Classification
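The title points to parameterizing filters with a neural implicit function over frequency coordinates, so the effective filter can grow with the input instead of being a fixed kernel. A hypothetical sketch of such a parameterization (PyTorch; the class name, MLP architecture, and coordinate range are assumptions, not the paper's model):

```python
import torch
import torch.nn as nn

class ImplicitFourierFilter(nn.Module):
    """Hypothetical sketch: an MLP maps 2D frequency coordinates to
    filter values, decoupling filter size from a fixed kernel grid."""

    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, h, w):
        # Sample the implicit function on an h x w frequency grid.
        ys = torch.linspace(-1.0, 1.0, h)
        xs = torch.linspace(-1.0, 1.0, w)
        grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)
        return self.mlp(grid.reshape(-1, 2)).reshape(h, w)

filt = ImplicitFourierFilter()
print(filt(32, 32).shape)  # torch.Size([32, 32]) -- as large as the input
```

By the convolution theorem, multiplying an input's FFT pointwise with such a filter corresponds to a spatial convolution whose support can span the entire input.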

Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling

1 code implementation • 19 Jul 2023 • Julia Grabinski, Janis Keuper, Margret Keuper

Convolutional neural networks encode images into potentially strong semantic embeddings through a sequence of convolutions, normalizations, non-linearities, and downsampling operations.
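The encoding pipeline described in this snippet maps directly onto a standard CNN stage; a generic example (PyTorch), not this paper's architecture:

```python
import torch
import torch.nn as nn

# One typical CNN encoding stage: convolution, normalization,
# non-linearity, then spatial downsampling via strided pooling.
stage = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),  # the downsampling step that can alias
)

x = torch.randn(1, 3, 224, 224)
print(stage(x).shape)  # torch.Size([1, 64, 112, 112])
```

The final strided pooling is the downsampling operation that can introduce the aliasing and spectral artifacts named in the title.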

Robust Models are less Over-Confident

1 code implementation • 12 Oct 2022 • Julia Grabinski, Paul Gavrikov, Janis Keuper, Margret Keuper

Further, our analysis of robust models shows that not only adversarial training (AT) but also the model's building blocks (such as activation functions and pooling) strongly influence the models' prediction confidences.

Tasks: Adversarial Robustness
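The prediction confidences examined above are commonly measured as the maximum softmax probability; a minimal sketch of averaging it over a dataset (PyTorch; the function name and setup are illustrative, not the paper's evaluation code):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_confidence(model, loader, device="cpu"):
    """Average maximum softmax probability over a data loader."""
    model.eval()
    confs = []
    for x, _ in loader:
        probs = F.softmax(model(x.to(device)), dim=1)
        confs.append(probs.max(dim=1).values)  # confidence = top-1 probability
    return torch.cat(confs).mean().item()
```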

FrequencyLowCut Pooling -- Plug & Play against Catastrophic Overfitting

1 code implementation • 1 Apr 2022 • Julia Grabinski, Steffen Jung, Janis Keuper, Margret Keuper

In recent years, Convolutional Neural Networks (CNNs) have been the dominant neural architecture across a wide range of computer vision tasks.
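A rough sketch of the idea suggested by the title, downsampling by removing high-frequency components in the Fourier domain so that no frequencies remain that could alias (PyTorch; a simplified reading of the title, not the paper's exact implementation):

```python
import torch

def frequency_low_cut_pool(x):
    """Downsample by 2x by keeping only the central low-frequency
    part of the spectrum; high frequencies (and thus aliases) are removed."""
    h, w = x.shape[-2:]
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    low = spec[..., h // 4: h - h // 4, w // 4: w - w // 4]
    return torch.fft.ifft2(torch.fft.ifftshift(low, dim=(-2, -1))).real

x = torch.randn(1, 64, 32, 32)
print(frequency_low_cut_pool(x).shape)  # torch.Size([1, 64, 16, 16])
```

Because everything above the new Nyquist limit is discarded before subsampling, no aliasing can occur in the downsampled feature map.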

Aliasing coincides with CNNs vulnerability towards adversarial attacks

no code implementations • AAAI Workshop AdvML 2022 • Julia Grabinski, Janis Keuper, Margret Keuper

Many commonly used, well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness.
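Such input perturbations are typically crafted with gradient-based adversarial attacks; below is the classic one-step Fast Gradient Sign Method (FGSM) of Goodfellow et al. as a standard illustration (PyTorch), not a method from this paper:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb the input along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximizes the loss, clamped to valid pixels.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```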
