no code implementations • 13 Feb 2024 • Felix Petersen, Aashwin Mishra, Hilde Kuehne, Christian Borgelt, Oliver Deussen, Mikhail Yurochkin
We propose a new approach for propagating stable probability distributions through neural networks.
1 code implementation • 1 Dec 2023 • Walid Bousselham, Felix Petersen, Vittorio Ferrari, Hilde Kuehne
To leverage those capabilities, we propose a Grounding Everything Module (GEM) that generalizes the idea of value-value attention introduced by CLIPSurgery to a self-self attention path.
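The core idea — letting a single projection attend to itself instead of pairing queries with keys — can be illustrated with a minimal NumPy sketch. The function name and the single-projection setup are illustrative assumptions, not the paper's GEM implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_self_attention(x, w, tau=1.0):
    # One projection of the tokens attends to itself ("self-self"),
    # generalizing the value-value attention path; w is a hypothetical
    # projection matrix standing in for q/k/v projections.
    p = x @ w
    attn = softmax(p @ p.T / tau)
    return attn @ p

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))        # 5 tokens, dimension 8
w = rng.normal(size=(8, 8)) * 0.1
out = self_self_attention(tokens, w)
print(out.shape)  # → (5, 8)
```

Because the same projection serves as both sides of the attention, the attention map tends to stay on semantically similar tokens, which is the property GEM exploits for grounding.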
no code implementations • 25 May 2023 • Felix Petersen, Moritz Schubotz, Andre Greiner-Petter, Bela Gipp
We tackle the problem of neural machine translation of mathematical formulae between ambiguous presentation languages and unambiguous content languages.
1 code implementation • 1 May 2023 • Felix Petersen, Tobias Sutter, Christian Borgelt, Dongsung Huh, Hilde Kuehne, Yuekai Sun, Oliver Deussen
We present ISAAC (Input-baSed ApproximAte Curvature), a novel method that conditions the gradient using selected second-order information and has an asymptotically vanishing computational overhead, assuming a batch size smaller than the number of neurons.
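To give a flavor of gradient conditioning with input-based second-order information, here is a generic sketch that preconditions a linear layer's weight gradient with a damped second moment of its inputs. This is an assumption-laden simplification in the spirit of input-based curvature, not ISAAC's exact formulation:

```python
import numpy as np

def input_conditioned_grad(grad_w, x, lam=1.0):
    # grad_w: weight gradient of a linear layer, shape (d_out, d_in)
    # x: the layer's input batch, shape (batch, d_in)
    # Damped input second moment serves as a cheap curvature proxy.
    c = x.T @ x / len(x) + lam * np.eye(x.shape[1])
    return grad_w @ np.linalg.inv(c)

rng = np.random.default_rng(0)
g = rng.normal(size=(2, 3))
x = rng.normal(size=(16, 3))
print(input_conditioned_grad(g, x).shape)  # → (2, 3)
```

With zero inputs the preconditioner reduces to plain damping (the gradient is scaled by 1/lam), so the update degrades gracefully toward first-order SGD.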
1 code implementation • ICCV 2023 • Nina Shvetsova, Felix Petersen, Anna Kukleva, Bernt Schiele, Hilde Kuehne
Contrastive learning has become an important tool for learning representations from unlabeled data. It mainly relies on minimizing the distance between positive data pairs (e.g., views of the same image) and maximizing the distance between negative data pairs (e.g., views of different images).
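The pull-together/push-apart principle described above is commonly instantiated as an InfoNCE-style loss. The sketch below is a standard formulation, not this paper's specific contribution:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    # Cross-entropy over pairwise similarities: row i of z1 should match
    # row i of z2 (positive pair) against all other rows (negatives).
    # z1, z2 are assumed L2-normalized, shape (N, D).
    logits = z1 @ z2.T / tau
    m = logits.max(axis=1, keepdims=True)  # stable log-softmax
    log_prob = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
print(info_nce(z, z))  # identical views: loss near zero
```

Mismatching the pairing (e.g., shifting the second batch by one row) drives the loss up, since each positive similarity then looks like a negative.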
1 code implementation • 15 Oct 2022 • Felix Petersen, Christian Borgelt, Hilde Kuehne, Oliver Deussen
Recently, research has increasingly focused on developing efficient neural network architectures.
no code implementations • 1 Sep 2022 • Felix Petersen
While classic computer science algorithms are suited to the precise execution of exactly defined tasks, such as finding the shortest path in a large graph, neural networks learn from data to predict the most likely answer in more complex tasks, such as image classification, which cannot be reduced to an exact algorithm.
1 code implementation • 15 Jun 2022 • Felix Petersen, Hilde Kuehne, Christian Borgelt, Oliver Deussen
In this work, we relax this assumption and optimize the model for multiple values of k simultaneously instead of a single k. Leveraging recent advances in differentiable sorting and ranking, we propose a differentiable top-k cross-entropy classification loss.
Ranked #58 on Image Classification on ImageNet
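A much simpler smoothed top-k loss conveys the idea: treat "the true class is among the top k" as a soft event via a sigmoid on the margin to the k-th largest logit. This is a hypothetical stand-in, not the paper's sorting-network-based loss:

```python
import numpy as np

def soft_topk_loss(logits, y, k=5, temp=1.0):
    # Soft indicator of the true class y lying within the top-k logits;
    # the hard k-th-largest pick is a simplification of a fully
    # differentiable ranking relaxation.
    kth = np.sort(logits)[-k]
    p_in_topk = 1.0 / (1.0 + np.exp(-(logits[y] - kth) / temp))
    return -np.log(p_in_topk + 1e-12)

logits = np.array([3.0, 1.0, 0.5, -1.0, 2.0, 0.0])
print(soft_topk_loss(logits, y=0, k=3))  # true class inside top-3: low loss
print(soft_topk_loss(logits, y=3, k=3))  # true class outside top-3: high loss
```

Training against such a relaxation rewards the model for placing the correct class anywhere in the top k rather than strictly first.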
no code implementations • 1 May 2022 • Debarghya Mukherjee, Felix Petersen, Mikhail Yurochkin, Yuekai Sun
In this paper, we leverage this connection between algorithmic fairness and distribution shifts to show that algorithmic fairness interventions can help ML models overcome distribution shifts, and that domain adaptation methods (for overcoming distribution shifts) can mitigate algorithmic biases.
1 code implementation • CVPR 2022 • Felix Petersen, Bastian Goldluecke, Christian Borgelt, Oliver Deussen
In this work, we present and study a generalized family of differentiable renderers.
1 code implementation • ICLR 2022 • Felix Petersen, Christian Borgelt, Hilde Kuehne, Oliver Deussen
We introduce a family of sigmoid functions and prove that they produce differentiable sorting networks that are monotonic.
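The building block of a differentiable sorting network is a soft conditional swap, where a sigmoid of the element difference blends min and max. The sketch below shows the general idea with a logistic sigmoid; the paper's contribution is a family of sigmoids for which the resulting networks are provably monotonic:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_cswap(a, b, beta=8.0):
    # Differentiable conditional swap: s ≈ 1 if already ordered (a < b),
    # so (lo, hi) smoothly interpolates between keeping and swapping.
    s = sigmoid(beta * (b - a))
    lo = s * a + (1 - s) * b
    hi = s * b + (1 - s) * a
    return lo, hi

def soft_sort3(x, beta=8.0):
    # Three-element sorting network built from soft compare-swaps.
    a, b, c = x
    a, b = soft_cswap(a, b, beta)
    b, c = soft_cswap(b, c, beta)
    a, b = soft_cswap(a, b, beta)
    return np.array([a, b, c])

print(soft_sort3(np.array([2.0, -1.0, 0.5])))  # ≈ [-1.0, 0.5, 2.0]
```

As beta grows the relaxation approaches a hard sort; the monotonicity result guarantees that gradients never point in the wrong direction, which an arbitrary sigmoid does not ensure.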
1 code implementation • NeurIPS 2021 • Felix Petersen, Debarghya Mukherjee, Yuekai Sun, Mikhail Yurochkin
In this work, we propose general post-processing algorithms for individual fairness (IF).
no code implementations • 20 Oct 2021 • Felix Petersen, Bastian Goldluecke, Oliver Deussen, Hilde Kuehne
Recently introduced differentiable renderers can be leveraged to learn the 3D geometry of objects from 2D images, but those approaches require additional supervision to enable the renderer to produce an output that can be compared to the input image.
1 code implementation • NeurIPS 2021 • Felix Petersen, Christian Borgelt, Hilde Kuehne, Oliver Deussen
The integration of algorithmic components into neural architectures has recently gained increasing attention, as it allows training neural networks with new forms of supervision, such as ordering constraints or silhouettes, instead of ground-truth labels.
no code implementations • 29 Sep 2021 • Felix Petersen, Christian Borgelt, Hilde Kuehne, Oliver Deussen
We propose a sampling-free approximate formulation of Gaussian variational auto-encoders.
no code implementations • 29 Sep 2021 • Felix Petersen, Christian Borgelt, Mikhail Yurochkin, Hilde Kuehne, Oliver Deussen
We propose a new approach to propagating probability distributions through neural networks.
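For the linear layers of a network, propagating a Gaussian is exact, since an affine map of a Gaussian is again Gaussian. The sketch below shows this standard identity (the approximations in such approaches concern the nonlinear layers, which are not sketched here):

```python
import numpy as np

def propagate_linear(mu, cov, w, b):
    # N(mu, cov) pushed through y = W x + b stays Gaussian:
    # mean -> W mu + b, covariance -> W cov W^T.
    return w @ mu + b, w @ cov @ w.T

mu, cov = np.array([1.0, 2.0]), np.eye(2)
w, b = np.array([[1.0, 1.0], [1.0, -1.0]]), np.zeros(2)
m2, c2 = propagate_linear(mu, cov, w, b)
print(m2, c2)
```

Chaining such closed-form moment updates layer by layer yields an output distribution without any sampling.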
1 code implementation • 9 May 2021 • Felix Petersen, Christian Borgelt, Hilde Kuehne, Oliver Deussen
Sorting and ranking supervision is a method for training neural networks end-to-end based on ordering constraints.
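To make "training from ordering constraints" concrete, here is a simple logistic pairwise loss: only the ground-truth order of the samples is known, not their values, and score pairs that violate that order are penalized. This is a generic stand-in, not the paper's differentiable sorting-network relaxation:

```python
import numpy as np

def pairwise_order_loss(scores, order):
    # order lists sample indices from lowest to highest ground-truth
    # value; each violating score pair contributes a logistic penalty.
    total, pairs = 0.0, 0
    for i in range(len(order)):
        for j in range(i + 1, len(order)):
            diff = scores[order[j]] - scores[order[i]]
            total += np.log1p(np.exp(-diff))
            pairs += 1
    return total / pairs

scores = np.array([0.0, 1.0, 2.0])
print(pairwise_order_loss(scores, [0, 1, 2]))  # consistent order: low loss
print(pairwise_order_loss(scores, [2, 1, 0]))  # reversed order: high loss
```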
no code implementations • NeurIPS Workshop Deep_Invers 2019 • Felix Petersen, Christian Borgelt, Oliver Deussen
Artificial neural networks have revolutionized many areas of computer science in recent years, as they provide solutions to a number of previously unsolved problems.
no code implementations • 16 May 2019 • Felix Petersen, Christian Borgelt, Oliver Deussen
These networks integrate smooth versions of classic algorithms into the topology of neural networks.
no code implementations • 26 Mar 2019 • Felix Petersen, Amit H. Bermano, Oliver Deussen, Daniel Cohen-Or
The long-coveted task of reconstructing 3D geometry from images remains an open problem.
no code implementations • 10 Nov 2018 • Felix Petersen, Moritz Schubotz, Bela Gipp
We implemented the first translator for mathematical formulae based on recursive neural networks.