no code implementations • 1 Jun 2023 • Christian H. X. Ali Mehmeti-Göpel, Michael Wand
Using large learning rates (LRs) is analogous to applying an explicit solver to a stiff non-linear ODE, causing overshooting and vanishing gradients in lower layers after the first step.
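The stiffness analogy can be illustrated with a minimal sketch (an illustration, not the authors' code): explicit Euler on the stiff linear test ODE y' = -λy decays for small steps, but once the step size h satisfies hλ > 2 the update overshoots and diverges, much like a too-large learning rate.

```python
import numpy as np

# Explicit Euler on the stiff ODE y' = -lam * y.
# The exact solution decays monotonically, but the explicit update
# y <- (1 - h*lam) * y overshoots and diverges once h*lam > 2,
# mirroring the large-learning-rate analogy.
lam = 100.0   # stiffness constant
y0 = 1.0

def euler(h, steps):
    y = y0
    for _ in range(steps):
        y = y + h * (-lam * y)   # one explicit Euler step
    return y

stable = euler(h=0.005, steps=10)   # h*lam = 0.5 -> decays toward 0
unstable = euler(h=0.05, steps=10)  # h*lam = 5   -> oscillates and blows up
print(stable, unstable)
```

With h = 0.005 the iterate shrinks by a factor 0.5 per step; with h = 0.05 each step multiplies it by -4, so ten steps amplify it by roughly a million.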
no code implementations • CVPR 2023 • Ann-Christin Woerl, Jan Disselhoff, Michael Wand
In this paper, we examine the gradients of the logits of image-classification CNNs with respect to input pixel values.
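The quantity examined, d logit_c / d x, can be sketched on a toy model (an assumption for illustration, not the paper's network): a tiny linear+ReLU "classifier" in NumPy where the input-pixel gradient can be written down by hand and checked against a finite difference. For real CNNs one would obtain the same gradient via autograd.

```python
import numpy as np

# Toy model: 16-pixel "image" -> ReLU hidden layer -> 3 class logits.
rng = np.random.default_rng(0)
x = rng.normal(size=16)
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(3, 8))

h = np.maximum(W1 @ x, 0.0)       # ReLU hidden activations
logits = W2 @ h
c = int(np.argmax(logits))        # predicted class

# Backprop by hand: d logit_c / dx = W1^T (relu'(W1 x) * W2[c])
relu_mask = (W1 @ x > 0).astype(float)
grad = W1.T @ (relu_mask * W2[c])

# Finite-difference check on pixel 0.
eps = 1e-6
xp = x.copy()
xp[0] += eps
fd = ((W2 @ np.maximum(W1 @ xp, 0.0))[c] - logits[c]) / eps
print(abs(fd - grad[0]))          # should be tiny
```

The same per-pixel gradient map is what saliency-style analyses of image classifiers visualize.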
no code implementations • NeurIPS 2021 • Daniel Franzen, Michael Wand
Invariance under symmetry is an important problem in machine learning.
no code implementations • NeurIPS 2021 • David Hartmann, Sebastian Brodehl, Michael Wand
We consider the aspect of learning rate (LR-)scheduling in neural networks, which often significantly affects achievable training time and generalization performance.
no code implementations • 13 Jan 2021 • Marc Stieffenhofer, Tristan Bereau, Michael Wand
Switching between different levels of resolution is essential for multiscale modeling, but restoring details at higher resolution remains challenging.
Chemical Physics • Computational Physics
no code implementations • ICLR 2021 • Christian H.X. Ali Mehmeti-Göpel, David Hartmann, Michael Wand
In this paper, we apply harmonic distortion analysis to understand the effect of nonlinearities in the spectral domain.
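The effect being analyzed can be demonstrated with a short sketch (an illustration under simple assumptions, not the paper's code): feeding a pure sinusoid through a pointwise nonlinearity such as ReLU creates spectral energy at harmonics of the input frequency, which is exactly what harmonic distortion analysis measures.

```python
import numpy as np

# A pure sinusoid at frequency bin k, passed through ReLU.
n = 1024
t = np.arange(n)
k = 8
x = np.sin(2 * np.pi * k * t / n)
y = np.maximum(x, 0.0)            # pointwise nonlinearity

# Normalized magnitude spectrum. ReLU(sin) = (sin + |sin|)/2, so
# energy appears at DC, the fundamental k, and even harmonics 2k, 4k, ...
spec = np.abs(np.fft.rfft(y)) / n
print(spec[0], spec[k], spec[2 * k], spec[3 * k])
```

The fundamental survives at half amplitude, even harmonics appear with decaying weight, and odd harmonics above the fundamental vanish, so the nonlinearity's spectral fingerprint is directly visible.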
no code implementations • ICML Workshop LifelongML 2020 • Krsto Proroković, Michael Wand, Jürgen Schmidhuber
An EMG-based upper limb prosthesis relies on a statistical pattern recognition system to map the EMG signal of residual forearm muscles into the appropriate hand movements.
no code implementations • CVPR 2020 • Javier Grau Chopite, Matthias B. Hullin, Michael Wand, Julian Iseringhausen
We demonstrate that our feed-forward network, even though it is trained solely on synthetic data, generalizes to measured data from SPAD sensors and is able to obtain results that are competitive with model-based reconstruction methods.
1 code implementation • 3 Apr 2019 • David Hartmann, Michael Wand
By focusing computational attention using progressive sampling, we further reduce inference costs on ImageNet by up to 33% (before network pruning).
no code implementations • 30 Apr 2018 • Michael Wand, Ngoc Thang Vu, Juergen Schmidhuber
Audiovisual speech recognition (AVSR) is a method to alleviate the adverse effect of noise in the acoustic signal.
no code implementations • 4 Aug 2017 • Michael Wand, Juergen Schmidhuber
We present a lipreading system, i.e., a speech recognition system using only visual features, which uses domain-adversarial training to achieve speaker independence.
2 code implementations • 15 Apr 2016 • Chuan Li, Michael Wand
This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis.
no code implementations • 29 Jan 2016 • Michael Wand, Jan Koutník, Jürgen Schmidhuber
Lipreading, i.e., speech recognition from visual-only recordings of a speaker's face, can be achieved with a processing pipeline based solely on neural networks, yielding significantly better accuracy than conventional methods.
7 code implementations • CVPR 2016 • Chuan Li, Michael Wand
This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images.
no code implementations • 30 Aug 2013 • Alan Brunton, Michael Wand, Stefanie Wuhrer, Hans-Peter Seidel, Tino Weinkauf
In this paper, we introduce a new approach to partial, intrinsic isometric matching.