no code implementations • 22 Nov 2023 • Achref Jaziri, Sina Ditzel, Iuliia Pliushch, Visvanathan Ramesh
Our findings indicate that this form of inductive bias can be beneficial in closing the gap between models with local plasticity rules and backpropagation models, as well as in learning more robust representations in general.
no code implementations • 18 Sep 2023 • Achref Jaziri, Martin Mundt, Andres Fernandez Rodriguez, Visvanathan Ramesh
Identification of cracks is essential to assess the structural integrity of concrete infrastructure.
2 code implementations • 4 Jun 2021 • Timm Hess, Martin Mundt, Iuliia Pliushch, Visvanathan Ramesh
Several families of continual learning techniques have been proposed to alleviate catastrophic interference in deep neural network training on non-stationary data.
1 code implementation • 19 May 2021 • Iuliia Pliushch, Martin Mundt, Nicolas Lupp, Visvanathan Ramesh
Although a plethora of architectural variants for deep classification has been introduced over time, recent works have found empirical evidence of similarities in their training process.
1 code implementation • 14 Apr 2021 • Martin Mundt, Iuliia Pliushch, Visvanathan Ramesh
In this paper we analyze the classification performance of neural network structures without parametric inference.
no code implementations • 3 Sep 2020 • Martin Mundt, Yongwon Hong, Iuliia Pliushch, Visvanathan Ramesh
In this work we critically survey the literature and argue that notable lessons from open set recognition (identifying unknown examples outside of the observed set) and the adjacent field of active learning (querying data to maximize the expected performance gain) are frequently overlooked in the deep learning era.
no code implementations • 25 Feb 2020 • Neil A. Thacker, Carole J. Twining, Paul D. Tar, Scott Notley, Visvanathan Ramesh
Artificial Neural Networks (ANNs) implement a specific form of multivariate extrapolation and will generate an output for any input pattern, even when there is no similar training pattern.
no code implementations • 26 Aug 2019 • Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Visvanathan Ramesh
We present an analysis of predictive-uncertainty-based out-of-distribution detection for different approaches to estimating models' epistemic uncertainty, and contrast it with extreme-value-theory-based open set recognition.
3 code implementations • 28 May 2019 • Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Yongwon Hong, Visvanathan Ramesh
Modern deep neural networks are well known to be brittle in the face of unknown data instances, and recognizing the latter remains a challenge.
2 code implementations • CVPR 2019 • Martin Mundt, Sagnik Majumder, Sreenivas Murali, Panagiotis Panetsos, Visvanathan Ramesh
Recognition of defects in concrete infrastructure, especially in bridges, is a costly and time-consuming, yet crucial, first step in assessing structural integrity.
1 code implementation • 14 Dec 2018 • Martin Mundt, Sagnik Majumder, Tobias Weis, Visvanathan Ramesh
We characterize convolutional neural networks with respect to the relative amount of features per layer.
no code implementations • ICLR 2018 • Martin Mundt, Tobias Weis, Kishore Konda, Visvanathan Ramesh
Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of high amounts of features.
no code implementations • 18 May 2017 • Martin Mundt, Tobias Weis, Kishore Konda, Visvanathan Ramesh
Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of high amounts of features.
no code implementations • 31 May 2016 • V. S. R. Veeravasarapu, Constantin Rothkopf, Visvanathan Ramesh
The use of simulated virtual environments to train deep convolutional neural networks (CNNs) is a currently active practice for reducing the (real-)data hunger of deep CNN models, especially in application domains where large-scale real data and/or ground-truth acquisition is difficult or laborious.