no code implementations • 25 Jun 2021 • Vignesh Srinivasan, Nils Strodthoff, Jackie Ma, Alexander Binder, Klaus-Robert Müller, Wojciech Samek
Our results indicate that models initialized from ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
1 code implementation • 31 Aug 2020 • Vignesh Srinivasan, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima
Domain translation is the task of finding correspondence between two domains.
no code implementations • 11 Apr 2019 • Vignesh Srinivasan, Ercan E. Kuruoglu, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima
Many existing methods employ Gaussian random variables for exploring the data space to find the most adversarial (for attacking) or least adversarial (for defense) point.
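A minimal sketch of this Gaussian-exploration idea, using a toy quadratic stand-in for a model's loss surface (the function `gaussian_explore`, the `loss_fn` callback, and all parameter names are hypothetical, not from the paper):

```python
import numpy as np

def gaussian_explore(x, loss_fn, sigma=0.1, n_samples=100, mode="max", seed=0):
    """Sample Gaussian perturbations around x and return the sample with the
    highest loss (attack) or lowest loss (defense). Illustrative only."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    candidates = x + noise
    losses = np.array([loss_fn(c) for c in candidates])
    idx = losses.argmax() if mode == "max" else losses.argmin()
    return candidates[idx], losses[idx]

# Toy "loss": squared distance from the origin.
x0 = np.zeros(2)
adv, adv_loss = gaussian_explore(x0, lambda z: float(z @ z), mode="max")
safe, safe_loss = gaussian_explore(x0, lambda z: float(z @ z), mode="min")
```

In a real attack or defense, `loss_fn` would be the model's loss on the perturbed input rather than a closed-form function.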
2 code implementations • 5 Feb 2019 • Talmaj Marinč, Vignesh Srinivasan, Serhan Gül, Cornelius Hellge, Wojciech Samek
The advantages of our method are twofold: (a) the different-sized kernels extract different information from the image, which results in better reconstruction, and (b) kernel fusion ensures the extracted information is retained while maintaining computational efficiency.
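The multi-kernel idea can be illustrated with a minimal numpy sketch: apply kernels of several sizes to the same signal and fuse the responses by summation. The averaging kernels and the additive fusion here are hypothetical simplifications; the paper's method uses learned convolutional kernels in a CNN:

```python
import numpy as np

def multi_kernel_fuse(signal, kernel_sizes=(3, 5, 7)):
    """Apply averaging kernels of different sizes and fuse by summation.
    Each kernel size captures structure at a different scale; summation
    keeps all extracted responses in one output of the same length."""
    feats = [np.convolve(signal, np.ones(k) / k, mode="same")
             for k in kernel_sizes]
    return np.sum(feats, axis=0)

sig = np.sin(np.linspace(0, 2 * np.pi, 32))
fused = multi_kernel_fuse(sig)
```

Because every branch uses "same"-size convolution, the fused output keeps the input resolution, which is what makes the per-scale responses directly combinable.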
no code implementations • 30 May 2018 • Vignesh Srinivasan, Arturo Marban, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima
Adversarial attacks on deep learning models have compromised their performance considerably.
no code implementations • 22 May 2018 • Arturo Marban, Vignesh Srinivasan, Wojciech Samek, Josep Fernández, Alicia Casals
The results suggest that the force estimation quality is better when both the tool data and the video sequences are processed by the neural network model.
no code implementations • 11 Sep 2016 • Wikor Pronobis, Danny Panknin, Johannes Kirschnick, Vignesh Srinivasan, Wojciech Samek, Volker Markl, Manohar Kaul, Klaus-Robert Müller, Shinichi Nakajima
In this paper, we propose multiple purpose LSH (mp-LSH), which shares the hash codes for different dissimilarities.