
A Bayesian Approach to Invariant Deep Neural Networks

We propose a novel Bayesian neural network architecture that can learn invariances from data alone by inferring a posterior distribution over different weight-sharing schemes. We show that our model outperforms non-invariant architectures when trained on datasets that contain specific invariances, and that this advantage persists even when no data augmentation is performed.
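To make the idea concrete, below is a minimal sketch (not the authors' code) of a layer that holds two candidate weight-sharing schemes and a learned variational posterior over which scheme applies: an unconstrained weight matrix (no invariance) and a circulant, shared-weight matrix (shift invariance). The class name, the choice of schemes, and the use of a Gumbel-softmax relaxation with a KL term against a uniform prior are all illustrative assumptions, not details from the paper.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightSharingPosteriorLinear(nn.Module):
    """Toy layer with a learned posterior over two weight-sharing schemes."""

    def __init__(self, dim: int):
        super().__init__()
        # Scheme 0: fully unconstrained weights (no invariance imposed).
        self.free_weight = nn.Parameter(torch.randn(dim, dim) * 0.05)
        # Scheme 1: a single shared row, rolled into a circulant matrix,
        # so the layer commutes with cyclic shifts of its input.
        self.shared_row = nn.Parameter(torch.randn(dim) * 0.05)
        # Variational logits over the two weight-sharing schemes.
        self.scheme_logits = nn.Parameter(torch.zeros(2))

    def circulant(self) -> torch.Tensor:
        # Build the tied-weight (circulant) matrix by rolling the shared row.
        dim = self.shared_row.numel()
        rows = [torch.roll(self.shared_row, shifts=i) for i in range(dim)]
        return torch.stack(rows, dim=0)

    def forward(self, x: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
        # Sample a relaxed one-hot scheme indicator (Gumbel-softmax) so the
        # choice of weight sharing stays differentiable during training.
        probs = F.gumbel_softmax(self.scheme_logits, tau=temperature, hard=False)
        weight = probs[0] * self.free_weight + probs[1] * self.circulant()
        return x @ weight.t()

    def kl_to_uniform_prior(self) -> torch.Tensor:
        # KL between the categorical posterior over schemes and a uniform
        # prior; this term would be added to the usual ELBO objective.
        log_q = F.log_softmax(self.scheme_logits, dim=0)
        q = log_q.exp()
        log_p = torch.full_like(q, -math.log(2.0))
        return (q * (log_q - log_p)).sum()


if __name__ == "__main__":
    layer = WeightSharingPosteriorLinear(dim=8)
    x = torch.randn(4, 8)
    y = layer(x)
    print(y.shape, layer.kl_to_uniform_prior().item())
```

If the training data are truly shift-invariant, the posterior mass on the tied-weight scheme should grow, which is the general behaviour the abstract describes; the specific parameterisation above is only one way to realise it.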
