Search Results for author: Bart Keulen

Found 1 paper, 1 paper with code

Improving the Trainability of Deep Neural Networks through Layerwise Batch-Entropy Regularization

2 code implementations • 1 Aug 2022 • David Peer, Bart Keulen, Sebastian Stabinger, Justus Piater, Antonio Rodríguez-Sánchez

We show empirically that we can train a "vanilla" fully connected network and convolutional neural network -- no skip connections, batch normalization, dropout, or any other architectural tweak -- with 500 layers, simply by adding the batch-entropy regularization term to the loss function.
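The abstract's core idea — keeping the entropy of each layer's activations over a batch from collapsing — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's exact estimator: it assumes a Gaussian approximation for the per-neuron batch entropy (H = 0.5·ln(2πe·σ²)) and a hypothetical penalty weight; the actual regularizer in the paper may be formulated differently.

```python
import math

def batch_entropy(activations):
    """Differential entropy of one neuron's activations over a batch,
    under a Gaussian assumption: H = 0.5 * ln(2*pi*e * var).
    (Illustrative approximation, not the paper's exact estimator.)"""
    n = len(activations)
    mean = sum(activations) / n
    var = sum((a - mean) ** 2 for a in activations) / n
    var = max(var, 1e-12)  # clamp so constant activations don't hit log(0)
    return 0.5 * math.log(2 * math.pi * math.e * var)

def lbe_regularizer(layer_activations, weight=0.1):
    """Layerwise batch-entropy penalty to add to the task loss.
    `layer_activations` is a list of layers, each a list of per-sample
    scalar activations; `weight` is a hypothetical hyperparameter.
    The negative sign rewards layers whose batch entropy stays high."""
    entropies = [batch_entropy(acts) for acts in layer_activations]
    return -weight * sum(entropies) / len(entropies)
```

In a training loop the penalty would simply be summed into the objective, e.g. `total_loss = task_loss + lbe_regularizer(per_layer_acts)`, which matches the abstract's claim that trainability comes from "simply adding the batch-entropy regularization term to the loss function".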
