Multilinear Map Layer: Prediction Regularization by Structural Constraint

In this paper we propose and study a technique for imposing structural constraints on the output of a neural network, which can reduce the amount of computation and the number of parameters, in addition to improving prediction accuracy, when the output is known to approximately conform to a low-rankness prior. The technique proceeds by replacing the output layer of a neural network with a so-called MLM layer, which forces the output to be the result of some multilinear map, such as a hybrid Kronecker-dot product or a Kronecker tensor product. In particular, given an "autoencoder" model trained on the SVHN dataset, we can construct a new model with an MLM layer that achieves a 62\% reduction in the total number of parameters and a reduction of the $\ell_2$ reconstruction error from 0.088 to 0.004. Further experiments on other autoencoder variants trained on SVHN also demonstrate the efficacy of MLM layers.
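To illustrate the general idea, here is a minimal sketch of an output layer whose result is constrained to be a (batched) Kronecker product of two small learned factors. All names, shapes, and the plain Kronecker-product variant are illustrative assumptions for exposition, not the paper's exact formulation of the MLM layer.

```python
import torch
import torch.nn as nn

class KroneckerOutputLayer(nn.Module):
    """Produces an (h1*h2) x (w1*w2) output as kron(A, B) of two small factors.

    A dense layer mapping `in_features` directly to H*W outputs would need
    in_features * H * W weights; here only the two small factor heads are
    parameterized, which is much cheaper when h1*w1 and h2*w2 are small.
    """
    def __init__(self, in_features, h1, w1, h2, w2):
        super().__init__()
        self.h1, self.w1, self.h2, self.w2 = h1, w1, h2, w2
        self.factor_a = nn.Linear(in_features, h1 * w1)  # head for factor A
        self.factor_b = nn.Linear(in_features, h2 * w2)  # head for factor B

    def forward(self, x):
        # x: (batch, in_features)
        a = self.factor_a(x).view(-1, self.h1, self.w1)
        b = self.factor_b(x).view(-1, self.h2, self.w2)
        # Batched Kronecker product -> (batch, h1*h2, w1*w2)
        out = torch.einsum('bij,bkl->bikjl', a, b)
        return out.reshape(-1, self.h1 * self.h2, self.w1 * self.w2)

# Usage sketch: reconstruct a 32x32 map (SVHN-sized, single channel) from a
# 128-dimensional code using 8x8 and 4x4 factors (8 * 4 = 32 per side).
layer = KroneckerOutputLayer(128, 8, 8, 4, 4)
code = torch.randn(16, 128)
recon = layer(code)  # shape: (16, 32, 32)
```

The structural constraint is that the output lies in the set of Kronecker products of the two factors, which acts as the regularizer while shrinking the parameter count of the output layer.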
