Compressing LSTM Networks by Matrix Product Operators

22 Dec 2020 · Ze-Feng Gao, Xingwei Sun, Lan Gao, Junfeng Li, Zhong-Yi Lu

Long short-term memory (LSTM) models are building blocks of many state-of-the-art natural language processing (NLP) and speech enhancement (SE) algorithms. However, an LSTM model contains a large number of parameters, which makes training resource-intensive and inference computationally inefficient. Existing model compression methods (e.g., model pruning) discriminate only by the magnitude of the model parameters, ignoring how importance is actually distributed across the model. Here we introduce the matrix product operator (MPO) decomposition, which describes the local correlation of quantum states in quantum many-body physics, to represent the large parameter matrices of a neural network; the network can then be compressed by truncating the unimportant information in the weight matrices. Specifically, we propose an MPO-based neural network architecture to replace the LSTM model. This effective MPO representation both reduces the computational cost of training LSTM models and speeds up computation in the inference phase. In our experiments, we compare the MPO-LSTM compression model against the traditional LSTM model with pruning on sequence classification, sequence prediction, and speech enhancement tasks. The experimental results show that our proposed MPO-based neural network architecture significantly outperforms the pruning approach.
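To make the mechanism in the abstract concrete, here is a minimal NumPy sketch of an MPO (tensor-train) factorization of a single weight matrix via sequential truncated SVDs, plus a contraction back to a dense matrix to measure the approximation error. The function names, the choice of dimension factorization, and the single fixed bond dimension `max_bond` are illustrative assumptions, not the authors' implementation; in the paper the MPO cores form the network layer itself rather than being obtained by decomposing a pre-trained matrix.

```python
import numpy as np

def mpo_decompose(W, in_dims, out_dims, max_bond):
    """Factor W (prod(in_dims) x prod(out_dims)) into a chain of
    4-index MPO cores via sequential truncated SVDs."""
    n = len(in_dims)
    assert W.shape == (int(np.prod(in_dims)), int(np.prod(out_dims)))
    # Reshape to (i1,...,in, j1,...,jn), then interleave the index pairs
    # so each core owns one (input, output) pair: (i1, j1, i2, j2, ...).
    T = W.reshape(tuple(in_dims) + tuple(out_dims))
    perm = [a for k in range(n) for a in (k, n + k)]
    T = T.transpose(perm)

    cores, bond = [], 1
    for k in range(n - 1):
        # Split off the k-th index pair; truncating the SVD to `max_bond`
        # singular values is where unimportant information is discarded.
        T = T.reshape(bond * in_dims[k] * out_dims[k], -1)
        U, S, Vt = np.linalg.svd(T, full_matrices=False)
        r = min(max_bond, S.size)
        cores.append(U[:, :r].reshape(bond, in_dims[k], out_dims[k], r))
        T = S[:r, None] * Vt[:r]  # pass the remainder down the chain
        bond = r
    cores.append(T.reshape(bond, in_dims[-1], out_dims[-1], 1))
    return cores

def mpo_reconstruct(cores, in_dims, out_dims):
    """Contract the cores back into a dense matrix (for checking only)."""
    n = len(in_dims)
    T = cores[0]
    for c in cores[1:]:
        T = np.tensordot(T, c, axes=(-1, 0))
    T = np.squeeze(T, axis=(0, -1))  # drop the trivial boundary bonds
    # Reorder (i1, j1, ..., in, jn) -> (i1, ..., in, j1, ..., jn).
    perm = list(range(0, 2 * n, 2)) + list(range(1, 2 * n, 2))
    return T.transpose(perm).reshape(int(np.prod(in_dims)),
                                     int(np.prod(out_dims)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 64))        # stand-in for one gate matrix
    in_dims, out_dims = (4, 4, 4), (4, 4, 4)
    cores = mpo_decompose(W, in_dims, out_dims, max_bond=8)
    W_hat = mpo_reconstruct(cores, in_dims, out_dims)
    n_mpo = sum(c.size for c in cores)
    print(f"params: {W.size} dense vs {n_mpo} MPO, rel. error "
          f"{np.linalg.norm(W - W_hat) / np.linalg.norm(W):.3f}")
```

With these (assumed) settings the dense 64x64 matrix has 4096 parameters while the three MPO cores hold 1280; the bond dimension is the knob that trades compression against reconstruction error.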

Categories

Networking and Internet Architecture · Computational Physics · Quantum Physics