Attentive Language Models
In this paper, we extend Recurrent Neural Network Language Models (RNN-LMs) with an attention mechanism. We show that an "attentive" RNN-LM (with 11M parameters) achieves a better perplexity than larger RNN-LMs (with 66M parameters) and achieves performance comparable to an ensemble of 10 similarly sized RNN-LMs. We also show that an "attentive" RNN-LM needs less contextual information to achieve results similar to the state-of-the-art on the WikiText-2 dataset.
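To make the idea concrete, below is a minimal sketch (not the paper's exact formulation) of an "attentive" RNN-LM: an LSTM language model that attends over its own previous hidden states to form a context vector, which is combined with the current state before the output softmax. The layer sizes, the additive scoring function, and all names (e.g. `AttentiveRNNLM`) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn


class AttentiveRNNLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Additive (Bahdanau-style) attention scorer -- an assumption here.
        self.attn_query = nn.Linear(hidden_dim, hidden_dim)
        self.attn_key = nn.Linear(hidden_dim, hidden_dim)
        self.attn_score = nn.Linear(hidden_dim, 1)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        h, _ = self.lstm(self.embed(tokens))          # (batch, seq, hidden)
        batch, seq_len, hidden = h.shape
        logits = []
        for t in range(seq_len):
            query = h[:, t]                           # current hidden state
            keys = h[:, : t + 1]                      # states up to time t
            # Score each past state against the current one.
            scores = self.attn_score(torch.tanh(
                self.attn_query(query).unsqueeze(1) + self.attn_key(keys)
            )).squeeze(-1)                            # (batch, t+1)
            weights = torch.softmax(scores, dim=-1)
            context = (weights.unsqueeze(-1) * keys).sum(dim=1)
            # Combine current state and attention context for prediction.
            logits.append(self.out(torch.cat([query, context], dim=-1)))
        return torch.stack(logits, dim=1)             # (batch, seq, vocab)


# Usage: next-token distributions for a toy batch of token ids.
if __name__ == "__main__":
    model = AttentiveRNNLM(vocab_size=1000)
    x = torch.randint(0, 1000, (2, 20))
    print(model(x).shape)  # torch.Size([2, 20, 1000])
```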
IJCNLP 2017