BERT-hLSTMs: BERT and Hierarchical LSTMs for Visual Storytelling

3 Dec 2020  ·  Jing Su, Qingyun Dai, Frank Guerin, Mian Zhou

Visual storytelling is a creative and challenging task, aiming to automatically generate a story-like description for a sequence of images. The descriptions generated by previous visual storytelling approaches lack coherence because they use word-level sequence generation methods and do not adequately consider sentence-level dependencies. To tackle this problem, we propose a novel hierarchical visual storytelling framework which separately models sentence-level and word-level semantics. We use the transformer-based BERT to obtain embeddings for sentences and words. We then employ a hierarchical LSTM network: the bottom LSTM receives as input the sentence vector representation from BERT and learns the dependencies between the sentences corresponding to the images, while the top LSTM generates the corresponding word vector representations, taking its input from the bottom LSTM. Experimental results demonstrate that our model outperforms the most closely related baselines on the automatic evaluation metrics BLEU and CIDEr, and human evaluation further confirms the effectiveness of our method.
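The hierarchical decoder described in the abstract can be sketched roughly as follows in PyTorch. This is a minimal illustration only: the dimension sizes, module names, and the fusion of image features with the BERT sentence vectors are assumptions, not the authors' exact implementation, which is detailed in the paper.

```python
# Minimal sketch of a BERT-conditioned hierarchical LSTM decoder (illustrative only).
import torch
import torch.nn as nn


class HierarchicalLSTMDecoder(nn.Module):
    def __init__(self, bert_dim=768, img_dim=2048, hidden_dim=512,
                 vocab_size=10000, embed_dim=300):
        super().__init__()
        # Bottom (sentence-level) LSTM: consumes one BERT sentence vector per image
        # (here fused with an image feature, an assumed design choice) and models
        # sentence-to-sentence dependencies across the photo sequence.
        self.sentence_lstm = nn.LSTM(bert_dim + img_dim, hidden_dim, batch_first=True)
        # Top (word-level) LSTM: generates the words of each sentence, conditioned
        # on the corresponding sentence-level hidden state.
        self.word_lstm = nn.LSTM(embed_dim + hidden_dim, hidden_dim, batch_first=True)
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, sent_vecs, img_feats, word_ids):
        # sent_vecs: (batch, n_images, bert_dim)   BERT sentence embeddings
        # img_feats: (batch, n_images, img_dim)    CNN features of each image
        # word_ids:  (batch, n_images, max_len)    gold words (teacher forcing)
        topic, _ = self.sentence_lstm(torch.cat([sent_vecs, img_feats], dim=-1))
        b, n, t = word_ids.shape
        emb = self.word_embed(word_ids)                     # (b, n, t, embed_dim)
        ctx = topic.unsqueeze(2).expand(-1, -1, t, -1)      # broadcast sentence state over words
        h, _ = self.word_lstm(torch.cat([emb, ctx], dim=-1).view(b * n, t, -1))
        return self.out(h).view(b, n, t, -1)                # word logits per sentence


if __name__ == "__main__":
    model = HierarchicalLSTMDecoder()
    logits = model(torch.randn(2, 5, 768), torch.randn(2, 5, 2048),
                   torch.randint(0, 10000, (2, 5, 20)))
    print(logits.shape)  # torch.Size([2, 5, 20, 10000])
```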


Datasets

VIST (Visual Storytelling Dataset)

Results from the Paper


Ranked #21 on Visual Storytelling on VIST (CIDEr metric)

Task                 Dataset  Model        Metric  Value  Global Rank
Visual Storytelling  VIST     hLSTMs       CIDEr   7.98   #23
Visual Storytelling  VIST     BERT-hLSTMs  CIDEr   8.37   #21

Methods

BERT, LSTM