no code implementations • IJCNLP 2017 • Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios P. Spithourakis, Lucy Vanderwende
The popularity of image sharing on social media and the engagement it creates between users reflects the important role that visual context plays in everyday conversations.
1 code implementation • NAACL 2016 • Ting-Hao Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, Lucy Vanderwende, Michel Galley, Margaret Mitchell
We introduce the first dataset for sequential vision-to-language, and explore how this data may be used for the task of visual storytelling.
no code implementations • 6 Apr 2016 • Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen
We created a new corpus of ~50k five-sentence commonsense stories, ROCStories, to enable this evaluation.
2 code implementations • ACL 2016 • Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, Lucy Vanderwende
There has been an explosion of work in the vision & language community during the past few years, from image captioning to video transcription to answering questions about images.
no code implementations • EMNLP 2015 • Francis Ferraro, Nasrin Mostafazadeh, Ting-Hao Huang, Lucy Vanderwende, Jacob Devlin, Michel Galley, Margaret Mitchell
Integrating vision and language has long been a dream in work on artificial intelligence (AI).