
Discriminative Training: Learning to Describe Video with Sentences, from Video Described with Sentences

We present a method for learning word meanings from complex, realistic video clips by discriminatively training (DT) positive sentential labels against negative ones, and then using the trained word models to generate sentential descriptions of new video. This work is inspired by recent work that adopts a maximum likelihood (ML) framework to address the same problem using only positive sentential labels. Like the ML-based method, the new method automatically determines which words in a sentence correspond to which concepts in the video (i.e., grounds words to meanings) in a weakly supervised fashion. While DT and ML yield comparable results given sufficient training data, DT significantly outperforms ML on smaller training sets because it can exploit negative training labels to better constrain the learning problem.
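The contrast between the two objectives can be sketched informally as follows; the notation here (video clips $v_i$, sentences $s$, word-model parameters $\theta$, and negative-sentence sets $\mathcal{N}_i$) is illustrative, not taken from the paper. The ML objective fits word models to the positive video-sentence pairs alone, while a DT objective additionally pushes down the score of negative sentences for the same clip:

```latex
% ML: maximize the likelihood of each positive sentence s_i for its clip v_i
\theta_{\mathrm{ML}} = \arg\max_{\theta} \sum_{i} \log p(s_i \mid v_i; \theta)

% DT (sketch): maximize the score of the positive sentence relative to
% a set N_i of negative sentences labeled for the same clip
\theta_{\mathrm{DT}} = \arg\max_{\theta} \sum_{i} \log
  \frac{p(s_i \mid v_i; \theta)}
       {p(s_i \mid v_i; \theta) + \sum_{s' \in \mathcal{N}_i} p(s' \mid v_i; \theta)}
```

With little training data, the ML objective is satisfied by many parameter settings that also assign high likelihood to incorrect sentences; the DT denominator rules those settings out, which is one way to see why negatives help most when data is scarce.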
