Adaptive feature abstraction for translating video to language

Yunchen Pu, Zhe Gan, Lawrence Carin, Martin Renqiang Min

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A new model for video captioning is developed, using a deep three-dimensional Convolutional Neural Network (C3D) as an encoder for videos and a Recurrent Neural Network (RNN) as a decoder for captions. A novel attention mechanism with spatiotemporal alignment is employed to adaptively and sequentially focus on different layers of CNN features (levels of feature “abstraction”), as well as local spatiotemporal regions of the feature maps at each layer. The proposed approach is evaluated on the YouTube2Text benchmark. Experimental results demonstrate quantitatively the effectiveness of our proposed adaptive spatiotemporal feature abstraction for translating videos to sentences with rich semantic structures.
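The core idea described above is an attention mechanism that weights features from different CNN layers (levels of abstraction) according to the decoder's current state. A minimal sketch of that level-attention step is shown below; the function and parameter names are hypothetical, and for simplicity all per-layer features are assumed to be projected to a common dimension:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend_over_levels(features, h, W):
    """Illustrative attention over feature 'abstraction' levels.

    features: list of L per-layer feature vectors, each shape (d,)
              (assumed already projected to a shared dimension d)
    h: decoder (RNN) hidden state, shape (k,)
    W: list of L bilinear score matrices, each shape (d, k);
       these are hypothetical learned parameters
    """
    # Score each layer's features by compatibility with the decoder state.
    scores = np.array([f @ Wl @ h for f, Wl in zip(features, W)])
    alpha = softmax(scores)  # attention weights over abstraction levels
    # Context vector: attention-weighted sum of per-level features.
    context = sum(a * f for a, f in zip(alpha, features))
    return context, alpha
```

In the paper's full model this kind of weighting is applied both across layers and across local spatiotemporal regions within each layer's feature maps; the sketch above shows only the across-layer case.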
Original language: English (US)
Title of host publication: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings
Publisher: International Conference on Learning Representations, ICLR
State: Published - Jan 1 2019
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2021-02-09
