Efficient Video Grounding with Which-Where Reading Comprehension

Jialin Gao, Xin Sun, Bernard Ghanem, Xi Zhou, Shiming Ge

Research output: Contribution to journal › Article › peer-review

14 Scopus citations

Abstract

Video grounding aims to localize the temporal moment related to a given language description, which benefits many cross-modal content understanding applications such as visual question answering and sentence-to-video search. Existing approaches usually regress the temporal boundaries of an event described by a query sentence directly from the video sequence. This direct regression often faces a large decision space due to diverse target events and variable video durations, leading to inaccurate localization as well as inefficient grounding. This paper presents an efficient framework, termed “from which to where,” to facilitate video grounding. The core idea is to imitate the reading comprehension process to gradually narrow the decision space, whereby we decompose the direct regression into two steps. The “which” step first roughly selects a candidate area by evaluating which video segment in a predefined set is closest to the ground truth. To this end, we formulate this step as a multi-choice reading comprehension problem and propose a criterion to select the best-matched segment. In this way, the excessive decision space is effectively reduced. The “where” step then precisely regresses the temporal boundary of the selected video segment within the shrunken decision space. We introduce a triple-span representation for each candidate video segment that exploits the regional context for better boundary regression. The “which” and “where” steps can be combined into a unified framework and learned end to end, leading to an efficient video grounding system. Extensive experiments on the Charades-STA, ActivityNet-Captions, and TACoS benchmarks clearly demonstrate the effectiveness of our framework.
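The abstract outlines the two-step decomposition without implementation detail. The following is a minimal sketch of how such a which-then-where head might look, assuming pooled features for each predefined candidate segment (a left-context span, the segment itself, and a right-context span, loosely following the triple-span idea) plus a sentence-level query embedding; all module names, dimensions, and wiring here are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class WhichWhereGrounder(nn.Module):
    """Hypothetical sketch of a which-then-where grounding head.

    Assumes each of N predefined candidate segments is represented by a
    triple of pooled features (left context, segment span, right context)
    and that the query is a single sentence embedding. Details differ
    from the paper's actual model.
    """

    def __init__(self, dim):
        super().__init__()
        # "Which" step: score every candidate segment against the query,
        # cast as a multi-choice selection over the predefined set.
        self.which_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        # "Where" step: regress (start, end) offsets from the triple-span
        # representation of the chosen segment, conditioned on the query.
        self.where_head = nn.Sequential(
            nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, seg_feats, query):
        # seg_feats: (B, N, 3, D) -- left/inside/right pooled features per candidate
        # query:     (B, D)       -- sentence embedding
        B, N, _, D = seg_feats.shape
        inside = seg_feats[:, :, 1]                    # (B, N, D) span features
        q = query.unsqueeze(1).expand(-1, N, -1)       # (B, N, D)
        # Multi-choice scores over the candidate set.
        which_logits = self.which_head(
            torch.cat([inside, q], dim=-1)).squeeze(-1)  # (B, N)

        # Pick the best-matched segment, shrinking the decision space.
        best = which_logits.argmax(dim=-1)               # (B,)
        triple = seg_feats[torch.arange(B), best].flatten(1)  # (B, 3*D)
        # Refine the boundary within the selected segment's neighborhood.
        offsets = self.where_head(torch.cat([triple, query], dim=-1))  # (B, 2)
        return which_logits, best, offsets
```

In a setup like this, the “which” step would typically be trained with a cross-entropy loss over the candidate set (labeling the segment with the highest IoU against the ground truth as correct, per the paper's best-matched criterion), and the “where” step with a regression loss on the selected segment's boundary offsets, so both steps learn end to end.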
Original language: English (US)
Pages (from-to): 1-1
Number of pages: 1
Journal: IEEE Transactions on Circuits and Systems for Video Technology
State: Published - May 10, 2022

Bibliographical note

KAUST Repository Item: Exported on 2022-05-12
Acknowledgements: Supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research through the Visual Computing Center (VCC) funding, the Beijing Natural Science Foundation (19L2040), and the National Natural Science Foundation of China (61772513). We also thank CloudWalk Technology Co., Ltd. for their support.

ASJC Scopus subject areas

  • Media Technology
  • Electrical and Electronic Engineering
