Boundary-sensitive Pre-training for Temporal Localization in Videos

Mengmeng Xu, Juan-Manuel Perez-Rua, Victor Escorcia, Brais Martinez, Xiatian Zhu, Li Zhang, Bernard Ghanem, Tao Xiang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

28 Scopus citations


Many video analysis tasks require temporal localization to detect content changes. However, most existing models developed for these tasks are pre-trained on general video action classification tasks, because large-scale annotation of temporal boundaries in untrimmed videos is expensive and no suitable datasets exist that enable boundary-sensitive pre-training. In this paper, we investigate model pre-training for temporal localization for the first time by introducing a novel boundary-sensitive pretext (BSP) task. Instead of relying on costly manual annotations of temporal boundaries, we propose to synthesize temporal boundaries in existing video action classification datasets. By defining different ways of synthesizing boundaries, BSP can then be conducted in a self-supervised manner simply by classifying the boundary types. This enables the learning of video representations that transfer much better to downstream temporal localization tasks. Extensive experiments show that the proposed BSP is superior and complementary to existing action classification-based pre-training, achieving new state-of-the-art performance on several temporal localization tasks. Please visit our website for more details.
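The core idea of boundary synthesis can be illustrated with a minimal sketch: splice clips from two different videos so that the splice point becomes a synthetic temporal boundary, and use the boundary type as a free self-supervised label. The paper does not publish this exact code; the clip format, boundary types, and function name below are illustrative assumptions, not the authors' implementation.

```python
import random

def synthesize_boundary_clip(video_a, video_b, clip_len=8, rng=None):
    """Build a training clip with a synthetic temporal boundary.

    video_a, video_b: sequences of frames (any objects) drawn from an
    existing action classification dataset. If the two videos differ,
    the clip takes its first half from video_a and its second half from
    video_b, so the midpoint is a synthetic cross-video boundary.
    Returns (clip, boundary_type), where boundary_type is a class index
    for the self-supervised boundary-type classification pretext task
    (0 = no boundary, 1 = cross-video boundary; illustrative labels).
    """
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    half = clip_len // 2
    if video_a is video_b:
        # "No boundary" sample: one contiguous window from a single video.
        start = rng.randrange(len(video_a) - clip_len + 1)
        return list(video_a[start:start + clip_len]), 0
    # "Cross-video boundary": splice two different videos at the midpoint.
    sa = rng.randrange(len(video_a) - half + 1)
    sb = rng.randrange(len(video_b) - half + 1)
    return list(video_a[sa:sa + half]) + list(video_b[sb:sb + half]), 1
```

A pre-training batch would mix boundary types in known proportions and train the video backbone to classify them, requiring no manual boundary annotation.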
Original language: English (US)
Title of host publication: 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
State: Published - Oct 2021

Bibliographical note

KAUST Repository Item: Exported on 2022-04-05
Acknowledgements: This work was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research through the Visual Computing Center funding.


