TSP: Temporally-Sensitive Pretraining of Video Encoders for Localization Tasks

Humam Alwassel, Silvio Giancola, Bernard Ghanem

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

65 Scopus citations

Abstract

Due to the large memory footprint of untrimmed videos, current state-of-the-art video localization methods operate atop precomputed video clip features. These features are extracted from video encoders typically trained for trimmed action classification tasks, making such features not necessarily suitable for temporal localization. In this work, we propose a novel supervised pretraining paradigm for clip features that not only trains to classify activities but also considers background clips and global video information to improve temporal sensitivity. Extensive experiments show that using features trained with our novel pretraining strategy significantly improves the performance of recent state-of-the-art methods on three tasks: Temporal Action Localization, Action Proposal Generation, and Dense Video Captioning. We also show that our pretraining approach is effective across three encoder architectures and two pretraining datasets. We believe video feature encoding is an important building block for localization algorithms, and extracting temporally-sensitive features should be of paramount importance in building more accurate models. The code and pretrained models are available on our project website.
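The abstract describes a joint pretraining objective: classify the action of foreground clips while also using background clips and global video information to make features temporally sensitive. One plausible reading of that objective is a two-head setup, sketched below with numpy: an action-classification head applied to foreground clips and an inside/outside-action head applied to all clips, both fed the clip feature concatenated with a max-pooled global video feature. All dimensions, the linear heads, the label layout, and the pooling choice here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the correct class.
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

# Toy setup: 4 clips sampled from one untrimmed video, 512-d clip
# features, 10 action classes. Sizes are illustrative only.
n_clips, d, n_classes = 4, 512, 10
clip_feats = rng.standard_normal((n_clips, d))

# Global video feature: max-pool clip features over the whole video,
# giving each clip access to video-level context (an assumption here).
gvf = clip_feats.max(axis=0)

# Two randomly initialized linear heads for the sketch:
# 1) action classification, 2) inside/outside-action (background) classification.
W_action = rng.standard_normal((2 * d, n_classes)) * 0.01
W_region = rng.standard_normal((2 * d, 2)) * 0.01

# Each clip sees its own feature concatenated with the global one.
x = np.concatenate([clip_feats, np.tile(gvf, (n_clips, 1))], axis=1)

action_labels = np.array([3, 3, -1, -1])  # -1 marks background clips
region_labels = np.array([1, 1, 0, 0])    # 1 = inside an action, 0 = background

# Action loss only on foreground clips; region loss on every clip,
# which is what forces the encoder to tell background apart.
fg = action_labels >= 0
loss_action = cross_entropy(x[fg] @ W_action, action_labels[fg])
loss_region = cross_entropy(x @ W_region, region_labels)
loss = loss_action + loss_region
```

In a real training loop both losses would backpropagate into the video encoder producing `clip_feats`; the point of the sketch is only that background clips contribute supervision through the second head even though they have no action label.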

Original language: English (US)
Title of host publication: Proceedings - 2021 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3166-3176
Number of pages: 11
ISBN (Electronic): 9781665401913
State: Published - 2021
Event: 18th IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021 - Virtual, Online, Canada
Duration: Oct 11 2021 - Oct 17 2021

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
Volume: 2021-October
ISSN (Print): 1550-5499

Conference

Conference: 18th IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021
Country/Territory: Canada
City: Virtual, Online
Period: 10/11/21 - 10/17/21

Bibliographical note

Funding Information:
We present TSP, a novel temporally-sensitive supervised pretraining for video encoders, which not only trains to classify actions, but also considers background clips and global information to gain temporal sensitivity. We show that TSP features improve SOTA methods on the TAL, Proposals, and Dense-Captioning tasks. We argue TSP features can be preferred over other features to build more accurate models. Acknowledgments. This work is supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-CRG2017-3405.

Publisher Copyright:
© 2021 IEEE.

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
