Abstract
This paper addresses the problem of spatiotemporal localization of actions in videos. Whereas leading approaches all learn to localize from carefully annotated boxes on training video frames, we pursue a solution that requires only video class labels. We introduce an actor-supervised architecture that exploits the inherent compositionality of actions, in terms of transformations of actors, to localize actions. We make two contributions. First, we propose actor proposals derived from an image-based detector for human and non-human actors, which are linked over time by Siamese similarity matching to account for actor deformations. Second, we propose an actor-based attention mechanism that enables localization from action class labels and actor proposals alone; it exploits a new actor pooling operation and is end-to-end trainable. Experiments on four action datasets show that actor supervision is state-of-the-art for action localization from video class labels and even competitive with some box-supervised alternatives.
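The abstract names the two mechanisms without detail. As a rough illustration only (not the authors' code), the PyTorch sketch below shows one plausible reading of "actor pooling" plus attention over linked actor proposals: ROI features pooled along each actor tube, then a soft attention over tubes so that training needs nothing beyond a video-level class label. The module name `ActorAttentionHead`, all tensor shapes, and the choice of `roi_align` with temporal mean-pooling are assumptions, not details from the paper.

```python
# Hypothetical sketch of actor pooling + actor attention under the
# assumptions stated above; the real architecture may differ.
import torch
import torch.nn as nn
from torchvision.ops import roi_align


class ActorAttentionHead(nn.Module):
    def __init__(self, feat_dim: int, num_actions: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)            # attention logit per actor tube
        self.classify = nn.Linear(feat_dim, num_actions)

    def forward(self, frame_feats: torch.Tensor, tubes: torch.Tensor):
        """
        frame_feats: (T, C, H, W) conv features for T frames.
        tubes: (N, T, 4) boxes (x1, y1, x2, y2) of N actor proposals linked
               over time, in feature-map coordinates.
        """
        N, T, _ = tubes.shape
        # Actor pooling: ROI-align each actor's box on every frame
        # (frames act as the batch dimension), then average over time
        # to obtain one descriptor per actor tube.
        batch_idx = torch.arange(T, dtype=tubes.dtype).repeat_interleave(N)
        rois = torch.cat(
            [batch_idx[:, None], tubes.transpose(0, 1).reshape(T * N, 4)], dim=1
        )                                                      # (T*N, 5)
        pooled = roi_align(frame_feats, rois, output_size=(1, 1))  # (T*N, C, 1, 1)
        actor_feats = pooled.reshape(T, N, -1).mean(dim=0)         # (N, C)
        # Actor attention: weight each tube, classify the weighted sum.
        alpha = torch.softmax(self.score(actor_feats), dim=0)      # (N, 1)
        video_feat = (alpha * actor_feats).sum(dim=0)              # (C,)
        logits = self.classify(video_feat)  # trained with video labels only
        return logits, alpha.squeeze(1)
```

In this weakly supervised reading, the classification loss on `logits` is the only training signal, and at test time the attention weights `alpha` would plausibly be used to select which actor tube(s) to report as the localized action.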
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 102886 |
| Journal | Computer Vision and Image Understanding |
| Volume | 192 |
| DOIs | |
| State | Published - Dec 9 2019 |
Bibliographical note
KAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): OSR-CRG2017-3405
Acknowledgements: This publication is based upon work supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-CRG2017-3405. We thank the members of the IVUL team at KAUST and of Qualcomm AI Research for helpful comments and discussion. In particular, we appreciate the support of Amirhossein Habibian during the implementation of our Actor Linking.