APES: Audiovisual Person Search in Untrimmed Video

Juan Leon Alcazar, Fabian Caba Heilbron, Long Mai, Federico Perazzi, Joon-Young Lee, Pablo Arbelaez, Bernard Ghanem

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution


Abstract

Humans are arguably among the most important subjects in video streams, and many real-world applications, such as video summarization and video editing workflows, require the automatic search and retrieval of a person of interest. Despite tremendous efforts in the person re-identification and retrieval domains, few works have developed audiovisual search strategies. In this paper, we present the Audiovisual Person Search dataset (APES), a new dataset composed of untrimmed videos whose audio (voices) and visual (faces) streams are densely annotated. APES contains over 1.9K identities labeled across 36 hours of video, making it the largest dataset available for untrimmed audiovisual person search. A key property of APES is that it includes dense temporal annotations that link faces to speech segments of the same identity. To showcase the potential of our new dataset, we propose an audiovisual baseline and benchmark for person retrieval. Our study shows that modeling audiovisual cues benefits the recognition of people's identities.
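The abstract describes an audiovisual baseline that combines face and voice cues for person retrieval. As a rough illustration of how such a retrieval step could look (a minimal sketch of generic late fusion, not necessarily the paper's exact method; all names, dimensions, and features below are hypothetical), the Python snippet normalizes per-modality embeddings, concatenates them into a joint identity embedding, and ranks gallery segments by cosine similarity to a query.

import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Scale rows to unit length so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def fuse(face_emb, voice_emb):
    # Hypothetical late fusion: normalize each modality, concatenate,
    # then renormalize the joint audiovisual embedding.
    joint = np.concatenate([l2_normalize(face_emb), l2_normalize(voice_emb)], axis=-1)
    return l2_normalize(joint)

def rank_gallery(query, gallery):
    # Rank gallery segments by cosine similarity to the query identity.
    sims = gallery @ query
    return np.argsort(-sims), sims

# Toy example: random features stand in for real face/voice encoder outputs.
rng = np.random.default_rng(0)
gallery = fuse(rng.normal(size=(5, 128)), rng.normal(size=(5, 64)))
query = fuse(rng.normal(size=(1, 128)), rng.normal(size=(1, 64)))[0]
order, sims = rank_gallery(query, gallery)
print("ranked segment indices:", order)

Concatenation followed by renormalization is one common fusion choice; averaging modality similarities or learning a joint projection are equally plausible alternatives under the same retrieval setup.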
Original language: English (US)
Title of host publication: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Publisher: IEEE
DOIs
State: Published - Jun 2021

Bibliographical note

KAUST Repository Item: Exported on 2021-09-14
