TY - GEN
T1 - APES: Audiovisual Person Search in Untrimmed Video
AU - Alcazar, Juan Leon
AU - Heilbron, Fabian Caba
AU - Mai, Long
AU - Perazzi, Federico
AU - Lee, Joon-Young
AU - Arbelaez, Pablo
AU - Ghanem, Bernard
N1 - KAUST Repository Item: Exported on 2021-09-14
PY - 2021/6
Y1 - 2021/6
N2 - Humans are arguably one of the most important subjects in video streams; many real-world applications, such as video summarization or video editing workflows, often require the automatic search and retrieval of a person of interest. Despite tremendous efforts in the person re-identification and retrieval domains, few works have developed audiovisual search strategies. In this paper, we present the Audiovisual Person Search dataset (APES), a new dataset composed of untrimmed videos whose audio (voices) and visual (faces) streams are densely annotated. APES contains over 1.9K identities labeled across 36 hours of video, making it the largest dataset available for untrimmed audiovisual person search. A key property of APES is that it includes dense temporal annotations that link faces to speech segments of the same identity. To showcase the potential of our new dataset, we propose an audiovisual baseline and benchmark for person retrieval. Our study shows that modeling audiovisual cues benefits the recognition of people’s identities.
AB - Humans are arguably one of the most important subjects in video streams; many real-world applications, such as video summarization or video editing workflows, often require the automatic search and retrieval of a person of interest. Despite tremendous efforts in the person re-identification and retrieval domains, few works have developed audiovisual search strategies. In this paper, we present the Audiovisual Person Search dataset (APES), a new dataset composed of untrimmed videos whose audio (voices) and visual (faces) streams are densely annotated. APES contains over 1.9K identities labeled across 36 hours of video, making it the largest dataset available for untrimmed audiovisual person search. A key property of APES is that it includes dense temporal annotations that link faces to speech segments of the same identity. To showcase the potential of our new dataset, we propose an audiovisual baseline and benchmark for person retrieval. Our study shows that modeling audiovisual cues benefits the recognition of people’s identities.
UR - http://hdl.handle.net/10754/669426
UR - https://ieeexplore.ieee.org/document/9523077/
U2 - 10.1109/cvprw53098.2021.00188
DO - 10.1109/cvprw53098.2021.00188
M3 - Conference contribution
BT - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
PB - IEEE
ER -