Abstract
Visible–infrared person re-identification (VI-ReID) is an important and practical task for full-time intelligent surveillance systems. It is more challenging than visible-only person re-identification due to the large cross-modal discrepancy. Existing VI-ReID methods suffer from the heterogeneous structures and different spectra of visible and infrared images. In this work, we propose the Spectrum-Insensitive Data Augmentation (SIDA) strategy, which effectively alleviates disturbances in the visible and infrared spectra and forces the network to learn spectrum-irrelevant features. The network also compares samples using both global and local features. We devise a Feature Relation Reasoning (FRR) module that learns discriminative fine-grained representations according to the graph reasoning principle. Compared with the commonly used uniform partition, FRR adapts better to VI-ReID, where human bodies are difficult to align. Furthermore, we design a dual center loss for learning the global feature, which maintains intra-modality relations while learning cross-modal similarities and yields better convergence in training. Extensive experiments demonstrate that our method achieves state-of-the-art performance on two visible–infrared cross-modal Re-ID datasets.
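The abstract does not give the exact formulation of the dual center loss, but the idea of pulling samples toward per-identity centers in each modality while closing the cross-modal gap can be sketched generically. The function below is a minimal illustrative sketch, not the paper's actual loss: the modality encoding (0 = visible, 1 = infrared), the use of mean-feature centers, and the hinge `margin` are all assumptions made for this example.

```python
import numpy as np

def dual_center_loss(feats, labels, modalities, margin=0.3):
    """Illustrative dual-center-style loss (assumed form, not the paper's).

    For each identity, one feature center is computed per modality.
    Each sample is pulled toward its own intra-modality center (keeping
    intra-modality relations compact) and toward the same identity's
    center in the other modality (learning cross-modal similarity),
    with a hinge so small cross-modal gaps are not over-penalized.
    """
    loss, count = 0.0, 0
    for pid in np.unique(labels):
        for m in (0, 1):  # 0 = visible, 1 = infrared (assumed encoding)
            own = feats[(labels == pid) & (modalities == m)]
            other = feats[(labels == pid) & (modalities == 1 - m)]
            if len(own) == 0 or len(other) == 0:
                continue
            c_own = own.mean(axis=0)      # intra-modality center
            c_other = other.mean(axis=0)  # cross-modal center
            intra = np.linalg.norm(own - c_own, axis=1).mean()
            cross = np.linalg.norm(own - c_other, axis=1).mean()
            loss += intra + max(cross - margin, 0.0)
            count += 1
    return loss / max(count, 1)
```

When visible and infrared features of the same identity already coincide, the loss is zero; any intra-modality scatter or cross-modal gap beyond the margin increases it.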
Original language | English (US) |
---|---|
Pages (from-to) | 103703 |
Journal | Computer Vision and Image Understanding |
Volume | 232 |
DOIs | |
State | Published - Apr 26 2023 |
Bibliographical note
KAUST Repository Item: Exported on 2023-05-02
Acknowledgements: This work was supported by the National Natural Science Foundation of China under Grant 61902027.
ASJC Scopus subject areas
- Signal Processing
- Software
- Computer Vision and Pattern Recognition