Infrared-visible cross-modality person re-identification (IV-ReID) is a challenging task that aims to match infrared person images with visible person images of the same identity. Images of the two modalities are captured by visible cameras and infrared cameras, respectively. Because of the large discrepancy between the two modalities, most existing methods extract features common to both modalities through a shared network. However, by ignoring single-modality features, extracting only common features discards part of the modality-specific information. To address this problem, we propose an end-to-end model, the multi-complement feature network (MFN), which complements common features with single-modality features. MFN comprises two modules: a feature extracting module (FEM) and a feature complementing module (FCM). In the FEM, we employ a two-stream network with a multi-granularity architecture to extract single-modality features and common features. In the FCM, we exploit a graph convolutional network (GCN) to associate the multiple features of the different modalities, designing a concise but effective graph structure that takes the features extracted by the FEM as the input to the GCN. Compared with previous methods, ours retains single-modality features and makes them work together with common features. Extensive experiments on two mainstream IV-ReID datasets, SYSU-MM01 and RegDB, demonstrate that our method achieves state-of-the-art performance.
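The FCM's core operation, graph convolution over a small graph of feature nodes, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the toy adjacency matrix (linking two hypothetical single-modality feature nodes to two common-feature nodes), the feature dimension, and the random weights are all illustrative assumptions; only the standard GCN propagation rule is taken as given.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation

# Toy graph: 4 feature nodes -- say, two single-modality features
# (nodes 0-1) and two common features (nodes 2-3) -- with edges
# connecting single-modality nodes to common-feature nodes so that
# information propagates between the two groups.
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]], dtype=float)

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))   # 4 nodes with 8-dim features (assumed size)
W = rng.standard_normal((8, 8))   # weight matrix (random stand-in for learned weights)

H_out = gcn_layer(A, H, W)
print(H_out.shape)  # (4, 8): each node now mixes in the other group's features
```

After one propagation step, each common-feature node has aggregated information from the single-modality nodes (and vice versa), which is the mechanism the FCM relies on to let the two kinds of features complement each other.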