Performance of Person Re-Identification Algorithms on Public Datasets, 2017–2018
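
All tables below use the two standard re-ID evaluation metrics: rank-k accuracy taken from the cumulative matching characteristic (CMC) curve, and mean average precision (mAP). Blank cells mean the corresponding paper did not report that value. The sketch below is not from the article: the variable names and the simplified protocol are assumptions (standard benchmarks such as Market-1501 additionally exclude same-camera and junk gallery images), but it illustrates how rank-k and mAP are typically computed from a query-to-gallery distance matrix.

```python
# Minimal sketch (assumption, not the article's code): CMC rank-k and mAP from a
# query-to-gallery distance matrix. Real protocols also filter same-camera and
# junk gallery images, which is omitted here for brevity.
import numpy as np

def evaluate(dist, q_ids, g_ids, ks=(1, 5, 10, 20)):
    """dist: (num_query, num_gallery) distances; q_ids, g_ids: person IDs."""
    num_q = dist.shape[0]
    cmc_hits = np.zeros(len(ks))
    average_precisions = []
    for i in range(num_q):
        order = np.argsort(dist[i])            # gallery indices, nearest first
        matches = g_ids[order] == q_ids[i]     # True where the person ID matches
        if not matches.any():
            continue                           # skip queries with no gallery match
        first_hit = np.argmax(matches)         # 0-based rank of first correct match
        for j, k in enumerate(ks):
            cmc_hits[j] += first_hit < k       # rank-k: a correct match within top k
        hit_positions = np.where(matches)[0]
        # precision at each correct-match position, averaged -> AP for this query
        precisions = np.arange(1, len(hit_positions) + 1) / (hit_positions + 1)
        average_precisions.append(precisions.mean())
    return cmc_hits / num_q * 100.0, np.mean(average_precisions) * 100.0

# Toy example: 2 query images against a 4-image gallery.
dist = np.array([[0.2, 0.9, 0.4, 0.8],
                 [0.7, 0.1, 0.6, 0.3]])
ranks, mAP = evaluate(dist, q_ids=np.array([1, 2]), g_ids=np.array([1, 3, 1, 2]))
print("rank-1/5/10/20:", ranks, "mAP:", mAP)   # -> [ 50. 100. 100. 100.] 75.0
```
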
Market-1501

| Method | Time | Single Query rank-1 | Single Query mAP | Multi-Query rank-1 | Multi-Query mAP |
| --- | --- | --- | --- | --- | --- |
| Verif-Identif. + LSRO [1] | 2017 ICCV | 83.97 | 66.07 | 88.42 | 76.10 |
| Basel. + LSRO [1] | 2017 ICCV | 78.06 | 56.23 | 85.12 | 68.52 |
| SVDNet(C) [2] | 2017 ICCV | 80.5 | 55.9 | | |
| SVDNet(R,1024-dim) [2] | 2017 ICCV | 82.3 | 62.1 | | |
| M-net [3] | 2017 ICCV | 73.1 | | | |
| HP-net [3] | 2017 ICCV | 76.9 | | | |
| CADL [7] | 2017 CVPR | 73.84 | 47.11 | 80.85 | 55.58 |
| Fusion [9] | 2017 CVPR | 80.31 | 57.53 | 86.79 | 66.70 |
| SSM [10] | 2017 CVPR | 82.21 | 68.80 | 88.18 | 76.18 |
| Spindle [12] | 2017 CVPR | 76.9 | | | |
| DeepAlign. [13] | 2017 ICCV | 81.0 | 63.4 | | |
| Zhong et al. [14] | 2017 CVPR | 77.11 | 63.63 | | |
| TriNet (Re-ranked) [15] | 2017 ICCV | 86.67 | 81.07 | 91.75 | 87.18 |
| PDC [16] | 2017 ICCV | 84.14 | 63.41 | | |
| DPFL [17] | 2017 ICCV | 88.6 | 72.6 | 92.2 | 80.4 |
| DPFL(2+) [17] | 2017 ICCV | 88.9 | 73.1 | 92.3 | 80.7 |
| PSE [18] | 2018 CVPR | 87.7 | 69.0 | | |
| PSE+ECN (rank-dist) [18] | 2018 CVPR | 90.3 | 84.0 | | |
| IDE∗+CamStyle+RE [19] | 2018 CVPR | 89.49 | 71.55 | | |
| MobileNet+DML [20] | 2018 CVPR | 87.73 | 68.83 | 91.66 | 77.14 |
| Resnet50-res5c (multi-scale) + DSR (fine-tuning model) [21] | 2018 CVPR | 83.58 | 64.25 | | |
| DuATM [23] | 2018 CVPR | 91.42 | 76.62 | | |
| HA-CNN [25] | 2018 CVPR | 91.2 | 75.7 | 93.8 | 82.8 |
| SPReID combined-ft* + re-ranking [26] | 2018 CVPR | 94.63 | 90.96 | | |
| MLFN [27] | 2018 CVPR | 90.0 | 74.3 | 92.3 | 82.4 |
| BraidNet-CS + SRL [29] | 2018 CVPR | 83.70 | 69.48 | | |
| Pose-transfer (D, Tri) [30] | 2018 CVPR | 87.65 | 68.92 | | |
| DaRe(De)+RE+RR [31] | 2018 CVPR | 90.9 | 86.7 | | |
| TFusion-sup [32] | 2018 CVPR | 73.13 | | | |
| Chen et al. [33] | 2018 ECCV | 93.3 | 81.8 | 95.3 | 87.9 |
| HAP2S_E [34] | 2018 ECCV | 84.20 | 69.76 | | |
| HAP2S_P [34] | 2018 ECCV | 84.59 | 69.43 | | |
| Mancs [35] | 2018 ECCV | 93.1 | 82.3 | 95.4 | 87.5 |
| Suh et al. [37] | 2018 ECCV | 93.4 | 89.9 | 95.4 | 93.1 |
| Base Model + SGGNN [38] | 2018 ECCV | 92.3 | 82.8 | | |
DukeMTMC-reID

| Method | Time | rank-1 | mAP |
| --- | --- | --- | --- |
| Basel. + LSRO [1] | 2017 ICCV | 67.68 | 47.13 |
| SVDNet (C) [2] | 2017 ICCV | 67.6 | 45.8 |
| SVDNet (R) [2] | 2017 ICCV | 76.7 | 56.8 |
| DPFL [17] | 2017 ICCV | 79.2 | 60.6 |
| PSE [18] | 2018 CVPR | 79.8 | 62.0 |
| PSE+ECN (rank-dist) [18] | 2018 CVPR | 85.2 | 79.8 |
| IDE∗+CamStyle+RE [19] | 2018 CVPR | 78.32 | 57.61 |
| DuATM [23] | 2018 CVPR | 81.82 | 64.58 |
| HA-CNN [25] | 2018 CVPR | 80.5 | 63.8 |
| Inception-V3 ft* + re-ranking [26] | 2018 CVPR | 89.41 | 84.82 |
| SPReID combined-ft* + re-ranking [26] | 2018 CVPR | 88.96 | 84.99 |
| MLFN [27] | 2018 CVPR | 81.0 | 62.8 |
| BraidNet-CS + SRL [29] | 2018 CVPR | 76.44 | 59.49 |
| DaRe(De)+RE+RR [31] | 2018 CVPR | 84.4 | 80.0 |
| HAP2S_E [34] | 2018 ECCV | 76.08 | 59.58 |
| HAP2S_P [34] | 2018 ECCV | 75.94 | 60.64 |
| Mancs [35] | 2018 ECCV | 84.9 | 71.8 |
| Suh et al. [37] | 2018 ECCV | 88.3 | 83.9 |
| Base Model + SGGNN [38] | 2018 ECCV | 81.1 | 68.2 |
CUHK03-labeled

| Method | Time | rank-1 | rank-5 | rank-10 | rank-20 | mAP |
| --- | --- | --- | --- | --- | --- | --- |
| Basel. + LSRO [1] | 2017 ICCV | 73.1 | 92.7 | 96.7 | | 77.4 |
| Verif-Identif. + LSRO [1] | 2017 ICCV | 84.6 | 97.6 | 98.9 | | 87.4 |
| SVDNet(C) [2] | 2017 ICCV | 68.5 | | | | 73.3 |
| SVDNet(R,1024-dim) [2] | 2017 ICCV | 81.8 | | | | 84.8 |
| M-net [3] | 2017 ICCV | 88.2 | 98.2 | 99.1 | 99.5 | |
| HP-net [3] | 2017 ICCV | 91.8 | 98.4 | 99.1 | 99.6 | |
| Quadruplet + MargOHNM [6] | 2017 CVPR | 75.53 | 95.15 | 99.16 | | |
| Quadruplet [6] | 2017 CVPR | 74.47 | 96.92 | 98.95 | | |
| Fusion [9] | 2017 CVPR | 74.21 | 94.33 | 97.54 | 99.25 | |
| SSM [10] | 2017 CVPR | 76.6 | | 94.6 | 98.0 | |
| Spindle [12] | 2017 CVPR | 88.5 | 97.8 | 98.6 | 99.2 | |
| DeepAlign. [13] | 2017 ICCV | 85.4 | 97.6 | 99.4 | 99.9 | 90.9 |
| PDC [16] | 2017 ICCV | 88.70 | 98.61 | 99.24 | 99.67 | |
| DPFL [17] | 2017 ICCV | 86.7 | | | | 82.8 |
| Guo et al. [24] | 2018 CVPR | 87.50 | 97.85 | 99.45 | | |
| SPReID combined-ft* + re-ranking [26] | 2018 CVPR | 96.22 | 99.34 | 99.7 | | |
| MLFN [27] | 2018 CVPR | 82.8 | | | | |
| BraidNet-CS + SRL [29] | 2018 CVPR | 88.18 | | 98.66 | 99.48 | |
| DaRe(De)+RE+RR [31] | 2018 CVPR | 73.8 | | | | 74.7 |
| Chen et al. [33] | 2018 ECCV | 92.5 | 98.8 | | | |
| HAP2S_P [34] | 2018 ECCV | 90.39 | 99.54 | 99.90 | | |
| Mancs [35] | 2018 ECCV | 93.8 | 99.3 | 99.8 | | |
| Suh et al. [37] | 2018 ECCV | 91.5 | 99.0 | 99.5 | 99.9 | |
| Base Model + SGGNN [38] | 2018 ECCV | 95.3 | 99.1 | 99.6 | | 94.3 |
| MC-PPMN (hnm) [39] | 2018 AAAI | 86.36 | 98.54 | 99.66 | | |
CUHK01(p=486)

| Method | Time | rank-1 | rank-5 | rank-10 | rank-20 |
| --- | --- | --- | --- | --- | --- |
| Quadruplet + MargOHNM [6] | 2017 CVPR | 62.55 | 83.44 | 89.71 | |
| CSBT [8] | 2017 CVPR | 51.2 | 76.3 | | 91.8 |
| Spindle [12] | 2017 CVPR | 79.9 | 94.4 | 97.1 | 98.6 |
| DeepAlign. [13] | 2017 ICCV | 75.0 | 93.5 | 95.7 | 97.7 |
| Chen et al. [33] | 2018 ECCV | 84.8 | 95.1 | 98.4 | |
| Suh et al. [37] | 2018 ECCV | 80.7 | 94.4 | 97.3 | 98.6 |
| MC-PPMN (hnm) [39] | 2018 AAAI | 78.95 | 94.67 | 97.64 | |
CUHK01(p=100)

| Method | Time | rank-1 | rank-5 | rank-10 | rank-20 |
| --- | --- | --- | --- | --- | --- |
| DeepAlign. [13] | 2017 ICCV | 88.5 | 98.4 | 99.6 | 99.9 |
| Guo et al. [24] | 2018 CVPR | 88.20 | 98.20 | 99.35 | |
| BraidNet-CS + SRL [29] | 2018 CVPR | 93.04 | | 99.97 | 100.00 |
| Suh et al. [37] | 2018 ECCV | 90.4 | 97.1 | 98.1 | 98.9 |
| MC-PPMN (hnm) [39] | 2018 AAAI | 93.45 | 99.62 | 99.98 | |
VIPeR

| Method | Time | rank-1 | rank-5 | rank-10 | rank-20 |
| --- | --- | --- | --- | --- | --- |
| M-net [3] | 2017 ICCV | 51.6 | 73.1 | 81.6 | 88.3 |
| HP-net [3] | 2017 ICCV | 76.9 | 91.3 | 94.5 | 96.7 |
| SHaPE [5] | 2017 ICCV | 34.26 | 57.34 | 67.86 | 80.78 |
| Quadruplet + MargOHNM [6] | 2017 CVPR | 49.05 | 73.10 | 81.96 | |
| CSBT [8] | 2017 CVPR | 36.6 | 66.2 | | 88.3 |
| Fusion [9] | 2017 CVPR | 38.08 | 64.14 | 73.52 | 82.91 |
| SSM [10] | 2017 CVPR | 53.73 | | 91.49 | 96.08 |
| Spindle [12] | 2017 CVPR | 53.8 | 74.1 | 83.2 | 92.1 |
| DeepAlign. [13] | 2017 ICCV | 48.7 | 74.7 | 85.1 | 93.0 |
| PDC [16] | 2017 ICCV | 51.27 | 74.05 | 84.18 | 91.46 |
| Guo et al. [24] | 2018 CVPR | 50.10 | 73.10 | 84.35 | |
| MC-PPMN [39] | 2018 AAAI | 50.13 | 81.17 | 91.46 | |
PRW

| Method | Time | rank-1 | mAP |
| --- | --- | --- | --- |
| NPSM [4] | 2017 ICCV | 53.1 | 24.2 |
| Zhong et al. [14] | 2017 CVPR | 52.54 | 31.51 |
MARS (single query)

| Method | Time | rank-1 | rank-5 | rank-20 | mAP |
| --- | --- | --- | --- | --- | --- |
| Fusion+XQDA [9] | 2017 CVPR | 71.77 | 86.57 | 93.08 | 56.05 |
| STRN [11] | 2017 CVPR | 70.6 | 90.0 | 97.6 | 50.7 |
| Zhong et al. [14] | 2017 CVPR | 73.94 | | | 68.45 |
| TriNet (Re-ranked) [15] | 2017 ICCV | 81.21 | 90.76 | | 77.43 |
| PSE [18] | 2018 CVPR | 72.1 | | | 56.9 |
| PSE+ECN (rank-dist) [18] | 2018 CVPR | 76.7 | | | 71.8 |
| SpaAtn+Q+TemAtn+Ind [22] | 2018 CVPR | 82.3 | | | 65.8 |
| DuATM [23] | 2018 CVPR | 78.74 | 90.86 | 95.76 | 62.26 |
| Zhang et al. [28] | 2018 CVPR | 71.2 | 85.7 | 91.8 | 94.3 |
| DaRe(De)+RE+RR [31] | 2018 CVPR | 85.1 | | | 81.9 |
| Suh et al. [37] | 2018 ECCV | 85.1 | 94.2 | 97.4 | 83.9 |
QMUL GRID

| Method | Time | rank-1 | rank-5 | rank-10 | rank-20 |
| --- | --- | --- | --- | --- | --- |
| SSM [10] | 2017 CVPR | 27.20 | | 61.12 | 70.56 |
| TFusion-sup [32] | 2018 CVPR | 64.10 | 91.90 | 96.50 | |
| NK3ML [36] | 2018 ECCV | 27.20 | | 60.96 | 71.04 |
iLIDS-VID

| Method | Time | rank-1 | rank-5 | rank-10 | rank-20 |
| --- | --- | --- | --- | --- | --- |
| STRN [11] | 2017 CVPR | 55.2 | | 86.5 | 97.0 |
| Spindle [12] | 2017 CVPR | 66.3 | 86.6 | 91.8 | 95.3 |
| SpaAtn+Q+TemAtn+Ind [22] | 2018 CVPR | 80.2 | | | |
| Zhang et al. [28] | 2018 CVPR | 60.2 | 84.7 | 91.7 | 95.2 |
| MC-PPMN [39] | 2018 AAAI | 62.69 | 84.80 | 93.31 | |
PRID2011

| Method | Time | rank-1 | rank-5 | rank-10 | rank-20 |
| --- | --- | --- | --- | --- | --- |
| SSM [10] | 2017 CVPR | 72.98 | | 96.76 | 99.11 |
| STRN [11] | 2017 CVPR | 79.4 | | 94.4 | 99.3 |
| Spindle [12] | 2017 CVPR | 67.0 | 89.0 | 89.0 | 92.0 |
| SpaAtn+Q+TemAtn+Ind [22] | 2018 CVPR | 93.2 | | | |
| Zhang et al. [28] | 2018 CVPR | 85.2 | 97.1 | 98.9 | 99.6 |
| MC-PPMN [39] | 2018 AAAI | 34.00 | 60.00 | 69.00 | |
3DPeS

| Method | Time | rank-1 | rank-5 | rank-10 | rank-20 |
| --- | --- | --- | --- | --- | --- |
| Spindle [12] | 2017 CVPR | 62.1 | 83.4 | 90.5 | 95.7 |
References:
[1] Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro
[2] SVDNet for Pedestrian Retrieval
[3] HydraPlus-Net: Attentive Deep Features for Pedestrian Analysis
[4] Neural Person Search Machines
[5] SHaPE: A Novel Graph Theoretic Algorithm for Making Consensus-based Decisions in Person Re-identification Systems
[6] Beyond triplet loss: a deep quadruplet network for person re-identification
[7] Consistent-Aware Deep Learning for Person Re-identification in a Camera Network
[8] Fast Person Re-identification via Cross-camera Semantic Binary Transformation
[9] Learning Deep Context-aware Features over Body and Latent Parts for Person Re-identification
[10] Scalable Person Re-identification on Supervised Smoothed Manifold
[11] See the Forest for the Trees: Joint Spatial and Temporal Recurrent Neural Networks for Video-based Person Re-identification
[12] Spindle Net: Person Re-identification with Human Body Region Guided Feature Decomposition and Fusion
[13] Deeply-Learned Part-Aligned Representations for Person Re-Identification
[14] Re-ranking Person Re-identification with k-reciprocal Encoding
[15] In Defense of the Triplet Loss for Person Re-Identification
[16] Pose-driven Deep Convolutional Model for Person Re-identification
[17] Person Re-Identification by Deep Learning Multi-Scale Representations
[18] A Pose-Sensitive Embedding for Person Re-Identification with Expanded Cross Neighborhood Re-Ranking
[19] Camera Style Adaptation for Person Re-identification
[20] Deep Mutual Learning
[21] Deep Spatial Feature Reconstruction for Partial Person Re-identification: Alignment-Free Approach
[22] Diversity Regularized Spatiotemporal Attention for Video-based Person Re-identification
[23] Dual Attention Matching Network for Context-Aware Feature Sequence based Person Re-Identification
[24] Efficient and Deep Person Re-Identification using Multi-Level Similarity
[25] Harmonious Attention Network for Person Re-Identification
[26] Human Semantic Parsing for Person Re-identification
[27] Multi-Level Factorisation Net for Person Re-Identification
[28] Multi-shot Pedestrian Re-identification via Sequential Decision Making
[29] Person Re-identification with Cascaded Pairwise Convolutions
[30] Pose Transferrable Person Re-Identification
[31] Resource Aware Person Re-identification across Multiple Resolutions
[32] Unsupervised Cross-dataset Person Re-identification by Transfer Learning of Spatial-Temporal Patterns
[33] Improving Deep Visual Representation for Person Re-identification by Global and Local Image-language Association
[34] Hard-Aware Point-to-Set Deep Metric for Person Re-identification
[35] Mancs: A Multi-task Attentional Network with Curriculum Sampling for Person Re-identification
[36] Maximum Margin Metric Learning Over Discriminative Nullspace for Person Re-identification
[37] Part-Aligned Bilinear Representations for Person Re-identification
[38] Person Re-identification with Deep Similarity-Guided Graph Neural Network
[39] Multi-Channel Pyramid Person Matching Network for Person Re-Identification