Metrics used in ICCV 2017 re-id papers

Metrics reported by the related ICCV 2017 papers:

1. Unlabeled samples generated by GAN improve the person re-identification baseline in vitro

ResNet-50 as the base network; GAN-generated images and real images are fed into the CNN together for re-id training.

Market-1501 (single query; this is also the only dataset used for image generation): rank-1 (78.06%), mAP (56.23%)

CUHK03: rank-1 (73.1%), mAP (77.4%)

The best performance is obtained with the fine-tuned model trained with the LSRO loss proposed in the paper (a minimal LSRO sketch follows these results):

Market-1501: rank-1 (83.97%), mAP (66.07%)

CUHK03: rank-1 (84.6%), mAP (87.4%)
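The LSRO idea can be illustrated with a short loss sketch. This is my own minimal PyTorch version, not the authors' code, and `lsro_loss` and its arguments are hypothetical names: real images keep their one-hot identity label, while GAN-generated images are assigned a uniform label distribution over all K identities, so their loss term becomes -(1/K) · Σ_k log p_k.

```python
import torch
import torch.nn.functional as F

def lsro_loss(logits, labels, is_generated):
    """LSRO-style loss sketch.

    logits:       (N, K) classification scores over K identities
    labels:       (N,)   identity indices; for generated samples any valid
                         placeholder index works, its value is discarded
    is_generated: (N,)   bool mask, True for GAN-generated images
    """
    log_probs = F.log_softmax(logits, dim=1)

    # Real images: standard cross-entropy with the one-hot identity label.
    real_loss = F.nll_loss(log_probs, labels, reduction="none")

    # Generated images: uniform label distribution -> -(1/K) * sum_k log p_k.
    uniform_loss = -log_probs.mean(dim=1)

    return torch.where(is_generated, uniform_loss, real_loss).mean()
```

Real and generated images can then be mixed in the same mini-batches, which matches the joint-training description above.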

2. SVDNet for pedestrian retrieval

ResNet-50 as the base network; an Eigenlayer is added just before the last fully connected layer, and SVD is applied to its weight matrix (see the sketch after these results).

Market-1501: rank-1 (82.3%), rank-5 (92.3%), mAP (62.1%)

CUHK03: rank-1 (81.8%), rank-5 (95.2%), mAP (84.8%)

DukeMTMC-reID: rank-1 (76.7%), rank-5 (86.4%), mAP (56.8%)
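Below is a minimal sketch of the Eigenlayer idea, again my own code rather than the released SVDNet implementation; the class name and feature dimensions are assumptions. A bias-free linear layer sits before the classifier, and its weight matrix is periodically rebuilt from its SVD so that the weight vectors become orthogonal.

```python
import torch
import torch.nn as nn

class Eigenlayer(nn.Module):
    """Bias-free linear layer inserted before the final FC classifier
    (sketch of the SVDNet idea; names and dimensions are assumptions)."""

    def __init__(self, in_dim=2048, out_dim=1024):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x):
        return self.fc(x)

    @torch.no_grad()
    def orthogonalize(self):
        # Decompose the weight matrix and rebuild it from S and V^T only,
        # so its weight vectors (rows, in nn.Linear convention) become
        # mutually orthogonal while the singular values are preserved.
        W = self.fc.weight                      # (out_dim, in_dim)
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)
        self.fc.weight.copy_(torch.diag(S) @ Vh)
```

In the paper this orthogonalization alternates with ordinary fine-tuning (the restraint-and-relaxation iterations); calling `orthogonalize()` between training rounds roughly mimics that step.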

3. Pose-driven Deep Convolutional Model for Person Re-identification

GoogLeNet as the base model.
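All three papers report the same two metrics, CMC rank-k accuracy and mAP. The sketch below is a simplified single-query evaluation of my own; the real Market-1501 / CUHK03 / DukeMTMC-reID protocols additionally filter out same-camera and junk gallery entries.

```python
import numpy as np

def evaluate_rank_map(dist, q_ids, g_ids, topk=(1, 5)):
    """Simplified single-query CMC rank-k / mAP evaluation.

    dist:  (num_query, num_gallery) distance matrix
    q_ids: (num_query,)   query identity labels (numpy array)
    g_ids: (num_gallery,) gallery identity labels (numpy array)
    """
    num_q = dist.shape[0]
    cmc_hits = {k: 0 for k in topk}
    aps = []

    for i in range(num_q):
        order = np.argsort(dist[i])              # gallery sorted by distance
        matches = g_ids[order] == q_ids[i]       # True where the identity matches

        # CMC: query i is a rank-k hit if any of the top-k results is correct.
        for k in topk:
            cmc_hits[k] += int(matches[:k].any())

        # Average precision: precision evaluated at each correct match.
        hit_ranks = np.where(matches)[0]         # 0-based positions of true matches
        if hit_ranks.size == 0:
            continue                             # no ground truth in the gallery
        precisions = (np.arange(hit_ranks.size) + 1) / (hit_ranks + 1)
        aps.append(precisions.mean())

    cmc = {f"rank-{k}": cmc_hits[k] / num_q for k in topk}
    return cmc, float(np.mean(aps))
```

The returned `rank-1` value corresponds to the rank-1 numbers quoted above, and mAP is the mean of the per-query average precision.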

