Strong_Baseline_of_Pedestrian_Attribute_Recognition — translated README.md

1. Pedestrian Attribute Recognition

Paper: Rethinking of Pedestrian Attribute Recognition: Realistic Datasets with Efficient Method.

Considering the large performance gaps among the baselines used by various SOTA methods, we provide a solid and strong baseline for fair comparison.

2. Updates

  • 2020-09-01: added infer.py

3. Python Dependencies

  • scipy==1.4.1
  • torch==1.4.0 (PyTorch 1.4.0)
  • torchvision==0.5.0
  • tqdm==4.43.0
  • easydict==1.9
  • numpy==1.18.1
  • Pillow==7.1.2
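
To install these pinned versions (one possible way, assuming a pip environment):

    pip install scipy==1.4.1 torch==1.4.0 torchvision==0.5.0 tqdm==4.43.0 easydict==1.9 numpy==1.18.1 Pillow==7.1.2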

4. Tricks

  • Sample-wise loss instead of label-wise loss
  • High learning rate combined with gradient clipping (clip_grad_norm)
  • Padding augmentation combined with random cropping (RandomCrop)
  • BN (Batch Normalization) added after the classification layer
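
A minimal PyTorch sketch of how these tricks fit together (an illustration, not the repository's exact code; the image size, learning rate, clipping norm, and attribute count below are assumed values):

    import torch
    import torch.nn as nn
    import torchvision.transforms as T

    # Pad + RandomCrop augmentation (sizes are assumptions)
    train_transform = T.Compose([
        T.Resize((256, 192)),
        T.Pad(10),
        T.RandomCrop((256, 192)),
        T.ToTensor(),
    ])

    # Classification layer followed by BN
    class BNClassifier(nn.Module):
        def __init__(self, feat_dim, num_attrs):
            super().__init__()
            self.linear = nn.Linear(feat_dim, num_attrs)
            self.bn = nn.BatchNorm1d(num_attrs)  # BN after the classifier

        def forward(self, feat):
            return self.bn(self.linear(feat))

    # High learning rate plus gradient clipping (values are assumptions)
    model = BNClassifier(2048, 35)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    logits = model(torch.randn(8, 2048))  # dummy backbone features
    loss = nn.BCEWithLogitsLoss()(logits, torch.rand(8, 35).round())
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
    optimizer.step()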

5. Performance Comparison

5.1 Baseline Performance

  • Our baseline achieves a large improvement over the baseline results reported by MsVAA, VAC, and ALM.
  • Our baseline also outperforms our reproduced MsVAA, VAC, and ALM models.
  • We did our best to reproduce MsVAA and VAC, and we are grateful for the authors' contributions.
  • We also did our best to reproduce ALM and tried to contact its authors, but received no reply.

[Figure: baseline performance comparison]

5.2 SOTA Performance

  • Compared with state-of-the-art methods, our baseline achieves comparable or even better performance.

  • DeepMAR (ACPR15) Multi-attribute Learning for Pedestrian Attribute Recognition in Surveillance Scenarios.
  • HPNet (ICCV17) Hydraplus-net: Attentive deep features for pedestrian analysis.
  • JRL (ICCV17) Attribute recognition by joint recurrent learning of context and correlation.
  • LGNet (BMVC18) Localization guided learning for pedestrian attribute recognition.
  • PGDM (ICME18) Pose guided deep model for pedestrian attribute recognition in surveillance scenarios.
  • GRL (IJCAI18) Grouping Attribute Recognition for Pedestrian with Joint Recurrent Learning.
  • RA (AAAI19) Recurrent attention model for pedestrian attribute recognition.
  • VSGR (AAAI19) Visual-semantic graph reasoning for pedestrian attribute recognition.
  • VRKD (IJCAI19) Pedestrian Attribute Recognition by Joint Visual-semantic Reasoning and Knowledge Distillation.
  • AAP (IJCAI19) Attribute aware pooling for pedestrian attribute recognition.
  • MsVAA (ECCV18) Deep imbalanced attribute classification using visual attention aggregation.
  • VAC (CVPR19) Visual attention consistency under image transforms for multi-label image classification.
  • ALM (ICCV19) Improving Pedestrian Attribute Recognition With Weakly-Supervised Multi-Scale Attribute-Specific Localization.

6. Datasets

PETA: [Paper][Project]

PA100K: [Paper][Github]

RAP: A Richly Annotated Dataset for Pedestrian Attribute Recognition

  • v1.0 [Paper][Project]
  • v2.0 [Paper][Project]

7. Zero-shot Protocol

PETA and RAPv2 datasets: Google Drive.

To run experiments under the new protocol, simply replace dataset.pkl with peta_new.pkl or rapv2_new.pkl.
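
For example (a hedged sketch; the exact location of dataset.pkl depends on the data layout created in the usage steps below and may differ in your setup):

    cp peta_new.pkl data/PETA/dataset.pkl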

8. Pretrained Models

Pretrained models: Google Drive.

9. Usage

  1. Clone the repository from GitHub:

git clone https://github.com/valencebond/Strong_Baseline_of_Pedestrian_Attribute_Recognition.git

  2. Create a directory to hold the datasets:

    cd Strong_Baseline_of_Pedestrian_Attribute_Recognition
    mkdir data

  3. Organize the data in the following structure:

    ${project_dir}/data
        PETA
            images/
            PETA.mat
            README
        PA100k
            data/
            annotation.mat
            README.txt
        RAP
            RAP_dataset/
            RAP_annotation/
        RAP2
            RAP_dataset/
            RAP_annotation/
    
  4. Run the format_xxxx.py scripts to generate a dataset.pkl for each dataset:

    python ./dataset/preprocess/format_peta.py
    python ./dataset/preprocess/format_pa100k.py
    python ./dataset/preprocess/format_rap.py
    python ./dataset/preprocess/format_rap2.py
    
  5. Train with a resnet50 backbone:

    CUDA_VISIBLE_DEVICES=0 python train.py PETA
    

10. Acknowledgements

The code builds on work by:

  • Dangwei Li
  • Houjing Huang

11. Citation

If you use this method or code in your research, please cite:

@misc{jia2020rethinking,
    title={Rethinking of Pedestrian Attribute Recognition: Realistic Datasets with Efficient Method},
    author={Jian Jia and Houjing Huang and Wenjie Yang and Xiaotang Chen and Kaiqi Huang},
    year={2020},
    eprint={2005.11909},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
