The last post covered installation; in this one we'll do some simple operations and get a few estimation result images to enjoy. We haven't gone deep yet, but once you get results out, you've got one foot through the door of pose estimation!

Let me nag one more time: sometimes, just getting the code to run, testing it, and doing a simple reproduction is itself how you build coding skills early on. Setting up the environment is hard enough! Understanding other people's English tutorials is hard enough! Jumping straight into modifying code is trying to reach the sky in one step! Reading source code right after getting started, learning to crawl and fly at the same time, is that realistic? Geniuses, tremble!

Everything in this part follows the operations in the official documentation. Below are some tutorials and examples, plus, most importantly, the results of my HigherHRNet test run! Pick whichever method you like, and don't become a copy-paste drone. Typing the code out by hand is the GOAT!
Data preparation

Let's start with a simple COCO setup.

I downloaded the archives on Windows first, transferred them to the server with Xftp, and then unzipped them into the target folder:
unzip train2014.zip -d /home/yiming/mmpose/data/coco
unzip train2017.zip -d /home/yiming/mmpose/data/coco
unzip val2014.zip -d /home/yiming/mmpose/data/coco
# inspect the file tree to check the layout
tree -L 2
Unzipping creates the train2014 folder automatically, so don't create it beforehand, or you'll end up with a folder nested inside itself.
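Before running anything, it's worth sanity-checking that the expected folders actually ended up under the data root. A minimal stdlib sketch (the expected sub-directory names are my assumption based on the unzip targets above; adjust them to whatever your configs reference):

```python
import os
import tempfile

def missing_coco_entries(root, expected=("train2014", "train2017", "val2014")):
    """Return the expected COCO sub-directories that are absent under `root`."""
    return [name for name in expected if not os.path.isdir(os.path.join(root, name))]

# Demo against a fresh empty directory: everything is reported missing.
demo_root = tempfile.mkdtemp()
print(missing_coco_entries(demo_root))  # all three names
```

Run it against `data/coco` on the server; an empty list means the layout matches.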
# single-gpu testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRIC}] \
    [--proc_per_gpu ${NUM_PROC_PER_GPU}] [--gpu_collect] [--tmpdir ${TMPDIR}] [--average_clips ${AVG_TYPE}] \
    [--launcher ${JOB_LAUNCHER}] [--local_rank ${LOCAL_RANK}]

# multi-gpu testing
./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--eval ${EVAL_METRIC}] \
    [--proc_per_gpu ${NUM_PROC_PER_GPU}] [--gpu_collect] [--tmpdir ${TMPDIR}] [--average_clips ${AVG_TYPE}] \
    [--launcher ${JOB_LAUNCHER}] [--local_rank ${LOCAL_RANK}]
Options explained (I only bothered translating some of them; deal with the rest when you actually need them):

- `RESULT_FILE`: output result file. If not specified, results are not saved.
- `EVAL_METRIC`: evaluation metric; depends on the dataset.
- `NUM_PROC_PER_GPU`: number of processes per GPU. If not specified, one process per GPU.
- `--gpu_collect`: if specified, recognition results are collected using GPU communication. Otherwise, results on different GPUs are saved to `TMPDIR` and collected by the rank 0 worker.
- `TMPDIR`: temporary directory used for collecting results from multiple workers; used when `--gpu_collect` is not specified.
- `AVG_TYPE`: how to average the test clips. If set to `prob`, softmax is applied before averaging the clip scores; otherwise the clip scores are averaged directly.
- `JOB_LAUNCHER`: launcher for distributed job initialization. Allowed choices are `none`, `pytorch`, `slurm`, and `mpi`. In particular, if set to `none`, testing runs in non-distributed mode.
- `LOCAL_RANK`: local rank ID. Defaults to 0 if not specified.
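To make the `--gpu_collect` / `TMPDIR` distinction concrete: without `--gpu_collect`, each worker writes its partial results to disk and the rank 0 worker merges them. A toy sketch of that pattern in plain Python (the file naming and merge logic here are my own illustration, not mmpose's actual code):

```python
import os
import pickle
import tempfile

def dump_part(tmpdir, rank, results):
    """Each worker saves its partial results into the shared TMPDIR."""
    with open(os.path.join(tmpdir, f"part_{rank}.pkl"), "wb") as f:
        pickle.dump(results, f)

def collect_parts(tmpdir, world_size):
    """Rank 0 reads every part file back and concatenates them in rank order."""
    merged = []
    for rank in range(world_size):
        with open(os.path.join(tmpdir, f"part_{rank}.pkl"), "rb") as f:
            merged.extend(pickle.load(f))
    return merged

tmpdir = tempfile.mkdtemp()
dump_part(tmpdir, 0, ["result_a"])
dump_part(tmpdir, 1, ["result_b", "result_c"])
print(collect_parts(tmpdir, world_size=2))  # ['result_a', 'result_b', 'result_c']
```

With `--gpu_collect`, the same gathering happens over GPU communication instead of the filesystem.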
Make sure you have already downloaded the checkpoints into the `checkpoints/` folder.
Test ResNet50 on COCO (without saving the test results) and evaluate the mAP.
./tools/dist_test.sh configs/top_down/resnet/coco/res50_coco_256x192.py \
checkpoints/SOME_CHECKPOINT.pth 1 \
--eval mAP
Test ResNet50 on COCO with 8 GPUS, and evaluate the mAP.
./tools/dist_test.sh configs/top_down/resnet/coco/res50_coco_256x192.py \
checkpoints/SOME_CHECKPOINT.pth 8 \
--eval mAP
Test ResNet50 on COCO in slurm environment and evaluate the mAP.
./tools/slurm_test.sh slurm_partition test_job \
configs/top_down/resnet/coco/res50_coco_256x192.py \
checkpoints/SOME_CHECKPOINT.pth \
--eval mAP
For my own run, the HigherHRNet checkpoint went into the `mmpose/checkpoints` folder:
./tools/dist_test.sh configs/bottom_up/higherhrnet/coco/higher_hrnet32_coco_512x512.py \
    checkpoints/higher_hrnet32_coco_512x512-8ae85183_20200713.pth 4 \
    --eval mAP
It took 851 s ≈ 14 min; the accuracy differs slightly from the reported numbers, but is roughly the same.
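The mAP reported here is built on OKS (Object Keypoint Similarity), COCO's keypoint analogue of box IoU: each predicted keypoint is scored by its distance to the ground truth, scaled by the object size and a per-keypoint constant. A minimal sketch of the OKS formula (the scale `s` and per-keypoint constants `k` come from the COCO annotations; the values below are illustrative only):

```python
import math

def oks(pred, gt, vis, s, k):
    """Object Keypoint Similarity between predicted and ground-truth keypoints.

    pred, gt: lists of (x, y); vis: visibility flags (>0 means labeled);
    s: object scale; k: per-keypoint falloff constants.
    """
    num, den = 0.0, 0
    for (px, py), (gx, gy), v, ki in zip(pred, gt, vis, k):
        if v > 0:  # only labeled keypoints count
            d2 = (px - gx) ** 2 + (py - gy) ** 2
            num += math.exp(-d2 / (2 * s ** 2 * ki ** 2))
            den += 1
    return num / den if den else 0.0

# A perfect prediction scores exactly 1.0.
gt = [(10.0, 10.0), (20.0, 30.0)]
print(oks(gt, gt, vis=[2, 2], s=50.0, k=[0.079, 0.072]))  # 1.0
```

mAP then averages precision over OKS thresholds from 0.50 to 0.95, the same way box mAP averages over IoU thresholds.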
Run the top-down pose estimation demos using GT bounding boxes:
python demo/top_down_img_demo.py \
${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
--img-root ${IMG_ROOT} --json-file ${JSON_FILE} \
--out-img-root ${OUTPUT_DIR} \
[--show --device ${GPU_ID}] \
[--kpt-thr ${KPT_SCORE_THR}]
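The `--kpt-thr` option simply hides low-confidence keypoints when drawing. A sketch of that filtering step (the data layout is my assumption: one keypoint per `(x, y, score)` triple):

```python
def visible_keypoints(keypoints, kpt_thr=0.3):
    """Keep only keypoints whose confidence score reaches the threshold."""
    return [(x, y) for x, y, score in keypoints if score >= kpt_thr]

# The 0.08-confidence keypoint is dropped from the visualization.
kpts = [(12.0, 40.0, 0.91), (15.0, 55.0, 0.08), (30.0, 60.0, 0.45)]
print(visible_keypoints(kpts, kpt_thr=0.3))  # [(12.0, 40.0), (30.0, 60.0)]
```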
For example, with HRNet:
python demo/top_down_img_demo.py \
configs/top_down/hrnet/coco/hrnet_w48_coco_256x192.py \
https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth \
--img-root tests/data/coco/ --json-file tests/data/coco/test_coco.json \
--out-img-root vis_results
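Note that this top-down demo reads its person boxes from the COCO-format `--json-file` rather than running a detector. If you want to try it on your own image, you need a json in the same shape. A minimal hand-rolled sketch (all ids, file names, and box values here are hypothetical, and a real annotation file may carry more fields):

```python
import json
import tempfile

# Minimal COCO-style annotation file: one image, one person box.
# "bbox" is [x, y, width, height] in pixels -- the numbers are made up.
ann = {
    "images": [{"id": 1, "file_name": "my_photo.jpg", "width": 640, "height": 480}],
    "annotations": [{"id": 1, "image_id": 1, "category_id": 1,
                     "bbox": [100, 50, 200, 400]}],
    "categories": [{"id": 1, "name": "person"}],
}

json_path = tempfile.mktemp(suffix=".json")
with open(json_path, "w") as f:
    json.dump(ann, f)

loaded = json.load(open(json_path))
print(sorted(loaded.keys()))  # ['annotations', 'categories', 'images']
```

Point `--json-file` at a file like this and `--img-root` at the folder containing the image.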
More examples and details can be found in the demo folder and the demo docs.
Run the bottom-up image demo:

python demo/bottom_up_img_demo.py \
${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
--img-root ${IMG_ROOT} --json-file ${JSON_FILE} \
--out-img-root ${OUTPUT_DIR} \
[--show --device ${GPU_ID or CPU}] \
[--kpt-thr ${KPT_SCORE_THR}]
HigherHRNet:
python demo/bottom_up_img_demo.py \
    configs/bottom_up/higherhrnet/coco/higher_hrnet32_coco_512x512.py \
    checkpoints/higher_hrnet32_coco_512x512-8ae85183_20200713.pth \
    --img-root tests/data/coco/ --json-file tests/data/coco/test_coco.json \
    --out-img-root vis_results
Run the bottom-up video demo:

python demo/bottom_up_video_demo.py \
${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
--video-path ${VIDEO_FILE} \
--out-video-root ${OUTPUT_VIDEO_ROOT} \
[--show --device ${GPU_ID or CPU}] \
[--bbox-thr ${BBOX_SCORE_THR} --kpt-thr ${KPT_SCORE_THR}]
HigherHRNet:
python demo/bottom_up_video_demo.py \
    configs/bottom_up/higherhrnet/coco/higher_hrnet32_coco_512x512.py \
    checkpoints/higher_hrnet32_coco_512x512-8ae85183_20200713.pth \
    --video-path demo/demo_video1.mp4 \
    --out-video-root vis_results
Result: a solo dance clip; I'm too lazy to upload it here.
OK, that's it for today! With the dataset prepared and the demos running, we can call this a successful first use of HigherHRNet! In the next post, we'll train the network together!