Paper: Weakly and Semi Supervised Human Body Part Parsing via Pose-Guided Knowledge Transfer (CVPR 2018)
Overview (in Chinese): https://baijiahao.baidu.com/s?id=1601699961367739276&wfr=spider&for=pc
The overall workflow is as follows:
1. Download the dataset:
https://github.com/MVIG-SJTU/WSHP/tree/master/parsing_network
Follow the "here" links on that page (shown in the figure below); they point to the dataset and the pretrained models.
2. Install the dependencies:
pip install tensorflow
pip install -r requirements.txt
3. Usage
Inference
1.
python inference.py /home/feng/WSHP/parsing_network/dataset/ /home/feng/WSHP/parsing_network/model/model.ckpt-19315 --data_list /home/feng/WSHP/parsing_network/dataset/dance.txt
Contents of dance.txt (one image path per line, relative to the dataset directory):
test2015/COCO_test2015_000000000014.jpg
test2015/COCO_test2015_000000000057.jpg
test2015/COCO_test2015_000000000063.jpg
test2015/COCO_test2015_000000000173.jpg
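A list file like dance.txt can be generated instead of typed by hand. A minimal sketch (the /tmp paths and the two image names are stand-ins for illustration, not the real dataset):

```shell
# Create a fake dataset directory with a couple of image files, then
# list them relative to the dataset root, exactly as dance.txt expects.
mkdir -p /tmp/wshp_demo/test2015
touch /tmp/wshp_demo/test2015/COCO_test2015_000000000014.jpg \
      /tmp/wshp_demo/test2015/COCO_test2015_000000000057.jpg
(cd /tmp/wshp_demo && ls test2015/*.jpg > dance.txt)
cat /tmp/wshp_demo/dance.txt
```

Pointing `--data_list` at a file built this way avoids typos in the relative paths.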
2.
python real-time-inference.py (I modified the paths so it loads final_model directly and reads from the webcam)
Evaluation
python evaluate.py --data-dir ./dataset/ --restore-from ./models/final_model/model.ckpt-19315
(the dataset directory contains pascal_test.txt plus the Annotations_Pascal_Part and JPEGImages_Pascal_Part folders)
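For reference, the layout described above can be sketched like this (created under /tmp purely for illustration; the folder names come from the note, so verify them against your checkout):

```shell
# Recreate the directory structure evaluate.py is pointed at via --data-dir:
# a list file plus the annotation and image folders for PASCAL-Part.
mkdir -p /tmp/wshp_eval/dataset/Annotations_Pascal_Part \
         /tmp/wshp_eval/dataset/JPEGImages_Pascal_Part
touch /tmp/wshp_eval/dataset/pascal_test.txt
ls /tmp/wshp_eval/dataset
```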
Below are the inference results on dataset images and on my own photos.
Test results:
Training:
## Train our network on the whole dataset; model.ckpt-50000 contains the weights pre-trained on the COCO dataset
python train.py --data-dir ./dataset/ --data-list dataset/train_all.txt --num-epochs 10 --restore-from models/model.ckpt-50000 --not-restore-last --snapshot-dir snapshots-new-fromcoco --random-scale --random-mirror --save-pred-every 50000
## Finetune the model on the original dataset
python train.py --data-dir ./dataset/ --data-list dataset/pascal_train.txt --num-epochs 90 --restore-from snapshots-new-fromcoco/model.ckpt-213129 --snapshot-dir snapshots-new-fromcoco-finetune --random-scale --random-mirror --save-pred-every 10000
Troubleshooting training:
Running the training command
python train.py --data-dir ./dataset/ --data-list dataset/train_all.txt --num-epochs 10 --restore-from models/model.ckpt-50000 --not-restore-last --snapshot-dir snapshots-new-fromcoco --random-scale --random-mirror --save-pred-every 50000
fails with the following error:
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[2592,2048,5,5] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: gradients/fc1_voc12_c2/convolution_grad/Conv2DBackpropInput = Conv2DBackpropInput[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](gradients/fc1_voc12_c2/convolution_grad/Conv2DBackpropInput-0-VecPermuteNHWCToNCHW-LayoutOptimizer, fc1_voc12_c2/weights/read, gradients/fc1_voc12_c2/convolution_grad/Conv2DBackpropInput-2-TransposeNHWCToNCHW-LayoutOptimizer)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Solution:
Change BATCH_SIZE = 8 to BATCH_SIZE = 4 in train.py to halve the GPU memory needed per step.
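The edit can also be applied with sed instead of opening the file. A minimal sketch on GNU sed; train_demo.py here is a stand-in for the real train.py, and the literal `BATCH_SIZE = 8` is assumed to appear exactly as written:

```shell
# Stand-in file with the line we want to patch.
printf 'BATCH_SIZE = 8\n' > /tmp/train_demo.py
# Replace the whole line in place (GNU sed; on BSD/macOS use -i '').
sed -i 's/^BATCH_SIZE = 8$/BATCH_SIZE = 4/' /tmp/train_demo.py
grep BATCH_SIZE /tmp/train_demo.py
```

If 4 still runs out of memory, the same change down to 2 or 1 trades training speed for a smaller allocation.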
Training then runs successfully, as shown below:
4. Troubleshooting
After installing the dependencies, running the inference example fails with:
UnknownError (see above for traceback): Failed to get convolution algorithm. ...
The cause is the TensorFlow version: with tensorflow-gpu==1.12 the error above appears; downgrading to tensorflow-gpu==1.9.0 makes it run normally. The command is:
sudo pip install --upgrade --force-reinstall tensorflow-gpu==1.9.0 --user
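To confirm the downgrade took effect before launching a long job, a script can refuse to run on the known-bad release. A minimal sketch that only compares dotted version strings (no TensorFlow import; in practice you would pass in `tf.__version__`):

```python
# Compare dotted version strings so a script can bail out on the
# TensorFlow 1.12.x releases that trigger the cuDNN error above.
def version_tuple(v: str) -> tuple:
    """Turn '1.9.0' into (1, 9, 0) for ordered comparison."""
    return tuple(int(part) for part in v.split(".")[:3])

def is_known_bad(v: str) -> bool:
    """True for any 1.12.x release (the version that fails here)."""
    return version_tuple(v)[:2] == (1, 12)

print(is_known_bad("1.12.0"))  # True
print(is_known_bad("1.9.0"))   # False
```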