HigherHRNet / pytorch-YOLOv4 Environment Deployment

 

HigherHRNet

 

# The code is developed and tested using 4 NVIDIA P100 GPU cards. Other platforms or GPU cards are not fully tested.

# Environment Setup
Python 3.6.2 (note: plain Python 3.6 was tested and raises errors)

1. pip freeze > requirements.txt exports the packages installed in the original environment; here I export them as install_req.txt
2. Create a new environment: conda create -n hrnet python==3.6.2
3. List the existing environments: conda info -e or conda env list

4. Activate the new environment: conda activate hrnet

5. Install all packages from the original environment: pip install -r requirements.txt (a quick verification sketch follows this list)
eg: pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r install_req.txt
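After step 5, it is worth confirming that the environment actually works before running the demo. A minimal sketch, assuming PyTorch is among the reinstalled packages (the file name check_env.py is just an example, not part of the repo); run it inside the activated hrnet environment:

# check_env.py -- confirm the Python version and that PyTorch can see a GPU
import sys

import torch

print("python:", sys.version.split()[0])        # expect 3.6.2 in the hrnet env
print("torch:", torch.__version__)
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))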



## Demo test
# For single-scale testing:

python tools/valid.py \
    --cfg experiments/coco/higher_hrnet/w32_512_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pytorch/pose_coco/pose_higher_hrnet_w32_512.pth

eg:
python tools/valid.py --cfg experiments/coco/higher_hrnet/w32_512_adam_lr1e-3.yaml TEST.MODEL_FILE models/pytorch/pose_coco/pose_higher_hrnet_w32_512.pth

Note: in practice inference is very slow, roughly 2 s per image; an accelerated version of the code found online was measured at about 3 fps.
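To reproduce the timing yourself, a minimal measurement sketch; model and images are placeholders here (not repo code) for the loaded network and the preprocessed input tensors:

# Time per-image inference; ~0.5 fps corresponds to the 2 s/image quoted above.
import time
import torch

@torch.no_grad()
def measure_fps(model, images, device="cuda"):
    model.eval().to(device)
    start = time.perf_counter()
    for img in images:
        model(img.to(device))
    if device == "cuda":
        torch.cuda.synchronize()                # wait until GPU work finishes
    elapsed = time.perf_counter() - start
    return len(images) / elapsed

# usage: print(measure_fps(model, images))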

# By default, we use horizontal flip. To test without flip:

python tools/valid.py \
    --cfg experiments/coco/higher_hrnet/w32_512_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pytorch/pose_coco/pose_higher_hrnet_w32_512.pth \
    TEST.FLIP_TEST False

# Multi-scale testing is also supported, although we do not report results in our paper:

python tools/valid.py \
    --cfg experiments/coco/higher_hrnet/w32_512_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pytorch/pose_coco/pose_higher_hrnet_w32_512.pth \
    TEST.SCALE_FACTOR '[0.5, 1.0, 2.0]'


 

 

pytorch-YOLOv4


# demo
python3 demo.py  -weightfile ./weights/yolov4.weights -imgfile ./data/dog.jpg


## Train on your own dataset

#  tool/coco_annotation.py converts COCO data into the annotation file used for training (a quick check of the generated file follows below)
#  python3 train.py -l 0.001 -g 4 -pretrained ./yolov4.conv.137.pth -classes 3 -dir /home/OCR/coins

python3 train.py -g 0 -pretrained ./yolov4.conv.137.pth
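Training reads a plain-text annotation file produced by the conversion script. Assuming the usual one-line-per-image layout image_path x1,y1,x2,y2,class_id ... (an assumption here; check your generated file), a quick sanity check could look like this:

# Count images and boxes in the generated annotation file.
# Assumption: "image_path x1,y1,x2,y2,cls x1,y1,x2,y2,cls ..." per line;
# the path data/train.txt is only an example.
def summarize_annotations(path="data/train.txt"):
    n_images, n_boxes = 0, 0
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            n_images += 1
            n_boxes += len(parts) - 1           # every token after the path is one box
    print(f"{n_images} images, {n_boxes} boxes")

summarize_annotations()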


## Inference with your own model (four ways)

# 1. Load the pretrained darknet model and darknet weights to do the inference (image size is configured in cfg file already)
python3 demo.py -cfgfile <cfgFile> -weightfile <weightFile> -imgfile <imgFile>

eg:

python3 demo.py -cfgfile ./cfg/yolov4.cfg -weightfile ./weights/yolov4.weights  -imgfile ./data/dog.jpg
# Note: if the demo above is run directly with a trained .pth file (./checkpoints/Yolov4_epoch25.pth), it produces no detections; with the darknet yolov4.weights file it displays results normally
## Predicted in 0.052915 seconds
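A likely explanation for the silent failure: the .pth checkpoint stores a PyTorch state_dict, not darknet-format weights, so the darknet weight loader behind demo.py cannot interpret it. A quick way to see what the checkpoint actually contains (the path is just the example above):

# Inspect the checkpoint on CPU: layer names and tensor shapes, not darknet blobs.
import torch

ckpt = torch.load("./checkpoints/Yolov4_epoch25.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(type(ckpt))                               # typically an OrderedDict of tensors
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))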


# 2. The correct way to run a .pth checkpoint is via models.py:
python3 models.py num_classes weightfile imagepath namefile

eg:
python3 models.py 3 weight/Yolov4_epoch166_coins.pth data/coin2.jpg data/coins.names

eg:
python3 models.py 80 ./checkpoints/Yolov4_epoch25.pth ./data/dog.jpg 576 768

python3 models.py 80 ./weights/yolov4.pth ./data/dog.jpg 576 768


# 3.Load converted ONNX file to do inference (See section 3 and 4)

sudo pip3 install --upgrade torch==1.4.0 -i https://pypi.tuna.tsinghua.edu.cn/simple/

sudo pip3 install  onnxruntime -i https://pypi.tuna.tsinghua.edu.cn/simple/

# sudo pip3 install onnxruntime-gpu -i https://pypi.tuna.tsinghua.edu.cn/simple/


pip3 install --user onnx


1) darknet ====> onnx
python3 demo_darknet2onnx.py <cfgFile> <weightFile> <imageFile> <batchSize>

eg:
python3 demo_darknet2onnx.py ./cfg/yolov4.cfg ./weights/yolov4.weights ./data/dog.jpg 1

2) pytorch ====> onnx
python3 demo_pytorch2onnx.py <weight_file> <image_path> <batch_size> <n_classes> <IN_IMAGE_H> <IN_IMAGE_W>
eg:
python3 demo_pytorch2onnx.py ./weights/yolov4.pth ./data/dog.jpg 8 80 416 416
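Once the export has run, onnxruntime can confirm the graph loads and executes end to end before converting it to TensorRT. A minimal sketch; the file name matches the one used in the trtexec example in the next section and, like the 608x608 input size, is an assumption here, while the real pre/post-processing stays in the repo's demo scripts:

# Run the exported ONNX model once with a dummy input.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov4_1_3_608_608.onnx")     # file name is an assumption
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)

dummy = np.random.rand(1, 3, 608, 608).astype(np.float32)  # NCHW dummy image
for out in sess.run(None, {inp.name: dummy}):
    print("output shape:", out.shape)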

# 4.Load converted TensorRT engine file to do inference (See section 5)

trtexec --onnx=<onnx_file> --explicitBatch --saveEngine=<tensorRT_engine_file> --workspace=<size_in_megabytes> --fp16

Converting the ONNX model to a TensorRT engine:

(1) Download the TensorRT library

(2) Go into ~/samples/trtexec and run

make
Two new files, trtexec and trtexec_debug, will appear under ~/bin.
Then run the following from the TensorRT directory:

'''FP16 (half-precision) export'''
./bin/trtexec --onnx=~.onnx --fp16 --saveEngine=~.engine
'''FP32 (full-precision) export'''
./bin/trtexec --onnx=~.onnx --saveEngine=~.engine

Here './bin/trtexec' is the path of the trtexec binary that was just built, ~.onnx is the path of the ONNX file, and ~.engine is the output path for the generated engine

eg:
/home/gavin/mysoft/TensorRT-7.0.0.11/bin/trtexec --onnx=./yolov4_1_3_608_608.onnx --explicitBatch --saveEngine=yolov4.engine --fp16

./trtexec --onnx=yolov4_1_3_608_608.onnx --explicitBatch --saveEngine=yolov4_320_5.trt --fp16

# 5. ONNX2TensorRT (Evolving)

python3 demo_trt.py <engine_file> <image_file> <input_H> <input_W>

eg:
python3 demo_trt.py yolov4.engine ./data/dog.jpg 608 608   # TRT inference time: 0.035645
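To check that the engine file itself deserializes under the installed TensorRT version (7.0 in the example paths above), a minimal sketch with the TensorRT Python bindings; the actual pre/post-processing pipeline is what demo_trt.py implements:

# Deserialize the engine and list its bindings (assumes the TensorRT 7.x Python API).
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open("yolov4.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

print("bindings:", engine.num_bindings)
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i),
          "input" if engine.binding_is_input(i) else "output")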


 
