TensorRT Mask R-CNN: Using Your Own Dataset on Windows (Part 1)

  • Environment setup
    Mask R-CNN source code:
    https://github.com/matterport/Mask_RCNN
    -- Inference environment:
    OS: Windows 10
    Tools: CUDA 10.0.130, cuDNN 7.6.3.30, TensorRT 7.0.0.11
    IDE: Visual Studio 2019
    -- Training and model-conversion environment:
    OS: Ubuntu 18.04, nvidia-docker
    Tools: Python 3.6, TensorFlow 1.14.0, Keras 2.1.13, nvcr.io/nvidia/tensorflow:19.10-py3
  • Dataset
    Prepare your own dataset in COCO format.
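What "COCO format" means in practice is a JSON annotation file with `images`, `annotations`, and `categories` arrays. The sketch below is a minimal hedged illustration; all file names, IDs, and the "defect" category are made-up example values, not from this project:

```python
import json

# Minimal COCO-style instance annotation file (all values are hypothetical).
# The category list here is what NUM_CLASSES = 1 + 1 in the training config refers to.
coco = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 512, "height": 512},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [100, 120, 80, 60],  # [x, y, width, height] in pixels
            "area": 80 * 60,
            # Polygon outline as a flat [x1, y1, x2, y2, ...] list
            "segmentation": [[100, 120, 180, 120, 180, 180, 100, 180]],
            "iscrowd": 0,
        },
    ],
    "categories": [{"id": 1, "name": "defect", "supercategory": "object"}],
}

with open("instances_train.json", "w") as f:
    json.dump(coco, f)
```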
  • Training
git clone https://github.com/matterport/Mask_RCNN.git
cd Mask_RCNN/samples/
mkdir yourtrain 
cd yourtrain
cp ../../coco/coco.py ./
mv coco.py yourtrain.py

Modify the training parameters in yourtrain.py:

class CocoConfig(Config):
    # Change this to whatever name you like
    NAME = "yourtrain"
    # Images processed per GPU; drop to 1 if your GPU cannot handle 2
    IMAGES_PER_GPU = 2
    # Number of GPUs used for training; IMAGES_PER_GPU * GPU_COUNT is your batch size
    GPU_COUNT = 2

    # Set to the number of labels in your dataset plus 1 (for background)
    NUM_CLASSES = 1 + 1  # background + 1 class
    # added: the backbone can be ResNet-50 or ResNet-101; if BACKBONE is omitted, it defaults to resnet101
    BACKBONE = 'resnet50'
    # Input images are 512*512
    IMAGE_MIN_DIM = 512
    IMAGE_MAX_DIM = 512
    # Anchor sizes; adjust them to fit your objects
    RPN_ANCHOR_SCALES = (14, 28, 56, 112, 224)
    # Other parameters can be changed as needed; see mrcnn/config.py for the full list
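The arithmetic behind those comments can be sanity-checked in isolation. The sketch below mirrors the values above, and assumes (as in matterport's stock mrcnn/config.py) that the FPN has five pyramid levels, so RPN_ANCHOR_SCALES needs exactly five entries:

```python
# Values copied from the CocoConfig above.
IMAGES_PER_GPU = 2
GPU_COUNT = 2
RPN_ANCHOR_SCALES = (14, 28, 56, 112, 224)
IMAGE_MAX_DIM = 512
num_labels = 1  # your own label count

# Effective batch size is images-per-GPU times GPU count.
batch_size = IMAGES_PER_GPU * GPU_COUNT

# NUM_CLASSES is always background + label count.
NUM_CLASSES = 1 + num_labels

# One anchor scale per pyramid level, and every scale must fit in the padded image.
BACKBONE_STRIDES = [4, 8, 16, 32, 64]  # default in mrcnn/config.py
assert len(RPN_ANCHOR_SCALES) == len(BACKBONE_STRIDES)
assert all(s < IMAGE_MAX_DIM for s in RPN_ANCHOR_SCALES)

print(batch_size, NUM_CLASSES)  # 4 2
```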

Start training:

cd Mask_RCNN   # back to the Mask_RCNN root directory
CUDA_VISIBLE_DEVICES=0,1 python samples/yourtrain/yourtrain.py train --dataset=/path/to/your/coco_dataset/ --model=imagenet --logs=/path/to/logs    # the trained .h5 models are written under logs; CUDA_VISIBLE_DEVICES selects the GPU indices, adjust as needed
  • Model conversion
    1. Install nvidia-docker on Ubuntu.
    2. Pull and run the NVIDIA docker container:
docker run --rm -it --gpus all -v $TRT_SOURCE:/workspace/TensorRT -v $TRT_RELEASE:/tensorrt nvcr.io/nvidia/tensorflow:19.10-py3 /bin/bash
# note the container ID or name; you can later re-enter it with: docker attach <container_id>

3. Check whether the /workspace/TensorRT directory exists; if it does not, set it up as follows:

cd /workspace
git clone https://github.com/NVIDIA/TensorRT.git
cd TensorRT/samples/opensource/sampleUffMaskRCNN/converted/
pip3 install -r requirements.txt

4. Detach from the container (press Ctrl+P, then Ctrl+Q), then `docker cp` the Python wheels for model conversion that ship with TensorRT 7.0.0.11 into TensorRT/samples/opensource/sampleUffMaskRCNN/converted/ inside the container:

docker cp uff-*-py2.py3-none-any.whl <container_id>:/workspace/TensorRT/samples/opensource/sampleUffMaskRCNN/converted/
docker cp graphsurgeon-*-py2.py3-none-any.whl <container_id>:/workspace/TensorRT/samples/opensource/sampleUffMaskRCNN/converted/

5. Re-enter the container and install them:

pip3 install uff-*-py2.py3-none-any.whl
pip3 install graphsurgeon-*-py2.py3-none-any.whl

6. Patch the model-conversion code (NCHW -> NHWC):

vim /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter_functions.py

Search for uff_graph.conv_transpose (in vim, type /uff_graph.conv_transpose and press Enter) and change the call so its first three arguments read as follows:

uff_graph.conv_transpose(
    inputs[0], inputs[2], inputs[1],
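If you would rather not edit the installed package by hand, the same one-line swap can be scripted. The sketch below is a hedged helper that assumes the stock converter passes the arguments in the order inputs[0], inputs[1], inputs[2]; the commented path is the file opened in vim above:

```python
import re

def swap_conv_transpose_inputs(source: str) -> str:
    """Reorder uff_graph.conv_transpose(inputs[0], inputs[1], inputs[2], ...)
    into uff_graph.conv_transpose(inputs[0], inputs[2], inputs[1], ...)."""
    return re.sub(
        r"(uff_graph\.conv_transpose\(\s*inputs\[0\],\s*)inputs\[1\],(\s*)inputs\[2\]",
        r"\g<1>inputs[2],\g<2>inputs[1]",
        source,
    )

# Hypothetical usage against the file edited above:
# path = ("/usr/lib/python3.6/dist-packages/uff/converters/"
#         "tensorflow/converter_functions.py")
# with open(path) as f:
#     patched = swap_conv_transpose_inputs(f.read())
# with open(path, "w") as f:
#     f.write(patched)
```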

7. Download Mask_RCNN (the .h5 file stores only the weights, not the network graph, so conversion needs the network code):

git clone https://github.com/matterport/Mask_RCNN.git
export PYTHONPATH=$PYTHONPATH:$PWD/Mask_RCNN

8. Modify mrcnn_to_trt_single.py:

class InferenceConfig(CocoConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    BACKBONE = 'resnet50' # added ResNet50
    IMAGE_MIN_DIM = 512
    IMAGE_MAX_DIM = 512

9. Modify config.py: the input size, number of labels, etc., and change the ResNet-101 structure to ResNet-50:

roi = gs.create_plugin_node("ROI", op="ProposalLayer_TRT", prenms_topk=1024, keep_topk=1000, iou_threshold=0.7)
roi_align_classifier = gs.create_plugin_node("roi_align_classifier", op="PyramidROIAlign_TRT", pooled_size=7)
mrcnn_detection = gs.create_plugin_node("mrcnn_detection", op="DetectionLayer_TRT", num_classes=2, keep_topk=100, score_threshold=0.7, iou_threshold=0.3)
timedistributed_remove_list = [
        "mrcnn_class_conv1/Reshape/shape", "mrcnn_class_conv1/Reshape", "mrcnn_class_conv1/Reshape_1/shape", "mrcnn_class_conv1/Reshape_1",
        "mrcnn_class_bn1/Reshape/shape", "mrcnn_class_bn1/Reshape", "mrcnn_class_bn1/Reshape_5/shape", "mrcnn_class_bn1/Reshape_5",
        "mrcnn_class_conv2/Reshape/shape", "mrcnn_class_conv2/Reshape", "mrcnn_class_conv2/Reshape_1/shape", "mrcnn_class_conv2/Reshape_1",
        "mrcnn_class_bn2/Reshape/shape", "mrcnn_class_bn2/Reshape", "mrcnn_class_bn2/Reshape_5/shape", "mrcnn_class_bn2/Reshape_5",
        "mrcnn_class_logits/Reshape/shape", "mrcnn_class_logits/Reshape","mrcnn_class_logits/Reshape_1/shape", "mrcnn_class_logits/Reshape_1",
        "mrcnn_class/Reshape/shape", "mrcnn_class/Reshape","mrcnn_class/Reshape_1/shape", "mrcnn_class/Reshape_1",
        "mrcnn_bbox_fc/Reshape/shape", "mrcnn_bbox_fc/Reshape","mrcnn_bbox_fc/Reshape_1/shape", "mrcnn_bbox_fc/Reshape_1",

        "mrcnn_mask_conv1/Reshape/shape", "mrcnn_mask_conv1/Reshape", "mrcnn_mask_conv1/Reshape_1/shape", "mrcnn_mask_conv1/Reshape_1",
        "mrcnn_mask_bn1/Reshape/shape", "mrcnn_mask_bn1/Reshape", "mrcnn_mask_bn1/Reshape_5/shape", "mrcnn_mask_bn1/Reshape_5",
        "mrcnn_mask_conv2/Reshape/shape", "mrcnn_mask_conv2/Reshape", "mrcnn_mask_conv2/Reshape_1/shape", "mrcnn_mask_conv2/Reshape_1",
        "mrcnn_mask_bn2/Reshape/shape", "mrcnn_mask_bn2/Reshape", "mrcnn_mask_bn2/Reshape_5/shape", "mrcnn_mask_bn2/Reshape_5",
        "mrcnn_mask_conv3/Reshape/shape", "mrcnn_mask_conv3/Reshape", "mrcnn_mask_conv3/Reshape_1/shape", "mrcnn_mask_conv3/Reshape_1",
        "mrcnn_mask_bn3/Reshape/shape", "mrcnn_mask_bn3/Reshape", "mrcnn_mask_bn3/Reshape_5/shape", "mrcnn_mask_bn3/Reshape_5",
        "mrcnn_mask_conv4/Reshape/shape", "mrcnn_mask_conv4/Reshape", "mrcnn_mask_conv4/Reshape_1/shape", "mrcnn_mask_conv4/Reshape_1",
        "mrcnn_mask_bn4/Reshape/shape", "mrcnn_mask_bn4/Reshape", "mrcnn_mask_bn4/Reshape_5/shape", "mrcnn_mask_bn4/Reshape_5",
        "mrcnn_mask_deconv/Reshape/shape", "mrcnn_mask_deconv/Reshape", "mrcnn_mask_deconv/Reshape_1/shape", "mrcnn_mask_deconv/Reshape_1",
        "mrcnn_mask/Reshape/shape", "mrcnn_mask/Reshape", "mrcnn_mask/Reshape_1/shape", "mrcnn_mask/Reshape_1",
        ]
...
timedistributed_connect_pairs = [
        ("mrcnn_mask_deconv/Relu", "mrcnn_mask/convolution"), # mrcnn_mask_deconv -> mrcnn_mask
        ("activation_40/Relu", "mrcnn_mask_deconv/conv2d_transpose"), #active74 -> mrcnn_mask_deconv
        ("mrcnn_mask_bn4/batchnorm/add_1","activation_40/Relu"),  # mrcnn_mask_bn4 -> active74
        ("mrcnn_mask_conv4/BiasAdd", "mrcnn_mask_bn4/batchnorm/mul_1"), #mrcnn_mask_conv4 -> mrcnn_mask_bn4
        ("activation_39/Relu", "mrcnn_mask_conv4/convolution"), #active73 -> mrcnn_mask_conv4
        ("mrcnn_mask_bn3/batchnorm/add_1","activation_39/Relu"), #mrcnn_mask_bn3 -> active73
        ("mrcnn_mask_conv3/BiasAdd", "mrcnn_mask_bn3/batchnorm/mul_1"), #mrcnn_mask_conv3 -> mrcnn_mask_bn3
        ("activation_38/Relu", "mrcnn_mask_conv3/convolution"), #active72 -> mrcnn_mask_conv3
        ("mrcnn_mask_bn2/batchnorm/add_1","activation_38/Relu"), #mrcnn_mask_bn2 -> active72
        ("mrcnn_mask_conv2/BiasAdd", "mrcnn_mask_bn2/batchnorm/mul_1"), #mrcnn_mask_conv2 -> mrcnn_mask_bn2
        ("activation_37/Relu", "mrcnn_mask_conv2/convolution"), #active71 -> mrcnn_mask_conv2
        ("mrcnn_mask_bn1/batchnorm/add_1","activation_37/Relu"), #mrcnn_mask_bn1 -> active71
        ("mrcnn_mask_conv1/BiasAdd", "mrcnn_mask_bn1/batchnorm/mul_1"), #mrcnn_mask_conv1 -> mrcnn_mask_bn1
        ("roi_align_mask_trt", "mrcnn_mask_conv1/convolution"), #roi_align_mask -> mrcnn_mask_conv1


        ("mrcnn_class_bn2/batchnorm/add_1","activation_35/Relu"), # mrcnn_class_bn2 -> active 69
        ("mrcnn_class_conv2/BiasAdd", "mrcnn_class_bn2/batchnorm/mul_1"), # mrcnn_class_conv2 -> mrcnn_class_bn2
        ("activation_34/Relu", "mrcnn_class_conv2/convolution"), # active 68 -> mrcnn_class_conv2
        ("mrcnn_class_bn1/batchnorm/add_1","activation_34/Relu"), # mrcnn_class_bn1 -> active 68
        ("mrcnn_class_conv1/BiasAdd", "mrcnn_class_bn1/batchnorm/mul_1"), # mrcnn_class_conv1 -> mrcnn_class_bn1
        ("roi_align_classifier", "mrcnn_class_conv1/convolution"), # roi_align_classifier -> mrcnn_class_conv1
        ]

...
dense_compatible_connect_pairs = [
        ("activation_35/Relu","mrcnn_bbox_fc/MatMul"), #activation_69 -> mrcnn_bbox_fc
        ("activation_35/Relu", "mrcnn_class_logits/MatMul"), #activation_69 -> mrcnn_class_logits
        ("mrcnn_class_logits/BiasAdd", "mrcnn_class/Softmax"), #mrcnn_class_logits -> mrcnn_class
        ("mrcnn_class/Softmax", "mrcnn_detection"), #mrcnn_class -> mrcnn_detection
        ("mrcnn_bbox_fc/BiasAdd", "mrcnn_detection"), #mrcnn_bbox_fc -> mrcnn_detection
        ]
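One detail that is easy to get wrong: num_classes in the DetectionLayer_TRT node must equal the NUM_CLASSES you trained with (here 1 + 1 = 2). The sketch below is a small sanity check over the values copied from the snippets above; the monotonic top-k relationship is an assumption about how the proposal and detection stages chain, not something config.py enforces:

```python
# Values from the training config and the plugin-node definitions above.
train_num_classes = 1 + 1        # background + your labels
detection_num_classes = 2        # DetectionLayer_TRT num_classes
prenms_topk = 1024               # ProposalLayer_TRT prenms_topk
proposal_keep_topk = 1000        # ProposalLayer_TRT keep_topk
detection_keep_topk = 100        # DetectionLayer_TRT keep_topk

# The plugin config must agree with what the .h5 weights were trained for.
assert detection_num_classes == train_num_classes
# Each stage can only keep what the previous stage emitted.
assert detection_keep_topk <= proposal_keep_topk <= prenms_topk
print("plugin config consistent with training config")
```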

10. `docker cp` your trained .h5 file into the container, then run the conversion:

python3 mrcnn_to_trt_single.py -w mask_rcnn_yourtrain.h5 -o mrcnn_nchw.uff -p ./config.py

This produces mrcnn_nchw.uff; copy it out to Windows for later use.
