Training DeepLabV3+ on Your Own Data

1. Set up the environment by following the official documentation: https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/installation.md

Installing tensorflow-gpu can run into problems; the CUDA and cuDNN versions must match the TensorFlow version you install.

(1). Clone the models repository and verify the installation:

cd /anaconda3/envs/DeepLab/lib/python3.6/site-packages
cd tensorflow
git clone https://github.com/tensorflow/models.git
cd models
cd research
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
python deeplab/model_test.py

(2). Install a specific TensorFlow version with conda:

conda install --channel https://conda.anaconda.org/anaconda tensorflow=1.6.0

(3). Running python train.py fails with "Segmentation fault (core dumped)". Tracking it down showed the crash is caused by import matplotlib.pyplot as plt; uninstall matplotlib and then reinstall it:
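conda uninstall matplotlib
conda install matplotlib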

2. Import the original VOC2012 training/test data
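If only the stock VOC2012 data is needed, the datasets folder of the repo ships a helper script that downloads PASCAL VOC 2012 and converts it to tfrecord in one step (script name as found in the repo; download URLs and output paths may differ between versions):

# From tensorflow/models/research/deeplab/datasets
sh download_and_convert_voc2012.sh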

 

3. Import your own training/test data

 

To build a dataset in tfrecord format, use build_voc2012_data.py from the source tree (tensorflow/models/research/deeplab/datasets/build_voc2012_data.py). In the tensorflow/models/research/deeplab/datasets directory, create a new script file data_pre.sh with the following content:

#!/bin/bash

# Exit immediately if a command exits with a non-zero status.
set -e

mkdir -p "./pascal_voc_seg/VOCdevkit/mydata0611/tfrecord" # Create the output directory if it does not already exist.

python ./build_voc2012_data.py \
  --image_folder="./pascal_voc_seg/VOCdevkit/mydata0611/full-VOC/JPEGImages" \
  --semantic_segmentation_folder="./pascal_voc_seg/VOCdevkit/mydata0611/full-VOC/SegmentationClass" \
  --list_folder="./pascal_voc_seg/VOCdevkit/mydata0611/full-VOC/ImageSets/Segmentation" \
  --image_format="jpg" \
  --output_dir="./pascal_voc_seg/VOCdevkit/mydata0611/tfrecord"

In the conda environment that has already been set up (DeepLabV3), run sh data_pre.sh.

Once the conversion finishes, the data is ready for training.
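If the split lists under ImageSets/Segmentation have not been prepared yet, here is a minimal sketch for generating them from the image folder (assumptions: every .jpg in JPEGImages has a matching label .png in SegmentationClass, there are exactly 1303 + 326 = 1629 images, and trainval simply mirrors train, matching the split sizes used in step 4 below):

# Run from tensorflow/models/research/deeplab/datasets, before data_pre.sh.
cd ./pascal_voc_seg/VOCdevkit/mydata0611/full-VOC
ls JPEGImages | sed 's/\.jpg$//' | shuf > all.txt
head -n 1303 all.txt > ImageSets/Segmentation/train.txt
tail -n 326 all.txt > ImageSets/Segmentation/val.txt
cp ImageSets/Segmentation/train.txt ImageSets/Segmentation/trainval.txt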

4. Training

Before training, be sure to modify the dataset description here:

In older versions of the repo, edit tensorflow/models/research/deeplab/datasets/segmentation_dataset.py; in newer versions, edit models/research/deeplab/datasets/data_generator.py:

_PASCAL_VOC_SEG_INFORMATION = DatasetDescriptor(
    splits_to_sizes={
        #'train': 1464,
        #'train_aug': 10582,
        #'trainval': 2913,
        #'val': 1449,

        'train': 1303,     # pang-add-mydata5: number of training images
        #'train_aug': 10582,
        'trainval': 1303,  # number of train+val images
        'val': 326,        # number of validation images
    },
    #num_classes=21,
    #ignore_label=255,
    num_classes=3,     # three classes in total
    ignore_label=-1,   # pixels labeled -1 are ignored and excluded from the IoU computation
)
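The sizes in splits_to_sizes must match the number of records in each split, i.e. the number of lines in the corresponding list files used during conversion. A quick check (paths as in data_pre.sh, run from the datasets directory):

wc -l ./pascal_voc_seg/VOCdevkit/mydata0611/full-VOC/ImageSets/Segmentation/train.txt
wc -l ./pascal_voc_seg/VOCdevkit/mydata0611/full-VOC/ImageSets/Segmentation/val.txt
wc -l ./pascal_voc_seg/VOCdevkit/mydata0611/full-VOC/ImageSets/Segmentation/trainval.txt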

 

Training uses train.py from the source tree (tensorflow/models/research/deeplab/train.py). Create a new script file tensorflow/models/research/deeplab/train.sh with the following content:

#!/bin/bash

# Exit immediately if a command exits with a non-zero status.
set -e

# Move one level up to the tensorflow/models/research directory.
cd ..

# Update PYTHONPATH.
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

#mydata0611
# From tensorflow/models/research/
python deeplab/train.py \
    --logtostderr \
    --training_number_of_steps=20000 \
    --train_split="train" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --train_crop_size=513 \
    --train_crop_size=513 \
    --train_batch_size=1 \
    --fine_tune_batch_norm=False \
    --dataset="pascal_voc_seg" \
    --tf_initial_checkpoint='./deeplab/deeplabv3_pascal_train_aug/model.ckpt' \
    --train_logdir='./deeplab/mydata0611/train_logdir' \
    --dataset_dir='./deeplab/datasets/pascal_voc_seg/VOCdevkit/mydata0611/tfrecord'

Note: create the folder ./deeplab/mydata0611/train_logdir in advance:
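# Run from tensorflow/models/research (same path as --train_logdir above).
mkdir -p ./deeplab/mydata0611/train_logdir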

To run: in the conda environment that has already been set up (DeepLabV3), run sh train.sh.

5. Evaluation

Evaluation uses eval.py from the source tree (tensorflow/models/research/deeplab/eval.py). Create a new script file tensorflow/models/research/deeplab/eval.sh with the following content:

#!/bin/bash

# Exit immediately if a command exits with a non-zero status.
set -e

# Move one level up to the tensorflow/models/research directory.
cd ..

# Update PYTHONPATH.
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

#man and baby data 
#mydata0611

python deeplab/eval.py \
    --logtostderr \
    --eval_split="val" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --eval_crop_size=700 \
    --eval_crop_size=700 \
    --eval_batch_size=1 \
    --dataset="pascal_voc_seg" \
    --checkpoint_dir='./deeplab/mydata0611/train_logdir' \
    --eval_logdir='./deeplab/mydata0611/eval_logdir' \
    --dataset_dir='./deeplab/datasets/pascal_voc_seg/VOCdevkit/mydata0611/tfrecord'

Note: create the folder ./deeplab/mydata0611/eval_logdir in advance.

To run: in the conda environment that has already been set up (DeepLabV3), run sh eval.sh.

6. Visualization
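Visualization uses vis.py from the source tree (tensorflow/models/research/deeplab/vis.py). A vis.sh script analogous to eval.sh above would look roughly like the following sketch; the flag values simply mirror the eval settings and the vis_logdir path is an assumption, so adjust as needed:

#!/bin/bash

# Exit immediately if a command exits with a non-zero status.
set -e

# Move one level up to the tensorflow/models/research directory.
cd ..

# Update PYTHONPATH.
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

#mydata0611
python deeplab/vis.py \
    --logtostderr \
    --vis_split="val" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --vis_crop_size=700 \
    --vis_crop_size=700 \
    --dataset="pascal_voc_seg" \
    --checkpoint_dir='./deeplab/mydata0611/train_logdir' \
    --vis_logdir='./deeplab/mydata0611/vis_logdir' \
    --dataset_dir='./deeplab/datasets/pascal_voc_seg/VOCdevkit/mydata0611/tfrecord'

As with training and evaluation, create ./deeplab/mydata0611/vis_logdir in advance and run sh vis.sh in the same conda environment; the predicted segmentation maps are written under that directory.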

 

 

 

7. Monitor the training process with TensorBoard:

tensorboard --logdir='./mydata0611'  # logdir is the parent directory that contains the train, eval, and vis log directories
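TensorBoard serves on port 6006 by default; open http://localhost:6006 in a browser to follow the loss curves and the evaluation summaries.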

 
