[Semantic Segmentation] Training DeepLabv3+ (1): Building Your Own Dataset

[deeplabv3+]:https://github.com/tensorflow/models/tree/master/research/deeplab

[labelme]:https://github.com/wkentaro/labelme

Overview

This post covers the dataset preparation needed before DeepLabv3+ training: converting your own labelme-annotated semantic segmentation dataset into the tfrecord format that DeepLabv3+ training supports. The conversion path is:

json   >>  voc  >>  generate ImageSets inside the voc tree  >>  tfrecord

Prerequisites

Everything below assumes the deeplab and labelme project code already runs on your machine; set up the development environment in advance.

Ubuntu 14.04 + PyCharm + Anaconda

Approach and references

Approach 1, based on labelme's official labelme2voc.py: inspired by Su_far's blog. That post does not actually explain how to use labelme2voc, but the script lives in labelme's official repository, is simple to run, and is the most convenient labelme-json-to-VOC conversion I have found so far. (Worked, quickly.)

Approach 2, based on labelme's official json_to_dataset.py, e.g. the blog by 酸辣土豆丝不要辣, which describes code changes for several labelme versions. This approach requires modifying labelme's source and does not get you all the way there; further processing is still needed afterwards. I implemented only part of it and gave up halfway. (Failed: output still needed post-processing, abandoned.)

Approach 3, Shirhe-Lyh's method for converting json directly to tfrecord. I tried it; the conversion succeeded, but training then errored out. (Failed.)

In summary, this post uses labelme2voc.py to convert the json dataset to VOC format, additionally generates the ImageSets folder of the VOC dataset, and then uses deeplab's official download_and_convert_voc2012.sh to turn the VOC data into the tfrecords used for training.

Steps

Step 1: labelme2voc

  • Edit labels.txt to match your own labels (a sample labels.txt is shown right after this step).
  • Officially the script is run from the terminal:
./labelme2voc.py your_json_folder dst_voc --labels your_labels.txt
  • Run that way, it can trip over relative paths and fail with:
ModuleNotFoundError: No module named 'labelme'
  • This post therefore runs the script from PyCharm instead, with these program arguments:   /datasets/face_skin_json  /datasets/face_skin_voc --labels labels_face_skin.txt
  • The result:

(Figure 1: the output generated by labelme2voc.py)
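For reference, here is a minimal labels.txt in the format used by labelme's semantic_segmentation example: the __ignore__ and _background_ entries come first, then one class name per line. The three class names below are made-up placeholders, not the labels this post's face-skin dataset actually used:

__ignore__
_background_
face
skin
hair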

Step 2: Generate ImageSets inside the VOC tree

The code comes from the last part of 丹啊丹's blog; see generate_imagesets.py in the appendix of this post.

(Figure 2: the generated ImageSets/Segmentation file lists)
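After steps 1 and 2, the dataset tree should look roughly like the sketch below. The folder names follow what labelme2voc.py generates, plus the ImageSets folder from step 2; treat this as an assumed layout and check it against your own output:

face_skin_voc/
├── JPEGImages/                        # original .jpg images
├── SegmentationClass/                 # labels as .npy arrays
├── SegmentationClassPNG/              # labels as color-mapped .png images
├── SegmentationClassVisualization/    # labels blended over the images
├── class_names.txt
└── ImageSets/
    └── Segmentation/
        ├── train.txt
        ├── val.txt
        ├── test.txt
        └── trainval.txt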

Step 3: Modify download_and_convert_voc2012.sh

  • Back the script up before editing it. As shipped, it downloads the VOC2012 dataset and converts it to tfrecord.
  • This post mainly comments out the VOC download code and changes the dataset paths.

Commenting out the VOC download code

(Figure 3: the commented-out download call)

Path changes (watch out for a pitfall)

  • For the most part, following VOC's recommended directory layout is enough.
  • SegmentationClass must be changed to SegmentationClassPNG, otherwise the conversion fails: labelme2voc.py stores the actual label images as PNGs in SegmentationClassPNG, while its SegmentationClass folder holds .npy files.

After the script finishes you get the tfrecords:

(Figure 4: the generated tfrecord files)
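As a quick sanity check that the conversion produced non-empty shards, here is a minimal sketch, assuming a TF 1.x environment like the one this deeplab codebase targets; the glob pattern is a guess based on build_voc2012_data.py's usual train-XXXXX-of-XXXXX.tfrecord shard naming:

import glob
import tensorflow as tf  # TF 1.x

# Count the examples in each generated training shard.
for shard in sorted(glob.glob('tfrecord/train-*.tfrecord')):
    n = sum(1 for _ in tf.python_io.tf_record_iterator(shard))
    print('%s: %d examples' % (shard, n))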


Appendix

generate_imagesets.py

from sklearn.model_selection import train_test_split
import os

# Point these two paths at your own VOC-style dataset.
imagedir = '/ai005/datasets/datasets_face_segmentation/face_skin_voc_1900/JPEGImages'
outdir = '/ai005/datasets/datasets_face_segmentation/face_skin_voc_1900/ImageSets/Segmentation/'

os.makedirs(outdir, exist_ok=True)

# Collect the image basenames (file names without the extension).
images = []
for file in os.listdir(imagedir):
    filename = os.path.splitext(file)[0]
    images.append(filename)

# 95% train; the remaining 5% is split 9:1 into val and test.
train, rest = train_test_split(images, train_size=0.95, random_state=0)
val, test = train_test_split(rest, train_size=0.9, random_state=0)

with open(outdir + "train.txt", 'w') as f:
    f.write('\n'.join(train))

with open(outdir + "val.txt", 'w') as f:
    f.write('\n'.join(val))

with open(outdir + "test.txt", 'w') as f:
    f.write('\n'.join(test))

# trainval is conventionally train + val in PASCAL VOC (an earlier
# version of this script wrote the test split here by mistake).
with open(outdir + "trainval.txt", 'w') as f:
    f.write('\n'.join(train + val))
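To use it, edit imagedir and outdir for your dataset and run the script once; it writes the split lists that the LIST_FOLDER variable in the script below expects to find (as far as I can tell, build_voc2012_data.py converts every .txt list it finds in that folder).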

download_and_convert_voc2012.sh (modified)

#!/bin/bash
# Copyright 2018 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
#
# Script to download and preprocess the PASCAL VOC 2012 dataset.
#
# Usage:
#   bash ./download_and_convert_voc2012.sh
#
# The folder structure is assumed to be:
#  + datasets
#     - build_data.py
#     - build_voc2012_data.py
#     - download_and_convert_voc2012.sh
#     - remove_gt_colormap.py
#     + pascal_voc_seg
#       + VOCdevkit
#         + VOC2012
#           + JPEGImages
#           + SegmentationClass
#

# Exit immediately if a command exits with a non-zero status.
set -e

CURRENT_DIR=$(pwd)
#WORK_DIR="./pascal_voc_seg"
#WORK_DIR="/ai005/datasets/datasets_face_segmentation/face_skin_voc_1900"
WORK_DIR="your"
mkdir -p "${WORK_DIR}"
cd "${WORK_DIR}"

# Helper function to download and unpack VOC 2012 dataset.
download_and_uncompress() {
  local BASE_URL=${1}
  local FILENAME=${2}

  if [ ! -f "${FILENAME}" ]; then
    echo "Downloading ${FILENAME} to ${WORK_DIR}"
    wget -nd -c "${BASE_URL}/${FILENAME}"
  fi
  echo "Uncompressing ${FILENAME}"
  tar -xf "${FILENAME}"
}

# Download the images.
BASE_URL="http://host.robots.ox.ac.uk/pascal/VOC/voc2012/"
FILENAME="VOCtrainval_11-May-2012.tar"

#download_and_uncompress "${BASE_URL}" "${FILENAME}"   # already downloaded, so the call is commented out

cd "${CURRENT_DIR}"

# Root path for PASCAL VOC 2012 dataset.
PASCAL_ROOT="${WORK_DIR}/VOC2012"

# Remove the colormap in the ground truth annotations.
SEG_FOLDER="${PASCAL_ROOT}/SegmentationClassPNG"
SEMANTIC_SEG_FOLDER="${PASCAL_ROOT}/SegmentationClassRaw"

echo "Removing the color map in ground truth annotations..."
python /home/ai005/zxy191120/code/models-master/research/deeplab/datasets/remove_gt_colormap.py \
 --original_gt_folder="${SEG_FOLDER}" \
 --output_dir="${SEMANTIC_SEG_FOLDER}"

# Build TFRecords of the dataset.
# First, create output directory for storing TFRecords.
OUTPUT_DIR="${WORK_DIR}/tfrecord"
mkdir -p "${OUTPUT_DIR}"

IMAGE_FOLDER="${PASCAL_ROOT}/JPEGImages"
LIST_FOLDER="${PASCAL_ROOT}/ImageSets/Segmentation"

echo "Converting PASCAL VOC 2012 dataset..."
python /home/ai005/zxy191120/code/models-master/research/deeplab/datasets/build_voc2012_data_zxy.py \
  --image_folder="${IMAGE_FOLDER}" \
  --semantic_segmentation_folder="${SEMANTIC_SEG_FOLDER}" \
  --list_folder="${LIST_FOLDER}" \
  --image_format="jpg" \
  --output_dir="${OUTPUT_DIR}"
