Pixel-level Semantic Segmentation: DeepLab train/prediction/evaluation on Cityscapes/KITTI

1. GPU and CPU

My GPU is too weak for training, so everything below runs on the CPU, with swap memory making up for the limited RAM (a way to force CPU-only execution is sketched below).
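A simple way to keep TensorFlow on the CPU (a general CUDA/TensorFlow mechanism, not something specific to DeepLab) is to hide the CUDA devices before TensorFlow initializes, e.g. export CUDA_VISIBLE_DEVICES=-1 in the shell before launching the scripts below, or equivalently in Python:

# force_cpu.py: minimal sketch for hiding the GPU so TensorFlow falls back to the CPU
import os

# Must be set before TensorFlow initializes; "-1" hides all CUDA devices.
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

from tensorflow.python.client import device_lib

# With the GPU hidden, only CPU devices should be listed here.
print([d.name for d in device_lib.list_local_devices()])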

2. DeepLab

2.1 Installation

Add Libraries to PYTHONPATH

# From tensorflow/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

# [Optional] for panoptic evaluation, you might need panopticapi:
# https://github.com/cocodataset/panopticapi
# Please clone it to a local directory ${PANOPTICAPI_DIR}
touch ${PANOPTICAPI_DIR}/panopticapi/__init__.py
export PYTHONPATH=$PYTHONPATH:${PANOPTICAPI_DIR}/panopticapi
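The DeepLab installation guide suggests python deeplab/model_test.py as a quick test of the setup. An even smaller sanity check (deeplab.common and deeplab.model are modules that ship with the repo) is to confirm that the exported paths actually made it onto the module search path:

# check_pythonpath.py: confirm that research/ and research/slim are importable
import importlib

for name in ('deeplab.common', 'deeplab.model'):
    try:
        importlib.import_module(name)
        print('OK   ' + name)
    except ImportError as exc:
        print('FAIL ' + name + ': ' + str(exc))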

2.2 Running DeepLab on the Cityscapes semantic segmentation dataset
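The script below assumes the Cityscapes images and fine annotations have already been converted to TFRecord (the repo provides deeplab/datasets/convert_cityscapes.sh for this). A quick check that shards exist under PATH_TO_DATASET; the directory is copied from the script, while the glob pattern is a loose assumption rather than the exact shard naming:

# check_tfrecord.py: verify that the Cityscapes TFRecord shards are in place
import glob
import os

# Same directory as PATH_TO_DATASET in cityscapes_test.sh below.
tfrecord_dir = '/home/whu/venv/tensorflow/models/research/deeplab/datasets/cityscapes/tfrecord'

for split in ('train', 'val'):
    shards = glob.glob(os.path.join(tfrecord_dir, '*' + split + '*'))
    print('%s: %d shard(s)' % (split, len(shards)))
    if not shards:
        print('  -> run deeplab/datasets/convert_cityscapes.sh first')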

# From tensorflow/models/research/ (the script calls deeplab/train.py etc., so run it from research/, not from deeplab/)
sh deeplab/cityscapes_test.sh
#!/bin/bash
# cityscapes_test.sh

WORK_DIR="/home/whu/venv/tensorflow/models/research/deeplab"
PATH_TO_INITIAL_CHECKPOINT="${WORK_DIR}/datasets/cityscapes/init_models/deeplabv3_cityscapes_train/model.ckpt"
PATH_TO_TRAIN_DIR="${WORK_DIR}/datasets/cityscapes/exp/train_on_train_set/train/"
PATH_TO_EVAL_DIR="${WORK_DIR}/datasets/cityscapes/exp/train_on_train_set/eval/"
PATH_TO_VIS_DIR="${WORK_DIR}/datasets/cityscapes/exp/train_on_train_set/vis/"
EXPORT_DIR="${WORK_DIR}/datasets/cityscapes/exp/train_on_train_set/export"
PATH_TO_DATASET="${WORK_DIR}/datasets/cityscapes/tfrecord"

mkdir -p "${EXPORT_DIR}"

NUM_ITERATIONS=10   # quick smoke test; use the full 90000 steps for real training
# From tensorflow/models/research/
python deeplab/train.py \
    --logtostderr \
    --train_split="train" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --train_crop_size=1025 \
    --train_crop_size=2049 \
    --train_batch_size=1 \
    --training_number_of_steps="${NUM_ITERATIONS}" \
    --dataset="cityscapes" \
    --tf_initial_checkpoint=${PATH_TO_INITIAL_CHECKPOINT} \
    --train_logdir=${PATH_TO_TRAIN_DIR} \
    --dataset_dir=${PATH_TO_DATASET}


# From tensorflow/models/research/
python deeplab/eval.py \
    --logtostderr \
    --eval_split="val" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --eval_crop_size=1025 \
    --eval_crop_size=2049 \
    --dataset="cityscapes" \
    --checkpoint_dir=${PATH_TO_TRAIN_DIR} \
    --eval_logdir=${PATH_TO_EVAL_DIR} \
    --dataset_dir=${PATH_TO_DATASET} \
    --max_number_of_iterations=1

# From tensorflow/models/research/
python deeplab/vis.py \
    --logtostderr \
    --vis_split="val" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --vis_crop_size=1025 \
    --vis_crop_size=2049 \
    --dataset="cityscapes" \
    --colormap_type="cityscapes" \
    --checkpoint_dir=${PATH_TO_TRAIN_DIR} \
    --vis_logdir=${PATH_TO_VIS_DIR} \
    --dataset_dir=${PATH_TO_DATASET} \
    --also_save_raw_predictions=true  \
    --max_number_of_iterations=1

# Export the trained checkpoint.
CKPT_PATH="${PATH_TO_TRAIN_DIR}/model.ckpt-${NUM_ITERATIONS}"
EXPORT_PATH="${EXPORT_DIR}/frozen_inference_graph.pb"

# From tensorflow/models/research/
python deeplab/export_model.py \
    --logtostderr \
    --checkpoint_path="${CKPT_PATH}" \
    --export_path="${EXPORT_PATH}" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --num_classes=19 \
    --crop_size=1025 \
    --crop_size=2049 \
    --inference_scales=1.0
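One thing worth checking before the export step: CKPT_PATH is assembled as model.ckpt-${NUM_ITERATIONS}, so if training stopped at a different global step the export fails with a missing-checkpoint error. A small sketch (train dir copied from the script) that reports what is actually in the training directory:

# check_checkpoint.py: report the newest checkpoint in the training directory
import tensorflow as tf

# Same directory as PATH_TO_TRAIN_DIR in cityscapes_test.sh.
train_dir = ('/home/whu/venv/tensorflow/models/research/deeplab/'
             'datasets/cityscapes/exp/train_on_train_set/train/')

latest = tf.train.latest_checkpoint(train_dir)
print('latest checkpoint:', latest)   # e.g. .../model.ckpt-10 when NUM_ITERATIONS=10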

2.3 export_model

https://lijiancheng0614.github.io/2018/03/13/2018_03_13_TensorFlow-DeepLab/#%E6%B5%8B%E8%AF%95

# export_model.sh
# Usage: sh export_model.sh <checkpoint-step>; the relative paths assume it is run from exp/train_on_train_set/
python ../../../../export_model.py \
    --logtostderr \
    --checkpoint_path="train/model.ckpt-$1" \
    --export_path="export/frozen_inference_graph.pb" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --num_classes=19 \
    --crop_size=1025 \
    --crop_size=2049 \
    --inference_scales=1.0
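To confirm that the export worked, and to see the tensor names that the inference.py in section 2.4 relies on (ImageTensor as input, SemanticPredictions as output), the frozen graph can be inspected directly; a minimal sketch:

# inspect_frozen_graph.py: list the input/output tensors of the exported graph
import tensorflow as tf

pb_path = 'export/frozen_inference_graph.pb'   # adjust to wherever the export landed

graph_def = tf.GraphDef()
with tf.gfile.GFile(pb_path, 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

for name in ('ImageTensor:0', 'SemanticPredictions:0'):
    tensor = graph.get_tensor_by_name(name)
    print(name, tensor.dtype, tensor.shape)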

2.4 Prediction

There are two options for running prediction with the trained model:

1) For Cityscapes, run vis.py. Note that if you would like to save the segmentation results for the evaluation server, set also_save_raw_predictions=true. The generated prediction images have pixel values corresponding to the label IDs.

2) For Cityscapes/KITTI, use export_model.py to obtain the trained frozen model frozen_inference_graph.pb, then run inference.py against it to generate three kinds of output: image_TrainIds, image_label and image_colored. (See cityscapesScripts for the difference between train IDs and label IDs.)

Reference: Semantic segmentation with TensorFlow DeepLab (使用TensorFlow DeepLab进行语义分割)

Reference: kitti_deeplab (the repository that provides the inference.py and helper.py used below)

#Run the script inference.py:
python inference.py path_to_frozen_inference_graph.pb path_to_image_folder

# Inference on the KITTI training images:
python '/media/whu/HD_CHEN_2T/02data/kitti_semanti/kitti_deeplab/inference.py' \
'/home/whu/venv/tensorflow/models/research/deeplab/datasets/cityscapes/exp/train_on_train_set/export/frozen_inference_graph.pb' \
'/media/whu/HD_CHEN_2T/02data/kitti_semanti/data_semanti/training/image_2'

# Inference on the Cityscapes val images (lindau):
python '/media/whu/HD_CHEN_2T/02data/kitti_semanti/kitti_deeplab/inference.py' \
'/home/whu/venv/tensorflow/models/research/deeplab/datasets/cityscapes/exp/train_on_train_set/export/frozen_inference_graph.pb' \
'/media/whu/HD_CHEN_2T/02data/cityscapes/leftImg8bit/val/lindau'

Because the inference.py from kitti_deeplab does not generate the image_label output needed for evaluation, I modified it as follows:

# inference.py
import os
import sys
import scipy.misc          # "import scipy" alone is not enough for scipy.misc.imread below
import numpy as np
# Machine-specific workaround: drop the ROS Python 2 path so the system cv2 can be imported.
sys.path.remove('/opt/ros/kinetic/lib/python2.7/dist-packages')
import cv2
import tensorflow as tf
from helper import logits2image
from helper import logits2label

def load_graph(frozen_graph_filename):
    # We load the protobuf file from the disk and parse it to retrieve the 
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="prefix")
    return graph

graph = load_graph(sys.argv[1])
image_dir = sys.argv[2]

# DeepLabv3+ input and output tensors. Despite the variable name, SemanticPredictions
# already holds the per-pixel class (train ID) map rather than softmax scores.
image_input = graph.get_tensor_by_name('prefix/ImageTensor:0')
softmax = graph.get_tensor_by_name('prefix/SemanticPredictions:0')

# Create the output directories. Note that plain string concatenation is used, so when
# image_dir has no trailing slash they are created as siblings of the input folder,
# e.g. "image_2segmented_images_label" (this matters for the evaluation paths below).
if not os.path.exists(image_dir+'segmented_images/'):
    os.mkdir(image_dir+'segmented_images/')
if not os.path.exists(image_dir+'segmented_images_label/'):
    os.mkdir(image_dir+'segmented_images_label/')
if not os.path.exists(image_dir+'segmented_images_colored/'):
    os.mkdir(image_dir+'segmented_images_colored/')

image_dir_segmented = image_dir+'segmented_images/'
image_dir_segmented_label = image_dir+'segmented_images_label/'
image_dir_segmented_colored = image_dir+'segmented_images_colored/'

with tf.Session(graph=graph) as sess:
    for fname in sorted(os.listdir(image_dir)):
        if fname.endswith(".png"):
            img = scipy.misc.imread(os.path.join(image_dir, fname))
            img = np.expand_dims(img, axis=0)
            probs = sess.run(softmax, {image_input: img})
            img = np.squeeze(probs)
            # 1) raw train-ID map
            cv2.imwrite(image_dir_segmented+fname, img)
            # 2) Cityscapes eval-ID (label) map, used by the evaluation scripts
            img_label = logits2label(img)
            cv2.imwrite(image_dir_segmented_label+fname, img_label)
            # 3) color-coded visualization
            img_colored = logits2image(img)
            img_colored = cv2.cvtColor(img_colored, cv2.COLOR_BGR2RGB)
            cv2.imwrite(image_dir_segmented_colored+fname, img_colored)
            print(fname)
#helper.py
import numpy as np

# 19 Cityscapes train-class colors, plus black (index 19) for ignored pixels (train ID 255).
CITYSCAPE_PALLETE = np.asarray([
    [128, 64, 128],
    [244, 35, 232],
    [70, 70, 70],
    [102, 102, 156],
    [190, 153, 153],
    [153, 153, 153],
    [250, 170, 30],
    [220, 220, 0],
    [107, 142, 35],
    [152, 251, 152],
    [70, 130, 180],
    [220, 20, 60],
    [255, 0, 0],
    [0, 0, 142],
    [0, 0, 70],
    [0, 60, 100],
    [0, 80, 100],
    [0, 0, 230],
    [119, 11, 32],
    [0, 0, 0]], dtype=np.uint8)

# Mapping from train ID (0-18) to the official Cityscapes eval (label) ID.
CITYSCAPES_TRAIN_ID_TO_EVAL_ID = np.asarray([7, 8, 11, 12, 13, 17, 19, 20, 21, 22,
                                   23, 24, 25, 26, 27, 28, 31, 32, 33], dtype=np.uint8)

#width = 1242
#height = 375

def logits2image(logits):
    [height,width]=logits.shape
    print ('size:',height,width)
    logits = logits.astype(np.uint8)
    image = np.empty([height,width,3],dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            if(logits[i,j] == 255):
                image[i,j,:] = CITYSCAPE_PALLETE[19,:]
            else:
                image[i,j,:] = CITYSCAPE_PALLETE[logits[i,j],:]
    image = image.astype(np.uint8)
    return image


def logits2label(logits):
    [height,width]=logits.shape
    print ('size:',height,width)
    logits = logits.astype(np.uint8)
    # Use zeros so that ignore pixels (train ID 255) end up as eval ID 0 (unlabeled)
    # instead of uninitialized memory from np.empty.
    image = np.zeros([height,width,1],dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            if(logits[i,j]!= 255):
                image[i,j] = CITYSCAPES_TRAIN_ID_TO_EVAL_ID[logits[i,j]]
    image = image.astype(np.uint8)
    return image
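A side note on the two helpers above: the per-pixel Python loops are slow on full-resolution Cityscapes images (1024x2048). Since both functions are plain lookup tables indexed by the train ID, they can be vectorized with NumPy fancy indexing. A sketch of equivalent replacements (same mapping, untested here):

# helper_vectorized.py: same mapping as logits2image/logits2label, without loops
import numpy as np

from helper import CITYSCAPE_PALLETE, CITYSCAPES_TRAIN_ID_TO_EVAL_ID


def logits2image_fast(logits):
    # Map the ignore value 255 to the last palette entry (black), then look up colors.
    ids = logits.astype(np.int64)
    ids[ids == 255] = 19
    return CITYSCAPE_PALLETE[ids]            # (H, W, 3) uint8


def logits2label_fast(logits):
    # Train IDs -> Cityscapes eval IDs; ignore pixels (255) become 0 (unlabeled).
    # Returned as (H, W) instead of (H, W, 1); cv2.imwrite accepts both.
    ids = logits.astype(np.int64)
    label = np.zeros(ids.shape, dtype=np.uint8)
    valid = ids != 255
    label[valid] = CITYSCAPES_TRAIN_ID_TO_EVAL_ID[ids[valid]]
    return label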

2.5 Evaluation

Since the file layouts of Cityscapes and KITTI differ slightly, the two evaluations are handled separately.

2.5.1 Cityscapes

1) Download cityscapesScripts. With cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py you can compare the ground-truth *_gtFine_labelIds.png images against the image_label predictions from the network and compute the per-class IoU and per-category IoU.

2) Download evalPixelLevelSemanticLabeling_trainId.py, which compares the ground-truth *_gtFine_labelTrainIds.png images against the image_TrainIds predictions and likewise computes class and category IoU. Download link: https://download.csdn.net/download/cxiazaiyu/10637603

3) Mind the path settings when running the scripts. The two evaluations give essentially the same accuracy; the label-ID evaluation is normally the one to report.

# CITYSCAPES_RESULTS: folder containing the predicted label-ID images; CITYSCAPES_DATASET: Cityscapes root.
export CITYSCAPES_RESULTS='/media/whu/HD_CHEN_2T/02data/cityscapes/leftImg8bit/val/result_label'
export CITYSCAPES_DATASET='/media/whu/HD_CHEN_2T/02data/cityscapes'

# From /home/whu/venv/tensorflow/models/research/deeplab/cityscapesScripts
python cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py

# or, with an absolute path:
python '/home/whu/venv/tensorflow/models/research/deeplab/cityscapesScripts/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py'

https://python.ctolib.com/fregu856-deeplabv3.html

https://blog.csdn.net/Cxiazaiyu/article/details/81866173
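As a rough cross-check that does not depend on cityscapesScripts, per-class IoU can also be computed directly from the confusion matrix between the ground-truth *_gtFine_labelIds.png images and the predicted image_label output. A simplified sketch (pixels outside the 19 eval classes are ignored, which is slightly looser than the official script):

# quick_miou.py: rough per-class IoU between predicted and ground-truth label-ID images
import numpy as np
import cv2

EVAL_IDS = [7, 8, 11, 12, 13, 17, 19, 20, 21, 22,
            23, 24, 25, 26, 27, 28, 31, 32, 33]

# Lookup table: Cityscapes eval ID -> contiguous index 0..18, -1 for ignored classes.
LUT = np.full(256, -1, dtype=np.int64)
for i, eval_id in enumerate(EVAL_IDS):
    LUT[eval_id] = i

def confusion(pred_png, gt_png, num=len(EVAL_IDS)):
    pred = LUT[cv2.imread(pred_png, cv2.IMREAD_GRAYSCALE).ravel()]
    gt = LUT[cv2.imread(gt_png, cv2.IMREAD_GRAYSCALE).ravel()]
    keep = (pred >= 0) & (gt >= 0)
    return np.bincount(gt[keep] * num + pred[keep], minlength=num * num).reshape(num, num)

def miou(cm):
    inter = np.diag(cm).astype(np.float64)
    union = cm.sum(0) + cm.sum(1) - np.diag(cm)
    present = union > 0
    return (inter[present] / union[present]).mean()

# Example usage (file pairing and summing the matrices over all images is up to the caller):
# cm = confusion('pred_label.png', 'xxx_gtFine_labelIds.png')
# print('mIoU:', miou(cm))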

2.5.2 KITTI

Download devkit_semantics.zip from http://www.cvlibs.net/datasets/kitti/eval_semseg.php?benchmark=semantics2015

Note: the root directory path must not contain the string "semantic".

Alternatively, change line 132 of evalPixelLevelSemanticLabeling.py to config.evalInstLevelScore = False so that the instance-level evaluation is skipped.

python evalPixelLevelSemanticLabeling.py predictionPath groundTruthPath 
# From devkit_semantics 
python '/media/whu/HD_CHEN_2T/02data/kitti_semanti/devkit_semantics/devkit/evaluation/evalPixelLevelSemanticLabeling.py' \
'/media/whu/HD_CHEN_2T/02data/kitti_semanti/data_semanti/training/image_2segmented_images_label' \
'/media/whu/HD_CHEN_2T/02data/kitti_semanti/data_semanti/training/semantic' 
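The KITTI evaluation script typically stops as soon as a prediction is missing, so it helps to confirm beforehand that every ground-truth image has a prediction with the same file name (paths copied from the command above):

# check_kitti_pairs.py: make sure every GT image has a same-named prediction
import os

pred_dir = '/media/whu/HD_CHEN_2T/02data/kitti_semanti/data_semanti/training/image_2segmented_images_label'
gt_dir = '/media/whu/HD_CHEN_2T/02data/kitti_semanti/data_semanti/training/semantic'

pred = set(f for f in os.listdir(pred_dir) if f.endswith('.png'))
gt = set(f for f in os.listdir(gt_dir) if f.endswith('.png'))

print('predictions: %d, ground truth: %d' % (len(pred), len(gt)))
print('ground-truth images without a prediction:', sorted(gt - pred)[:10])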

 
