Training ICNet on the VOC dataset

Voc4ICnet

Purpose of the dataset preparation: train ICNet on the VOC dataset so it can be compared against BlitzNet, which performs segmentation and detection simultaneously.

I. Label creation and dataset preparation

The Pascal VOC dataset supports both object detection and segmentation, and provides semantic as well as instance segmentation labels.
The dataset used in this article merges the original pascal-voc2012 with the additional segmentation-labelled voc_aug dataset provided by B. Hariharan et al.

pascal-voc2012: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar # 2 GB
After downloading, extract it and rename the VOC2012 folder to VOC2012_orig.

voc_aug: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz # 1.3 GB
After downloading, extract it; the archive unpacks to benchmark_RELEASE (renamed below).

The labels used by the original ICNet code are grayscale images, so the label data of both datasets must be converted to grayscale before being fed in.

Extracting voc_aug yields the folder benchmark_RELEASE; rename it to VOC_aug. Its labels are .mat files, which must be converted to single-channel .png grayscale images.

The deeplab_v2 project provides a conversion script. Download the deeplab_v2 project; the script is deeplab_v2/voc2012/mat2png.py, and it takes as arguments the directory containing the .mat files and the output directory for the converted .png files.

The dataset paths used in this article:

vocdataset
    VOC2012_orig
        ImageSets
            Segmentation
                trainval.txt
        JPEGImages
        SegmentationClass
        SegmentationClass_1D
    VOC_aug
        dataset
            cls
            cls_png

The commands run in this article:

cd ~/TF-Project/deeplab_v2/voc2012
./mat2png.py /media/yue/DATA/vocdataset/VOC_aug/dataset/cls /media/yue/DATA/vocdataset/VOC_aug/dataset/cls_png
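In case mat2png.py is unavailable, the conversion it performs can be sketched as follows. This is an illustrative sketch, not the deeplab_v2 script itself; the 'GTcls' → Segmentation field layout follows the SBD release and should be verified against your own .mat files.

```python
import numpy as np
import scipy.io as sio
from PIL import Image

def mat_to_png(mat_path, png_path):
    """Convert one SBD .mat label file to a single-channel PNG.

    Assumes the SBD layout: a struct 'GTcls' whose 'Segmentation'
    field is an HxW array of class indices in 0..20.
    """
    mat = sio.loadmat(mat_path, squeeze_me=True, struct_as_record=False)
    seg = np.asarray(mat['GTcls'].Segmentation, dtype=np.uint8)
    Image.fromarray(seg, mode='L').save(png_path)
```

Looping this over VOC_aug/dataset/cls and writing into cls_png would reproduce the command above.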

The semantic labels in the original VOC2012 dataset are three-channel colour images, so they must be reduced to single-channel images:

 ./convert_labels.py /media/yue/DATA/vocdataset/VOC2012_orig/SegmentationClass/   /media/yue/DATA/vocdataset/VOC2012_orig/ImageSets/Segmentation/trainval.txt /media/yue/DATA/vocdataset/VOC2012_orig/SegmentationClass_1D/
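What convert_labels.py does is collapse the colour labels to class indices. VOC's SegmentationClass PNGs are palette images, so reading them in palette mode already yields class ids; a minimal sketch of the idea (my own, not the deeplab_v2 script):

```python
import numpy as np
from PIL import Image

def to_single_channel(src_png, dst_png):
    """Rewrite a PASCAL VOC colour label as a single-channel label map.

    VOC labels are palette ('P' mode) PNGs whose raw pixel values are
    already class indices (255 marks the void border), so it is enough
    to drop the palette and save the indices as a grayscale image.
    """
    img = Image.open(src_png)
    if img.mode != 'P':
        raise ValueError('expected a palette PNG; map RGB colours manually')
    labels = np.array(img, dtype=np.uint8)
    Image.fromarray(labels, mode='L').save(dst_png)
```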

This yields the images and labels of both datasets:

vocdataset
    VOC2012_orig
        JPEGImages # original images
        SegmentationClass_1D # single-channel labels
    VOC_aug
        dataset
            img # original images
            cls_png # single-channel labels

Then merge the two datasets' images and single-channel labels: copy the images and labels of the original pascal voc2012 dataset into the augmented dataset, overwriting any duplicates.

cd /media/yue/DATA/vocdataset
cp VOC2012_orig/SegmentationClass_1D/* VOC_aug/dataset/cls_png/
cp VOC2012_orig/JPEGImages/* VOC_aug/dataset/img/

  1. images: the img folder now holds 17125 jpg images;
  2. labels: the cls_png folder holds 11355 png images.

Next, build the training, validation and test sets and write them out as .txt list files. Since ICNet constrains the input size, all images must also be resized to the chosen input size (256x256 is used below). Then train following the usual training steps.
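The list files can be generated with a short script like the sketch below. The file names icnettrain.txt/icnetval.txt match the paths used later; the 10% validation split and the assumption that ICNet's reader expects one "image_path label_path" pair per line should be checked against your setup.

```python
import os
import random

def write_lists(img_dir, lab_dir, out_dir, val_ratio=0.1, seed=0):
    """Pair each label with its image and split into train/val lists."""
    names = sorted(os.path.splitext(f)[0] for f in os.listdir(lab_dir)
                   if f.endswith('.png'))
    random.Random(seed).shuffle(names)  # deterministic shuffle
    n_val = int(len(names) * val_ratio)
    splits = {'icnetval.txt': names[:n_val], 'icnettrain.txt': names[n_val:]}
    for fname, subset in splits.items():
        with open(os.path.join(out_dir, fname), 'w') as f:
            for n in subset:
                f.write('%s %s\n' % (os.path.join(img_dir, n + '.jpg'),
                                     os.path.join(lab_dir, n + '.png')))
```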

II. VOC training: train.py

1. The following paths and parameters must be modified; the values used in this article:

First copy train.py to train_voc.py.

# If you want to apply to other datasets, change the following four lines
DATA_DIR ='/media/yue/DATA/vocdataset'
DATA_LIST_PATH = '/media/yue/DATA/vocdataset/icnettrain.txt'
IGNORE_LABEL = 0 # The class number of background
INPUT_SIZE = '256, 256'

2. Change the loading of the pre-trained model. No pre-trained model is used here, so simply comment out the statement that loads it.
Lines 190 and 191 are changed as follows:

else:
    print('training without pre-trained model...')
    #net.load(args.restore_from, sess)

The training command:

python train_voc.py --update-mean-var --train-beta-gamma

If a pre-trained model should be loaded instead, make the following changes:

1. In icnet_cityscapes_bnnomerge.prototxt, set num_output of conv6_cls to the desired number of output classes, here 21. (Training also runs without this change; whether it causes errors has not been verified.)

2. Modify network.py: change the load function (line 63) in two places. (1) Guard session.run(var.assign(data)) with if 'conv6_cls' not in var.name: so the final classifier weights are skipped. (2) Change the ignore_missing default to True.

def load(self, data_path, session, ignore_missing=True):
        data_dict = np.load(data_path, encoding='latin1').item()
        for op_name in data_dict:
            with tf.variable_scope(op_name, reuse=True):
                for param_name, data in data_dict[op_name].items():
                    try:
                        if 'bn' in op_name:
                            param_name = BN_param_map[param_name]

                        var = tf.get_variable(param_name)
                        if 'conv6_cls' not in var.name:
                            session.run(var.assign(data))
                    except ValueError:
                        if not ignore_missing:
                            raise

3. Uncomment net.load(args.restore_from, sess).
The training command is the same as above.

III. VOC evaluation: evaluate.py

1. Following the cityscapes_param and ADE20k_param dictionaries in evaluate.py, write a voc_param with the relevant fields. This article uses:

voc_param = {'name': 'voc',
             'input_size': [256, 256],
             'num_classes': 21,
             'ignore_label': 0,
             'num_steps': 2000,
             'data_dir': '/media/yue/DATA/vocdataset',
             'data_list': '/media/yue/DATA/vocdataset/icnetval.txt'}

2. Change the others entry of model_paths as follows:

'others': './snapshots/'

3. Add 'voc' to the choices of the dataset argument:

parser.add_argument("--dataset", type=str, default='',
                        choices=['ade20k', 'cityscapes','voc'],
                        required=True)

4. Add the following branch to the preprocess function:

    elif param['name'] == 'voc':
        img = tf.expand_dims(img, axis=0)
        img = tf.image.resize_bilinear(img, shape, align_corners=True)

5. In main, add a branch for the new dataset argument:

    elif args.dataset == 'voc':
        param = voc_param

6. In main, add a voc branch where mIoU is computed:

    elif args.dataset == 'voc':
        mIoU, update_op = tf.contrib.metrics.streaming_mean_iou(pred, gt, num_classes=param['num_classes'])
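streaming_mean_iou accumulates a confusion matrix internally; a minimal numpy equivalent is handy for sanity-checking the reported number offline. This sketch averages only over classes that actually appear and does not handle the ignore label:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Confusion-matrix mIoU over flattened prediction/label arrays."""
    k = (gt >= 0) & (gt < num_classes)          # keep valid labels only
    cm = np.bincount(num_classes * gt[k].astype(int) + pred[k],
                     minlength=num_classes ** 2).reshape(num_classes,
                                                         num_classes)
    # per-class IoU = TP / (TP + FP + FN)
    iou = np.diag(cm) / np.maximum(cm.sum(0) + cm.sum(1) - np.diag(cm), 1)
    present = (cm.sum(0) + cm.sum(1)) > 0      # skip absent classes
    return iou[present].mean()
```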

Run:

 python evaluate.py --dataset=voc --filter-scale=1 --model=others 

This raises an error:

InvalidArgumentError (see above for traceback): assertion failed: [`labels` out of bound] [Condition x < y did not hold element-wise:x (mean_iou/confusion_matrix/control_dependency:0) = ] [1 1 1...] [y (mean_iou/ToInt64_2:0) = ] [8]
	 [[Node: mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert = Assert[T=[DT_STRING, DT_STRING, DT_INT64, DT_STRING, DT_INT64], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/Switch/_781, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/data_0, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/data_1, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/Switch_1/_783, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/data_3, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/Switch_2/_785)]]
	 [[Node: Gather_1/_817 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_1308_Gather_1", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]

Inspecting the semantic label images showed that a few grayscale pixels had values larger than the number of classes, suggesting the grayscale conversion went wrong. Regenerating the grayscale labels showed they were in fact correct; the error came from the resize step. After fixing the resize, evaluation runs.
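The likely pitfall here: label maps must be resized with nearest-neighbour interpolation. Bilinear resampling (the default in many tools) blends neighbouring class ids and produces values outside 0..20, which is exactly what the assertion complains about. A sketch of a safe label resize:

```python
from PIL import Image

def resize_label(src_png, dst_png, size=(256, 256)):
    """Resize a label map without inventing new class ids.

    NEAREST keeps every output pixel equal to some input pixel, so the
    set of label values can never grow during the resize.
    """
    Image.open(src_png).resize(size, Image.NEAREST).save(dst_png)
```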

After 60000 training steps, the model's mIoU is 0.00797 even though the loss has dropped to 0.5; the cause has not been found yet.

IV. VOC inference: inference.py

1. Change the others entry of model_paths to 'others': './snapshots/'.

2. Modify the following:

def main():
    args = get_arguments()

    if args.dataset == 'cityscapes':
        num_classes = cityscapes_class
    else:
        num_classes = ADE20k_class

Add a branch for voc; after the change:

    if args.dataset == 'cityscapes':
        num_classes = cityscapes_class
    elif args.dataset == 'ADE20k':
        num_classes = ADE20k_class
    elif args.dataset == 'voc':
        num_classes = voc_class

3. Modify the following:

parser.add_argument("--dataset", type=str, default='',
                        choices=['ade20k', 'cityscapes'],
                        required=True)

by adding the voc choice:

parser.add_argument("--dataset", type=str, default='',
                        choices=['ade20k', 'cityscapes','voc'],
                        required=True)

Run:

python inference.py --img-path=./input/2007_000027.jpg --model=others --dataset=voc --filter-scale=1

This raises:

Traceback (most recent call last):
  File "inference.py", line 198, in 
    main()
  File "inference.py", line 160, in main
    pred = decode_labels(raw_output_up, shape, num_classes)
  File "/home/yue/TF-Project/ICNet-tensorflow-master/tools.py", line 39, in decode_labels
    pred = tf.matmul(onehot_output, color_mat)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 1891, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 2437, in _mat_mul
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2958, in create_op
    set_shapes_for_outputs(ret)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2209, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2159, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn
    require_shape_fn)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Dimensions must be equal, but are 21 and 19 for 'MatMul' (op: 'MatMul') with input shapes: [65536,21], [19,3].


This means decode_labels fails when called. Looking into that function (in tools.py), the num_classes check needs updating. First define label_colours_voc, the colour used to draw each class; the values can be found in deeplab_v2/voc2012/utils.py (the format must be adapted to match the cityscapes label_colours):

label_colours_voc = [[  0,   0,   0],
                     [128,   0,   0],
                     [  0, 128,   0],
                     [128, 128,   0],
                     [  0,   0, 128],
                     [128,   0, 128],
                     [  0, 128, 128],
                     [128, 128, 128],
                     [ 64,   0,   0],
                     [192,   0,   0],
                     [ 64, 128,   0],
                     [192, 128,   0],
                     [ 64,   0, 128],
                     [192,   0, 128],
                     [ 64, 128, 128],
                     [192, 128, 128],
                     [  0,  64,   0],
                     [128,  64,   0],
                     [  0, 192,   0],
                     [128, 192,   0],
                     [  0,  64, 128]]

Then modify decode_labels:

def decode_labels(mask, img_shape, num_classes):
    if num_classes == 150:
        color_table = read_labelcolours(matfn)
    elif num_classes==21:
        color_table = label_colours_voc
    else:
        color_table = label_colours
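The one-hot matmul inside decode_labels is just a colour-table lookup; the numpy view below makes that concrete. The 5-row table is a truncated copy of label_colours_voc above, for illustration only:

```python
import numpy as np

# First 5 of the 21 rows of label_colours_voc defined above.
label_colours_voc = np.array([[  0,   0,   0],   # background
                              [128,   0,   0],   # aeroplane
                              [  0, 128,   0],   # bicycle
                              [128, 128,   0],   # bird
                              [  0,   0, 128]],  # boat
                             dtype=np.uint8)

def decode_mask(mask, color_table):
    """Map an HxW array of class ids to an HxWx3 colour image."""
    return color_table[mask]
```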
