Training KittiSeg on Other Datasets

After getting KittiSeg configured successfully on Windows 10, I downloaded a random image from the web to test it, and it did not work.
Result on an arbitrary image:




Result on KITTI data:



My guess is that the KITTI training set is simply too small, so the model overfits and generalizes poorly.

I decided to train my own model on the Baidu road_seg dataset, using the seg_3 data.
First the labels have to be converted into a binary (road / background) problem:

import numpy as np
import cv2

# test on a single image first
#img = cv2.imread('Label/Record057/Camera 6/171206_042426425_Camera_6_bin.png')
#print(img.shape)
#img[np.where((img != [49,49,49]).all(axis=2))] = [255,0,0]
#img[np.where((img == [49,49,49]).all(axis=2))] = [255,0,255]
#cv2.imwrite('a.png', img)
#image = scipy.misc.imresize(img, 0.6)
#print(image.shape)

files = [line for line in open('all2.txt')]
file = files[1000:]
num = 0
for path in file:
    path = path[:-1]  # strip the trailing '\n'; it counts as a character, otherwise imread gets a wrong path and returns None
    img = cv2.imread(path)
    print(img.shape)
    # everything that is not [49,49,49] (the road label in the seg_3 data) becomes background
    img[np.where((img != [49,49,49]).all(axis=2))] = [255,0,0]
    # road pixels get the KITTI road color
    img[np.where((img == [49,49,49]).all(axis=2))] = [255,0,255]
    cv2.imwrite(path, img)
    print(num)
    num = num + 1
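
A quick sanity check (my addition, not part of the original workflow): after converting a label, listing the distinct colors that remain makes it easy to confirm the relabeling really produced only two classes.

# Hypothetical check on the last converted label: should print exactly two colors
colors = np.unique(img.reshape(-1, 3), axis=0)
print(colors)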
    

Creating and splitting train.txt / val.txt
a. First build all.txt
Read the image paths and the label paths into two separate files (all1.txt and all2.txt), then merge them.

import os

# run once with 'ColorImage' -> all1.txt and once with 'Label' -> all2.txt
# (the images under ColorImage end in .jpg, so adjust the extension check accordingly)
all_file = 'all2.txt'
with open(all_file, 'w') as a:
    #for root, dirs, files in os.walk('ColorImage'):
    for root, dirs, files in os.walk('Label'):
        if len(dirs) == 0:  # only leaf directories contain the files
            for i in range(len(files)):
                if files[i][-3:] == 'png':
                    file_path = root + '/' + files[i]
                    a.writelines(file_path + '\n')

When merging the two files I found 30331 images but 30350 labels. The counts do not match, so the two lists have to be compared and unmatched entries removed.

My first attempt (commented out below) compared line by line, so only entries on the same line number were compared, which is not what is wanted. Also note that every line ends with the '\n' added earlier; it counts as a character and has to be stripped before concatenating.
#with open('all.txt','w') as a:
#    with open('all1.txt','r') as a1, open('all2.txt') as a2:
#        for line1, line2 in zip(a1,a2):
#            #print(line1[10:-5],line2[5:-9])
#            if line1[10:-5]==line2[5:-9]:
#                print('o')
#                a.writelines(line1[:-1]+' '+line2[:-1]+'\n')

with open('all.txt','w') as a:
    with open('all1.txt','r') as a1, open('all2.txt') as a2:
        lines1 = [line for line in a1]
        lines2 = [line for line in a2]
        for line1 in lines1:
            for line2 in lines2:
                # the slices strip the differing prefixes/suffixes so that only the
                # common Record/Camera/timestamp part of the two paths is compared
                if line1[10:-5] == line2[5:-9]:
                    a.writelines(line1[:-1] + ' ' + line2[:-1] + '\n')
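
The nested loop above is O(n²) over roughly 30,000 lines and takes a long time. A faster variant (a sketch I added, keeping the same [10:-5] / [5:-9] key slices) indexes the label lines in a dictionary first:

# Hypothetical faster version: build a lookup table over the label paths once,
# then find each image's label in O(1) instead of scanning all 30,000 lines.
with open('all.txt', 'w') as a, open('all1.txt') as a1, open('all2.txt') as a2:
    labels = {line2[5:-9]: line2 for line2 in a2}
    for line1 in a1:
        line2 = labels.get(line1[10:-5])
        if line2 is not None:
            a.write(line1[:-1] + ' ' + line2[:-1] + '\n')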

Randomly split into a training set and a validation set. The full dataset is large and I only want to verify the pipeline first, so I start with 1000 samples: 900 for training and 100 for validation.

from random import shuffle
def make_val_split():
    """
    Splits the Images in train and test.
    Assumes a File all.txt in data_folder.
    """

    all_file = "alls.txt"
    train_file = "trains.txt"
    test_file = "vals.txt"
    test_num = 100

    
    files = [line for line in open(all_file)]

    shuffle(files)

    train = files[:-test_num]
    test = files[-test_num:]

    #train_file = os.path.join(data_folder, train_file)
    #test_file = os.path.join(data_folder, test_file)

    with open(train_file, 'w') as file:
        for label in train:
            file.write(label)

    with open(test_file, 'w') as file:
        for label in test:
            file.write(label)
def main():
    make_val_split()

if __name__ == '__main__':
    main()

Problems encountered when running train.py:
Error 1

    image_file, gt_image_file = file.split(" ")
ValueError: too many values to unpack (expected 2)

Looking at my trains.txt I found lines like ColorImage\Record008\Camera 5/171206_030617006_Camera_5.jpg Label\Record008\Camera 5/171206_030617006_Camera_5_bin.png, so each line really does contain three spaces, because the directory name "Camera 5" itself contains one in both paths.
So the code has to be modified.
Change line 114 of kitti_seg_input.py as follows:

#image_file, gt_image_file = file.split(" ")
image_file1, image_file2, gt_image_file1, gt_image_file2 = file.split(" ")
image_file = image_file1 + " " + image_file2
gt_image_file = gt_image_file1 + " " + gt_image_file2
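
If the number of spaces per line is not always exactly three, a more general workaround (a sketch I added, assuming every image path ends in .jpg) is to split on the extension boundary instead:

# Hypothetical alternative: split on the ".jpg " boundary, so directory names
# with any number of spaces still parse correctly.
image_file, gt_image_file = file.rstrip("\n").split(".jpg ", 1)
image_file = image_file + ".jpg"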

Error 2:

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[1,64,2701,3367] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[Node: conv1_2/Conv2D = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](conv1_1/Relu, conv1_2/filter/read)]]
 Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.19GiB. Current allocation summary follows.

According to https://github.com/tensorflow/models/issues/3393 this kind of OOM is usually a batch-size problem, but my batch size is already 1. When memory is not enough, the options are to resize the images or to switch to a smaller network.
Since my images are large, (2710, 3384), I resize them first.
Following https://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.imresize.html, scipy.misc.imresize(img, 0.8) shrinks an image to 0.8 of its original size. A factor of 0.8 still ran out of memory, so I lowered it to 0.6:

img = scipy.misc.imread(image_file, mode='RGB')
image = scipy.misc.imresize(img, 0.6)
# Please update Scipy, if mode='RGB' is not available
gt_img = scipy.misc.imread(gt_image_file, mode='RGB')
gt_image = scipy.misc.imresize(gt_img, 0.6)
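
One caveat I would add (not from the original post): scipy.misc.imresize interpolates bilinearly by default, which can blend the two label colors at road boundaries and produce pixels that match neither road_color nor background_color. For the ground-truth image it is safer to use nearest-neighbour interpolation:

# Assumption: nearest-neighbour resize keeps the ground truth strictly two-colored
gt_image = scipy.misc.imresize(gt_img, 0.6, interp='nearest')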

Error 3:
An error during the evaluation step in the middle of training:

  hypes, sess, tv_graph['image_pl'], tv_graph['inf_out'])
  File "D:\work\KittiSeg-hotfix-python3.5_support\hypes\../evals/kitti_eval.py", line 71, in evaluate
    image_file, gt_file = datum.split(" ")
ValueError: too many values to unpack (expected 2)

Modify kitti_eval.py in the same way as the input pipeline above and the error goes away.
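
For reference, the analogous change around line 71 of kitti_eval.py (a sketch based on the traceback above; the post only states that it mirrors the input fix):

#image_file, gt_file = datum.split(" ")
image_file1, image_file2, gt_file1, gt_file2 = datum.split(" ")
image_file = image_file1 + " " + image_file2
gt_file = gt_file1 + " " + gt_file2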

With that done, training starts.

Testing the model trained yesterday on my own data, the predicted mask came out completely white, i.e. every pixel was predicted as road. To track down the cause I fed in validation-set images, with the same result, and then training-set images, still the same. So the problem was already there at fitting time, which points to the annotations.

Results on the validation and training sets:

Checking the .json file:

What I had specified was

"road_color" : [255,0,255],

"background_color" : [255,0,0],

unchanged from the KITTI settings. But my processed labels look blue while KITTI's look red, so something is clearly off.

Going back to my label-conversion code:


It was written to follow the KITTI colors. Checking the KITTI labels and my own with a color picker shows that the blue and red components are swapped: KITTI has the red component at 255, mine has the blue component at 255. The reason is that OpenCV reads and writes images in BGR order, so the B and R channels ended up exchanged. I therefore changed background_color in the .json file to [0,0,255].
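
An alternative fix (a sketch I added, not what the post did) is to repair it at the data-preparation step instead: write the label colors in BGR order, so that once the files are read back as RGB they match the unchanged KITTI .json values.

# Hypothetical variant of the relabeling loop above: cv2 stores pixels as BGR,
# so [0,0,255] here ends up as red ([255,0,0] in RGB) on disk,
# while [255,0,255] (magenta) is the same in both orders.
img[np.where((img != [49,49,49]).all(axis=2))] = [0,0,255]    # background
img[np.where((img == [49,49,49]).all(axis=2))] = [255,0,255]  # road
cv2.imwrite(path, img)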

Restart training.

Next time I will check the data carefully before training; this kind of bug wasted a whole day.
