Reproducing YOLOP on Custom Data

I. Introduction to the YOLOP Algorithm

YOLOP is a panoptic driving perception algorithm that handles three vision tasks jointly: object detection, drivable area segmentation, and lane line detection. It is the first work to run all three perception tasks simultaneously in real time, at 23 FPS on an embedded Jetson TX2, while maintaining excellent accuracy, and the code has just been open-sourced!
Today, let's reproduce it together.

II. The Dataset

1. Data

YOLOP is an open-source algorithm built on the BDD100K dataset (BDD official site, full BDD100K data, all BDD100K images, all BDD100K image labels).

I am also sharing a cloud-drive link here (extraction code: r9or).
If you want to get started right away, I have organized a subset of 3,000 images; download link: https://download.csdn.net/download/small_wu/85150937

The organized data should look like this (a small consistency-check sketch follows the figure below):

da_seg_annotations: drivable-area mask images
det_annotations: per-image detection label JSON files, containing all object categories to be detected
images: original images
ll_seg_annotations: lane-line mask images
[Figure 1: directory layout of the organized dataset]
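Before training, it is worth checking that the four folders are consistent. Below is a minimal sketch, assuming that images and masks share the same file stem and that masks are saved as .png (as the script in the next section produces); the root path is a placeholder you should point at your own data:

import os

root = "/path/to/your/dataset"  # placeholder: change to your own root folder

for name in sorted(os.listdir(os.path.join(root, "images"))):
    stem = os.path.splitext(name)[0]
    expected = [
        os.path.join(root, "det_annotations", stem + ".json"),     # detection labels
        os.path.join(root, "da_seg_annotations", stem + ".png"),   # drivable-area mask
        os.path.join(root, "ll_seg_annotations", stem + ".png"),   # lane-line mask
    ]
    missing = [p for p in expected if not os.path.exists(p)]
    if missing:
        print(name, "is missing:", missing)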

2. Code

I have also put together part of the data-preparation code. The script below generates the drivable-area and lane-line mask images:

from matplotlib.path import Path
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.image as mpimg
import os
import json 
import numpy as np

from tqdm import tqdm


def poly2patch(poly2d, closed=False, alpha=1., color=None):
    moves = {'L': Path.LINETO,
             'C': Path.CURVE4}
    points = [p[:2] for p in poly2d]
    codes = [moves[p[2]] for p in poly2d]
    codes[0] = Path.MOVETO

    if closed:
        points.append(points[0])
        codes.append(Path.CLOSEPOLY)


    return mpatches.PathPatch(
        Path(points, codes),
        facecolor=color if closed else 'none',
        edgecolor=color,  # if not closed else 'none',
        lw=1 if closed else 2 * 4, alpha=alpha,  # lw controls the line thickness
        antialiased=False, snap=True)


def get_areas_v0(objects):
    # print(objects['category'])
    return [o for o in objects
            if 'poly2d' in o and o['category'].startswith('area')]


def get_lanes_v0(objects):
    return [o for o in objects
            if 'poly2d' in o and o['category'].startswith('lane')]

def draw_lane(objects, ax):  # build the lane-line mask
    plt.draw()

    objects = get_lanes_v0(objects)
    for obj in objects:
        # print(obj['category'])
        # if 'lane' in obj['category'] and obj['category'] != 'lane/road curb':
        if 'lane' in obj['category']:
            color = (1, 1, 1)
        else:
            color = (0, 0, 0)

        # alpha = 0.5
        alpha = 1.0
        poly2d = obj['poly2d']
        ax.add_patch(poly2patch(
                poly2d, closed=False,
                alpha=alpha, color=color))

    ax.axis('off')

def draw_drivable(objects, ax):  # build the drivable-area mask
    plt.draw()

    objects = get_areas_v0(objects)
    for obj in objects:
        if obj['category'] == 'area/drivable':
            color = (1, 1, 1)
        # elif obj['category'] == 'area/alternative':
        #     color = (0, 1, 0)
        else:
            if obj['category'] != 'area/alternative':
                print(obj['category'])
            color = (0, 0, 0)
        # alpha = 0.5
        alpha = 1.0
        poly2d = obj['poly2d']
        ax.add_patch(poly2patch(
                poly2d, closed=True,
                alpha=alpha, color=color))

    ax.axis('off')

def filter_pic(data):
    # Keep the image only if it has at least one lane or drivable-area annotation.
    for obj in data:
        if obj['category'].startswith('lane') or obj['category'].startswith('area'):
            return True
    return False

def main(mode="train"):
    image_dir = "/home/wqg/data/BDD100K/bdd100k/images/{}".format(mode)
    val_dir = "/home/wqg/data/BDD100K/bdd100k/labels/{}".format(mode)
    out_dir = '/home/wqg/data/BDD100K/bdd100k/test_bdd_seg_gt/{}'.format(mode)
    if not os.path.exists(out_dir):
        os.makedirs(out_dir)
    val_list = os.listdir(val_dir)
    # val_pd = pd.read_json(open(val_json))
    # print(val_pd.head())

    for val_json in tqdm(val_list):
        # val_json = 'a4ffac5d-9511118a.json'
        val_json = os.path.join(val_dir, val_json)
        val_pd = json.load(open(val_json))
        data = val_pd['frames'][0]['objects']

        img_name = val_pd['name']

        remain = filter_pic(data)
        # if remain:
        dpi = 80
        w = 16
        h = 9
        image_width = 1280
        image_height = 720
        fig = plt.figure(figsize=(w, h), dpi=dpi)
        ax = fig.add_axes([0.0, 0.0, 1.0, 1.0], frameon=False)
        out_path = os.path.join(out_dir, img_name+'.png')
        ax.set_xlim(0, image_width - 1)
        ax.set_ylim(0, image_height - 1)
        ax.invert_yaxis()
        ax.add_patch(poly2patch(
            [[0, 0, 'L'], [0, image_height - 1, 'L'],
            [image_width - 1, image_height - 1, 'L'],
            [image_width - 1, 0, 'L']],
            closed=True, alpha=1., color=(0, 0, 0)))
        if remain:
            # draw_drivable(data, ax)
            draw_lane(data, ax)   # draw lane lines (swap in draw_drivable for the drivable-area mask)
        fig.savefig(out_path, dpi=dpi)
        plt.close()

if __name__ == '__main__':
    main(mode='val')
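For reference, a single label JSON that this script can consume would look roughly like the dict below. The structure is inferred from the fields the script reads (name, frames, objects, category, poly2d); the coordinates are made-up placeholder values, and each poly2d vertex is [x, y, type], where 'L' marks a straight segment and 'C' a Bezier control point:

# Hypothetical minimal label, matching what the script above expects.
example_label = {
    "name": "a4ffac5d-9511118a",   # image name without extension
    "frames": [{
        "objects": [
            {   # open polyline for a lane marking
                "category": "lane/single white",
                "poly2d": [[100.0, 500.0, "L"], [400.0, 300.0, "L"]],
            },
            {   # closed polygon for the drivable area
                "category": "area/drivable",
                "poly2d": [[0.0, 700.0, "L"], [640.0, 400.0, "L"], [1279.0, 700.0, "L"]],
            },
        ],
    }],
}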

III. Reproducing the Algorithm

1. Source code

Git repository: https://github.com/hustvl/YOLOP

2. Code changes

./lib/config/default.py holds all of the parameters that need to be set for training.

Modify the dataset paths (a sketch of the relevant settings follows the figure below):

[Figure 2: dataset path settings in lib/config/default.py]
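A hedged sketch of the path settings: the attribute names below follow the upstream yacs-style config and should be verified against your own copy of ./lib/config/default.py, and the paths are placeholders pointing at the folders prepared in section II:

# In ./lib/config/default.py (attribute names assumed from the upstream config)
_C.DATASET.DATAROOT = '/home/wqg/data/BDD100K/bdd100k/images'               # original images
_C.DATASET.LABELROOT = '/home/wqg/data/BDD100K/bdd100k/det_annotations'     # detection JSON labels
_C.DATASET.MASKROOT = '/home/wqg/data/BDD100K/bdd100k/da_seg_annotations'   # drivable-area masks
_C.DATASET.LANEROOT = '/home/wqg/data/BDD100K/bdd100k/ll_seg_annotations'   # lane-line masks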

GPU and batch_size settings:
Note: the official config lists two GPUs as a tuple, and the effective training batch size is len(GPUS) * BATCH_SIZE_PER_GPU. If you switch to a single GPU, keep the value a sequence: either use a list, or write the tuple as (0,) and remember the trailing comma.
WORKERS is best set to 0.
Set BATCH_SIZE_PER_GPU = 2 (for roughly 6 GB of GPU memory); a minimal sketch of these settings is shown below.
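A minimal sketch of those three settings, again assuming the upstream yacs attribute names in ./lib/config/default.py (check them against your own copy):

_C.GPUS = (0,)                    # single GPU: keep the trailing comma so it stays a tuple
_C.WORKERS = 0                    # dataloader worker processes; 0 is the safest default
_C.TRAIN.BATCH_SIZE_PER_GPU = 2   # fits in roughly 6 GB of GPU memory
# effective batch size = len(_C.GPUS) * _C.TRAIN.BATCH_SIZE_PER_GPU = 1 * 2 = 2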

[Figure 3: GPUS and WORKERS settings]

[Figure 4: BATCH_SIZE_PER_GPU setting]

3. Start training

# Start training
python tools/train.py

4. Model inference

# Run inference with the model weights
python tools/test.py --weights weights/End-to-end.pth
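If your checkout also contains the upstream demo script (tools/demo.py in the YOLOP repo), you can additionally visualize predictions on a folder of images; the flags below follow the upstream README and the source path is a placeholder:

# Visualize detections, drivable area, and lane lines on sample images
python tools/demo.py --weights weights/End-to-end.pth --source inference/images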

That's all for today. I have since modified the model into a two-task cascaded model that detects only vehicles and lane lines; leave a comment if you need it.

Update (June 21): the two-task YOLOP model is on GitHub: https://github.com/qinggangwu/YOLOP_revise

References:

1. https://blog.csdn.net/Dora_blank/article/details/120070490
2. https://blog.csdn.net/qq583083658
