Training on Your Own Data with the PaddleDetection Toolkit

PaddleDetection implements a range of mainstream object detection algorithms with a modular design, provides rich modules for data augmentation, network components, loss functions and more, and integrates model compression and cross-platform high-performance deployment. This article walks through training on your own data, using YOLOv4 (Pascal VOC format) and Faster R-CNN (COCO format) as examples.

I. Environment Setup

1. python=3.6
2. paddlepaddle=1.8.0
   Official installation guide: https://paddlepaddle.org.cn/install/quick
3. Install the COCO API (a quick import check is sketched after this list)
   $ pip install "git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI"
4. Clone the PaddleDetection repository
   $ cd 
   $ git clone https://github.com/PaddlePaddle/PaddleDetection.git
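
A quick way to confirm that the COCO API installed correctly is a minimal import check (a sketch):

# Verify that pycocotools can be imported (sanity check for step 3).
from pycocotools.coco import COCO
print('pycocotools OK')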

II. Dataset Preparation

PaddleDetection supports the COCO and Pascal VOC dataset formats by default. The preparation workflow for each format is described below.

Pascal VOC dataset preparation:

Annotate the images with LabelImg. The Annotations folder holds the .xml annotation files, the JPEGImages folder holds the images, and the .txt split files go under ImageSets/Main. The following script generates the .txt files:

# -*- coding: utf-8 -*-
# Run from the VOCdevkit/VOC2007 directory.
# Splits the annotated samples into trainval.txt and val.txt under ImageSets/Main.
import os
import random

trainval_percent = 0.66          # fraction of samples assigned to trainval; the rest go to val
xml_dir = 'Annotations'
txt_save_dir = os.path.join('ImageSets', 'Main')
os.makedirs(txt_save_dir, exist_ok=True)

total_xml = [f for f in os.listdir(xml_dir) if f.endswith('.xml')]
num = len(total_xml)
indices = range(num)
tv = int(num * trainval_percent)
trainval = set(random.sample(indices, tv))

ftrainval = open(os.path.join(txt_save_dir, 'trainval.txt'), 'w')
fval = open(os.path.join(txt_save_dir, 'val.txt'), 'w')

for i in indices:
    name = total_xml[i][:-4] + '\n'   # strip the .xml suffix
    if i in trainval:
        ftrainval.write(name)
    else:
        fval.write(name)

ftrainval.close()
fval.close()

The resulting layout under VOCdevkit/VOC2007:
 VOCdevkit/VOC2007
    ├── Annotations
          ├── 001789.xml
          |   ...
    ├── JPEGImages
          ├── 001789.jpg
          |   ...
    ├── ImageSets
          ├── Main
               ├── trainval.txt
               ├── val.txt

$ cd
$ python dataset/voc/create_list.py
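
To sanity-check the lists that create_list.py writes under dataset/voc/, a small sketch can be used (assuming each line pairs an image path with its annotation path, separated by whitespace and relative to dataset/voc/ -- an assumption about the list format):

# Count entries and missing files in the generated list files (assumed format:
# "<image_path> <xml_path>" per line, relative to dataset/voc/).
import os

root = 'dataset/voc'
for split in ('train.txt', 'val.txt', 'test.txt'):
    path = os.path.join(root, split)
    if not os.path.exists(path):
        continue
    with open(path) as f:
        lines = [l.split() for l in f if l.strip()]
    missing = sum(1 for parts in lines for p in parts
                  if not os.path.exists(os.path.join(root, p)))
    print('%s: %d entries, %d missing files' % (split, len(lines), missing))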

The Pascal VOC dataset directory under dataset/voc/ then looks like:
dataset/voc/
  ├── train.txt
  ├── val.txt
  ├── test.txt
  ├── label_list.txt (optional)
  ├── VOCdevkit/VOC2007
  │   ├── Annotations
  │       ├── 001789.xml
  │       |   ...
  │   ├── JPEGImages
  │       ├── 001789.jpg
  │       |   ...
  │   ├── ImageSets
  │       |   ...

Note: if use_default_label=False is set in the YAML config file, the class list is read from label_list.txt; otherwise, or if there is no label_list.txt file, the default Pascal VOC class list is used. A minimal way to generate label_list.txt is sketched below.
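
A minimal sketch for writing label_list.txt, assuming the common one-class-name-per-line format (replace the example class with your own):

# Write dataset/voc/label_list.txt with one class name per line (assumed format).
classes = ['fire']  # example class used later in this article; replace with your own
with open('dataset/voc/label_list.txt', 'w') as f:
    for name in classes:
        f.write(name + '\n')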

COCO dataset preparation:
coco
 ├── Annotations
      ├── 001789.xml
      |   ...
 ├── JPEGImages
      ├── 001789.jpg
      |   ...

Annotate the images with LabelImg: the Annotations folder holds the annotation files and the JPEGImages folder holds the images.
Convert the annotations to .csv format:

# -*- coding: utf-8 -*-
# Convert Pascal VOC .xml annotations into annotations.csv / classes.csv.
import csv
import glob
import os
import sys


class PascalVOC2CSV(object):
    def __init__(self, xml=[], ann_path='./annotations.csv', classes_path='./classes.csv'):
        '''
        :param xml: list of paths to all Pascal VOC xml files
        :param ann_path: output path of the annotation csv
        :param classes_path: output path of the class list csv
        '''
        self.xml = xml
        self.ann_path = ann_path
        self.classes_path = classes_path
        self.label = []
        self.anno = []
        self.data_transfer()
        self.write_file()

    def data_transfer(self):
        for num, xml_file in enumerate(self.xml):
            try:
                # progress output
                sys.stdout.write('\r>> Converting image %d/%d' % (num + 1, len(self.xml)))
                sys.stdout.flush()
                with open(xml_file, 'r') as fp:
                    for p in fp:
                        if '<filename>' in p:
                            self.filen_ame = p.split('>')[1].split('<')[0]
                        if '<object>' in p:
                            # the 9 tag lines after <object>: name, pose, truncated,
                            # difficult, bndbox, xmin, ymin, xmax, ymax
                            d = [next(fp).split('>')[1].split('<')[0] for _ in range(9)]
                            self.supercategory = d[0]          # class name
                            if self.supercategory not in self.label:
                                self.label.append(self.supercategory)
                            # bounding box
                            x1, y1, x2, y2 = int(d[-4]), int(d[-3]), int(d[-2]), int(d[-1])
                            self.anno.append([os.path.join('JPEGImages', self.filen_ame),
                                              x1, y1, x2, y2, self.supercategory])
            except Exception:
                # skip malformed xml files
                continue
        sys.stdout.write('\n')
        sys.stdout.flush()

    def write_file(self):
        with open(self.ann_path, 'w', newline='') as fp:
            csv_writer = csv.writer(fp, dialect='excel')
            csv_writer.writerows(self.anno)

        class_name = sorted(self.label)
        class_ = [[name, num] for num, name in enumerate(class_name)]
        with open(self.classes_path, 'w', newline='') as fp:
            csv_writer = csv.writer(fp, dialect='excel')
            csv_writer.writerows(class_)


xml_file = glob.glob('./Annotations/*.xml')
PascalVOC2CSV(xml_file)
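
An optional quick check of the generated files (a sketch; adjust the paths if you wrote them elsewhere):

# Print a short summary of annotations.csv and classes.csv.
import csv

for path in ('./annotations.csv', './classes.csv'):
    with open(path, newline='') as fp:
        rows = list(csv.reader(fp))
    print('%s: %d rows' % (path, len(rows)))
    for row in rows[:3]:
        print('   ', row)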
  

Convert the .csv files to COCO format:

import os
import json
import numpy as np
import pandas as pd
import glob
import cv2
import shutil
from sklearn.model_selection import train_test_split
np.random.seed(41)

# id 0 is reserved for the background class
classname_to_id = {"fire": 1}  # change to your own classes

class Csv2CoCo:
    def __init__(self,image_dir,total_annos):
        self.images = []
        self.annotations = []
        self.categories = []
        self.img_id = 0
        self.ann_id = 0
        self.image_dir = image_dir
        self.total_annos = total_annos
    def save_coco_json(self, instance, save_path):
        json.dump(instance, open(save_path, 'w'), ensure_ascii=False, indent=2)  # indent=2 for readable output
    # Build the COCO structure from the csv annotations
    def to_coco(self, keys):
        self._init_categories()
        for key in keys:
            self.images.append(self._image(key))
            shapes = self.total_annos[key]
            for shape in shapes:
                bboxi = []
                for cor in shape[:-1]:
                    bboxi.append(int(cor))
                label = shape[-1]
                annotation = self._annotation(bboxi,label)
                self.annotations.append(annotation)
                self.ann_id += 1
            self.img_id += 1
        instance = {}
        instance['info'] = 'spytensor created'
        instance['license'] = ['license']
        instance['images'] = self.images
        instance['annotations'] = self.annotations
        instance['categories'] = self.categories
        return instance
    # build the categories field
    def _init_categories(self):
        for k, v in classname_to_id.items():
            category = {}
            category['id'] = v
            category['name'] = k
            self.categories.append(category)
    # build a COCO image entry
    def _image(self, path):
        image = {}
        img = cv2.imread(self.image_dir + path)
        image['height'] = img.shape[0]
        image['width'] = img.shape[1]
        image['id'] = self.img_id
        image['file_name'] = path
        return image
    # build a COCO annotation entry
    def _annotation(self, shape, label):
        points = shape[:4]
        annotation = {}
        annotation['id'] = self.ann_id
        annotation['image_id'] = self.img_id
        annotation['category_id'] = int(classname_to_id[label])
        annotation['segmentation'] = self._get_seg(points)
        annotation['bbox'] = self._get_box(points)
        annotation['iscrowd'] = 0
        annotation['area'] = 1.0  # placeholder value, not the true box area
        return annotation
    # COCO bbox format: [x, y, width, height]
    def _get_box(self, points):
        min_x = points[0]
        min_y = points[1]
        max_x = points[2]
        max_y = points[3]
        return [min_x, min_y, max_x - min_x, max_y - min_y]
    # segmentation: approximate the box as an 8-point polygon
    def _get_seg(self, points):
        min_x = points[0]
        min_y = points[1]
        max_x = points[2]
        max_y = points[3]
        h = max_y - min_y
        w = max_x - min_x
        a = []
        a.append([min_x,min_y, min_x,min_y+0.5*h, min_x,max_y, min_x+0.5*w,max_y, max_x,max_y, max_x,max_y-0.5*h, max_x,min_y, max_x-0.5*w,min_y])
        return a
if __name__ == '__main__':
    ## change these paths to your own
    csv_file = "path/to/annotations.csv"
    image_dir = "path/to/JPEGImages/"
    saved_coco_path = "save/to/"
    # aggregate the csv annotations by image
    total_csv_annotations = {}
    annotations = pd.read_csv(csv_file,header=None).values
    for annotation in annotations:
        key = annotation[0].split(os.sep)[-1]
        value = np.array([annotation[1:]])
        if key in total_csv_annotations.keys():
            total_csv_annotations[key] = np.concatenate((total_csv_annotations[key],value),axis=0)
        else:
            total_csv_annotations[key] = value
    # split the data by image key
    total_keys = list(total_csv_annotations.keys())
    train_keys, val_keys = train_test_split(total_keys, test_size=0.2)
    print("train_n:", len(train_keys), 'val_n:', len(val_keys))
    ## create the required folders (the 'steel' sub-directory name is inherited from the original script; rename as needed)
    if not os.path.exists('%ssteel/annotations/'%saved_coco_path):
        os.makedirs('%ssteel/annotations/'%saved_coco_path)
    if not os.path.exists('%ssteel/images/train/'%saved_coco_path):
        os.makedirs('%ssteel/images/train/'%saved_coco_path)
    if not os.path.exists('%ssteel/images/val/'%saved_coco_path):
        os.makedirs('%ssteel/images/val/'%saved_coco_path)
    ## convert the training split to COCO json format
    l2c_train = Csv2CoCo(image_dir=image_dir,total_annos=total_csv_annotations)
    train_instance = l2c_train.to_coco(train_keys)
    l2c_train.save_coco_json(train_instance, '%ssteel/annotations/instances_train.json'%saved_coco_path)
    for file in train_keys:
        shutil.copy(image_dir+file,"%ssteel/images/train/"%saved_coco_path)
    for file in val_keys:
        shutil.copy(image_dir+file,"%ssteel/images/val/"%saved_coco_path)
    ## convert the validation split to COCO json format
    l2c_val = Csv2CoCo(image_dir=image_dir,total_annos=total_csv_annotations)
    val_instance = l2c_val.to_coco(val_keys)
    l2c_val.save_coco_json(val_instance, '%ssteel/annotations/instances_val.json'%saved_coco_path)
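
Once the json files are written, the COCO API installed in step I can be used for a quick check (a sketch; adjust the path to your saved_coco_path):

# Load the generated annotation file and print a short summary.
from pycocotools.coco import COCO

coco = COCO('save/to/steel/annotations/instances_train.json')  # adjust to your saved_coco_path
print('images:', len(coco.getImgIds()))
print('annotations:', len(coco.getAnnIds()))
print('categories:', [c['name'] for c in coco.loadCats(coco.getCatIds())])
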
The COCO dataset directory structure is as follows:
dataset/coco/
├── annotations
│   ├── instances_train2014.json
│   ├── instances_train2017.json
│   ├── instances_val2014.json
│   ├── instances_val2017.json
│   |   ...
├── train2017
│   ├── 000000000009.jpg
│   ├── 000000580008.jpg
│   |   ...
├── val2017
│   ├── 000000000139.jpg
│   ├── 000000000285.jpg
│   |   ...
|   ...

III. Modify the Configuration Files

(1) yolov4 (Pascal VOC dataset)
     $ cd 
     $ vi yolov4_cspdarknet_voc.yml
     If a label_list.txt file is used, change num_classes to your own number of classes and set use_default_label=False (under the dataset section).
     Adjust batch_size, worker_num, base_lr and other parameters as needed (a sketch of the usual base_lr scaling rule follows this list).
(2) faster_rcnn_r50_1x (COCO dataset)
     $ cd 
     $ vi faster_rcnn_r50_1x.yml
     Change num_classes, base_lr and other parameters as needed.
(3) Open the file referenced by the _READER_ field in the .yml file and adjust the dataset paths, batch_size, worker_num and other parameters.
    Note: if the dataset was produced following step II, the dataset paths do not need to be changed.
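
When changing the total batch size, a common rule of thumb (an assumption, not a PaddleDetection requirement; the reference values below are illustrative) is to scale base_lr linearly:

# Linear learning-rate scaling (sketch): scale base_lr with the total batch size.
def scaled_base_lr(reference_lr, reference_total_batch, gpus, batch_size_per_gpu):
    """reference_lr / reference_total_batch: values the original config was tuned for."""
    total_batch = gpus * batch_size_per_gpu
    return reference_lr * total_batch / reference_total_batch

# Example: a config tuned for lr=0.01 at a total batch size of 16,
# now trained on 1 GPU with batch_size=4.
print(scaled_base_lr(0.01, 16, gpus=1, batch_size_per_gpu=4))  # -> 0.0025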

IV. Start Training

 $ cd  
(1) yolov4 training
    $ python3 tools/train.py -c configs/yolov4/yolov4_cspdarknet_voc.yml --eval --use_tb=true --tb_log_dir=tb_log_dir/scalar/
(2) faster_rcnn_r50_1x training
    $ python3 tools/train.py -c configs/faster_rcnn_r50_1x.yml --eval --use_tb=true --tb_log_dir=tb_log_dir/scalar/
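
With --use_tb enabled, the curves logged under tb_log_dir can be viewed during training (assuming TensorBoard is installed):
    $ tensorboard --logdir=tb_log_dir/scalar/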

V. Resume Training

Skip this step for a first training run.

 $ cd 
(1) yolov4 training
    $ python tools/train.py -c configs/yolov4/yolov4_cspdarknet_voc.yml \
      -o pretrain_weights=path/to/PaddleDetection/output/yolov4_cspdarknet_voc/best_model \
      --eval --use_tb=true --tb_log_dir=tb_log_dir/scalar/
(2) faster_rcnn_r50_1x training
    $ python tools/train.py -c configs/faster_rcnn_r50_1x.yml \
      -o pretrain_weights=path/to/PaddleDetection/output/faster_rcnn_r50_1x/best_model \
      --eval --use_tb=true --tb_log_dir=tb_log_dir/scalar/
 Parameter notes:
     -c: specify the configuration file
     -o: override options, e.g. pretrain_weights, weights
     --eval: evaluate during training; when training finishes, best_model.* in the model output folder is the best model
     --output_dir: directory where models are saved
     --use_tb: enable TensorBoard logging
     --tb_log_dir: directory for training logs, used to visualize the training process

VI. Model Evaluation

 $ cd  
(1) yolov4 model evaluation
    $ python tools/eval.py -c configs/yolov4/yolov4_cspdarknet_voc.yml \
      -o weights=output/yolov4_cspdarknet_voc/best_model --eval
(2) faster_rcnn_r50_1x model evaluation
    $ python tools/eval.py -c configs/faster_rcnn_r50_1x.yml \
      -o weights=output/faster_rcnn_r50_1x/best_model --eval
Parameter notes:
 -c: specify the configuration file
 -o weights: specify the model to evaluate

VII. Model Inference

 $ cd  
(1) yolov4 model inference
    $ python tools/infer.py -c configs/yolov4/yolov4_cspdarknet_voc.yml \
      --infer_dir=dataset/coco/val2017 \
      --output_dir=infer_output/ \
      --draw_threshold=0.5 \
      -o weights=output/yolov4_cspdarknet_voc/best_model
(2) faster_rcnn_r50_1x model inference
    $ python tools/infer.py -c configs/faster_rcnn_r50_1x.yml \
      --infer_dir=dataset/coco/val2017 \
      --output_dir=infer_output/ \
      --draw_threshold=0.5 \
      -o weights=output/faster_rcnn_r50_1x/best_model
 Parameter notes:
     -c: specify the configuration file
     -o weights: specify the model used for inference
     --infer_dir: run inference on every image in a folder
     --infer_img: run inference on a single image
     --output_dir: directory where visualized results are saved
     --draw_threshold: score threshold; detection boxes scoring below it are filtered out

This completes training on your own data with the PaddleDetection toolkit. For online model deployment with Paddle Serving, see the separate blog post.

References:

1.https://github.com/PaddlePaddle/PaddleDetection
2.https://paddledetection.readthedocs.io/
