Deploying a YOLOv2-tiny model to the Maix Dock development board for object detection on Windows 10

(1) Build the object detection dataset
Use the labelImg tool to annotate the images. Click Open, load the images to be annotated, and draw a bounding box around each target; labelImg then generates an annotation file for every image. The dataset is split into a train_img folder (the images) and a train_ano folder (the corresponding annotation files). As a rule of thumb, each class should have at least 40 images.
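labelImg writes one annotation file per image in the Pascal VOC XML format. As a minimal sketch (the file name, class name, and box coordinates below are made up for illustration), the boxes can be read back with the standard library:

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_text):
    """Parse a Pascal VOC-style XML string into (class, box) tuples."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        bb = obj.find("bndbox")
        # Boxes are stored as pixel corner coordinates
        box = tuple(int(bb.find(k).text) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes

# Minimal example annotation (values are illustrative only)
sample = """
<annotation>
  <filename>img_001.jpg</filename>
  <object>
    <name>class_1</name>
    <bndbox><xmin>48</xmin><ymin>32</ymin><xmax>120</xmax><ymax>96</ymax></bndbox>
  </object>
</annotation>
"""

print(parse_voc_annotation(sample))  # [('class_1', (48, 32, 120, 96))]
```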
(2) Build the training model
There are many possible architectures; this article uses YOLOv2-tiny (in practice a MobileNet backbone is more common, but this is only an example). The dataset is read in and the model is trained iteratively.
network.py

# -*- coding: utf-8 -*-
from keras.models import Model
from keras.layers import Reshape, Conv2D
import numpy as np
import cv2

from .utils.feature import create_feature_extractor


def create_yolo_network(architecture,
                        input_size,
                        nb_classes,
                        nb_box):
    feature_extractor = create_feature_extractor(architecture, input_size)
    yolo_net = YoloNetwork(feature_extractor,
                           input_size,
                           nb_classes,
                           nb_box)
    return yolo_net


class YoloNetwork(object):
    
    def __init__(self,
                 feature_extractor,
                 input_size,
                 nb_classes,
                 nb_box):
        
        # 1. create full network
        grid_size = feature_extractor.get_output_size()
        
        # make the object detection layer
        output_tensor = Conv2D(nb_box * (4 + 1 + nb_classes), (1,1), strides=(1,1),
                               padding='same', 
                               name='detection_layer_{}'.format(nb_box * (4 + 1 + nb_classes)), 
                               kernel_initializer='lecun_normal')(feature_extractor.feature_extractor.output)
        output_tensor = Reshape((grid_size, grid_size, nb_box, 4 + 1 + nb_classes))(output_tensor)
    
        model = Model(feature_extractor.feature_extractor.input, output_tensor)
        self._norm = feature_extractor.normalize
        self._model = model
        self._model.summary()
        self._init_layer()
        layer_names = [layer.name for layer in self._model.layers]
        print(layer_names)

    def _init_layer(self):
        # Re-initialize the detection head with small random weights so
        # training starts fresh on top of the pretrained backbone
        layer = self._model.layers[-2]
        weights = layer.get_weights()

        input_depth = weights[0].shape[-2]  # e.g. 2048
        new_kernel = np.random.normal(size=weights[0].shape) / input_depth
        new_bias = np.zeros_like(weights[1])

        layer.set_weights([new_kernel, new_bias])

    def load_weights(self, weight_path, by_name):
        self._model.load_weights(weight_path, by_name=by_name)
        
    def forward(self, image):
        def _get_input_size():
            input_shape = self._model.get_input_shape_at(0)
            _, h, w, _ = input_shape
            return h
            
        input_size = _get_input_size()
        image = cv2.resize(image, (input_size, input_size))
        image = self._norm(image)

        input_image = image[:, :, ::-1]  # reverse channel order (BGR <-> RGB)
        input_image = np.expand_dims(input_image, 0)

        # raw network output, shape (grid, grid, nb_box, 4 + 1 + nb_classes),
        # e.g. (13, 13, 5, 6) for one class with five anchors
        netout = self._model.predict(input_image)[0]
        return netout

    def get_model(self, first_trainable_layer=None):
        layer_names = [layer.name for layer in self._model.layers]
        fixed_layers = []
        if first_trainable_layer in layer_names:
            for layer in self._model.layers:
                if layer.name == first_trainable_layer:
                    break
                layer.trainable = False
                fixed_layers.append(layer.name)

        if fixed_layers != []:
            print("The following layers do not update weights!!!")
            print("    ", fixed_layers)
        return self._model

    def get_grid_size(self):
        _, h, w, _, _ = self._model.get_output_shape_at(-1)
        assert h == w
        return h

    def get_normalize_func(self):
        return self._norm



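As a sanity check on the shapes in network.py: the Reshape layer's output can be computed by hand, assuming the standard YOLOv2-tiny downsampling factor of 32:

```python
def yolo_output_shape(input_size, nb_box, nb_classes, stride=32):
    """Shape of the detection tensor produced by network.py's Reshape layer.

    Assumes the feature extractor downsamples the input by `stride`
    (32 for the standard YOLOv2-tiny backbone).
    """
    grid = input_size // stride
    return (grid, grid, nb_box, 4 + 1 + nb_classes)

# One class and five anchor boxes on a 416x416 input give the
# (13, 13, 5, 6) tensor mentioned in forward()'s comment:
# 4 box coordinates + 1 objectness score + 1 class probability per anchor.
print(yolo_output_shape(416, 5, 1))  # (13, 13, 5, 6)
```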

(3) Before training, set up the training environment:
conda create -n yolo python=3.6  (creates a conda environment named yolo with Python 3.6)
Activate it: conda activate yolo
Install the deep-learning frameworks into the environment; the dependencies used in this article are collected in requirements.txt.
Run pip install -r requirements.txt and pip will install everything automatically.
(4) Train the model
python train.py -c configs.json
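train.py reads its settings from configs.json. The exact schema depends on the training repository; purely as an illustrative sketch (the key names below are assumptions, not taken from the repo), such a config typically pins the architecture, input size, anchors, class labels, and dataset folders:

```python
import json

# Illustrative configs.json contents -- the key names are assumptions
# for a typical YOLOv2 training script, not the repo's exact schema.
config = {
    "model": {
        "architecture": "Tiny Yolo",
        "input_size": 224,
        # Five anchor boxes as (w, h) pairs in grid units
        "anchors": [1, 1.2, 2, 3, 4, 3, 6, 4, 5, 6.5],
        "labels": ["class_1"],
    },
    "train": {
        "train_image_folder": "train_img",
        "train_annot_folder": "train_ano",
        "batch_size": 8,
        "nb_epochs": 50,
    },
}

with open("configs.json", "w") as f:
    json.dump(config, f, indent=4)

with open("configs.json") as f:
    print(json.load(f)["model"]["labels"])  # ['class_1']
```

The anchors here match the tuple used later in the MaixPy script, and the 224 input size matches the camera window set on the board.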
(5) Convert the generated tflite file into a kmodel that the K210 can use
The conversion is done with the ncc tool (the nncase compiler); see my GitHub repo for details.
Conversion command:

ncc_0.1_win\ncc test.tflite test.kmodel -i tflite -o k210model --dataset train_img

The conversion may fail at first. The reason is that the training script saves its weights under a date-stamped filename: rename that trained model to test.tflite and move it into the root directory before running the conversion.
(6) Copy the kmodel to the SD card with a card reader, flash the MaixPy firmware to the K210, and run the following script from the MaixPy IDE over the serial connection.

import sensor,image,lcd,time
import KPU as kpu

lcd.init(freq=15000000)
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
#sensor.set_hmirror(1)
#sensor.set_vflip(1)
sensor.set_windowing((224, 224))
sensor.set_brightness(2)
#sensor.set_contrast(-1)
#sensor.set_auto_gain(1,2)

sensor.run(1)
clock = time.clock()
classes = ['class_1']
task = kpu.load('/sd/test.kmodel')  # load the converted kmodel from the SD card
anchor = (1, 1.2, 2, 3, 4, 3, 6, 4, 5, 6.5)  # 5 anchor boxes as (w, h) pairs
a = kpu.init_yolo2(task, 0.17, 0.3, 5, anchor)  # prob threshold, NMS threshold, anchor count
while(True):
    clock.tick()
    img = sensor.snapshot()
    code = kpu.run_yolo2(task, img)
    print(clock.fps())
    if code:
        for i in code:
            img.draw_rectangle(i.rect())
            lcd.draw_string(i.x(), i.y(), classes[i.classid()], lcd.RED, lcd.WHITE)
            lcd.draw_string(i.x(), i.y()+12, '%.3f' % i.value(), lcd.RED, lcd.WHITE)
            print(i.classid(), i.value())
        lcd.display(img)
    else:
        lcd.display(img)
kpu.deinit(task)

The complete code is open-sourced on my GitHub:
https://github.com/qianyuqianxun-DeepLearning/make-tiny-YOLO-v2-for-maix-dock
Reference source code:
https://github.com/TonyZ1Min/yolo-for-k210
