Fire Detection Based on YOLOv5


1. Preparation

Downloading the YOLOv5 project

Download the YOLOv5 project code from the official repository: https://github.com/ultralytics/yolov5

Then set up the environment on your PC, i.e. install the dependencies listed in requirements.txt as usual, and download the pretrained weights you need (yolov5s, yolov5m, yolov5l, yolov5x).
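A typical setup looks like this (the repository URL is the official Ultralytics one; run the install inside your own Python environment):

git clone https://github.com/ultralytics/yolov5.git
cd yolov5
pip install -r requirements.txt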

Dataset preparation

1. Depending on your situation, you can crawl flame images from the web yourself (a minimal downloader sketch follows this list).

2. Download existing datasets from online resources; most of them are unannotated. You can refer to 工控-小白's blog (blog address), which contains download links for several fire detection datasets.
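As a minimal sketch of option 1, the snippet below downloads images from a hand-collected list of URLs; url_list.txt and the save directory are hypothetical names to replace with your own.

# download_images.py -- minimal sketch: fetch images from a list of URLs
import os
import requests

url_file = 'url_list.txt'  # hypothetical: one image URL per line
save_dir = 'raw_images'    # hypothetical output directory
os.makedirs(save_dir, exist_ok=True)

with open(url_file) as f:
    urls = [line.strip() for line in f if line.strip()]

for i, url in enumerate(urls):
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        with open(os.path.join(save_dir, f'fire_{i:04d}.jpg'), 'wb') as out:
            out.write(resp.content)
    except requests.RequestException as e:
        print(f'skipping {url}: {e}')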

If the downloaded dataset is unannotated, annotate it with labelImg and save the annotations in PascalVOC format, i.e. .xml label files. Then convert the labels to .txt files with a script that adds the class index and normalizes the coordinates. The script is as follows:

import os
import xml.etree.ElementTree as ET
from decimal import Decimal

dirpath = '/home/jiu/data_change/label_0'  # directory containing the original .xml files
newdir = '/home/jiu/data_change/labels'    # directory for the converted .txt labels

if not os.path.exists(newdir):
    os.makedirs(newdir)

for fp in os.listdir(dirpath):
    if not fp.endswith('.xml'):  # skip non-label files
        continue

    root = ET.parse(os.path.join(dirpath, fp)).getroot()

    sz = root.find('size')
    width = float(sz.find('width').text)
    height = float(sz.find('height').text)
    filename = root.find('filename').text
    print(fp)
    # note: 'a+' appends, so delete stale .txt files before re-running
    with open(os.path.join(newdir, fp.split('.')[0] + '.txt'), 'a+') as f:
        for child in root.findall('object'):  # every annotated box in the image

            sub = child.find('bndbox')        # read the box coordinates
            sub_label = child.find('name')
            xmin = float(sub.find('xmin').text)
            ymin = float(sub.find('ymin').text)
            xmax = float(sub.find('xmax').text)
            ymax = float(sub.find('ymax').text)
            try:  # convert to the YOLO label format, normalized to the (0-1) range
                x_center = Decimal(str(round(float((xmin + xmax) / (2 * width)), 6))).quantize(Decimal('0.000000'))
                y_center = Decimal(str(round(float((ymin + ymax) / (2 * height)), 6))).quantize(Decimal('0.000000'))
                w = Decimal(str(round(float((xmax - xmin) / width), 6))).quantize(Decimal('0.000000'))
                h = Decimal(str(round(float((ymax - ymin) / height), 6))).quantize(Decimal('0.000000'))
                print(str(x_center) + ' ' + str(y_center) + ' ' + str(w) + ' ' + str(h))
                # keep only the class we care about
                if sub_label.text == 'fire':
                    f.write(' '.join([str(0), str(x_center), str(y_center), str(w), str(h) + '\n']))
            except ZeroDivisionError:
                print(filename, 'has an invalid width')
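Each line of a converted .txt file has the form "class x_center y_center width height", all normalized to the (0-1) range. For a single fire box, a label file would look like this (the values are illustrative):

0 0.481250 0.533333 0.287500 0.420000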

Here I provide the fire dataset I used: 1421 images containing flames in total, split into a training set of 1200 images and a test set of 221 images; the labels are split the same way, with each image matched to its label.
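If you need to produce such a split yourself, the sketch below randomly samples 1200 images for training and copies the matching labels; the directory names are assumptions to adapt to your own layout.

# split_dataset.py -- sketch: randomly split images and labels into train/val
import os
import random
import shutil

img_dir = 'dataset/all_images'  # assumption: all images in one folder
lbl_dir = 'dataset/all_labels'  # assumption: matching .txt labels
out_root = 'dataset'
n_train = 1200

images = sorted(os.listdir(img_dir))
random.seed(0)  # fixed seed for a reproducible split
random.shuffle(images)
splits = {'train': images[:n_train], 'val': images[n_train:]}

for split, files in splits.items():
    os.makedirs(os.path.join(out_root, 'images', split), exist_ok=True)
    os.makedirs(os.path.join(out_root, 'labels', split), exist_ok=True)
    for name in files:
        stem = os.path.splitext(name)[0]
        shutil.copy(os.path.join(img_dir, name), os.path.join(out_root, 'images', split, name))
        shutil.copy(os.path.join(lbl_dir, stem + '.txt'), os.path.join(out_root, 'labels', split, stem + '.txt'))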

Dataset download link: Let's go
(Figure 1: sample images from the dataset)
txt labels:
(Figure 2: example .txt label files)
xml labels:
(Figure 3: example .xml label files)

Unzip the downloaded zip file into the project directory.
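The layout assumed here mirrors the paths used in the .yaml file of the next section. YOLOv5 locates each label file by replacing images with labels in the image path, so the two directory trees must be parallel:

fire_detect/
├── dataset/
│   ├── images/
│   │   ├── train/   # 1200 training images
│   │   └── val/     # 221 validation images
│   └── labels/
│       ├── train/   # .txt labels matching the training images
│       └── val/     # .txt labels matching the validation images
└── ...              # the yolov5 project files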

2. Implementation

Modifying the configuration file

Create a new .yaml file and add the following (adjust the paths for your setup):

train: /home/jiu/project/fire_detect/dataset/images/train  # train images 1200 images
val: /home/jiu/project/fire_detect/dataset/images/val  # val images 221 images
test:  # test images (optional)

# Classes
nc: 1  # number of classes
names: [ 'fire' ]  # class names

Modifying train.py and detect.py

In the parse_opt() function of train.py, modify options such as --weights, --data, and --img-size, as shown below:

def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='yolov5s.pt', help='initial weights path')
    parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
    parser.add_argument('--data', type=str, default='data/coco128.yaml', help='dataset.yaml path')
    parser.add_argument('--hyp', type=str, default='data/hyps/hyp.scratch.yaml', help='hyperparameters path')
    parser.add_argument('--epochs', type=int, default=300)
    parser.add_argument('--batch-size', type=int, default=8, help='total batch size for all GPUs')
    parser.add_argument('--img-size', nargs='+', type=int, default=[320, 320], help='[train, val] image sizes')
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    parser.add_argument('--noval', action='store_true', help='only validate final epoch')
    parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
    parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
    parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
    parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
    parser.add_argument('--project', default='runs/train', help='save to project/name')
    parser.add_argument('--entity', default=None, help='W&B entity')
    parser.add_argument('--name', default='exp', help='save to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--quad', action='store_true', help='quad dataloader')
    parser.add_argument('--linear-lr', action='store_true', help='linear LR')
    parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
    parser.add_argument('--upload_dataset', action='store_true', help='Upload dataset as W&B artifact table')
    parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval for W&B')
    parser.add_argument('--save_period', type=int, default=-1, help='Log model after every "save_period" epoch')
    parser.add_argument('--artifact_alias', type=str, default="latest", help='version of dataset artifact to be used')
    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
    opt = parser.parse_known_args()[0] if known else parser.parse_args()
    return opt

--weights: the YOLOv5 pretrained weights file; yolov5s.pt is used here. If you use a different pretrained weight file, update val.py accordingly.

--data: the dataset configuration file, i.e. the .yaml file defined above.

--img-size: the input image size for training and validation.
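With these changes, the three relevant defaults would look roughly as follows (data/fire.yaml is a hypothetical name for the dataset .yaml created above):

parser.add_argument('--weights', type=str, default='yolov5s.pt', help='initial weights path')
parser.add_argument('--data', type=str, default='data/fire.yaml', help='dataset.yaml path')
parser.add_argument('--img-size', nargs='+', type=int, default=[320, 320], help='[train, val] image sizes')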

In the parse_opt() function of detect.py, likewise modify --weights, --source, and --imgsz, as shown below:

def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str,
                        default='/home/jiu/project/fire_detect/runs/train/exp/weights/best.pt',
                        help='model.pt path(s)')
    parser.add_argument('--source', type=str, default='/home/jiu/project/fire_detect/test_images/2.mp4',
                        help='file/dir/URL/glob, 0 for webcam')
    parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=320, help='inference size (pixels)')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
    parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--view-img', action='store_true', help='show results')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
    parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
    parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
    parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3')
    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--visualize', action='store_true', help='visualize features')
    parser.add_argument('--update', action='store_true', help='update all models')
    parser.add_argument('--project', default='runs/detect', help='save results to project/name')
    parser.add_argument('--name', default='exp', help='save results to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
    parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
    parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
    parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
    opt = parser.parse_args()
    return opt

--weights: the trained weights file.

--source: the path to a test image or test video.

--imgsz: the input image size for inference.

Training

1. If running in PyCharm, you can run train.py directly.

2. If running from the terminal, use the following command (point --data at the dataset .yaml file created above):

python train.py --img 320 --batch 16 --epochs 300 --data data/coco128.yaml --cfg models/yolov5s.yaml --weights weights/yolov5s.pt --device '0'

3. Testing

Image testing

# -*- coding: UTF-8 -*-
import time
import cv2
import torch
import copy
from models.experimental import attempt_load
from utils.datasets import letterbox
from utils.general import check_img_size, non_max_suppression, scale_coords, xyxy2xywh


def load_model(weights, device):
    model = attempt_load(weights, map_location=device)  # load FP32 model
    return model


def show_results(img, xywh, conf, class_num):
    h, w, c = img.shape
    labels = ['fire']
    tl = 1  # line/font thickness
    x1 = int(xywh[0] * w - 0.5 * xywh[2] * w)
    y1 = int(xywh[1] * h - 0.5 * xywh[3] * h)
    x2 = int(xywh[0] * w + 0.5 * xywh[2] * w)
    y2 = int(xywh[1] * h + 0.5 * xywh[3] * h)
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), thickness=tl, lineType=cv2.LINE_AA)

    tf = max(tl - 1, 1)  # font thickness
    label = str(labels[int(class_num)]) + ': ' + str(conf)[:5]
    cv2.putText(img, label, (x1, y1 - 2), 0, tl / 3, [0, 0, 255], thickness=tf, lineType=cv2.LINE_AA)
    return img


def detect_one(model, image_path, device):
    # Load model
    img_size = 320
    conf_thres = 0.3
    iou_thres = 0.2

    orgimg = cv2.imread(image_path)  # BGR
    # orgimg = image_path
    img0 = copy.deepcopy(orgimg)
    assert orgimg is not None, 'Image Not Found ' + image_path
    h0, w0 = orgimg.shape[:2]  # orig hw
    r = img_size / max(h0, w0)  # resize image to img_size
    if r != 1:  # always resize down, only resize up if training with augmentation
        interp = cv2.INTER_AREA if r < 1  else cv2.INTER_LINEAR
        img0 = cv2.resize(img0, (int(w0 * r), int(h0 * r)), interpolation=interp)

    imgsz = check_img_size(img_size, s=model.stride.max())  # check img_size

    img = letterbox(img0, new_shape=imgsz)[0]
    # Convert
    img = img[:, :, ::-1].transpose(2, 0, 1).copy()  # BGR to RGB, HWC to CHW

    # Run inference
    t0 = time.time()

    img = torch.from_numpy(img).to(device)
    img = img.float()  # uint8 to fp16/32
    img /= 255.0  # 0 - 255 to 0.0 - 1.0
    if img.ndimension() == 3:
        img = img.unsqueeze(0)

    # Inference
    pred = model(img)[0]

    # Apply NMS
    pred = non_max_suppression(pred, conf_thres, iou_thres)
    print('pred: ', pred)
    print('img.shape: ', img.shape)
    print('orgimg.shape: ', orgimg.shape)

    # Process detections
    for i, det in enumerate(pred):  # detections per image
        gn = torch.tensor(orgimg.shape)[[1, 0, 1, 0]].to(device)  # normalization gain whwh
        if len(det):
            # Rescale boxes from img_size to im0 size
            det[:, :4] = scale_coords(img.shape[2:], det[:, :4], orgimg.shape).round()

            # Print results
            for c in det[:, -1].unique():
                n = (det[:, -1] == c).sum()  # detections per class

            for j in range(det.size()[0]):
                xywh = (xyxy2xywh(det[j, :4].view(1, 4)) / gn).view(-1).tolist()
                conf = det[j, 4].cpu().numpy()
                class_num = det[j, 5].cpu().numpy()  # column 5 is the class index (column 4 is the confidence)
                orgimg = show_results(orgimg, xywh, conf, class_num)



    # Stream results
    print(f'Done. ({time.time() - t0:.3f}s)')
    cv2.imshow('orgimg', orgimg)
    cv2.imwrite('filename.jpg', orgimg)  # save the annotated result
    if cv2.waitKey(0) == ord('q'):  # q to quit
        cv2.destroyAllWindows()


if __name__ == '__main__':
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    weights = '/home/jiu/project/fire_detect/runs/train/exp/weights/best.pt'
    model = load_model(weights, device)

    # using images
    image_path = '/home/jiu/project/fire_detect/test_images/1.jpg'
    detect_one(model, image_path, device)
    print('over')

Result:
(Figure 4: detection result on a test image)

Video testing

Videos can be tested directly with detect.py.
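For example, from the terminal (the weights and source paths below correspond to the detect.py defaults shown earlier, written relative to the project directory):

python detect.py --weights runs/train/exp/weights/best.pt --source test_images/2.mp4 --img 320

The detection effect is shown below: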

(demo video: YOLOv5 fire detection)

Camera testing:

# -*- coding: UTF-8 -*-
import time
import cv2
import torch
import copy
from models.experimental import attempt_load
from utils.datasets import letterbox
from utils.general import check_img_size, non_max_suppression, scale_coords, xyxy2xywh


def load_model(weights, device):
    model = attempt_load(weights, map_location=device)  # load FP32 model
    return model


def show_results(img, xywh, conf, class_num):
    h, w, c = img.shape
    labels = ['fire']
    tl = 1  # line/font thickness
    x1 = int(xywh[0] * w - 0.5 * xywh[2] * w)
    y1 = int(xywh[1] * h - 0.5 * xywh[3] * h)
    x2 = int(xywh[0] * w + 0.5 * xywh[2] * w)
    y2 = int(xywh[1] * h + 0.5 * xywh[3] * h)
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), thickness=tl, lineType=cv2.LINE_AA)

    tf = max(tl - 1, 1)  # font thickness
    label = str(labels[int(class_num)]) + ': ' + str(conf)[:5]
    cv2.putText(img, label, (x1, y1 - 2), 0, tl / 3, [0, 0, 255], thickness=tf, lineType=cv2.LINE_AA)
    return img


def detect_one(model, image_path, device):
    # Load model
    img_size = 320
    conf_thres = 0.3
    iou_thres = 0.2

    orgimg = image_path  # here image_path is already a frame (numpy array) grabbed from the camera
    img0 = copy.deepcopy(orgimg)
    assert orgimg is not None, 'Frame Not Found'
    h0, w0 = orgimg.shape[:2]  # orig hw
    r = img_size / max(h0, w0)  # resize image to img_size
    if r != 1:  # always resize down, only resize up if training with augmentation
        interp = cv2.INTER_AREA if r < 1  else cv2.INTER_LINEAR
        img0 = cv2.resize(img0, (int(w0 * r), int(h0 * r)), interpolation=interp)

    imgsz = check_img_size(img_size, s=model.stride.max())  # check img_size

    img = letterbox(img0, new_shape=imgsz)[0]
    # Convert
    img = img[:, :, ::-1].transpose(2, 0, 1).copy()  # BGR to RGB, HWC to CHW

    # Run inference
    t0 = time.time()

    img = torch.from_numpy(img).to(device)
    img = img.float()  # uint8 to fp16/32
    img /= 255.0  # 0 - 255 to 0.0 - 1.0
    if img.ndimension() == 3:
        img = img.unsqueeze(0)

    # Inference
    pred = model(img)[0]

    # Apply NMS
    pred = non_max_suppression(pred, conf_thres, iou_thres)
    print('pred: ', pred)
    print('img.shape: ', img.shape)
    print('orgimg.shape: ', orgimg.shape)

    # Process detections
    for i, det in enumerate(pred):  # detections per image
        gn = torch.tensor(orgimg.shape)[[1, 0, 1, 0]].to(device)  # normalization gain whwh
        if len(det):
            # Rescale boxes from img_size to im0 size
            det[:, :4] = scale_coords(img.shape[2:], det[:, :4], orgimg.shape).round()

            # Print results
            for c in det[:, -1].unique():
                n = (det[:, -1] == c).sum()  # detections per class

            for j in range(det.size()[0]):
                xywh = (xyxy2xywh(det[j, :4].view(1, 4)) / gn).view(-1).tolist()
                conf = det[j, 4].cpu().numpy()
                class_num = det[j, 5].cpu().numpy()  # column 5 is the class index (column 4 is the confidence)
                orgimg = show_results(orgimg, xywh, conf, class_num)



    # Stream results
    print(f'Done. ({time.time() - t0:.3f}s)')
    return orgimg


if __name__ == '__main__':
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    weights = '/home/jiu/project/fire_detect/runs/train/exp/weights/best.pt'
    model = load_model(weights, device)

    # using camera
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:  # stop if no frame could be read
            break
        frame = detect_one(model, frame, device)
        cv2.imshow("img", frame)
        if cv2.waitKey(1) == ord('q'):  # q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
    print('over')

The results of this test are omitted here.

Finally, the full project code can be downloaded here: fire_detect, password: kwu1
