【Trajectory tracking and drawing, accident detection, and ROI target counting based on yolov4-deepsort】

  • Preface
    • Project environment setup
    • Pre-trained weights for the detection model
    • Selected feature code
    • demo showcase

Preface

  • This post documents several projects developed on top of the yolov4-deepsort source code. The main features implemented are:
  • detection of targets (various vehicle types) with trajectory tracking and drawing;
  • collision/accident detection on real-time video streams from road surveillance cameras (vehicles, pedestrians, e-bikes, etc.);
  • counting of detected targets inside a specified region, and so on.

Project environment setup

Main environment:

CUDA + TensorFlow

  • For setting up CUDA + TensorFlow-GPU on Windows, see: Win10+CUDA10.0+Tensorflow-gpu1.13.1+cudnn10.0_v7.4.1.5 (detailed step-by-step guide).
  • The CUDA and TensorFlow-GPU versions must match exactly; see the official compatibility tables: Build from source - TensorFlow (google.cn).
  • On Linux, follow the environment setup guide in the upstream repo: github - yolov4-deepsort.
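
A quick way to confirm the versions actually match is to check whether TensorFlow can see the GPU; a minimal check using only standard TensorFlow calls:

# verify that this TensorFlow build sees the GPU
import tensorflow as tf

print("TF version:", tf.__version__)
gpus = tf.config.experimental.list_physical_devices('GPU')
print("GPUs visible:", gpus)
# an empty list usually means a CUDA/cuDNN/TensorFlow version mismatch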

Virtual environment - can be created with conda

  • Following the README in the source package, create the environment directly from the provided .yml or .txt files with conda or pip:

### Conda (Recommended)
# Tensorflow CPU
conda env create -f conda-cpu.yml
conda activate yolov4-cpu

# Tensorflow GPU
conda env create -f conda-gpu.yml
conda activate yolov4-gpu
# Pip
# TensorFlow CPU
pip install -r requirements.txt

# TensorFlow GPU
pip install -r requirements-gpu.txt
  • On generating and reloading .yml / requirements.txt files for a virtual environment, see: [Generating and reloading .yml / requirements.txt files in a virtual environment].
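
For reference, those environment files are themselves generated from a working environment. A minimal sketch of the two export commands (the file names here are illustrative):

# export the active conda environment to a .yml file
conda env export > conda-gpu.yml
# or freeze the installed pip packages into a requirements file
pip freeze > requirements-gpu.txt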

Pre-trained weights for the detection model

  • Download link for the yolov4 pre-trained weights: yolov4.weights (extraction code: p31d)
  • Download link for the lightweight yolov4-tiny pre-trained weights: yolov4-tiny.weights (extraction code: kgk6)
  • If the detection targets are custom classes, you will need to retrain the model yourself.
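
Note that the tracker script below loads a TensorFlow SavedModel from ./checkpoints/yolov4-416, not the raw .weights file, so the downloaded weights have to be converted first. In the upstream yolov4-deepsort repo this is done with save_model.py, roughly as follows (check the repo README for the exact flags):

# convert the darknet weights into a TensorFlow SavedModel
python save_model.py --model yolov4
# tiny variant
python save_model.py --weights ./data/yolov4-tiny.weights --output ./checkpoints/yolov4-tiny-416 --model yolov4 --tiny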

Selected feature code

Modified on top of the original object_tracker.py

Trajectory tracking and drawing on the detected video + on-screen counting of detected targets
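
Judging by the flags defined in the script below, a typical invocation looks like this (the paths are examples):

# run the tracker on a video file with on-screen counting and save the result
python object_tracker.py --video ./data/video/test.mp4 --output ./outputs/demo.avi --model yolov4 --count
# pass --video 0 to read from a webcam instead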


# Modified from the original object_tracker.py
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
# comment out below line to enable tensorflow logging outputs
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import time
import tensorflow as tf
# note: eager execution must stay enabled, because the .numpy() calls below depend on it
physical_devices = tf.config.experimental.list_physical_devices('GPU')
if len(physical_devices) > 0:
    tf.config.experimental.set_memory_growth(physical_devices[0], True)
from absl import app, flags, logging
from absl.flags import FLAGS
import core.utils as utils
from core.yolov4 import filter_boxes
from tensorflow.python.saved_model import tag_constants
from core.config import cfg
from PIL import Image
import cv2
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
# import the tracker
from deep_sort import preprocessing, nn_matching
from deep_sort.detection import Detection
from deep_sort.tracker import Tracker
# import the appearance feature encoder for the detections
from tools import generate_detections as gdet

# import the accident discriminators (added)
from box_overlap import bb_overlap
from speed_jump import speed_jump
from cross_angle import cross_angle
from line_cross import line_cross
from Central_offset import Central_offset
from accident_frames_writer import accidentFrame
from pathlib import Path
# suppress warnings
import warnings
warnings.filterwarnings('ignore')



flags.DEFINE_string('framework', 'tf', '(tf, tflite, trt)')
flags.DEFINE_string('weights', './checkpoints/yolov4-416',
                    'path to weights file')
flags.DEFINE_integer('size', 416, 'resize images to')
flags.DEFINE_boolean('tiny', False, 'yolo or yolo-tiny')
flags.DEFINE_string('model', 'yolov4', 'yolov3 or yolov4')
flags.DEFINE_string('video', './data/video/test.mp4', 'path to input video or set to 0 for webcam')
flags.DEFINE_string('output', None, 'path to output video')
flags.DEFINE_string('output_format', 'XVID', 'codec used in VideoWriter when saving video to file')
flags.DEFINE_float('iou', 0.45, 'iou threshold')
flags.DEFINE_float('score', 0.50, 'score threshold')
flags.DEFINE_boolean('dont_show', False, 'dont show video output')
flags.DEFINE_boolean('info', False, 'show detailed info of tracked objects')
flags.DEFINE_boolean('count', False, 'count objects being tracked on screen')

def main(_argv):
    object_dic = {}  # dictionary storing per-target info (added)

    # tracker parameter thresholds
    max_cosine_distance = 0.4  # gating threshold: associations whose cost exceeds this value are ignored
    nn_budget = None
    nms_max_overlap = 1.0

    # initialize the deep sort tracker
    model_filename = 'model_data/mars-small128.pb'
    encoder = gdet.create_box_encoder(model_filename, batch_size=1)
    # calculate the cosine distance metric
    metric = nn_matching.NearestNeighborDistanceMetric("cosine", max_cosine_distance, nn_budget)
    # initialize tracker
    tracker = Tracker(metric)

    # load configuration for the object detector
    config = ConfigProto()
    config.gpu_options.allow_growth = True
    session = InteractiveSession(config=config)
    STRIDES, ANCHORS, NUM_CLASS, XYSCALE = utils.load_config(FLAGS)
    input_size = FLAGS.size
    video_path = FLAGS.video

    # if the framework flag is set to tflite, load the tflite model
    if FLAGS.framework == 'tflite':
        interpreter = tf.lite.Interpreter(model_path=FLAGS.weights)
        interpreter.allocate_tensors()
        input_details = interpreter.get_input_details()
        output_details = interpreter.get_output_details()
        print(input_details)
        print(output_details)
    # otherwise load a standard TensorFlow SavedModel
    else:
        saved_model_loaded = tf.saved_model.load(FLAGS.weights, tags=[tag_constants.SERVING])
        infer = saved_model_loaded.signatures['serving_default']

    # begin video capture
    try:
        vid = cv2.VideoCapture(int(video_path))
    except:
        vid = cv2.VideoCapture(video_path)

    out = None

    # if the output flag is set, prepare to save the video locally
    if FLAGS.output:
        # by default, VideoCapture returns float instead of int
        width = int(vid.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(vid.get(cv2.CAP_PROP_FRAME_HEIGHT))
        fps = int(vid.get(cv2.CAP_PROP_FPS))
        codec = cv2.VideoWriter_fourcc(*FLAGS.output_format)
        out = cv2.VideoWriter(FLAGS.output, codec, fps, (width, height))
    frame_num = 0
    # while video is running
    while True:
        return_value, frame = vid.read()
        if return_value:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            image = Image.fromarray(frame)
        else:
            print('Video has ended or failed, try a different video format!')
            break
        frame_num +=1
        print('Frame #: ', frame_num)
        frame_size = frame.shape[:2]
        image_data = cv2.resize(frame, (input_size, input_size))
        image_data = image_data / 255.
        image_data = image_data[np.newaxis, ...].astype(np.float32)
        start_time = time.time()

        # run tflite detections if the framework flag is set
        if FLAGS.framework == 'tflite':
            interpreter.set_tensor(input_details[0]['index'], image_data)
            interpreter.invoke()
            pred = [interpreter.get_tensor(output_details[i]['index']) for i in range(len(output_details))]
            # run detections using yolov3 if flag is set
            if FLAGS.model == 'yolov3' and FLAGS.tiny == True:
                boxes, pred_conf = filter_boxes(pred[1], pred[0], score_threshold=0.25,
                                                input_shape=tf.constant([input_size, input_size]))
            else:
                boxes, pred_conf = filter_boxes(pred[0], pred[1], score_threshold=0.25,
                                                input_shape=tf.constant([input_size, input_size]))
        else:
            batch_data = tf.constant(image_data)
            pred_bbox = infer(batch_data)
            for key, value in pred_bbox.items():
                boxes = value[:, :, 0:4]
                pred_conf = value[:, :, 4:]

        boxes, scores, classes, valid_detections = tf.image.combined_non_max_suppression(
            boxes=tf.reshape(boxes, (tf.shape(boxes)[0], -1, 1, 4)),
            scores=tf.reshape(
                pred_conf, (tf.shape(pred_conf)[0], -1, tf.shape(pred_conf)[-1])),
            max_output_size_per_class=50,
            max_total_size=50,
            iou_threshold=FLAGS.iou,
            score_threshold=FLAGS.score
        )

        # convert data to numpy arrays and slice out unused elements
        num_objects = valid_detections.numpy()[0]  # total number of detected objects
        bboxes = boxes.numpy()[0]
        bboxes = bboxes[0:int(num_objects)]
        scores = scores.numpy()[0]
        scores = scores[0:int(num_objects)]
        classes = classes.numpy()[0]
        classes = classes[0:int(num_objects)]  # class indices of the detections (numpy array)

        # format bounding boxes from normalized ymin, xmin, ymax, xmax ---> xmin, ymin, width, height
        original_h, original_w, _ = frame.shape
        bboxes = utils.format_boxes(bboxes, original_h, original_w)

        # store all predictions in one parameter for simplicity when calling functions
        pred_bbox = [bboxes, scores, classes, num_objects]

        # read in all class names from config
        class_names = utils.read_class_names(cfg.YOLO.CLASSES)

        # by default, allow all classes in the .names file
        allowed_classes = list(class_names.values())
        
        # custom allowed classes (uncomment the line below to track only people)
        #allowed_classes = ['person']

        # loop through objects and use the class index to get the class name; only allow classes in the allowed_classes list

        ###### Added: store the classes detected in the current frame in a dict, for on-screen display ######
        # count each allowed class among all detected objects
        class_counts = {}
        for i in allowed_classes:
            class_index = (list(class_names.keys()))[list(class_names.values()).index(i)]
            class_num = np.count_nonzero(classes == class_index)
            class_counts[i] = class_num
        ###### end of addition

        names = []   # (original code) filter the allowed classes out of all detections
        deleted_indx = []
        for i in range(num_objects):
            class_indx = int(classes[i])  # class index
            class_name = class_names[class_indx]  # class name
            if class_name not in allowed_classes:
                deleted_indx.append(i)
            else:
                names.append(class_name)
        names = np.array(names)
        count = len(names)  # number of objects currently being tracked
        if FLAGS.count:  # flag: enable on-screen counting
            # print the total number of tracked objects
            cv2.putText(frame, "Objects being tracked: {}".format(count), (5, 35), cv2.FONT_HERSHEY_COMPLEX_SMALL, 2, (0, 255, 0), 2)

            ###### Added:
            y = 70  # initial y position of the per-class count text, advanced by 35 px per printed line
            # print the count of each tracked class
            for key, value in class_counts.items():
                if value != 0:
                    cv2.putText(frame, "{} being tracked: {}".format(key, value), (5, y), cv2.FONT_HERSHEY_COMPLEX_SMALL, 2, (0, 255, 0), 2)
                    y += 35
            ###### end of addition

            print("Objects being tracked: {}".format(count))
        # remove detections that are not in allowed_classes
        bboxes = np.delete(bboxes, deleted_indx, axis=0)
        scores = np.delete(scores, deleted_indx, axis=0)

        # encode yolo detections and feed to tracker
        features = encoder(frame, bboxes)
        detections = [Detection(bbox, score, class_name, feature) for bbox, score, class_name, feature in zip(bboxes, scores, names, features)]
        #####
        # image = Image.fromarray(frame[..., ::-1])  # BGR to RGB
        # boxs = yolo.detect_image(image)
        # features = encoder(frame, boxs)  # feature extraction
        # detections = [Detection(bbox, 1.0, feature) for bbox, feature in zip(boxs, features)]
        # boxes = np.array([d.tlwh for d in detections])
        # scores = np.array([d.confidence for d in detections])
        # indices = preprocessing.non_max_suppression(boxes, nms_max_overlap, scores)
        # detections = [detections[i] for i in indices]  # detection results (box, overlap, confidence)
        #####
        # initialize color map
        cmap = plt.get_cmap('tab20b')
        colors = [cmap(i)[:3] for i in np.linspace(0, 1, 20)]

        # run non-maxima suppression
        boxs = np.array([d.tlwh for d in detections])
        scores = np.array([d.confidence for d in detections])
        classes = np.array([d.class_name for d in detections])
        indices = preprocessing.non_max_suppression(boxs, classes, nms_max_overlap, scores)
        detections = [detections[i] for i in indices]       

        # call the tracker
        tracker.predict()
        tracker.update(detections)

        ###### Added:
        target = []  # list that collects the targets found inside the ROI
        track_colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0),
                        (0, 255, 255), (255, 0, 255), (255, 127, 255),
                        (127, 0, 255), (127, 0, 127)]
        ###### end of addition

        # update tracks
        for track in tracker.tracks:
            if not track.is_confirmed() or track.time_since_update > 1:
                continue
            bbox = track.to_tlbr()  # (0: top-left x; 1: top-left y; 2: bottom-right x; 3: bottom-right y)
            class_name = track.get_class()

            # draw bbox on screen
            color = colors[int(track.track_id) % len(colors)]
            color = [i * 255 for i in color]

            cv2.rectangle(frame, (int(bbox[0]), int(bbox[1])), (int(bbox[2]), int(bbox[3])), color, 2)
            cv2.rectangle(frame, (int(bbox[0]), int(bbox[1]-30)), (int(bbox[0])+(len(class_name)+len(str(track.track_id)))*17, int(bbox[1])), color, -1)
            cv2.putText(frame, class_name + "-" + str(track.track_id), (int(bbox[0]), int(bbox[1]-10)), 0, 0.75, (255, 255, 255), 2)
            # if the info flag is enabled, print the details of each track
            if FLAGS.info:
                print("Tracker ID: {}, Class: {},  BBox Coords (xmin, ymin, xmax, ymax): {}".format(str(track.track_id), class_name, (int(bbox[0]), int(bbox[1]), int(bbox[2]), int(bbox[3]))))
        
        ###### Added:
            # center of the detection box; trajectory point: Cx, Cy, w, h
            center = [int((bbox[0] + bbox[2]) / 2), int((bbox[1] + bbox[3]) / 2), int(bbox[2] - bbox[0]),
                      int(bbox[3] - bbox[1])]

            # add the target to the dictionary
            # if this track ID is not present yet, create {'id': {'trace': [[], [], ..., []], 'traced_frames': num}, ...}
            if not "%d" % track.track_id in object_dic:
                # create a dict for this ID - key (ID): val {trace, lost-frame counter}; the object is dropped once it has been lost for 20 frames
                object_dic["%d" % track.track_id] = {"trace": [], 'traced_frames': 20}
                object_dic["%d" % track.track_id]["trace"].append(center)
                object_dic["%d" % track.track_id]["traced_frames"] += 1
            # otherwise, append directly
            else:
                object_dic["%d" % track.track_id]["trace"].append(center)
                object_dic["%d" % track.track_id]["traced_frames"] += 1

            # coordinate test, ROI setup and trajectory drawing
            # if FLAGS.roi:  # (to gate this, add a flag: flags.DEFINE_boolean('roi', False, 'set roi and count on screen'))
            # the four ROI vertices: [173, 456], [966, 91], [1240, 122], [574, 515]
            pts = np.array([[173, 456], [966, 91], [1240, 122], [574, 515]], np.int32)
            pts = pts.reshape((-1, 1, 2))  # reshape to 4x1x2
            # cv2.polylines() draws a polygon; True closes the contour, False leaves an open polyline; remaining args: color, thickness, line type
            cv2.polylines(frame, [pts], True, (0, 255, 255), thickness=2)

            # test whether the target is inside the ROI, using the midpoint of the bottom edge of its detection box
            x = int((bbox[0] + bbox[2]) / 2)
            y = int(bbox[3]) - 8  # small offset
            # the four edge lines Lab, Lbc, Lcd, Lda; a = bottom-left, b = top-left, c = top-right, d = bottom-right vertex
            yab = round(-0.46*x + 535.63, 2)  # rounded to two decimal places
            ybc = round(0.11*x - 18.29, 2)
            ycd = round(-0.59*x + 853.71, 2)
            yda = round(0.15*x + 430.55, 2)
            # is the reference point inside the ROI (between all four edge lines)?
            if (y > yab and y > ybc and y < ycd and y < yda):
                target.append(x)
                cv2.putText(frame, str('enter'), (int(bbox[2] - 65), int(bbox[3] - 5)),
                            cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (255, 255, 0), 1)
        cv2.putText(frame, "ROI count: {}".format(str(len(target))), (1500, 35), cv2.FONT_HERSHEY_COMPLEX_SMALL, 2, (255, 0, 0), 2)

        # draw the trajectories
        for s in object_dic:
            i = int(s)
            # the stored coordinates could also be used later to estimate target speed and heading
            xlist, ylist, wlist, hlist = [], [], [], []
            # top-left corner of the bbox
            bbox_x1 = int(object_dic["%d" % i]["trace"][-1][0]) - int(object_dic["%d" % i]["trace"][-1][2]) // 2
            bbox_y1 = int(object_dic["%d" % i]["trace"][-1][1]) - int(object_dic["%d" % i]["trace"][-1][3]) // 2

            # cap the trajectory length: keep only the most recent 60 points
            if len(object_dic["%d" % i]["trace"]) > 60:
                del object_dic["%d" % i]["trace"][:-60]

            # draw the trajectory
            if len(object_dic["%d" % i]["trace"]) > 2:
                for j in range(1, len(object_dic["%d" % i]["trace"]) - 1):
                    pot1_x = object_dic["%d" % i]["trace"][j][0]
                    pot1_y = object_dic["%d" % i]["trace"][j][1]
                    pot2_x = object_dic["%d" % i]["trace"][j + 1][0]
                    pot2_y = object_dic["%d" % i]["trace"][j + 1][1]
                    clr = i % 9  # trajectory color, chosen by track ID
                    cv2.line(frame, (pot1_x, pot1_y), (pot2_x, pot2_y), track_colors[clr], 2)
                    # trajectory point info: Cx, Cy, w, h
                    xlist.append(pot1_x)
                    ylist.append(pot1_y)
                    wlist.append(object_dic["%d" % i]["trace"][j][2])
                    hlist.append(object_dic["%d" % i]["trace"][j][3])

        # # discriminator (accident detection between vehicles and pedestrians)
        #     for t in object_dic:
        #         m = int(t)
        #         # skip when the two IDs are identical
        #         if m == i:
        #             continue
        #         # start judging once the target has existed for more than 20 frames
        #         if len(object_dic["%d" % i]["trace"]) > 20:
        #             # take the trajectory points from 20 frames ago, 10 frames ago and the current frame
        #             c1_x = object_dic["%d" % i]["trace"][-20][0]
        #             c1_y = object_dic["%d" % i]["trace"][-20][1]
        #             c2_x = object_dic["%d" % i]["trace"][-10][0]
        #             c2_y = object_dic["%d" % i]["trace"][-10][1]
        #             c4_x = object_dic["%d" % i]["trace"][-1][0]
        #             c4_y = object_dic["%d" % i]["trace"][-1][1]
        #             c_point1 = Point(c1_x, c1_y)
        #             c_point2 = Point(c2_x, c2_y)
        #             c_point4 = Point(c4_x, c4_y)
        #         # same for the second target, once it has existed for more than 20 frames
        #         if len(object_dic["%d" % m]["trace"]) > 20:
        #             # take the trajectory points from 20 frames ago, 10 frames ago and the current frame
        #             c5_x = object_dic["%d" % m]["trace"][-20][0]
        #             c5_y = object_dic["%d" % m]["trace"][-20][1]
        #             c6_x = object_dic["%d" % m]["trace"][-10][0]
        #             c6_y = object_dic["%d" % m]["trace"][-10][1]
        #             c8_x = object_dic["%d" % m]["trace"][-1][0]
        #             c8_y = object_dic["%d" % m]["trace"][-1][1]
        #             c_point5 = Point(c5_x, c5_y)
        #             c_point6 = Point(c6_x, c6_y)
        #             c_point8 = Point(c8_x, c8_y)
        #         ......
        #
        #         # overlap test between the two target boxes
        #
        #         # by default, assume no traffic accident occurs between two pedestrians
        #         if h_w1 > 1.5 and h_w2 > 1.5:
        #             continue
        #
        #         # start judging once both targets have existed for more than 20 frames
        #         if len(object_dic["%d" % i]["trace"]) > 20 and len(object_dic["%d" % m]["trace"]) > 20:
        #             # person vs. vehicle:
        #
        #             # vehicle vs. vehicle:
        #
        #             # vehicle vs. e-bike:

        # drop targets that have disappeared
        for s in object_dic:
            if object_dic["%d" % int(s)]["traced_frames"] > 0:
                object_dic["%d" % int(s)]["traced_frames"] -= 1
        for n in list(object_dic):
            if object_dic["%d" % int(n)]["traced_frames"] == 0:
                del object_dic["%d" % int(n)]

        ###### end of additions

        # calculate frames per second of running detections
        fps = 1.0 / (time.time() - start_time)
        print("FPS: %.2f" % fps)
        result = np.asarray(frame)
        result = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        
        if not FLAGS.dont_show:
            cv2.imshow("Output Video", result)
        
        # if output flag is set, save video file
        if FLAGS.output:
            out.write(result)
        if cv2.waitKey(1) & 0xFF == ord('q'): break
    cv2.destroyAllWindows()

if __name__ == '__main__':
    try:
        app.run(main)
    except SystemExit:
        pass
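
A note on the ROI test above: the four edge lines are hard-coded in slope-intercept form, derived from the polygon vertices (for example, the edge through a = (173, 456) and b = (966, 91) has slope (91 - 456) / (966 - 173) ≈ -0.46 and intercept ≈ 535.6, which is exactly yab). Those constants must be re-derived whenever the ROI changes, and the inequality directions only hold for this particular convex quadrilateral. A more robust alternative, sketched here with the same vertices, is OpenCV's built-in point-in-polygon test:

# sketch: replace the hand-derived line inequalities with cv2.pointPolygonTest
import cv2
import numpy as np

# same ROI vertices as in the script above
roi = np.array([[173, 456], [966, 91], [1240, 122], [574, 515]], np.int32)

def in_roi(x, y, polygon=roi):
    # pointPolygonTest returns +1 inside, 0 on the edge, -1 outside (with measureDist=False)
    return cv2.pointPolygonTest(polygon, (float(x), float(y)), False) >= 0

# inside the tracking loop, instead of the yab/ybc/ycd/yda checks:
# if in_roi(x, y):
#     target.append(x)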

Part of the accident-judgment logic: bounding-box overlap test

# overlap test between the two target boxes
bbox1_leftx = abs(
    int(object_dic["%d" % i]["trace"][-1][0]) - int(object_dic["%d" % i]["trace"][-1][2]) // 2)
bbox1_lefty = abs(
    int(object_dic["%d" % i]["trace"][-1][1]) - int(object_dic["%d" % i]["trace"][-1][3]) // 2)
bbox1_leftp = Point(bbox1_leftx, bbox1_lefty)  # top-left corner of box 1
bbox1_rightx = abs(
    int(object_dic["%d" % i]["trace"][-1][0]) + int(object_dic["%d" % i]["trace"][-1][2]) // 2)
bbox1_righty = abs(
    int(object_dic["%d" % i]["trace"][-1][1]) + int(object_dic["%d" % i]["trace"][-1][3]) // 2)
bbox1_rightp = Point(bbox1_rightx, bbox1_righty)  # bottom-right corner of box 1
bbox2_leftx = abs(
    int(object_dic["%d" % m]["trace"][-1][0]) - int(object_dic["%d" % m]["trace"][-1][2]) // 2)
bbox2_lefty = abs(
    int(object_dic["%d" % m]["trace"][-1][1]) - int(object_dic["%d" % m]["trace"][-1][3]) // 2)
bbox2_leftp = Point(bbox2_leftx, bbox2_lefty)  # top-left corner of box 2
bbox2_rightx = abs(
    int(object_dic["%d" % m]["trace"][-1][0]) + int(object_dic["%d" % m]["trace"][-1][2]) // 2)
bbox2_righty = abs(
    int(object_dic["%d" % m]["trace"][-1][1]) + int(object_dic["%d" % m]["trace"][-1][3]) // 2)
bbox2_rightp = Point(bbox2_rightx, bbox2_righty)  # bottom-right corner of box 2
h_w1 = (bbox1_righty - bbox1_lefty) / (bbox1_rightx - bbox1_leftx)  # height/width ratio of box 1
h_w2 = (bbox2_righty - bbox2_lefty) / (bbox2_rightx - bbox2_leftx)  # height/width ratio of box 2
# by default, assume no traffic accident occurs between two pedestrians (tall, narrow boxes)
if h_w1 > 1.5 and h_w2 > 1.5:
    continue

# start judging once both targets have existed for more than 20 frames
if len(object_dic["%d" % i]["trace"]) > 20 and len(object_dic["%d" % m]["trace"]) > 20:
    # person vs. vehicle:
    if h_w1 < car_Aspect and h_w2 > human_Aspect:
        # absolute trajectory extension distance
        R = 100
        overlap = 0.6
        Central_offset1 = Central_offset(c_point4, c_point1, w)
        Central_offset2 = Central_offset(c_point8, c_point5, w)
        dis = 0.02 * w
        true_overlap = bb_overlap(bbox1_leftp, bbox1_rightp, bbox2_leftp, bbox2_rightp,
                                  overlap)
        # criterion 1: box overlap, the extended trajectories intersect, and both trajectories shift enough
        if true_overlap and line_cross(c_point4, c_point1, c_point8, c_point5, w, h,
                                       R) and Central_offset1 > dis and Central_offset2 > dis:
            print("overlap and line_cross and Central_offset satisfied")
            s1, speed_jump1 = speed_jump(c_point1, c_point2, c_point4, frequency)  # speed-jump rate of the person
            s2, speed_jump2 = speed_jump(c_point5, c_point6, c_point8, frequency)  # speed-jump rate of the vehicle
            angle1 = cross_angle(c_point4, c_point2, c_point2, c_point1)  # turn angle of the person
            angle2 = cross_angle(c_point8, c_point6, c_point6, c_point5)  # turn angle of the vehicle
            # criterion 2: sudden speed change, or a large turn in either trajectory
            if (speed_jump1 < 0.5 and speed_jump2 < 0.5) or angle1 > 20 or angle2 > 10:
                # print "incident happened"
                cv2.putText(frame, "incident happened", (bbox_x1, bbox_y1), 0, 5e-3 * 150,
                            (0, 0, 255), 2)
                # record the frame number of the accident
                accident_frame_list.append(frame_index)
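
bb_overlap itself comes from the author's box_overlap module, which is not shown in this post. Judging from the call site, it takes the two boxes' corner Points plus an overlap threshold and returns a boolean. A minimal sketch of what such a function could look like (the Point structure and the exact overlap definition are assumptions, not the author's actual code):

# hypothetical sketch of bb_overlap, inferred from its call site
from collections import namedtuple

Point = namedtuple('Point', ['x', 'y'])  # assumed shape of the Point class

def bb_overlap(p1_tl, p1_br, p2_tl, p2_br, overlap):
    # intersection extents; non-positive means the boxes do not touch
    ix = min(p1_br.x, p2_br.x) - max(p1_tl.x, p2_tl.x)
    iy = min(p1_br.y, p2_br.y) - max(p1_tl.y, p2_tl.y)
    if ix <= 0 or iy <= 0:
        return False
    inter = ix * iy
    area1 = (p1_br.x - p1_tl.x) * (p1_br.y - p1_tl.y)
    area2 = (p2_br.x - p2_tl.x) * (p2_br.y - p2_tl.y)
    # one plausible definition: intersection area relative to the smaller box
    return inter / min(area1, area2) > overlap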

Part of the accident judgment: vehicle vs. vehicle


# vehicle vs. vehicle:
    if h_w1 < car_Aspect and h_w2 < car_Aspect:
        R = 150
        overlap = 0.3
        Central_offset1 = Central_offset(c_point4, c_point1, w)
        Central_offset2 = Central_offset(c_point8, c_point5, w)
        dis = 0.02 * w
        true_overlap = bb_overlap(bbox1_leftp, bbox1_rightp, bbox2_leftp, bbox2_rightp,
                                  overlap)
        # criterion 1: box overlap, the extended trajectories intersect, and both trajectories shift enough
        if true_overlap and line_cross(c_point4, c_point1, c_point8, c_point5, w, h,
                                       R) and Central_offset1 > dis and Central_offset2 > dis:
            print("overlap and line_cross and Central_offset satisfied")
            s1, speed_jump1 = speed_jump(c_point1, c_point2, c_point4, frequency)
            s2, speed_jump2 = speed_jump(c_point5, c_point6, c_point8, frequency)
            angle1 = cross_angle(c_point4, c_point2, c_point2, c_point1)
            angle2 = cross_angle(c_point8, c_point6, c_point6, c_point5)
            # criterion 2: a sudden speed change together with a large turn
            # (parenthesized explicitly, since `and` binds tighter than `or` in Python)
            if (speed_jump1 < 0.4 or speed_jump2 < 0.4) and (angle1 > 25 or angle2 > 25):
                cv2.putText(frame, "incident happened", (bbox_x1, bbox_y1), 0, 5e-3 * 150,
                            (0, 0, 255), 2)
                # record the frame number of the accident
                accident_frame_list.append(frame_index)
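
speed_jump and cross_angle are likewise the author's own helpers. From the call sites, speed_jump(p_old, p_mid, p_now, frequency) appears to return a (speed, jump-ratio) pair computed from the trajectory points 20 frames ago, 10 frames ago and now, and cross_angle takes two segments and returns the angle between them in degrees. A hedged sketch under those assumptions (reusing the namedtuple Point from the previous sketch):

# hypothetical sketches of speed_jump and cross_angle, inferred from their call sites
import math

def speed_jump(p_old, p_mid, p_now, frequency):
    # distance covered in each 10-frame half-window
    d1 = math.hypot(p_mid.x - p_old.x, p_mid.y - p_old.y)  # 20 -> 10 frames ago
    d2 = math.hypot(p_now.x - p_mid.x, p_now.y - p_mid.y)  # 10 frames ago -> now
    # convert to pixels per second using the frame rate
    v1 = d1 * frequency / 10.0
    v2 = d2 * frequency / 10.0
    # a ratio well below 1 indicates a sudden deceleration
    return v2, (v2 / v1 if v1 > 0 else 0.0)

def cross_angle(a1, a2, b1, b2):
    # angle in degrees between segment a1->a2 and segment b1->b2
    v1 = (a2.x - a1.x, a2.y - a1.y)
    v2 = (b2.x - b1.x, b2.y - b1.y)
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    if n1 == 0 or n2 == 0:
        return 0.0
    cos_t = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
    return math.degrees(math.acos(cos_t))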

demo showcase

  • Demo 1: object detection with trajectory tracking and drawing / counting of detected targets in a specified region

Demo: object detection with trajectory tracking and drawing / counting of detected targets in a specified region

  • The coordinate test in this demo has a small flaw; the region-membership logic needs adjustment (the cv2.pointPolygonTest sketch after the main listing above is one robust alternative).
  • Demo 2: accident detection on road surveillance video

Demo: accident detection on road surveillance video
