Once the model is trained, we run a forward pass with the trained weight file to extract the prediction results, then filter the predicted boxes with non-maximum suppression (NMS) in a post-processing step. This post does not go deep into the theory; the focus is the code.
Theory reference: https://jonathan-hui.medium.com/real-time-object-detection-with-yolo-yolov2-28b1b93e2088
YOLOv3 model weight file: https://pjreddie.com/darknet/yolo/
Names of the 80 COCO classes: https://github.com/pjreddie/darknet/blob/master/data/coco.names
YOLOv3 theory video: 【【机器学习】YOLOV3的原理及其介绍 全网最好最简单的课程!!】
Running the code requires yolov3.cfg, yolov3.weights, and coco.names.
Detailed walkthrough and code:
import cv2
import numpy as np
# Load the image
img = cv2.imread('chicken.jpg')
height, width, _ = img.shape
# Convert the image to a blob for network input
blob = cv2.dnn.blobFromImage(img, 1/255, (416, 416), (0, 0, 0), swapRB=True, crop=False)
The input image is resized to 416×416 here; in fact, YOLOv3 accepts any input size divisible by 32.
Decoding: the network outputs adjust the prior (anchor) boxes; decoding yields a large number of predicted boxes.
The 13×13 feature map yields 13×13×3 = 507 predicted boxes.
The 26×26 feature map yields 26×26×3 = 2028 predicted boxes.
The 52×52 feature map yields 52×52×3 = 8112 predicted boxes.
Each grid cell outputs B bounding boxes, so by the counts above the network produces a dense wall of bounding boxes before any filtering.
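The counts above can be verified with a few lines (each scale uses 3 anchors per grid cell):

```python
# Per-scale prediction counts for a 416x416 input: each grid cell
# predicts 3 boxes (3 anchors per detection scale).
grids = [13, 26, 52]
counts = [g * g * 3 for g in grids]
print(counts)       # [507, 2028, 8112]
print(sum(counts))  # 10647 candidate boxes before NMS
```

Over ten thousand candidate boxes per image is exactly why the confidence threshold and NMS below are needed.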
# Store information about the predicted boxes
boxes = []
objectness = []
class_probs = []
class_ids = []
class_names = []
# Extract the prediction results
for scale in prediction:
    for bbox in scale:
        obj = bbox[4]            # objectness score
        class_scores = bbox[5:]  # per-class scores (80 COCO classes)
        class_id = np.argmax(class_scores)
        class_name = classes[class_id]
        class_prob = class_scores[class_id]
        # Box coordinates are normalized; scale them back to image size
        center_x = int(bbox[0] * width)
        center_y = int(bbox[1] * height)
        w = int(bbox[2] * width)
        h = int(bbox[3] * height)
        x = int(center_x - w/2)  # top-left corner
        y = int(center_y - h/2)
        boxes.append([x, y, w, h])
        objectness.append(float(obj))
        class_ids.append(class_id)
        class_names.append(class_name)
        class_probs.append(class_prob)
# Confidence = class probability x objectness
confidences = np.array(class_probs) * np.array(objectness)
CONF_THRES = 0.1
NMS_THRES = 0.6
# Filter the predicted boxes with non-maximum suppression
indexes = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRES, NMS_THRES)
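cv2.dnn.NMSBoxes does the filtering for us; as a sketch of what it does internally, here is a minimal greedy NMS in plain Python. The [x, y, w, h] box format matches the code above, but this is an illustration of the idea, not OpenCV's exact implementation:

```python
import numpy as np

def iou_xywh(a, b):
    """IoU of two boxes given as [x, y, w, h] (top-left corner + size)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, conf_thres, iou_thres):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it."""
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= conf_thres]
    keep = []
    while order:
        best = int(order.pop(0))
        keep.append(best)
        order = [i for i in order if iou_xywh(boxes[best], boxes[i]) < iou_thres]
    return keep

# Two heavily overlapping boxes and one separate box:
boxes = [[10, 10, 100, 100], [15, 12, 100, 100], [300, 300, 50, 50]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores, 0.1, 0.6))  # [0, 2]: box 1 is suppressed by box 0
```

This is why NMS_THRES is an IoU threshold: the lower it is, the more aggressively overlapping boxes are suppressed, while CONF_THRES simply drops low-confidence boxes before the overlap check.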
The complete script:
import cv2
import numpy as np

# Load the YOLOv3 model
net = cv2.dnn.readNet('yolov3.weights', 'yolov3.cfg')
# Read the class-name file
with open('coco.names', 'r') as f:
    classes = f.read().splitlines()
# Load the image
img = cv2.imread('chicken.jpg')
height, width, _ = img.shape
# Convert the image to a blob for network input
blob = cv2.dnn.blobFromImage(img, 1/255, (416, 416), (0, 0, 0), swapRB=True, crop=False)
# Set the network input
net.setInput(blob)
# Get the names of the output layers
layersNames = net.getLayerNames()
output_layers_names = [layersNames[i - 1] for i in net.getUnconnectedOutLayers()]
# Forward pass
prediction = net.forward(output_layers_names)
# Store information about the predicted boxes
boxes = []
objectness = []
class_probs = []
class_ids = []
class_names = []
# Extract the prediction results
for scale in prediction:
    for bbox in scale:
        obj = bbox[4]
        class_scores = bbox[5:]
        class_id = np.argmax(class_scores)
        class_name = classes[class_id]
        class_prob = class_scores[class_id]
        center_x = int(bbox[0] * width)
        center_y = int(bbox[1] * height)
        w = int(bbox[2] * width)
        h = int(bbox[3] * height)
        x = int(center_x - w/2)
        y = int(center_y - h/2)
        boxes.append([x, y, w, h])
        objectness.append(float(obj))
        class_ids.append(class_id)
        class_names.append(class_name)
        class_probs.append(class_prob)
# Confidence = class probability x objectness
confidences = np.array(class_probs) * np.array(objectness)
CONF_THRES = 0.1
NMS_THRES = 0.6
# Filter the predicted boxes with non-maximum suppression
indexes = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRES, NMS_THRES)
# Colors for the predicted boxes
colors = [[255,0,255], [0,0,255], [0,255,255], [0,255,0], [255,255,0], [255,0,0], [180,187,28], [223,155,6], [94,218,121], [139,0,0], [77,169,10], [29,123,243], [66,77,229], [1,240,255], [140,47,240], [31,41,81], [29,123,243], [16,144,247], [151,57,224]]
# Draw the boxes and class labels
for i in indexes.flatten():
    x, y, w, h = boxes[i]
    confidence = str(round(confidences[i], 2))
    color = colors[i % len(colors)]
    cv2.rectangle(img, (x, y), (x+w, y+h), color, 8)
    string = '{} {}'.format(class_names[i], confidence)
    cv2.putText(img, string, (x, y+20), cv2.FONT_HERSHEY_PLAIN, 3, (255, 255, 255), 5)
# Save the result
cv2.imwrite('result-test.jpg', img)
1. Why does the input size only need to be divisible by 32?
YOLOv3 detects at three scales, at layers 82, 94, and 106, downsampling the input with strides of 32, 16, and 8.
With stride 32 and a 416×416 input, the downsampled output is 13×13.
With stride 16 and a 416×416 input, the downsampled output is 26×26.
With stride 8 and a 416×416 input, the downsampled output is 52×52.
So for the downsampling to work out, the input size must be divisible by 32, and any number divisible by 32 is automatically divisible by 16 and 8.
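The stride arithmetic above can be checked for any legal input size (the helper name grid_sizes is just for illustration):

```python
def grid_sizes(size):
    """Feature-map side lengths at YOLOv3's three detection strides."""
    assert size % 32 == 0, "YOLOv3 input size must be divisible by 32"
    return [size // stride for stride in (32, 16, 8)]

print(grid_sizes(416))  # [13, 26, 52]
print(grid_sizes(608))  # [19, 38, 76]
```

For example, the common 608×608 input gives 19×19, 38×38, and 76×76 feature maps, which is why larger inputs produce more candidate boxes and usually better small-object detection at the cost of speed.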