YOLOv4: Optimal Speed and Accuracy of Object Detection (paper link) (code link)
As a classic single-stage object detection framework, the YOLO family of detectors has drawn wide attention from both academia and industry. Because YOLO detectors are single-stage, they offer fast inference and are well suited to real-world deployment. YOLOv3 brought the series to a new peak, and YOLOv4 builds on it with many practical tricks that greatly improve both its speed and its accuracy. This article analyzes the details of the YOLOv4 algorithm.
YOLOv4 is a single-stage object detector that adds a number of new improvements on top of YOLOv3, substantially boosting both speed and accuracy. The main improvements are as follows:
The figure above shows the overall architecture of the YOLOv4 detector. An object detection pipeline can usually be divided into four generic modules: the input, the backbone, the neck, and the head, corresponding to the four red blocks in the figure.
To obtain a more robust feature representation, extra layers are usually inserted between the backbone and the output layers; YOLOv4 mainly adds two components here: an SPP module and an FPN+PAN structure.
SPP module - The SPP module obtains a robust feature representation by fusing max-pooling layers of different kernel sizes; YOLOv4 uses the four settings k = {1x1, 5x5, 9x9, 13x13}. These max-pooling layers use padding and a stride of 1, so the spatial size is preserved: for a 13x13 input feature map pooled with a 5x5 kernel and padding=2, the output feature map is still 13x13. The YOLOv4 paper reports that (1) compared with simply applying a single k x k max pooling, the SPP module more effectively enlarges the receptive field of the backbone features and separates out the most significant context features; and (2) on the COCO detection task with a 608x608 input, SPP improves AP50 by 2.7% at only 0.5% extra computational cost, which is why YOLOv4 adopts it.
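The size-preserving property described above is easy to verify: with stride 1 and padding = k // 2, every SPP branch keeps the 13x13 spatial size, and concatenating the three pooled maps with the identity (the k=1 branch) quadruples the channel count. A minimal sketch (the channel count 256 is illustrative only):

```python
import torch
import torch.nn as nn

# a 13x13 feature map, as in the 608x608 input / stride-32 case described above
x = torch.randn(1, 256, 13, 13)

# SPP branches: stride-1 max pooling with padding = k // 2 keeps H x W unchanged
branches = [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in (5, 9, 13)]
outs = [pool(x) for pool in branches] + [x]  # the k=1 branch is the identity

# every branch output stays (1, 256, 13, 13); channel-wise concat gives 256 * 4 = 1024
spp = torch.cat(outs, dim=1)
print(spp.shape)  # torch.Size([1, 1024, 13, 13])
```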
FPN+PAN - FPN, the feature pyramid network, builds a pyramid over the feature maps and better handles the scale problem in object detection. PAN borrows an idea from the PANet algorithm in image segmentation: it is a bottom-up structure that adds two PAN blocks on top of FPN, shown as 2 and 3 in the figure below. (1) The network input is 608x608; after the CSP blocks it produces a 76x76 feature map, which is downsampled to 38x38 and then again to 19x19. (2) These maps are fed into the FPN structure, which fuses the 19x19, 38x38, and 76x76 maps in turn: the smaller feature map is upsampled to match the larger one, and the two same-sized maps are then combined. Through FPN the 19x19 map is grown back to 76x76, which not only enlarges the feature maps to better handle scale variation but also deepens the network and improves its robustness. (3) The result is then passed to the PAN structure. In PANet the PAN block adds two same-sized feature maps element-wise; YOLOv4 replaces this with a Concat operation. After the two PAN blocks, the 76x76 map is reduced back to 19x19, which improves the algorithm's localization ability to some extent. The top-down FPN path carries strong semantic features, while the bottom-up PAN path carries strong localization features; combining the two modules yields good object localization.
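One FPN fusion step above can be sketched in a few lines: the deeper (smaller) map is upsampled to the shallower map's size, then the two are concatenated along the channel axis. The channel count 256 is an illustrative assumption; in the real network 1x1 convolutions align the channels before fusion, and the PAN step does the reverse (a stride-2 convolution brings 38x38 back down to 19x19 before another concat):

```python
import torch
import torch.nn.functional as F

p19 = torch.randn(1, 256, 19, 19)   # deepest feature map
p38 = torch.randn(1, 256, 38, 38)   # next level up

# FPN step: upsample the smaller map to the larger one's size, then concat
up = F.interpolate(p19, size=(38, 38), mode='nearest')
fused = torch.cat([p38, up], dim=1)
print(fused.shape)  # torch.Size([1, 512, 38, 38])
```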
# import the required third-party packages
import sys  # used for error reporting in Conv_Bn_Activation
import torch
from torch import nn
import torch.nn.functional as F
from tool.torch_utils import *
from tool.yolo_layer import YoloLayer

# Mish activation: x * tanh(softplus(x))
class Mish(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        x = x * (torch.tanh(torch.nn.functional.softplus(x)))
        return x
# Upsample module: nearest-neighbor upsampling to a target size
class Upsample(nn.Module):
    def __init__(self):
        super(Upsample, self).__init__()

    def forward(self, x, target_size, inference=False):
        assert (x.data.dim() == 4)
        if inference:
            # expand each spatial cell into a (scale_h x scale_w) block,
            # which is equivalent to nearest-neighbor interpolation
            return x.view(x.size(0), x.size(1), x.size(2), 1, x.size(3), 1).\
                expand(x.size(0), x.size(1), x.size(2), target_size[2] // x.size(2),
                       x.size(3), target_size[3] // x.size(3)).\
                contiguous().view(x.size(0), x.size(1), target_size[2], target_size[3])
        else:
            return F.interpolate(x, size=(target_size[2], target_size[3]), mode='nearest')
# Conv + BatchNorm + Activation block
class Conv_Bn_Activation(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride, activation, bn=True, bias=False):
        super().__init__()
        pad = (kernel_size - 1) // 2
        self.conv = nn.ModuleList()
        if bias:
            self.conv.append(nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad))
        else:
            self.conv.append(nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad, bias=False))
        if bn:
            self.conv.append(nn.BatchNorm2d(out_channels))
        if activation == "mish":
            self.conv.append(Mish())
        elif activation == "relu":
            self.conv.append(nn.ReLU(inplace=True))
        elif activation == "leaky":
            self.conv.append(nn.LeakyReLU(0.1, inplace=True))
        elif activation == "linear":
            pass
        else:
            print("activate error !!! {} {} {}".format(sys._getframe().f_code.co_filename,
                                                       sys._getframe().f_code.co_name,
                                                       sys._getframe().f_lineno))

    def forward(self, x):
        for l in self.conv:
            x = l(x)
        return x
# Residual block
class ResBlock(nn.Module):
    """
    Sequential residual blocks each of which consists of \
    two convolution layers.
    Args:
        ch (int): number of input and output channels.
        nblocks (int): number of residual blocks.
        shortcut (bool): if True, residual tensor addition is enabled.
    """
    def __init__(self, ch, nblocks=1, shortcut=True):
        super().__init__()
        self.shortcut = shortcut
        self.module_list = nn.ModuleList()
        for i in range(nblocks):
            resblock_one = nn.ModuleList()
            resblock_one.append(Conv_Bn_Activation(ch, ch, 1, 1, 'mish'))
            resblock_one.append(Conv_Bn_Activation(ch, ch, 3, 1, 'mish'))
            self.module_list.append(resblock_one)

    def forward(self, x):
        for module in self.module_list:
            h = x
            for res in module:
                h = res(h)
            x = x + h if self.shortcut else h
        return x
# Downsample block 1
class DownSample1(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = Conv_Bn_Activation(3, 32, 3, 1, 'mish')
        self.conv2 = Conv_Bn_Activation(32, 64, 3, 2, 'mish')
        self.conv3 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')
        # [route]
        # layers = -2
        self.conv4 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')
        self.conv5 = Conv_Bn_Activation(64, 32, 1, 1, 'mish')
        self.conv6 = Conv_Bn_Activation(32, 64, 3, 1, 'mish')
        # [shortcut]
        # from=-3
        # activation = linear
        self.conv7 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')
        # [route]
        # layers = -1, -7
        self.conv8 = Conv_Bn_Activation(128, 64, 1, 1, 'mish')

    def forward(self, input):
        x1 = self.conv1(input)
        x2 = self.conv2(x1)
        x3 = self.conv3(x2)
        # route -2
        x4 = self.conv4(x2)
        x5 = self.conv5(x4)
        x6 = self.conv6(x5)
        # shortcut -3
        x6 = x6 + x4
        x7 = self.conv7(x6)
        # [route]
        # layers = -1, -7
        x7 = torch.cat([x7, x3], dim=1)
        x8 = self.conv8(x7)
        return x8
# Downsample block 2
class DownSample2(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = Conv_Bn_Activation(64, 128, 3, 2, 'mish')
        self.conv2 = Conv_Bn_Activation(128, 64, 1, 1, 'mish')
        # route -2
        self.conv3 = Conv_Bn_Activation(128, 64, 1, 1, 'mish')
        self.resblock = ResBlock(ch=64, nblocks=2)
        # shortcut -3
        self.conv4 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')
        # route -1, -10
        self.conv5 = Conv_Bn_Activation(128, 128, 1, 1, 'mish')

    def forward(self, input):
        x1 = self.conv1(input)
        x2 = self.conv2(x1)
        x3 = self.conv3(x1)
        r = self.resblock(x3)
        x4 = self.conv4(r)
        x4 = torch.cat([x4, x2], dim=1)
        x5 = self.conv5(x4)
        return x5
# Downsample block 3
class DownSample3(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = Conv_Bn_Activation(128, 256, 3, 2, 'mish')
        self.conv2 = Conv_Bn_Activation(256, 128, 1, 1, 'mish')
        self.conv3 = Conv_Bn_Activation(256, 128, 1, 1, 'mish')
        self.resblock = ResBlock(ch=128, nblocks=8)
        self.conv4 = Conv_Bn_Activation(128, 128, 1, 1, 'mish')
        self.conv5 = Conv_Bn_Activation(256, 256, 1, 1, 'mish')

    def forward(self, input):
        x1 = self.conv1(input)
        x2 = self.conv2(x1)
        x3 = self.conv3(x1)
        r = self.resblock(x3)
        x4 = self.conv4(r)
        x4 = torch.cat([x4, x2], dim=1)
        x5 = self.conv5(x4)
        return x5
# Downsample block 4
class DownSample4(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = Conv_Bn_Activation(256, 512, 3, 2, 'mish')
        self.conv2 = Conv_Bn_Activation(512, 256, 1, 1, 'mish')
        self.conv3 = Conv_Bn_Activation(512, 256, 1, 1, 'mish')
        self.resblock = ResBlock(ch=256, nblocks=8)
        self.conv4 = Conv_Bn_Activation(256, 256, 1, 1, 'mish')
        self.conv5 = Conv_Bn_Activation(512, 512, 1, 1, 'mish')

    def forward(self, input):
        x1 = self.conv1(input)
        x2 = self.conv2(x1)
        x3 = self.conv3(x1)
        r = self.resblock(x3)
        x4 = self.conv4(r)
        x4 = torch.cat([x4, x2], dim=1)
        x5 = self.conv5(x4)
        return x5
# Downsample block 5
class DownSample5(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = Conv_Bn_Activation(512, 1024, 3, 2, 'mish')
        self.conv2 = Conv_Bn_Activation(1024, 512, 1, 1, 'mish')
        self.conv3 = Conv_Bn_Activation(1024, 512, 1, 1, 'mish')
        self.resblock = ResBlock(ch=512, nblocks=4)
        self.conv4 = Conv_Bn_Activation(512, 512, 1, 1, 'mish')
        self.conv5 = Conv_Bn_Activation(1024, 1024, 1, 1, 'mish')

    def forward(self, input):
        x1 = self.conv1(input)
        x2 = self.conv2(x1)
        x3 = self.conv3(x1)
        r = self.resblock(x3)
        x4 = self.conv4(r)
        x4 = torch.cat([x4, x2], dim=1)
        x5 = self.conv5(x4)
        return x5
# Neck: SPP followed by the top-down FPN path (two upsampling stages)
class Neck(nn.Module):
    def __init__(self, inference=False):
        super().__init__()
        self.inference = inference
        self.conv1 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')
        self.conv2 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')
        self.conv3 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')
        # SPP: stride-1 max pooling with k = 5, 9, 13 preserves the spatial size
        self.maxpool1 = nn.MaxPool2d(kernel_size=5, stride=1, padding=5 // 2)
        self.maxpool2 = nn.MaxPool2d(kernel_size=9, stride=1, padding=9 // 2)
        self.maxpool3 = nn.MaxPool2d(kernel_size=13, stride=1, padding=13 // 2)
        # R -1 -3 -5 -6
        # SPP
        self.conv4 = Conv_Bn_Activation(2048, 512, 1, 1, 'leaky')
        self.conv5 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')
        self.conv6 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')
        self.conv7 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')
        # UP
        self.upsample1 = Upsample()
        # R 85
        self.conv8 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')
        # R -1 -3
        self.conv9 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')
        self.conv10 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')
        self.conv11 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')
        self.conv12 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')
        self.conv13 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')
        self.conv14 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')
        # UP
        self.upsample2 = Upsample()
        # R 54
        self.conv15 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')
        # R -1 -3
        self.conv16 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')
        self.conv17 = Conv_Bn_Activation(128, 256, 3, 1, 'leaky')
        self.conv18 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')
        self.conv19 = Conv_Bn_Activation(128, 256, 3, 1, 'leaky')
        self.conv20 = Conv_Bn_Activation(256, 128, 1, 1, 'leaky')

    def forward(self, input, downsample4, downsample3, inference=False):
        x1 = self.conv1(input)
        x2 = self.conv2(x1)
        x3 = self.conv3(x2)
        # SPP: concat the three pooled maps with the identity branch
        m1 = self.maxpool1(x3)
        m2 = self.maxpool2(x3)
        m3 = self.maxpool3(x3)
        spp = torch.cat([m3, m2, m1, x3], dim=1)
        # SPP end
        x4 = self.conv4(spp)
        x5 = self.conv5(x4)
        x6 = self.conv6(x5)
        x7 = self.conv7(x6)
        # UP: first FPN fusion, 19x19 -> 38x38
        up = self.upsample1(x7, downsample4.size(), self.inference)
        # R 85
        x8 = self.conv8(downsample4)
        # R -1 -3
        x8 = torch.cat([x8, up], dim=1)
        x9 = self.conv9(x8)
        x10 = self.conv10(x9)
        x11 = self.conv11(x10)
        x12 = self.conv12(x11)
        x13 = self.conv13(x12)
        x14 = self.conv14(x13)
        # UP: second FPN fusion, 38x38 -> 76x76
        up = self.upsample2(x14, downsample3.size(), self.inference)
        # R 54
        x15 = self.conv15(downsample3)
        # R -1 -3
        x15 = torch.cat([x15, up], dim=1)
        x16 = self.conv16(x15)
        x17 = self.conv17(x16)
        x18 = self.conv18(x17)
        x19 = self.conv19(x18)
        x20 = self.conv20(x19)
        return x20, x13, x6
# Head: the bottom-up PAN path plus three detection branches at strides 8, 16 and 32
class Yolov4Head(nn.Module):
    def __init__(self, output_ch, n_classes, inference=False):
        super().__init__()
        self.inference = inference
        self.conv1 = Conv_Bn_Activation(128, 256, 3, 1, 'leaky')
        self.conv2 = Conv_Bn_Activation(256, output_ch, 1, 1, 'linear', bn=False, bias=True)
        self.yolo1 = YoloLayer(
            anchor_mask=[0, 1, 2], num_classes=n_classes,
            anchors=[12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401],
            num_anchors=9, stride=8)
        # R -4
        self.conv3 = Conv_Bn_Activation(128, 256, 3, 2, 'leaky')
        # R -1 -16
        self.conv4 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')
        self.conv5 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')
        self.conv6 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')
        self.conv7 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')
        self.conv8 = Conv_Bn_Activation(512, 256, 1, 1, 'leaky')
        self.conv9 = Conv_Bn_Activation(256, 512, 3, 1, 'leaky')
        self.conv10 = Conv_Bn_Activation(512, output_ch, 1, 1, 'linear', bn=False, bias=True)
        self.yolo2 = YoloLayer(
            anchor_mask=[3, 4, 5], num_classes=n_classes,
            anchors=[12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401],
            num_anchors=9, stride=16)
        # R -4
        self.conv11 = Conv_Bn_Activation(256, 512, 3, 2, 'leaky')
        # R -1 -37
        self.conv12 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')
        self.conv13 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')
        self.conv14 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')
        self.conv15 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')
        self.conv16 = Conv_Bn_Activation(1024, 512, 1, 1, 'leaky')
        self.conv17 = Conv_Bn_Activation(512, 1024, 3, 1, 'leaky')
        self.conv18 = Conv_Bn_Activation(1024, output_ch, 1, 1, 'linear', bn=False, bias=True)
        self.yolo3 = YoloLayer(
            anchor_mask=[6, 7, 8], num_classes=n_classes,
            anchors=[12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401],
            num_anchors=9, stride=32)

    def forward(self, input1, input2, input3):
        x1 = self.conv1(input1)
        x2 = self.conv2(x1)
        x3 = self.conv3(input1)
        # R -1 -16
        x3 = torch.cat([x3, input2], dim=1)
        x4 = self.conv4(x3)
        x5 = self.conv5(x4)
        x6 = self.conv6(x5)
        x7 = self.conv7(x6)
        x8 = self.conv8(x7)
        x9 = self.conv9(x8)
        x10 = self.conv10(x9)
        # R -4
        x11 = self.conv11(x8)
        # R -1 -37
        x11 = torch.cat([x11, input3], dim=1)
        x12 = self.conv12(x11)
        x13 = self.conv13(x12)
        x14 = self.conv14(x13)
        x15 = self.conv15(x14)
        x16 = self.conv16(x15)
        x17 = self.conv17(x16)
        x18 = self.conv18(x17)
        if self.inference:
            y1 = self.yolo1(x2)
            y2 = self.yolo2(x10)
            y3 = self.yolo3(x18)
            return get_region_boxes([y1, y2, y3])
        else:
            return [x2, x10, x18]
# The full YOLOv4 network: backbone + neck + head
class Yolov4(nn.Module):
    def __init__(self, yolov4conv137weight=None, n_classes=80, inference=False):
        super().__init__()
        # each scale predicts 3 anchors x (4 box coords + 1 objectness + n_classes)
        output_ch = (4 + 1 + n_classes) * 3
        # backbone
        self.down1 = DownSample1()
        self.down2 = DownSample2()
        self.down3 = DownSample3()
        self.down4 = DownSample4()
        self.down5 = DownSample5()
        # neck
        self.neck = Neck(inference)
        # optionally load the pretrained yolov4.conv.137 weights
        if yolov4conv137weight:
            _model = nn.Sequential(self.down1, self.down2, self.down3, self.down4, self.down5, self.neck)
            pretrained_dict = torch.load(yolov4conv137weight)
            model_dict = _model.state_dict()
            # 1. map the pretrained keys onto this model's keys by position
            pretrained_dict = {
                k1: v for (k, v), k1 in zip(pretrained_dict.items(), model_dict)}
            # 2. overwrite entries in the existing state dict
            model_dict.update(pretrained_dict)
            _model.load_state_dict(model_dict)
        # head
        self.head = Yolov4Head(output_ch, n_classes, inference)

    def forward(self, input):
        d1 = self.down1(input)
        d2 = self.down2(d1)
        d3 = self.down3(d2)
        d4 = self.down4(d3)
        d5 = self.down5(d4)
        x20, x13, x6 = self.neck(d5, d4, d3)
        output = self.head(x20, x13, x6)
        return output
YOLOv4 video detection demo
The figure above shows the effect of different data augmentation operations on the accuracy of a CSPResNeXt-50 classifier. We can observe that: (1) with CutMix augmentation, both the top-1 and top-5 accuracy of the classifier improve noticeably; (2) Mosaic augmentation yields a further gain over CutMix; (3) combining CutMix, Mosaic, label smoothing, and the Mish activation gives the highest top-1 and top-5 accuracy.
The figure above shows how different optimization strategies affect the AP of YOLOv4. Here S denotes eliminating grid sensitivity, M denotes Mosaic data augmentation, IT denotes the IoU threshold, GA denotes genetic algorithms, LS denotes class label smoothing, CBN denotes Cross mini-Batch Normalization, CA denotes the cosine annealing scheduler, DM denotes dynamic mini-batch size, OA denotes optimized anchors, and "loss" denotes the loss function of the regression branch. We can draw the following preliminary conclusions: (1) compared with the baseline without any of these optimizations, Mosaic augmentation alone improves AP by 1.1%; (2) using GA to search for the optimal hyperparameters raises AP to 38.9%; (3) combining grid-sensitivity elimination, Mosaic augmentation, the IoU threshold, genetic algorithms, optimized anchors, and the CIoU loss yields the highest AP.
The figure above shows the influence of the PAN, SPP, RFB, and SAM modules on YOLOv4. We can see that, on top of the CSPResNeXt50 backbone, combining the PAN, SPP, and SAM modules achieves the highest AP.
The figure above compares YOLOv4 with other state-of-the-art detectors on the MS COCO dataset, listing the backbone, model size, FPS, and the evaluation metrics AP, AP50, AP75, APS, APM, and APL. We can draw the following preliminary conclusions: (1) compared with YOLOv3, YOLOv4 uses almost the same input resolution and has a similar model size, yet every AP metric improves dramatically, by roughly 10 percentage points on average; (2) compared with the strong RetinaNet, YOLOv4 uses a smaller input resolution and still improves performance by roughly 5 percentage points on average; (3) compared with the anchor-free detector CornerNet, YOLOv4 has a somewhat larger model but improves performance by roughly 3 percentage points.
The figure above compares YOLOv4 with further state-of-the-art detectors on MS COCO. We can draw the following preliminary conclusions: (1) compared with CenterMask-Lite, which uses a larger input resolution, YOLOv4 still improves accuracy by roughly 3 percentage points; (2) compared with the HSD detector at essentially the same resolution, YOLOv4 is more accurate even though HSD uses a deeper backbone; (3) compared with detectors such as Faster R-CNN, Cascade R-CNN, and RetinaNet, YOLOv4 improves AP substantially; (4) compared with the anchor-free detector CenterNet, CenterNet with an Hourglass-104 backbone does achieve a higher AP, but its inference speed cannot meet the requirements of many devices.
The figure above compares YOLOv4 with yet more state-of-the-art detectors on MS COCO. We can draw the following preliminary conclusions: (1) compared with the lightweight EfficientDet at a similar resolution, YOLOv4 achieves a similar AP while running faster; (2) compared with YOLOv3 augmented with the ASFF module at the same input resolution, YOLOv4 is better in both accuracy and speed; (3) compared with SM-NAS: E3, which uses a larger input image, YOLOv4 is better in both speed and AP; (4) NAS-FPN uses NAS to search for the optimal detection architecture and FPN to handle scale variation, and at a 1024x1024 input it does achieve a higher AP, but its inference becomes very slow.
YOLOv4 is a single-stage object detector that adds several new ideas on top of YOLOv3, greatly improving both speed and accuracy. Specifically: Mosaic data augmentation, CmBN, and self-adversarial training (SAT) at the input; the CSPDarknet53 backbone with the Mish activation and DropBlock; the SPP and FPN+PAN structures in the neck; and the CIOU_Loss regression loss plus DIOU_nms box filtering at the output. Practical tests confirm that YOLOv4 indeed brings a large performance gain. Moreover, the individual improvements in YOLOv4 can still be applied to other object detection algorithms.
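The CIOU_Loss mentioned above extends the IoU loss with a center-distance penalty and an aspect-ratio consistency term: loss = 1 - IoU + rho^2/c^2 + alpha*v, where rho is the distance between box centers, c the diagonal of the smallest enclosing box, and v measures aspect-ratio disagreement. A minimal, simplified sketch for boxes in (x1, y1, x2, y2) format (not the repository's implementation):

```python
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    """Simplified CIoU loss; pred and target have shape (N, 4) as (x1, y1, x2, y2)."""
    # intersection and union -> IoU
    inter_w = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(0)
    inter_h = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(0)
    inter = inter_w * inter_h
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    iou = inter / (w1 * h1 + w2 * h2 - inter + eps)
    # squared distance between box centers
    rho2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2 +
            (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4
    # squared diagonal of the smallest enclosing box
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps
    # aspect-ratio consistency term and its trade-off weight
    v = (4 / math.pi ** 2) * (torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps))) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)
    return 1 - iou + rho2 / c2 + alpha * v

# identical boxes: IoU = 1, distance and aspect terms vanish, so the loss is ~0
b = torch.tensor([[0., 0., 10., 10.]])
print(ciou_loss(b, b).item())
```

Unlike a plain IoU loss, the distance and aspect-ratio terms keep the gradient informative even for non-overlapping boxes, which is what makes CIoU well suited for the regression branch.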
The copyright of some of the images in this article belongs to 江大白.