darknet YOLOv4 Manual (Translation)

https://github.com/AlexeyAB/darknet

Yolo v4 (neural network for object detection) – Tensor Cores can be used on Linux and Windows

Yolo v4 paper: https://arxiv.org/abs/2004.10934

More details: http://pjreddie.com/darknet/yolo/

  • Requirements (and how to install dependencies)
  • Pre-trained models
  • Explanations in issues
  • Yolo v3 in other frameworks (TensorRT, TensorFlow, PyTorch, OpenVINO, OpenCV-dnn, TVM, ...)
  • Datasets
  1. Improvements in this repository
  2. How to use
  3. How to compile on Linux
    • Using cmake
    • Using make
  4. How to compile on Windows
    • Using CMake-GUI
    • Using vcpkg
    • Legacy way
  5. Training and evaluation of speed and accuracy on MS COCO
  6. How to train with multi-GPU
  7. How to train (to detect your custom objects)
  8. How to train tiny-yolo (to detect your custom objects)
  9. When should I stop training
  10. How to improve object detection
  11. How to mark bounded boxes of objects and create annotation files
  12. How to use Yolo as DLL and SO libraries


  • Yolo v4 full comparison: map_fps
  • CSPNet comparison (paper and map_fps): https://github.com/WongKinYiu/CrossStagePartialNetworks
  • MS COCO Yolo v3: speed / accuracy (mAP@0.5) chart
  • MS COCO Yolo v3 (Yolo v3 vs RetinaNet) - Figure 3: https://arxiv.org/pdf/1804.02767v1.pdf
  • Pascal VOC 2007 Yolo v2: https://hsto.org/files/a24/21e/068/a2421e0689fb43f08584de9d44c2215f.jpg
  • Pascal VOC 2012 Yolo v2 (comp4): https://hsto.org/files/3a6/fdf/b53/3a6fdfb533f34cee9b52bdd9bb0b19d9.jpg

How to evaluate AP of YOLOv4 on the MS COCO evaluation server

  1. Download and unzip the test-dev2017 dataset from the MS COCO server: http://images.cocodataset.org/zips/test2017.zip

  2. Download the list of images for the detection task and replace the paths with your actual ones (a path-rewriting sketch follows this list): https://raw.githubusercontent.com/AlexeyAB/darknet/master/scripts/testdev2017.txt

  3. Download the yolov4.weights file: https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT
  4. The content of the file cfg/coco.data should be:

classes = 80
train = <replace with your path>/trainvalno5k.txt
valid = <replace with your path>/testdev2017.txt
names = data/coco.names
backup = backup
eval = coco

  5. Create the /results/ folder next to the ./darknet executable
  6. Run validation: ./darknet detector valid cfg/coco.data cfg/yolov4.cfg yolov4.weights
  7. Rename the file /results/coco_results.json to detections_test-dev2017_yolov4_results.json and compress it to detections_test-dev2017_yolov4_results.zip
  8. Submit the file detections_test-dev2017_yolov4_results.zip to the MS COCO evaluation server for the test-dev2019 (bbox) task
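A convenience for step 2: the path replacement can be scripted. Below is a minimal C++ sketch, assuming a hypothetical local image directory /data/coco/test2017/ and that each line of testdev2017.txt ends with the image file name:

#include <fstream>
#include <string>

// Rewrite every line of testdev2017.txt so the image file name points
// at the locally unpacked test2017 images.
int main() {
    std::ifstream in("testdev2017.txt");
    std::ofstream out("testdev2017.local.txt");
    const std::string prefix = "/data/coco/test2017/";  // assumption: your local path
    std::string line;
    while (std::getline(in, line)) {
        const auto pos = line.find_last_of('/');
        // keep only the file name and re-prefix it with the local path
        const std::string name = (pos == std::string::npos) ? line : line.substr(pos + 1);
        if (!name.empty()) out << prefix << name << '\n';
    }
    return 0;
}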

How to evaluate FPS of YOLOv4 on GPU

  1. Compile Darknet with GPU=1 CUDNN=1 CUDNN_HALF=1 OPENCV=1 in the Makefile (or use the same settings with CMake)
  2. Download yolov4.weights: https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT
  3. Get any .avi/.mp4 video file (preferably not more than 1920x1080, to avoid a CPU performance bottleneck)
  4. Run one of the two commands below and look at the average FPS:
  • including video_capturing + NMS + drawing_bboxes: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -dont_show -ext_output
  • excluding video_capturing + NMS + drawing_bboxes: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -benchmark

Pre-trained models

There are weights files for different cfg-files (trained on the MS COCO dataset):

FPS on RTX 2070 (R) and Tesla V100 (V):

Yolo v3 models

  • csresnext50-panet-spp-original-optimal.cfg - 65.4% mAP@0.5 (43.2% AP@0.5:0.95) - 32(R) FPS - 100.5 BFlops - 217 MB: csresnext50-panet-spp-original-optimal_final.weights
  • yolov3-spp.cfg - 60.6% mAP@0.5 - 38(R) FPS - 141.5 BFlops - 240 MB: yolov3-spp.weights
  • csresnext50-panet-spp.cfg - 60.0% mAP@0.5 - 44 FPS - 71.3 BFlops - 217 MB: csresnext50-panet-spp_final.weights
  • yolov3.cfg - 55.3% mAP@0.5 - 66(R) FPS - 65.9 BFlops - 236 MB: yolov3.weights
  • yolov3-tiny.cfg - 33.1% mAP@0.5 - 345(R) FPS - 5.6 BFlops - 33.7 MB: yolov3-tiny.weights

Yolo v2 models

  • yolov2.cfg (194 MB COCO Yolo v2) - requires 4 GB GPU-RAM: https://pjreddie.com/media/files/yolov2.weights
  • yolo-voc.cfg (194 MB VOC Yolo v2) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo-voc.weights
  • yolov2-tiny.cfg (43 MB COCO Yolo v2) - requires 1 GB GPU-RAM: https://pjreddie.com/media/files/yolov2-tiny.weights
  • yolov2-tiny-voc.cfg (60 MB VOC Yolo v2) - requires 1 GB GPU-RAM: http://pjreddie.com/media/files/yolov2-tiny-voc.weights
  • yolo9000.cfg (186 MB Yolo9000-model) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo9000.weights

Put the downloaded weights file next to darknet.exe.

You can get the cfg-files from the directory darknet/cfg/.

Requirements

  • Windows or Linux
  • CMake >= 3.8 for modern CUDA support: https://cmake.org/download/
  • CUDA 10.0: https://developer.nvidia.com/cuda-toolkit-archive (on Linux, do the Post-installation Actions)
  • OpenCV >= 2.4: install it with your preferred package manager (brew, apt), build it from source using vcpkg, or download it from the OpenCV official site (on Windows set the environment variable OpenCV_DIR = C:\opencv\build - the directory containing the include and x64 folders)


  • cuDNN >= 7.0 for CUDA 10.0: https://developer.nvidia.com/rdp/cudnn-archive (on Linux copy cudnn.h and libcudnn.so as described in https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installlinux-tar ; on Windows copy cudnn.h, cudnn64_7.dll and cudnn64_7.lib as described in https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installwindows )

  • GPU with CC >= 3.0: https://en.wikipedia.org/wiki/CUDA#GPUs_supported
  • GCC or Clang on Linux, MSVC 2015/2017/2019 on Windows: https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=Community

Yolo v3 in other frameworks

  • TensorFlow: convert yolov3.weights/cfg files to yolov3.ckpt/pb/meta with the mystic123 or jinyu121 projects, and TensorFlow-lite
  • Intel OpenVINO 2019 R1 (Myriad X / USB Neural Compute Stick / Arria FPGA): see the manual
  • OpenCV-dnn: the fastest CPU implementation (x86/ARM-Android); OpenCV built with the OpenVINO backend can run on (Myriad X / USB Neural Compute Stick / Arria FPGA); uses yolov3.weights/cfg directly: C++ example or Python example
  • PyTorch > ONNX > CoreML > iOS: how to convert cfg/weights files to a pt-file: ultralytics/yolov3 and the iOS App
  • TensorRT for YOLOv3 (-70% faster inference): Yolo is natively supported in DeepStream 4.0, read the PDF
  • TVM - compile deep learning models (Keras, MXNet, PyTorch, Tensorflow, CoreML, DarkNet) into minimum deployable modules on diverse low-end hardware (CPUs, GPUs, FPGA, and specialized accelerators): https://tvm.ai/about
  • OpenDataCam - detect, track and count moving objects using Yolo: https://github.com/opendatacam/opendatacam#-hardware-pre-requisite
  • Netron - visualizer for neural networks: https://github.com/lutzroeder/netron

Datasets

  • MS COCO: use ./scripts/get_coco_dataset.sh to get the labeled MS COCO detection dataset
  • OpenImages: use python ./scripts/get_openimages_dataset.py to get labeled training detection data
  • Pascal VOC: use python ./scripts/voc_label.py to get labeled Train/Test/Val detection datasets
  • ILSVRC2012 (ImageNet classification): use ./scripts/get_imagenet_train.sh (and imagenet_label.sh to label the validation set)
  • German/Belgium/Russian/LISA/MASTIF traffic sign datasets for detection - see: https://github.com/angeligareta/Datasets2Darknet#detection-task
  • other datasets: https://github.com/AlexeyAB/darknet/tree/master/scripts#datasets

Example results

https://www.youtube.com/watch?v=MPU2HistivI


Others: https://www.youtube.com/user/pjreddie/videos

Improvements in this repository

  • added support for Windows
  • added State-of-the-Art models: CSP, PRN, EfficientNet
  • added layers: [conv_lstm], [scale_channels] SE/ASFF/BiFPN, [local_avgpool], [sam], [Gaussian_yolo], [reorg3d] (fixed [reorg]), fixed [batchnorm]
  • added the ability to train recurrent models ([conv_lstm] and [crnn] layers) for accurate detection on video
  • added data augmentation parameters [net] mixup=1 cutmix=1 mosaic=1 blur=1, and activations SWISH, MISH, NORM_CHAN, NORM_CHAN_SOFTMAX
  • added the ability to train with GPU processing while using CPU RAM, to increase mini_batch_size and accuracy (instead of batch-norm sync)

  • improved binary neural network performance 2x-4x for detection on CPU and GPU, if you trained your own weights using this XNOR-net model (bit-1 inference): https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov3-tiny_xnor.cfg

  • improved neural network performance ~7% by fusing two layers into one: Convolutional + Batch-norm

  • improved performance: detection is 2x faster on Volta/Turing GPUs (Tesla V100, GeForce RTX, ...) using Tensor Cores, if CUDNN_HALF is defined in the Makefile or darknet.sln

  • improved performance ~1.2x on FullHD and ~2x on 4K for detection on video (file/stream) using darknet detector demo ...

  • improved performance 3.5x for data augmentation during training (using OpenCV SSE/AVX functions instead of hand-written ones) - removes a bottleneck for training on multi-GPU or Volta GPUs

  • improved performance of detection and training ~85% on Intel CPUs with AVX (Yolo v3)

  • optimized memory allocation during network resizing when random=1

  • optimized GPU initialization for detection - batch=1 is used initially instead of re-initializing with batch=1

  • added correct calculation of mAP, F1, IoU, and Precision-Recall using the command darknet detector map ...

  • added drawing of a chart of average loss and accuracy (mAP, with the -map flag) during training

  • run ./darknet detector demo ... -json_port 8070 -mjpeg_port 8090 as a JSON and MJPEG server, to get results online over the network in your software or a web browser

  • added calculation of anchors for training
  • added an example of detection and tracking of objects: https://github.com/AlexeyAB/darknet/blob/master/src/yolo_console_dll.cpp
  • added runtime tips and warnings if you use an incorrect cfg-file or dataset
  • many other fixes in the code...

And added a manual: How to train Yolo v4-v2 (to detect your custom objects)

Also, you might be interested in a simplified repository with an INT8-quantization implementation (+30% speedup, -1% mAP): https://github.com/AlexeyAB/yolo2_light

How to use on the command line

On Linux use ./darknet instead of darknet.exe, e.g.: ./darknet detector test ./cfg/coco.data ./cfg/yolov4.cfg ./yolov4.weights

On Linux find the executable ./darknet in the root directory; on Windows it is in \build\darknet\x64

  • Yolo v4 COCO - image: darknet.exe detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -thresh 0.25
  • output coordinates of objects: darknet.exe detector test cfg/coco.data yolov4.cfg yolov4.weights -ext_output dog.jpg
  • Yolo v4 COCO - video: darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output test.mp4
  • Yolo v4 COCO - WebCam 0: darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -c 0
  • Yolo v4 COCO for net-videocam - Smart WebCam: darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights http://192.168.0.80:8080/video?dummy=param.mjpg
  • Yolo v4 - save the result video to file res.avi: darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -out_filename res.avi
  • Yolo v3 Tiny COCO - video: darknet.exe detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights test.mp4
  • JSON and MJPEG server that allows multiple connections from your software or web browser to ip-address:8070 and 8090 (see the client sketch after this list): ./darknet detector demo ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output
  • Yolo v3 Tiny on GPU #1: darknet.exe detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights -i 1 test.mp4
  • alternative method, Yolo v4 COCO - image: darknet.exe detect cfg/yolov4.cfg yolov4.weights -i 0 -thresh 0.25
  • train on Amazon EC2, and to see the mAP & loss chart open the URL http://ec2-35-160-228-91.us-west-2.compute.amazonaws.com:8090 in a Chrome or Firefox browser (Darknet must be compiled with OpenCV): ./darknet detector train cfg/coco.data yolov4.cfg yolov4.conv.137 -dont_show -mjpeg_port 8090 -map
  • 186 MB Yolo9000 - image: darknet.exe detector test cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights
  • remember to put data/9k.tree and data/coco9k.map in the same directory as your app if you build it with the C++ API
  • to process a list of images data/train.txt and save detection results to result.json: darknet.exe detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output -dont_show -out result.json < data/train.txt
  • to process a list of images data/train.txt and save detection results to result.txt: darknet.exe detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -dont_show -ext_output < data/train.txt > result.txt
  • pseudo-labeling - process a list of images data/new_train.txt and save the detection results in Yolo training annotation format, one .txt file per image (this way you can increase the amount of training data): darknet.exe detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -thresh 0.25 -dont_show -save_labels < data/new_train.txt
  • calculate anchors: darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416
  • check accuracy mAP@IoU=50: darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
  • check accuracy mAP@IoU=75: darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights -iou_thresh 0.75
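The JSON server from the -json_port example above streams detection results as JSON text over a plain TCP connection, so any socket client can consume them. Below is a minimal POSIX C++ sketch (assuming Linux and darknet running on the same machine with -json_port 8070; it simply prints whatever the server sends):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8070);                      // the -json_port used above
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  // darknet host
    if (connect(fd, (sockaddr*)&addr, sizeof(addr)) != 0) { perror("connect"); return 1; }
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);  // raw JSON detections as the server emits them
    }
    close(fd);
    return 0;
}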

For using a network video-camera mjpeg-stream with an Android smartphone

  1. Download the Android mjpeg-stream app: IP Webcam or Smart WebCam
    • Smart WebCam - preferably: https://play.google.com/store/apps/details?id=com.acontech.android.SmartWebCam2
    • IP Webcam: https://play.google.com/store/apps/details?id=com.pas.webcam
  2. Connect your Android phone to the computer over WiFi (through a WiFi router) or USB
  3. Start Smart WebCam on the phone
  4. Replace the address below with the one shown in the phone's Smart WebCam app, and launch:
  • Yolo v4 COCO model: darknet.exe detector demo data/coco.data yolov4.cfg yolov4.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0

How to compile on Linux (using cmake)

The CMakeLists.txt will attempt to find installed optional dependencies like CUDA, cuDNN, ZED and build against those. It will also create a shared object library file of darknet for code development.

Inside the cloned repository:

mkdir build-release
cd build-release
cmake ..
make
make install

How to compile on Linux (using make)

Just do make in the darknet directory. Before making, you can set such options in the Makefile: link

  • GPU=1 to build with CUDA to accelerate by using GPU (CUDA should be in /usr/local/cuda)
  • CUDNN=1 to build with cuDNN v5-v7 to accelerate training by using GPU (cuDNN should be in /usr/local/cudnn)
  • CUDNN_HALF=1 to build for Tensor Cores (on Titan V / Tesla V100 / DGX-2 and later) - speeds up detection 3x, training 2x
  • OPENCV=1 to build with OpenCV 4.x/3.x/2.4.x - allows detection on video files and video streams from network cameras or web-cams
  • DEBUG=1 to build a debug version of Yolo
  • OPENMP=1 to build with OpenMP support to accelerate Yolo by using a multi-core CPU
  • LIBSO=1 to build the library darknet.so and the binary executable uselib that uses this library. Try running: LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib test.mp4. For how to use this SO library from your own code, see the C++ example https://github.com/AlexeyAB/darknet/blob/master/src/yolo_console_dll.cpp, or run: LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib data/coco.names cfg/yolov4.cfg yolov4.weights test.mp4
  • ZED_CAMERA=1 to build with ZED-3D-camera support (requires the ZED SDK to be installed), then run LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib data/coco.names cfg/yolov4.cfg yolov4.weights zed_camera

To run the Darknet examples from this article on Linux, replace darknet.exe with ./darknet, i.e. use this command: ./darknet detector test ./cfg/coco.data ./cfg/yolov4.cfg ./yolov4.weights

How to compile on Windows (using CMake-GUI)

This is the recommended approach if you have installed Visual Studio 2015/2017/2019, CUDA > 10.0, cuDNN > 7.0, and OpenCV > 2.4.

Note: make sure CUDA and OpenCV are installed.

Then do the following in CMake-GUI:


  1. Configure
  2. Optional platform for generator (set: x64)
  3. Finish
  4. Generate
  5. Open Project
  6. Set: x64 & Release
  7. Build
  8. Build solution

How to compile on Windows (using vcpkg)

vcpkg is a tool that automatically downloads dependency packages.

If you have already installed Visual Studio 2015/2017/2019, CUDA > 10.0, cuDNN > 7.0, and OpenCV > 2.4, the CMake-GUI approach above is recommended.

Otherwise, follow these steps:

  1. Install or update Visual Studio to the latest version, making sure any patches are applied (run the installer again if not sure). If installing from scratch, download the Community edition from Visual Studio Community.
  2. Install CUDA and cuDNN.
  3. Install git and cmake. Make sure they are on the Path at least for the current account.
  4. Install vcpkg and try to install a test library to make sure everything is working, for example vcpkg install opengl.
  5. Define the environment variable VCPKG_ROOT, pointing to the install path of vcpkg.
  6. Define the environment variable VCPKG_DEFAULT_TRIPLET with the value x64-windows.
  7. Open PowerShell and type these commands:
    PS \>                  cd $env:VCPKG_ROOT
    PS Code\vcpkg>         .\vcpkg install pthreads opencv[ffmpeg] #replace with opencv[cuda,ffmpeg] in case you want to use cuda-accelerated openCV

  8. Open PowerShell, go to the darknet folder, and run the command .\build.ps1. If you want to use Visual Studio, after the build you will find two custom solutions created by CMake, one in build_win_debug and the other in build_win_release, containing all the appropriate config flags for your system.

How to compile on Windows (legacy way)

  1. If you have CUDA 10.0, cuDNN 7.4 and OpenCV 3.x (with paths: C:\opencv_3.0\opencv\build\include & C:\opencv_3.0\opencv\build\x64\vc14\lib), then open build\darknet\darknet.sln, set x64 and Release https://hsto.org/webt/uh/fk/-e/uhfk-eb0q-hwd9hsxhrikbokd6u.jpeg and do: Build -> Build darknet. Also add the Windows system variable CUDNN with the path to cuDNN: https://user-images.githubusercontent.com/4096485/53249764-019ef880-36ca-11e9-8ffe-d9cf47e7e462.jpg

1.1. Find the files opencv_world320.dll and opencv_ffmpeg320_64.dll (or opencv_world340.dll and opencv_ffmpeg340_64.dll) in C:\opencv_3.0\opencv\build\x64\vc14\bin and put them near darknet.exe

1.2. Check that the bin and include folders exist in C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0; if they don't, copy them to this folder from the path where CUDA is installed

1.3. To install cuDNN (to speed up the neural network), do the following:

    • download and install cuDNN v7.4.1 for CUDA 10.0: https://developer.nvidia.com/rdp/cudnn-archive
    • add the Windows system variable CUDNN with the path to cuDNN: https://user-images.githubusercontent.com/4096485/53249764-019ef880-36ca-11e9-8ffe-d9cf47e7e462.jpg
    • copy the file cudnn64_7.dll to the folder \build\darknet\x64 near darknet.exe

1.4. If you want to build without CUDNN then: open \darknet.sln -> (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and remove this: CUDNN;

  2. If you have another version of CUDA (not 10.0), open build\darknet\darknet.vcxproj with Notepad, find the 2 places with "CUDA 10.0" and change them to your CUDA version. Then open \darknet.sln -> (right click on project) -> properties -> CUDA C/C++ -> Device and remove ;compute_75,sm_75. Then do step 1.
  3. If you don't have a GPU but have OpenCV 3.0 (with paths: C:\opencv_3.0\opencv\build\include & C:\opencv_3.0\opencv\build\x64\vc14\lib), then open build\darknet\darknet_no_gpu.sln, set x64 and Release, and do: Build -> Build darknet_no_gpu
  4. If you have OpenCV 2.4.13 instead of 3.0, change the paths after \darknet.sln is opened:

4.1 (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories: C:\opencv_2.4.13\opencv\build\include

4.2 (right click on project) -> properties -> Linker -> General -> Additional Library Directories: C:\opencv_2.4.13\opencv\build\x64\vc14\lib

  5. If you have a GPU with Tensor Cores (nVidia Titan V / Tesla V100 / DGX-2 and later) - speedup Detection 3x, Training 2x: \darknet.sln -> (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and add here: CUDNN_HALF;

Note: CUDA must be installed only after Visual Studio has been installed.

How to compile (custom):

Also, you can create your own darknet.sln & darknet.vcxproj; this example is for CUDA 9.1 and OpenCV 3.0

Then add to your created project:

  • (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories, put here:

C:\opencv_3.0\opencv\build\include;..\..\3rdparty\include;%(AdditionalIncludeDirectories);$(CudaToolkitIncludeDir);$(CUDNN)\include

  • (right click on project) -> Build Dependencies -> Build Customizations -> set check on CUDA 9.1 or whatever version you have - for example as here: http://devblogs.nvidia.com/parallelforall/wp-content/uploads/2015/01/VS2013-R-5.jpg
  • add to project:
    • all .c files
    • all .cu files
    • file http_stream.cpp from \src directory
    • file darknet.h from \include directory
  • (right click on project) -> properties -> Linker -> General -> Additional Library Directories, put here:

C:\opencv_3.0\opencv\build\x64\vc14\lib;$(CUDA_PATH)\lib\$(PlatformName);$(CUDNN)\lib\x64;%(AdditionalLibraryDirectories)

  • (right click on project) -> properties -> Linker -> Input -> Additional dependencies, put here:

..\..\3rdparty\lib\x64\pthreadVC2.lib;cublas.lib;curand.lib;cudart.lib;cudnn.lib;%(AdditionalDependencies)

  • (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions

OPENCV;_TIMESPEC_DEFINED;_CRT_SECURE_NO_WARNINGS;_CRT_RAND_S;WIN32;NDEBUG;_CONSOLE;_LIB;%(PreprocessorDefinitions)

  • compile to .exe (X64 & Release) and put the .dll files near the .exe: https://hsto.org/webt/uh/fk/-e/uhfk-eb0q-hwd9hsxhrikbokd6u.jpeg
    • pthreadVC2.dll, pthreadGC2.dll from \3rdparty\dll\x64
    • cusolver64_91.dll, curand64_91.dll, cudart64_91.dll, cublas64_91.dll - 91 for CUDA 9.1 or your version, from C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\bin
    • For OpenCV 3.2: opencv_world320.dll and opencv_ffmpeg320_64.dll from C:\opencv_3.0\opencv\build\x64\vc14\bin
    • For OpenCV 2.4.13: opencv_core2413.dll, opencv_highgui2413.dll and opencv_ffmpeg2413_64.dll from C:\opencv_2.4.13\opencv\build\x64\vc14\bin

How to train with multi-GPU:

  1. First train for about 1000 iterations on 1 GPU: darknet.exe detector train cfg/coco.data cfg/yolov4.cfg yolov4.conv.137
  2. Then stop and, using the partially-trained model /backup/yolov4_1000.weights, continue training with multiple GPUs (e.g. 4 GPUs): darknet.exe detector train cfg/coco.data cfg/yolov4.cfg /backup/yolov4_1000.weights -gpus 0,1,2,3

Only for small datasets it is sometimes better to decrease the learning rate; for 4 GPUs set learning_rate = 0.00025 (i.e. learning_rate = 0.001 / GPUs). In this case also increase burn_in and max_batches in your cfg-file 4x, i.e. use burn_in = 4000 instead of 1000. The same goes for steps= if policy=steps is set.

https://groups.google.com/d/msg/darknet/NbJqonJBTSY/Te5PfIpuCAAJ
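To make the arithmetic explicit, here is a tiny sketch of that scaling rule (the original max_batches value of 6000 below is just an example):

#include <cstdio>

// Multi-GPU cfg scaling described above: divide learning_rate by the GPU count,
// and multiply burn_in and max_batches by the same factor.
int main() {
    const int gpus = 4;
    printf("learning_rate = %g\n", 0.001 / gpus);  // 0.00025 for 4 GPUs
    printf("burn_in       = %d\n", 1000 * gpus);   // 4000 instead of 1000
    printf("max_batches   = %d\n", 6000 * gpus);   // scale your own value the same way
    return 0;
}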

How to train (to detect your custom objects):

(to train old Yolo v2: yolov2-voc.cfg, yolov2-tiny-voc.cfg, yolo-voc.cfg, yolo-voc.2.0.cfg, ... click by the link)

Training Yolo v4 (and v3):

  1. Create file yolo-obj.cfg with the same content as in yolov4-custom.cfg (or copy yolov4-custom.cfg to yolo-obj.cfg) and:
  • change line batch to batch=64
  • change line subdivisions to subdivisions=16
  • change line max_batches to (classes*2000, but not less than 4000), e.g. max_batches=6000 if you train for 3 classes
  • change line steps to 80% and 90% of max_batches, e.g. steps=4800,5400
  • set network size width=416 height=416 or any value multiple of 32: https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L8-L9
  • change line classes=80 to your number of objects in each of the 3 [yolo]-layers:
    • https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L610
    • https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L696
    • https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L783
  • change [filters=255] to filters=(classes + 5)x3 in the 3 [convolutional] layers before each [yolo] layer; keep in mind to change only the [convolutional] closest to each [yolo] layer:
    • https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L603
    • https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L689
    • https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L776
  • when using [Gaussian_yolo] layers, change [filters=57] to filters=(classes + 9)x3 in the 3 [convolutional] layers before each [Gaussian_yolo] layer (the yolov4-custom.cfg template has no [Gaussian_yolo] sections, so this does not apply there):
    • https://github.com/AlexeyAB/darknet/blob/6e5bdf1282ad6b06ed0e962c3f5be67cf63d96dc/cfg/Gaussian_yolov3_BDD.cfg#L604
    • https://github.com/AlexeyAB/darknet/blob/6e5bdf1282ad6b06ed0e962c3f5be67cf63d96dc/cfg/Gaussian_yolov3_BDD.cfg#L696
    • https://github.com/AlexeyAB/darknet/blob/6e5bdf1282ad6b06ed0e962c3f5be67cf63d96dc/cfg/Gaussian_yolov3_BDD.cfg#L789

So if classes=1, then it should be filters=18; if classes=2, then filters=21.

(Do not literally write in the cfg-file: filters=(classes + 5)x3)

(Generally filters depends on the classes, coords and number of masks, i.e. filters=(classes + coords + 1)*<number of mask>, where mask is indices of anchors. If mask is absent, then filters=(classes + coords + 1)*num)

For example, for 2 objects, your file yolo-obj.cfg should differ from yolov4-custom.cfg in these lines in each of the 3 [yolo]-layers:

[convolutional]
filters=21

[region]
classes=2
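The filters arithmetic above can be checked with a tiny helper (3 is the number of anchor masks per [yolo] layer in these cfg templates):

#include <cstdio>

// filters for the [convolutional] layer right before a [yolo] layer:
// (classes + 5) * number_of_masks
int filters_for(int classes, int masks) { return (classes + 5) * masks; }

int main() {
    printf("classes=1 -> filters=%d\n", filters_for(1, 3));  // 18
    printf("classes=2 -> filters=%d\n", filters_for(2, 3));  // 21
    return 0;
}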

  2. Create file obj.names in the directory build\darknet\x64\data\, with object names, each on a new line
  3. Create file obj.data in the directory build\darknet\x64\data\, containing (where classes = number of objects):

classes = 2
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/

  4. Put the image files (.jpg) of your objects in the directory build\darknet\x64\data\obj\
  5. Label each object on the images of your dataset. Use this visual GUI software for marking bounded boxes of objects and generating annotation files for Yolo v2 & v3: https://github.com/AlexeyAB/Yolo_mark

It will create a .txt file for each .jpg image file in the same directory and with the same name, containing the object number and object coordinates on this image, one line per object, in the format:

<object-class> <x_center> <y_center> <width> <height>

Where:

  • <object-class> - integer object number from 0 to (classes-1)
  • <x_center> <y_center> <width> <height> - float values relative to the width and height of the image, in the range (0.0 to 1.0]
  • for example: <x> = <absolute_x> / <image_width> or <height> = <absolute_height> / <image_height>
  • attention: <x_center> <y_center> are the center of the rectangle, not the top-left corner

For example, for img1.jpg a file img1.txt will be created, containing:

1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
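Since the center-vs-corner convention above is a common source of bad labels, here is a small C++ sketch that turns an absolute pixel box (left, top, width, height - the numbers below are hypothetical) into one Yolo annotation line:

#include <cstdio>

// Convert an absolute pixel box into the Yolo annotation format:
// <object-class> <x_center> <y_center> <width> <height>, all relative to image size.
void print_yolo_line(int cls, float left, float top, float w, float h,
                     float img_w, float img_h) {
    const float x_center = (left + w / 2.0f) / img_w;  // center of the box, not top-left corner
    const float y_center = (top + h / 2.0f) / img_h;
    printf("%d %f %f %f %f\n", cls, x_center, y_center, w / img_w, h / img_h);
}

int main() {
    print_yolo_line(1, 300.0f, 160.0f, 110.0f, 70.0f, 512.0f, 512.0f);  // hypothetical box
    return 0;
}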

  6. Create file train.txt in the directory build\darknet\x64\data\, with the file names of your training images, each on a new line, with paths relative to darknet.exe, for example:

data/obj/img1.jpg
data/obj/img2.jpg
data/obj/img3.jpg
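Writing train.txt by hand is tedious; here is a minimal C++17 sketch that lists every .jpg under data/obj/ (run it from the directory containing the darknet executable, so the paths stay relative):

#include <filesystem>
#include <fstream>

// Write train.txt with one relative image path per line, as darknet expects.
int main() {
    std::ofstream out("data/train.txt");
    for (const auto& e : std::filesystem::directory_iterator("data/obj")) {
        if (e.path().extension() == ".jpg")
            out << e.path().generic_string() << '\n';  // e.g. data/obj/img1.jpg
    }
    return 0;
}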

  7. Download the pre-trained weights for the convolutional layers and put them into the directory build\darknet\x64:
    • for yolov4.cfg, yolov4-custom.cfg (162 MB): yolov4.conv.137
    • for csresnext50-panet-spp.cfg (133 MB): csresnext50-panet-spp.conv.112
    • for yolov3.cfg, yolov3-spp.cfg (154 MB): darknet53.conv.74
    • for yolov3-tiny-prn.cfg , yolov3-tiny.cfg (6 MB): yolov3-tiny.conv.11
    • for enet-coco.cfg (EfficientNetB0-Yolov3) (14 MB): enetb0-coco.conv.132
  8. Start training with the command line: darknet.exe detector train data/obj.data cfg/yolov4-obj.cfg yolov4.conv.137

On Linux use the command ./darknet detector train data/obj.data cfg/yolov4-obj.cfg yolov4.conv.137 (just use ./darknet instead of darknet.exe)

    • (the file yolo-obj_last.weights will be saved to build\darknet\x64\backup\ every 100 iterations)
    • (a file yolo-obj_xxxx.weights will be saved to build\darknet\x64\backup\ every 1000 iterations)
    • (to disable the Loss window, e.g. when training on a computer without a monitor such as an Amazon EC2 cloud server, add -dont_show: ./darknet detector train data/obj.data cfg/yolov4-obj.cfg yolov4.conv.137 -dont_show)
    • (to see the mAP & loss chart while training on a remote server without GUI, use the command darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -dont_show -mjpeg_port 8090 -map, then open the URL http://ip-address:8090 in a Chrome or Firefox browser on another PC)

8.1. For training with mAP (mean average precision) calculation every 4 epochs (set valid=valid.txt or train.txt in the obj.data file), run: darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -map

  9. After training is complete, get the result yolo-obj_final.weights from the directory build\darknet\x64\backup\

  • After each 100 iterations you can stop and later resume training from that point. For example, after 2000 iterations you can stop training and later continue with: darknet.exe detector train data/obj.data yolo-obj.cfg backup\yolo-obj_2000.weights

(In the original repository https://github.com/pjreddie/darknet the weights file is saved only once every 10000 iterations, if iterations > 1000.)

  • Also you can get results earlier than all 45000 iterations.

Note: if during training you see nan values in the avg (loss) field, then training is going wrong; but if nan appears in some other lines, training is going well.

Note: if you changed width= or height= in your cfg-file, make sure the new values are divisible by 32.

Note: after training, use this command for detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights

Note: if an Out of memory error occurs, increase subdivisions in the .cfg-file to 16, 32 or 64: link

Annotation files made for Yolo v3 can be used for Yolo v4 directly.


How to train tiny-yolo (to detect your custom objects):

Do all the same steps as for the full yolo model described above, with these differences:

  • download the default weights file for yolov3-tiny: https://pjreddie.com/media/files/yolov3-tiny.weights
  • get the pre-trained weights yolov3-tiny.conv.15 using the command: darknet.exe partial cfg/yolov3-tiny.cfg yolov3-tiny.weights yolov3-tiny.conv.15 15
  • make your custom model yolov3-tiny-obj.cfg based on cfg/yolov3-tiny_obj.cfg instead of yolov3.cfg
  • start training: darknet.exe detector train data/obj.data yolov3-tiny-obj.cfg yolov3-tiny.conv.15

For training Yolo based on other models (DenseNet201-Yolo or ResNet50-Yolo), you can download pre-trained weights and extract the convolutional layers as shown in https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/partial.cmd. If your custom model is not based on any other model, you can train it without pre-trained weights; random initial weights will be used then.

rem Download weights for - DenseNet201, ResNet50 and ResNet152 by this link: https://pjreddie.com/darknet/imagenet/
rem Download Yolo/Tiny-yolo: https://pjreddie.com/darknet/yolo/
rem Download Yolo9000: http://pjreddie.com/media/files/yolo9000.weights
rem darknet.exe partial cfg/tiny-yolo-voc.cfg tiny-yolo-voc.weights tiny-yolo-voc.conv.13 13
darknet.exe partial cfg/csdarknet53-omega.cfg csdarknet53-omega_final.weights csdarknet53-omega.conv.105 105
darknet.exe partial cfg/cd53paspp-omega.cfg cd53paspp-omega_final.weights cd53paspp-omega.conv.137 137
darknet.exe partial cfg/csresnext50.cfg csresnext50.weights csresnext50.conv.75 75
darknet.exe partial cfg/darknet53_448.cfg darknet53_448.weights darknet53.conv.74 74
darknet.exe partial cfg/darknet53_448_xnor.cfg darknet53_448_xnor.weights darknet53_448_xnor.conv.74 74
darknet.exe partial cfg/yolov2-tiny-voc.cfg yolov2-tiny-voc.weights yolov2-tiny-voc.conv.13 13
darknet.exe partial cfg/yolov2-tiny.cfg yolov2-tiny.weights yolov2-tiny.conv.13 13
darknet.exe partial cfg/yolo-voc.cfg yolo-voc.weights yolo-voc.conv.23 23
darknet.exe partial cfg/yolov2.cfg yolov2.weights yolov2.conv.23 23
darknet.exe partial cfg/yolov3.cfg yolov3.weights yolov3.conv.81 81
darknet.exe partial cfg/yolov3-spp.cfg yolov3-spp.weights yolov3-spp.conv.85 85
darknet.exe partial cfg/yolov3-tiny.cfg yolov3-tiny.weights yolov3-tiny.conv.15 15
darknet.exe partial cfg/yolov3-tiny.cfg yolov3-tiny.weights yolov3-tiny.conv.14 14
darknet.exe partial cfg/yolov3-tiny.cfg yolov3-tiny.weights yolov3-tiny.conv.13 13
darknet.exe partial cfg/yolo9000.cfg yolo9000.weights yolo9000.conv.22 22
darknet.exe partial cfg/densenet201.cfg densenet201.weights densenet201.57 57
darknet.exe partial cfg/densenet201.cfg densenet201.weights densenet201.300 300
darknet.exe partial cfg/resnet50.cfg resnet50.weights resnet50.65 65
darknet.exe partial cfg/resnet152.cfg resnet152.weights resnet152.201 201


When should I stop training:

Usually 2000 iterations are sufficient for each class (object), but not less than 4000 iterations in total. For a more precise definition of when to stop training, use the following manual:

  1. During training you will see varying indicators of error; stop when 0.XXXXXXX avg no longer decreases:

Region Avg IOU: 0.798363, Class: 0.893232, Obj: 0.700808, No Obj: 0.004567, Avg Recall: 1.000000, count: 8
Region Avg IOU: 0.800677, Class: 0.892181, Obj: 0.701590, No Obj: 0.004574, Avg Recall: 1.000000, count: 8

9002: 0.211667, 0.60730 avg, 0.001000 rate, 3.868000 seconds, 576128 images
Loaded: 0.000000 seconds

  • 9002 - iteration number (number of batches)
  • 0.60730 avg - average loss (error) - the lower, the better

When you see that the average loss 0.xxxxxx avg no longer decreases over many iterations, you should stop training. The final average loss can range from 0.05 (for a small model and an easy dataset) to 3.0 (for a big model and a difficult dataset).

  2. Once training is stopped, take some of the last .weights files from darknet\build\darknet\x64\backup and choose the best of them:

For example, you stopped training after 9000 iterations, but the best result may come from one of the earlier weights (7000, 8000, 9000). This can happen due to overfitting. Overfitting is the case when the model detects objects on images from the training dataset but fails on any other images. You should take the weights from the Early Stopping Point:

[chart: training vs. validation error, with the Early Stopping Point marked]

To get the weights from the Early Stopping Point:

2.1. First, in your obj.data file you must specify the path to the validation dataset valid = valid.txt (the format of valid.txt is the same as for train.txt); if you have no validation images, just copy data\train.txt to data\valid.txt.

2.2. If training was stopped after 9000 iterations, validate some of the earlier weights using these commands (if you use code from another GitHub repository, use darknet.exe detector recall ... instead of darknet.exe detector map ...):

  • darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
  • darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_8000.weights
  • darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_9000.weights

Compare the last output lines for each weights file (7000, 8000, 9000):

Choose the weights file with the highest mAP (mean average precision) or IoU (intersect over union).

For example, if the biggest mAP comes from yolo-obj_8000.weights - then use these weights for detection.
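Picking the best checkpoint can be automated by running darknet detector map for each weights file and filtering its summary line. Below is a rough POSIX sketch; it assumes the map command prints a line containing "mean average precision", so adapt the match string to your build's actual output:

#include <cstdio>
#include <string>
#include <vector>

int main() {
    const std::vector<std::string> weights = {
        "backup/yolo-obj_7000.weights", "backup/yolo-obj_8000.weights",
        "backup/yolo-obj_9000.weights"};
    for (const auto& w : weights) {
        // run the validation and scan its output for the mAP summary line
        const std::string cmd = "./darknet detector map data/obj.data yolo-obj.cfg " + w + " 2>&1";
        FILE* p = popen(cmd.c_str(), "r");
        if (!p) continue;
        char line[512];
        while (fgets(line, sizeof(line), p))
            if (std::string(line).find("mean average precision") != std::string::npos)
                printf("%s: %s", w.c_str(), line);
        pclose(p);
    }
    return 0;
}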

Or just train with the -map flag:

darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -map

Then you will see the mAP chart (red line) in the Loss window. The mAP is calculated every 4 epochs using the valid=valid.txt file specified in obj.data (1 epoch = images_in_train_txt / batch iterations).

(To change the maximum x-axis value, change the max_batches= parameter to 2000*classes, e.g. max_batches=2000*3=6000 for 3 classes.)


[chart: training on the snowman example (weights taken at 1000 iterations)]

Example of custom object detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights

  • IoU (intersect over union) - average intersect over union of the predicted and ground-truth boxes for a certain threshold = 0.24

  • mAP (mean average precision) - mean value of average precisions for each class, where average precision is the average value of 11 points on the PR-curve (Precision-Recall) for each possible detection threshold of the same class (Precision=TP/(TP+FP) and Recall=TP/(TP+FN) in PascalVOC terms), page 11: http://homepages.inf.ed.ac.uk/ckiw/postscript/ijcv_voc09.pdf

mAPPascalVOC竞赛中默认的精度度量,这与MS COCO竞赛中的AP50是一样的。在Wiki词条中,精准度和召回度指标与PascalVOC的意义略有不同,但IoU的意义始终相同。

mAP is default metric of precision in the PascalVOC competition, this is the same as AP50 metric in the MS COCO competition. In terms of Wiki, indicators Precision and Recall have a slightly different meaning than in the PascalVOC competition, but IoU always has the same meaning.
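For reference, here is the IoU definition above in a few lines of C++ (axis-aligned boxes given as top-left corner plus width and height):

#include <algorithm>
#include <cstdio>

struct Box { float x, y, w, h; };  // top-left corner plus width/height

// IoU = area of intersection / area of union of two boxes.
float iou(const Box& a, const Box& b) {
    const float ix = std::max(0.0f, std::min(a.x + a.w, b.x + b.w) - std::max(a.x, b.x));
    const float iy = std::max(0.0f, std::min(a.y + a.h, b.y + b.h) - std::max(a.y, b.y));
    const float inter = ix * iy;
    const float uni = a.w * a.h + b.w * b.h - inter;
    return uni > 0.0f ? inter / uni : 0.0f;
}

int main() {
    printf("%f\n", iou({0, 0, 10, 10}, {5, 5, 10, 10}));  // 25/175 = 0.142857
    return 0;
}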


Custom object detection

Example of custom object detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights


How to improve object detection:

  1. Before training:
  • set flag random=1 in your .cfg-file - it will increase precision by training Yolo at different resolutions: link

  • increase the network resolution in your .cfg-file (height=608, width=608, or any value multiple of 32) - it will increase precision

  • check that every object you want to detect is labeled in your dataset - no object in your dataset should be left without a label. Most training issues come from wrong labels in the dataset (labels obtained with some conversion script, marked with a third-party tool, ...). Always check your dataset with: https://github.com/AlexeyAB/Yolo_mark

  • my Loss is very high and mAP is very low - is training going wrong? Run training with the -show_imgs flag at the end of the training command: do you see correct bounded boxes of objects (in the windows or in the files aug_...jpg)? If not, your training dataset is wrong.

  • for each object you want to detect, there must be at least 1 similar object in the training dataset with about the same: shape, side of object, relative size, angle of rotation, tilt, and illumination. It is desirable that your training dataset includes images of objects at different scales, rotations, lightings, sides, and backgrounds - you should preferably have 2000 different images per class or more, and you should train for 2000*classes iterations or more

  • it is desirable that your training dataset includes images with non-labeled objects that you do not want to detect - negative samples without bounded boxes (empty .txt files) - use as many images of negative samples as there are images with objects

  • what is the best way to mark objects: label only the visible part of the object, label the visible and overlapped part, or label a little more than the entire object (with a little gap)? Mark as you like - however you would like the object to be detected.

  • for training with a large number of objects in each image, add the parameter max=200 or higher in the last [yolo]-layer or [region]-layer in your cfg-file (the global maximum number of objects that can be detected by YoloV3 is 0.0615234375*(width*height), where width and height are parameters from the [net] section of the cfg-file)

  • for training on small objects (smaller than 16x16 after the image is resized to 416x416) - set layers = 23 instead of https://github.com/AlexeyAB/darknet/blob/6f718c257815a984253346bba8fb7aa756c55090/cfg/yolov4.cfg#L895, set stride=4 instead of https://github.com/AlexeyAB/darknet/blob/6f718c257815a984253346bba8fb7aa756c55090/cfg/yolov4.cfg#L892, and set stride=4 instead of https://github.com/AlexeyAB/darknet/blob/6f718c257815a984253346bba8fb7aa756c55090/cfg/yolov4.cfg#L989

  • for training on both small and large objects, use modified models:

    • Full-model: 5 yolo layers: https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov3_5l.cfg
    • Tiny-model: 3 yolo layers: https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov3-tiny_3l.cfg
    • YOLOv4: 3 yolo layers: https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-custom.cfg
  • if you train the model to distinguish left and right objects as separate classes (left/right hand, left/right turn on road signs, ...), disable the flip data augmentation by adding flip=0 here: https://github.com/AlexeyAB/darknet/blob/3d2d0a7c98dbc8923d9ff705b81ff4f7940ea6ff/cfg/yolov3.cfg#L17

  • general rule - your training dataset should cover the same relative sizes of objects that you want to detect:

    • train_network_width * train_obj_width / train_image_width ~= detection_network_width * detection_obj_width / detection_image_width
    • train_network_height * train_obj_height / train_image_height ~= detection_network_height * detection_obj_height / detection_image_height

That is, for each object from the test dataset there must be at least 1 object in the training dataset with the same class_id and about the same relative size:

object width in percent of the Training dataset ~= object width in percent of the Test dataset

So if only objects that occupy 80-90% of the image were present in the training set, the trained network will not be able to detect objects that occupy 1-10% of the image.

  • to speed up training (at the cost of detection accuracy), set the parameter stopbackward=1 for layer 136 in the cfg-file

  • each: model of object, side, illumination, scale, each 30 degrees of turn and inclination angle - these are different objects from the internal perspective of the neural network. So the more different objects you want to detect, the more complex a network model should be used.

  • to make the detected bounded boxes more accurate, you can add the 3 parameters ignore_thresh = .9 iou_normalizer=0.5 iou_loss=giou to each [yolo] layer and train; this will increase mAP@0.9 but decrease mAP@0.5.

  • only if you are an expert in neural detection networks - recalculate the anchors for your dataset for the width and height from your cfg-file: darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416, then set the same 9 anchors in each of the 3 [yolo]-layers in your cfg-file. But you should then also change the indexes of anchors masks= for each [yolo]-layer, so that the 1st [yolo]-layer has anchors larger than 60x60, the 2nd larger than 30x30, and the 3rd the remaining ones, and change filters=(classes + 5)*<number of mask> before each [yolo]-layer. If many of the calculated anchors do not fit under the appropriate layers, just try using all the default anchors.

  2. After training - for detection:

  • increase the network resolution in your .cfg-file (height=608 and width=608, or height=832 and width=832, or any value multiple of 32) - this increases precision and makes it possible to detect small objects: link

    • it is not necessary to train the network again; just use the .weights file already trained for 416x416 resolution

    • but to get even greater accuracy you should train with a higher resolution, 608x608 or 832x832. Note: if an Out of memory error occurs, increase subdivisions in the .cfg-file to 16, 32 or 64: link

How to mark bounded boxes of objects and create annotation files:

Here you can find a repository with GUI software for marking bounded boxes of objects and generating annotation files for Yolo v2 - v4: https://github.com/AlexeyAB/Yolo_mark

It includes examples of train.txt, obj.names, obj.data, yolo-obj.cfg, air1-6.txt, bird1-4.txt for 2 classes of objects (air, bird), and train_obj.cmd showing how to train on this image set with Yolo v2 - v4

Different tools for marking objects in images:

  1. in C++: https://github.com/AlexeyAB/Yolo_mark
  2. in Python: https://github.com/tzutalin/labelImg
  3. in Python: https://github.com/Cartucho/OpenLabeling
  4. in C++: https://www.ccoderun.ca/darkmark/
  5. in JavaScript: https://github.com/opencv/cvat

Using Yolo9000

Simultaneous detection and classification of 9000 objects: darknet.exe detector test cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights data/dog.jpg

  • yolo9000.weights - (186 MB Yolo9000 Model) requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo9000.weights
  • yolo9000.cfg - the cfg-file of Yolo9000, together with the paths to 9k.tree and coco9k.map: https://github.com/AlexeyAB/darknet/blob/617cf313ccb1fe005db3f7d88dec04a04bd97cc2/cfg/yolo9000.cfg#L217-L218

    • 9k.tree - WordTree of 9418 categories
    • coco9k.map - map 80 categories from MSCOCO to WordTree 9k.tree: https://raw.githubusercontent.com/AlexeyAB/darknet/master/build/darknet/x64/data/coco9k.map
  • combine9k.data - data file with the paths to: 9k.labels, 9k.names, inet9k.map (change the path to your combine9k.train.list): https://raw.githubusercontent.com/AlexeyAB/darknet/master/build/darknet/x64/data/combine9k.data
    • 9k.labels - 9418 labels of objects: https://raw.githubusercontent.com/AlexeyAB/darknet/master/build/darknet/x64/data/9k.labels
    • 9k.names - 9418 names of objects: https://raw.githubusercontent.com/AlexeyAB/darknet/master/build/darknet/x64/data/9k.names
    • inet9k.map - map 200 categories from ImageNet to WordTree 9k.tree: https://raw.githubusercontent.com/AlexeyAB/darknet/master/build/darknet/x64/data/inet9k.map

How to use Yolo as DLL and SO libraries

  • Linux
    • using build.sh or
    • build darknet using cmake or
    • set LIBSO=1 in the Makefile and do make
  • Windows
    • using build.ps1 or
    • build darknet using cmake or
    • compile build\darknet\yolo_cpp_dll.sln solution or build\darknet\yolo_cpp_dll_no_gpu.sln solution

There are 2 APIs:

  • C API: https://github.com/AlexeyAB/darknet/blob/master/include/darknet.h
    • Python examples using the C API:
      • https://github.com/AlexeyAB/darknet/blob/master/darknet.py
      • https://github.com/AlexeyAB/darknet/blob/master/darknet_video.py
  • C++ API: https://github.com/AlexeyAB/darknet/blob/master/include/yolo_v2_class.hpp
    • C++ example that uses the C++ API: https://github.com/AlexeyAB/darknet/blob/master/src/yolo_console_dll.cpp

  1. To compile Yolo as a C++ DLL file yolo_cpp_dll.dll - open the solution build\darknet\yolo_cpp_dll.sln, set x64 and Release, and do: Build -> Build yolo_cpp_dll
    • you must have CUDA 10.0 installed
    • to use cuDNN: (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and add at the beginning of the line: CUDNN;
  2. To use Yolo as a DLL from your C++ console application - open the solution build\darknet\yolo_console_dll.sln, set x64 and Release, and do: Build -> Build yolo_console_dll
    • you can run the console application from Windows Explorer: build\darknet\x64\yolo_console_dll.exe, with the command line: yolo_console_dll.exe data/coco.names yolov4.cfg yolov4.weights test.mp4
    • after launching the console application and entering the image file name, you will see information for each detected object: <obj_id> <left_x> <top_y> <width> <height> <probability>
    • to use the OpenCV GUI, uncomment the line //#define OPENCV in the yolo_console_dll.cpp file: link
    • see an example of the source code for detection on a video file: link

yolo_cpp_dll.dll API: link

struct bbox_t {
    unsigned int x, y, w, h;    // (x,y) - top-left corner, (w, h) - width & height of bounded box
    float prob;                    // confidence - probability that the object was found correctly
    unsigned int obj_id;        // class of object - from range [0, classes-1]
    unsigned int track_id;        // tracking id for video (0 - untracked, 1 - inf - tracked object)
    unsigned int frames_counter;// counter of frames on which the object was detected
};

class Detector {
public:
        Detector(std::string cfg_filename, std::string weight_filename, int gpu_id = 0);
        ~Detector();

        std::vector<bbox_t> detect(std::string image_filename, float thresh = 0.2, bool use_mean = false);
        std::vector<bbox_t> detect(image_t img, float thresh = 0.2, bool use_mean = false);
        static image_t load_image(std::string image_filename);
        static void free_image(image_t m);

#ifdef OPENCV
        std::vector<bbox_t> detect(cv::Mat mat, float thresh = 0.2, bool use_mean = false);
        std::shared_ptr<image_t> mat_to_image_resize(cv::Mat mat) const;
#endif
};
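A minimal usage sketch for this C++ API (compile against the built yolo_cpp_dll / darknet library; the file names are the usual examples from this manual):

#include <cstdio>
#include <vector>
#include "yolo_v2_class.hpp"

// Load the model once, detect objects on one image, and print every result box.
int main() {
    Detector detector("cfg/yolov4.cfg", "yolov4.weights");  // gpu_id defaults to 0
    std::vector<bbox_t> boxes = detector.detect("dog.jpg", 0.25f /* threshold */);
    for (const bbox_t& b : boxes)
        printf("obj_id=%u prob=%.2f box=(%u,%u %ux%u)\n",
               b.obj_id, b.prob, b.x, b.y, b.w, b.h);
    return 0;
}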

Using dark.dll from C#

Looking at the code: in yolov4, yolo_cpp_dll.dll has been renamed to dark.dll, and a few functions have been added.


Open the YoloWrapper.cs file of the Alturos.Yolo project and modify the DLL import code accordingly:


Rebuild the C# project, copy the newly built dark.dll and its dependent DLLs to the output directory, and start the test GUI; detection with a YoloV2 model works normally.


Calling dark.dll from other languages works the same way.