https://github.com/AlexeyAB/darknet
(Neural networks for object detection) – Tensor Cores can be used on Linux and Windows
Yolo v4 paper: https://arxiv.org/abs/2004.10934
More details: http://pjreddie.com/darknet/yolo/
How to evaluate AP of YoloV4 on the MS COCO evaluation server
Download and unzip test-dev2017 dataset from MS COCO server: http://images.cocodataset.org/zips/test2017.zip
Download the list of images for the Detection task and replace the paths with yours:
https://raw.githubusercontent.com/AlexeyAB/darknet/master/scripts/testdev2017.txt
The content of the coco.data file should be:
classes= 80
train =
valid =
names = data/coco.names
backup = backup
eval=coco
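With this coco.data prepared (and valid= pointing to the downloaded testdev2017.txt), a run along the following lines produces the JSON detection results for upload to the evaluation server; the cfg and weights names here stand for whatever files you downloaded:

./darknet detector valid cfg/coco.data cfg/yolov4.cfg yolov4.weights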
How to evaluate FPS of YoloV4 on GPU
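A typical measurement command looks like this (a sketch: test.mp4 is a placeholder video file, and the -benchmark flag, if present in your build, measures FPS without drawing):

./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -benchmark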
Pre-trained models
Weights-files for different cfg-files (trained on the MS COCO dataset):
FPS on RTX 2070 (R) and Tesla V100 (V):
Yolo v3 models
Yolo v2 models
yolov2.cfg (194 MB COCO Yolo v2) - requires 4 GB GPU-RAM: https://pjreddie.com/media/files/yolov2.weights
yolo-voc.cfg (194 MB VOC Yolo v2) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo-voc.weights
yolov2-tiny.cfg (43 MB COCO Yolo v2) - requires 1 GB GPU-RAM: https://pjreddie.com/media/files/yolov2-tiny.weights
yolov2-tiny-voc.cfg (60 MB VOC Yolo v2) - requires 1 GB GPU-RAM: http://pjreddie.com/media/files/yolov2-tiny-voc.weights
yolo9000.cfg (186 MB Yolo9000 model) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo9000.weights
Put the downloaded weights-file near darknet.exe.
You can get cfg-files from the directory darknet/cfg/
Requirements
see details at https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installlinux-tar ; on Windows copy cudnn.h, cudnn64_7.dll, cudnn64_7.lib, see details at https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installwindows )
Yolo v3 in other frameworks
Datasets
https://github.com/angeligareta/Datasets2Darknet#detection-task
Example results
https://www.youtube.com/watch?v=MPU2HistivI
Others: https://www.youtube.com/user/pjreddie/videos
Improvements in this repository
added the ability for training with GPU-processing using CPU-RAM to increase the mini_batch_size and increase accuracy (instead of batch-norm sync)
improved binary neural network performance 2x-4x times for Detection on CPU and GPU if you trained your own weights by using this XNOR-net model (bit-1 inference) : https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov3-tiny_xnor.cfg
improved neural network performance ~7% by fusing 2 layers into 1: Convolutional + Batch-norm
improved performance: Detection 2x times on GPU Volta/Turing (Tesla V100, GeForce RTX, ...) using Tensor Cores if CUDNN_HALF is defined in the Makefile or darknet.sln
improved performance ~1.2x times on FullHD, ~2x times on 4K, for detection on the video (file/stream) using darknet detector demo...
improved performance ~3.5x times of data augmentation for training (using OpenCV SSE/AVX functions instead of hand-written functions) - removes bottleneck for training on multi-GPU or GPU Volta
Improved performance of detection and training on Intel CPU with AVX (Yolo v3 ~85%)
optimized memory allocation during network resizing when random=1
optimized GPU initialization for detection - we use batch=1 initially instead of re-init with batch=1
added correct calculation of mAP, F1, IoU, Precision-Recall using command darknet detector map…
added drawing of chart of average-Loss and accuracy-mAP (-map flag) during training
run ./darknet detector demo ... -json_port 8070 -mjpeg_port 8090 as JSON and MJPEG server to get results online over the network by using your soft or Web-browser
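For example, with the ports above you can point a Web-browser on the same machine at http://localhost:8090 to view the MJPEG stream, while JSON results are served on TCP port 8070.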
added a tutorial - How to train Yolo v4-v2 (to detect your custom objects)
Also, you might be interested in the simplified repository with an INT8-quantization implementation (+30% speedup, -1% mAP): https://github.com/AlexeyAB/yolo2_light
How to use on the command line
On Linux use ./darknet instead of darknet.exe, e.g.: ./darknet detector test ./cfg/coco.data ./cfg/yolov4.cfg ./yolov4.weights
On Linux the executable file ./darknet is in the root directory, while on Windows it is in the directory \build\darknet\x64
Using a network video-camera mjpeg-stream with an Android smartphone
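For example (a sketch: the IP address and port are placeholders for your phone's actual mjpeg-stream URL, e.g. from an IP Webcam app):

darknet.exe detector demo data/coco.data cfg/yolov4.cfg yolov4.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0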
How to compile on Linux (using cmake)
The CMakeLists.txt file attempts to find installed optional dependencies such as CUDA, cuDNN, ZED and builds against them accordingly. It also creates a darknet shared-object library file for code development.
Inside the cloned repository:
mkdir build-release
cd build-release
cmake ..
make
make install
How to compile on Linux (using make)
Just do make in the darknet directory. Before making, you can set the following options in the Makefile: link
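The most commonly used Makefile options (descriptions paraphrase the Makefile's own comments; enable only those that match your installed libraries):

GPU=1        # build with CUDA to accelerate by using GPU
CUDNN=1      # build with cuDNN to accelerate training
CUDNN_HALF=1 # build for Tensor Cores (Volta/Turing GPUs)
OPENCV=1     # build with OpenCV for image and video I/O
AVX=1        # accelerate detection on Intel CPU with AVX
OPENMP=1     # build with OpenMP support to use multi-core CPU
LIBSO=1      # build the shared library darknet.so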
To run the darknet examples from this article on Linux, replace darknet.exe with ./darknet, i.e. use this command: ./darknet detector test ./cfg/coco.data ./cfg/yolov4.cfg ./yolov4.weights
How to compile on Windows (using CMake-GUI)
This is the recommended approach if you have installed Visual Studio 2015/2017/2019, CUDA > 10.0, cuDNN > 7.0, and OpenCV > 2.4.
Note: make sure CUDA and OpenCV are installed properly.
The CMake-GUI steps are shown in the picture below:
How to compile on Windows (using vcpkg)
vcpkg is a tool that downloads dependency packages automatically.
If you have already installed Visual Studio 2015/2017/2019, CUDA > 10.0, cuDNN > 7.0, and OpenCV > 2.4, then the CMake-GUI approach above is recommended instead.
Otherwise, follow these steps:
PS \> cd $env:VCPKG_ROOT
PS Code\vcpkg> .\vcpkg install pthreads opencv[ffmpeg]  # replace with opencv[cuda,ffmpeg] if you want CUDA-accelerated OpenCV
How to compile on Windows (legacy way)
1.1. Find the files opencv_world320.dll and opencv_ffmpeg320_64.dll (or opencv_world340.dll and opencv_ffmpeg340_64.dll) in C:\opencv_3.0\opencv\build\x64\vc14\bin and put them near darknet.exe
1.2. Check that there are bin and include folders in C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0; if not, copy them to this folder from the path where CUDA is installed
1.3. To install CUDNN (to speed up the neural network), do the following:
1.4. If you want to build without CUDNN then: open \darknet.sln -> (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and remove this: CUDNN;
4.1 (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories: C:\opencv_2.4.13\opencv\build\include
4.2 (right click on project) -> properties -> Linker -> General -> Additional Library Directories: C:\opencv_2.4.13\opencv\build\x64\vc14\lib
Note: CUDA must be installed only after Visual Studio has been installed.
How to compile (custom):
Also, you can create your own darknet.sln & darknet.vcxproj; this example is for CUDA 9.1 and OpenCV 3.0
Then add to your created project:
Additional Include Directories: C:\opencv_3.0\opencv\build\include;..\..\3rdparty\include;%(AdditionalIncludeDirectories);$(CudaToolkitIncludeDir);$(CUDNN)\include
Additional Library Directories: C:\opencv_3.0\opencv\build\x64\vc14\lib;$(CUDA_PATH)\lib\$(PlatformName);$(CUDNN)\lib\x64;%(AdditionalLibraryDirectories)
Additional Dependencies: ..\..\3rdparty\lib\x64\pthreadVC2.lib;cublas.lib;curand.lib;cudart.lib;cudnn.lib;%(AdditionalDependencies)
Preprocessor Definitions: OPENCV;_TIMESPEC_DEFINED;_CRT_SECURE_NO_WARNINGS;_CRT_RAND_S;WIN32;NDEBUG;_CONSOLE;_LIB;%(PreprocessorDefinitions)
How to train with multi-GPU:
Only for small datasets it is sometimes better to decrease the learning rate; for 4 GPUs set learning_rate = 0.00025 (i.e. learning_rate = 0.001 / GPUs). In this case also increase burn_in = and max_batches = in your cfg-file 4x times, i.e. use burn_in = 4000 instead of 1000. The same goes for steps= if policy=steps is set.
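For example, after training roughly the first 1000 iterations on a single GPU, training can be restarted on 4 GPUs like this (file names are placeholders for your own):

darknet.exe detector train data/obj.data yolo-obj.cfg backup/yolo-obj_1000.weights -gpus 0,1,2,3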
https://groups.google.com/d/msg/darknet/NbJqonJBTSY/Te5PfIpuCAAJ
How to train (to detect your custom objects):
(For training old Yolo v2 use configuration files: yolov2-voc.cfg, yolov2-tiny-voc.cfg, yolo-voc.cfg, yolo-voc.2.0.cfg, ... see details by the link)
To train Yolo v4 (and v3):
So if classes=1 then filters=18; if classes=2 then filters=21.
(Do not literally write filters=(classes + 5)x3 in the cfg-file)
(Generally filters depends on the number of classes, coords and masks, i.e. filters=(classes + coords + 1)*<number of masks>)
For example, for 2 objects, your yolo-obj.cfg should differ from yolov4-custom.cfg in the following lines in each of the 3 [yolo]-layers (filters = (2 + 5) * 3 = 21):
[convolutional]
filters=21
[region]
classes=2
Create a file obj.data containing (where classes = number of objects):
classes= 2
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/
It will create a .txt-file for each .jpg-image-file in the same directory and with the same name, containing the object number and object coordinates on this image, one line per object, in the format: <object-class> <x_center> <y_center> <width> <height>
Where:
<object-class> - integer object number from 0 to (classes-1)
<x_center> <y_center> <width> <height> - float values relative to the width and height of the image, in the range (0.0 to 1.0]
attention: <x_center> <y_center> are the center of the rectangle, not the top-left corner
For example, for img1.jpg a file img1.txt will be created containing:
1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
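The relative values above are just absolute pixel coordinates divided by the image size; a minimal C++ sketch with hypothetical pixel values:

#include <cstdio>

int main() {
    // hypothetical image size and one box in absolute pixels
    const double img_w = 640, img_h = 480;
    const double left = 367, top = 155, box_w = 138, box_h = 71;

    // Yolo format: <object-class> <x_center> <y_center> <width> <height>,
    // where (x_center, y_center) is the box center, not the top-left corner
    double x_center = (left + box_w / 2) / img_w;
    double y_center = (top + box_h / 2) / img_h;
    printf("1 %f %f %f %f\n", x_center, y_center, box_w / img_w, box_h / img_h);
    return 0;
}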
Create a file train.txt with the filenames of your images, each filename on a new line, with paths relative to darknet.exe, for example:
data/obj/img1.jpg
data/obj/img2.jpg
data/obj/img3.jpg
Command line on Linux: ./darknet detector train data/obj.data cfg/yolov4-obj.cfg yolov4.conv.137 (just use ./darknet instead of darknet.exe)
8.1. For training with mAP (mean average precision) calculation for each 4 Epochs (set valid=valid.txt or train.txt in the obj.data file) run: darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -map
After training is complete - get result yolo-obj_final.weights from path build\darknet\x64\backup\
After each 100 iterations you can stop and later start training from this point. For example, after 2000 iterations you can stop training, and later just start training using: darknet.exe detector train data/obj.data yolo-obj.cfg backup\yolo-obj_2000.weights
(in the original repository https://github.com/pjreddie/darknet the weights-file is saved only once every 10 000 iterations if(iterations > 1000))
Also you can get result earlier than all 45000 iterations.
Note: if during training you see nan values in the avg (loss) field, then training goes wrong; but if nan appears in some other lines, then training goes well.
Note: if you changed width= or height= in your cfg-file, then the new width and height must be divisible by 32.
Note: after training, use the following command for detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights
Note: if an Out of memory error occurs, then in the .cfg-file you should increase subdivisions=16, 32 or 64: link
Annotation files created for Yolo v3 can be used directly for Yolo v4.
How to train tiny-yolo (to detect your custom objects):
Do all the same steps as for the full yolo model above, with these differences:
To train Yolo based on other models (DenseNet201-Yolo or ResNet50-Yolo), download and prepare pre-trained weights as shown in this file: https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/partial.cmd (the partial command cuts out the first N layers of a weights-file so they can serve as pre-trained backbone weights). If you made a custom model that is not based on other models, you can train it without pre-trained weights; random initial weights will be used instead.
rem Download weights for - DenseNet201, ResNet50 and ResNet152 by this link: https://pjreddie.com/darknet/imagenet/
rem Download Yolo/Tiny-yolo: https://pjreddie.com/darknet/yolo/
rem Download Yolo9000: http://pjreddie.com/media/files/yolo9000.weights
rem darknet.exe partial cfg/tiny-yolo-voc.cfg tiny-yolo-voc.weights tiny-yolo-voc.conv.13 13
darknet.exe partial cfg/csdarknet53-omega.cfg csdarknet53-omega_final.weights csdarknet53-omega.conv.105 105
darknet.exe partial cfg/cd53paspp-omega.cfg cd53paspp-omega_final.weights cd53paspp-omega.conv.137 137
darknet.exe partial cfg/csresnext50.cfg csresnext50.weights csresnext50.conv.75 75
darknet.exe partial cfg/darknet53_448.cfg darknet53_448.weights darknet53.conv.74 74
darknet.exe partial cfg/darknet53_448_xnor.cfg darknet53_448_xnor.weights darknet53_448_xnor.conv.74 74
darknet.exe partial cfg/yolov2-tiny-voc.cfg yolov2-tiny-voc.weights yolov2-tiny-voc.conv.13 13
darknet.exe partial cfg/yolov2-tiny.cfg yolov2-tiny.weights yolov2-tiny.conv.13 13
darknet.exe partial cfg/yolo-voc.cfg yolo-voc.weights yolo-voc.conv.23 23
darknet.exe partial cfg/yolov2.cfg yolov2.weights yolov2.conv.23 23
darknet.exe partial cfg/yolov3.cfg yolov3.weights yolov3.conv.81 81
darknet.exe partial cfg/yolov3-spp.cfg yolov3-spp.weights yolov3-spp.conv.85 85
darknet.exe partial cfg/yolov3-tiny.cfg yolov3-tiny.weights yolov3-tiny.conv.15 15
darknet.exe partial cfg/yolov3-tiny.cfg yolov3-tiny.weights yolov3-tiny.conv.14 14
darknet.exe partial cfg/yolov3-tiny.cfg yolov3-tiny.weights yolov3-tiny.conv.13 13
darknet.exe partial cfg/yolo9000.cfg yolo9000.weights yolo9000.conv.22 22
darknet.exe partial cfg/densenet201.cfg densenet201.weights densenet201.57 57
darknet.exe partial cfg/densenet201.cfg densenet201.weights densenet201.300 300
darknet.exe partial cfg/resnet50.cfg resnet50.weights resnet50.65 65
darknet.exe partial cfg/resnet152.cfg resnet152.weights resnet152.201 201
When should I stop training:
Usually 2000 iterations are sufficient for each class (object), but not less than 4000 iterations in total. For a more precise definition of when to stop training, use the following guide:
During training you will see varying indicators of error; you should stop when 0.XXXXXXX avg no longer decreases:
Region Avg IOU: 0.798363, Class: 0.893232, Obj: 0.700808, No Obj: 0.004567, Avg Recall: 1.000000, count: 8
Region Avg IOU: 0.800677, Class: 0.892181, Obj: 0.701590, No Obj: 0.004574, Avg Recall: 1.000000, count: 8
9002: 0.211667, 0.60730 avg, 0.001000 rate, 3.868000 seconds, 576128 images
Loaded: 0.000000 seconds
Here 9002 is the iteration number (number of batch) and 0.60730 avg is the average loss (error) - the lower, the better.
When you see that the average loss 0.xxxxxx avg no longer decreases over many iterations, you should stop training. The final average loss can range from 0.05 (for a small model and an easy dataset) to 3.0 (for a big model and a difficult dataset).
Once training is stopped, you should take some of last .weights-files from darknet\build\darknet\x64\backup and choose the best of them:
For example, you stopped training after 9000 iterations, but the best result may come from one of the earlier weights (7000, 8000, 9000). This can happen due to overfitting. Overfitting is the case when the network detects objects on images from the training dataset, but cannot detect objects on any other images. You should get weights from the Early Stopping Point:
To get weights from the Early Stopping Point:
2.1. At first, in your file obj.data you must specify the path to the validation dataset valid = valid.txt (format of valid.txt is the same as train.txt); if you have no validation images, just copy data\train.txt to data\valid.txt.
2.2. If training is stopped after 9000 iterations, validate some of the previous weights using the commands below:
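For example (weights paths as saved by the training run above):

darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_8000.weights
darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_9000.weights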
(If you use another GitHub repository, then use darknet.exe detector recall... instead of darknet.exe detector map...)
And compare the last output lines for each weights-file (7000, 8000, 9000):
Choose the weights-file with the highest mAP (mean average precision) or IoU (intersection over union).
For example, if the highest mAP comes from yolo-obj_8000.weights, then use this weights-file for detection.
Or just train with the -map flag:
darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -map
You will then see the mAP-chart (red line) in the Loss-chart window. mAP is calculated for each 4 Epochs using the valid=valid.txt file specified in the obj.data file (1 Epoch = images_in_train_txt / batch iterations).
(To change the maximum x-axis value, change the max_batches= parameter to 2000*classes, e.g. max_batches=6000 for 3 classes.)
Example training chart for the snowman sample (weights taken at 1000 iterations)
IoU (intersection over union) - average intersection over union of objects and detections for a certain threshold = 0.24
mAP (mean average precision) - mean value of average precisions for each class, where average precision is the average value of 11 points on the PR-curve for each possible threshold (each probability of detection) for the same class (Precision-Recall in terms of PascalVOC, where Precision=TP/(TP+FP) and Recall=TP/(TP+FN)), page 11: http://homepages.inf.ed.ac.uk/ckiw/postscript/ijcv_voc09.pdf
mAP is the default metric of precision in the PascalVOC competition and is the same as the AP50 metric in the MS COCO competition. In terms of Wiki, the indicators Precision and Recall have a slightly different meaning than in the PascalVOC competition, but IoU always has the same meaning.
Custom object detection:
Example of custom object detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights
How to improve object detection:
set flag random=1 in your .cfg-file – it will increase precision by training Yolo for different resolutions: link
increase network resolution in your .cfg-file (height=608, width=608 or any value multiple of 32) - it will increase precision
check that each object you want to detect is actually labeled in your dataset - not a single object in your dataset should be left without a label. Most training issues come from wrong labels in the dataset (labels produced by some conversion script, marked with a third-party tool, ...). Always check your dataset by using: https://github.com/AlexeyAB/Yolo_mark
My Loss is very high and mAP is very low, is training wrong? Run training with -show_imgs flag at the end of training command, do you see correct bounded boxes of objects (in windows or in files aug_...jpg)? If no – your training dataset is wrong.
for each object which you want to detect - there must be at least 1 similar object in the Training dataset with about the same: shape, side of object, relative size, angle of rotation, tilt, illumination. It is desirable that your training dataset includes images with objects at different: scales, rotations, lightings, from different sides, on different backgrounds - you should preferably have 2000 different images for each class or more, and you should train for 2000*classes iterations or more
it is desirable that your training dataset includes images with non-labeled objects that you do not want to detect - negative samples without bounded boxes (empty .txt files) - use as many images of negative samples as there are images with objects
What is the best way to mark objects: label only the visible part of the object, or label the visible and overlapped part of the object, or label a little more than the entire object (with a little gap)? Mark as you like - how would you like it to be detected.
for training with a large number of objects in each image, add the parameter max=200 or a higher value in the last [yolo]-layer or [region]-layer in your cfg-file (the global maximum number of objects that can be detected by YoloV3 is 0.0615234375*(width*height), where width and height are parameters from the [net] section in the cfg-file; e.g. for width=416 and height=416 this gives 0.0615234375*173056 = 10647 objects)
for training for small objects (smaller than 16x16 after the image is resized to 416x416) - set layers = 23 instead of https://github.com/AlexeyAB/darknet/blob/6f718c257815a984253346bba8fb7aa756c55090/cfg/yolov4.cfg#L895 , set stride=4 instead of https://github.com/AlexeyAB/darknet/blob/6f718c257815a984253346bba8fb7aa756c55090/cfg/yolov4.cfg#L892 , and set stride=4 instead of https://github.com/AlexeyAB/darknet/blob/6f718c257815a984253346bba8fb7aa756c55090/cfg/yolov4.cfg#L989
for training for both small and large objects use modified models:
If you train the model to distinguish Left and Right objects as separate classes (left/right hand, left/right-turn on road signs, ...) then for disabling flip data augmentation - add flip=0 here: https://github.com/AlexeyAB/darknet/blob/3d2d0a7c98dbc8923d9ff705b81ff4f7940ea6ff/cfg/yolov3.cfg#L17
General rule - your training dataset should include such a set of relative sizes of objects that you want to detect:
I.e. for each object from the Test dataset there must be at least 1 object in the Training dataset with the same class_id and about the same relative size:
object width in percent from Training dataset ~= object width in percent from Test dataset
That is, if only objects that occupied 80-90% of the image were present in the training set, then the trained network will not be able to detect objects that occupy 1-10% of the image.
to speedup training (with decreasing detection accuracy) set param stopbackward=1 for layer-136 in cfg-file
Each: model of object, side, illumination, scale, each 30 degrees of turn and inclination angle - these are different objects from the internal perspective of the neural network. So the more different objects you want to detect, the more complex a network model should be used.
to make the detected bounded boxes more accurate, you can add 3 parameters ignore_thresh = .9 iou_normalizer=0.5 iou_loss=giou to each [yolo] layer and train; it will increase mAP@0.9, but decrease mAP@0.5.
Only if you are an expert in neural detection networks - recalculate anchors for your dataset for the width and height from the cfg-file: darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416 then set the same 9 anchors in each of the 3 [yolo]-layers in your cfg-file. But you should change the indexes of anchors masks= for each [yolo]-layer, so that the 1st [yolo]-layer has anchors larger than 60x60, the 2nd larger than 30x30, and the 3rd the remaining ones. Also you should change the filters=(classes + 5)*<number of masks> before each [yolo]-layer accordingly.
Increase network resolution by setting height=608 and width=608, or height=832 and width=832, or any value multiple of 32 in your .cfg-file - this increases the precision and makes it possible to detect small objects: link
it is not necessary to train the network again, just use .weights-file already trained for 416x416 resolution
but to get even greater accuracy you should train with higher resolution 608x608 or 832x832, note: if error Out of memory occurs then in .cfg-file you should increase subdivisions=16, 32 or 64: link
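A sketch of the corresponding [net]-section change in the .cfg-file:

[net]
width=608
height=608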
How to mark bounded boxes of objects and create annotation files:
Here you can find a repository with GUI-software for marking bounded boxes of objects and generating annotation files for Yolo v2 - v4: https://github.com/AlexeyAB/Yolo_mark
It includes examples of train.txt, obj.names, obj.data, yolo-obj.cfg, air1-6.txt, bird1-4.txt for 2 classes of objects (air, bird), and train_obj.cmd with an example of how to train this image-set with Yolo v2 - v4.
Different tools for marking objects in images:
Using Yolo9000
Simultaneous detection and classification of 9000 objects: darknet.exe detector test cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights data/dog.jpg
https://github.com/AlexeyAB/darknet/blob/617cf313ccb1fe005db3f7d88dec04a04bd97cc2/cfg/yolo9000.cfg#L217-L218
How to use Yolo as a DLL and SO library
There are 2 APIs:
yolo_cpp_dll.dll-API: link
struct bbox_t {
unsigned int x, y, w, h; // (x,y) - top-left corner, (w, h) - width & height of bounded box
float prob; // confidence - probability that the object was found correctly
unsigned int obj_id; // class of object - from range [0, classes-1]
unsigned int track_id; // tracking id for video (0 - untracked, 1 - inf - tracked object)
unsigned int frames_counter;// counter of frames on which the object was detected
};
class Detector {
public:
Detector(std::string cfg_filename, std::string weight_filename, int gpu_id = 0);
~Detector();
std::vector<bbox_t> detect(std::string image_filename, float thresh = 0.2, bool use_mean = false);
std::vector<bbox_t> detect(image_t img, float thresh = 0.2, bool use_mean = false);
static image_t load_image(std::string image_filename);
static void free_image(image_t m);
#ifdef OPENCV
std::vector<bbox_t> detect(cv::Mat mat, float thresh = 0.2, bool use_mean = false);
std::shared_ptr<image_t> mat_to_image_resize(cv::Mat mat) const;
#endif
};
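A minimal usage sketch of this API (assuming the repository's yolo_v2_class.hpp header and a built yolo_cpp_dll/darknet library to link against; the cfg, weights and image file names are placeholders):

#include "yolo_v2_class.hpp"  // Detector and bbox_t from this repository
#include <iostream>
#include <vector>

int main() {
    // load the network on GPU 0
    Detector detector("yolov4.cfg", "yolov4.weights", 0);

    // detect objects with confidence threshold 0.2
    std::vector<bbox_t> result = detector.detect("dog.jpg", 0.2f);

    for (const bbox_t &b : result) {
        std::cout << "class=" << b.obj_id << " prob=" << b.prob
                  << " box=(" << b.x << "," << b.y << ","
                  << b.w << "," << b.h << ")\n";
    }
    return 0;
}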
Looking at the code, yolo_cpp_dll.dll has been renamed to dark.dll in YoloV4, and several functions have been added.
Open the YoloWrapper.cs file of the Alturos.Yolo project and modify the DLL-import code:
Rebuild the C# project, copy the newly built dark.dll and its dependent DLLs to the output directory, and launch the test GUI; detection with a YoloV2 model then works normally.
Calling dark.dll from other languages follows the same approach.