More details: http://pjreddie.com/darknet/yolo/
mAP@0.5 (AP50): https://pjreddie.com/media/files/papers/YOLOv3.pdf
YOLOv3-spp is better than YOLOv3 - mAP = 60.6%, FPS = 20: https://pjreddie.com/darknet/yolo/
Yolo v3 source chart for RetinaNet on MS COCO, taken from Table 1(e): https://arxiv.org/pdf/1708.02002.pdf
Yolo v2 on Pascal VOC 2007: https://hsto.org/files/a24/21e/068/a2421e0689fb43f08584de9d44c2215f.jpg
Yolo v2 on Pascal VOC 2012 (comp4): https://hsto.org/files/3a6/fdf/b53/3a6fdfb533f34cee9b52bdd9bb0b19d9.jpg
Set the system variable OpenCV_DIR = C:\opencv\build - the folder where the include and x64 folders are.

cuDNN: on Linux copy cudnn.h and libcudnn.so as described here: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installlinux-tar ; on Windows copy cudnn.h, cudnn64_7.dll and cudnn64_7.lib as described here: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installwindows

There are weights-files for different cfg-files (smaller size -> faster speed & lower accuracy):
- yolov3-openimages.cfg (247 MB COCO Yolo v3) - requires 4 GB GPU-RAM: https://pjreddie.com/media/files/yolov3-openimages.weights
- yolov3-spp.cfg (240 MB COCO Yolo v3) - requires 4 GB GPU-RAM: https://pjreddie.com/media/files/yolov3-spp.weights
- yolov3.cfg (236 MB COCO Yolo v3) - requires 4 GB GPU-RAM: https://pjreddie.com/media/files/yolov3.weights
- yolov3-tiny.cfg (34 MB COCO Yolo v3 tiny) - requires 1 GB GPU-RAM: https://pjreddie.com/media/files/yolov3-tiny.weights
- enet-coco.cfg (EfficientNetB0-Yolo - 45.5% mAP@0.5 - 3.7 BFlops): enetb0-coco_final.weights
- yolov3-tiny-prn.cfg (33.1% mAP@0.5 - 3.5 BFlops - more)
- yolov2.cfg (194 MB COCO Yolo v2) - requires 4 GB GPU-RAM: https://pjreddie.com/media/files/yolov2.weights
- yolo-voc.cfg (194 MB VOC Yolo v2) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo-voc.weights
- yolov2-tiny.cfg (43 MB COCO Yolo v2) - requires 1 GB GPU-RAM: https://pjreddie.com/media/files/yolov2-tiny.weights
- yolov2-tiny-voc.cfg (60 MB VOC Yolo v2) - requires 1 GB GPU-RAM: http://pjreddie.com/media/files/yolov2-tiny-voc.weights
- yolo9000.cfg (186 MB Yolo9000-model) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo9000.weights

Put the weights-file near the compiled darknet.exe. You can get cfg-files from the path darknet/cfg/.
Convert yolov3.weights/cfg files to yolov3.ckpt/pb/meta by using the mystic123 or jinyu121 projects, and TensorFlow-lite.
Use yolov3.weights/cfg with: C++ example, Python example.

Datasets:
- Run ./scripts/get_coco_dataset.sh to get the labeled MS COCO detection dataset
- Run python ./scripts/get_openimages_dataset.py for labeling the train detection dataset
- Run python ./scripts/voc_label.py for labeling Train/Test/Val detection datasets
- Run ./scripts/get_imagenet_train.sh (also imagenet_label.sh for labeling the valid set)

Others: https://www.youtube.com/user/pjreddie/videos
Improvements in this repository include:
- support for Tensor Cores, with CUDNN_HALF defined in the Makefile or darknet.sln
- faster detection on video using darknet detector demo ...
- fixed usage of the [reorg]-layer
- optimized memory allocation during network resizing when random=1
- correct calculation of mAP, F1, IoU and Precision-Recall using the command darknet detector map ...
- drawing of a chart of average-Loss and accuracy-mAP (-map flag) during training
- ./darknet detector demo ... -json_port 8070 -mjpeg_port 8090 can run as a JSON and MJPEG server to get results online over the network, using your own software or a Web-browser (a client sketch follows below)

And an added manual - How to train Yolo v3/v2 (to detect your custom objects)
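For example, a minimal client for the JSON side of that server might look like the sketch below (our own illustration, not part of the repository): it connects to the TCP port given to -json_port and prints the raw JSON text it receives; the host and port here are placeholders for your own setup.

// Minimal sketch (POSIX sockets): read darknet's -json_port stream and print it.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8070);                        // value passed to -json_port
    inet_pton(AF_INET, "192.168.0.80", &addr.sin_addr); // machine running darknet
    if (connect(sock, (sockaddr*)&addr, sizeof(addr)) < 0) { perror("connect"); return 1; }
    char buf[4096];
    for (ssize_t n; (n = recv(sock, buf, sizeof(buf) - 1, 0)) > 0; ) {
        buf[n] = '\0';
        fputs(buf, stdout); // raw JSON text; feed it to any JSON parser
    }
    close(sock);
    return 0;
}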
Also, you might be interested in using a simplified repository where INT8-quantization is implemented (+30% speedup and -1% mAP): https://github.com/AlexeyAB/yolo2_light
On Linux use ./darknet instead of darknet.exe, like this: ./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights

On Linux find the executable file ./darknet in the root directory, while on Windows find it in the directory \build\darknet\x64
- Yolo v3 COCO - image: darknet.exe detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights -thresh 0.25
- Output coordinates of objects: darknet.exe detector test cfg/coco.data yolov3.cfg yolov3.weights -ext_output dog.jpg
- Yolo v3 COCO - video: darknet.exe detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights -ext_output test.mp4
- Yolo v3 COCO - WebCam 0: darknet.exe detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights -c 0
- Yolo v3 COCO for net-videocam - Smart WebCam: darknet.exe detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights http://192.168.0.80:8080/video?dummy=param.mjpg
- Yolo v3 - save result video file res.avi: darknet.exe detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights test.mp4 -out_filename res.avi
- Yolo v3 Tiny COCO - video: darknet.exe detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights test.mp4
- JSON and MJPEG server that allows multiple connections from your soft or Web-browser at ip-address:8070 and 8090: ./darknet detector demo ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output
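The MJPEG side can be consumed not only in a browser: OpenCV's VideoCapture can usually open an MJPEG-over-HTTP URL. A minimal sketch (ours; the address is a placeholder for your own host and -mjpeg_port value):

// Sketch: show darknet's MJPEG stream with OpenCV.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("http://192.168.0.80:8090"); // host running darknet, port = -mjpeg_port
    if (!cap.isOpened()) return 1;
    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("darknet mjpeg", frame);
        if (cv::waitKey(1) == 27) break; // Esc quits
    }
    return 0;
}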
- Yolo v3 Tiny on GPU #1: darknet.exe detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights -i 1 test.mp4
- Alternative method Yolo v3 COCO - image: darknet.exe detect cfg/yolov3.cfg yolov3.weights -i 0 -thresh 0.25
- Train on Amazon EC2 and watch the mAP & Loss-chart using a URL like http://ec2-35-160-228-91.us-west-2.compute.amazonaws.com:8090 in Chrome/Firefox (Darknet should be compiled with OpenCV): ./darknet detector train cfg/coco.data yolov3.cfg darknet53.conv.74 -dont_show -mjpeg_port 8090 -map
- 186 MB Yolo9000 - image: darknet.exe detector test cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights
- To process a list of images data/train.txt and save the results of detection to the file result.json, use: darknet.exe detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights -ext_output -dont_show -out result.json < data/train.txt
- To process a list of images data/train.txt and save the results of detection to result.txt, use: darknet.exe detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights -dont_show -ext_output < data/train.txt > result.txt
- Pseudo-labeling - to process a list of images data/new_train.txt and save the results of detection in Yolo training format for each image as a label .txt (in this way you can increase the amount of training data), use: darknet.exe detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights -thresh 0.25 -dont_show -save_labels < data/new_train.txt
- To calculate anchors: darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416
- To check accuracy mAP@IoU=50: darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
- To check accuracy mAP@IoU=75: darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights -iou_thresh 0.75
For using a network video-camera mjpeg-stream with any Android smartphone:
1. Download mjpeg-stream software for your Android phone: IP Webcam / Smart WebCam
2. Connect your Android phone to the computer by WiFi (through a WiFi-router) or USB
3. Start Smart WebCam on your phone
4. Replace the address below with the one shown in the phone application (Smart WebCam) and launch:
darknet.exe detector demo data/coco.data yolov3.cfg yolov3.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0
How to compile on Linux (using cmake):

The CMakeLists.txt will attempt to find installed optional dependencies like CUDA, cudnn, ZED and build against those. It will also create a shared object library file to use darknet for code development.
Do inside the cloned repository:
mkdir build-release
cd build-release
cmake ..
make
make install
How to compile on Linux (using make):

Just do make in the darknet directory. Before make, you can set such options in the Makefile: link
- GPU=1 to build with CUDA to accelerate by using GPU (CUDA should be in /usr/local/cuda)
- CUDNN=1 to build with cuDNN v5-v7 to accelerate training by using GPU (cuDNN should be in /usr/local/cudnn)
- CUDNN_HALF=1 to build for Tensor Cores (on Titan V / Tesla V100 / DGX-2 and later) - speeds up Detection 3x, Training 2x
- OPENCV=1 to build with OpenCV 4.x/3.x/2.4.x - allows detection on video files and video streams from network cameras or web-cams
- DEBUG=1 to build a debug version of Yolo
- OPENMP=1 to build with OpenMP support to accelerate Yolo by using a multi-core CPU
- LIBSO=1 to build the library darknet.so and the binary runnable file uselib that uses this library. You can try to run it like so: LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib test.mp4. To use this SO-library from your own code, see the C++ example: https://github.com/AlexeyAB/darknet/blob/master/src/yolo_console_dll.cpp, or run: LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib data/coco.names cfg/yolov3.cfg yolov3.weights test.mp4
- ZED_CAMERA=1 to build a library with ZED-3D-camera support (the ZED SDK should be installed), then run: LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib data/coco.names cfg/yolov3.cfg yolov3.weights zed_camera
To run Darknet on Linux, use the examples from this article - just use ./darknet instead of darknet.exe, i.e. use this command: ./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights
How to compile on Windows (using CMake-GUI):

This is the recommended approach to build Darknet on Windows if you have already installed Visual Studio 2015/2017/2019, CUDA > 10.0, cuDNN > 7.0, and OpenCV > 2.4.

Use CMake-GUI as shown in this image:
How to compile on Windows (using vcpkg):

If you have already installed Visual Studio 2015/2017/2019, CUDA > 10.0, cuDNN > 7.0, OpenCV > 2.4, then to compile Darknet it is recommended to use CMake-GUI.

Otherwise, follow these steps:
Install or update Visual Studio to at least version 2017, making sure to have it fully patched (run the installer again if you are not sure it has automatically updated to the latest version). If you need to install from scratch, download VS from here: Visual Studio Community
Install CUDA and cuDNN
Install git
and cmake
. Make sure they are on the Path at least for the current account
Install vcpkg and try to install a test library to make sure everything is working, for example vcpkg install opengl
Define an environment variable, VCPKG_ROOT, pointing to the install path of vcpkg
Define another environment variable, with name VCPKG_DEFAULT_TRIPLET and value x64-windows
Open Powershell and type these commands:
PS \> cd $env:VCPKG_ROOT
PS Code\vcpkg> .\vcpkg install pthreads opencv[ffmpeg] #replace with opencv[cuda,ffmpeg] in case you want to use cuda-accelerated openCV
Go to the darknet folder and build with the command .\build.ps1. If you want to use Visual Studio, you will find two custom solutions created for you by CMake after the build, one in build_win_debug and the other in build_win_release, containing all the appropriate config flags for your system.

If you have CUDA 10.0, cuDNN 7.4 and OpenCV 3.x (with paths: C:\opencv_3.0\opencv\build\include & C:\opencv_3.0\opencv\build\x64\vc14\lib), then open build\darknet\darknet.sln, set x64 and Release https://hsto.org/webt/uh/fk/-e/uhfk-eb0q-hwd9hsxhrikbokd6u.jpeg and do: Build -> Build darknet. Also add the Windows system variable CUDNN with the path to cuDNN: https://user-images.githubusercontent.com/4096485/53249764-019ef880-36ca-11e9-8ffe-d9cf47e7e462.jpg
1.1. Find the files opencv_world320.dll and opencv_ffmpeg320_64.dll (or opencv_world340.dll and opencv_ffmpeg340_64.dll) in C:\opencv_3.0\opencv\build\x64\vc14\bin and put them near darknet.exe
1.2. Check that the bin and include folders are in C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0; if they aren't, copy them to this folder from the path where CUDA is installed
1.3. To install cuDNN (to speed up the neural network), do the following:
- download and install cuDNN v7.4.1 for CUDA 10.0: https://developer.nvidia.com/rdp/cudnn-archive
- add the Windows system variable CUDNN with the path to cuDNN: https://user-images.githubusercontent.com/4096485/53249764-019ef880-36ca-11e9-8ffe-d9cf47e7e462.jpg
- copy the file cudnn64_7.dll to the folder \build\darknet\x64 near darknet.exe
1.4. If you want to build without cuDNN: open \darknet.sln -> (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and remove: CUDNN;
2. If you have another version of CUDA (not 10.0), then open build\darknet\darknet.vcxproj with Notepad, find the 2 places with "CUDA 10.0" and change them to your CUDA version. Then open \darknet.sln -> (right click on project) -> properties -> CUDA C/C++ -> Device and remove ;compute_75,sm_75 there. Then do step 1.
3. If you don't have a GPU, but have OpenCV 3.0 (with paths: C:\opencv_3.0\opencv\build\include & C:\opencv_3.0\opencv\build\x64\vc14\lib), then open build\darknet\darknet_no_gpu.sln, set x64 and Release, and do: Build -> Build darknet_no_gpu
4. If you have OpenCV 2.4.13 instead of 3.0, then you should change the paths after \darknet.sln is opened:
4.1 (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories: C:\opencv_2.4.13\opencv\build\include
4.2 (right click on project) -> properties -> Linker -> General -> Additional Library Directories: C:\opencv_2.4.13\opencv\build\x64\vc14\lib
5. If you have a GPU with Tensor Cores (nVidia Titan V / Tesla V100 / DGX-2 and later), for a 3x Detection and 2x Training speedup:
\darknet.sln -> (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and add: CUDNN_HALF;
Note: CUDA must be installed only after Visual Studio has been installed.
Also, you can create your own darknet.sln & darknet.vcxproj; this example is for CUDA 9.1 and OpenCV 3.0.

Then add to your created project:
- (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories: C:\opencv_3.0\opencv\build\include;..\..\3rdparty\include;%(AdditionalIncludeDirectories);$(CudaToolkitIncludeDir);$(CUDNN)\include
- all .c files, all .cu files, the file http_stream.cpp from the \src directory and the file darknet.h from the \include directory
- (right click on project) -> properties -> Linker -> General -> Additional Library Directories: C:\opencv_3.0\opencv\build\x64\vc14\lib;$(CUDA_PATH)\lib\$(PlatformName);$(CUDNN)\lib\x64;%(AdditionalLibraryDirectories)
- (right click on project) -> properties -> Linker -> Input -> Additional Dependencies: ..\..\3rdparty\lib\x64\pthreadVC2.lib;cublas.lib;curand.lib;cudart.lib;cudnn.lib;%(AdditionalDependencies)
- (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions: OPENCV;_TIMESPEC_DEFINED;_CRT_SECURE_NO_WARNINGS;_CRT_RAND_S;WIN32;NDEBUG;_CONSOLE;_LIB;%(PreprocessorDefinitions)
Then compile to .exe (x64 & Release) and put the .dll-s near the .exe: https://hsto.org/webt/uh/fk/-e/uhfk-eb0q-hwd9hsxhrikbokd6u.jpeg
- pthreadVC2.dll, pthreadGC2.dll from \3rdparty\dll\x64
- cusolver64_91.dll, curand64_91.dll, cudart64_91.dll, cublas64_91.dll - 91 for CUDA 9.1 or your version - from C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\bin
- For OpenCV 3.2: opencv_world320.dll and opencv_ffmpeg320_64.dll from C:\opencv_3.0\opencv\build\x64\vc14\bin
- For OpenCV 2.4.13: opencv_core2413.dll, opencv_highgui2413.dll and opencv_ffmpeg2413_64.dll from C:\opencv_2.4.13\opencv\build\x64\vc14\bin
1. Download pre-trained weights for the convolutional layers (154 MB): http://pjreddie.com/media/files/darknet53.conv.74 and put them in the directory build\darknet\x64
2. Download The Pascal VOC Data and unpack it to the directory build\darknet\x64\data\voc; the directory build\darknet\x64\data\voc\VOCdevkit\ will be created:
2.1. Download the file voc_label.py to the dir build\darknet\x64\data\voc: http://pjreddie.com/media/files/voc_label.py
3. Download and install Python for Windows: https://www.python.org/ftp/python/3.5.2/python-3.5.2-amd64.exe
4. Run the command: python build\darknet\x64\data\voc\voc_label.py (to generate the files: 2007_test.txt, 2007_train.txt, 2007_val.txt, 2012_train.txt, 2012_val.txt)
5. Run the command: type 2007_train.txt 2007_val.txt 2012_*.txt > train.txt
6. Set batch=64 and subdivisions=8 in the file yolov3-voc.cfg: link
7. Start training by using train_voc.cmd or by using the command line: darknet.exe detector train cfg/voc.data cfg/yolov3-voc.cfg darknet53.conv.74

(Note: to disable the Loss-Window use the flag -dont_show. If you are using a CPU, try darknet_no_gpu.exe instead of darknet.exe.)
8. If required, change paths in the file build\darknet\cfg\voc.data
More information about training by the link: http://pjreddie.com/darknet/yolo/#train-voc
Note: if during training you see nan values in the avg (loss) field, then training has gone wrong; but if nan appears in some other lines, training is going well.
Train it first on 1 GPU for about 1000 iterations: darknet.exe detector train cfg/voc.data cfg/yolov3-voc.cfg darknet53.conv.74
Then stop and, using the partially-trained model /backup/yolov3-voc_1000.weights, run training with multigpu (up to 4 GPUs): darknet.exe detector train cfg/voc.data cfg/yolov3-voc.cfg /backup/yolov3-voc_1000.weights -gpus 0,1,2,3

Only for small datasets is it sometimes better to decrease the learning rate; for 4 GPUs set learning_rate = 0.00025 (i.e. learning_rate = 0.001 / GPUs). In this case also increase burn_in = and max_batches = in your cfg-file 4x, i.e. use burn_in = 4000 instead of 1000. The same goes for steps= if policy=steps is set.
To train the old Yolo v2 (yolov2-voc.cfg, yolov2-tiny-voc.cfg, yolo-voc.cfg, yolo-voc.2.0.cfg, ...) follow this link: https://groups.google.com/d/msg/darknet/NbJqonJBTSY/Te5PfIpuCAAJ
Training Yolo v3:
1. Create file yolo-obj.cfg with the same content as in yolov3.cfg (or copy yolov3.cfg to yolo-obj.cfg) and:
- change the batch line to batch=64
- change the subdivisions line to subdivisions=8
- change the max_batches line to (classes*2000, but not less than 4000), f.e. max_batches=6000 if you train for 3 classes
- change the steps line to 80% and 90% of max_batches, f.e. steps=4800,5400
- change the line classes=80 to your number of objects in each of the 3 [yolo]-layers
- change [filters=255] to filters=(classes + 5)x3 in the 3 [convolutional] layers before each [yolo] layer
So if classes=1 then it should be filters=18; if classes=2 then write filters=21.

(Do not write filters=(classes + 5)x3 literally in the cfg-file)
(Generally filters depends on the classes, coords and number of masks, i.e. filters=(classes + coords + 1)*<number of mask>, where mask is the list of indices of anchors. If mask is absent, then filters=(classes + coords + 1)*num)
So for example, for 2 objects, your file yolo-obj.cfg should differ from yolov3.cfg in such lines in each of the 3 [yolo]-layers:

[convolutional]
filters=21

[yolo]
classes=2
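As a quick sanity check of the filters formula, this small helper (our own illustration, not part of the repository) computes the value from classes, coords and the number of mask indices:

// Hypothetical helper: filters = (classes + coords + 1) * <number of mask>
#include <cstdio>

int main() {
    int classes = 2; // your number of object classes
    int coords  = 4; // x, y, w, h
    int masks   = 3; // anchor indices listed in mask= of one [yolo] layer
    std::printf("filters=%d\n", (classes + coords + 1) * masks); // prints filters=21
    return 0;
}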
2. Create file obj.names in the directory build\darknet\x64\data\, with object names - each in a new line
3. Create file obj.data in the directory build\darknet\x64\data\, containing (where classes = number of objects):
classes= 2
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/
4. Put image-files (.jpg) of your objects in the directory build\darknet\x64\data\obj\
5. You should label each object in the images of your dataset. Use this visual GUI-software for marking bounded boxes of objects and generating annotation files for Yolo v2 & v3: https://github.com/AlexeyAB/Yolo_mark

It will create a .txt-file for each .jpg-image-file - in the same directory and with the same name, but with a .txt extension - and put into that file the object number and object coordinates on this image, one line per object: <object-class> <x> <y> <width> <height>
Where:
- <object-class> - integer object number from 0 to (classes-1)
- <x> <y> <width> <height> - float values relative to the width and height of the image; they can be in the range (0.0 to 1.0]
- for example: <x> = <absolute_x> / <image_width> or <height> = <absolute_height> / <image_height>
- attention: <x> <y> are the center of the rectangle (not the top-left corner)

For example, for img1.jpg the file img1.txt will be created, containing:
1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
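A minimal sketch of that conversion (our own illustration, assuming you start from absolute pixel coordinates x_min, y_min, x_max, y_max):

// Convert an absolute pixel box to one Yolo label line:
// "<object-class> <x> <y> <width> <height>" - relative values, center-based.
#include <cstdio>

void print_yolo_label(int object_class,
                      float x_min, float y_min, float x_max, float y_max,
                      float image_width, float image_height) {
    float x = (x_min + x_max) / 2.0f / image_width;  // box center x, relative
    float y = (y_min + y_max) / 2.0f / image_height; // box center y, relative
    float w = (x_max - x_min) / image_width;         // box width, relative
    float h = (y_max - y_min) / image_height;        // box height, relative
    std::printf("%d %f %f %f %f\n", object_class, x, y, w, h);
}

int main() {
    // e.g. class 1, box (292,175)-(708,425) in a 1000x600 image
    print_yolo_label(1, 292, 175, 708, 425, 1000, 600);
    return 0;
}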
6. Create file train.txt in the directory build\darknet\x64\data\, with filenames of your images, each filename in a new line, with the path relative to darknet.exe, for example containing:
data/obj/img1.jpg
data/obj/img2.jpg
data/obj/img3.jpg
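If you prefer to generate train.txt programmatically, a sketch like this (ours; C++17, run from the directory that contains darknet.exe) lists every .jpg under data/obj/:

// Sketch: write data/train.txt with one darknet.exe-relative path per .jpg file.
#include <filesystem>
#include <fstream>

int main() {
    namespace fs = std::filesystem;
    std::ofstream out("data/train.txt");
    for (const auto& entry : fs::directory_iterator("data/obj")) {
        if (entry.path().extension() == ".jpg")
            out << "data/obj/" << entry.path().filename().string() << '\n';
    }
    return 0;
}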
7. Download pre-trained weights for the convolutional layers (154 MB): https://pjreddie.com/media/files/darknet53.conv.74 and put them in the directory build\darknet\x64
8. Start training by using the command line: darknet.exe detector train data/obj.data yolo-obj.cfg darknet53.conv.74

To train on Linux use the command: ./darknet detector train data/obj.data yolo-obj.cfg darknet53.conv.74 (just use ./darknet instead of darknet.exe)
- (the file yolo-obj_last.weights will be saved to build\darknet\x64\backup\ every 100 iterations)
- (the file yolo-obj_xxxx.weights will be saved to build\darknet\x64\backup\ every 1000 iterations)
- (to disable the Loss-Window run darknet.exe detector train data/obj.data yolo-obj.cfg darknet53.conv.74 -dont_show, if you train on a computer without a monitor, like a cloud Amazon EC2)
- (to see the mAP & Loss-chart during training on a remote server without GUI, use the command darknet.exe detector train data/obj.data yolo-obj.cfg darknet53.conv.74 -dont_show -mjpeg_port 8090 -map, then open the URL http://ip-address:8090 in a Chrome/Firefox browser)

8.1. For training with mAP (mean average precision) calculation every 4 Epochs (set valid=valid.txt or train.txt in the obj.data file), run: darknet.exe detector train data/obj.data yolo-obj.cfg darknet53.conv.74 -map

9. After training is complete - get the result yolo-obj_final.weights from the path build\darknet\x64\backup\
- After each 100 iterations you can stop and later resume training from this point. For example, after 2000 iterations you can stop training, and later just start training using: darknet.exe detector train data/obj.data yolo-obj.cfg backup\yolo-obj_2000.weights (in the original repository https://github.com/pjreddie/darknet the weights-file is saved only once every 10 000 iterations if(iterations > 1000))
- Also you can get a result earlier than after all 45000 iterations.
Note: if during training you see nan values in the avg (loss) field, then training has gone wrong; but if nan appears in some other lines, training is going well.
Note: if you changed width= or height= in your cfg-file, then the new width and height must be divisible by 32.
Note: after training, use this command for detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights
Note: if an Out of memory error occurs, then in the .cfg-file you should increase subdivisions=16, 32 or 64 (the GPU processes batch/subdivisions images at once, so a larger subdivisions value needs less GPU-RAM): link
Do all the same steps as for the full yolo model as described above, with the exception of:
- Get the pre-trained weights yolov3-tiny.conv.15 using the command: darknet.exe partial cfg/yolov3-tiny.cfg yolov3-tiny.weights yolov3-tiny.conv.15 15
- Make your custom model yolov3-tiny-obj.cfg based on cfg/yolov3-tiny_obj.cfg instead of yolov3.cfg
- Start training: darknet.exe detector train data/obj.data yolov3-tiny-obj.cfg yolov3-tiny.conv.15
For training Yolo based on other models (DenseNet201-Yolo or ResNet50-Yolo), you can download and get pre-trained weights as shown in this file: https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/partial.cmd
If you made a custom model that isn't based on other models, you can train it without pre-trained weights; random initial weights will then be used.

When should you stop training: usually 2000 iterations for each class (object) are sufficient, but not less than 4000 iterations in total. For a more precise definition of when to stop training, use the following guide:
Region Avg IOU: 0.798363, Class: 0.893232, Obj: 0.700808, No Obj: 0.004567, Avg Recall: 1.000000, count: 8
Region Avg IOU: 0.800677, Class: 0.892181, Obj: 0.701590, No Obj: 0.004574, Avg Recall: 1.000000, count: 8

9002: 0.211667, 0.60730 avg, 0.001000 rate, 3.868000 seconds, 576128 images
Loaded: 0.000000 seconds
When you see that the average loss 0.xxxxxx avg no longer decreases over many iterations, you should stop training. The final average loss can range from 0.05 (for a small model and an easy dataset) to 3.0 (for a big model and a difficult dataset).
Once training is stopped, take some of the last .weights-files from darknet\build\darknet\x64\backup and choose the best of them.

For example, you stopped training after 9000 iterations, but the best result may be given by one of the previous weights (7000, 8000, 9000). This can happen due to overfitting. Overfitting is the case when you can detect objects in images from the training dataset, but cannot detect objects in any other images. You should get weights from the Early Stopping Point:
(image: https://hsto.org/files/5dc/7ae/7fa/5dc7ae7fad9d4e3eb3a484c58bfc1ff5.png)
To get weights from the Early Stopping Point:

2.1. At first, in your file obj.data you must specify the path to the validation dataset valid = valid.txt (format of valid.txt as in train.txt), and if you don't have validation images, just copy data\train.txt to data\valid.txt.
2.2. If training is stopped after 9000 iterations, to validate some of the previous weights use these commands:

(If you use another GitHub repository, then use darknet.exe detector recall ... instead of darknet.exe detector map ...)
darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_8000.weights
darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_9000.weights
And compare the last output lines for each of the weights (7000, 8000, 9000):
Choose the weights-file with the highest mAP (mean average precision) or IoU (intersect over union).

For example, if yolo-obj_8000.weights gives the biggest mAP - then use these weights for detection.
Or just train with the -map flag:
darknet.exe detector train data/obj.data yolo-obj.cfg darknet53.conv.74 -map
So you will see the mAP-chart (red line) in the Loss-chart window. mAP will be calculated every 4 Epochs using the valid=valid.txt file specified in obj.data (1 Epoch = images_in_train_txt / batch iterations).

(To change the max x-axis value, change the max_batches= parameter to 2000*classes, f.e. max_batches=6000 for 3 classes.)
Example of custom object detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights
- IoU (intersect over union) - average intersect over union of objects and detections for a certain threshold = 0.24
- mAP (mean average precision) - mean value of average precisions for each class, where average precision is the average value of 11 points on the PR-curve for each possible threshold (each probability of detection) for the same class (Precision-Recall in terms of PascalVOC, where Precision=TP/(TP+FP) and Recall=TP/(TP+FN)), page 11: http://homepages.inf.ed.ac.uk/ckiw/postscript/ijcv_voc09.pdf

mAP is the default metric of precision in the PascalVOC competition; it is the same as the AP50 metric in the MS COCO competition. In terms of Wiki, the indicators Precision and Recall have a slightly different meaning than in the PascalVOC competition, but IoU always has the same meaning.
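In LaTeX notation, the 11-point interpolated average precision from the PascalVOC paper cited above, and the resulting mAP, are:

AP = \frac{1}{11} \sum_{r \in \{0,\,0.1,\,\dots,\,1.0\}} \max_{\tilde{r} \ge r} p(\tilde{r}), \qquad mAP = \frac{1}{N_{classes}} \sum_{c=1}^{N_{classes}} AP_c

where p(\tilde{r}) is the precision of the class at recall \tilde{r}.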
To calculate mAP (mean average precision) on PascalVOC 2007:
- get the file 2007_test.txt as described here: https://github.com/AlexeyAB/darknet#how-to-train-pascal-voc-data
- download voc_label_difficult.py to the dir build\darknet\x64\data\, then run voc_label_difficult.py to get the file difficult_2007_test.txt
- remove the # from this line to un-comment it: https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/data/voc.data#L4
- then run build/darknet/x64/calc_mAP_voc_py.cmd - you will get mAP for the yolo-voc.cfg model, mAP = 75.9%
- or run build/darknet/x64/calc_mAP.cmd - you will get mAP for the yolo-voc.cfg model, mAP = 75.8%

(The article specifies the value of mAP = 76.8% for YOLOv2 416×416, page 4, table 3: https://arxiv.org/pdf/1612.08242v1.pdf. We get lower values - perhaps because the model was trained on slightly different source code than the code used for detection.)

- if you would like to get mAP for the tiny-yolo-voc.cfg model, then un-comment the line for tiny-yolo-voc.cfg and comment the line for yolo-voc.cfg in the .cmd-file
- if you have Python 2.x instead of Python 3.x, then use reval_voc.py and voc_eval.py instead of reval_voc_py3.py and voc_eval_py3.py from this directory: https://github.com/AlexeyAB/darknet/tree/master/scripts

Example of custom object detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights
(image: https://hsto.org/files/727/c7e/5e9/727c7e5e99bf4d4aa34027bb6a5e4bab.jpg)
- set flag random=1 in your .cfg-file - it will increase precision by training Yolo at different resolutions: link
- increase the network resolution in your .cfg-file (height=608, width=608 or any value multiple of 32) - it will increase precision
- check that each object you want to detect is labeled in your dataset - no object in your dataset should be left without a label. In most training issues there are wrong labels in the dataset (labels obtained via some conversion script, marked with a third-party tool, ...). Always check your dataset by using: https://github.com/AlexeyAB/Yolo_mark
- my Loss is very high and mAP is very low - is training wrong? Run training with the -show_imgs flag at the end of the training command; do you see correct bounded boxes of objects (in windows or in files aug_...jpg)? If not, your training dataset is wrong.
- for each object which you want to detect, there must be at least 1 similar object in the Training dataset with about the same: shape, side of object, relative size, angle of rotation, tilt, and illumination. It is desirable that your training dataset includes images with objects at different: scales, rotations, lightings, from different sides, and on different backgrounds - you should preferably have 2000 different images for each class or more, and you should train for 2000*classes iterations or more
- it is desirable that your training dataset includes images with non-labeled objects that you do not want to detect - negative samples without bounded boxes (empty .txt files) - use as many images of negative samples as there are images with objects
What is the best way to mark objects: label only the visible part of the object, or label the visible and overlapped part of the object, or label a little more than the entire object (with a little gap)? Mark as you like - how would you like it to be detected.
- for training with a large number of objects in each image, add the parameter max=200 or a higher value in the last [yolo]-layer or [region]-layer in your cfg-file (the global maximum number of objects that can be detected by YoloV3 is 0.0615234375*(width*height), where width and height are parameters from the [net] section of the cfg-file - f.e. 10647 objects for a 416x416 network)
- for training for small objects (smaller than 16x16 after the image is resized to 416x416) - set layers = -1, 11 instead of https://github.com/AlexeyAB/darknet/blob/6390a5a2ab61a0bdf6f1a9a6b4a739c16b36e0d7/cfg/yolov3.cfg#L720 and set stride=4 instead of https://github.com/AlexeyAB/darknet/blob/6390a5a2ab61a0bdf6f1a9a6b4a739c16b36e0d7/cfg/yolov3.cfg#L717
- for training for both small and large objects, use modified models
- if you train the model to distinguish Left and Right objects as separate classes (left/right hand, left/right-turn on road signs, ...), then to disable flip data augmentation add flip=0 here: https://github.com/AlexeyAB/darknet/blob/3d2d0a7c98dbc8923d9ff705b81ff4f7940ea6ff/cfg/yolov3.cfg#L17
General rule - your training dataset should include such a set of relative sizes of objects that you want to detect:
train_network_width * train_obj_width / train_image_width ~= detection_network_width * detection_obj_width / detection_image_width
train_network_height * train_obj_height / train_image_height ~= detection_network_height * detection_obj_height / detection_image_height
I.e. for each object from Test dataset there must be at least 1 object in the Training dataset with the same class_id and about the same relative size:
object width in percent from Training dataset ~= object width in percent from Test dataset
That is, if only objects that occupied 80-90% of the image were present in the training set, then the trained network will not be able to detect objects that occupy 1-10% of the image.
- to speed up training (with decreased detection accuracy) do Fine-Tuning instead of Transfer-Learning: set the param stopbackward=1 here: https://github.com/AlexeyAB/darknet/blob/6d44529cf93211c319813c90e0c1adb34426abe5/cfg/yolov3.cfg#L548
then run this command: ./darknet partial cfg/yolov3.cfg yolov3.weights yolov3.conv.81 81 - the file yolov3.conv.81 will be created;
then train by using the weights file yolov3.conv.81 instead of darknet53.conv.74
- each: model of object, side, illumination, scale, and each 30 degrees of turn and inclination angle - these are different objects from the internal perspective of the neural network. So the more different objects you want to detect, the more complex a network model should be used.
- only if you are an expert in neural detection networks: recalculate the anchors for your dataset for the width and height from your cfg-file:
darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416
then set the same 9 anchors in each of the 3 [yolo]-layers in your cfg-file. But you should change the indices of anchors masks= for each [yolo]-layer, so that the 1st [yolo]-layer has anchors larger than 60x60, the 2nd larger than 30x30, and the 3rd the remaining ones. Also you should change filters=(classes + 5)*<number of mask> before each [yolo]-layer. If many of the calculated anchors do not fit under the appropriate layers, then just try using all the default anchors.
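A sketch of that mask-assignment rule (our own illustration; the 60x60 and 30x30 thresholds come from the paragraph above, and the anchor values are just the yolov3.cfg defaults):

// Assign each of the 9 anchors to a [yolo] layer by size, per the rule above.
#include <cstdio>

int main() {
    // (width, height) pairs as printed by: darknet detector calc_anchors ...
    float anchors[9][2] = {{10,13},{16,30},{33,23},{30,61},{62,45},
                           {59,119},{116,90},{156,198},{373,326}};
    for (int i = 0; i < 9; i++) {
        float w = anchors[i][0], h = anchors[i][1];
        const char* layer = (w > 60 && h > 60) ? "1st [yolo] layer (largest anchors)"
                          : (w > 30 && h > 30) ? "2nd [yolo] layer"
                          : "3rd [yolo] layer (remaining anchors)";
        std::printf("anchor %d = %gx%g -> %s\n", i, w, h, layer);
    }
    return 0;
}

If the resulting split is very uneven, that is exactly the case where "many of the calculated anchors do not fit under the appropriate layers", and the default anchors may work better.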
Increase the network resolution by setting in your .cfg-file (height=608 and width=608) or (height=832 and width=832) or (any value multiple of 32) - this increases precision and makes it possible to detect small objects: link
- you do not need to train the network again - just use the .weights-file already trained for 416x416 resolution
- if an Out of memory error occurs, then in the .cfg-file you should increase subdivisions=16, 32 or 64: link

Here you can find a repository with GUI-software for marking bounded boxes of objects and generating annotation files for Yolo v2 & v3: https://github.com/AlexeyAB/Yolo_mark
With examples of: train.txt, obj.names, obj.data, yolo-obj.cfg, air1-6.txt, bird1-4.txt for 2 classes of objects (air, bird), and train_obj.cmd with an example of how to train this image-set with Yolo v2 & v3
Different tools for marking objects in images:
Simultaneous detection and classification of 9000 objects: darknet.exe detector test cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights data/dog.jpg
- yolo9000.weights - (186 MB Yolo9000 model) requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo9000.weights
- yolo9000.cfg - cfg-file of Yolo9000, which also contains paths to the 9k.tree and coco9k.map: https://github.com/AlexeyAB/darknet/blob/617cf313ccb1fe005db3f7d88dec04a04bd97cc2/cfg/yolo9000.cfg#L217-L218
- 9k.tree - WordTree of 9418 categories; if parent_id == -1 then this label has no parent: https://raw.githubusercontent.com/AlexeyAB/darknet/master/build/darknet/x64/data/9k.tree
- coco9k.map - maps 80 categories from MSCOCO to the WordTree 9k.tree: https://raw.githubusercontent.com/AlexeyAB/darknet/master/build/darknet/x64/data/coco9k.map
- combine9k.data - data file with paths to: 9k.labels, 9k.names, inet9k.map (change the path to your combine9k.train.list): https://raw.githubusercontent.com/AlexeyAB/darknet/master/build/darknet/x64/data/combine9k.data
- 9k.labels - 9418 labels of objects: https://raw.githubusercontent.com/AlexeyAB/darknet/master/build/darknet/x64/data/9k.labels
- 9k.names - 9418 names of objects: https://raw.githubusercontent.com/AlexeyAB/darknet/master/build/darknet/x64/data/9k.names
- inet9k.map - maps 200 categories from ImageNet to the WordTree 9k.tree: https://raw.githubusercontent.com/AlexeyAB/darknet/master/build/darknet/x64/data/inet9k.map
How to use Yolo as DLL and SO libraries:
- on Linux: use build.sh, or build darknet using cmake, or set LIBSO=1 in the Makefile and do make
- on Windows: use build.ps1, or build darknet using cmake, or compile the build\darknet\yolo_cpp_dll.sln solution or the build\darknet\yolo_cpp_dll_no_gpu.sln solution

There are 2 APIs:
C API: https://github.com/AlexeyAB/darknet/blob/master/include/darknet.h
C++ API: https://github.com/AlexeyAB/darknet/blob/master/include/yolo_v2_class.hpp
To compile Yolo as a C++ DLL-file yolo_cpp_dll.dll: open the solution build\darknet\yolo_cpp_dll.sln, set x64 and Release, and do: Build -> Build yolo_cpp_dll
- to use cuDNN: (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and add at the beginning of the line: CUDNN;
To use Yolo as a DLL-file in your C++ console application: open the solution build\darknet\yolo_console_dll.sln, set x64 and Release, and do: Build -> Build yolo_console_dll
- you can run your console application from Windows Explorer: build\darknet\x64\yolo_console_dll.exe, or use this command: yolo_console_dll.exe data/coco.names yolov3.cfg yolov3.weights test.mp4
- after launching your console application and entering the image file name, you will see info for each object: object id, box coordinates and probability
- to use a simple OpenCV-GUI you should uncomment the line //#define OPENCV in the yolo_console_dll.cpp file: link
- you can see the source code of a simple example for detection on a video file: link
- yolo_cpp_dll.dll API: link
struct bbox_t {
unsigned int x, y, w, h; // (x,y) - top-left corner, (w, h) - width & height of bounded box
float prob; // confidence - probability that the object was found correctly
unsigned int obj_id; // class of object - from range [0, classes-1]
unsigned int track_id; // tracking id for video (0 - untracked, 1 - inf - tracked object)
unsigned int frames_counter;// counter of frames on which the object was detected
};
class Detector {
public:
Detector(std::string cfg_filename, std::string weight_filename, int gpu_id = 0);
~Detector();
std::vector<bbox_t> detect(std::string image_filename, float thresh = 0.2, bool use_mean = false);
std::vector<bbox_t> detect(image_t img, float thresh = 0.2, bool use_mean = false);
static image_t load_image(std::string image_filename);
static void free_image(image_t m);
#ifdef OPENCV
std::vector<bbox_t> detect(cv::Mat mat, float thresh = 0.2, bool use_mean = false);
std::shared_ptr<image_t> mat_to_image_resize(cv::Mat mat) const;
#endif
};
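A minimal usage sketch for this API (assuming the library was built as described above, yolo_v2_class.hpp is on the include path, and you link against the built darknet library; the file names are placeholders):

// Load a model with the C++ API and print every detection for one image.
#include "yolo_v2_class.hpp"
#include <iostream>
#include <vector>

int main() {
    Detector detector("cfg/yolov3.cfg", "yolov3.weights"); // gpu_id = 0 by default
    std::vector<bbox_t> result = detector.detect("data/dog.jpg", 0.2f);
    for (const bbox_t& b : result) {
        std::cout << "obj_id=" << b.obj_id << " prob=" << b.prob
                  << " box=(" << b.x << "," << b.y << " " << b.w << "x" << b.h << ")\n";
    }
    return 0;
}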