How to Use YOLOv3
Adapted from @zhaonan
- How to use YOLOv3
- Installing darknet
- Training data in Pascal VOC format
- Modifying voc.data in the cfg folder
- Modifying voc.names
- Downloading the pretrained convolutional weights
- Modifying cfg/yolov3-voc.cfg
- Training your own model
- Testing the YOLO model
- Testing a single image
- Batch testing images
- Generating prediction results
- Computing mAP with third-party compute_mAP
- Advanced topics
- Reference
Installing darknet
- Clone the repository
git clone https://github.com/pjreddie/darknet
cd darknet
- Edit the Makefile
GPU=1 # 0 or 1
CUDNN=1 # 0 or 1
OPENCV=0 # 0 or 1
OPENMP=0
DEBUG=0
- Compile
make
- Download the pretrained model weights
wget https://pjreddie.com/media/files/yolov3.weights
- Run a quick test with the pretrained model
./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg
Training data in Pascal VOC format
- Generate the labels. Darknet does not read the VOC xml annotations directly; it needs one .txt label file per image (format: class_id x_center y_center width height, all normalized to [0, 1], which is exactly what the convert() function below produces). A worked numeric example is given at the end of this subsection.
Use voc_label.py (located in ./scripts); cat voc_label.py:
Four places need to be changed for your own dataset (marked by the comments below):
import xml.etree.ElementTree as ET
import pickle
import os
from os import listdir, getcwd
from os.path import join

sets=[('2007', 'train'), ('2007', 'val'), ('2007', 'test')]  # replace with your own dataset splits

classes = ["head", "eye", "nose"]  # change to your own classes

def convert(size, box):
    dw = 1./(size[0])
    dh = 1./(size[1])
    x = (box[0] + box[1])/2.0 - 1
    y = (box[2] + box[3])/2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x,y,w,h)

def convert_annotation(year, image_id):
    in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml'%(year, image_id))  # place the VOCdevkit dataset under the current directory
    out_file = open('VOCdevkit/VOC%s/labels/%s.txt'%(year, image_id), 'w')
    tree=ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)

    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult)==1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        bb = convert((w,h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')

wd = getcwd()

for year, image_set in sets:
    if not os.path.exists('VOCdevkit/VOC%s/labels/'%(year)):
        os.makedirs('VOCdevkit/VOC%s/labels/'%(year))
    image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt'%(year, image_set)).read().strip().split()
    list_file = open('%s_%s.txt'%(year, image_set), 'w')
    for image_id in image_ids:
        list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg\n'%(wd, year, image_id))
        convert_annotation(year, image_id)
    list_file.close()

os.system("cat 2007_train.txt 2007_val.txt > train.txt")  # concatenate the splits you want to use for training
wget https://pjreddie.com/media/files/voc_label.py
python voc_label.py
After running the script, the per-image label files are generated under VOCdevkit/VOC2007/labels/, and the image-list files appear in the scripts directory:
learner@learner-pc:~/darknet/scripts$ ls
2007_test.txt   dice_label.sh        imagenet_label.sh  VOCdevkit_original
2007_train.txt  gen_tactic.sh        train.txt          voc_label.py
2007_val.txt    get_coco_dataset.sh  VOCdevkit
Darknet needs a single txt file that lists all training images. The script above already writes train.txt; if you also use VOC2012, concatenate those lists as well:
cat 2007_train.txt 2007_val.txt 2012_*.txt > train.txt
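As a quick sanity check of the label format, the normalization done by convert() can be reproduced by hand. This is a minimal sketch; the image size and box coordinates are made-up illustration values, not taken from any dataset:
# Sanity check of the label format written by voc_label.py (illustrative values only)
def convert(size, box):
    dw, dh = 1. / size[0], 1. / size[1]
    x = (box[0] + box[1]) / 2.0 - 1          # box center x (pixels)
    y = (box[2] + box[3]) / 2.0 - 1          # box center y (pixels)
    w = box[1] - box[0]                      # box width  (pixels)
    h = box[3] - box[2]                      # box height (pixels)
    return (x * dw, y * dh, w * dw, h * dh)  # normalized to [0, 1]

# a 640x480 image with a box xmin=96, xmax=352, ymin=120, ymax=360
print(convert((640, 480), (96.0, 352.0, 120.0, 360.0)))
# -> (0.3484375, 0.4979..., 0.4, 0.5)
# so the label line written for class 0 would be:
# 0 0.3484375 0.4979... 0.4 0.5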
Modifying voc.data in the cfg folder
classes= 3 # change to your own number of classes
train = /home/learner/darknet/data/voc/train.txt # change to your own path, e.g. /home/learner/darknet/scripts/train.txt
valid = /home/learner/darknet/data/voc/2007_test.txt # change to your own path, e.g. /home/learner/darknet/scripts/2007_test.txt
names = /home/learner/darknet/data/voc.names # see the voc.names section below
backup = /home/learner/darknet/backup # change to your own path; trained weights are written here
Modifying voc.names
head # the classes to detect, one per line
eye
nose
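A common mistake is a mismatch between classes in voc.data and the entries in voc.names. The small Python sketch below checks that they agree before training; the voc.data path is assumed to be cfg/voc.data under the darknet directory, as in this guide, so adjust it to your layout:
# Check that "classes" in voc.data matches the number of entries in voc.names.
# The path below assumes the cfg/voc.data layout used in this guide.
def read_data_cfg(path):
    cfg = {}
    with open(path) as f:
        for line in f:
            line = line.split('#')[0]          # drop trailing comments
            if '=' in line:
                key, value = line.split('=', 1)
                cfg[key.strip()] = value.strip()
    return cfg

data = read_data_cfg('cfg/voc.data')
names = [n.strip() for n in open(data['names']) if n.strip()]
assert int(data['classes']) == len(names), (
    "classes=%s in voc.data but %d entries in voc.names" % (data['classes'], len(names)))
print("voc.data and voc.names are consistent:", names)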
Downloading the pretrained convolutional weights
wget https://pjreddie.com/media/files/darknet53.conv.74
Modifying cfg/yolov3-voc.cfg
[net]
# Testing
batch=64
subdivisions=32 # images processed per forward pass = batch/subdivisions; tune to your GPU memory, and increase subdivisions if memory runs out
# Training
# batch=64
# subdivisions=16
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.001
burn_in=1000
max_batches = 50200 # total number of training iterations
policy=steps
steps=40000,45000 # iterations at which the learning rate is decayed
scales=.1,.1
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
.....
[convolutional]
size=1
stride=1
pad=1
filters=24 # filters = 3 * (classes + 5); with 3 classes, 3*(3+5) = 24 (a patch sketch follows this cfg listing)
activation=linear
[yolo]
mask = 6,7,8
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=3 # change to your own number of classes
num=9
jitter=.3
ignore_thresh = .5
truth_thresh = 1
random=1
[route]
layers = -4
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[upsample]
stride=2
[route]
layers = -1, 61
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=24 # filters = 3 * (classes + 5); here 3*(3+5) = 24
activation=linear
[yolo]
mask = 3,4,5
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=3 # change to your own number of classes
num=9
jitter=.3
ignore_thresh = .5
truth_thresh = 1
random=1
[route]
layers = -4
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[upsample]
stride=2
[route]
layers = -1, 36
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=24 # filters = 3 * (classes + 5); here 3*(3+5) = 24
activation=linear
[yolo]
mask = 0,1,2
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=3 # change to your own number of classes
num=9
jitter=.3
ignore_thresh = .5
truth_thresh = 1
random=1
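The only cfg edits that depend on the number of classes are classes= in each [yolo] block and filters= in the [convolutional] block immediately before it (filters = 3 * (classes + 5), so 24 for 3 classes). The sketch below applies this rule automatically; it assumes the plain key=value formatting shown above, and the output file name is just an example:
# Sketch: set classes= in every [yolo] block and filters= in the
# [convolutional] block right before it, using filters = 3 * (classes + 5).
def patch_cfg(src, dst, num_classes):
    lines = open(src).read().splitlines()
    out = list(lines)
    last_filters = None                      # index of the most recent filters= line
    for i, line in enumerate(lines):
        s = line.strip()
        if s.startswith('filters='):
            last_filters = i
        elif s == '[yolo]':                  # the conv layer just before [yolo] is the detection head
            out[last_filters] = 'filters=%d' % (3 * (num_classes + 5))
        elif s.startswith('classes='):
            out[i] = 'classes=%d' % num_classes
    with open(dst, 'w') as f:
        f.write('\n'.join(out) + '\n')

patch_cfg('cfg/yolov3-voc.cfg', 'cfg/yolov3-voc-3cls.cfg', num_classes=3)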
Training your own model
1. Single-GPU training: ./darknet -i <gpu_id> detector train <data_cfg> <model_cfg> <weights>, for example:
./darknet detector train cfg/voc.data cfg/yolov3-voc.cfg darknet53.conv.74
2. Multi-GPU training: ./darknet detector train <data_cfg> <model_cfg> <weights> -gpus <gpu_list>, where the GPU list is comma-separated, e.g. 0,1,2,3:
./darknet detector train cfg/voc.data cfg/yolov3-voc.cfg darknet53.conv.74 -gpus 0,1,2,3
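To keep an eye on training progress, the running average loss can be collected from a saved log. This is a rough sketch under two assumptions: the training output was saved with something like ./darknet detector train ... 2>&1 | tee train.log, and the progress lines follow darknet's usual "<iter>: <loss>, <avg> avg, <lr> rate, ..." shape.
# Rough sketch: extract the running average loss from a saved darknet training log.
avg_losses = []
with open('train.log') as f:
    for line in f:
        if ' avg,' in line:                              # darknet progress lines contain "... avg,"
            fields = line.split(',')
            try:
                avg_losses.append(float(fields[1].replace('avg', '').strip()))
            except (IndexError, ValueError):
                pass                                      # skip lines that do not parse
print('parsed %d iterations; last avg loss: %s' % (len(avg_losses), avg_losses[-1:]))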
Testing the YOLO model
Testing a single image:
- To test a single image and display the result in a window, darknet must be compiled with OpenCV support:
./darknet detector test <data_cfg> <model_cfg> <weights> <image_file>
# this run was done without OpenCV support, so the result image is saved to disk instead of displayed
- In yolov3-voc.cfg, both batch and subdivisions must be set to 1.
- At test time you can also pass the -thresh and -hier options to set the corresponding thresholds.
./darknet detector test cfg/voc.data cfg/yolov3-voc.cfg backup/yolov3-voc_20000.weights Eminem.jpg
Batch testing images
- In yolov3-voc.cfg (under the cfg folder), both batch and subdivisions must be set to 1.
- Add the required headers in detector.c:
#include <unistd.h>   /* Many POSIX functions (but not all, by a large margin) */
#include <fcntl.h>    /* open(), creat() - and fcntl() */
- Add a char *GetFilename(char *p) function near the top of the file:
#include "darknet.h"
#include <unistd.h>     // added header, for access()
#include <sys/stat.h>   // added header, for mkdir()
#include <sys/types.h>  // added header
#include <string.h>     // added header, for strrchr()/strncpy()

static int coco_ids[] = {1,2,3,4,5,6,7,8,9,10,11,13,14,15,16,17,18,19,20,21,22,23,24,25,27,28,31,32,33,34,35,36,37,38,39,40,41,42,43,44,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,67,70,72,73,74,75,76,77,78,79,80,81,82,84,85,86,87,88,89,90};

char *GetFilename(char *p)
{
    static char name[30] = {""};    // returned file name; must be shorter than 30 characters
    char *q = strrchr(p, '/') + 1;  // point just past the last '/'
    strncpy(name, q, 20);           // copy at most 20 characters of the file name
    return name;
}
- Replace the void test_detector function in detector.c (under the examples folder) with the code below (note: three paths have to be changed to your own).
void test_detector(char *datacfg, char *cfgfile, char *weightfile, char *filename, float thresh, float hier_thresh, char *outfile, int fullscreen)
{
    list *options = read_data_cfg(datacfg);
    char *name_list = option_find_str(options, "names", "data/names.list");
    char **names = get_labels(name_list);

    image **alphabet = load_alphabet();
    network *net = load_network(cfgfile, weightfile, 0);
    set_batch_network(net, 1);
    srand(2222222);
    double time;
    char buff[256];
    char *input = buff;
    float nms=.45;
    int i=0;
    while(1){
        if(filename){
            strncpy(input, filename, 256);
            image im = load_image_color(input,0,0);
            image sized = letterbox_image(im, net->w, net->h);
            //image sized = resize_image(im, net->w, net->h);
            //image sized2 = resize_max(im, net->w);
            //image sized = crop_image(sized2, -((net->w - sized2.w)/2), -((net->h - sized2.h)/2), net->w, net->h);
            //resize_network(net, sized.w, sized.h);
            layer l = net->layers[net->n-1];

            float *X = sized.data;
            time=what_time_is_it_now();
            network_predict(net, X);
            printf("%s: Predicted in %f seconds.\n", input, what_time_is_it_now()-time);
            int nboxes = 0;
            detection *dets = get_network_boxes(net, im.w, im.h, thresh, hier_thresh, 0, 1, &nboxes);
            //printf("%d\n", nboxes);
            //if (nms) do_nms_obj(boxes, probs, l.w*l.h*l.n, l.classes, nms);
            if (nms) do_nms_sort(dets, nboxes, l.classes, nms);
            draw_detections(im, dets, nboxes, thresh, names, alphabet, l.classes);
            free_detections(dets, nboxes);
            if(outfile)
            {
                save_image(im, outfile);
            }
            else{
                save_image(im, "predictions");
#ifdef OPENCV
                cvNamedWindow("predictions", CV_WINDOW_NORMAL);
                if(fullscreen){
                    cvSetWindowProperty("predictions", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
                }
                show_image(im, "predictions",0);
                cvWaitKey(0);
                cvDestroyAllWindows();
#endif
            }
            free_image(im);
            free_image(sized);
            if (filename) break;
        }
        else {
            printf("Enter Image Path: ");
            fflush(stdout);
            input = fgets(input, 256, stdin);
            if(!input) return;
            strtok(input, "\n");

            list *plist = get_paths(input);
            char **paths = (char **)list_to_array(plist);
            printf("Start Testing!\n");
            int m = plist->size;
            if(access("/home/learner/darknet/data/outv3tiny_dpj",0)==-1) // change "/home/learner/darknet/data" to your own path
            {
                if (mkdir("/home/learner/darknet/data/outv3tiny_dpj",0777)) // change "/home/learner/darknet/data" to your own path
                {
                    printf("create output folder failed!!!");
                }
            }
            for(i = 0; i < m; ++i){
                char *path = paths[i];
                image im = load_image_color(path,0,0);
                image sized = letterbox_image(im, net->w, net->h);
                //image sized = resize_image(im, net->w, net->h);
                //image sized2 = resize_max(im, net->w);
                //image sized = crop_image(sized2, -((net->w - sized2.w)/2), -((net->h - sized2.h)/2), net->w, net->h);
                //resize_network(net, sized.w, sized.h);
                layer l = net->layers[net->n-1];

                float *X = sized.data;
                time=what_time_is_it_now();
                network_predict(net, X);
                printf("Try Very Hard:");
                printf("%s: Predicted in %f seconds.\n", path, what_time_is_it_now()-time);
                int nboxes = 0;
                detection *dets = get_network_boxes(net, im.w, im.h, thresh, hier_thresh, 0, 1, &nboxes);
                //printf("%d\n", nboxes);
                //if (nms) do_nms_obj(boxes, probs, l.w*l.h*l.n, l.classes, nms);
                if (nms) do_nms_sort(dets, nboxes, l.classes, nms);
                draw_detections(im, dets, nboxes, thresh, names, alphabet, l.classes);
                free_detections(dets, nboxes);
                if(outfile){
                    save_image(im, outfile);
                }
                else{
                    char b[2048];
                    sprintf(b,"/home/learner/darknet/data/outv3tiny_dpj/%s",GetFilename(path)); // change "/home/learner/darknet/data" to your own path
                    save_image(im, b);
                    printf("save %s successfully!\n",GetFilename(path));
                    /*
#ifdef OPENCV
                    //cvNamedWindow("predictions", CV_WINDOW_NORMAL);
                    if(fullscreen){
                        cvSetWindowProperty("predictions", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
                    }
                    //show_image(im, "predictions");
                    //cvWaitKey(0);
                    //cvDestroyAllWindows();
#endif*/
                }
                free_image(im);
                free_image(sized);
                if (filename) break;
            }
        }
    }
}
- Recompile
make clean
make
- Start batch testing
./darknet detector test cfg/voc.data cfg/yolov3-voc.cfg backup/yolov3-voc_20000.weights
- At the "Enter Image Path:" prompt, enter the path of the txt file that lists all test images (you can copy the path after valid in voc.data):
/home/learner/darknet/data/voc/2007_test.txt # full path
- All result images are saved in the output folder under ./data created by the code above (./data/outv3tiny_dpj in this example).
Generating prediction results
./darknet detector valid <data_cfg> <model_cfg> <weights>
- In yolov3-voc.cfg (under the cfg folder), both batch and subdivisions must be set to 1.
- The results are written to the directory given by the results entry of the data cfg, in files whose names begin with the output prefix (comp4_det_test_ by default); if results is not set, the output goes to the results/ directory under the darknet root.
- Run the command below. The terminal only reports the time taken; the detections are saved to ./results/comp4_det_test_[class_name].txt (a small sketch for inspecting these files follows the command).
./darknet detector valid cfg/voc.data cfg/yolov3-voc.cfg backup/yolov3-voc_20000.weights
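Each of those files holds one detection per line, and in darknet's VOC output the lines are expected to have the form image_id confidence xmin ymin xmax ymax in pixel coordinates. A minimal sketch for peeking at one class file; the file name assumes the "head" class used in this guide:
# Peek at the first few detections written by `./darknet detector valid`.
# Expected line format: <image_id> <confidence> <xmin> <ymin> <xmax> <ymax>
with open('results/comp4_det_test_head.txt') as f:
    for line in list(f)[:5]:
        image_id, conf, xmin, ymin, xmax, ymax = line.split()
        print(image_id, float(conf), [float(v) for v in (xmin, ymin, xmax, ymax)])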
Computing mAP with third-party compute_mAP
Clone the third-party repository:
git clone https://github.com/LianjiLi/yolo-compute-map.git
Make the following modifications:
- In validate_detector() in darknet/examples/detector.c:
char *valid_images = option_find_str(options, "valid", "./data/2007_test.txt"); // change to your own test-list path
if(!outfile) outfile = "comp4_det_test_";
fps = calloc(classes, sizeof(FILE *));
for(j = 0; j < classes; ++j){
    snprintf(buff, 1024, "%s/%s.txt", prefix, names[j]); // remove the outfile argument and its matching %s from the format string
    fps[j] = fopen(buff, "w");
- Recompile:
make clean
make
- Run valid from the darknet folder (change to your own model paths):
./darknet detector valid cfg/voc.data cfg/yolov3-tiny.cfg backup/yolov3-tiny_164000.weights
- Then run, inside the yolo-compute-map folder:
python compute_mAP.py
- Note: the test.txt file read by compute_mAP.py contains only bare image names, without absolute paths and without extensions; a small sketch for generating it follows.
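A minimal sketch for producing that test.txt from the full-path image list generated by voc_label.py; the input path is the example path used in this guide, so adjust both paths to your own layout:
# Build the bare-name test.txt expected by compute_mAP.py from the
# full-path image list written by voc_label.py.
import os

with open('/home/learner/darknet/scripts/2007_test.txt') as f:
    paths = [p.strip() for p in f if p.strip()]

with open('test.txt', 'w') as out:
    for p in paths:
        out.write(os.path.splitext(os.path.basename(p))[0] + '\n')   # drop directory and extension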
Advanced topics
Visualizing darknet's shallow-layer features: https://www.cnblogs.com/pprp/p/10146355.html
Optimization tips collected from AlexeyAB: https://www.cnblogs.com/pprp/p/10204480.html
Using darknet for image classification: https://www.cnblogs.com/pprp/p/10342335.html
A darknet loss-visualization tool: https://www.cnblogs.com/pprp/p/10248436.html
How to redesign and modify the YOLO network structure: https://pprp.github.io/2018/09/20/tricks.html
A detailed summary of YOLO improvements: https://pprp.github.io/2018/06/20/yolo.html
PS: the notes above were collected during my own research and may not be fully systematic; feel free to leave a comment for discussion.
Reference
YOLOv3 object detection summary
Official website
Notes organized from @zhaonan
Please credit the author _ when reposting, and leave a comment if you have any questions.