YOLO is a well-known object detection algorithm. Its full name, You Only Look Once: Unified, Real-Time Object Detection, spells out its key characteristics:
- Look Once: a one-stage (one-shot) detector that performs both detection tasks, classification and localization, in a single pass.
- Unified: a single unified architecture that supports end-to-end training and prediction.
- Real-Time: real-time performance; the original paper reports 45 FPS at 63.4 mAP.
YOLOv4: Optimal Speed and Accuracy of Object Detection was published in April 2020 and adopts many of the best CNN optimization tricks of recent years. It balances accuracy and speed and, at the time of writing, is the most accurate of the real-time object detectors.
Papers:
- YOLO: https://arxiv.org/abs/1506.02640
- YOLO v4: https://arxiv.org/abs/2004.10934
Source code:
- YOLO: https://github.com/pjreddie/darknet
- YOLO v4: https://github.com/AlexeyAB/darknet
This article shows how to build and use the official Darknet implementation of YOLOv4 with Docker, and how to pick a subset of objects from the MS COCO 2017 dataset and train a model on them.
Main contents:
- Prepare the Docker image
- Prepare the COCO dataset
- Inference with the pretrained model
- Prepare a COCO data subset
- Train your own model and run inference
- References
Prepare the Docker Image
First, set up Docker itself; see: Docker: Nvidia Driver, Nvidia Docker Recommended Installation Steps.
Then prepare the images, layered from bottom to top as follows:
- nvidia/cuda: https://hub.docker.com/r/nvidia/cuda
- OpenCV: https://github.com/opencv/opencv
- Darknet: https://github.com/AlexeyAB/darknet
nvidia/cuda
Prepare the Nvidia base CUDA image. We choose CUDA 10.2 here rather than the latest CUDA 11, since PyTorch and other frameworks still ship against 10.2.
Pull the image:
docker pull nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04
Test the image:
$ docker run --gpus all nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 nvidia-smi
Sun Aug 8 00:00:00 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100 Driver Version: 440.100 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 208... Off | 00000000:07:00.0 On | N/A |
| 0% 48C P8 14W / 300W | 340MiB / 11016MiB | 2% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 208... Off | 00000000:08:00.0 Off | N/A |
| 0% 45C P8 19W / 300W | 1MiB / 11019MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
OpenCV
Build the OpenCV image on top of the nvidia/cuda image:
cd docker/ubuntu18.04-cuda10.2/opencv4.4.0/
docker build \
-t joinaero/ubuntu18.04-cuda10.2:opencv4.4.0 \
--build-arg opencv_ver=4.4.0 \
--build-arg opencv_url=https://gitee.com/cubone/opencv.git \
--build-arg opencv_contrib_url=https://gitee.com/cubone/opencv_contrib.git \
.
The Dockerfile is available here: https://github.com/ikuokuo/start-yolov4/blob/master/docker/ubuntu18.04-cuda10.2/opencv4.4.0/Dockerfile .
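If you want to confirm the resulting image really contains an OpenCV 4.4.0 build with CUDA support, a quick check from inside a container is sketched below. It assumes the image also installs OpenCV's Python bindings; if it does not, the C++ cv::getBuildInformation() reports the same information.

```python
# Run inside the image, e.g.:
#   docker run --rm -it --gpus all joinaero/ubuntu18.04-cuda10.2:opencv4.4.0 python3
# Assumes the Python bindings were built into the image.
import cv2

print(cv2.__version__)  # expect 4.4.0
build = cv2.getBuildInformation()
# The build report contains a line such as "NVIDIA CUDA: YES (...)" when CUDA support was compiled in.
print([line.strip() for line in build.splitlines() if "CUDA" in line][:3])
```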
Darknet
Build the Darknet image on top of the OpenCV image:
cd docker/ubuntu18.04-cuda10.2/opencv4.4.0/darknet/
docker build \
-t joinaero/ubuntu18.04-cuda10.2:opencv4.4.0-darknet \
.
The Dockerfile is available here: https://github.com/ikuokuo/start-yolov4/blob/master/docker/ubuntu18.04-cuda10.2/opencv4.4.0/darknet/Dockerfile .
These images have been pushed to Docker Hub. If your Nvidia driver supports CUDA 10.2, you can pull the final image directly:
docker pull joinaero/ubuntu18.04-cuda10.2:opencv4.4.0-darknet
Prepare the COCO Dataset
MS COCO 2017 download page: http://cocodataset.org/#download (a small download sketch in Python follows the lists below).
Images:
- 2017 Train images [118K/18GB]
- http://images.cocodataset.org/zips/train2017.zip
- 2017 Val images [5K/1GB]
- http://images.cocodataset.org/zips/val2017.zip
- 2017 Test images [41K/6GB]
- http://images.cocodataset.org/zips/test2017.zip
- 2017 Unlabeled images [123K/19GB]
- http://images.cocodataset.org/zips/unlabeled2017.zip
Annotations:
- 2017 Train/Val annotations [241MB]
- http://images.cocodataset.org/annotations/annotations_trainval2017.zip
- 2017 Stuff Train/Val annotations [1.1GB]
- http://images.cocodataset.org/annotations/stuff_annotations_trainval2017.zip
- 2017 Panoptic Train/Val annotations [821MB]
- http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip
- 2017 Testing Image info [1MB]
- http://images.cocodataset.org/annotations/image_info_test2017.zip
- 2017 Unlabeled Image info [4MB]
- http://images.cocodataset.org/annotations/image_info_unlabeled2017.zip
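For this article only the train2017 and val2017 images plus the Train/Val instances annotations are actually consumed (test2017 is used just once, for the pretrained-model demo). Below is a minimal download sketch in Python, as an alternative to a browser or wget; COCO_DIR mirrors the host path used in the docker commands later on.

```python
import urllib.request
import zipfile
from pathlib import Path

# Same host path that is later bind-mounted into the container.
COCO_DIR = Path.home() / "Codes/devel/datasets/coco2017"
URLS = [
    "http://images.cocodataset.org/zips/train2017.zip",
    "http://images.cocodataset.org/zips/val2017.zip",
    "http://images.cocodataset.org/annotations/annotations_trainval2017.zip",
    # Optionally add http://images.cocodataset.org/zips/test2017.zip for the pretrained-model demo.
]

COCO_DIR.mkdir(parents=True, exist_ok=True)
for url in URLS:
    zip_path = COCO_DIR / url.rsplit("/", 1)[-1]
    if not zip_path.exists():  # the archives are large (up to ~18 GB), so skip finished downloads
        print("downloading", url)
        urllib.request.urlretrieve(url, zip_path)
    with zipfile.ZipFile(zip_path) as zf:  # unpacks train2017/, val2017/ and annotations/
        zf.extractall(COCO_DIR)
```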
Inference with the Pretrained Model
Pretrained model: yolov4.weights, download from https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights .
Run the image:
xhost +local:docker
docker run -it --gpus all \
-e DISPLAY \
-e QT_X11_NO_MITSHM=1 \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $HOME/.Xauthority:/root/.Xauthority \
--name darknet \
--mount type=bind,source=$HOME/Codes/devel/datasets/coco2017,target=/home/coco2017 \
--mount type=bind,source=$HOME/Codes/devel/models/yolov4,target=/home/yolov4 \
joinaero/ubuntu18.04-cuda10.2:opencv4.4.0-darknet
Run inference:
./darknet detector test cfg/coco.data cfg/yolov4.cfg /home/yolov4/yolov4.weights \
-thresh 0.25 -ext_output -show -out /home/coco2017/result.json \
/home/coco2017/test2017/000000000001.jpg
Inference result:
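The -out option above also writes the detections to result.json. Below is a minimal sketch for reading that file back on the host; the field names (objects, name, confidence, relative_coordinates) follow the JSON that AlexeyAB's Darknet emits, but treat them as an assumption and adjust to what your build actually produces.

```python
import json

# result.json was written to /home/coco2017 inside the container,
# i.e. $HOME/Codes/devel/datasets/coco2017/result.json on the host.
with open("result.json") as f:
    results = json.load(f)  # a list with one entry per processed image

for entry in results:
    print(entry.get("filename"))
    for obj in entry.get("objects", []):
        box = obj.get("relative_coordinates", {})
        print(f"  {obj.get('name')} "
              f"conf={obj.get('confidence', 0.0):.2f} "
              f"center=({box.get('center_x')}, {box.get('center_y')})")
```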
Prepare a COCO Data Subset
The MS COCO 2017 dataset has 80 object labels. We pick only the objects we care about and reassemble them into a subset.
First, get the sample code:
git clone https://github.com/ikuokuo/start-yolov4.git
- scripts/coco2yolo.py: converts COCO annotations into a YOLO dataset (the core of the conversion is sketched after this list)
- scripts/coco/label.py: lists the object labels available in the COCO dataset
- cfg/coco/coco.names: edit this file to keep only the object labels you want
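The heart of that conversion is turning COCO's absolute [x_min, y_min, width, height] boxes into YOLO's one-line-per-object format: a class index followed by the box center and size, all normalized by the image dimensions. A minimal sketch of the idea (my own illustration, not the actual script):

```python
def coco_bbox_to_yolo_line(class_index, bbox, img_w, img_h):
    """COCO bbox [x_min, y_min, w, h] in pixels -> 'class cx cy w h' normalized to [0, 1]."""
    x_min, y_min, w, h = bbox
    cx = (x_min + w / 2) / img_w
    cy = (y_min + h / 2) / img_h
    return f"{class_index} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}"

# Example: a 100x50 box at (200, 150) in a 640x480 image, class 0
print(coco_bbox_to_yolo_line(0, [200, 150, 100, 50], 640, 480))
# -> 0 0.390625 0.364583 0.156250 0.104167
```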
Then prepare the dataset:
cd start-yolov4/
pip install -r scripts/requirements.txt
export COCO_DIR=$HOME/Codes/devel/datasets/coco2017
# train
python scripts/coco2yolo.py \
--coco_img_dir $COCO_DIR/train2017/ \
--coco_ann_file $COCO_DIR/annotations/instances_train2017.json \
--yolo_names_file ./cfg/coco/coco.names \
--output_dir ~/yolov4/coco2017/ \
--output_name train2017 \
--output_img_prefix /home/yolov4/coco2017/train2017/
# valid
python scripts/coco2yolo.py \
--coco_img_dir $COCO_DIR/val2017/ \
--coco_ann_file $COCO_DIR/annotations/instances_val2017.json \
--yolo_names_file ./cfg/coco/coco.names \
--output_dir ~/yolov4/coco2017/ \
--output_name val2017 \
--output_img_prefix /home/yolov4/coco2017/val2017/
The resulting dataset looks like this (a quick validation sketch follows the tree):
~/yolov4/coco2017/
├── train2017/
│ ├── 000000000071.jpg
│ ├── 000000000071.txt
│ ├── ...
│ ├── 000000581899.jpg
│ └── 000000581899.txt
├── train2017.txt
├── val2017/
│ ├── 000000001353.jpg
│ ├── 000000001353.txt
│ ├── ...
│ ├── 000000579818.jpg
│ └── 000000579818.txt
└── val2017.txt
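Before training, it can be worth a quick sanity check that every listed image has a label file and that each label line is a normalized "class cx cy w h" record. A minimal sketch, assuming the layout above; note that train2017.txt stores container paths (/home/yolov4/...), so run it inside the container or rewrite the prefix first.

```python
from pathlib import Path

def check_yolo_dataset(list_file):
    """Light validation of a YOLO image list (train2017.txt / val2017.txt) and its label files."""
    for line in Path(list_file).read_text().splitlines():
        img = Path(line.strip())
        label = img.with_suffix(".txt")
        if not label.exists():
            print("missing label:", label)
            continue
        for row in label.read_text().splitlines():
            fields = row.split()
            assert len(fields) == 5, f"bad line in {label}: {row}"
            assert all(0.0 <= float(v) <= 1.0 for v in fields[1:]), f"unnormalized box in {label}"

check_yolo_dataset("/home/yolov4/coco2017/train2017.txt")
check_yolo_dataset("/home/yolov4/coco2017/val2017.txt")
```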
Train Your Own Model and Run Inference
Prepare the Required Files
- cfg/coco/coco.names
  - Edit: keep only the desired object labels
- cfg/coco/yolov4.cfg
  - Download yolov4.cfg, then change (a small helper for these numbers is sketched after this list):
    - batch=64, subdivisions=32 <32 for 8-12 GB GPU-VRAM>
    - width=512, height=512
    - classes=<number of objects>, in each of the 3 [yolo] layers
    - max_batches=<classes*2000, but not less than the number of training images and not less than 6000>
    - steps=<80% and 90% of max_batches>
    - filters=<(classes+5)x3>, in the 3 [convolutional] before each [yolo] layer
    - filters=<(classes+9)x3>, in the 3 [convolutional] before each [Gaussian_yolo] layer
- cfg/coco/coco.data
  - Edit: train, valid to point at the YOLO dataset lists
  - Edit: classes, names, backup as needed
- csdarknet53-omega.conv.105
  - Download csdarknet53-omega_final.weights, then run:
    docker run -it --rm --gpus all \
      --mount type=bind,source=$HOME/Codes/devel/models/yolov4,target=/home/yolov4 \
      joinaero/ubuntu18.04-cuda10.2:opencv4.4.0-darknet \
      ./darknet partial cfg/csdarknet53-omega.cfg /home/yolov4/csdarknet53-omega_final.weights /home/yolov4/csdarknet53-omega.conv.105 105
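To make the arithmetic above concrete, here is a small helper of my own (not part of the repository) that prints the values to put into yolov4.cfg for a given number of classes:

```python
def yolov4_cfg_values(num_classes, num_train_images=0):
    """Compute the yolov4.cfg fields listed above (a sketch, following the rules in the list)."""
    max_batches = max(num_classes * 2000, num_train_images, 6000)
    steps = f"{int(max_batches * 0.8)},{int(max_batches * 0.9)}"
    return {
        "classes": num_classes,
        "max_batches": max_batches,
        "steps": steps,
        "filters before [yolo]": (num_classes + 5) * 3,
        "filters before [Gaussian_yolo]": (num_classes + 9) * 3,
    }

# Example: a single-class subset
# -> classes=1, max_batches=6000, steps=4800,5400, filters=18 (yolo) / 30 (Gaussian_yolo)
print(yolov4_cfg_values(1))
```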
Train Your Own Model
Run the image:
cd start-yolov4/
xhost +local:docker
docker run -it --gpus all \
-e DISPLAY \
-e QT_X11_NO_MITSHM=1 \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $HOME/.Xauthority:/root/.Xauthority \
--name darknet \
--mount type=bind,source=$HOME/Codes/devel/models/yolov4,target=/home/yolov4 \
--mount type=bind,source=$HOME/yolov4/coco2017,target=/home/yolov4/coco2017 \
--mount type=bind,source=$PWD/cfg/coco,target=/home/cfg \
joinaero/ubuntu18.04-cuda10.2:opencv4.4.0-darknet
Start training:
mkdir -p /home/yolov4/coco2017/backup
# Training command
./darknet detector train /home/cfg/coco.data /home/cfg/yolov4.cfg /home/yolov4/csdarknet53-omega.conv.105 -map
Training can be interrupted and resumed later like this:
# Continue training
./darknet detector train /home/cfg/coco.data /home/cfg/yolov4.cfg /home/yolov4/coco2017/backup/yolov4_last.weights -map
yolov4_last.weights is written to the backup directory every 100 iterations.
For multi-GPU training, train on a single GPU for about 1000 iterations first, then continue with the -gpus 0,1 option:
# How to train with multi-GPU
# 1. Train it first on 1 GPU for like 1000 iterations
# 2. Then stop and by using partially-trained model `/backup/yolov4_1000.weights` run training with multigpu
./darknet detector train /home/cfg/coco.data /home/cfg/yolov4.cfg /home/yolov4/coco2017/backup/yolov4_1000.weights -gpus 0,1 -map
The training progress is charted as follows:
With the -map option, the chart above also plots mAP as a red line.
Check the model's accuracy at mAP@IoU=0.50:
$ ./darknet detector map /home/cfg/coco.data /home/cfg/yolov4.cfg /home/yolov4/coco2017/backup/yolov4_final.weights
...
Loading weights from /home/yolov4/coco2017/backup/yolov4_final.weights...
seen 64, trained: 384 K-images (6 Kilo-batches_64)
Done! Loaded 162 layers from weights-file
calculation mAP (mean average precision)...
Detection layer: 139 - type = 27
Detection layer: 150 - type = 27
Detection layer: 161 - type = 27
160
detections_count = 745, unique_truth_count = 190
class_id = 0, name = train, ap = 80.61% (TP = 142, FP = 18)
for conf_thresh = 0.25, precision = 0.89, recall = 0.75, F1-score = 0.81
for conf_thresh = 0.25, TP = 142, FP = 18, FN = 48, average IoU = 75.31 %
IoU threshold = 50 %, used Area-Under-Curve for each unique Recall
mean average precision (mAP@0.50) = 0.806070, or 80.61 %
Total Detection Time: 4 Seconds
Run inference:
./darknet detector test /home/cfg/coco.data /home/cfg/yolov4.cfg /home/yolov4/coco2017/backup/yolov4_final.weights \
-ext_output -show /home/yolov4/coco2017/val2017/000000006040.jpg
Inference result:
References
- Train Detector on MS COCO (trainvalno5k 2014) dataset
- How to evaluate accuracy and speed of YOLOv4
- How to train (to detect your custom objects)
Conclusion
Why Docker? A Docker image captures the whole environment, which greatly simplifies deployment; PyTorch, for example, also ships images you can use right away.
What else about Darknet? Next time I will cover building Darknet on Ubuntu and using its Python API.
Let's go coding ~