How to Use tensorflow-yolov4

tensorflow-yolov4-tflite

YOLOv4: Optimal Speed and Accuracy of Object Detection

Paper: https://arxiv.org/abs/2004.10934

Code: https://github.com/AlexeyAB/darknet

Abstract

There are a huge number of features which are said to improve the accuracy of convolutional neural networks (CNNs). Practical testing of combinations of such features on large datasets, and theoretical justification of the results, is required. Some features operate on certain models exclusively, for certain problems exclusively, or only for small-scale datasets; while some features, such as batch normalization and residual connections, are applicable to the majority of models, tasks, and datasets. We assume that such universal features include Weighted Residual Connections (WRC), Cross-Stage Partial connections (CSP), Cross mini-Batch Normalization (CmBN), Self-Adversarial Training (SAT), and the Mish activation. We use the following new features: WRC, CSP, CmBN, SAT, Mish activation, Mosaic data augmentation, CmBN, DropBlock regularization, and CIoU loss, and combine some of them to achieve state-of-the-art results: 43.5% AP (65.7% AP50) on the MS COCO dataset at a real-time speed of about 65 FPS on a Tesla V100.
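Mish, mentioned above, is easy to state in code. A minimal sketch in TensorFlow (my own illustration, not code from the paper or the repository):

import tensorflow as tf

def mish(x):
    # Mish(x) = x * tanh(softplus(x)): a smooth, non-monotonic activation
    return x * tf.math.tanh(tf.math.softplus(x))

# quick sanity check on a small tensor
print(mish(tf.constant([-1.0, 0.0, 1.0])))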

YOLOv4 implemented in TensorFlow 2.0. Converts YOLOv4, YOLOv3, and YOLO-tiny .weights files to .pb, .tflite, and TensorRT formats for TensorFlow, TensorFlow Lite, and TensorRT.

Download yolov4.weights file: https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT

Prerequisites

TensorFlow 2.1.0
tensorflow_addons 0.9.1 (required for the Mish activation)
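A quick way to check that the installed versions match (these are the versions the repository was tested with; newer releases may also work):

import tensorflow as tf
import tensorflow_addons as tfa

print("tensorflow:", tf.__version__)          # expected: 2.1.0
print("tensorflow_addons:", tfa.__version__)  # expected: 0.9.1, which provides the Mish activation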


Demo

yolov4

python detect.py --weights ./data/yolov4.weights --framework tf --size 608 --image ./data/kite.jpg

yolov4 tflite

python detect.py --weights ./data/yolov4-int8.tflite --framework tflite --size 416 --image ./data/kite.jpg
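detect.py hides the TFLite plumbing. Roughly, loading and running the converted model looks like the sketch below (preprocessing and box decoding are omitted; the 416x416 input size matches the command above, and the random input is only a shape/dtype check):

import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors
interpreter = tf.lite.Interpreter(model_path="./data/yolov4-int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input; in practice this is the resized, normalized image
image = np.random.rand(1, 416, 416, 3).astype(input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()

# Raw prediction tensors; detect.py decodes these into boxes, scores and classes
predictions = [interpreter.get_tensor(d["index"]) for d in output_details]
print([p.shape for p in predictions])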


Convert to tflite

# yolov4
python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4.tflite

# yolov4 quantize float16
python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4-fp16.tflite --quantize_mode float16

# yolov4 quantize int8
python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4-int8.tflite --quantize_mode full_int8 --dataset ./coco_dataset/coco/val207.txt
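convert_tflite.py is built on TensorFlow's TFLiteConverter. A rough sketch of the float16 case, assuming the network has already been exported as a SavedModel (the path below is illustrative, not the script's actual logic):

import tensorflow as tf

# Assumed SavedModel location; the real script rebuilds the model from .weights instead
saved_model_dir = "./checkpoints/yolov4-416"

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)

# Post-training float16 quantization: weights are stored as fp16
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

tflite_model = converter.convert()
with open("./data/yolov4-fp16.tflite", "wb") as f:
    f.write(tflite_model)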

Convert to TensorRT

# yolov3
python save_model.py --weights ./data/yolov3.weights --output ./checkpoints/yolov3.tf --input_size 416 --model yolov3
python convert_trt.py --weights ./checkpoints/yolov3.tf --quantize_mode float16 --output ./checkpoints/yolov3-trt-fp16-416

# yolov3-tiny
python save_model.py --weights ./data/yolov3-tiny.weights --output ./checkpoints/yolov3-tiny.tf --input_size 416 --tiny
python convert_trt.py --weights ./checkpoints/yolov3-tiny.tf --quantize_mode float16 --output ./checkpoints/yolov3-tiny-trt-fp16-416

# yolov4
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4.tf --input_size 416 --model yolov4
python convert_trt.py --weights ./checkpoints/yolov4.tf --quantize_mode float16 --output ./checkpoints/yolov4-trt-fp16-416
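convert_trt.py relies on TensorFlow's TF-TRT integration. A minimal sketch of an FP16 conversion from a SavedModel, assuming a TensorRT-enabled TensorFlow build (paths are illustrative and the script's exact options may differ):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

input_saved_model = "./checkpoints/yolov4.tf"
output_saved_model = "./checkpoints/yolov4-trt-fp16-416"

# Convert the SavedModel with FP16 precision and save the TensorRT-optimized graph
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(precision_mode="FP16")
converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model,
                                    conversion_params=params)
converter.convert()
converter.save(output_saved_model)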

Evaluate on COCO 2017 Dataset

# run the script in /script/get_coco_dataset_2017.sh to download the COCO 2017 dataset

# preprocess coco dataset
cd data
mkdir dataset
cd ..
cd scripts
python coco_convert.py --input ./coco/annotations/instances_val2017.json --output val2017.pkl
python coco_annotation.py --coco_path ./coco
cd ..

# evaluate yolov4 model
python evaluate.py --weights ./data/yolov4.weights
cd mAP/extra
python remove_space.py
cd ..
python main.py --output results_yolov4_tf

mAP50 on COCO 2017 Dataset

[Image: mAP50 results on the COCO 2017 dataset]
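mAP50 counts a detection as correct when its IoU with a ground-truth box is at least 0.5. A small standalone helper for IoU on [x1, y1, x2, y2] boxes (my own illustration, not taken from the repository's mAP scripts):

def iou(box_a, box_b):
    # Intersection-over-Union of two [x1, y1, x2, y2] boxes
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

# A prediction counts toward mAP50 when iou(prediction, ground_truth) >= 0.5
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # ~0.14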

Benchmark

python benchmarks.py --size 416 --model yolov4 --weights ./data/yolov4.weights
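benchmarks.py reports throughput; at its core, such a measurement is just timing repeated forward passes, roughly as in the sketch below (infer is a placeholder for whatever callable runs one forward pass, for example a wrapped Keras model or TFLite interpreter):

import time
import numpy as np

def measure_fps(infer, input_size=416, warmup=10, runs=100):
    # Average frames per second of `infer` on random input
    image = np.random.rand(1, input_size, input_size, 3).astype(np.float32)
    for _ in range(warmup):   # warm-up runs are excluded from timing
        infer(image)
    start = time.time()
    for _ in range(runs):
        infer(image)
    return runs / (time.time() - start)

# usage: fps = measure_fps(lambda x: model.predict(x))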

TensorRT performance

[Images: TensorRT performance benchmarks]

Training the model

# Prepare your dataset
# If you want to train from scratch: in config.py set FISRT_STAGE_EPOCHS=0

# Run script:
python train.py

# Transfer learning:
python train.py --weights ./data/yolov4.weights
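FISRT_STAGE_EPOCHS presumably controls a warm-up stage in which only part of the network is trained before the whole model is fine-tuned; setting it to 0 skips that stage when training from scratch. A generic Keras sketch of that two-stage idea (illustrative only; the layer-name test, losses, and learning rates are assumptions, not the repository's train.py):

import tensorflow as tf

def two_stage_train(model, train_ds, loss_fn,
                    first_stage_epochs=20, second_stage_epochs=30):
    # Stage 1: train only the detection head; backbone layers stay frozen
    for layer in model.layers:
        layer.trainable = "head" in layer.name  # illustrative naming convention
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss=loss_fn)
    model.fit(train_ds, epochs=first_stage_epochs)

    # Stage 2: unfreeze everything and fine-tune with a lower learning rate
    for layer in model.layers:
        layer.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss=loss_fn)
    model.fit(train_ds, epochs=second_stage_epochs)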

Training performance has not been fully reproduced yet, so it is recommended to train on your own data with AlexeyAB's Darknet and then convert the resulting .weights to TensorFlow or TFLite.

