The Complete YOLO Series Collection

Everything you could want is here: in-depth paper readings, code walkthroughs, and pretrained weights!

This article is a compilation of the author's personal study notes. If anything here infringes your rights, please notify the author and it will be removed!

YOLOv1: You Only Look Once: Unified, Real-Time Object Detection (the pioneering work)

Paper: https://arxiv.org/abs/1506.02640
Code: https://github.com/pjreddie/darknet
About Darknet framework: http://pjreddie.com/darknet/

In-depth readings:
https://blog.csdn.net/luke_sanjayzzzhong/article/details/90768245
https://blog.csdn.net/c20081052/article/details/80236015

YOLOv2: YOLO9000: Better, Faster, Stronger

Paper: https://arxiv.org/abs/1612.08242

In-depth readings:
https://zengdiqing.blog.csdn.net/article/details/85274711

YOLOv3: An Incremental Improvement

Paper: https://arxiv.org/abs/1804.02767
Code: https://github.com/ultralytics/yolov3 (Ultralytics' Python reimplementation of the original YOLOv3)

In-depth readings:
https://muzhan.blog.csdn.net/article/details/82660381
https://blog.csdn.net/qq_34199326/article/details/84109828

YOLOv4: Optimal Speed and Accuracy of Object Detection

Paper: https://arxiv.org/abs/2004.10934
Code: https://github.com/AlexeyAB/darknet (C version; the Russian expert AlexeyAB took over the repository from YOLO's creator Joseph Redmon, working with the YOLOv4 authors; a further evolution of the original YOLOv3)
https://github.com/WongKinYiu/PyTorch_YOLOv4 (the YOLOv4 authors' updated port of Ultralytics' Python YOLOv3)
Incorporates CSPNet, published by the YOLOv4 authors: https://arxiv.org/abs/1911.11929 (personally, I feel the CSPDarknet53 design in v4 deviates somewhat from this paper; see the sketch after the readings below)

In-depth readings:
https://ai-wx.blog.csdn.net/article/details/107445791
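
To make the CSP idea concrete, below is a minimal PyTorch sketch of a CSP-style block as I understand it from the CSPNet paper: the input is split into two channel halves, one half runs through a stack of conv blocks, and the two paths are concatenated and fused. All module and parameter names here are my own illustration, not code from any of the repositories above.

```python
import torch
import torch.nn as nn

class ConvBNAct(nn.Module):
    """Conv + BatchNorm + activation, the basic unit used below."""
    def __init__(self, c_in, c_out, k=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class CSPBlock(nn.Module):
    """Cross Stage Partial block sketch: one channel half passes through
    conv blocks, the other acts as a shortcut; both are concatenated
    and fused by a transition conv, as described in the CSPNet paper."""
    def __init__(self, channels, n_blocks=2):
        super().__init__()
        half = channels // 2
        self.part1 = ConvBNAct(channels, half)   # shortcut path
        self.part2 = ConvBNAct(channels, half)   # computation path
        self.blocks = nn.Sequential(*[ConvBNAct(half, half, k=3) for _ in range(n_blocks)])
        self.fuse = ConvBNAct(channels, channels) # transition after concat

    def forward(self, x):
        y1 = self.part1(x)
        y2 = self.blocks(self.part2(x))
        return self.fuse(torch.cat([y1, y2], dim=1))

# quick shape check
x = torch.randn(1, 64, 32, 32)
print(CSPBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```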

Scaled-YOLOv4: Scaling Cross Stage Partial Network (the original authors' improvement on YOLOv4)

Paper: https://arxiv.org/abs/2011.08036
Code: https://github.com/WongKinYiu/ScaledYOLOv4 (an improvement built on YOLOv5 v3.0)

In-depth readings (personally, the neck feels a bit YOLOv5-like):
https://blog.csdn.net/Q1u1NG/article/details/109765162

YOLOv5 ☆☆☆☆☆

Code: https://github.com/ultralytics/yolov5 (Ultralytics' major update building on YOLOv4's features; currently at release v5.0)

In-depth readings:
https://blog.csdn.net/nan355655600/article/details/107852353 (very comprehensive on the YOLO series; 5-star recommendation)

Code walkthroughs:
https://www.bilibili.com/video/BV19K4y197u8 (5-star recommendation)
https://blog.csdn.net/weixin_42716570/article/details/112993638
https://blog.csdn.net/Q1u1NG/article/details/107465061

Environment setup and training (there are many thorough articles on this, so no elaboration; the following are just what the author consulted while setting up):
https://blog.csdn.net/oJiWuXuan/article/details/107558286
https://blog.csdn.net/qq_36756866/article/details/109111065
https://blog.csdn.net/weixin_44145782/article/details/113983421

Comprehensive YOLO overview readings (personally I find them accessible, though not exhaustive):

https://zhuanlan.zhihu.com/p/297965943
https://ai-wx.blog.csdn.net/article/details/107509243

The author's compiled YOLO notes

https://download.csdn.net/download/qq_44703282/20699117


YOLO-Fastest

Code: https://github.com/dog-qiuqiu/Yolo-Fastest (C version)


Poly-YOLO: Higher Speed, More Precise Detection and Instance Segmentation for YOLOv3

Paper: https://arxiv.org/abs/2005.13243
Code: https://gitlab.com/irafm-ai/poly-yolo (TensorFlow version)

In-depth readings:
https://zhuanlan.zhihu.com/p/149332782

Key takeaways: aimed mainly at instance segmentation; removes two weaknesses of YOLOv3: heavy label rewriting (overwriting, when multiple objects land in the same grid cell) and ineffective anchor assignment (low-scoring positives and high-scoring negatives). A sketch of the overwriting problem follows.
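
To see the label-rewriting weakness concretely, here is a tiny self-contained demo (hypothetical box coordinates; YOLOv3-style grid assignment simplified to one label slot per cell, anchors omitted) showing one label silently overwriting another when two object centers share a coarse grid cell:

```python
import numpy as np

# Two hypothetical boxes (cx, cy, w, h, class) in a 416x416 image whose
# centers fall into the same cell of a coarse 13x13 grid.
boxes = [(200, 205, 60, 80, 0),   # "person"
         (210, 198, 40, 40, 1)]   # "dog"

grid = 13
cell = 416 / grid
target = np.zeros((grid, grid, 5))  # one label slot per cell (simplified)

for cx, cy, w, h, cls in boxes:
    i, j = int(cy // cell), int(cx // cell)
    target[i, j] = (cx, cy, w, h, cls)  # second box overwrites the first

print(target[6, 6])  # only the dog survives; the person's label is lost
```

Poly-YOLO counters this with a single, higher-resolution output map, so far fewer object centers collide in the same cell.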

YOLObile: Real-Time Object Detection on Mobile Devices via Compression-Compilation Co-Design

Paper: https://arxiv.org/abs/2009.05697
Code: https://github.com/nightsnack/YOLObile

In-depth readings:
https://zhuanlan.zhihu.com/p/359251349

Key takeaways: a comparison of today's mainstream pruning schemes (see the sketch below).
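
As a rough illustration of the pruning families being compared (not YOLObile's own code), here is a sketch using PyTorch's built-in torch.nn.utils.prune utilities to contrast unstructured and structured pruning. YOLObile's block-punched scheme sits between the two, pruning the same positions within small weight blocks to stay both accurate and compiler friendly.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(16, 32, 3)

# Unstructured pruning: zero out the 50% smallest-magnitude weights anywhere.
# Retains accuracy well, but the irregular sparsity is hard to accelerate.
prune.l1_unstructured(conv, name="weight", amount=0.5)
print("unstructured sparsity:", (conv.weight == 0).float().mean().item())
prune.remove(conv, "weight")  # bake the mask into the weight permanently

# Structured pruning: remove 50% of whole output filters (dim=0) by L2 norm.
# Hardware friendly, but usually costs more accuracy.
conv2 = nn.Conv2d(16, 32, 3)
prune.ln_structured(conv2, name="weight", amount=0.5, n=2, dim=0)
print("structured sparsity:", (conv2.weight == 0).float().mean().item())
```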

PP-YOLO: An Effective and Efficient Implementation of Object Detector

Paper: https://arxiv.org/abs/2007.12099

PP-YOLOv2: A Practical Object Detector

Paper: https://arxiv.org/abs/2104.10419
Code: https://github.com/PaddlePaddle/PaddleDetection (PaddlePaddle environment)

In-depth readings:
https://blog.csdn.net/qq_41375609/article/details/116375385

Environment setup and training:
https://blog.csdn.net/Dora_blank/article/details/117740837 (seriously impressive, much respect, though I haven't tried it myself…)

阅读"要点":tricks的合理搭配

YOLOF: You Only Look One-level Feature

Paper: https://arxiv.org/abs/2103.09460
Code: https://github.com/megvii-model/yolof

In-depth readings:
https://zhuanlan.zhihu.com/p/358030385
https://blog.csdn.net/Q1u1NG/article/details/115168451

Key takeaways: since SiMo and MiMo differ little in AP, the YOLOF authors argue that the FPN's C5 feature is already sufficient to describe the image, so SiSo replaces MiMo. To bridge the transition from multi-output to single-output, they propose a dilated encoder (residual blocks fused in to enlarge the receptive field) and uniform matching (SiSo makes the anchors sparse, which over-emphasizes large objects and neglects small ones; taking the K nearest anchors as positives for each ground truth restores the balance). A sketch of uniform matching follows.
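
A minimal sketch of the uniform matching idea (tensor names and the toy layout are mine, not the repo's): each ground-truth box takes its k nearest anchors by center distance as positives, so every object gets the same number of positive anchors regardless of scale.

```python
import torch

def uniform_match(anchor_centers, gt_centers, k=4):
    """anchor_centers: (A, 2), gt_centers: (G, 2).
    Returns a (G, k) index tensor: the k anchors closest to each GT
    center, so every GT gets the same number of positives."""
    # pairwise L2 distances between GT centers and anchor centers: (G, A)
    dists = torch.cdist(gt_centers, anchor_centers)
    # for each GT, take the k smallest distances
    return dists.topk(k, dim=1, largest=False).indices

# toy example: 6 anchor centers on a line, 2 ground-truth boxes
anchors = torch.tensor([[8., 8.], [24., 8.], [40., 8.],
                        [56., 8.], [72., 8.], [88., 8.]])
gts = torch.tensor([[20., 8.], [80., 8.]])
print(uniform_match(anchors, gts, k=2))
# each GT gets exactly k positive anchors, e.g. tensor([[1, 0], [4, 5]])
```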

YOLOR: You Only Learn One Representation: Unified Network for Multiple Tasks (another strong work from the YOLOv4 authors)

Paper: https://arxiv.org/abs/2105.04206
Code: https://github.com/WongKinYiu/yolor

In-depth readings:
https://blog.csdn.net/Q1u1NG/article/details/115168451

Environment setup and training:
https://blog.csdn.net/weixin_45054641/article/details/119004451

Key takeaways: fusing explicit and implicit knowledge into a unified representation for multiple tasks; admittedly a mouthful. A sketch of the implicit-knowledge operators follows.
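
For intuition, here is a small sketch in the spirit of the repo's implicit-knowledge operators (the YOLOR code calls them ImplicitA and ImplicitM): learned vectors, independent of the input, that are added to or multiplied with feature maps around a head convolution. Initialization details here are my own illustrative assumptions.

```python
import torch
import torch.nn as nn

class ImplicitA(nn.Module):
    """Learned additive 'implicit knowledge': a per-channel vector that
    does not depend on the input, added to the feature map."""
    def __init__(self, channels):
        super().__init__()
        self.implicit = nn.Parameter(torch.zeros(1, channels, 1, 1))
        nn.init.normal_(self.implicit, std=0.02)

    def forward(self, x):
        return x + self.implicit

class ImplicitM(nn.Module):
    """Learned multiplicative 'implicit knowledge': a per-channel scale."""
    def __init__(self, channels):
        super().__init__()
        self.implicit = nn.Parameter(torch.ones(1, channels, 1, 1))
        nn.init.normal_(self.implicit, mean=1.0, std=0.02)

    def forward(self, x):
        return x * self.implicit

# typical usage: wrap a detection-head conv between the two operators
x = torch.randn(1, 256, 20, 20)
head = nn.Conv2d(256, 255, 1)
y = ImplicitM(255)(head(ImplicitA(256)(x)))
print(y.shape)  # torch.Size([1, 255, 20, 20])
```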

YOLOS: You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection

Paper: https://arxiv.org/abs/2106.00666
Code: https://github.com/hustvl/yolos (Jupyter notebook)

In-depth readings:
https://blog.csdn.net/Yong_Qi2015/article/details/117608565

Key takeaways: builds on the DETR design with a ViT backbone and transfer learning; I haven't yet studied Transformers in depth. A rough token-level sketch follows.
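
As a rough sketch of the core mechanism as I understand it (learnable detection tokens appended to the ViT patch tokens, with DETR-style decoding from those tokens; all names and sizes below are hypothetical, and positional embeddings are omitted for brevity):

```python
import torch
import torch.nn as nn

num_patches, num_det_tokens, dim = 196, 100, 384

# patch tokens from the ViT patch embedding (random stand-in here)
patch_tokens = torch.randn(1, num_patches, dim)
# learnable [DET] tokens, one per potential detection
det_tokens = nn.Parameter(torch.zeros(1, num_det_tokens, dim))

# concatenate and feed the whole sequence through a transformer encoder
tokens = torch.cat([patch_tokens, det_tokens], dim=1)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=6, batch_first=True),
    num_layers=2)
out = encoder(tokens)

# only the [DET] token outputs would be decoded into boxes and classes
det_out = out[:, -num_det_tokens:, :]
print(det_out.shape)  # torch.Size([1, 100, 384])
```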

YOLOX: Exceeding YOLO Series in 2021

Paper: https://arxiv.org/abs/2107.08430
Code: https://github.com/Megvii-BaseDetection/YOLOX

In-depth readings:
https://zhuanlan.zhihu.com/p/391396921
https://blog.csdn.net/wjinjie/article/details/119394042
https://www.bilibili.com/video/BV1Kq4y1p7fM

Code walkthroughs:
https://zhuanlan.zhihu.com/p/394392992 ☆
https://blog.csdn.net/nan355655600/article/details/119666304

Environment setup and training:
https://blog.csdn.net/qq_39056987/article/details/119002910
https://blog.csdn.net/Dora_blank/article/details/119087239 (covers solutions to many common errors; 5-star recommendation)

Megvii's own first-hand account:
https://www.zhihu.com/question/473350307/answer/2021031747

Key takeaways: uses YOLOv3-Darknet53 as the baseline and introduces a decoupled head, data augmentation, anchor-free detection, and the SimOTA sample-matching method. A sketch of the decoupled head follows.
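
A minimal sketch of the decoupled-head idea (classification and regression/objectness computed in separate branches after a shared stem; channel sizes are illustrative, not YOLOX's exact configuration):

```python
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    """YOLOX-style decoupled head sketch: a 1x1 stem, then separate conv
    branches for classification and for box regression + objectness."""
    def __init__(self, in_ch, num_classes, feat_ch=256):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, feat_ch, 1)
        self.cls_branch = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(feat_ch, num_classes, 1))   # per-location class scores
        self.reg_branch = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.SiLU())
        self.reg_out = nn.Conv2d(feat_ch, 4, 1)   # box offsets (anchor-free)
        self.obj_out = nn.Conv2d(feat_ch, 1, 1)   # objectness

    def forward(self, x):
        f = self.stem(x)
        cls = self.cls_branch(f)
        r = self.reg_branch(f)
        return cls, self.reg_out(r), self.obj_out(r)

x = torch.randn(1, 512, 20, 20)
cls, box, obj = DecoupledHead(512, num_classes=80)(x)
print(cls.shape, box.shape, obj.shape)
# torch.Size([1, 80, 20, 20]) torch.Size([1, 4, 20, 20]) torch.Size([1, 1, 20, 20])
```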

Related weights

yolov5.zip

yolor-csp-x.zip
yolor-main.zip

yolox.zip
yolox_darknet53.pth
yolox_x.pth
