[Object Detection] Getting Started with YOLOv8 (Part 2): Usage Overview

Ultralytics YOLOv8 supports both a CLI and a Python API. In this section we give a brief introduction to each.

1. Using the CLI (Command Line)

The Ultralytics YOLO command-line interface (CLI) lets you run everything with simple one-line commands, without any custom Python code: all tasks can be run straight from the terminal with the yolo command.
The yolo command uses the following syntax:

yolo TASK MODE ARGS

Where   TASK (optional) is one of [detect, segment, classify]
        MODE (required) is one of [train, val, predict, export, track]
        ARGS (optional) are any number of custom 'arg=value' pairs like 'imgsz=320' that override defaults.

TASK (optional): one of [detect, segment, classify]. If TASK is not passed explicitly, YOLOv8 will try to guess it from the model type.
MODE (required): one of [train, val, predict, export, track].
ARGS (optional): any number of custom arg=value pairs, such as imgsz=320, that override the defaults. For a full list of available ARGS, see the Configuration docs and the default.yaml source on GitHub (shown below).

# Ultralytics YOLO , AGPL-3.0 license
# Default training settings and hyperparameters for medium-augmentation COCO training

task: detect  # YOLO task, i.e. detect, segment, classify, pose
mode: train  # YOLO mode, i.e. train, val, predict, export, track, benchmark

# Train settings -------------------------------------------------------------------------------------------------------
model:  # path to model file, i.e. yolov8n.pt, yolov8n.yaml
data:  # path to data file, i.e. coco128.yaml
epochs: 100  # number of epochs to train for
patience: 50  # epochs to wait for no observable improvement for early stopping of training
batch: 16  # number of images per batch (-1 for AutoBatch)
imgsz: 640  # size of input images as integer or w,h
save: True  # save train checkpoints and predict results
save_period: -1 # Save checkpoint every x epochs (disabled if < 1)
cache: False  # True/ram, disk or False. Use cache for data loading
device:  # device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu
workers: 8  # number of worker threads for data loading (per RANK if DDP)
project:  # project name
name:  # experiment name, results saved to 'project/name' directory
exist_ok: False  # whether to overwrite existing experiment
pretrained: False  # whether to use a pretrained model
optimizer: SGD  # optimizer to use, choices=['SGD', 'Adam', 'AdamW', 'RMSProp']
verbose: True  # whether to print verbose output
seed: 0  # random seed for reproducibility
deterministic: True  # whether to enable deterministic mode
single_cls: False  # train multi-class data as single-class
rect: False  # rectangular training if mode='train' or rectangular validation if mode='val'
cos_lr: False  # use cosine learning rate scheduler
close_mosaic: 0  # (int) disable mosaic augmentation for final epochs
resume: False  # resume training from last checkpoint
amp: True  # Automatic Mixed Precision (AMP) training, choices=[True, False], True runs AMP check
# Segmentation
overlap_mask: True  # masks should overlap during training (segment train only)
mask_ratio: 4  # mask downsample ratio (segment train only)
# Classification
dropout: 0.0  # use dropout regularization (classify train only)

# Val/Test settings ----------------------------------------------------------------------------------------------------
val: True  # validate/test during training
split: val  # dataset split to use for validation, i.e. 'val', 'test' or 'train'
save_json: False  # save results to JSON file
save_hybrid: False  # save hybrid version of labels (labels + additional predictions)
conf:  # object confidence threshold for detection (default 0.25 predict, 0.001 val)
iou: 0.7  # intersection over union (IoU) threshold for NMS
max_det: 300  # maximum number of detections per image
half: False  # use half precision (FP16)
dnn: False  # use OpenCV DNN for ONNX inference
plots: True  # save plots during train/val

# Prediction settings --------------------------------------------------------------------------------------------------
source:  # source directory for images or videos
show: False  # show results if possible
save_txt: False  # save results as .txt file
save_conf: False  # save results with confidence scores
save_crop: False  # save cropped images with results
show_labels: True  # show object labels in plots
show_conf: True  # show object confidence scores in plots
vid_stride: 1  # video frame-rate stride
line_thickness: 3  # bounding box thickness (pixels)
visualize: False  # visualize model features
augment: False  # apply image augmentation to prediction sources
agnostic_nms: False  # class-agnostic NMS
classes:  # filter results by class, i.e. class=0, or class=[0,2,3]
retina_masks: False  # use high-resolution segmentation masks
boxes: True  # Show boxes in segmentation predictions

# Export settings ------------------------------------------------------------------------------------------------------
format: torchscript  # format to export to
keras: False  # use Keras
optimize: False  # TorchScript: optimize for mobile
int8: False  # CoreML/TF INT8 quantization
dynamic: False  # ONNX/TF/TensorRT: dynamic axes
simplify: False  # ONNX: simplify model
opset:  # ONNX: opset version (optional)
workspace: 4  # TensorRT: workspace size (GB)
nms: False  # CoreML: add NMS

# Hyperparameters ------------------------------------------------------------------------------------------------------
lr0: 0.01  # initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
lrf: 0.01  # final learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 7.5  # box loss gain
cls: 0.5  # cls loss gain (scale with pixels)
dfl: 1.5  # dfl loss gain
pose: 12.0  # pose loss gain
kobj: 1.0  # keypoint obj loss gain
label_smoothing: 0.0  # label smoothing (fraction)
nbs: 64  # nominal batch size
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.5  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.0  # image mixup (probability)
copy_paste: 0.0  # segment copy-paste (probability)

# Custom config.yaml ---------------------------------------------------------------------------------------------------
cfg:  # for overriding defaults.yaml

# Debug, do not modify -------------------------------------------------------------------------------------------------
v5loader: False  # use legacy YOLOv5 dataloader

# Tracker settings ------------------------------------------------------------------------------------------------------
tracker: botsort.yaml  # tracker type, ['botsort.yaml', 'bytetrack.yaml']

  • Train
    Train a detection model for 10 epochs with an initial learning rate of 0.01:
yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
  • Val
    Validate a pretrained detection model with a batch size of 1 and an image size of 640:
yolo val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
  • Predict
    Predict a video with a pretrained segmentation model at an image size of 320:
yolo predict model=yolov8n-seg.pt source='https://youtu.be/Zgi9g1ksQHc' imgsz=320
  • Export
    Export a YOLOv8n classification model to ONNX format at an image size of 224x128 (no TASK required):
yolo export model=yolov8n-cls.pt format=onnx imgsz=224,128
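  • Track
    Track mode works the same way. A minimal sketch (reusing the video source from the Predict example; per the config listing above, the tracker defaults to botsort.yaml):
yolo track model=yolov8n.pt source='https://youtu.be/Zgi9g1ksQHc'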
  • Other
    Run special commands to see the version, view settings, run checks, and more:
yolo help
yolo checks
yolo version
yolo settings
yolo copy-cfg
yolo cfg
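
As a concrete illustration of the last two commands, here is a minimal sketch of the workflow: yolo copy-cfg writes a default_copy.yaml into the current directory, which you can edit and then pass back in via the cfg override described in the config listing above:

yolo copy-cfg
yolo cfg=default_copy.yaml imgsz=320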

2. Using the Python API

YOLOv8's Python API integrates seamlessly into your own Python projects, making it easy to load models, run them, and process their outputs. It is designed to be simple and easy to use, so users can quickly add object detection, segmentation, and classification to their code, which makes it a valuable tool for anyone looking to bring these capabilities into a Python project.

For example, a user can load a model, train it, evaluate its performance on the validation set, and even export it to ONNX format, all in just a few lines of code:

from ultralytics import YOLO

# Create a new YOLO model from scratch
model = YOLO('yolov8n.yaml')

# Load a pretrained YOLO model (recommended for training)
model = YOLO('yolov8n.pt')

# Train the model using the 'coco128.yaml' dataset for 3 epochs
results = model.train(data='coco128.yaml', epochs=3)

# Evaluate the model's performance on the validation set
results = model.val()

# Perform object detection on an image using the model
results = model('https://ultralytics.com/images/bus.jpg')

# Export the model to ONNX format
success = model.export(format='onnx')
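
The results returned by model(...) can be processed further in code. The sketch below (assuming the boxes/cls/conf/xyxy attributes of the ultralytics Results API) prints each detection's class name, confidence, and box corners:

from ultralytics import YOLO

# Load a pretrained detection model
model = YOLO('yolov8n.pt')

# Run prediction; results is a list with one entry per input image
results = model('https://ultralytics.com/images/bus.jpg')

for r in results:
    for box in r.boxes:
        cls_id = int(box.cls)                   # predicted class index
        conf = float(box.conf)                  # confidence score
        x1, y1, x2, y2 = box.xyxy[0].tolist()   # box corners in pixels
        print(f'{model.names[cls_id]}: {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})')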

Note: The rest of this series focuses on YOLOv8's Python API; CLI usage is covered only in this section. For more CLI details, please refer to the official documentation.
