mmsegmentation Configuration Parameters Explained (5)

Contents

1. Parameter Composition

2. Command-Line Arguments

3. Configuration File

3.1 Configuration File Structure

3.2 Common Questions


We take fcn_r50-d8_512x512_20k_voc12.py as the example. fcn_r50-d8_512x512_20k_voc12.py is derived from fcn_r50-d8_512x512_20k_voc12aug.py by changing the dataset from voc_aug to voc; everything else is identical.

1. Parameter Composition

In the main function of tools/train.py, the training parameters come from three sources:

1) command-line arguments;

2) the configuration file;

3) environment variables.

The key code snippet is as follows:

def main():
    args = parse_args()                    # 1) command-line arguments
    cfg = Config.fromfile(args.config)     # 2) configuration file
    if args.options is not None:
        cfg.merge_from_dict(args.options)  # --options overrides values from the file
    ...
    env_info_dict = collect_env()          # 3) environment variables
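
As a minimal illustration of how a command-line override flows into the merged configuration (the dotted key below is just an example), Config.fromfile loads the file and merge_from_dict applies the --options values on top of it:

from mmcv.utils import Config

# Load the file-based configuration.
cfg = Config.fromfile('configs/fcn/fcn_r50-d8_512x512_20k_voc12.py')

# --options key=value pairs arrive as a dict; merging them overrides
# the corresponding fields parsed from the file.
cfg.merge_from_dict({'data.samples_per_gpu': 4})
print(cfg.data.samples_per_gpu)  # 4 instead of the file value of 2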

2. Command-Line Arguments

For details on how these arguments are parsed with argparse, see: https://blog.csdn.net/weixin_34910922/article/details/106677752

For the meaning of each command-line argument, refer to the previous article in this series on model training and inference.

Here we explain how the arguments are invoked:

import argparse
import os

from mmcv.utils import Config, DictAction, get_git_hash

def parse_args():
    parser = argparse.ArgumentParser(description='Train a segmentor')
    parser.add_argument('config', help='train config file path')
    parser.add_argument('--work-dir', help='the dir to save logs and models')
    parser.add_argument(
        '--load-from', help='the checkpoint file to load weights from')
    parser.add_argument(
        '--resume-from', help='the checkpoint file to resume from')
    parser.add_argument(
        '--no-validate',
        action='store_true',
        help='whether not to evaluate the checkpoint during training')
    group_gpus = parser.add_mutually_exclusive_group()
    group_gpus.add_argument(
        '--gpus',
        type=int,
        help='number of gpus to use '
             '(only applicable to non-distributed training)')
    group_gpus.add_argument(
        '--gpu-ids',
        type=int,
        nargs='+',
        help='ids of gpus to use '
             '(only applicable to non-distributed training)')
    parser.add_argument('--seed', type=int, default=None, help='random seed')
    parser.add_argument(
        '--deterministic',
        action='store_true',
        help='whether to set deterministic options for CUDNN backend.')
    parser.add_argument(
        '--options', nargs='+', action=DictAction, help='custom options')
    parser.add_argument(
        '--launcher',
        choices=['none', 'pytorch', 'slurm', 'mpi'],
        default='none',
        help='job launcher')
    parser.add_argument('--local_rank', type=int, default=0)
    args = parser.parse_args()
    if 'LOCAL_RANK' not in os.environ:
        os.environ['LOCAL_RANK'] = str(args.local_rank)
    return args

1) The positional config argument

Invocation: python tools/train.py configs/fcn/fcn_r50-d8_512x512_20k_voc12.py

Resulting args:

args: Namespace(
    config='configs/fcn/fcn_r50-d8_512x512_20k_voc12.py',
    deterministic=False,
    gpu_ids=None,
    gpus=None,
    launcher='none',
    load_from=None,
    local_rank=0,
    no_validate=False,
    options=None,
    resume_from=None,
    seed=None,
    work_dir=None)

2) --param style arguments

Invocation, with --work-dir="work_dir":

python tools/train.py configs/fcn/fcn_r50-d8_512x512_20k_voc12.py --work-dir="work_dir"

3) action='store_true' flags

Invocation: --no-validate

4) nargs='+' arguments

For example, the integer-list argument --gpu-ids: --gpu-ids 0 1 2; or the dict-style argument --options: --options root_dir="data".

Example invocation:

python tools/train.py configs/fcn/fcn_r50-d8_512x512_20k_voc12.py --work-dir="work_dir" --no-validate --gpu-ids 0 1 2 --options root_dir="data"

Result:

args: Namespace(
    config='configs/fcn/fcn_r50-d8_512x512_20k_voc12.py',
    deterministic=False, 
    gpu_ids=[0, 1, 2], 
    gpus=None, 
    launcher='none', 
    load_from=None, 
    local_rank=0, 
    no_validate=True, 
    options={'root_dir': 'data'}, 
    resume_from=None, 
    seed=None, 
    work_dir='work_dir')
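
The gpu_ids, options, and no_validate values above can be reproduced in isolation with a minimal parser; this is only a standalone sketch of the argparse / DictAction behaviour, not part of tools/train.py:

import argparse
from mmcv.utils import DictAction

parser = argparse.ArgumentParser()
parser.add_argument('--no-validate', action='store_true')       # flag -> bool
parser.add_argument('--gpu-ids', type=int, nargs='+')           # space-separated ints -> list
parser.add_argument('--options', nargs='+', action=DictAction)  # key=value pairs -> dict
args = parser.parse_args(
    ['--no-validate', '--gpu-ids', '0', '1', '2', '--options', 'root_dir=data'])
print(args.no_validate, args.gpu_ids, args.options)
# True [0, 1, 2] {'root_dir': 'data'}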

3. Configuration File

MMSegmentation's configuration system combines a modular and inheritance-based design, which makes it convenient to run a variety of experiments. If you want to inspect a configuration file, you can run the command below to print the complete, merged configuration; the --options flag additionally lets you view the configuration with specific fields updated.

python tools/print_config.py /PATH/TO/CONFIG --options xxx.yyy=zzz

Example:

python tools/print_config.py configs/fcn/fcn_r50-d8_512x512_20k_voc12.py

Result:

Config:
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
    type='EncoderDecoder',
    pretrained=None,
    backbone=dict(
        type='ResNetV1c',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        dilations=(1, 1, 2, 4),
        strides=(1, 2, 1, 1),
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=False,
        style='pytorch',
        contract_dilation=True),
    decode_head=dict(
        type='FCNHead',
        in_channels=2048,
        in_index=3,
        channels=512,
        num_convs=2,
        concat_input=True,
        dropout_ratio=0.1,
        num_classes=21,
        norm_cfg=dict(type='BN', requires_grad=True),
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
    auxiliary_head=dict(
        type='FCNHead',
        in_channels=1024,
        in_index=2,
        channels=256,
        num_convs=1,
        concat_input=False,
        dropout_ratio=0.1,
        num_classes=21,
        norm_cfg=dict(type='BN', requires_grad=True),
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
    train_cfg=dict(),
    test_cfg=dict(mode='whole'))
dataset_type = 'PascalVOCDataset'
data_root = 'F:\dataset\voc2012'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=255),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 512),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='PascalVOCDataset',
        data_root='F:\dataset\voc2012',
        img_dir='JPEGImages',
        ann_dir='SegmentationClass',
        split='ImageSets/Segmentation/train.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations'),
            dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),
            dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),
            dict(type='RandomFlip', prob=0.5),
            dict(type='PhotoMetricDistortion'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=255),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img', 'gt_semantic_seg'])
        ]),
    val=dict(
        type='PascalVOCDataset',
        data_root='F:\dataset\voc2012',
        img_dir='JPEGImages',
        ann_dir='SegmentationClass',
        split='ImageSets/Segmentation/val.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(2048, 512),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]),
    test=dict(
        type='PascalVOCDataset',
        data_root='F:\dataset\voc2012',
        img_dir='JPEGImages',
        ann_dir='SegmentationClass',
        split='ImageSets/Segmentation/val.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(2048, 512),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]))
log_config = dict(
    interval=50, hooks=[dict(type='TextLoggerHook', by_epoch=False)])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
cudnn_benchmark = True
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
optimizer_config = dict()
lr_config = dict(policy='poly', power=0.9, min_lr=0.0001, by_epoch=False)
runner = dict(type='IterBasedRunner', max_iters=20000)
checkpoint_config = dict(by_epoch=False, interval=2000)
evaluation = dict(interval=2000, metric='mIoU')
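
For reference, tools/print_config.py is essentially a thin wrapper around the mmcv Config API; roughly (a sketch using only fromfile, merge_from_dict, and pretty_text, which are public mmcv calls), it does:

from mmcv.utils import Config

cfg = Config.fromfile('configs/fcn/fcn_r50-d8_512x512_20k_voc12.py')
# cfg.merge_from_dict({'xxx.yyy': 'zzz'})  # applied only when --options is given
print(f'Config:\n{cfg.pretty_text}')       # the merged configuration shown above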

3.1 Configuration File Structure

Open the fcn_r50-d8_512x512_20k_voc12.py file. Its content is as follows:

_base_ = [
    '../_base_/models/fcn_r50-d8.py',        # model config
    '../_base_/datasets/pascal_voc12.py',    # dataset config
    '../_base_/default_runtime.py',          # runtime config
    '../_base_/schedules/schedule_20k.py'    # schedule config
]
model = dict(
    decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))

The _base_ entry lists the base configurations; any other parameter defined in this file overrides the corresponding content of those base configurations. When the configuration file is loaded, the parameters from the base files and the parameters defined in this file are merged into one complete cfg. The merged cfg is exactly what python tools/print_config.py config printed in the previous section.
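
To make the merge rule concrete, here is a self-contained sketch with two hypothetical files (base.py and child.py are illustrative, not real MMSegmentation configs); fields defined in the child override the base, while unspecified fields are inherited:

# --- base.py (hypothetical) ---
model = dict(decode_head=dict(type='FCNHead', num_classes=19))

# --- child.py (hypothetical): inherits base.py and overrides only num_classes ---
_base_ = ['./base.py']
model = dict(decode_head=dict(num_classes=21))

# --- loading child.py merges the two dicts ---
from mmcv.utils import Config
cfg = Config.fromfile('child.py')
print(cfg.model.decode_head)
# {'type': 'FCNHead', 'num_classes': 21}  -> type inherited, num_classes overridden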

3.2 Common Questions

1) Configuration file naming convention

{model}_{backbone}_[misc]_[gpu x batch_per_gpu]_{resolution}_{schedule}_{dataset}

{xxx} denotes a required field and [yyy] denotes an optional field.

  • {model}: the model type, e.g. psp, deeplabv3.
  • {backbone}: the backbone type, e.g. r50 (ResNet-50), x101 (ResNeXt-101).
  • [misc]: miscellaneous settings / model plugins, e.g. dconv, gcb, attention, mstrain.
  • [gpu x batch_per_gpu]: number of GPUs and samples per GPU; 8x2 is the default.
  • {schedule}: the training schedule; 20k means 20k iterations.
  • {dataset}: the dataset, e.g. cityscapes, voc12aug, ade.
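
For example, the file used throughout this article, fcn_r50-d8_512x512_20k_voc12.py, decomposes as model=fcn, backbone=r50-d8 (ResNet-50, where d8 conventionally denotes an output stride of 8 obtained with dilated convolutions), resolution=512x512, schedule=20k, and dataset=voc12; the optional [misc] and [gpu x batch_per_gpu] fields are omitted.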

2) _delete_=True removes the fields of that dict inherited from the base configuration, so the dict is replaced by the new keys instead of being merged with the base.
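
A hedged sketch of how _delete_=True is typically written (the head type and its fields below are only illustrative, not taken from this article's config): without _delete_=True the new dict would be merged with the decode_head fields inherited from _base_; with it, the inherited fields are discarded first.

# child.py (hypothetical): swap the decode_head for a different head type
_base_ = ['../_base_/models/fcn_r50-d8.py']
model = dict(
    decode_head=dict(
        _delete_=True,   # drop every decode_head field inherited from _base_
        type='PSPHead',
        in_channels=2048,
        in_index=3,
        channels=512,
        num_classes=21))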
