MMAction2 Learning Notes: Training and Testing C3D on Your Own Dataset

I'm new to this; these notes record my own learning process, and I hope they help you too.

1. Dataset Preparation


Refer to the official data preparation tutorial:

https://github.com/open-mmlab/mmaction2/blob/master/docs/data_preparation.md

Prepare the dataset following the UCF-101 scripts under:

mmaction2/tools/data/ucf101


1.1 Prepare the video data

cd mmaction2/data

mkdir sketch (use your own dataset's name; mine is sketch)

cd sketch

Video data directory structure:

data/
--sketch/
----videos/
------class1/
--------class1_1.mp4
--------class1_2.mp4
--------class1_3.mp4
....................


1.2 Prepare the annotation files

(1) classInd.txt

Change the entries to your own action classes:

1 backward
2 circle
3 draw
4 jump
5 rectangle
6 run
7 triangle
8 turnleft
9 turnright

(2) trainlist.txt and testlist.txt

Write your own code to split the videos into training and test sets.

Each line has the format: class_folder/video_file

backward/backward_1.mp4
backward/backward_2.mp4
backward/backward_3.mp4
backward/backward_4.mp4
backward/backward_5.mp4
backward/backward_6.mp4
backward/backward_7.mp4
backward/backward_8.mp4
backward/backward_9.mp4
backward/backward_10.mp4
…………………………………………………………

My split is simple (I couldn't write anything fancier anyway): for each class, videos 1-105 go to the training set and 106-150 to the test set.

# Write the training entries for one class; repeat per class
# (see the fuller sketch below).
with open('trainlist.txt', mode='w') as file:
    for i in range(1, 106):
        file.write('backward/backward_' + str(i) + '.mp4\n')
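
For reference, here is a fuller sketch that writes both trainlist.txt and testlist.txt for all nine classes (class names hard-coded to match classInd.txt above):

# Videos 1-105 per class -> trainlist.txt, 106-150 -> testlist.txt.
classes = ['backward', 'circle', 'draw', 'jump', 'rectangle',
           'run', 'triangle', 'turnleft', 'turnright']

for name, start, stop in [('trainlist.txt', 1, 106), ('testlist.txt', 106, 151)]:
    with open(name, mode='w') as f:
        for cls in classes:
            for i in range(start, stop):
                f.write(f'{cls}/{cls}_{i}.mp4\n')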

1.3 Extract video frames

cd mmaction2/tools/data/ucf101

Use OpenCV to extract the frames.

Copy extract_rgb_frames_opencv.sh and rename the copy myextract_rgb_frames_opencv.sh (optional; you can just edit the original file directly).

Then update the relevant paths inside it.

Because OpenCV extracts frames at each video's original resolution, while training and testing need a uniform size, add the parameters --new-width 320 --new-height 240; otherwise you may hit size mismatches later. I ran into exactly this problem when training on UCF-101.

python build_rawframes.py ../../data/sketch/videos/ ../../data/sketch/rawframes/ --task rgb --level 2 --ext mp4 --new-width 320 --new-height 240 --use-opencv

Run the script: bash myextract_rgb_frames_opencv.sh
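
After extraction, a quick sanity check (my own addition, run from the mmaction2 root; it assumes the default img_%05d.jpg frame naming used by build_rawframes.py) that every clip was resized to 320x240:

# Check the first frame of every clip for the expected 320x240 size.
import glob
import cv2

for path in glob.glob('data/sketch/rawframes/*/*/img_00001.jpg'):
    h, w = cv2.imread(path).shape[:2]
    if (w, h) != (320, 240):
        print(f'unexpected size {w}x{h}: {path}')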


1.4 Generate the file lists

As above, copy generate_rawframes_filelist.sh, rename it mygenerate_rawframes_filelist.sh, and update the paths.

1.4.1 Modify build_file_list.py

In tools/data/build_file_list.py, a few changes are needed.

(1) Import your dataset's split parser, parse_sketch_splits:

from tools.data.parse_file_list import (parse_directory,
                                        parse_diving48_splits,
                                        parse_sketch_splits,
                                        parse_hmdb51_split,
                                        parse_jester_splits,
                                        parse_kinetics_splits,
                                        parse_mit_splits, parse_mmit_splits,
                                        parse_sthv1_splits, parse_sthv2_splits,
                                        parse_ucf101_splits)

(2) Add the dataset name sketch to choices:

    parser.add_argument(
        'dataset',
        type=str,
        choices=[
            'ucf101', 'sketch', 'kinetics400', 'kinetics600', 'kinetics700',
            'thumos14', 'sthv1', 'sthv2', 'mit', 'mmit', 'activitynet',
            'hmdb51', 'jester', 'diving48'
        ],
        help='dataset to be built file list')

(3) Add a branch for your dataset:

    if args.dataset == 'ucf101':
        splits = parse_ucf101_splits(args.level)
    elif args.dataset == 'sketch':  # add my dataset
        splits = parse_sketch_splits(args.level)

1.4.2 Modify parse_file_list.py

In tools/data/parse_file_list.py, copy parse_ucf101_splits(), rename the copy parse_sketch_splits(), and edit its contents. Mostly this just means changing the annotation paths; leave the rest alone.

def parse_sketch_splits(level):
    """Parse the sketch dataset into "train" and "test" splits.

    Args:
        level (int): Directory level of data. 1 for the single-level directory,
            2 for the two-level directory.

    Returns:
        list: "train" and "test" splits of the sketch dataset.
    """
    class_index_file = 'data/sketch/annotations/classInd.txt'
    # Note: these templates contain no '{}' placeholder, so the .format(i)
    # calls below are no-ops and the three generated splits are identical.
    train_file_template = 'data/sketch/annotations/trainlist.txt'
    test_file_template = 'data/sketch/annotations/testlist.txt'

    with open(class_index_file, 'r') as fin:
        class_index = [x.strip().split() for x in fin]
    class_mapping = {x[1]: int(x[0]) - 1 for x in class_index}

    def line_to_map(line):
        """A function to map line string to video and label.

        Args:
            line (str): A long directory path, which is a text path.

        Returns:
            tuple[str, str]: (video, label), video is the video id,
                label is the video label.
        """
        items = line.strip().split()
        video = osp.splitext(items[0])[0]
        if level == 1:
            video = osp.basename(video)
            label = items[0]
        elif level == 2:
            video = osp.join(
                osp.basename(osp.dirname(video)), osp.basename(video))
            label = class_mapping[osp.dirname(items[0])]
        return video, label

    splits = []
    for i in range(1, 4):
        with open(train_file_template.format(i), 'r') as fin:
            train_list = [line_to_map(x) for x in fin]

        with open(test_file_template.format(i), 'r') as fin:
            test_list = [line_to_map(x) for x in fin]
        splits.append((train_list, test_list))

    return splits

Run bash mygenerate_rawframes_filelist.sh to generate the list files. Don't worry about the split 1/2/3 business: UCF-101-style training configs default to split=1, and the three splits we generate are identical anyway. Unless you have a lot of data there is no need for fine-grained splits like UCF-101's; adapt this yourself if you do.
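
For reference, each line of the generated rawframes list has the format frame_dir num_frames label, where labels are 0-indexed per class_mapping above; the frame counts below are made-up examples:

backward/backward_1 98 0
backward/backward_2 120 0
circle/circle_1 105 1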


2. Prepare the C3D config file

Path: mmaction2/configs/recognition/c3d/c3d_sports1m_16x1x1_45e_ucf101_rgb.py

Copy it into your own work_dir, and also copy mmaction2/configs/_base_/models/c3d_sports1m_pretrained.py into the same directory, since the config's _base_ line references it by relative path.

2.1 Modify c3d_sports1m_16x1x1_45e_sketch_rgb.py

_base_ = 'c3d_sports1m_pretrained.py'

# dataset settings
dataset_type = 'RawframeDataset'
data_root = 'data/sketch/rawframes'
data_root_val = 'data/sketch/rawframes'
split = 1  # official train/test splits. valid numbers: 1, 2, 3
ann_file_train = f'data/sketch/sketch_train_split_{split}_rawframes.txt'
ann_file_val = f'data/sketch/sketch_val_split_{split}_rawframes.txt'
ann_file_test = f'data/sketch/sketch_val_split_{split}_rawframes.txt'
img_norm_cfg = dict(mean=[104, 117, 128], std=[1, 1, 1], to_bgr=False)
train_pipeline = [
    dict(type='SampleFrames', clip_len=16, frame_interval=1, num_clips=1),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(128, 171)),
    dict(type='RandomCrop', size=112),
    dict(type='Flip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='FormatShape', input_format='NCTHW'),
    dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs', 'label'])
]
val_pipeline = [
    dict(
        type='SampleFrames',
        clip_len=16,
        frame_interval=1,
        num_clips=1,
        test_mode=True),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(128, 171)),
    dict(type='CenterCrop', crop_size=112),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='FormatShape', input_format='NCTHW'),
    dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs', 'label'])
]
test_pipeline = [
    dict(
        type='SampleFrames',
        clip_len=16,
        frame_interval=1,
        num_clips=10,
        test_mode=True),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(128, 171)),
    dict(type='CenterCrop', crop_size=112),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='FormatShape', input_format='NCTHW'),
    dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs', 'label'])
]
data = dict(
    videos_per_gpu=30,
    workers_per_gpu=2,
    test_dataloader=dict(videos_per_gpu=1),
    train=dict(
        type=dataset_type,
        ann_file=ann_file_train,
        data_prefix=data_root,
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=ann_file_val,
        data_prefix=data_root_val,
        pipeline=val_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=ann_file_test,
        data_prefix=data_root_val,
        pipeline=test_pipeline))
# optimizer
optimizer = dict(
    type='SGD', lr=0.001, momentum=0.9,
    weight_decay=0.0005)  # this lr is used for 8 gpus lr = 0.001
optimizer_config = dict(grad_clip=dict(max_norm=40, norm_type=2))
# learning policy
lr_config = dict(policy='step', step=[20, 40])
total_epochs = 45
checkpoint_config = dict(interval=5)
evaluation = dict(
    interval=5, metrics=['top_k_accuracy', 'mean_class_accuracy'])
log_config = dict(
    interval=20,
    hooks=[
        dict(type='TextLoggerHook'),
        # dict(type='TensorboardLoggerHook'),
    ])
# runtime settings
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = f'./work_dirs/c3d_sports1m_16x1x1_45e_sketch_train_rgb/'
load_from = None
resume_from = None
workflow = [('train', 1)]
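
Before launching training, a quick check (my own sketch, assuming mmcv is installed and you run it from the mmaction2 root) that the config parses and the _base_ inheritance resolves:

# Load the config and print a couple of resolved fields.
from mmcv import Config

cfg = Config.fromfile('data/mymodel/c3d_sports1m_16x1x1_45e_sketch_rgb.py')
print(cfg.model.cls_head.num_classes)  # should print 9
print(cfg.data.train.ann_file)         # data/sketch/sketch_train_split_1_rawframes.txt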

2.2 Modify c3d_sports1m_pretrained.py

The only change needed here is num_classes; set it to your own number of classes (9 in my case).

# model settings
model = dict(
    type='Recognizer3D',
    backbone=dict(
        type='C3D',
        pretrained=  # noqa: E251
        'https://download.openmmlab.com/mmaction/recognition/c3d/c3d_sports1m_pretrain_20201016-dcc47ddc.pth',  # noqa: E501
        style='pytorch',
        conv_cfg=dict(type='Conv3d'),
        norm_cfg=None,
        act_cfg=dict(type='ReLU'),
        dropout_ratio=0.5,
        init_std=0.005),
    cls_head=dict(
        type='I3DHead',
        num_classes=9,
        in_channels=4096,
        spatial_type=None,
        dropout_ratio=0.5,
        init_std=0.01),
    # model training and testing settings
    train_cfg=None,
    test_cfg=dict(average_clips='score'))
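
To double-check the num_classes wiring, a minimal sketch (my own addition, not part of the original workflow) that builds the model and pushes a dummy 16-frame 112x112 clip through backbone and head:

# Build the model from the config and verify the classification output shape.
import torch
from mmcv import Config
from mmaction.models import build_model

cfg = Config.fromfile('data/mymodel/c3d_sports1m_16x1x1_45e_sketch_rgb.py')
model = build_model(cfg.model)
clip = torch.randn(1, 3, 16, 112, 112)  # (N, C, T, H, W), matches the pipeline
feat = model.backbone(clip)             # C3D flattens to (1, 4096)
print(model.cls_head(feat).shape)       # expect torch.Size([1, 9])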

3. Train on your own dataset

cd mmaction2/

bash tools/dist_train.sh \
    data/mymodel/c3d_sports1m_16x1x1_45e_sketch_rgb.py 2 \
    --work-dir data/mymodel/checkpoints \
    --cfg-options load_from=data/checkpoints/c3d_sports1m_16x1x1_45e_ucf101_rgb_20201021-26655025.pth \
    --validate --gpus 2 --seed 0 --deterministic

Note: add --cfg-options load_from={model_path} to the training command, where {model_path} is wherever you stored the pretrained model file (downloaded with wget).
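
For reference (my own addition; the URL follows the C3D model-zoo naming, so double-check it against the C3D README in case it has moved), the UCF-101 checkpoint can be fetched into the location the command above expects with:

wget -P data/checkpoints https://download.openmmlab.com/mmaction/recognition/c3d/c3d_sports1m_16x1x1_45e_ucf101_rgb/c3d_sports1m_16x1x1_45e_ucf101_rgb_20201021-26655025.pth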

Training begins:

2022-09-05 02:32:31,992 - mmaction - INFO - workflow: [('train', 1)], max: 45 epochs
2022-09-05 02:32:31,992 - mmaction - INFO - Checkpoints will be saved to /mmaction2/data/mymodel/checkpoints by HardDiskBackend.
2022-09-05 02:32:48,149 - mmaction - INFO - Epoch [1][20/32]	lr: 2.500e-04, eta: 0:19:06, time: 0.808, data_time: 0.152, memory: 5749, top1_acc: 0.2717, top5_acc: 0.7400, loss_cls: 2.2312, loss: 2.2312, grad_norm: 47.8905
2022-09-05 02:33:11,388 - mmaction - INFO - Epoch [2][20/32]	lr: 2.500e-04, eta: 0:14:15, time: 0.794, data_time: 0.159, memory: 5749, top1_acc: 0.6000, top5_acc: 0.9467, loss_cls: 1.0950, loss: 1.0950, grad_norm: 31.0190
2022-09-05 02:33:34,541 - mmaction - INFO - Epoch [3][20/32]	lr: 2.500e-04, eta: 0:12:52, time: 0.790, data_time: 0.155, memory: 5749, top1_acc: 0.7400, top5_acc: 0.9833, loss_cls: 0.7251, loss: 0.7251, grad_norm: 29.4750
2022-09-05 02:33:57,579 - mmaction - INFO - Epoch [4][20/32]	lr: 2.500e-04, eta: 0:12:04, time: 0.784, data_time: 0.151, memory: 5749, top1_acc: 0.7917, top5_acc: 0.9950, loss_cls: 0.5874, loss: 0.5874, grad_norm: 28.1907
2022-09-05 02:34:20,535 - mmaction - INFO - Epoch [5][20/32]	lr: 2.500e-04, eta: 0:11:30, time: 0.781, data_time: 0.147, memory: 5749, top1_acc: 0.8350, top5_acc: 0.9967, loss_cls: 0.4336, loss: 0.4336, grad_norm: 26.6179
2022-09-05 02:34:27,886 - mmaction - INFO - Saving checkpoint at 5 epochs
2022-09-05 02:34:36,493 - mmaction - INFO - Evaluating top_k_accuracy ...
2022-09-05 02:34:36,496 - mmaction - INFO - 
top1_acc	0.5037
top5_acc	0.7926
2022-09-05 02:34:36,496 - mmaction - INFO - Evaluating mean_class_accuracy ...
2022-09-05 02:34:36,499 - mmaction - INFO - 
mean_acc	0.5037
2022-09-05 02:53:35,276 - mmaction - INFO - Epoch(val) [40][14]	top1_acc: 0.7481, top5_acc: 0.8815, mean_class_accuracy: 0.7481
2022-09-05 02:53:55,093 - mmaction - INFO - Epoch [41][20/32]	lr: 2.500e-06, eta: 0:01:29, time: 0.991, data_time: 0.287, memory: 5749, top1_acc: 0.9917, top5_acc: 1.0000, loss_cls: 0.0225, loss: 0.0225, grad_norm: 5.3701
2022-09-05 02:54:21,693 - mmaction - INFO - Epoch [42][20/32]	lr: 2.500e-06, eta: 0:01:08, time: 0.895, data_time: 0.159, memory: 5749, top1_acc: 0.9917, top5_acc: 0.9983, loss_cls: 0.0387, loss: 0.0387, grad_norm: 6.0994
2022-09-05 02:54:48,302 - mmaction - INFO - Epoch [43][20/32]	lr: 2.500e-06, eta: 0:00:48, time: 0.894, data_time: 0.149, memory: 5749, top1_acc: 0.9900, top5_acc: 1.0000, loss_cls: 0.0200, loss: 0.0200, grad_norm: 4.4092
2022-09-05 02:55:19,084 - mmaction - INFO - Epoch [44][20/32]	lr: 2.500e-06, eta: 0:00:27, time: 1.069, data_time: 0.171, memory: 5749, top1_acc: 0.9900, top5_acc: 1.0000, loss_cls: 0.0293, loss: 0.0293, grad_norm: 5.7926
2022-09-05 02:56:12,333 - mmaction - INFO - Epoch [45][20/32]	lr: 2.500e-06, eta: 0:00:07, time: 1.683, data_time: 0.163, memory: 5749, top1_acc: 0.9967, top5_acc: 1.0000, loss_cls: 0.0157, loss: 0.0157, grad_norm: 3.6506
2022-09-05 02:56:27,762 - mmaction - INFO - Saving checkpoint at 45 epochs
2022-09-05 02:56:39,246 - mmaction - INFO - Evaluating top_k_accuracy ...
2022-09-05 02:56:39,248 - mmaction - INFO - 
top1_acc	0.7531
top5_acc	0.8568
2022-09-05 02:56:39,248 - mmaction - INFO - Evaluating mean_class_accuracy ...
2022-09-05 02:56:39,250 - mmaction - INFO - 
mean_acc	0.7531
2022-09-05 02:56:39,347 - mmaction - INFO - The previous best checkpoint /mmaction2/data/mymodel/checkpoints/best_top1_acc_epoch_30.pth was removed
2022-09-05 02:56:41,530 - mmaction - INFO - Now best checkpoint is saved as best_top1_acc_epoch_45.pth.
2022-09-05 02:56:41,530 - mmaction - INFO - Best top1_acc is 0.7531 at 45 epoch.
2022-09-05 02:56:41,531 - mmaction - INFO - Epoch(val) [45][14]	top1_acc: 0.7531, top5_acc: 0.8568, mean_class_accuracy: 0.7531

After 45 epochs we get the final top1_acc and related results.
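
Testing isn't shown above; with the standard dist_test.sh entry point it would look roughly like this (a sketch using the best checkpoint saved during the run):

bash tools/dist_test.sh \
    data/mymodel/c3d_sports1m_16x1x1_45e_sketch_rgb.py \
    data/mymodel/checkpoints/best_top1_acc_epoch_45.pth 2 \
    --eval top_k_accuracy mean_class_accuracy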

I may have missed some details; I'll fill them in as I find them.
