Faster R-CNN Source Code Walkthrough (Part 1): Running the Code

This series is a set of notes on the Faster R-CNN source code walkthrough videos by the Bilibili uploader 霹雳吧啦Wz, plus a bit of my own understanding (probably not much; it is mostly a record of how the data types are transformed along the way). The first step in studying the source code is simply getting the target code to run.
Here is 霹雳吧啦Wz's GitHub link: https://github.com/WZMIAOMIAO/deep-learning-for-image-processing
All of the code used in the course is in that repository and can be downloaded directly.

Table of Contents

  • Faster R-CNN Source Code Walkthrough (Part 1): Running the Code
  • Environment Setup
  • File Structure
  • Running the Code
    • 1. `create_model`
    • 2. `main`
    • Training Results


Environment Setup

  • Python 3.6 or 3.7
  • PyTorch 1.6 (note: it must be 1.6.0 or later, because the officially provided mixed-precision
    training is only supported from 1.6.0 on; a quick version check is sketched after this list)
  • pycocotools (Linux: pip install pycocotools; Windows: pip install
    pycocotools-windows, which does not require installing Visual Studio)
  • Ubuntu or CentOS (Windows is not recommended)
  • Training on a GPU is strongly recommended
  • See the requirements.txt file in the original author's GitHub repository for the full environment details
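
A quick environment sanity check can be run from Python. The snippet below is a minimal sketch of my own (not from the repository) that verifies the PyTorch version, GPU availability, and that the native mixed-precision API introduced in 1.6.0 is importable:

import torch

print(torch.__version__)          # should be 1.6.0 or newer
print(torch.cuda.is_available())  # True if a usable GPU is detected

# torch.cuda.amp.autocast only exists from PyTorch 1.6.0 on;
# an ImportError here means the installed version is too old
from torch.cuda.amp import autocast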

File Structure

  • ├── backbone: feature extraction network; choose one according to your needs
  • ├── network_files: the Faster R-CNN network (including the Fast R-CNN and RPN modules)
  • ├── train_utils: training and validation utilities (including cocotools)
  • ├── my_dataset.py: custom dataset for reading the VOC dataset
  • ├── train_mobilenet.py: training with MobileNetV2 as the backbone
  • ├── train_res50_fpn.py: training with ResNet-50 + FPN as the backbone
  • ├── train_multi_GPU.py: for users training with multiple GPUs
  • ├── predict.py: a simple inference script that runs prediction with trained weights
  • ├── validation.py: computes the COCO metrics of the validation/test data with trained weights and generates a record_mAP.txt file
  • └── pascal_voc_classes.json: Pascal VOC label file

Running the Code

In the video the author runs the MobileNet model; here we will try running the ResNet-50 + FPN model instead.

1. create_model

This function is where the model is defined.

One thing to note here:
backbone = resnet50_fpn_backbone() automatically freezes some of the bottom-layer weights.
The code is as follows:

# required imports (module paths follow the repository layout listed above)
import torch
from backbone import resnet50_fpn_backbone
from network_files import FasterRCNN, FastRCNNPredictor


def create_model(num_classes):
    backbone = resnet50_fpn_backbone()
    # when training on your own dataset, do not change the 91 here;
    # change the num_classes argument passed in instead
    model = FasterRCNN(backbone=backbone, num_classes=91)
    # load the pre-trained model weights
    # https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth
    weights_dict = torch.load("./backbone/fasterrcnn_resnet50_fpn_coco.pth")
    missing_keys, unexpected_keys = model.load_state_dict(weights_dict, strict=False)
    if len(missing_keys) != 0 or len(unexpected_keys) != 0:
        print("missing_keys: ", missing_keys)
        print("unexpected_keys: ", unexpected_keys)

    # get number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    return model
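
To confirm that resnet50_fpn_backbone() really freezes part of the bottom layers, a quick check like the following can be run. This is my own sketch; it assumes the model exposes the backbone as model.backbone, as torchvision-style GeneralizedRCNN models do:

model = create_model(num_classes=21)
frozen = sum(1 for p in model.backbone.parameters() if not p.requires_grad)
trainable = sum(1 for p in model.backbone.parameters() if p.requires_grad)
print("backbone parameter tensors: {} frozen, {} trainable".format(frozen, trainable))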

2. main

This is the main body of the training.
The main steps are:

  1. Determine whether this run trains on the GPU or the CPU
  2. Define data_transform, the image preprocessing pipelines
  3. Get the path to the VOC dataset
  4. Build the datasets
  5. Load the datasets (training set and validation set) with DataLoaders
  6. Instantiate the model to be trained
  7. Define the optimizer
  8. Define the learning rate and its decay schedule
  9. Train for each epoch (and save the weights after each epoch)
  10. Plot the loss and learning rate curves
  11. Plot the mAP curve

The code is as follows:

def main(parser_data):
    device = torch.device(parser_data.device if torch.cuda.is_available() else "cpu")
    print("Using {} device training.".format(device.type))

    data_transform = {
        "train": transforms.Compose([transforms.ToTensor(),
                                     transforms.RandomHorizontalFlip(0.5)]),
        "val": transforms.Compose([transforms.ToTensor()])
    }

    VOC_root = parser_data.data_path
    # check voc root
    if os.path.exists(os.path.join(VOC_root, "VOCdevkit")) is False:
        raise FileNotFoundError("VOCdevkit does not exist in path:'{}'.".format(VOC_root))

    # load train data set
    # VOCdevkit -> VOC2012 -> ImageSets -> Main -> train.txt
    train_data_set = VOC2012DataSet(VOC_root, data_transform["train"], "train.txt")

    # note: the collate_fn here is custom, because each sample contains an image and its targets,
    # so the default batching cannot be used directly (a sketch of this collate_fn is given after main())
    batch_size = parser_data.batch_size
    nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, 8])  # number of workers
    print('Using %g dataloader workers' % nw)
    train_data_loader = torch.utils.data.DataLoader(train_data_set,
                                                    batch_size=batch_size,
                                                    shuffle=True,
                                                    num_workers=nw,
                                                    collate_fn=train_data_set.collate_fn)

    # load validation data set
    # VOCdevkit -> VOC2012 -> ImageSets -> Main -> val.txt
    val_data_set = VOC2012DataSet(VOC_root, data_transform["val"], "val.txt")
    val_data_set_loader = torch.utils.data.DataLoader(val_data_set,
                                                      batch_size=batch_size,
                                                      shuffle=False,
                                                      num_workers=nw,
                                                      collate_fn=train_data_set.collate_fn)

    # create the model; num_classes equals background + 20 object classes
    model = create_model(num_classes=21)
    # print(model)

    model.to(device)

    # define optimizer
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=0.005,
                                momentum=0.9, weight_decay=0.0005)

    # learning rate scheduler
    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                   step_size=5,
                                                   gamma=0.33)

    # if a checkpoint saved from a previous run is specified, resume training from it
    if parser_data.resume != "":
        checkpoint = torch.load(parser_data.resume, map_location=device)
        model.load_state_dict(checkpoint['model'])
        optimizer.load_state_dict(checkpoint['optimizer'])
        lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])
        parser_data.start_epoch = checkpoint['epoch'] + 1
        print("the training process from epoch{}...".format(parser_data.start_epoch))

    train_loss = []
    learning_rate = []
    val_mAP = []

    for epoch in range(parser_data.start_epoch, parser_data.epochs):
        # train for one epoch, printing every 10 iterations
        utils.train_one_epoch(model, optimizer, train_data_loader,
                              device, epoch, train_loss=train_loss, train_lr=learning_rate,
                              print_freq=50, warmup=True)
        # update the learning rate
        lr_scheduler.step()

        # evaluate on the test dataset
        utils.evaluate(model, val_data_set_loader, device=device, mAP_list=val_mAP)

        # save weights
        save_files = {
            'model': model.state_dict(),
            'optimizer': optimizer.state_dict(),
            'lr_scheduler': lr_scheduler.state_dict(),
            'epoch': epoch}
        torch.save(save_files, "./save_weights/resNetFpn-model-{}.pth".format(epoch))

    # plot loss and lr curve
    if len(train_loss) != 0 and len(learning_rate) != 0:
        from plot_curve import plot_loss_and_lr
        plot_loss_and_lr(train_loss, learning_rate)

    # plot mAP curve
    if len(val_mAP) != 0:
        from plot_curve import plot_map
        plot_map(val_mAP)

    # model.eval()
    # x = [torch.rand(3, 300, 400), torch.rand(3, 400, 400)]
    # predictions = model(x)
    # print(predictions)
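
The custom collate_fn passed to both DataLoaders above is defined as a method of the dataset class. The sketch below follows the torchvision detection reference implementation; the actual version in my_dataset.py may differ slightly. It simply regroups a list of (image, target) pairs into two tuples instead of trying to stack images of different sizes:

@staticmethod
def collate_fn(batch):
    # batch: list of (image, target) pairs returned by __getitem__
    # zip(*batch) regroups them into (tuple_of_images, tuple_of_targets)
    return tuple(zip(*batch))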

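For completeness, the training script is normally launched through an argparse entry point. The sketch below is reconstructed from the Namespace printed in the training log further down (batch_size, data_path, device, epochs, output_dir, resume, start_epoch); the actual parser in the repository may define additional arguments:

import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Faster R-CNN training (sketch)")
    parser.add_argument('--device', default='cuda:0', help='training device')
    parser.add_argument('--data-path', default='./', help='directory that contains VOCdevkit')
    parser.add_argument('--batch-size', default=2, type=int, help='images per batch')
    parser.add_argument('--start-epoch', default=0, type=int, help='epoch to start/resume from')
    parser.add_argument('--epochs', default=15, type=int, help='total number of training epochs')
    parser.add_argument('--resume', default='', help='checkpoint to resume from, if any')
    parser.add_argument('--output-dir', default='./save_weights', help='directory for saved weights')
    args = parser.parse_args()
    print(args)

    main(args)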

Training Results

Due to time and hardware constraints, full results are not shown here; only the first few training log lines are pasted below.

C:\ProgramData\Anaconda3\python.exe E:/VSproject/faster_rcnn/train_res50_fpn.py
Namespace(batch_size=2, data_path='./', device='cuda:0', epochs=15, output_dir='./save_weights', resume='', start_epoch=0)
Using cuda device training.
Using 2 dataloader workers
Epoch: [0]  [   0/2859]  eta: 10:03:48.620696  lr: 0.000010  loss: 4.5428 (4.5428)  loss_classifier: 3.4154 (3.4154)  loss_box_reg: 0.3164 (0.3164)  loss_objectness: 0.7873 (0.7873)  loss_rpn_box_reg: 0.0237 (0.0237)  time: 12.6718  data: 6.5712  max mem: 1770
Epoch: [0]  [  50/2859]  eta: 1:52:39.169986  lr: 0.000260  loss: 0.5238 (2.1267)  loss_classifier: 0.3413 (1.6028)  loss_box_reg: 0.1622 (0.2195)  loss_objectness: 0.0559 (0.2783)  loss_rpn_box_reg: 0.0131 (0.0261)  time: 2.1854  data: 0.0041  max mem: 2399
Epoch: [0]  [ 100/2859]  eta: 1:46:39.056735  lr: 0.000509  loss: 0.6432 (1.4373)  loss_classifier: 0.3237 (1.0013)  loss_box_reg: 0.2254 (0.2355)  loss_objectness: 0.0361 (0.1761)  loss_rpn_box_reg: 0.0177 (0.0244)  time: 2.2578  data: 0.0039  max mem: 2399
