This series is a set of notes on the Faster R-CNN source-code walkthrough videos by the Bilibili uploader 霹雳吧啦Wz, together with some of my own understanding (probably not much; it is mostly a record of how the data types are transformed along the way). The first step in studying the source code is simply to get the target code running.
Here is 霹雳吧啦Wz's GitHub link: https://github.com/WZMIAOMIAO/deep-learning-for-image-processing
All the code used in the course is in that repository, so you can download it yourself.
In the videos the author runs the MobileNet model; here we try running the ResNet-50 + FPN model instead.
create_model
This function is the part that defines the model.
Note that
backbone = resnet50_fpn_backbone()
automatically freezes part of the bottom-layer weights.
The code is as follows:
def create_model(num_classes):
    backbone = resnet50_fpn_backbone()
    # When training on your own dataset, do not change the 91 here;
    # change the num_classes argument that is passed in instead.
    model = FasterRCNN(backbone=backbone, num_classes=91)

    # Load pre-trained model weights
    # https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth
    weights_dict = torch.load("./backbone/fasterrcnn_resnet50_fpn_coco.pth")
    missing_keys, unexpected_keys = model.load_state_dict(weights_dict, strict=False)
    if len(missing_keys) != 0 or len(unexpected_keys) != 0:
        print("missing_keys: ", missing_keys)
        print("unexpected_keys: ", unexpected_keys)

    # get number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    return model
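To see the effect of that freezing, a quick check like the one below (my own snippet, not part of the training script; the import path is assumed from the repo layout) counts how many backbone parameters have requires_grad turned off:

# Quick check (not part of the training script): count the parameters that
# resnet50_fpn_backbone() left frozen vs. trainable.
from backbone import resnet50_fpn_backbone  # import path assumed from the repo layout

backbone = resnet50_fpn_backbone()
frozen = [n for n, p in backbone.named_parameters() if not p.requires_grad]
trainable = [n for n, p in backbone.named_parameters() if p.requires_grad]
print("frozen parameters:   ", len(frozen))
print("trainable parameters:", len(trainable))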
main
This is the main part of the training script. The main steps start with data_transform, the image preprocessing pipeline:

def main(parser_data):
    device = torch.device(parser_data.device if torch.cuda.is_available() else "cpu")
    print("Using {} device training.".format(device.type))

    data_transform = {
        "train": transforms.Compose([transforms.ToTensor(),
                                     transforms.RandomHorizontalFlip(0.5)]),
        "val": transforms.Compose([transforms.ToTensor()])
    }
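    # Note: "transforms" here is the repo's own transforms.py rather than
    # torchvision.transforms (my reading of the repo; the imports are not
    # shown above). Its ToTensor/RandomHorizontalFlip work on (image, target)
    # pairs, so a horizontal flip also mirrors the bounding boxes.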
    VOC_root = parser_data.data_path
    # check voc root
    if os.path.exists(os.path.join(VOC_root, "VOCdevkit")) is False:
        raise FileNotFoundError("VOCdevkit does not exist in path:'{}'.".format(VOC_root))

    # load train data set
    # VOCdevkit -> VOC2012 -> ImageSets -> Main -> train.txt
    train_data_set = VOC2012DataSet(VOC_root, data_transform["train"], "train.txt")

    # Note: collate_fn here is custom, because each sample consists of an image
    # and its targets, which cannot be batched with the default collate function.
    batch_size = parser_data.batch_size
    nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, 8])  # number of workers
    print('Using %g dataloader workers' % nw)
    train_data_loader = torch.utils.data.DataLoader(train_data_set,
                                                    batch_size=batch_size,
                                                    shuffle=True,
                                                    num_workers=nw,
                                                    collate_fn=train_data_set.collate_fn)
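    # A sketch of what the custom collate_fn typically looks like (assuming the
    # repo's VOC2012DataSet implementation, which is not shown in this file):
    #
    #     @staticmethod
    #     def collate_fn(batch):
    #         return tuple(zip(*batch))
    #
    # Instead of stacking tensors, it packs the images and the targets of a
    # batch into two tuples, since the images may have different sizes.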
    # load validation data set
    # VOCdevkit -> VOC2012 -> ImageSets -> Main -> val.txt
    val_data_set = VOC2012DataSet(VOC_root, data_transform["val"], "val.txt")
    val_data_set_loader = torch.utils.data.DataLoader(val_data_set,
                                                      batch_size=batch_size,
                                                      shuffle=False,
                                                      num_workers=nw,
                                                      collate_fn=train_data_set.collate_fn)
    # create model: num_classes equals background + 20 classes
    model = create_model(num_classes=21)
    # print(model)
    model.to(device)

    # define optimizer
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=0.005,
                                momentum=0.9, weight_decay=0.0005)

    # learning rate scheduler
    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                   step_size=5,
                                                   gamma=0.33)
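    # StepLR multiplies the learning rate by gamma every step_size epochs,
    # i.e. the lr is scaled by 0.33 every 5 calls to lr_scheduler.step().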
    # If the path of a checkpoint saved by a previous run is given,
    # resume training from that point.
    if parser_data.resume != "":
        checkpoint = torch.load(parser_data.resume, map_location=device)
        model.load_state_dict(checkpoint['model'])
        optimizer.load_state_dict(checkpoint['optimizer'])
        lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])
        parser_data.start_epoch = checkpoint['epoch'] + 1
        print("resuming training from epoch {}...".format(parser_data.start_epoch))
    train_loss = []
    learning_rate = []
    val_mAP = []

    for epoch in range(parser_data.start_epoch, parser_data.epochs):
        # train for one epoch, printing every 50 iterations
        utils.train_one_epoch(model, optimizer, train_data_loader,
                              device, epoch, train_loss=train_loss, train_lr=learning_rate,
                              print_freq=50, warmup=True)

        # update the learning rate
        lr_scheduler.step()

        # evaluate on the validation dataset
        utils.evaluate(model, val_data_set_loader, device=device, mAP_list=val_mAP)

        # save weights
        save_files = {
            'model': model.state_dict(),
            'optimizer': optimizer.state_dict(),
            'lr_scheduler': lr_scheduler.state_dict(),
            'epoch': epoch}
        torch.save(save_files, "./save_weights/resNetFpn-model-{}.pth".format(epoch))
    # plot loss and lr curve
    if len(train_loss) != 0 and len(learning_rate) != 0:
        from plot_curve import plot_loss_and_lr
        plot_loss_and_lr(train_loss, learning_rate)

    # plot mAP curve
    if len(val_mAP) != 0:
        from plot_curve import plot_map
        plot_map(val_mAP)

    # model.eval()
    # x = [torch.rand(3, 300, 400), torch.rand(3, 400, 400)]
    # predictions = model(x)
    # print(predictions)
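For context, the Namespace printed at the start of the log below shows the script's command-line arguments. Here is a minimal sketch of how they could be parsed with argparse (argument names and defaults are taken from that printed Namespace; the actual parser in the repo may differ in its details):

import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="faster rcnn (resnet50+fpn) training")
    # Defaults below mirror the Namespace(...) line in the log; they are assumptions.
    parser.add_argument('--device', default='cuda:0', help='device used for training')
    parser.add_argument('--data-path', default='./', help='directory containing VOCdevkit')
    parser.add_argument('--output-dir', default='./save_weights', help='where checkpoints are saved')
    parser.add_argument('--resume', default='', help='checkpoint to resume training from')
    parser.add_argument('--start-epoch', default=0, type=int, help='start epoch')
    parser.add_argument('--epochs', default=15, type=int, help='total number of training epochs')
    parser.add_argument('--batch-size', default=2, type=int, help='images per training batch')
    args = parser.parse_args()
    print(args)

    main(args)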
Due to time and hardware constraints I will not show the full results here; I only paste the training output for the first three logged batches.
C:\ProgramData\Anaconda3\python.exe E:/VSproject/faster_rcnn/train_res50_fpn.py
Namespace(batch_size=2, data_path='./', device='cuda:0', epochs=15, output_dir='./save_weights', resume='', start_epoch=0)
Using cuda device training.
Using 2 dataloader workers
Epoch: [0] [ 0/2859] eta: 10:03:48.620696 lr: 0.000010 loss: 4.5428 (4.5428) loss_classifier: 3.4154 (3.4154) loss_box_reg: 0.3164 (0.3164) loss_objectness: 0.7873 (0.7873) loss_rpn_box_reg: 0.0237 (0.0237) time: 12.6718 data: 6.5712 max mem: 1770
Epoch: [0] [ 50/2859] eta: 1:52:39.169986 lr: 0.000260 loss: 0.5238 (2.1267) loss_classifier: 0.3413 (1.6028) loss_box_reg: 0.1622 (0.2195) loss_objectness: 0.0559 (0.2783) loss_rpn_box_reg: 0.0131 (0.0261) time: 2.1854 data: 0.0041 max mem: 2399
Epoch: [0] [ 100/2859] eta: 1:46:39.056735 lr: 0.000509 loss: 0.6432 (1.4373) loss_classifier: 0.3237 (1.0013) loss_box_reg: 0.2254 (0.2355) loss_objectness: 0.0361 (0.1761) loss_rpn_box_reg: 0.0177 (0.0244) time: 2.2578 data: 0.0039 max mem: 2399