While working through PyTorch I ran into a few difficulties, which I record here. The notes focus on the example code from the official tutorial, explaining some of the basic Python knowledge and the details of the PyTorch APIs it uses.
This example shows how to use transfer learning in PyTorch to train a ResNet model that classifies ants and bees.
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import torchvision
from torchvision import datasets, models, transforms
import time
import os
import copy

# Data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}

data_dir = 'data/hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                          data_transforms[x])
                  for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                              shuffle=True, num_workers=4)
               for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
1. transforms provides the commonly used image transformations, and Compose chains several of them together. class torchvision.transforms.Compose(transforms) combines multiple transforms; its argument is a list of transform instances. torchvision.transforms.RandomResizedCrop crops a random region of the image and resizes it to the given size. Note that RandomResizedCrop, transforms.RandomHorizontalFlip() and the like operate on a PIL Image, i.e. on image data loaded with Python's PIL library.
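As a minimal sketch (assuming a local image file sample.jpg, which is purely illustrative), the training pipeline above can be applied to a single PIL image like this:

from PIL import Image
from torchvision import transforms

img = Image.open('sample.jpg').convert('RGB')   # hypothetical local file

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

out = train_tf(img)
print(out.shape, out.dtype)   # torch.Size([3, 224, 224]) torch.float32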
2. torchvision.transforms.ToTensor
Converts a PIL image or a numpy.ndarray to a Tensor.
More precisely, a PIL image or numpy.ndarray of shape HxWxC with values in 0-255 becomes a torch.FloatTensor of shape CxHxW with values in 0.0-1.0.
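A small sketch of that conversion on a dummy uint8 array:

import numpy as np
from torchvision import transforms

arr = np.random.randint(0, 256, size=(100, 200, 3), dtype=np.uint8)  # HxWxC, 0-255
t = transforms.ToTensor()(arr)
print(t.shape)                      # torch.Size([3, 100, 200]) -> CxHxW
print(t.dtype, t.min(), t.max())    # torch.float32, values now lie in [0.0, 1.0]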
3. data_dir = 'data/hymenoptera_data' is the directory where the dataset is stored. The data are loaded through the datasets.ImageFolder interface, and os.path.join() joins the path components. Because the dataset has a train split and a val split, for x in ['train', 'val'] is used to build both. image_datasets is therefore a dict holding two ImageFolder datasets, image_datasets['train'] and image_datasets['val']; indexing either dataset returns an (image, label) tuple.
4. datasets.ImageFolder returns a Dataset that yields one sample at a time, which cannot be fed to the model directly; torch.utils.data.DataLoader wraps it so that shuffled mini-batches of tensors can be drawn from it.
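A minimal sketch of the ImageFolder/DataLoader pipeline, assuming the hymenoptera_data folder from the tutorial has already been downloaded and unpacked under data/ (the ants/ and bees/ subfolders provide the class labels):

import os
import torch
from torchvision import datasets, transforms

data_dir = 'data/hymenoptera_data'
val_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder(os.path.join(data_dir, 'train'), val_tf)
print(train_set.classes)        # ['ants', 'bees'] - one class per subfolder
print(train_set.class_to_idx)   # {'ants': 0, 'bees': 1}
img, label = train_set[0]       # each item is an (image, label) tuple
print(img.shape, label)         # torch.Size([3, 224, 224]) 0

loader = torch.utils.data.DataLoader(train_set, batch_size=4, shuffle=True)
inputs, labels = next(iter(loader))
print(inputs.shape, labels)     # torch.Size([4, 3, 224, 224]) and 4 class indices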
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                scheduler.step()
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                # track history if only in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)

            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_acc))

            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model
The main operations here are scheduling the learning rate and saving the best model.
First, how the scheduler is used.
Specifying a learning rate per layer (multiple parameter groups); below, two parameter groups are set up:
optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)
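A small sketch (with a hypothetical toy model that has base and classifier submodules, since resnet18 has neither) showing that each group keeps its own learning rate inside optimizer.param_groups:

import torch.nn as nn
import torch.optim as optim

class Toy(nn.Module):                       # hypothetical stand-in model
    def __init__(self):
        super().__init__()
        self.base = nn.Linear(10, 10)
        self.classifier = nn.Linear(10, 2)

model = Toy()
optimizer = optim.SGD([
    {'params': model.base.parameters()},                   # falls back to lr=1e-2
    {'params': model.classifier.parameters(), 'lr': 1e-3}  # overrides the default
], lr=1e-2, momentum=0.9)

print([g['lr'] for g in optimizer.param_groups])           # [0.01, 0.001]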
How can the learning rate of these parameter groups be adjusted further during training? That is what torch.optim.lr_scheduler is for: it provides several methods to adjust the learning rate based on the number of epochs. (Some optimization algorithms also come with their own learning-rate decay parameter lr_decay.)
torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1)
Here optimizer is the wrapped optimizer; step_size (int) is the period of the learning-rate decay, i.e. how many epochs pass between decays; gamma is the multiplicative factor of the decay, 0.1 by default; and last_epoch=-1 means the initial learning rate is set to lr.
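A minimal sketch (with a single dummy parameter) of how StepLR with step_size=7 and gamma=0.1 decays the learning rate:

import torch
from torch import optim
from torch.optim import lr_scheduler

param = torch.zeros(1, requires_grad=True)              # dummy parameter
optimizer = optim.SGD([param], lr=0.001, momentum=0.9)
scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)

for epoch in range(15):
    optimizer.step()       # would normally follow loss.backward()
    scheduler.step()       # lr stays at 0.001 for 7 epochs, then 1e-4, then 1e-5

print(optimizer.param_groups[0]['lr'])                   # 1e-05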
best_model_wts = copy.deepcopy(model.state_dict()) first makes a deep copy of the current model parameters; whenever a better model appears during the iterations, this copy is replaced.
scheduler.step() advances the learning-rate schedule during training; the scheduler itself is constructed further below as exp_lr_scheduler.
if phase == 'val' and epoch_acc > best_acc: keeps the model whenever a better one is found during validation.
best_model_wts = copy.deepcopy(model.state_dict()) deep-copies the parameters of that better model.
model.load_state_dict(best_model_wts)  # load the best model parameters at the end
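Why the deepcopy matters: model.state_dict() holds references to the live parameter tensors, so without the copy the saved "best" weights would keep changing as training goes on. A minimal sketch with a small nn.Linear standing in for the real network:

import copy
import torch.nn as nn

model = nn.Linear(4, 2)                        # stand-in for the real network
snapshot = copy.deepcopy(model.state_dict())   # frozen copy of the current weights
reference = model.state_dict()                 # only references the live tensors

nn.init.zeros_(model.weight)                   # "training" changes the weights...
print(reference['weight'].sum().item())        # 0.0 - follows the live model
print(snapshot['weight'].sum().item())         # unchanged - the deep copy is safe

model.load_state_dict(snapshot)                # restore the best weights at the end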
Load a pretrained model and reset the final fully connected layer. Note that here the parameters of all layers are fine-tuned.
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft is the ResNet with pretrained weights; num_ftrs = model_ft.fc.in_features is the input dimension of the final fully connected layer, which is 512 here.
model_ft.fc = nn.Linear(num_ftrs, 2)  # replace the final fully connected layer, changing it from (512, 1000) to (512, 2), because the original network was trained on the 1000-class ImageNet dataset.
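A quick sketch that checks the new head with a dummy batch (the pretrained=True flag follows the original tutorial and downloads the weights on first use):

import torch
import torch.nn as nn
from torchvision import models

model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features        # 512 for resnet18
model_ft.fc = nn.Linear(num_ftrs, 2)

x = torch.randn(4, 3, 224, 224)           # dummy batch of 4 RGB images
print(model_ft(x).shape)                  # torch.Size([4, 2]) - one logit per class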
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                       num_epochs=25)
On a CPU this takes roughly 15-25 minutes; on a GPU it takes less than a minute.
Now freeze all the convolutional parameters and train only the fully connected layer.
model_conv = torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad = False
# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)
model_conv = model_conv.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that only parameters of final layer are being optimized as
# opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)
param.requires_grad = False  # disable gradient computation for all the pretrained parameters
Note that parameters of a newly constructed module default to requires_grad=True!
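A small sketch that verifies only the new fc layer stays trainable after the freeze:

import torch.nn as nn
from torchvision import models

model_conv = models.resnet18(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad = False                          # freeze the pretrained backbone

model_conv.fc = nn.Linear(model_conv.fc.in_features, 2)  # new module: requires_grad=True

trainable = [n for n, p in model_conv.named_parameters() if p.requires_grad]
print(trainable)                                         # ['fc.weight', 'fc.bias']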
The training result is even a little better than fine-tuning the whole network.