【PyTorch】Differences in saving a model between multi-GPU training and single-GPU training

Test environment: Python 3.6 + PyTorch 0.4

Use case: train on multiple GPUs, test on a single GPU

#----- Train on multiple GPUs, test on a single GPU -----
import os
import torch
from collections import OrderedDict

def load_network(network):
    # `name` and `opt.which_epoch` come from the surrounding script
    save_path = os.path.join('./model', name, 'net_%s.pth' % opt.which_epoch)
    state_dict = torch.load(save_path)
    # build a new OrderedDict whose keys no longer carry the `module.` prefix
    new_state_dict = OrderedDict()
    for k, v in state_dict.items():
        namekey = k[7:]  # strip the leading `module.` (7 characters)
        new_state_dict[namekey] = v
    # load the cleaned-up parameters
    network.load_state_dict(new_state_dict)
    return network
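An alternative to stripping the prefix is to wrap the test network in nn.DataParallel as well, so that the `module.`-prefixed keys in the checkpoint match directly. A minimal sketch, assuming `network` and `save_path` as above and at least one visible GPU:

import torch
import torch.nn as nn

# re-wrap the test network so the checkpoint keys line up;
# DataParallel also works with a single visible GPU
network = nn.DataParallel(network)
network.load_state_dict(torch.load(save_path))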
#----- Train on a single GPU, test on a single GPU -----
def load_network(network):
    save_path = os.path.join('./model', name, 'net_%s.pth' % opt.which_epoch)
    # the keys already match, so the state dict loads directly
    network.load_state_dict(torch.load(save_path))
    return network
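If the checkpoint was saved on a GPU but testing happens on a machine without one, torch.load needs a map_location. A small sketch under that assumption, using the same remapping lambda that appears in the combined function below:

# remap GPU-saved tensors to the CPU at load time
state_dict = torch.load(save_path, map_location=lambda storage, loc: storage)
network.load_state_dict(state_dict)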

The function below covers both directions: a model saved from single-GPU training that will be trained on multiple GPUs, and a model saved from multi-GPU training that will be trained on a single GPU.

from collections import OrderedDict
import torch

def load_pretrainedmodel(modelname, model_):
    # `cuda` is a flag defined in the surrounding script;
    # the checkpoint stores the whole model under the "model" key
    pre_model = torch.load(modelname, map_location=lambda storage, loc: storage)["model"]
    if cuda:
        pre_state_dict = pre_model.state_dict()
        state_dict = OrderedDict()
        for k in pre_state_dict:
            name = k
            if not name.startswith('module.') and torch.cuda.device_count() > 1:
                # loaded model is single-GPU but we will train it on multiple GPUs
                name = 'module.' + name  # add the `module.` prefix
            elif name.startswith('module.') and torch.cuda.device_count() == 1:
                # loaded model is multi-GPU but we will train it on a single GPU
                name = k[7:]  # remove the `module.` prefix
            state_dict[name] = pre_state_dict[k]
        model_.load_state_dict(state_dict)
    else:
        model_ = pre_model
    return model_
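A hypothetical call site, assuming the checkpoint was written as torch.save({"model": net}, "checkpoint.pth") (matching the "model" key the function reads) and a LeNet class as in the examples below:

cuda = torch.cuda.is_available()
model = LeNet()
model = load_pretrainedmodel("checkpoint.pth", model)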

In PyTorch, training a network on multiple GPUs requires wrapping it with nn.DataParallel:

import torch as t
import torch.nn as nn

gpu_ids = [0, 1, 2, 3]
device = t.device("cuda:0" if t.cuda.is_available() else "cpu")  # by itself this targets a single GPU
net = LeNet()
if len(gpu_ids) > 1:
    net = nn.DataParallel(net, device_ids=gpu_ids)
net = net.to(device)
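During training, each forward pass through the wrapped net splits the input batch along dimension 0 across gpu_ids and gathers the outputs back on the primary device. A minimal training-step sketch, assuming a train_loader, criterion, and optimizer already exist:

for inputs, labels in train_loader:
    inputs, labels = inputs.to(device), labels.to(device)
    optimizer.zero_grad()
    outputs = net(inputs)  # scattered across gpu_ids, gathered on cuda:0
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()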

Training on a single GPU, by contrast:

device = t.device("cuda:0" if t.cuda.is_available() else "cpu")  # targets a single GPU only
net = LeNet().to(device)

Because multi-GPU training wraps the network with nn.DataParallel(net, device_ids=gpu_ids), an extra module level is inserted above the original network. The resulting structure looks like this:

DataParallel(
  (module): LeNet(
    (conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
    (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
    (fc1): Linear(in_features=400, out_features=120, bias=True)
    (fc2): Linear(in_features=120, out_features=84, bias=True)
    (fc3): Linear(in_features=84, out_features=10, bias=True)
  )
)

Whereas the network trained without multiple GPUs has this structure:

LeNet(
  (conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
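The same prefix shows up in the state_dict keys, which is exactly why the loading code above has to add or strip `module.`. For example:

print(list(net.state_dict().keys())[0])
# DataParallel-wrapped: module.conv1.weight
# plain LeNet:          conv1.weight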

Since multiple GPUs are not needed at test time, the module wrapper should be stripped when saving the model, like this:

if len(gpu_ids) > 1:
    # unwrap DataParallel so the saved keys carry no `module.` prefix
    t.save(net.module.state_dict(), "model.pth")
else:
    t.save(net.state_dict(), "model.pth")
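Saved this way, the checkpoint loads straight into an unwrapped network at test time, with no key surgery needed:

test_net = LeNet()
test_net.load_state_dict(t.load("model.pth"))
test_net = test_net.to(device)
test_net.eval()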

 
