PyTorch multi-GPU training notes

1. When training on multiple GPUs, remember to set drop_last=True in the DataLoader. This discards the final incomplete batch; otherwise the last, smaller batch can trigger an error at the end of the epoch (for example, when it no longer splits evenly across the GPUs). See the sketch below:
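A minimal sketch of the loader; dataset_train and the config keys are taken from the original call:

from torch.utils.data import DataLoader

# drop_last=True discards the final incomplete batch, so every batch
# splits evenly across the GPUs used by DataParallel
train_loader = DataLoader(
    dataset=dataset_train,
    batch_size=config['TRAIN']['BATCH'],
    shuffle=config['TRAIN']['SHUFFLE'],
    num_workers=config['TRAIN']['WORKERS'],
    drop_last=True,
)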

2. If BatchNorm statistics are to be synchronized across GPUs, use torch.nn.SyncBatchNorm.convert_sync_batchnorm(net).to(device_ids[0]). Before that call, you must first initialize a process group with dist.init_process_group('gloo', init_method='file:///tmp/somefile', rank=0, world_size=1) and wrap the model with net = torch.nn.DataParallel(net, device_ids=device_ids). The full pattern looks like this:

import torch
import torch.distributed as dist

# SyncBatchNorm needs an initialized default process group; a single-process
# 'gloo' group backed by a file store is enough here
dist.init_process_group('gloo', init_method='file:///tmp/somefile', rank=0, world_size=1)
net = torch.nn.DataParallel(net, device_ids=device_ids)
if config["TRAIN"]["DATAPARALLEL"]["syncbatchnorm"]:
    # replace every BatchNorm layer in the model with SyncBatchNorm
    net = torch.nn.SyncBatchNorm.convert_sync_batchnorm(net).to(device_ids[0])
else:
    net = net.cuda(device=device_ids[0])
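
For reference, a self-contained sketch of the same pattern; the two-GPU device_ids, the toy model, and the hard-coded use_syncbn flag are illustrative assumptions standing in for the config above:

import torch
import torch.nn as nn
import torch.distributed as dist

device_ids = [0, 1]   # assumption: two visible GPUs
use_syncbn = True     # stands in for config["TRAIN"]["DATAPARALLEL"]["syncbatchnorm"]

# toy model containing a BatchNorm layer that convert_sync_batchnorm will replace
net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())

dist.init_process_group('gloo', init_method='file:///tmp/somefile', rank=0, world_size=1)
net = nn.DataParallel(net, device_ids=device_ids)
if use_syncbn:
    net = nn.SyncBatchNorm.convert_sync_batchnorm(net).to(device_ids[0])
else:
    net = net.cuda(device=device_ids[0])

out = net(torch.randn(4, 3, 32, 32))   # the batch of 4 is split across the two GPUs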
 
