PyTorch Multi-GPU Training

The code below sketches how to train a PyTorch model across multiple GPUs. Since it initializes an NCCL process group and shards the data with a DistributedSampler, the matching model wrapper is DistributedDataParallel (one process per GPU):

import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

torch.distributed.init_process_group(backend='nccl')  # initialize the process group for distributed training
n_gpu = torch.cuda.device_count()  # count the available GPUs
model = torch.nn.parallel.DistributedDataParallel(model)  # wrap the (already CUDA-resident) model; one process drives one GPU
dataset = TensorDataset(data)
sampler = DistributedSampler(dataset)  # shard the dataset so each process trains on a distinct subset
loader = DataLoader(dataset, sampler=sampler)
for (batch,) in loader:
    loss = model(batch)  # the model here is assumed to return a per-sample loss
    loss = loss.mean()   # reduce to a scalar before calling backward()
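
For a complete picture, here is a minimal, self-contained sketch of the same DistributedDataParallel recipe as a runnable script. The toy linear model, the random tensors standing in for data, and the script name train.py are illustrative assumptions, not part of the recipe itself:

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    dist.init_process_group(backend='nccl')     # one process per GPU
    local_rank = int(os.environ['LOCAL_RANK'])  # set by torchrun
    torch.cuda.set_device(local_rank)

    # Toy data and model for illustration; replace with your own.
    x = torch.randn(1024, 10)
    y = torch.randn(1024, 1)
    dataset = TensorDataset(x, y)
    sampler = DistributedSampler(dataset)       # each rank sees a distinct shard
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    model = torch.nn.Linear(10, 1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = torch.nn.MSELoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)                # reshuffle the shards each epoch
        for xb, yb in loader:
            xb, yb = xb.cuda(local_rank), yb.cuda(local_rank)
            optimizer.zero_grad()
            loss = criterion(model(xb), yb)
            loss.backward()                     # DDP averages gradients across ranks here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == '__main__':
    main()

Launch one process per GPU with torchrun, e.g. torchrun --nproc_per_node=4 train.py. Because DDP averages gradients across processes during backward(), no manual averaging of the loss across GPUs is needed in this setup.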
