Distributed model training in PyTorch

I'm just collecting and reposting here!

I've gathered some articles on distributed model training in PyTorch that I came across and found well written, so I can look them up later, hehe.

An introductory tutorial on distributed training in PyTorch

Single-machine, multi-GPU training

Distributed training with torch

Single-machine multi-GPU distributed training with torch.distributed

What is PyTorch's torch.distributed.launch command actually doing?

Understanding the environment variables related to torch.distributed.launch (a small sketch of reading them follows this list)

torch single-machine multi-GPU training
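
The two launch-related links above largely come down to the environment variables that the launcher exports to each worker process. Here is a minimal sketch of reading them, assuming the script is started with torchrun (or torch.distributed.launch with --use_env), which set these variables; the file name check_env.py is just a placeholder:

# check_env.py
import os

import torch
import torch.distributed as dist

# These variables are exported by torchrun / torch.distributed.launch --use_env.
local_rank = int(os.environ['LOCAL_RANK'])   # GPU index on this machine
rank = int(os.environ['RANK'])               # global rank of this process
world_size = int(os.environ['WORLD_SIZE'])   # total number of processes
print(f'rank={rank} local_rank={local_rank} world_size={world_size} '
      f'master={os.environ["MASTER_ADDR"]}:{os.environ["MASTER_PORT"]}')

# With init_method='env://', init_process_group reads MASTER_ADDR, MASTER_PORT,
# RANK and WORLD_SIZE from the environment instead of taking them as arguments.
torch.cuda.set_device(local_rank)
dist.init_process_group(backend='nccl', init_method='env://')

Launched, for example, as torchrun --nproc_per_node=4 check_env.py, this starts 4 processes on one machine, one per GPU.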

One last thing: I don't have any GPUs myself!

After adding multiprocessing, the parallel-training part mainly comes down to the following code snippet:

# main.py
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.optim as optim


def main_worker(proc, nprocs, args):
    # proc is the index mp.spawn assigns to this process; on a single machine
    # it serves as both the global rank and the local GPU id.
    dist.init_process_group(backend='nccl', init_method='tcp://127.0.0.1:23456',
                            world_size=nprocs, rank=proc)
    torch.cuda.set_device(proc)

    train_dataset = ...
    # DistributedSampler gives each process a disjoint shard of the dataset.
    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)

    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=...,
                                               sampler=train_sampler)

    model = ...
    model = model.cuda(proc)
    # DDP keeps the replicas in sync by all-reducing gradients during backward().
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[proc])

    criterion = ...
    optimizer = optim.SGD(model.parameters(), lr=...)

    for epoch in range(100):
        # Reshuffle the shards each epoch, otherwise every epoch sees the same split.
        train_sampler.set_epoch(epoch)
        for batch_idx, (images, target) in enumerate(train_loader):
            images = images.cuda(non_blocking=True)
            target = target.cuda(non_blocking=True)
            ...
            output = model(images)
            loss = criterion(output, target)
            ...
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


if __name__ == '__main__':
    # Spawn one training process per GPU; myargs is the parsed command-line arguments.
    mp.spawn(main_worker, nprocs=4, args=(4, myargs))

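Because mp.spawn creates the worker processes itself, the script above is started as a plain python main.py rather than through a launcher; nprocs=4 assumes a machine with 4 GPUs, and the 4 in args=(4, myargs) must match it. Note also that the batch_size given to the DataLoader is the per-process (per-GPU) batch size, so the effective global batch size is nprocs times larger.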