Slurm distributed training + job submission

Create the distributed process group + distributed sampling

    if hparams.multi_gpu:
        logger.info('-------------  Distributed training -----------------')
        # initialize the default process group (NCCL backend for GPU training)
        torch.distributed.init_process_group(backend='nccl')
        # on a single node the global rank equals the local rank; for multi-node jobs
        # read the local rank from the --local_rank argument / LOCAL_RANK env var instead
        local_rank = torch.distributed.get_rank()
        torch.cuda.set_device(local_rank)
        device = torch.device("cuda", local_rank)  # the GPU owned by the current process
        nprocs = torch.cuda.device_count()

        # distributed sampling: each process only sees its own shard of the dataset
        train_sampler = torch.utils.data.distributed.DistributedSampler(train_data, shuffle=True)
        valid_sampler = torch.utils.data.distributed.DistributedSampler(valid_data, shuffle=False)
        test_sampler = torch.utils.data.distributed.DistributedSampler(test_data, shuffle=False)

        train_loader = DataLoader(train_data, batch_size=hparams.batch_size, collate_fn=collate, sampler=train_sampler)
        valid_loader = DataLoader(valid_data, batch_size=hparams.batch_size, collate_fn=collate, sampler=valid_sampler)
        test_loader = DataLoader(test_data, batch_size=hparams.batch_size, collate_fn=collate, sampler=test_sampler)
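
One detail that is easy to miss with DistributedSampler(shuffle=True): the sampler's epoch has to be advanced at the start of every epoch, otherwise each epoch reuses the same shuffle order. A minimal sketch, assuming the training loop iterates over hparams.epochs:

    for epoch in range(hparams.epochs):
        if hparams.multi_gpu:
            train_sampler.set_epoch(epoch)  # re-seed the shuffle for this epoch
        for batch in train_loader:
            ...  # forward / backward as usual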

Wrap the model with DistributedDataParallel

    # the model must already be on this process's GPU (model.to(device)) before wrapping
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank], output_device=local_rank)

Because the model is now wrapped, accessing its sub-modules directly (e.g. model.fc) raises an AttributeError, so unwrap it first:

if isinstance(model, torch.nn.DataParallel) or isinstance(model, torch.nn.parallel.DistributedDataParallel):
    model = model.module  # unwrap to get the underlying network
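
After unwrapping, sub-modules such as model.fc are reachable again. A typical place where this matters is checkpoint saving, since the state_dict of the unwrapped module has keys without the 'module.' prefix. A minimal sketch (the file name is illustrative):

    hidden_layer = model.fc                           # no longer raises AttributeError
    torch.save(model.state_dict(), 'checkpoint.pt')   # illustrative path; keys carry no 'module.' prefix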

Merging the loss, gradients, accuracy, etc. Because each GPU loads a different shard of the data, the loss, accuracy, and so on computed on each process differ, so they have to be merged across processes:

import torch
import torch.distributed as dist


def average_gradients(model):
    """Gradient averaging: all-reduce (sum) every gradient, then divide by the world size."""
    size = float(dist.get_world_size())
    for param in model.parameters():
        if param.grad is None:
            continue
        dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
        param.grad.data /= size


def reduce_mean(tensor, nprocs):
    """All-reduce a metric (scalar or tensor) and average it over the nprocs processes."""
    rt = torch.as_tensor(tensor, device=device).clone()  # device is the per-process GPU set above
    dist.all_reduce(rt, op=dist.ReduceOp.SUM)  # all_reduce only sums across processes
    rt /= nprocs                               # divide to turn the sum into a mean
    return rt

Using them in the training loop:

        total_loss += loss.item()
        loss.backward()
        # multiple GPUs: gradients are averaged across processes after backward()
        # (DDP itself already averages them during backward; this mirrors the helper above)
        if hparams.multi_gpu:
            average_gradients(model)
        optimizer.step()
        scheduler.step()
        if hparams.multi_gpu:
            acc = reduce_mean(acc, nprocs)  # average the accuracy across processes
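
The same reduce_mean helper can be applied to the loss at the end of the epoch, so the value each rank logs reflects all of the data rather than only its local shard. A minimal sketch, assuming total_loss is accumulated as above (the epoch_loss name is illustrative):

    epoch_loss = total_loss / len(train_loader)               # mean loss over this process's batches
    if hparams.multi_gpu:
        epoch_loss = reduce_mean(epoch_loss, nprocs).item()   # average across all processes
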
  • Slurm job submission: submit a *.sh file, or type the command directly in an interactive bash session. After the job is submitted, seeing the print output once per process means the distributed setup is working correctly.
# Distributed-DataParallel (Multi-GPUs)
env CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 \
 train.py \
 --dataset Subj \
 --epochs 50 \
 --learning_rate 0.0005 \
 --batch_size 128 \
 --multi_gpu
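
To submit the same command as a Slurm batch job, it can be wrapped in an sbatch script; the job name, log file, and GPU count below are illustrative and need to be adapted to the cluster:

#!/bin/bash
# illustrative Slurm directives: one node, one launcher task, two GPUs
#SBATCH --job-name=ddp-train
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:2
#SBATCH --output=train_%j.log

env CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 \
 train.py \
 --dataset Subj \
 --epochs 50 \
 --learning_rate 0.0005 \
 --batch_size 128 \
 --multi_gpu

The script is then submitted with sbatch train.sh (the file name is illustrative), and the output file will contain one copy of each print per process.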
