For work reasons I have recently been catching up on distributed training. After a round of theoretical study I still felt something was missing: many points never quite clicked (for example, what distributed primitives such as scatter and all_reduce actually look like at the code level, how the ring all-reduce algorithm is used during gradient synchronization, and how a parameter server performs partial parameter updates).
The blackboard in the office of Richard Feynman, the famous physicist and Nobel laureate, carried the words "What I cannot create, I do not understand." Programmers have a similar slogan: "show me the code." So I decided to write a series of articles on distributed training that turns these previously abstract concepts into code, with every example executable, verifiable, and reproducible, and to share the source code so we can learn from each other.
After some research I found that PyTorch offers a very good abstraction and a complete set of interfaces for distributed training, so this series uses PyTorch as the main framework. Many of the examples come from the PyTorch documentation and have been debugged and extended on top of it.
Finally, since there are already plenty of theoretical introductions to distributed training available online, theory will not be the focus of this series; the emphasis will be on the code.
This article gives a first taste of DistributedDataParallel (DDP) in PyTorch by implementing a single-machine, 2-GPU distributed training job for a simple linear model. It mainly follows the introduction in the PyTorch tutorial.
The code is put together in the following steps:
The distributed communication module in PyTorch is torch.distributed. In this example it is initialized as follows:
- MASTER_ADDR and MASTER_PORT: the IP address and port of rank 0. Rank 0 acts as the coordinating node, so every other node needs to know its address.
- NCCL_DEBUG: setting this environment variable to INFO makes NCCL print its debug output.
- init_process_group: performs the actual initialization of the communication module.
# set env info
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
os.environ['NCCL_DEBUG'] = "INFO"
# create default process group
dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
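As a quick sanity check (this snippet is my addition, not part of the tutorial code), once init_process_group returns, the collectives mentioned at the beginning, such as all_reduce, are ready to use. The sketch below assumes it is placed inside run_worker right after the initialization call:
# Hypothetical sanity check: each rank contributes a tensor holding its own rank id;
# after all_reduce(SUM) every rank sees the value 0 + 1 + ... + (world_size - 1).
t = torch.ones(1).to(rank) * rank
dist.all_reduce(t, op=dist.ReduceOp.SUM)
print(f"rank {rank}: all_reduce result = {t.item()}, world_size = {dist.get_world_size()}")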
The local model and the distributed model are created by the code below:
- nn.Linear(10, 10).to(rank): creates a linear model whose input size and output size are both 10, and copies it to a GPU (the GPU id is identified by rank).
- DDP(model, device_ids=[rank]): creates the distributed model. DDP replicates the local model onto every process, and the input data is split so that each local model trains on its own mini-batch.
# create local model
model = nn.Linear(10, 10).to(rank)
# construct DDP model
ddp_model = DDP(model, device_ids=[rank])
# define loss function and optimizer
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
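When the DDP wrapper is constructed, it broadcasts the parameters of the rank-0 replica to all other ranks, so every process starts training from identical weights. A minimal way to convince yourself of this (my addition, assumed to sit right after the DDP construction above) could be:
# Hypothetical check: compare this rank's weight against rank 0's copy.
with torch.no_grad():
    w = ddp_model.module.weight.detach().clone()
    w0 = w.clone()
    dist.broadcast(w0, src=0)  # w0 now holds rank 0's weight on every rank
    assert torch.allclose(w, w0), "parameters differ across ranks"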
The forward and backward passes must be run through ddp_model; only then does the computation actually happen in a distributed fashion:
# forward pass
outputs = ddp_model(torch.randn(20, 10).to(rank))
labels = torch.randn(20, 10).to(rank)
# backward pass
loss_fn(outputs, labels).backward()
# update parameters
optimizer.step()
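During backward(), DDP all-reduces the gradients across all ranks and averages them, which is why each replica then applies the same update in optimizer.step() and the replicas stay in sync. A small check of this behavior (again my own addition, assumed to be placed right after the backward call) might look like:
# Hypothetical check: after DDP's gradient all-reduce, every rank should hold
# the same averaged gradient on the weight.
g = ddp_model.module.weight.grad.detach().clone()
g0 = g.clone()
dist.broadcast(g0, src=0)  # fetch rank 0's gradient on every rank
print(f"rank {rank}: gradients in sync = {torch.allclose(g, g0)}")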
Finally, launch a distributed job made up of two processes:
- run_worker: the function executed by each subprocess; it is called as fn(i, *args), where i is the process index (0, 1, 2, ...) and *args are the arguments passed through spawn's args parameter.
- args: the arguments handed to each process.
- nprocs: the number of processes to start.
- join: whether to block until all subprocesses have finished.
def main():
worker_size = 2
mp.spawn(run_worker,
args=(worker_size,),
nprocs=worker_size,
join=True)
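The worker count is hard-coded to 2 here to match the two GPUs used in this example. A slightly more general variant (my own assumption, not part of the original code) derives it from the number of visible GPUs:
# Hypothetical variant of main(): spawn one worker per visible GPU.
def main():
    worker_size = torch.cuda.device_count()  # evaluates to 2 on the machine used below
    mp.spawn(run_worker,
             args=(worker_size,),
             nprocs=worker_size,
             join=True)
Putting everything together, the complete, runnable script is as follows: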
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP
def run_worker(rank, world_size):
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
os.environ['NCCL_DEBUG'] = "INFO"
# create default process group
dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
# create local model
model = nn.Linear(10, 10).to(rank)
# construct DDP model
ddp_model = DDP(model, device_ids=[rank])
# define loss function and optimizer
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
# forward pass
outputs = ddp_model(torch.randn(20, 10).to(rank))
labels = torch.randn(20, 10).to(rank)
# backward pass
loss_fn(outputs, labels).backward()
# update parameters
optimizer.step()
def main():
worker_size = 2
mp.spawn(run_worker,
args=(worker_size,),
nprocs=worker_size,
join=True)
if __name__=="__main__":
main()
Running the code gives the following output:
root@g48r13:/workspace/DDP# python linear-ddp.py
g48r13:350:350 [0] NCCL INFO Bootstrap : Using [0]bond0:11.139.84.88<0>
g48r13:350:350 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
g48r13:350:350 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
g48r13:350:350 [0] NCCL INFO NET/Socket : Using [0]bond0:11.139.84.88<0>
g48r13:350:350 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda10.2
g48r13:351:351 [1] NCCL INFO Bootstrap : Using [0]bond0:11.139.84.88<0>
g48r13:351:351 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
g48r13:351:351 [1] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
g48r13:351:351 [1] NCCL INFO NET/Socket : Using [0]bond0:11.139.84.88<0>
g48r13:351:351 [1] NCCL INFO Using network Socket
g48r13:350:366 [0] NCCL INFO Channel 00/02 : 0 1
g48r13:351:367 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
g48r13:350:366 [0] NCCL INFO Channel 01/02 : 0 1
g48r13:351:367 [1] NCCL INFO Trees [0] -1/-1/-1->1->0|0->1->-1/-1/-1 [1] -1/-1/-1->1->0|0->1->-1/-1/-1
g48r13:351:367 [1] NCCL INFO Setting affinity for GPU 1 to ffffffff,ffffffff
g48r13:350:366 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
g48r13:350:366 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->-1|-1->0->1/-1/-1
g48r13:350:366 [0] NCCL INFO Setting affinity for GPU 0 to ffffffff,ffffffff
g48r13:351:367 [1] NCCL INFO Channel 00 : 1[5000] -> 0[4000] via P2P/IPC
g48r13:350:366 [0] NCCL INFO Channel 00 : 0[4000] -> 1[5000] via P2P/IPC
g48r13:351:367 [1] NCCL INFO Channel 01 : 1[5000] -> 0[4000] via P2P/IPC
g48r13:350:366 [0] NCCL INFO Channel 01 : 0[4000] -> 1[5000] via P2P/IPC
g48r13:351:367 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
g48r13:351:367 [1] NCCL INFO comm 0x7fb0b4001060 rank 1 nranks 2 cudaDev 1 busId 5000 - Init COMPLETE
g48r13:350:366 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
g48r13:350:366 [0] NCCL INFO comm 0x7fc7a8001060 rank 0 nranks 2 cudaDev 0 busId 4000 - Init COMPLETE
g48r13:350:350 [0] NCCL INFO Launch mode Parallel