
Dive Into MindSpore – Distributed Training With GPU For Model Train

MindSpore易点通·精讲系列 – Model Training: Distributed Parallel Training on GPU

Development environment for this article:
- Ubuntu 20.04
- Python 3.8
- MindSpore 1.7.0
- OpenMPI 4.0.3
- RTX 1080Ti * 4

Contents:
- Basics
- Environment setup
- Single-GPU training
- Multi-GPU training with OpenMPI
- Multi-GPU training without OpenMPI
- Summary
- Problems encountered
- References

1. Basics

1.1 Concepts

In deep learning, as models and datasets keep growing, many scenarios require training on multiple GPUs within one machine or across machines, i.e. distributed training. By how they parallelize the work, distributed training strategies can be roughly divided into data parallelism and model parallelism.

Data parallelism: each GPU keeps a full copy of the model, different GPUs are fed different slices of the data and compute in parallel, and the results from all GPUs are then merged, which speeds up model training.

Model parallelism: in contrast to data parallelism, model parallelism splits the network itself across GPUs, with each GPU computing a different part of the model. It is usually adopted only when the model is so large that a single GPU's memory cannot hold the whole network.
1.2 Support in MindSpore

Section 1.1 introduced the parallelism strategies in theory. In the MindSpore framework, the following four parallel modes are currently supported:

- Data parallel: for networks whose parameters fit on a single card. The same network parameters are replicated on every card and each card is fed different training data. Suitable for most users.
- Semi-auto parallel: for networks that cannot be computed on a single card, when there are also strong performance requirements on the partitioning. The user sets this mode and manually specifies a sharding strategy for each operator to reach good training performance.
- Auto parallel: for networks that cannot be computed on a single card, when the user does not know how to configure operator strategies. In this mode MindSpore configures a strategy for each operator automatically. Suitable for users who want parallel training but do not know how to set it up.
- Hybrid parallel: the parallel training logic is designed and implemented entirely by the user, who can place communication operators such as AllGather in the network directly. Suitable for users familiar with parallel training.

For most users, data parallel is the mode that actually matters, so the examples below use data parallel mode.

2. Environment Setup

2.1 Installing MindSpore

Omitted here. See the author's earlier article "MindSpore入门–基于GPU服务器安装MindSpore 1.5.0", upgrading the MindSpore version in that article to 1.7.0.

2.2 Installing OpenMPI

On the GPU hardware platform, MindSpore uses OpenMPI's mpirun for distributed training, so we install OpenMPI first. This article installs version 4.0.3, with the following commands:

wget -c https://download.open-mpi.org...
tar xf openmpi-4.0.3.tar.gz
cd openmpi-4.0.3/
./configure --prefix=/usr/local/openmpi-4.0.3
make -j 16
sudo make install
echo 'export PATH=/usr/local/openmpi-4.0.3/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/openmpi-4.0.3/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
Verify the installation with mpirun --version; it should output:

mpirun (Open MPI) 4.0.3

Report bugs to http://www.open-mpi.org/commu...
2.3 Verifying the Environment

With the base environment installed, let's run a quick verification to see whether the setup succeeded. The verification code is as follows:

# nccl_allgather.py
import numpy as np
import mindspore.ops as ops
import mindspore.nn as nn
from mindspore import context, Tensor
from mindspore.communication import init, get_rank

class Net(nn.Cell):
    def __init__(self):
        super(Net, self).__init__()
        self.allgather = ops.AllGather()

    def construct(self, x):
        return self.allgather(x)

if __name__ == "__main__":
    context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
    init("nccl")
    value = get_rank()
    input_x = Tensor(np.array([[value]]).astype(np.float32))
    net = Net()
    output = net(input_x)
    print(output)

Save the code above to nccl_allgather.py and run the command below. Command notes: the number after -n is the number of GPUs to use; here all GPUs on the machine are used. If you do not want to use all of them, remember to set the corresponding environment variable.

mpirun -n 4 python3 nccl_allgather.py
The output is as follows:

[[0.]
[1.]
[2.]
[3.]]
[[0.]
[1.]
[2.]
[3.]]
[[0.]
[1.]
[2.]
[3.]]
[[0.]
[1.]
[2.]
[3.]]
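Conceptually, AllGather concatenates each rank's tensor along axis 0 and hands every rank the same result, which is why all four processes print the identical 4x1 matrix above. A minimal NumPy sketch of that behavior (an illustration only, not how NCCL actually implements it):

```python
import numpy as np

def simulate_allgather(per_rank_tensors):
    # AllGather semantics: concatenate every rank's input along axis 0,
    # then give each rank its own copy of the combined tensor.
    gathered = np.concatenate(per_rank_tensors, axis=0)
    return [gathered.copy() for _ in per_rank_tensors]

# Four "ranks", each contributing a 1x1 tensor holding its rank id,
# mirroring the nccl_allgather.py script above.
inputs = [np.array([[float(r)]], dtype=np.float32) for r in range(4)]
outputs = simulate_allgather(inputs)
print(outputs[0])  # every rank sees the same [[0.] [1.] [2.] [3.]]
```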
With that, the environment is set up and verified successfully.

3. Single-GPU Training

To enable later comparisons, we first train on a single GPU as a baseline.

3.1 Code

Notes on the code:
- The network is ResNet-50, which you can copy from the MindSpore Models repository (see the ResNet-50 code link).
- The dataset is Fruit-360; for a more detailed introduction see the author's earlier article "MindSpore易点通·精讲系列–数据集加载之ImageFolderDataset", which also contains the dataset download link.
- Remember to replace train_dataset_dir and test_dataset_dir in the code with your own directories.

The single-GPU training code is as follows:

import numpy as np

from mindspore import context
from mindspore import nn
from mindspore.common import dtype as mstype
from mindspore.common import set_seed
from mindspore.common import Tensor
from mindspore.communication import init, get_rank, get_group_size
from mindspore.dataset import ImageFolderDataset
from mindspore.dataset.transforms.c_transforms import Compose, TypeCast
from mindspore.dataset.vision.c_transforms import HWC2CHW, Normalize, RandomCrop, RandomHorizontalFlip, Resize
from mindspore.nn.loss import SoftmaxCrossEntropyWithLogits
from mindspore.nn.optim import Momentum
from mindspore.ops import operations as P
from mindspore.ops import functional as F
from mindspore.train import Model
from mindspore.train.callback import CheckpointConfig, ModelCheckpoint, LossMonitor
from scipy.stats import truncnorm

# resnet50 definition omitted; copy it from the MindSpore Models repository
# (see the ResNet-50 code link above)

def create_dataset(dataset_dir, mode="train", decode=True, batch_size=32, repeat_num=1):
    if mode == "train":
        shuffle = True
    else:
        shuffle = False

    dataset = ImageFolderDataset(
        dataset_dir=dataset_dir, shuffle=shuffle, decode=decode)

    mean = [127.5, 127.5, 127.5]
    std = [127.5, 127.5, 127.5]
    if mode == "train":
        transforms_list = Compose(
            [RandomCrop((32, 32), (4, 4, 4, 4)),
             RandomHorizontalFlip(),
             Resize((100, 100)),
             Normalize(mean, std),
             HWC2CHW()])
    else:
        transforms_list = Compose(
            [Resize((128, 128)),
             Normalize(mean, std),
             HWC2CHW()])

    cast_op = TypeCast(mstype.int32)

    dataset = dataset.map(operations=transforms_list, input_columns="image")
    dataset = dataset.map(operations=cast_op, input_columns="label")
    dataset = dataset.batch(batch_size=batch_size, drop_remainder=True)
    dataset = dataset.repeat(repeat_num)

    return dataset

def run_train():
    context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
    set_seed(0)

    train_dataset_dir = "/mnt/data_0002_24t/xingchaolong/dataset/Fruits_360/fruits-360_dataset/fruits-360/Training"
    test_dataset_dir = "/mnt/data_0002_24t/xingchaolong/dataset/Fruits_360/fruits-360_dataset/fruits-360/Test"
    batch_size = 32

    train_dataset = create_dataset(dataset_dir=train_dataset_dir, batch_size=batch_size)
    test_dataset = create_dataset(dataset_dir=test_dataset_dir, mode="test")
    train_batch_num = train_dataset.get_dataset_size()
    test_batch_num = test_dataset.get_dataset_size()
    print("train dataset batch num: {}".format(train_batch_num), flush=True)
    print("test dataset batch num: {}".format(test_batch_num), flush=True)

    # build model
    net = resnet50(class_num=131)
    loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
    optim = Momentum(params=net.trainable_params(), learning_rate=0.01, momentum=0.9, loss_scale=1024.0)
    model = Model(net, loss_fn=loss, optimizer=optim, metrics={"accuracy"})

    # CheckPoint CallBack definition
    config_ck = CheckpointConfig(save_checkpoint_steps=train_batch_num, keep_checkpoint_max=35)
    ckpoint_cb = ModelCheckpoint(prefix="fruit_360_renet50", directory="./ckpt/", config=config_ck)
    # LossMonitor is used to print loss value on screen
    loss_cb = LossMonitor()

    # model train
    model.train(10, train_dataset, callbacks=[ckpoint_cb, loss_cb], dataset_sink_mode=True)

    # model eval
    result = model.eval(test_dataset)
    print("eval result: {}".format(result), flush=True)

def main():
    run_train()


if __name__ == "__main__":
    main()

3.2 Training

Save the code to gpu_single_train.py and start training with the following commands:

export CUDA_VISIBLE_DEVICES=0
python3 gpu_single_train.py
The training output is as follows:

train dataset batch num: 2115
test dataset batch num: 709
epoch: 1 step: 2115, loss is 4.219570636749268
epoch: 2 step: 2115, loss is 3.7109947204589844
......
epoch: 9 step: 2115, loss is 2.66499400138855
epoch: 10 step: 2115, loss is 2.540522336959839
eval result: {'accuracy': 0.676348730606488}
Use the tree ckpt command to inspect the checkpoint directory; the output is:

ckpt/
├── fruit_360_renet50-10_2115.ckpt
├── fruit_360_renet50-1_2115.ckpt
......
├── fruit_360_renet50-9_2115.ckpt
└── fruit_360_renet50-graph.meta
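The files above follow ModelCheckpoint's `<prefix>-<epoch>_<step>.ckpt` naming. When resuming or evaluating, a small helper to pick the checkpoint from the latest epoch can be handy; the helper below is a hypothetical utility (the function name and file list are made up for illustration):

```python
import re

def latest_checkpoint(filenames):
    # Match the "<prefix>-<epoch>_<step>.ckpt" pattern used by ModelCheckpoint
    # and return the file with the highest epoch number (None if none match).
    pattern = re.compile(r"^(.+)-(\d+)_(\d+)\.ckpt$")
    best, best_epoch = None, -1
    for name in filenames:
        m = pattern.match(name)
        if m and int(m.group(2)) > best_epoch:
            best_epoch = int(m.group(2))
            best = name
    return best

files = ["fruit_360_renet50-1_2115.ckpt",
         "fruit_360_renet50-9_2115.ckpt",
         "fruit_360_renet50-10_2115.ckpt",
         "fruit_360_renet50-graph.meta"]
print(latest_checkpoint(files))  # fruit_360_renet50-10_2115.ckpt
```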

4. Multi-GPU Training with OpenMPI

Now we use a real example to show how to run distributed training with OpenMPI on the GPU platform.

4.1 Code

Notes on the code:
- For the first three points, see the code notes in section 3.1.
- The main changes for multi-GPU training are in dataset loading and context configuration.
- Dataset loading: num_shards and shard_id must be specified; see the code for details.
- Context configuration: this covers parameter consistency and the parallel mode. Parameter consistency is ensured with set_seed; the parallel mode is configured through the set_auto_parallel_context method and its parallel_mode parameter.

The multi-GPU training code is as follows:

import numpy as np

from mindspore import context
from mindspore import nn
from mindspore.common import dtype as mstype
from mindspore.common import set_seed
from mindspore.common import Tensor
from mindspore.communication import init, get_rank, get_group_size
from mindspore.dataset import ImageFolderDataset
from mindspore.dataset.transforms.c_transforms import Compose, TypeCast
from mindspore.dataset.vision.c_transforms import HWC2CHW, Normalize, RandomCrop, RandomHorizontalFlip, Resize
from mindspore.nn.loss import SoftmaxCrossEntropyWithLogits
from mindspore.nn.optim import Momentum
from mindspore.ops import operations as P
from mindspore.ops import functional as F
from mindspore.train import Model
from mindspore.train.callback import CheckpointConfig, ModelCheckpoint, LossMonitor
from scipy.stats import truncnorm

# resnet50 definition omitted; copy it from the MindSpore Models repository
# (see the ResNet-50 code link above)

def create_dataset(dataset_dir, mode="train", decode=True, batch_size=32, repeat_num=1):
    if mode == "train":
        shuffle = True
        rank_id = get_rank()
        rank_size = get_group_size()
    else:
        shuffle = False
        rank_id = None
        rank_size = None

    dataset = ImageFolderDataset(
        dataset_dir=dataset_dir, shuffle=shuffle, decode=decode, num_shards=rank_size, shard_id=rank_id)

    mean = [127.5, 127.5, 127.5]
    std = [127.5, 127.5, 127.5]
    if mode == "train":
        transforms_list = Compose(
            [RandomCrop((32, 32), (4, 4, 4, 4)),
             RandomHorizontalFlip(),
             Resize((100, 100)),
             Normalize(mean, std),
             HWC2CHW()])
    else:
        transforms_list = Compose(
            [Resize((128, 128)),
             Normalize(mean, std),
             HWC2CHW()])

    cast_op = TypeCast(mstype.int32)

    dataset = dataset.map(operations=transforms_list, input_columns="image")
    dataset = dataset.map(operations=cast_op, input_columns="label")
    dataset = dataset.batch(batch_size=batch_size, drop_remainder=True)
    dataset = dataset.repeat(repeat_num)

    return dataset

def run_train():
    context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
    init("nccl")
    rank_id = get_rank()
    rank_size = get_group_size()
    print("rank size: {}, rank id: {}".format(rank_size, rank_id), flush=True)
    set_seed(0)
    context.set_auto_parallel_context(
        device_num=rank_size, gradients_mean=True, parallel_mode=context.ParallelMode.DATA_PARALLEL)

    train_dataset_dir = "/mnt/data_0002_24t/xingchaolong/dataset/Fruits_360/fruits-360_dataset/fruits-360/Training"
    test_dataset_dir = "/mnt/data_0002_24t/xingchaolong/dataset/Fruits_360/fruits-360_dataset/fruits-360/Test"
    batch_size = 32

    train_dataset = create_dataset(dataset_dir=train_dataset_dir, batch_size=batch_size//rank_size)
    test_dataset = create_dataset(dataset_dir=test_dataset_dir, mode="test")
    train_batch_num = train_dataset.get_dataset_size()
    test_batch_num = test_dataset.get_dataset_size()
    print("train dataset batch num: {}".format(train_batch_num), flush=True)
    print("test dataset batch num: {}".format(test_batch_num), flush=True)

    # build model
    net = resnet50(class_num=131)
    loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
    optim = Momentum(params=net.trainable_params(), learning_rate=0.01, momentum=0.9, loss_scale=1024.0)
    model = Model(net, loss_fn=loss, optimizer=optim, metrics={"accuracy"})

    # CheckPoint CallBack definition
    config_ck = CheckpointConfig(save_checkpoint_steps=train_batch_num, keep_checkpoint_max=35)
    ckpoint_cb = ModelCheckpoint(prefix="fruit_360_renet50_{}".format(rank_id), directory="./ckpt/", config=config_ck)
    # LossMonitor is used to print loss value on screen
    loss_cb = LossMonitor()

    # model train
    model.train(10, train_dataset, callbacks=[ckpoint_cb, loss_cb], dataset_sink_mode=True)

    # model eval
    result = model.eval(test_dataset)
    print("eval result: {}".format(result), flush=True)

def main():
    run_train()


if __name__ == "__main__":
    main()
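In DATA_PARALLEL mode, each card computes gradients on its own data shard and an AllReduce combines them across cards; with gradients_mean=True the combined gradient is the mean over devices, so every card applies the same update. A plain-NumPy sketch of that averaging step (conceptual only, not MindSpore internals):

```python
import numpy as np

def allreduce_mean(per_device_grads):
    # AllReduce with averaging: sum the gradients from all devices,
    # divide by the device count, and give every device the same result.
    mean_grad = np.mean(per_device_grads, axis=0)
    return [mean_grad.copy() for _ in per_device_grads]

# Gradients of one weight from 4 devices (made-up numbers for illustration).
grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0]),
         np.array([5.0, 6.0]), np.array([7.0, 8.0])]
synced = allreduce_mean(grads)
print(synced[0])  # [4. 5.]
```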

4.2 Training

Now let's run multi-GPU training.

4.2.1 Training on 4 GPUs

Run 4-GPU training with the following commands:

export CUDA_VISIBLE_DEVICES=0,1,2,3
mpirun -n 4 python3 gpu_distributed_train.py
During training, the output is as follows:

rank size: 4, rank id: 0
rank size: 4, rank id: 1
rank size: 4, rank id: 2
rank size: 4, rank id: 3
train dataset batch num: 2115
test dataset batch num: 709
train dataset batch num: 2115
test dataset batch num: 709
train dataset batch num: 2115
test dataset batch num: 709
train dataset batch num: 2115
test dataset batch num: 709
[WARNING] PRE_ACT(294248,7fa67e831740,python3):2022-07-13-17:11:24.528.381 [mindspore/ccsrc/backend/common/pass/communication_op_fusion.cc:198] GetAllReduceSplitSegment] Split threshold is 0. AllReduce nodes will take default fusion strategy.
[WARNING] PRE_ACT(294245,7f57993a5740,python3):2022-07-13-17:11:26.176.114 [mindspore/ccsrc/backend/common/pass/communication_op_fusion.cc:198] GetAllReduceSplitSegment] Split threshold is 0. AllReduce nodes will take default fusion strategy.
[WARNING] PRE_ACT(294247,7f36f889b740,python3):2022-07-13-17:11:30.475.177 [mindspore/ccsrc/backend/common/pass/communication_op_fusion.cc:198] GetAllReduceSplitSegment] Split threshold is 0. AllReduce nodes will take default fusion strategy.
[WARNING] PRE_ACT(294246,7f5f1820c740,python3):2022-07-13-17:11:31.271.259 [mindspore/ccsrc/backend/common/pass/communication_op_fusion.cc:198] GetAllReduceSplitSegment] Split threshold is 0. AllReduce nodes will take default fusion strategy.
epoch: 1 step: 2115, loss is 4.536644458770752
epoch: 1 step: 2115, loss is 4.347061634063721
epoch: 1 step: 2115, loss is 4.557111740112305
epoch: 1 step: 2115, loss is 4.467658519744873
......
epoch: 10 step: 2115, loss is 3.263073205947876
epoch: 10 step: 2115, loss is 3.169656753540039
epoch: 10 step: 2115, loss is 3.2040905952453613
epoch: 10 step: 2115, loss is 3.812671184539795
eval result: {'accuracy': 0.48113540197461213}
eval result: {'accuracy': 0.5190409026798307}
eval result: {'accuracy': 0.4886283497884344}
eval result: {'accuracy': 0.5010578279266573}
Use the tree ckpt command to inspect the checkpoint directory; the output is:

ckpt/
├── fruit_360_renet50_0-10_2115.ckpt
├── fruit_360_renet50_0-1_2115.ckpt
├── fruit_360_renet50_0-2_2115.ckpt
├── fruit_360_renet50_0-3_2115.ckpt
├── fruit_360_renet50_0-4_2115.ckpt
├── fruit_360_renet50_0-5_2115.ckpt
├── fruit_360_renet50_0-6_2115.ckpt
├── fruit_360_renet50_0-7_2115.ckpt
├── fruit_360_renet50_0-8_2115.ckpt
├── fruit_360_renet50_0-9_2115.ckpt
├── fruit_360_renet50_0-graph.meta
......
├── fruit_360_renet50_3-10_2115.ckpt
├── fruit_360_renet50_3-1_2115.ckpt
├── fruit_360_renet50_3-2_2115.ckpt
├── fruit_360_renet50_3-3_2115.ckpt
├── fruit_360_renet50_3-4_2115.ckpt
├── fruit_360_renet50_3-5_2115.ckpt
├── fruit_360_renet50_3-6_2115.ckpt
├── fruit_360_renet50_3-7_2115.ckpt
├── fruit_360_renet50_3-8_2115.ckpt
├── fruit_360_renet50_3-9_2115.ckpt
└── fruit_360_renet50_3-graph.meta
4.2.2 Training on 2 GPUs

For comparison, we also run 2-GPU training. Note: to check generality, the GPUs are deliberately not selected in order. The commands are:

export CUDA_VISIBLE_DEVICES=2,3
mpirun -n 2 python3 gpu_distributed_train.py
During training, the output is as follows:

rank size: 2, rank id: 0
rank size: 2, rank id: 1
train dataset batch num: 2115
test dataset batch num: 709
train dataset batch num: 2115
test dataset batch num: 709
[WARNING] PRE_ACT(295459,7ff930118740,python3):2022-07-13-17:31:07.210.231 [mindspore/ccsrc/backend/common/pass/communication_op_fusion.cc:198] GetAllReduceSplitSegment] Split threshold is 0. AllReduce nodes will take default fusion strategy.
[WARNING] PRE_ACT(295460,7f5fed564740,python3):2022-07-13-17:31:07.649.536 [mindspore/ccsrc/backend/common/pass/communication_op_fusion.cc:198] GetAllReduceSplitSegment] Split threshold is 0. AllReduce nodes will take default fusion strategy.
epoch: 1 step: 2115, loss is 4.391518592834473
epoch: 1 step: 2115, loss is 4.337993621826172
......
epoch: 10 step: 2115, loss is 2.7631659507751465
epoch: 10 step: 2115, loss is 3.0124118328094482
eval result: {'accuracy': 0.6057827926657263}
eval result: {'accuracy': 0.6202397743300423}
Use the tree ckpt command to inspect the checkpoint directory; the output is:

ckpt/
├── fruit_360_renet50_0-10_2115.ckpt
├── fruit_360_renet50_0-1_2115.ckpt
├── fruit_360_renet50_0-2_2115.ckpt
├── fruit_360_renet50_0-3_2115.ckpt
├── fruit_360_renet50_0-4_2115.ckpt
├── fruit_360_renet50_0-5_2115.ckpt
├── fruit_360_renet50_0-6_2115.ckpt
├── fruit_360_renet50_0-7_2115.ckpt
├── fruit_360_renet50_0-8_2115.ckpt
├── fruit_360_renet50_0-9_2115.ckpt
├── fruit_360_renet50_0-graph.meta
├── fruit_360_renet50_1-10_2115.ckpt
├── fruit_360_renet50_1-1_2115.ckpt
├── fruit_360_renet50_1-2_2115.ckpt
├── fruit_360_renet50_1-3_2115.ckpt
├── fruit_360_renet50_1-4_2115.ckpt
├── fruit_360_renet50_1-5_2115.ckpt
├── fruit_360_renet50_1-6_2115.ckpt
├── fruit_360_renet50_1-7_2115.ckpt
├── fruit_360_renet50_1-8_2115.ckpt
├── fruit_360_renet50_1-9_2115.ckpt
└── fruit_360_renet50_1-graph.meta
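A side note relevant to the accuracy comparison below: in data-parallel training, BatchNorm computes its statistics on each card's shard only. A small NumPy sketch (illustration only, not MindSpore code) shows that per-shard statistics deviate from the full-batch statistics, and the estimate gets noisier as the per-card batch shrinks:

```python
import numpy as np

rng = np.random.default_rng(0)
full_batch = rng.normal(loc=2.0, scale=3.0, size=(32, 8))  # 32 samples, 8 features

# Statistics over the full batch (what a single card with batch_size=32 sees).
global_mean = full_batch.mean(axis=0)

# Split across 4 "devices": each card now estimates the mean from 8 samples.
shards = np.split(full_batch, 4, axis=0)
shard_means = [s.mean(axis=0) for s in shards]

# Each per-shard estimate fluctuates around the global mean.
max_dev = max(np.abs(m - global_mean).max() for m in shard_means)
print(max_dev > 0)  # True: per-shard statistics differ from the global ones
```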
4.2.3 Comparing the Runs

Combining these results with section 3.2, we can compare single-GPU, 4-GPU, and 2-GPU training. In the three cases the per-card batch_size was 32, 8, and 16 respectively, so the number of batches per epoch stays the same. You can also view this as a way to reach a larger effective batch size with multiple cards when a single GPU's memory cannot support it.

In actual training (10 epochs in every case), the single GPU gave the best accuracy, 2 GPUs came next, and 4 GPUs were worst. The cause is the BatchNorm2d operator used in the network: in the multi-card setup its statistics cannot be computed across cards, which leads to the accuracy gap. On GPU hardware, the author has not yet found a satisfactory solution.

5. Multi-GPU Training Without OpenMPI

Section 4 showed how to run multi-GPU training with OpenMPI; MindSpore also supports GPU multi-card training without it. The official description: for security and reliability requirements during training, MindSpore GPU also supports distributed training that does not depend on OpenMPI. In distributed training scenarios, OpenMPI synchronizes data on the host side and sets up the network between processes; MindSpore replaces this capability by reusing the Parameter Server training architecture.

However, the documentation and code samples for Parameter Server mode are rather thin. The author tried this approach, following the official documentation and the test cases on Gitee, but could not get the whole pipeline to run.

6. Summary

This article focused on OpenMPI-based multi-GPU training on GPU hardware. It also touched on the OpenMPI-free Parameter Server mode, but due to gaps in the official documentation and insufficient sample code, no working example could be put together.

7. Problems Encountered

The official documentation for Parameter Server mode skips too many steps, and the test cases omit intermediate code. It would help if the docs and samples for this part were completed.

8. References

- 深度学习中的分布式训练
- MindSpore分布式并行总览
- MindSpore分布式并行训练基础样例(GPU)
- MindSpore Parameter Server模式

This is an original article; all rights reserved by the author. Do not reproduce without permission!
