The New MindSpore 1.6 Release
MindSpore is a deep learning framework open-sourced by Huawei in 2020. Its goal is to enable simple development of AI applications, efficient execution for both training and inference, and deployment across all scenarios (device, edge, and cloud).
As of version 1.5.0, MindSpore already supported the mainstream Linux and Windows operating systems. To broaden the technical ecosystem, versions 1.6.0/1.6.1 add support for Mac (x86/M1), covering training and inference of typical networks such as LeNet, ResNet, CRNN, and TinyBERT.
How to Install MindSpore on Mac
Pre-installation Checks and Preparation
The software and hardware combinations currently supported by MindSpore on Mac are listed in the table below; macOS 10 and 11 are the primary targets. Because of limitations in Python's own support for the M1 chip, Macs with the M1 chip need a Python version of 3.9.1 or later (a small script for checking your chip, macOS version, and Python version follows the table).
| Chip | Architecture | macOS Version | Supported Python Versions |
| --- | --- | --- | --- |
| M1 | ARM | 11.3 | Python 3.9.1+ |
| Intel | x86 | 10.15 / 11.3 | Python 3.7.5 / 3.9.0 |
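Before installing, it helps to confirm which chip, macOS version, and Python version your machine is running. Below is a minimal check using only Python's standard library; it is not part of the original guide, just a convenience sketch:

import platform
import sys

# Architecture: 'arm64' on M1 Macs, 'x86_64' on Intel Macs
print(platform.machine())
# macOS version, e.g. '11.3' or '10.15.7'
print(platform.mac_ver()[0])
# Python version, which should match the table above
print(sys.version)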
macOS ships with a default Python installation; if you need a newer version, you can download one from the following links:
| Python Version | Download Link |
| --- | --- |
| 3.9.0 | https://url.cy/Ej0AI4 |
| 3.9.1 (for M1) | https://url.cy/YBrZA2 |
You can also use Conda to set up the Python environment. MiniConda3 is recommended; install the required Python version with the following commands:
conda create -n py39 -c conda-forge python=3.9.0
conda activate py39
or
conda create -n py391 -c conda-forge python=3.9.1
conda activate py391
Install MindSpore
Install MindSpore with pip:
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.6.1/MindSpore/cpu/x86_64/mindspore-1.6.1-cp39-cp39-macosx_10_15_x86_64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple  # x86 version
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.6.1/MindSpore/cpu/aarch64/mindspore-1.6.1-cp39-cp39-macosx_11_0_arm64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple  # M1 version
Or install MindSpore with Conda:
conda install mindspore-cpu=1.6.1 -c mindspore -c conda-forge
Run python -c "import mindspore;mindspore.run_check()" from the command line to check that the installation succeeded.
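Besides run_check, a small hand-written test can confirm that CPU operators actually execute. The following is a minimal sketch (not from the official guide) that adds two tensors on the CPU backend:

import numpy as np
from mindspore import Tensor, context
import mindspore.ops as ops

# The Mac packages are CPU builds, so run on the CPU backend
context.set_context(device_target="CPU")

x = Tensor(np.ones([2, 2]).astype(np.float32))
y = Tensor(np.ones([2, 2]).astype(np.float32))
# Element-wise addition; expect a 2x2 tensor filled with 2.0
print(ops.Add()(x, y))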
Running MindSpore on Mac
To verify MindSpore further, we can follow the beginner tutorial on the MindSpore website and complete the handwritten digit recognition task with the simple LeNet network.
1. Download the MNIST dataset:
Create a file named data.py with the following content and run python data.py.
import os
import requests

requests.packages.urllib3.disable_warnings()

def download_dataset(dataset_url, path):
    filename = dataset_url.split("/")[-1]
    save_path = os.path.join(path, filename)
    # Skip the download if the file is already present
    if os.path.exists(save_path):
        return
    if not os.path.exists(path):
        os.makedirs(path)
    # Stream the file to disk in 512-byte chunks
    res = requests.get(dataset_url, stream=True, verify=False)
    with open(save_path, "wb") as f:
        for chunk in res.iter_content(chunk_size=512):
            if chunk:
                f.write(chunk)
    print("The {} file is downloaded and saved in the path {} after processing".format(os.path.basename(dataset_url), path))

train_path = "datasets/MNIST_Data/train"
test_path = "datasets/MNIST_Data/test"

download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte", train_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte", train_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte", test_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte", test_path)
This script downloads the MNIST dataset and organizes it into the following directory structure:
./datasets/MNIST_Data
├── test
│ ├── t10k-images-idx3-ubyte
│ └── t10k-labels-idx1-ubyte
└── train
├── train-images-idx3-ubyte
└── train-labels-idx1-ubyte
2. Create train.py and build the data processing pipeline with MindSpore's dataset module (a quick sanity check of the pipeline follows the code):
import mindspore.dataset as ds
import mindspore.dataset.transforms.c_transforms as C
import mindspore.dataset.vision.c_transforms as CV
from mindspore.dataset.vision import Inter
from mindspore import dtype as mstype

def create_dataset(data_path, batch_size=32, repeat_size=1,
                   num_parallel_workers=1):
    # Define the dataset
    mnist_ds = ds.MnistDataset(data_path)
    resize_height, resize_width = 32, 32
    rescale = 1.0 / 255.0
    shift = 0.0
    rescale_nml = 1 / 0.3081
    shift_nml = -1 * 0.1307 / 0.3081
    # Define the map operations to apply
    resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR)
    rescale_nml_op = CV.Rescale(rescale_nml, shift_nml)
    rescale_op = CV.Rescale(rescale, shift)
    hwc2chw_op = CV.HWC2CHW()
    type_cast_op = C.TypeCast(mstype.int32)
    # Apply the operations to the dataset columns with map
    mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns="label", num_parallel_workers=num_parallel_workers)
    mnist_ds = mnist_ds.map(operations=[resize_op, rescale_op, rescale_nml_op, hwc2chw_op], input_columns="image", num_parallel_workers=num_parallel_workers)
    # Shuffle, batch and repeat
    buffer_size = 10000
    mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size)
    mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
    mnist_ds = mnist_ds.repeat(count=repeat_size)
    return mnist_ds
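As a quick sanity check of the pipeline, the sketch below (not part of the original tutorial) can be run temporarily at the end of train.py; it assumes the MNIST data from step 1 sits under ./datasets/MNIST_Data:

# Build the training pipeline and inspect a single batch
ds_train = create_dataset("./datasets/MNIST_Data/train")
batch = next(ds_train.create_dict_iterator())
print(batch["image"].shape)  # expected: (32, 1, 32, 32) after resize and HWC2CHW
print(batch["label"].shape)  # expected: (32,)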
3. Define the network with MindSpore's nn.Cell. Add the following code to lenet.py (a quick shape check follows the class definition):
import mindspore.nn as nn
from mindspore.common.initializer import Normal

class LeNet5(nn.Cell):
    """
    LeNet-5 network structure
    """
    def __init__(self, num_class=10, num_channel=1):
        super(LeNet5, self).__init__()
        # Define the operations used by the network
        self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
        self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
        self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02))
        self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02))
        self.fc3 = nn.Dense(84, num_class, weight_init=Normal(0.02))
        self.relu = nn.ReLU()
        self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
        self.flatten = nn.Flatten()

    def construct(self, x):
        # Build the forward pass from the operations defined above
        x = self.conv1(x)
        x = self.relu(x)
        x = self.max_pool2d(x)
        x = self.conv2(x)
        x = self.relu(x)
        x = self.max_pool2d(x)
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        x = self.relu(x)
        x = self.fc3(x)
        return x
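Before wiring the network into training, a quick shape check can confirm that a 1x1x32x32 input produces the expected 10-class output. This is a small sketch, not part of the original tutorial:

import numpy as np
from mindspore import Tensor
from lenet import LeNet5

net = LeNet5()
dummy = Tensor(np.zeros((1, 1, 32, 32), dtype=np.float32))
# Calling the Cell runs construct(); expect an output of shape (1, 10)
print(net(dummy).shape)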
4. In train.py, instantiate the network, then define the loss function and optimizer (a short illustration of the sparse label format follows the code):
import mindspore.nn as nn
from lenet import LeNet5

net = LeNet5()
# Define the loss function
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
# Define the optimizer
net_opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
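Note that sparse=True means the loss expects integer class indices rather than one-hot labels, which matches the TypeCast(mstype.int32) applied to the label column in create_dataset. The values below are made up purely for illustration:

import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
logits = Tensor(np.random.randn(4, 10).astype(np.float32))  # a dummy batch of 4 predictions
labels = Tensor(np.array([3, 1, 0, 7], dtype=np.int32))     # class indices, not one-hot
print(loss_fn(logits, labels))                              # a scalar mean loss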
5. Add the training code to train.py:
import os
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig
# Import the modules needed for training
from mindspore.nn import Accuracy
from mindspore.train.callback import LossMonitor
from mindspore import Model
from lenet import LeNet5

def train_net(model, epoch_size, data_path, repeat_size, ckpoint_cb, sink_mode):
    """Define the training method"""
    # Load the training dataset
    ds_train = create_dataset(os.path.join(data_path, "train"), 32, repeat_size)
    model.train(epoch_size, ds_train, callbacks=[ckpoint_cb, LossMonitor(125)], dataset_sink_mode=sink_mode)

# Set the checkpoint saving parameters
config_ck = CheckpointConfig(save_checkpoint_steps=1875, keep_checkpoint_max=10)
# Apply the checkpoint saving parameters
ckpoint = ModelCheckpoint(prefix="checkpoint_lenet", config=config_ck)
# Training settings: run a single epoch
train_epoch = 1
mnist_path = "./datasets/MNIST_Data"
dataset_size = 1
model = Model(net, net_loss, net_opt, metrics={"Accuracy": Accuracy()})
train_net(model, train_epoch, mnist_path, dataset_size, ckpoint, False)
6. Run python train.py to start training. As the log below shows, the loss converges to about 0.05 after the 1875 training steps. Training also writes checkpoint files into the working directory; a quick way to list them is sketched after the log.
python train.py
epoch: 1 step: 125, loss is 2.2945752143859863
epoch: 1 step: 250, loss is 2.2834312915802
epoch: 1 step: 375, loss is 2.286731004714966
epoch: 1 step: 500, loss is 2.2865426540374756
epoch: 1 step: 625, loss is 2.1827993392944336
epoch: 1 step: 750, loss is 0.6413211226463318
epoch: 1 step: 875, loss is 0.3101319372653961
epoch: 1 step: 1000, loss is 0.1193467304110527
epoch: 1 step: 1125, loss is 0.09959482401609421
epoch: 1 step: 1250, loss is 0.11662383377552032
epoch: 1 step: 1375, loss is 0.13491152226924896
epoch: 1 step: 1500, loss is 0.11873210221529007
epoch: 1 step: 1625, loss is 0.019252609461545944
epoch: 1 step: 1750, loss is 0.011969765648245811
epoch: 1 step: 1875, loss is 0.0546155609190464
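With save_checkpoint_steps=1875 and the prefix "checkpoint_lenet", training saves checkpoint files into the working directory; step 8 below loads checkpoint_lenet-1_1875.ckpt. The following small sketch (not in the original guide) confirms the files exist:

import glob

# List the checkpoints produced by ModelCheckpoint
print(sorted(glob.glob("checkpoint_lenet*.ckpt")))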
7. Evaluate the model accuracy by appending the following evaluation code to train.py:
def test_net(model, data_path):
    """Define the evaluation method"""
    # Load the test dataset and evaluate the trained model
    ds_eval = create_dataset(os.path.join(data_path, "test"))
    acc = model.eval(ds_eval, dataset_sink_mode=False)
    print("{}".format(acc))

test_net(model, mnist_path)
Run python train.py again and the evaluation result shows the model accuracy:
{'Accuracy': 0.9663461538461539}
8. Test the model against the test dataset. Create test.py with the following content:
import numpy as np
import os
import mindspore.dataset as ds
import mindspore.dataset.transforms.c_transforms as C
import mindspore.dataset.vision.c_transforms as CV
from mindspore.dataset.vision import Inter
from mindspore import dtype as mstype
from mindspore import load_checkpoint, load_param_into_net
from mindspore import Tensor
from mindspore import Model
from lenet import LeNet5

def create_dataset(data_path, batch_size=32, repeat_size=1,
                   num_parallel_workers=1):
    # Define the dataset
    mnist_ds = ds.MnistDataset(data_path)
    resize_height, resize_width = 32, 32
    rescale = 1.0 / 255.0
    shift = 0.0
    rescale_nml = 1 / 0.3081
    shift_nml = -1 * 0.1307 / 0.3081
    # Define the map operations to apply
    resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR)
    rescale_nml_op = CV.Rescale(rescale_nml, shift_nml)
    rescale_op = CV.Rescale(rescale, shift)
    hwc2chw_op = CV.HWC2CHW()
    type_cast_op = C.TypeCast(mstype.int32)
    # Apply the operations to the dataset columns with map
    mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns="label", num_parallel_workers=num_parallel_workers)
    mnist_ds = mnist_ds.map(operations=[resize_op, rescale_op, rescale_nml_op, hwc2chw_op], input_columns="image", num_parallel_workers=num_parallel_workers)
    # Shuffle, batch and repeat
    buffer_size = 10000
    mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size)
    mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
    mnist_ds = mnist_ds.repeat(count=repeat_size)
    return mnist_ds

# Instantiate the network
net = LeNet5()
# Load the checkpoint saved during training
param_dict = load_checkpoint("checkpoint_lenet-1_1875.ckpt")
# Load the parameters into the network
load_param_into_net(net, param_dict)
model = Model(net)

mnist_path = "./datasets/MNIST_Data"
# Build the test dataset with batch_size=1 so that a single image is taken
ds_test = create_dataset(os.path.join(mnist_path, "test"), batch_size=1).create_dict_iterator()
data = next(ds_test)

# images is the test image, labels is its ground-truth class
images = data["image"].asnumpy()
labels = data["label"].asnumpy()

# Use model.predict to predict the class of the image
output = model.predict(Tensor(data['image']))
predicted = np.argmax(output.asnumpy(), axis=1)

# Print the predicted class and the actual class
print(f'Predicted: "{predicted[0]}", Actual: "{labels[0]}"')
Run python test.py and you will see output like:
Predicted: "8", Actual: "8"
Here the code picked an image labeled "8" from the test dataset, and the model also predicted "8", matching the actual label.
MindSpore will keep expanding the range of supported systems and hardware this year to give developers a better experience. Please keep following MindSpore and share your feedback on using it; your input drives our improvements!
Official MindSpore Resources
Official QQ group: 486831414
Website: https://www.mindspore.cn/
Gitee: https://gitee.com/mindspore/mindspore
GitHub: https://github.com/mindspore-ai/mindspore
Forum: https://bbs.huaweicloud.com/forum/forum-1076-1.html