Concise Softmax Multi-Class Implementation (Gluon)

Import the required packages

import d2lzh as d2l
from mxnet import gluon, init, nd
from mxnet.gluon import data as gdata, loss as gloss, nn

Load and read the data

batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
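`load_data_fashion_mnist` returns two iterators that yield mini-batches of (image, label) pairs of size `batch_size`. As a rough illustration of the batching idea only (a pure-Python sketch, not d2lzh's actual implementation):

```python
# Illustrative sketch of mini-batch iteration (not d2lzh's code)
def batch_iter(data, labels, batch_size):
    """Yield consecutive (data, labels) chunks of size batch_size;
    the last chunk may be smaller."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size], labels[i:i + batch_size]

samples = list(range(10))           # stand-in for 10 images
targets = [s % 2 for s in samples]  # stand-in labels
batches = list(batch_iter(samples, targets, 4))
# 10 samples with batch_size=4 -> chunks of sizes 4, 4, 2
```

With 60,000 training images and `batch_size = 256`, one epoch therefore makes 235 passes through the update loop, with a smaller final batch.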

Define and initialize the model

net = nn.Sequential()
net.add(nn.Dense(10))  # a single fully connected output layer, one unit per class
net.initialize(init.Normal(sigma=0.01))  # weights drawn from N(0, 0.01^2)

Define the cross-entropy loss function

loss = gloss.SoftmaxCrossEntropyLoss()
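`SoftmaxCrossEntropyLoss` fuses the softmax and the cross-entropy into a single, numerically stable operation: it takes the raw scores (logits) and uses the log-sum-exp form, subtracting the maximum before exponentiating so that large scores do not overflow. A minimal pure-Python sketch of what it computes for one sample:

```python
import math

def softmax_cross_entropy(logits, label):
    """Cross-entropy of softmax(logits) against an integer class label,
    computed in the numerically stable log-sum-exp form."""
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[label]

# Uniform logits over 3 classes -> loss = log(3)
loss_val = softmax_cross_entropy([0.0, 0.0, 0.0], label=1)
```

Computing `exp` of each logit first and only then taking the log would overflow for a logit like 1000; the subtraction of `m` is what makes the fused version safe.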

Define the optimization algorithm

trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
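The `'sgd'` trainer applies the plain mini-batch SGD update: each parameter moves against its gradient, scaled by the learning rate and averaged over the batch (in Gluon, the division by the batch size happens when `trainer.step(batch_size)` is called inside the training loop). A sketch of the update rule:

```python
def sgd_step(params, grads, lr, batch_size):
    """Mini-batch SGD: p <- p - lr * g / batch_size,
    where g is the gradient summed over the batch."""
    for i, (p, g) in enumerate(zip(params, grads)):
        params[i] = p - lr * g / batch_size

params = [1.0, -2.0]
grads = [256.0, -512.0]  # gradients summed over a batch of 256
sgd_step(params, grads, lr=0.1, batch_size=256)
# params is now [0.9, -1.8]
```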

Train the model

num_epochs = 5
help(d2l.train_ch3)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,
              None, None, trainer)
Help on function train_ch3 in module d2lzh.utils:

train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, params=None, lr=None, trainer=None)
    Train and evaluate a model with CPU.

epoch 1, loss 0.7863, train acc 0.749, test acc 0.801
epoch 2, loss 0.5733, train acc 0.811, test acc 0.819
epoch 3, loss 0.5288, train acc 0.824, test acc 0.833
epoch 4, loss 0.5050, train acc 0.831, test acc 0.834
epoch 5, loss 0.4897, train acc 0.835, test acc 0.838
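The `train acc` and `test acc` columns are plain classification accuracy: the fraction of samples whose highest-scoring class matches the true label. A pure-Python sketch of this metric:

```python
def accuracy(scores, labels):
    """Fraction of rows whose argmax equals the corresponding label."""
    correct = sum(1 for row, y in zip(scores, labels)
                  if max(range(len(row)), key=row.__getitem__) == y)
    return correct / len(labels)

scores = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]]
labels = [1, 0, 0, 0]
acc = accuracy(scores, labels)  # 3 of 4 predictions correct -> 0.75
```

After five epochs the model reaches about 83.8% test accuracy, close to the from-scratch implementation with far less code.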

Show the predictions

for X, y in test_iter:  # grab the first batch of test images
    break

true_labels = d2l.get_fashion_mnist_labels(y.asnumpy())
pred_labels = d2l.get_fashion_mnist_labels(net(X).argmax(axis=1).asnumpy())
titles = [true + '\n' + pred for true, pred in zip(true_labels, pred_labels)]

d2l.show_fashion_mnist(X[0:9], titles[0:9])
(Figure: the first nine test images, each titled with its true label over the predicted label.)
