Dive into Deep Learning (1): Logistic Regression (gluon)

Author: Tyan
Blog: noahsnail.com | CSDN

Note: this post is my notes for Mu Li's Dive into Deep Learning course!

# Import mxnet
import mxnet as mx

# Set the random seed for reproducibility
mx.random.seed(2)

from mxnet import gluon
from mxnet import ndarray as nd
from mxnet import autograd

Helper functions

from utils import load_data_fashion_mnist, accuracy, evaluate_accuracy
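
These helpers come from the course's utils.py, which is not shown here. For reference, a minimal sketch of what they might look like (a hypothetical reimplementation; the actual file may differ):

def load_data_fashion_mnist(batch_size):
    # Download Fashion-MNIST and wrap it in DataLoaders
    def transform(data, label):
        # Scale pixels to [0, 1] and cast to the dtypes the net expects
        return data.astype('float32') / 255, label.astype('float32')
    train_set = gluon.data.vision.FashionMNIST(train=True, transform=transform)
    test_set = gluon.data.vision.FashionMNIST(train=False, transform=transform)
    train_data = gluon.data.DataLoader(train_set, batch_size, shuffle=True)
    test_data = gluon.data.DataLoader(test_set, batch_size, shuffle=False)
    return train_data, test_data

def accuracy(output, label):
    # Fraction of samples whose highest-scoring class matches the label
    return nd.mean(output.argmax(axis=1) == label).asscalar()

def evaluate_accuracy(data_iterator, net):
    # Average accuracy of net over all batches of an iterator
    acc = 0.0
    for data, label in data_iterator:
        acc += accuracy(net(data), label)
    return acc / len(data_iterator)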

Load and read the data

# Batch size
batch_size = 256

# Load the training and test data
train_data, test_data = load_data_fashion_mnist(batch_size)
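
A quick sanity check (illustrative, not part of the original notes) to confirm the loader's batch shapes:

# Peek at one mini-batch; with the transform above, each image is 28x28x1
for data, label in train_data:
    print(data.shape, label.shape)  # e.g. (256, 28, 28, 1) (256,)
    break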

Define and initialize the model

# Define an empty sequential model
net = gluon.nn.Sequential()

# name_scope makes the parameter names easier to manage
with net.name_scope():
    # Add a flatten layer, which reshapes each image into a vector,
    # turning a batch into a (batch_size, 784) matrix
    net.add(gluon.nn.Flatten())
    # Add a fully connected layer with 10 outputs, one per class
    net.add(gluon.nn.Dense(10))

# Initialize the parameters
net.initialize()
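
Note that Gluon defers the actual parameter allocation until the input shape is known from the first forward pass. An illustrative check (assuming 28x28x1 inputs):

x = nd.zeros((1, 28, 28, 1))  # dummy input with the assumed image shape
net(x)                        # first forward pass triggers shape inference
print(net.collect_params())   # the Dense weight should now be (10, 784)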

Softmax and cross-entropy loss

# Define the softmax cross-entropy loss
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
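
SoftmaxCrossEntropyLoss fuses the softmax with the cross-entropy so the computation stays numerically stable in log space. Conceptually it computes the same quantity as this naive version (a sketch for intuition only, not for training):

def naive_softmax_cross_entropy(output, label):
    probs = nd.softmax(output)      # turn raw scores into probabilities
    picked = nd.pick(probs, label)  # probability assigned to the true class
    return -nd.log(picked)          # per-sample cross-entropy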

Optimization

# Define the trainer: SGD with learning rate 0.1
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
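
Passing batch_size to trainer.step later on makes the trainer divide the accumulated gradients by the batch size, so each update uses the average gradient over the batch. Roughly, one step is equivalent to this manual update (an illustrative sketch of plain SGD, not Gluon's internals):

def manual_sgd_step(params, learning_rate, batch_size):
    # Average the summed batch gradient and take one SGD step per parameter
    for param in params.values():
        param.data()[:] = param.data() - learning_rate * param.grad() / batch_size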

Training

# Number of training epochs
epochs = 5

# Training loop
for epoch in range(epochs):
    # Cumulative training loss over the epoch
    train_loss = 0.0
    # Cumulative training accuracy over the epoch
    train_acc = 0.0
    # Iterate over mini-batches
    for data, label in train_data:
        # Record the computation for automatic differentiation
        with autograd.record():
            # Forward pass
            output = net(data)
            # Compute the loss
            loss = softmax_cross_entropy(output, label)
        # Backward pass to compute the gradients
        loss.backward()
        # Update the parameters (gradients are averaged over the batch)
        trainer.step(batch_size)
        # Accumulate the mean batch loss
        train_loss += nd.mean(loss).asscalar()
        # Accumulate the batch accuracy
        train_acc += accuracy(output, label)

    # Accuracy on the test set
    test_acc = evaluate_accuracy(test_data, net)

    print("Epoch %d. Loss: %f, Train acc %f, Test acc %f" % (
        epoch, train_loss / len(train_data), train_acc / len(train_data), test_acc))
Epoch 0. Loss: 0.793821, Train acc 0.744107, Test acc 0.786659
Epoch 1. Loss: 0.575076, Train acc 0.809879, Test acc 0.820112
Epoch 2. Loss: 0.530560, Train acc 0.822583, Test acc 0.831731
Epoch 3. Loss: 0.506161, Train acc 0.829728, Test acc 0.835837
Epoch 4. Loss: 0.488752, Train acc 0.834769, Test acc 0.834135
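
After training, the model can be used for prediction. An illustrative example on one test batch (the ten Fashion-MNIST classes are t-shirt, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot):

# Predict class indices (0-9) for the first ten images of a test batch
for data, label in test_data:
    predictions = net(data).argmax(axis=1)
    print('predicted:', predictions[:10].asnumpy())
    print('actual:   ', label[:10].asnumpy())
    break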
