Concise Implementation of Softmax Regression in TensorFlow 2

xiaoyao · Dive into Deep Learning (动手学深度学习) · TensorFlow 2

import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
2.1.0

1. Obtaining and Reading the Data

We use the Fashion-MNIST dataset and the batch size set in the previous section.

fashion_mnist = keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

Preprocess the data by normalizing the pixel values to the range [0, 1], which makes training easier:

x_train = x_train / 255.0
x_test = x_test / 255.0
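
If you want to confirm what load_data() returned before training, a quick sanity check is sketched below (the shapes in the comments are the standard Fashion-MNIST splits; this snippet is not part of the original notebook):

print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)
print(y_train.min(), y_train.max())  # labels are integers in 0..9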

As mentioned in the "softmax regression" section, the output layer of softmax regression is a fully connected layer, so we add a fully connected layer with 10 outputs.
The first layer is a Flatten layer, which unrolls each 28 × 28 image into a vector of shape (784,).
The second layer is a Dense layer; since this is a multi-class classification problem, it uses the softmax activation function.

2. Defining and Initializing the Model

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])
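
To verify the layer shapes and parameter count, model.summary() prints the architecture. The Flatten layer has no parameters; the Dense layer has 784 × 10 weights plus 10 biases, i.e. 7,850 trainable parameters in total:

model.summary()
# Flatten: output shape (None, 784), 0 parameters
# Dense:   output shape (None, 10),  784 * 10 + 10 = 7850 parameters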

3. Softmax and the Cross-Entropy Loss Function

TensorFlow 2's Keras API lets us specify the loss function through a loss argument. Its built-in implementation has better numerical stability than composing softmax and cross-entropy by hand.

loss = 'sparse_categorical_crossentropy'
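
An even more numerically robust variant, which the Keras losses also support, keeps the output layer linear and lets the loss apply softmax internally via from_logits=True. A minimal sketch of that alternative setup (not used in the rest of this post):

# Alternative sketch: output raw logits and let the loss apply softmax internally.
logits_model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(10)  # no activation here: the outputs are logits
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)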

4. Defining the Optimization Algorithm

We use mini-batch stochastic gradient descent with a learning rate of 0.1 as the optimization algorithm:

optimizer = tf.keras.optimizers.SGD(0.1)
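
The positional argument is the learning rate; the explicit keyword form below is equivalent and arguably clearer:

optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)  # identical to SGD(0.1)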

5. Compiling the Model

model.compile(optimizer=optimizer,
              loss=loss,
              metrics=['accuracy'])

6. Training the Model

model.fit(x_train, y_train, epochs=20, batch_size=256)
Train on 60000 samples
Epoch 1/20
60000/60000 [==============================] - 1s 16us/sample - loss: 0.4752 - accuracy: 0.8400
Epoch 2/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4666 - accuracy: 0.8421
Epoch 3/20
60000/60000 [==============================] - 1s 13us/sample - loss: 0.4594 - accuracy: 0.8442
Epoch 4/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4539 - accuracy: 0.8463
Epoch 5/20
60000/60000 [==============================] - 1s 13us/sample - loss: 0.4485 - accuracy: 0.8465
Epoch 6/20
60000/60000 [==============================] - 1s 13us/sample - loss: 0.4438 - accuracy: 0.8490
Epoch 7/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4403 - accuracy: 0.8505
Epoch 8/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4372 - accuracy: 0.8511
Epoch 9/20
60000/60000 [==============================] - 1s 13us/sample - loss: 0.4347 - accuracy: 0.8527
Epoch 10/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4326 - accuracy: 0.8518
Epoch 11/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4286 - accuracy: 0.8540
Epoch 12/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4260 - accuracy: 0.8553
Epoch 13/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4248 - accuracy: 0.8550
Epoch 14/20
60000/60000 [==============================] - 1s 15us/sample - loss: 0.4226 - accuracy: 0.8557
Epoch 15/20
60000/60000 [==============================] - 1s 15us/sample - loss: 0.4206 - accuracy: 0.8565
Epoch 16/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4188 - accuracy: 0.8572
Epoch 17/20
60000/60000 [==============================] - 1s 15us/sample - loss: 0.4175 - accuracy: 0.8579
Epoch 18/20
60000/60000 [==============================] - 1s 15us/sample - loss: 0.4166 - accuracy: 0.8571
Epoch 19/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4153 - accuracy: 0.8583
Epoch 20/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4135 - accuracy: 0.8585
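
If you would rather monitor held-out performance during training instead of only evaluating at the end, fit also accepts validation data. A sketch (evaluating on the test set every epoch here is purely for illustration):

history = model.fit(x_train, y_train, epochs=20, batch_size=256,
                    validation_data=(x_test, y_test))
# history.history then contains per-epoch loss/accuracy and val_loss/val_accuracy.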

Next, let's evaluate how the model performs on the test dataset:

test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test Acc:', test_acc)
10000/10000 [==============================] - 1s 54us/sample - loss: 0.4650 - accuracy: 0.8361
Test Acc: 0.8361
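
Finally, model.predict returns the ten class probabilities for each image, so taking the argmax gives the predicted class. The class-name list below follows the conventional Fashion-MNIST label order (this prediction snippet is a sketch, not part of the original post):

# Conventional Fashion-MNIST label names, indexed 0-9.
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

probs = model.predict(x_test[:5])            # shape (5, 10): class probabilities
preds = probs.argmax(axis=1)                 # most likely class per image
print([class_names[i] for i in preds])       # predicted labels
print([class_names[i] for i in y_test[:5]])  # ground-truth labels for comparison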
