Before getting hands-on with TensorFlow, let's first set up the environment and get familiar with a few basics.
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
# Show channel URLs when searching for packages
conda config --set show_channel_urls yes
conda create -n mlcc
conda activate mlcc
# Install TensorFlow; check that the installed version is 2.1
conda install tensorflow
# Install some packages commonly used alongside TensorFlow
conda install matplotlib
conda install jupyter notebook
jupyter notebook
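Once the environment is ready, it's worth confirming inside Python that the installed TensorFlow really is 2.1 (the exact version you get depends on the conda channel):

```python
import tensorflow as tf

# Print the installed TensorFlow version; this tutorial assumes 2.1
print(tf.__version__)
```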
Programming a machine learning task generally involves the following steps:
1. Data
① Get the data
② Process the data
③ Split the data
④ Inspect the data
2. Model
① Build the model
② Inspect the model
③ Train the model
④ Make predictions
The sections below walk through these steps in order.
Data
# Import a data visualization library and a numerical computing library
import matplotlib.pyplot as plt
import numpy as np
# Build a dataset that a linear model can fit, and process it
np.random.seed(2020)
x_data = np.linspace(-1, 1, 100)  # x_data is 100 evenly spaced points from -1 to 1
y_data = 2 * x_data + 1.0 + np.random.randn(*x_data.shape) * 0.3  # map x_data to y_data as y = 2x + 1 plus Gaussian noise
Splitting the data
Since the dataset is so small, we skip splitting here to avoid hurting the training results. (Honestly, I haven't learned how yet…)
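For reference, a train/test split only takes a few lines; a minimal sketch using scikit-learn's `train_test_split` (assuming scikit-learn is available):

```python
import numpy as np
from sklearn.model_selection import train_test_split

np.random.seed(2020)
x_data = np.linspace(-1, 1, 100)
y_data = 2 * x_data + 1.0 + np.random.randn(*x_data.shape) * 0.3

# Hold out 20 of the 100 samples as a test set
x_train, x_test, y_train, y_test = train_test_split(
    x_data, y_data, test_size=0.2, random_state=2020)
print(x_train.shape, x_test.shape)  # (80,) (20,)
```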
Inspecting the data
plt.scatter(x_data, y_data)
# Check that the data lies near the regression line
plt.plot(x_data, 2 * x_data + 1.0, color='red', linewidth=3)
# First import the TensorFlow-related packages
import tensorflow as tf
from tensorflow import keras  # a high-level API of TF
from tensorflow.keras import layers  # neural network layers
model = keras.Sequential([
# Two dense layers are used here (there is no systematic method for choosing layers;
# it mostly comes down to rules of thumb and trial and error).
# The first layer has 20 neurons, takes input of shape (1,), and uses the ReLU activation;
# the second layer has a single neuron, so there is a single output.
layers.Dense(20, activation='relu', input_shape=(1,)),
layers.Dense(1)
])
# Set up the gradient descent optimizer and the learning rate
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
# Choose the loss function and compile the model
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
model.summary()
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_7 (Dense) (None, 20) 40
_________________________________________________________________
dense_8 (Dense) (None, 1) 21
=================================================================
Total params: 61
Trainable params: 61
Non-trainable params: 0
_________________________________________________________________
# Train the model for 100 epochs
model.fit(x_data, y_data, epochs=100)
Train on 100 samples
Epoch 1/100
100/100 [==============================] - 1s 5ms/sample - loss: 2.8671 - mae: 1.3909 - mse: 2.8671
Epoch 2/100
100/100 [==============================] - 0s 240us/sample - loss: 2.4969 - mae: 1.3072 - mse: 2.4969
Epoch 3/100
100/100 [==============================] - 0s 240us/sample - loss: 2.1785 - mae: 1.2223 - mse: 2.1785
Epoch 4/100
100/100 [==============================] - 0s 220us/sample - loss: 1.9672 - mae: 1.1646 - mse: 1.9672
Epoch 5/100
100/100 [==============================] - 0s 260us/sample - loss: 1.7808 - mae: 1.1090 - mse: 1.7808
Epoch 6/100
100/100 [==============================] - 0s 270us/sample - loss: 1.6284 - mae: 1.0632 - mse: 1.6284
Epoch 7/100
100/100 [==============================] - 0s 250us/sample - loss: 1.5117 - mae: 1.0251 - mse: 1.5117
Epoch 8/100
100/100 [==============================] - 0s 270us/sample - loss: 1.3864 - mae: 0.9824 - mse: 1.3864
Epoch 9/100
100/100 [==============================] - 0s 300us/sample - loss: 1.2847 - mae: 0.9431 - mse: 1.2847
Epoch 10/100
100/100 [==============================] - 0s 190us/sample - loss: 1.1653 - mae: 0.8977 - mse: 1.1653
(Epochs 11–89 omitted)
Epoch 90/100
100/100 [==============================] - 0s 230us/sample - loss: 0.0887 - mae: 0.2327 - mse: 0.0887
Epoch 91/100
100/100 [==============================] - 0s 220us/sample - loss: 0.0890 - mae: 0.2337 - mse: 0.0890
Epoch 92/100
100/100 [==============================] - 0s 320us/sample - loss: 0.0885 - mae: 0.2333 - mse: 0.0885
Epoch 93/100
100/100 [==============================] - 0s 220us/sample - loss: 0.0888 - mae: 0.2340 - mse: 0.0888
Epoch 94/100
100/100 [==============================] - 0s 200us/sample - loss: 0.0886 - mae: 0.2340 - mse: 0.0886
Epoch 95/100
100/100 [==============================] - 0s 180us/sample - loss: 0.0884 - mae: 0.2330 - mse: 0.0884
Epoch 96/100
100/100 [==============================] - 0s 170us/sample - loss: 0.0883 - mae: 0.2321 - mse: 0.0883
Epoch 97/100
100/100 [==============================] - 0s 190us/sample - loss: 0.0887 - mae: 0.2344 - mse: 0.0887
Epoch 98/100
100/100 [==============================] - 0s 170us/sample - loss: 0.0888 - mae: 0.2351 - mse: 0.0888
Epoch 99/100
100/100 [==============================] - 0s 200us/sample - loss: 0.0883 - mae: 0.2345 - mse: 0.0883
Epoch 100/100
100/100 [==============================] - 0s 180us/sample - loss: 0.0880 - mae: 0.2325 - mse: 0.0880
As you can see, the loss keeps dropping as training progresses; after about epoch 90 it gradually converges, and training further would risk overfitting.
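One common way to guard against that risk is Keras's built-in `EarlyStopping` callback, which halts training once the monitored metric stops improving (a sketch; the `patience` value here is arbitrary):

```python
from tensorflow import keras

# Stop once the training loss has failed to improve for 10 consecutive epochs
early_stop = keras.callbacks.EarlyStopping(monitor='loss', patience=10)

# Then pass it to fit, e.g.:
# model.fit(x_data, y_data, epochs=500, callbacks=[early_stop])
```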
plt.scatter(x_data, y_data)
# Use model.predict to make predictions
plt.scatter(x_data, model.predict(x_data), color="lightgreen")
plt.plot(x_data, 2 * x_data + 1.0, color="red",linewidth=3)