TensorFlow 2.0 completely removed the tf.contrib collection of high-level APIs; the officially recommended high-level API is tf.keras.
There are two ways to build a neural network model with Keras: the Sequential style, which stacks layers in order and keeps the structure clear, and the functional style, which is better suited to more complex network architectures. This article builds the model with the high-level API tf.keras.Sequential.
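For comparison, the same one-layer model could also be written in the functional style mentioned above. The following is only a minimal sketch (the names inputs, outputs, and functional_model are mine, not from the original code):

import tensorflow as tf

# Functional-API equivalent of the Sequential model built below:
# declare an input tensor, connect it to a Dense layer, and wrap both in a Model.
inputs = tf.keras.Input(shape=(1,))
outputs = tf.keras.layers.Dense(units=1)(inputs)
functional_model = tf.keras.Model(inputs=inputs, outputs=outputs)
functional_model.compile(loss='mse', optimizer='sgd')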
# Import the required packages
import tensorflow as tf
import numpy as np
## 1. Build the neural network model with the tf.keras high-level API
# Instantiate a tf.keras.Sequential model
model = tf.keras.Sequential()
# Use the Sequential add method to append a fully connected (Dense) layer
model.add(tf.keras.layers.Dense(input_dim=1, units=1))
# Use the Sequential compile method to compile the model: MSE as the loss function and stochastic gradient descent (SGD) as the optimizer
model.compile(loss='mse', optimizer='sgd')
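# The string arguments 'mse' and 'sgd' are shorthand; an equivalent compile call
# with explicit loss/optimizer objects would look roughly like the commented
# sketch below (learning_rate=0.01 is the Keras default for SGD, shown here only
# for illustration):
# model.compile(loss=tf.keras.losses.MeanSquaredError(),
#               optimizer=tf.keras.optimizers.SGD(learning_rate=0.01))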
## 2. Train the neural network model with the tf.keras high-level API
# Generate the training inputs (700 evenly spaced points)
X = np.linspace(-10, 10, 700)
# Generate the labels Y with a simple linear rule plus Gaussian noise
Y = 2 * X + 100 + np.random.normal(0, 0.1, (700,))
# Start training: verbose=1 displays the training progress as a progress bar
model.fit(X, Y, verbose=1, epochs=200, validation_split=0.2)
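# The tutorial goes straight to saving, but the trained model could also be
# checked with model.evaluate, e.g. (a sketch, reusing the training arrays):
# print(model.evaluate(X, Y, verbose=0))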
## 3. Save the neural network model with the tf.keras high-level API
filename = 'line_model.h5'
model.save(filename)
print("保存模型为line_model.h5")
## 4. Load the saved model with the tf.keras high-level API and make a prediction
x = tf.constant([0.5])
model = tf.keras.models.load_model(filename)
y = model.predict(x)
print(y)
Output:
Epoch 1/200
18/18 [==============================] - 0s 10ms/step - loss: 7416.3120 - val_loss: 9564.7744
Epoch 2/200
18/18 [==============================] - 0s 1ms/step - loss: 4073.0646 - val_loss: 7390.2007
Epoch 3/200
18/18 [==============================] - 0s 1ms/step - loss: 2097.5207 - val_loss: 4977.2725
Epoch 4/200
18/18 [==============================] - 0s 1ms/step - loss: 1124.1739 - val_loss: 2389.1121
Epoch 5/200
18/18 [==============================] - 0s 1ms/step - loss: 653.9625 - val_loss: 1637.1498
Epoch 6/200
18/18 [==============================] - 0s 1ms/step - loss: 359.1631 - val_loss: 731.1840
Epoch 7/200
18/18 [==============================] - 0s 1ms/step - loss: 195.7785 - val_loss: 282.8145
Epoch 8/200
18/18 [==============================] - 0s 1ms/step - loss: 99.4764 - val_loss: 215.6970
Epoch 9/200
18/18 [==============================] - 0s 1ms/step - loss: 56.1445 - val_loss: 106.3238
Epoch 10/200
18/18 [==============================] - 0s 1ms/step - loss: 29.7115 - val_loss: 61.6051
Epoch 11/200
18/18 [==============================] - 0s 1ms/step - loss: 16.1793 - val_loss: 29.6411
...........................................
...........................................
...........................................
Epoch 188/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0100 - val_loss: 0.0095
Epoch 189/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0092 - val_loss: 0.0093
Epoch 190/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0106 - val_loss: 0.0090
Epoch 191/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0098 - val_loss: 0.0089
Epoch 192/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0102 - val_loss: 0.0089
Epoch 193/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0096 - val_loss: 0.0098
Epoch 194/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0100 - val_loss: 0.0094
Epoch 195/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0108 - val_loss: 0.0092
Epoch 196/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0096 - val_loss: 0.0089
Epoch 197/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0105 - val_loss: 0.0089
Epoch 198/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0101 - val_loss: 0.0095
Epoch 199/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0098 - val_loss: 0.0090
Epoch 200/200
18/18 [==============================] - 0s 1ms/step - loss: 0.0097 - val_loss: 0.0089
Model saved as line_model.h5
[[101.00128]]
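The prediction matches the rule used to generate the labels: 2 * 0.5 + 100 = 101. Since the data follow Y = 2X + 100 plus a small amount of noise, the learned weight and bias should be close to 2 and 100. A minimal sketch for checking this, assuming the line_model.h5 file saved above is still present:

import tensorflow as tf

# Load the saved model and print the Dense layer's learned kernel and bias.
model = tf.keras.models.load_model('line_model.h5')
kernel, bias = model.layers[0].get_weights()
print(kernel, bias)  # expected to be close to 2 and 100 respectively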