As someone who was still stuck on TensorFlow 1.4 and never upgraded, I had all but decided to switch to PyTorch. But after trying a few simple operations in TensorFlow 2, I decided to throw myself back into TensorFlow's arms.
First, some basic operations.
import tensorflow as tf
import numpy as np
tf.__version__
#'2.2.0'
x = [[1.]]
m = tf.matmul(x, x)
print(m)
#tf.Tensor([[1.]], shape=(1, 1), dtype=float32)
It prints directly! In TF 1.x, debugging meant spinning up a session (and frequent crashes); seeing a tensor print this easily is almost hard to believe.
x = tf.constant([[1,9],[3,6]])
x
#<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
#array([[1, 9],
#       [3, 6]], dtype=int32)>
x = tf.add(x, 1)
x
#<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
#array([[ 2, 10],
#       [ 4,  7]], dtype=int32)>
I was overjoyed.
Next, the seamless conversion to NumPy:
x.numpy()
#array([[ 2, 10],
# [ 4, 7]], dtype=int32)
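The conversion also works in the other direction; a quick round-trip sketch:

```python
import numpy as np
import tensorflow as tf

a = np.array([[2, 10], [4, 7]], dtype=np.int32)
t = tf.convert_to_tensor(a)  # NumPy -> Tensor
b = (t + 1).numpy()          # Tensor -> NumPy
print(b)
```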
Let's walk through regression on a simple dataset.
import pandas as pd

features = pd.read_csv('temps.csv')
#Take a look at what the data looks like
features.head()
Field descriptions: (table not shown)
The data needs some simple preprocessing.
Some commonly used parameters: (list not shown)
Now for a bit of black magic.
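The preprocessing typically amounts to one-hot encoding the categorical day-of-week column and standardizing the numeric features. Here is a sketch on a made-up slice of data standing in for temps.csv (the column names are illustrative assumptions, not the real file's schema):

```python
import numpy as np
import pandas as pd

# A made-up slice standing in for temps.csv
features = pd.DataFrame({
    'temp_1': [45, 44, 41],         # yesterday's max temperature (assumed column)
    'average': [45.6, 45.7, 45.8],  # historical average (assumed column)
    'week': ['Fri', 'Sat', 'Sun'],  # categorical day of week
    'actual': [45, 44, 41],         # the label column
})

labels = features.pop('actual').to_numpy(dtype=np.float32)
features = pd.get_dummies(features)  # one-hot encode the 'week' column
input_features = features.to_numpy(dtype=np.float32)
# Standardize: zero mean, unit variance per column
input_features = (input_features - input_features.mean(axis=0)) \
    / (input_features.std(axis=0) + 1e-8)
print(input_features.shape)  # (3, 5): 2 numeric + 3 one-hot columns
```

With the real file, the same `pop`/`get_dummies`/standardize steps yield the 14-column `input_features` used below.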
Build the network model layer by layer with Sequential:
from tensorflow.keras import layers

model = tf.keras.Sequential()
model.add(layers.Dense(16))
model.add(layers.Dense(32))
model.add(layers.Dense(1))
#compile configures the network: optimizer, loss function, etc.
model.compile(optimizer=tf.keras.optimizers.SGD(0.001),
              loss='mean_squared_error')
model.fit(input_features, labels, validation_split=0.25, epochs=100, batch_size=64)
The result of the last epoch:
Epoch 100/100
5/5 [==============================] - 0s 7ms/step - loss: 26.3111 - val_loss: 36.1550
Check the model parameters:
model.summary()
Each Dense layer's parameter count is inputs * units + units (the biases), with 14 input features after preprocessing:
240 = 14 * 16 + 16
544 = 16 * 32 + 32
33 = 32 * 1 + 1
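The arithmetic above can be checked in a couple of lines (14 is the input feature count after preprocessing):

```python
def dense_params(n_in, n_out):
    # weight matrix (n_in * n_out) plus one bias per output unit
    return n_in * n_out + n_out

print(dense_params(14, 16))  # 240
print(dense_params(16, 32))  # 544
print(dense_params(32, 1))   # 33
```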
Retrain, this time initializing the weights explicitly with random_normal:
model = tf.keras.Sequential()
model.add(layers.Dense(16,kernel_initializer='random_normal'))
model.add(layers.Dense(32,kernel_initializer='random_normal'))
model.add(layers.Dense(1,kernel_initializer='random_normal'))
model.compile(optimizer=tf.keras.optimizers.SGD(0.001),
loss='mean_squared_error')
model.fit(input_features, labels, validation_split=0.25, epochs=100, batch_size=64)
Epoch 100/100
5/5 [==============================] - 0s 7ms/step - loss: 35.6163 - val_loss: 19.1636
Now also add L2 regularization to every layer:
model = tf.keras.Sequential()
model.add(layers.Dense(16,kernel_initializer='random_normal',kernel_regularizer=tf.keras.regularizers.l2(0.03)))
model.add(layers.Dense(32,kernel_initializer='random_normal',kernel_regularizer=tf.keras.regularizers.l2(0.03)))
model.add(layers.Dense(1,kernel_initializer='random_normal',kernel_regularizer=tf.keras.regularizers.l2(0.03)))
model.compile(optimizer=tf.keras.optimizers.SGD(0.001),
loss='mean_squared_error')
model.fit(input_features, labels, validation_split=0.25, epochs=100, batch_size=64)
predict = model.predict(input_features)
The full code is on GitHub.
Next, classification, using the MNIST dataset.
import tensorflow as tf
from tensorflow.keras import layers

# Load MNIST, flatten images to 784-dim vectors and scale to [0, 1];
# hold out the last 10,000 training images for validation
# (the 782 steps/epoch below imply a 50,000-sample training set at batch size 64)
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
x_train, x_valid = x_train[:50000], x_train[50000:]
y_train, y_valid = y_train[:50000], y_train[50000:]

model = tf.keras.Sequential()
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.compile(optimizer=tf.keras.optimizers.Adam(0.005),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
model.fit(x_train, y_train, epochs=5, batch_size=64,
          validation_data=(x_valid, y_valid))
782/782 [==============================] - 2s 3ms/step - loss: 0.1102 - sparse_categorical_accuracy: 0.9666 - val_loss: 0.1491 - val_sparse_categorical_accuracy: 0.9585
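model.predict returns one softmax row per image; np.argmax recovers the class label. Shown here on a made-up probability matrix rather than real model output:

```python
import numpy as np

# Fake softmax output for two images, 10 classes each
probs = np.array([[0.05] * 9 + [0.55],
                  [0.9] + [0.1 / 9] * 9])
labels = np.argmax(probs, axis=1)  # most probable class per row
print(labels)  # [9 0]
```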