TensorFlow 2.0 Study Notes (13): Using Dataset in TF 1.x

Dataset

  • Building a dataset
tf.data.Dataset.from_tensor_slices()
  • Reading from a dataset
  • dataset.make_one_shot_iterator()
    iterator = dataset.make_one_shot_iterator() instantiates an iterator from the dataset. This iterator is a "one-shot iterator": it can only read through the dataset once, from start to finish. one_element = iterator.get_next() fetches one element from the iterator. Because this is graph (non-eager) mode, one_element is only a Tensor, not an actual value; a value is produced only when sess.run(one_element) is called. Once every element of the dataset has been consumed, a further sess.run(one_element) raises a tf.errors.OutOfRangeError, which matches the behavior of reading data through queues. In practice you can catch this exception to tell when the data has been exhausted (a sketch follows the code below). A one-shot iterator initializes itself automatically, but cannot be re-initialized once it has been used up.
iterator = dataset.make_one_shot_iterator()
one_element = iterator.get_next()
with tf.Session() as sess:
    for i in range(5):
        print(sess.run(one_element))
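As noted above, exhaustion of the dataset can be detected by catching the exception. A minimal sketch, assuming `dataset` was built as above:

one_element = dataset.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    try:
        while True:
            sess.run(one_element)
    except tf.errors.OutOfRangeError:
        # Every element has been read exactly once
        print('dataset exhausted')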
  • make_initializable_iterator
    An initializable iterator extends the one-shot iterator so that it can be used multiple times: it must be (re-)initialized before every pass, which also makes it possible to switch the data that fills a single dataset through a single iterator (sketched below, and demonstrated with placeholders in the hands-on section).
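A minimal sketch of the pattern, using an assumed scalar placeholder that feeds tf.data.Dataset.range:

max_value = tf.placeholder(tf.int64, shape=[])
dataset = tf.data.Dataset.range(max_value)
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
    # First pass: initialize the iterator with one value
    sess.run(iterator.initializer, feed_dict={max_value: 3})
    for i in range(3):
        print(sess.run(next_element))  # 0 1 2
    # Re-initialize to switch the data without rebuilding the graph
    sess.run(iterator.initializer, feed_dict={max_value: 5})
    for i in range(5):
        print(sess.run(next_element))  # 0 1 2 3 4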

Hands-on

  • Imports
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
print(sys.version_info)
for module in mpl, np, pd, sklearn, tf, keras:
    print(module.__name__, module.__version__)
1.15.0
sys.version_info(major=3, minor=7, micro=6, releaselevel='final', serial=0)
matplotlib 3.1.3
numpy 1.18.1
pandas 1.0.1
sklearn 0.22.1
tensorflow 1.15.0
tensorflow.python.keras.api._v1.keras 2.2.4-tf
  • Loading and preprocessing the data
# Load the Fashion-MNIST dataset (a harder drop-in replacement for MNIST) bundled with keras
fashion_mnist = keras.datasets.fashion_mnist
# Load the data and split it into training and test sets
(x_train_all, y_train_all),(x_test, y_test) = fashion_mnist.load_data()
# Take the first 5,000 training images as the validation set and the remaining 55,000 as the training set
# [:5000] starts from the beginning and takes the first 5000 elements
# [5000:] starts at index 5000 (inclusive) and runs to the end
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]
# Print the shapes of these datasets
print(x_valid.shape, y_valid.shape)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
(5000, 28, 28) (5000,)
(55000, 28, 28) (55000,)
(10000, 28, 28) (10000,)
# Standardization: x = (x - u) / std, i.e. subtract the mean and divide by the
# standard deviation, giving data with mean 0 and standard deviation 1

from sklearn.preprocessing import StandardScaler
# Create a StandardScaler object
scaler = StandardScaler()
# fit_transform expects a 2-D matrix, so the data has to be reshaped first
# Convert to float32 before the division
# x_train is a 3-D array [None, 28, 28]; flatten it to a single column for scaling,
# then reshape it to the 2-D [None, 784] that the dense layers below expect
# reshape(-1, 1) produces one column (-1 lets numpy infer the number of rows)
# fit: computes statistics intrinsic to the training set (mean, variance, max, min, ...)
# transform: standardizes the data using the statistics computed by fit

x_train_scaled = scaler.fit_transform(
    x_train.astype(np.float32).reshape(-1, 1)).reshape(-1, 28 * 28)
x_valid_scaled = scaler.transform(
    x_valid.astype(np.float32).reshape(-1, 1)).reshape(-1, 28 * 28)
x_test_scaled = scaler.transform(
    x_test.astype(np.float32).reshape(-1, 1)).reshape(-1, 28 * 28)

# Convert the labels to int64
y_train = np.asarray(y_train, dtype = np.int64)
y_valid = np.asarray(y_valid, dtype = np.int64)
y_test = np.asarray(y_test, dtype = np.int64)
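As a quick sanity check (a small sketch using the variables defined above), the scaled training data should now have mean ≈ 0 and standard deviation ≈ 1:

print(x_train_scaled.shape)                          # (55000, 784)
print(x_train_scaled.mean(), x_train_scaled.std())   # ≈ 0.0 and ≈ 1.0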
  • Building and reading the Dataset
def make_dataset(images, labels, epochs, batch_size, shuffle = True):
    # Build a dataset whose elements are (image, label) pairs
    dataset = tf.data.Dataset.from_tensor_slices((images, labels))
    if shuffle:
        # Shuffle with a buffer of 10,000 elements
        dataset = dataset.shuffle(10000)
    # repeat before batch: the dataset yields `epochs` passes over the data,
    # so a batch near an epoch boundary may mix samples from two passes
    dataset = dataset.repeat(epochs).batch(batch_size)
    return dataset
# In TF 2.0 (eager mode) the dataset would be consumed directly, like this:
# batch_size = 20
# epochs = 10
# dataset = make_dataset(x_train_scaled, y_train,
#                        epochs=epochs, batch_size = batch_size)
# for data,label in dataset.take(1):
#     print(data)
#     print(label)

batch_size = 20
epochs = 10
dataset = make_dataset(x_train_scaled, y_train,
                       epochs=epochs, batch_size = batch_size)
# How to get data out of the dataset in TF 1.x:
# read it through a dataset.make_one_shot_iterator() iterator
dataset_iter = dataset.make_one_shot_iterator()
# Produce one batch from the iterator
x, y = dataset_iter.get_next()
with tf.Session() as sess:
    x_val, y_val = sess.run([x, y])
    print(x_val.shape)
    print(y_val.shape)
WARNING:tensorflow:From :16: DatasetV1.make_one_shot_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)`.
(20, 784)
(20,)
batch_size = 20
epochs = 10
images_placeholder = tf.placeholder(tf.float32, [None, 28 * 28])
labels_placeholder = tf.placeholder(tf.int64, [None, ])
dataset = make_dataset(images_placeholder, labels_placeholder,
                       epochs=epochs, batch_size=batch_size)
# Read data with a dataset.make_initializable_iterator() iterator
dataset_iter = dataset.make_initializable_iterator()
# Produce one batch from the iterator
x, y = dataset_iter.get_next()
with tf.Session() as sess:
    # Initialize the iterator, feeding the training data into the placeholders
    sess.run(dataset_iter.initializer,
             feed_dict={
                 images_placeholder: x_train_scaled,
                 labels_placeholder: y_train,
             })
    x_val, y_val = sess.run([x, y])
    print(x_val.shape)
    print(y_val.shape)
    # Re-initialize the same iterator, this time with the validation data
    sess.run(dataset_iter.initializer,
             feed_dict={
                 images_placeholder: x_valid_scaled,
                 labels_placeholder: y_valid,
             })
    x_val, y_val = sess.run([x, y])
    print(x_val.shape)
    print(y_val.shape)
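Note that the graph is built only once here: re-running dataset_iter.initializer with a different feed_dict switches which arrays flow through the same x and y tensors, so a single iterator can serve both the training and validation data without rebuilding the dataset or the iterator.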
  • Training
# Two fully connected hidden layers with 100 neurons each
hidden_units = [100, 100]
# Number of classes
class_num = 10


# Define the network layer by layer
# Input: x and y here are the batch tensors produced by the one-shot iterator above
input_for_next_layer = x
# Hidden layers
for hidden_unit in hidden_units:
    input_for_next_layer = tf.layers.dense(input_for_next_layer,
                                           hidden_unit,
                                           activation=tf.nn.relu)
# Output layer
logits = tf.layers.dense(input_for_next_layer, class_num)

# Loss: tf.losses.sparse_softmax_cross_entropy
# 1. output of the last hidden layer x the last weight matrix = the logits; softmax turns them into probabilities
# 2. the labels are one-hot encoded
# 3. the cross-entropy between the two is computed
loss = tf.losses.sparse_softmax_cross_entropy(labels = y,
                                              logits = logits)

# Accuracy
# The prediction is the index of the largest value in logits
prediction = tf.argmax(logits, 1)
correct_prediction = tf.equal(prediction, y)
# tf.reduce_mean computes the mean of a tensor along the given axis
# tf.cast converts a tensor to another dtype
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float64))

# Each run of train_op trains the network for one step
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

WARNING:tensorflow:From :14: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.Dense instead.
WARNING:tensorflow:From E:\Anaconda\anaconda\envs\tensorflow1\lib\site-packages\tensorflow_core\python\layers\core.py:187: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.__call__` method instead.
WARNING:tensorflow:From E:\Anaconda\anaconda\envs\tensorflow1\lib\site-packages\tensorflow_core\python\ops\losses\losses_impl.py:121: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
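The deprecation warnings point to keras.layers.Dense as the replacement for tf.layers.dense. An equivalent construction of the same three layers might look like the following sketch (not the code actually run here; x, hidden_units, and class_num are the variables defined above):

# Sketch: the same network expressed with tf.keras layers (TF 1.15)
hidden = x
for hidden_unit in hidden_units:
    hidden = tf.keras.layers.Dense(hidden_unit, activation=tf.nn.relu)(hidden)
keras_logits = tf.keras.layers.Dense(class_num)(hidden)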
print(x)
print(logits)
Tensor("IteratorGetNext:0", shape=(?, 784), dtype=float32)
Tensor("dense_2/BiasAdd:0", shape=(?, 10), dtype=float32)
# With the graph built, run it in a session

init = tf.global_variables_initializer()

# Batches per epoch: 55000 // 20 = 2750
train_steps_per_epoch = x_train.shape[0] // batch_size

# Open a session
with tf.Session() as sess:
    # Initialize all variables
    sess.run(init)
    for epoch in range(epochs):
        for step in range(train_steps_per_epoch):
            # One training step; also fetch the loss and accuracy of the current batch
            loss_val, accuracy_val, _ = sess.run([loss, accuracy, train_op])
            print('\r[Train] epoch: %d, step:%d, loss: %3.5f, accuracy: %2.2f'
                  % (epoch, step, loss_val, accuracy_val), end="")
[Train] epoch: 9, step:2749, loss: 0.09922, accuracy: 0.95
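To evaluate the trained model on the validation set, one option is to override the iterator's output tensors through feed_dict, which TF 1.x allows for any feedable tensor. A hedged sketch (this would need to sit inside the with tf.Session() as sess: block above, after the training loop):

    # After the training loop, still inside the session:
    accuracy_val = sess.run(accuracy,
                            feed_dict={x: x_valid_scaled, y: y_valid})
    print('\n[Valid] accuracy: %2.2f' % accuracy_val)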
