The tf.data Module

import tensorflow as tf
tf.__version__

'2.0.0'

The tf.data module is used to build input pipelines; it can handle large amounts of data, different data formats, and data transformations.

tf.data.Dataset represents a collection of data. Each element of a tf.data.Dataset contains one or more Tensor objects; for example, an element may represent a single training sample, or a pair of training data and its label.

Creating a Dataset

tf.data.Dataset.from_tensor_slices(tensors)
tensors can be a list, dictionary, tuple, NumPy ndarray, or Tensor.
Every element of a Dataset must have the same structure; an element can contain one or more Tensor objects, which are called components.
A Dataset object can be iterated over directly.

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(dataset)

<TensorSliceDataset shapes: (), types: tf.int32>

In the printed TensorSliceDataset, shapes describes the shape of each element of the Dataset; here every element is a scalar, so shapes is ().

for element in dataset:
    print(element)

tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)

Each element Tensor can be converted to a NumPy value with the .numpy() method:

for element in dataset:
    print(element.numpy())

1
2
3

Creating from a two-dimensional list

dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
print(dataset)
print()
for element in dataset:
    print(element)
print()
for element in dataset:
    print(element.numpy())

<TensorSliceDataset shapes: (2,), types: tf.int32>

tf.Tensor([1 2], shape=(2,), dtype=int32)
tf.Tensor([3 4], shape=(2,), dtype=int32)

[1 2]
[3 4]

Creating from a dictionary
Each element of the Dataset is then a dictionary.

d = {'a': [[1], [2]], 'b': [[3], [4]]}  # avoid shadowing the built-in dict
dataset = tf.data.Dataset.from_tensor_slices(d)
print(dataset)
print()
for element in dataset:
    print(element)
print()
for element in dataset:
    print(element['a'].numpy(), element['b'].numpy())

<TensorSliceDataset shapes: {a: (1,), b: (1,)}, types: {a: tf.int32, b: tf.int32}>

{'a': <tf.Tensor: ... shape=(1,), dtype=int32, numpy=array([1], dtype=int32)>, 'b': <tf.Tensor: ... shape=(1,), dtype=int32, numpy=array([3], dtype=int32)>}
{'a': <tf.Tensor: ... shape=(1,), dtype=int32, numpy=array([2], dtype=int32)>, 'b': <tf.Tensor: ... shape=(1,), dtype=int32, numpy=array([4], dtype=int32)>}

[1] [3]
[2] [4]
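As noted above, from_tensor_slices also accepts a tuple; the corresponding entries are sliced in parallel, so each element of the Dataset is a (feature, label) pair. A minimal sketch (the feature and label values here are made up for illustration):

features = [[1, 2], [3, 4], [5, 6]]  # three samples with two features each
labels = [0, 1, 0]                   # one label per sample
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
for x, y in dataset:
    print(x.numpy(), y.numpy())

[1 2] 0
[3 4] 1
[5 6] 0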

Using a Dataset

Dataset.take(count) returns a new Dataset containing at most count elements taken from the current Dataset.

import numpy as np
dataset = tf.data.Dataset.from_tensor_slices(np.array([1, 2, 3]))
for ele in dataset.take(2):
    print(ele.numpy())
print()
for ele in dataset.take(4):
    print(ele.numpy())

1
2

1
2
3

Dataset.shuffle(buffer_size) fills a buffer with buffer_size elements of the current Dataset and samples randomly from that buffer. For example, the dataset below has 6 elements and buffer_size is 2: elements 1 and 2 are placed in the buffer first and one of them is sampled at random; then the next element, 3, refills the buffer back to 2 elements, and sampling continues.

dataset = tf.data.Dataset.from_tensor_slices(np.array([1, 2, 3, 4, 5, 6]))
for ele in dataset.shuffle(2):
    print(ele.numpy())

2
1
3
5
6
4
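With buffer_size=2 the result is only locally shuffled, as the output above shows. For a uniform shuffle, buffer_size should be at least the number of elements in the Dataset; a minimal sketch:

# buffer_size >= dataset size gives a full random permutation
for ele in dataset.shuffle(6):
    print(ele.numpy())

(The output is a random permutation of 1..6 and differs from run to run.)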

Dataset.repeat(count) repeats the current Dataset count times; count defaults to None, which repeats indefinitely.

for ele in dataset.repeat(2):
    print(ele.numpy())

1
2
3
4
5
6
1
2
3
4
5
6
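The order of shuffle and repeat matters: shuffle(...).repeat(...) reshuffles on every pass, so each repetition is a complete permutation of the data, while repeat(...).shuffle(...) mixes elements across the repetition boundary. A minimal sketch of the usual training order:

# each group of 6 printed values is a full permutation of 1..6
for ele in dataset.shuffle(6).repeat(2):
    print(ele.numpy())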

Dataset.batch(batch_size) combines batch_size consecutive elements of the Dataset into a single element.

for ele in dataset.batch(2):
    print(ele.numpy())

[1 2]
[3 4]
[5 6]
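If batch_size does not divide the number of elements, the final batch is smaller; passing drop_remainder=True discards it, which helps when a fixed batch shape is required. A minimal sketch:

for ele in dataset.batch(4):
    print(ele.numpy())
# prints [1 2 3 4] and then the short batch [5 6]

for ele in dataset.batch(4, drop_remainder=True):
    print(ele.numpy())
# prints only [1 2 3 4]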

Dataset.map(map_func) applies map_func to every element of the Dataset.

for ele in dataset.map(tf.square):
    print(ele.numpy())

1
4
9
16
25
36
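map_func can be any function of the element tensors, such as a lambda, and num_parallel_calls lets the mapping run in parallel; in TensorFlow 2.0 the autotuning constant lives under tf.data.experimental. A minimal sketch:

# order of results is preserved even with parallel calls
for ele in dataset.map(lambda x: x * 10,
                       num_parallel_calls=tf.data.experimental.AUTOTUNE):
    print(ele.numpy())

10
20
30
40
50
60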

Dataset.zip(datasets) combines the corresponding elements of the given datasets into tuples, which become the elements of the new Dataset.

a = tf.data.Dataset.from_tensor_slices([[1], [2], [3]])
b = tf.data.Dataset.from_tensor_slices([4, 5, 6])
print(tf.data.Dataset.zip((a, b)))
print() 
for ele in tf.data.Dataset.zip((a, b)):
    print(ele[0], ele[1])

<ZipDataset shapes: ((1,), ()), types: (tf.int32, tf.int32)>

tf.Tensor([1], shape=(1,), dtype=int32) tf.Tensor(4, shape=(), dtype=int32)
tf.Tensor([2], shape=(1,), dtype=int32) tf.Tensor(5, shape=(), dtype=int32)
tf.Tensor([3], shape=(1,), dtype=int32) tf.Tensor(6, shape=(), dtype=int32)

Example: the fashion_mnist dataset

Loading the dataset

(train_image, train_label), (test_image, test_label) = tf.keras.datasets.fashion_mnist.load_data()
print("train_image.shape:", train_image.shape)
print("train_label.shape:", train_label.shape)
print("test_image.shape:", test_image.shape)
print("test_label.shape:", test_label.shape)

train_image.shape: (60000, 28, 28)
train_label.shape: (60000,)
test_image.shape: (10000, 28, 28)
test_label.shape: (10000,)

train_image = train_image / 255.0
test_image  = test_image / 255.0

Building the training data

train_img = tf.data.Dataset.from_tensor_slices(train_image)
train_lab = tf.data.Dataset.from_tensor_slices(train_label)
train_data = tf.data.Dataset.zip((train_img, train_lab))
print(train_data)

batch_size = 128
train_data = train_data.shuffle(train_image.shape[0]).repeat().batch(batch_size)
print(train_data)
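Note that zipping two datasets sliced from equal-length arrays is equivalent to passing a tuple to from_tensor_slices directly, which is exactly how the test data is built below; a minimal sketch of the same training pipeline:

train_data = tf.data.Dataset.from_tensor_slices((train_image, train_label))
train_data = train_data.shuffle(train_image.shape[0]).repeat().batch(batch_size)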

Building the test data

test_data = tf.data.Dataset.from_tensor_slices((test_image, test_label))
print(test_data)

test_data = test_data.batch(batch_size)
print(test_data)

Building the model

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 784) 0
_________________________________________________________________
dense (Dense) (None, 128) 100480
_________________________________________________________________
dense_1 (Dense) (None, 10) 1290
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy",
              metrics=['accuracy']
)
train_steps = train_image.shape[0] // batch_size
test_steps = test_image.shape[0] // batch_size
model.fit(train_data,
          epochs=5, 
          steps_per_epoch=train_steps,
          validation_data=test_data, 
          validation_steps=test_steps,
)

Train for 468 steps, validate for 78 steps
Epoch 1/5
468/468 [==============================] - 3s 6ms/step - loss: 0.5548 - accuracy: 0.8091 - val_loss: 0.4598 - val_accuracy: 0.8394
Epoch 2/5
468/468 [==============================] - 2s 4ms/step - loss: 0.4051 - accuracy: 0.8576 - val_loss: 0.4223 - val_accuracy: 0.8503
Epoch 3/5
468/468 [==============================] - 2s 4ms/step - loss: 0.3666 - accuracy: 0.8704 - val_loss: 0.3969 - val_accuracy: 0.8596
Epoch 4/5
468/468 [==============================] - 2s 4ms/step - loss: 0.3395 - accuracy: 0.8787 - val_loss: 0.3742 - val_accuracy: 0.8683
Epoch 5/5
468/468 [==============================] - 2s 4ms/step - loss: 0.3235 - accuracy: 0.8835 - val_loss: 0.3619 - val_accuracy: 0.8715
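After training, the same batched test_data can be scored with model.evaluate; a minimal sketch (it returns the loss and the metric configured in compile):

loss, acc = model.evaluate(test_data, steps=test_steps)
print(loss, acc)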
