Reference: Installing TensorFlow and Jupyter Notebook on Ubuntu 16.
This post is a translation of notes from Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning.
For learning and exchange only; not for commercial use!
Other related posts:
Coursera TensorFlow basics course - week 4
Coursera TensorFlow basics course - week 3
Coursera TensorFlow basics course - week 2
Coursera TensorFlow basics course - week 1
How does a new technology like TensorFlow benefit people?
Watch the video: here
or download it here.
The video above shows that a new technology such as TensorFlow really can solve practical problems people face.
So how do we apply TensorFlow to real-world problems?
We first need to understand some data preprocessing, so that TensorFlow can actually be put to use!
Keras (bundled with TensorFlow) provides a tool, ImageDataGenerator, that can split the data for us and label it at the same time.
As illustrated below: if we create an images directory containing two subdirectories, Training and Validation, and under each of those create Horse and Human subdirectories holding the corresponding pictures, then ImageDataGenerator can automatically load each directory's data as the training and validation sets, and every image gets a label derived from the directory it sits in.
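For example, a layout like the following (the directory names here are illustrative) is all ImageDataGenerator needs to infer the labels:
images/
├── Training/
│   ├── Horse/
│   └── Human/
└── Validation/
    ├── Horse/
    └── Human/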
To use ImageDataGenerator, it must first be imported:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
Load the data with the following code:
train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(300, 300),
    batch_size=128,
    class_mode='binary'
)
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(300, 300),
    batch_size=32,
    class_mode='binary'
)
The code works as follows: rescale=1./255 normalizes pixel values into [0, 1]; flow_from_directory reads images from the given directory, one subdirectory per class; target_size resizes every image to 300x300; batch_size sets how many images each batch yields; and class_mode='binary' produces the binary labels that a binary-crossentropy loss expects.
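To see what the generator actually yields, one batch can be pulled and inspected (a quick check, assuming train_generator was created as above):
# Fetch one batch from the generator and inspect its shapes
x_batch, y_batch = next(train_generator)
print(x_batch.shape)  # (128, 300, 300, 3): a batch of rescaled images
print(y_batch.shape)  # (128,): one binary label per image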
This example applies TensorFlow to build a model from the provided pictures so that it can recognize humans and horses in real-life photos.
Define a network model for the problem with the following code:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
The model stacks three convolution-plus-max-pooling pairs, flattens the result, feeds it through a 512-neuron dense layer, and ends with a single sigmoid neuron whose output in [0, 1] encodes the two classes.
Calling model.summary() on this model produces the following output:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 298, 298, 16) 448
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 149, 149, 16) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 147, 147, 32) 4640
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 73, 73, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 71, 71, 64) 18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 35, 35, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 78400) 0
_________________________________________________________________
dense (Dense) (None, 512) 40141312
_________________________________________________________________
dense_1 (Dense) (None, 1) 513
=================================================================
Total params: 40,165,409
Trainable params: 40,165,409
Non-trainable params: 0
_________________________________________________________________
This output matches the earlier explanation; in short, each 3x3 convolution trims one pixel from every border (300 becomes 298), and each 2x2 max-pooling halves the height and width.
Next, the model can be compiled:
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(lr=0.001),
              metrics=['acc'])
Here the loss function is binary cross-entropy (matching the binary labels), the optimizer is RMSprop (which adapts the learning rate during training), and accuracy is reported as the metric.
The model can then be trained:
history = model.fit_generator(
    train_generator,
    steps_per_epoch=8,
    epochs=15,
    validation_data=validation_generator,
    validation_steps=8,
    verbose=2
)
The parameters work as follows: steps_per_epoch is the number of batches drawn per epoch (1027 training images at 128 per batch is about 8 steps); epochs is the number of passes over the data; validation_data and validation_steps evaluate the model on a separate generator after each epoch; verbose controls how much progress output is printed.
1. Download the data. On Linux, use the following command (or download it here: https://pan.baidu.com/s/1NTUQkQyy1P4nkEcUa8wNRA, extraction code: fsfc):
wget https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip
After downloading, move the archive to the /opt/data/horse_human directory, then unzip it and list the contents:
unzip horse-or-human.zip
ls
horse-or-human.zip horses humans
There are two directories, horses and humans.
2. Look at the image filenames
import os
# Directory with horse pictures
train_horse_dir = os.path.join('/opt/data/horse_human/horses')
# Directory with human pictures
train_human_dir = os.path.join('/opt/data/horse_human/humans')
train_horse_names = os.listdir(train_horse_dir)
print(train_horse_names[:10])
train_human_names = os.listdir(train_human_dir)
print(train_human_names[:10])
The output is:
['horse33-5.png', 'horse03-9.png', 'horse20-3.png', 'horse44-9.png', 'horse04-7.png', 'horse42-1.png', 'horse06-8.png', 'horse48-8.png', 'horse39-0.png', 'horse07-3.png']
['human12-25.png', 'human15-08.png', 'human02-10.png', 'human13-26.png', 'human02-29.png', 'human14-09.png', 'human14-29.png', 'human11-02.png', 'human09-01.png', 'human16-10.png']
This prints the first 10 filenames of each class; the filename alone already tells you each picture's label.
3. Check the total number of images
print('total training horse images:', len(os.listdir(train_horse_dir)))
print('total training human images:', len(os.listdir(train_human_dir)))
The output is:
total training horse images: 500
total training human images: 527
The horse and human pictures are roughly balanced (500 vs. 527).
4. Display some of the images
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Parameters for our graph; we'll output images in a 4x4 configuration
nrows = 4
ncols = 4
# Index for iterating over images
pic_index = 0
# Set up matplotlib fig, and size it to fit 4x4 pics
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 4)
pic_index += 8
next_horse_pix = [os.path.join(train_horse_dir, fname)
                  for fname in train_horse_names[pic_index-8:pic_index]]
next_human_pix = [os.path.join(train_human_dir, fname)
                  for fname in train_human_names[pic_index-8:pic_index]]
for i, img_path in enumerate(next_horse_pix + next_human_pix):
    # Set up subplot; subplot indices start at 1
    sp = plt.subplot(nrows, ncols, i + 1)
    sp.axis('Off')  # Don't show axes (or gridlines)
    img = mpimg.imread(img_path)
    plt.imshow(img)
plt.show()
Running this code produces a 4x4 grid showing 8 horse pictures and 8 human pictures.
1. Define the model
model = tf.keras.models.Sequential([
    # The input keeps the original picture size, 300x300
    # This is the first convolution
    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The second convolution
    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The third convolution
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The fourth convolution
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The fifth convolution
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # Flatten the results to feed into a DNN
    tf.keras.layers.Flatten(),
    # 512 neuron hidden layer
    tf.keras.layers.Dense(512, activation='relu'),
    # Only 1 output neuron. It will contain a value from 0-1, where 0 is for one class ('horses') and 1 for the other ('humans')
    tf.keras.layers.Dense(1, activation='sigmoid')
])
Note that input_shape here must match the target_size configured in the corresponding ImageDataGenerator.
2. Compile the model
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(lr=0.001),
              metrics=['acc'])
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
    '/opt/data/horse_human/',  # This is the source directory for training images
    target_size=(300, 300),  # All images will be resized to 300x300
    batch_size=128,
    # Since we use binary_crossentropy loss, we need binary labels
    class_mode='binary')
Note that target_size here is set to 300x300. The output is:
Found 1027 images belonging to 2 classes.
history = model.fit_generator(
    train_generator,
    steps_per_epoch=8,
    epochs=15,
    verbose=1)
If you hit errors about loading images here, install Pillow: pip3 install pillow.
With verbose set to 1, the full training progress is printed, as below (this runs on a 6 GB, 3-core VM, albeit slowly):
Epoch 1/15
9/9 [==============================] - 412s 46s/step - loss: 0.7084 - acc: 0.5696
Epoch 2/15
9/9 [==============================] - 535s 59s/step - loss: 0.7116 - acc: 0.6972
Epoch 3/15
9/9 [==============================] - 385s 43s/step - loss: 0.7930 - acc: 0.7546
Epoch 4/15
9/9 [==============================] - 427s 47s/step - loss: 0.4925 - acc: 0.8169
Epoch 5/15
9/9 [==============================] - 596s 66s/step - loss: 0.2293 - acc: 0.9241
Epoch 6/15
9/9 [==============================] - 366s 41s/step - loss: 0.1235 - acc: 0.9426
Epoch 7/15
9/9 [==============================] - 290s 32s/step - loss: 0.1182 - acc: 0.9513
Epoch 8/15
9/9 [==============================] - 242s 27s/step - loss: 0.2449 - acc: 0.9338
Epoch 9/15
9/9 [==============================] - 273s 30s/step - loss: 0.0659 - acc: 0.9727
Epoch 10/15
9/9 [==============================] - 368s 41s/step - loss: 0.0964 - acc: 0.9649
Epoch 11/15
9/9 [==============================] - 353s 39s/step - loss: 0.0674 - acc: 0.9834
Epoch 12/15
9/9 [==============================] - 428s 48s/step - loss: 0.0714 - acc: 0.9796
Epoch 13/15
9/9 [==============================] - 378s 42s/step - loss: 0.0701 - acc: 0.9776
Epoch 14/15
9/9 [==============================] - 398s 44s/step - loss: 0.1257 - acc: 0.9581
Epoch 15/15
9/9 [==============================] - 373s 41s/step - loss: 0.0505 - acc: 0.9776
As the log shows, training accuracy reaches about 98%.
5. Predict on real pictures
Download some pictures from the web (the four pictures can also be downloaded here), as shown:
The four real photos contain two humans and two horses, at different resolutions.
Now use the model we just built to make predictions (note: put the downloaded pictures in the /opt/data/new_human_horse directory):
import numpy as np
from tensorflow.keras.preprocessing import image
new_files = os.listdir('/opt/data/new_human_horse')
#print(new_files)
for fn in new_files:
    # predicting images
    path = '/opt/data/new_human_horse/' + fn
    # target_size must match the model's input_shape, 300x300 here
    img = image.load_img(path, target_size=(300, 300))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    images = np.vstack([x])
    classes = model.predict(images, batch_size=10)
    print(classes[0])
    if classes[0] > 0.5:
        print(fn + " is a human")
    else:
        print(fn + " is a horse")
The output is:
[0.]
human_2d13d9403be7421981a325ff383a0f4a.jpg is a horse
[0.]
human-4320806490398999875.jpg is a horse
[0.]
horse-image-placeholder-title.jpg is a horse
[0.]
horse-th.jpg is a horse
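The test classes[0] > 0.5 works because flow_from_directory assigns label indices to the class subdirectories in alphabetical order by default, so 'horses' maps to 0 and 'humans' to 1. If in doubt, the mapping can be checked directly:
# Inspect the class-name -> label-index mapping chosen by flow_from_directory
print(train_generator.class_indices)  # e.g. {'horses': 0, 'humans': 1}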
As the output shows, both human pictures were misclassified. (In the original video every chosen picture was classified correctly, so this may simply come down to the pictures I picked.)
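One detail worth noting (my own observation, not from the course): training rescaled pixels by 1./255 via the ImageDataGenerator, but the prediction loop above feeds raw pixel values in [0, 255] into model.predict. Applying the same normalization at prediction time may improve the results; a minimal sketch of the change:
x = image.img_to_array(img)
x = x / 255.0  # apply the same 1/255 rescaling the training generator used
x = np.expand_dims(x, axis=0)
classes = model.predict(x, batch_size=10)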
Next, we can visualize the intermediate feature maps the network computes for a random training picture:
import numpy as np
import random
from tensorflow.keras.preprocessing.image import img_to_array, load_img
# Let's define a new Model that will take an image as input, and will output
# intermediate representations for all layers in the previous model after
# the first.
successive_outputs = [layer.output for layer in model.layers[1:]]
#visualization_model = Model(img_input, successive_outputs)
visualization_model = tf.keras.models.Model(inputs=model.input, outputs=successive_outputs)
# Let's prepare a random input image from the training set.
horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names]
human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names]
img_path = random.choice(horse_img_files + human_img_files)
img = load_img(img_path, target_size=(300, 300))  # this is a PIL image
x = img_to_array(img)  # Numpy array with shape (300, 300, 3)
x = x.reshape((1,) + x.shape)  # Numpy array with shape (1, 300, 300, 3)
# Rescale by 1/255
x /= 255
# Let's run our image through our network, thus obtaining all
# intermediate representations for this image.
successive_feature_maps = visualization_model.predict(x)
# These are the names of the layers, so we can have them as part of our plot;
# skip the first layer so the names line up with successive_outputs
layer_names = [layer.name for layer in model.layers[1:]]
# Now let's display our representations
for layer_name, feature_map in zip(layer_names, successive_feature_maps):
    if len(feature_map.shape) == 4:
        # Just do this for the conv / maxpool layers, not the fully-connected layers
        n_features = feature_map.shape[-1]  # number of features in feature map
        # The feature map has shape (1, size, size, n_features)
        size = feature_map.shape[1]
        # We will tile our images in this matrix
        display_grid = np.zeros((size, size * n_features))
        for i in range(n_features):
            # Postprocess the feature to make it visually palatable
            x = feature_map[0, :, :, i]
            x -= x.mean()
            x /= x.std()
            x *= 64
            x += 128
            x = np.clip(x, 0, 255).astype('uint8')
            # We'll tile each filter into this big horizontal grid
            display_grid[:, i * size : (i + 1) * size] = x
        # Display the grid
        scale = 20. / n_features
        plt.figure(figsize=(scale * n_features, scale))
        plt.title(layer_name)
        plt.grid(False)
        plt.imshow(display_grid, aspect='auto', cmap='viridis')
The output is a series of feature-map grids, one per layer.
Reading from top to bottom, the network can be seen progressively transforming the picture, keeping only the features relevant to the target (here, whether it shows a human or a horse). (The original course describes this as a distillation process that keeps the essence of the image.)
Next, download the validation dataset:
wget https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip
Then unpack it into the /opt/data/horse_human_validation directory:
# Directory with our validation horse pictures
validation_horse_dir = os.path.join('/opt/data/horse_human_validation/horses')
# Directory with our validation human pictures
validation_human_dir = os.path.join('/opt/data/horse_human_validation/humans')
Next, build an ImageDataGenerator for the validation data:
validation_datagen = ImageDataGenerator(rescale=1/255)
# Flow validation images in batches of 32 using validation_datagen generator
validation_generator = validation_datagen.flow_from_directory(
    '/opt/data/horse_human_validation/',  # This is the source directory for validation images
    target_size=(300, 300),  # All images will be resized to 300x300
    batch_size=32,
    # Since we use binary_crossentropy loss, we need binary labels
    class_mode='binary')
Re-run model.fit_generator, this time passing the validation parameters:
history = model.fit_generator(
    train_generator,
    steps_per_epoch=8,
    epochs=15,
    verbose=1,
    validation_data=validation_generator,
    validation_steps=8)
The output is:
Epoch 1/15
8/8 [==============================] - 22s 3s/step - loss: 0.7348 - acc: 0.5000
9/9 [==============================] - 362s 40s/step - loss: 0.9071 - acc: 0.5307 - val_loss: 0.7348 - val_acc: 0.5000
Epoch 2/15
8/8 [==============================] - 22s 3s/step - loss: 0.7880 - acc: 0.5078
9/9 [==============================] - 285s 32s/step - loss: 0.8083 - acc: 0.5735 - val_loss: 0.7880 - val_acc: 0.5078
Epoch 3/15
8/8 [==============================] - 40s 5s/step - loss: 0.6607 - acc: 0.5938
9/9 [==============================] - 297s 33s/step - loss: 1.0091 - acc: 0.7322 - val_loss: 0.6607 - val_acc: 0.5938
Epoch 4/15
8/8 [==============================] - 37s 5s/step - loss: 0.4939 - acc: 0.7617
9/9 [==============================] - 283s 31s/step - loss: 0.6453 - acc: 0.6699 - val_loss: 0.4939 - val_acc: 0.7617
Epoch 5/15
8/8 [==============================] - 24s 3s/step - loss: 1.1673 - acc: 0.7695
9/9 [==============================] - 289s 32s/step - loss: 0.3698 - acc: 0.8763 - val_loss: 1.1673 - val_acc: 0.7695
Epoch 6/15
8/8 [==============================] - 23s 3s/step - loss: 0.7771 - acc: 0.7930
9/9 [==============================] - 273s 30s/step - loss: 0.2960 - acc: 0.8627 - val_loss: 0.7771 - val_acc: 0.7930
Epoch 7/15
8/8 [==============================] - 25s 3s/step - loss: 0.5656 - acc: 0.8906
9/9 [==============================] - 284s 32s/step - loss: 0.1102 - acc: 0.9542 - val_loss: 0.5656 - val_acc: 0.8906
Epoch 8/15
8/8 [==============================] - 26s 3s/step - loss: 2.3894 - acc: 0.7031
9/9 [==============================] - 289s 32s/step - loss: 0.3854 - acc: 0.8987 - val_loss: 2.3894 - val_acc: 0.7031
Epoch 9/15
8/8 [==============================] - 36s 5s/step - loss: 2.4947 - acc: 0.7031
9/9 [==============================] - 301s 33s/step - loss: 0.1747 - acc: 0.9445 - val_loss: 2.4947 - val_acc: 0.7031
Epoch 10/15
8/8 [==============================] - 34s 4s/step - loss: 0.2772 - acc: 0.9219
9/9 [==============================] - 471s 52s/step - loss: 0.0700 - acc: 0.9679 - val_loss: 0.2772 - val_acc: 0.9219
Epoch 11/15
8/8 [==============================] - 32s 4s/step - loss: 1.1290 - acc: 0.8008
9/9 [==============================] - 373s 41s/step - loss: 0.8948 - acc: 0.8705 - val_loss: 1.1290 - val_acc: 0.8008
Epoch 12/15
8/8 [==============================] - 43s 5s/step - loss: 1.0806 - acc: 0.8594
9/9 [==============================] - 608s 68s/step - loss: 0.0565 - acc: 0.9834 - val_loss: 1.0806 - val_acc: 0.8594
Epoch 13/15
8/8 [==============================] - 44s 5s/step - loss: 1.1165 - acc: 0.8398
9/9 [==============================] - 552s 61s/step - loss: 0.0357 - acc: 0.9864 - val_loss: 1.1165 - val_acc: 0.8398
Epoch 14/15
8/8 [==============================] - 35s 4s/step - loss: 1.7010 - acc: 0.7617
9/9 [==============================] - 647s 72s/step - loss: 0.0177 - acc: 0.9951 - val_loss: 1.7010 - val_acc: 0.7617
Epoch 15/15
8/8 [==============================] - 35s 4s/step - loss: 1.1240 - acc: 0.8555
9/9 [==============================] - 503s 56s/step - loss: 0.0318 - acc: 0.9854 - val_loss: 1.1240 - val_acc: 0.8555
These results show the model fits the training set well (about 98.5% accuracy) and also performs reasonably on the validation set, which was never used for training (about 85.5% accuracy).
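To see the gap between training and validation accuracy across epochs, the history object returned by fit_generator can be plotted (a minimal sketch; this version of Keras stores accuracy under the 'acc'/'val_acc' keys seen in the logs above):
import matplotlib.pyplot as plt
# Plot training vs. validation accuracy per epoch
acc = history.history['acc']
val_acc = history.history['val_acc']
epochs = range(len(acc))
plt.plot(epochs, acc, label='Training accuracy')
plt.plot(epochs, val_acc, label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.show()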
Classifying the new pictures again gives the following result:
[1.]
human_2d13d9403be7421981a325ff383a0f4a.jpg is a human
[0.]
human-4320806490398999875.jpg is a horse
[0.]
horse-image-placeholder-title.jpg is a horse
[0.]
horse-th.jpg is a horse
Now only one human picture is misclassified, so the accuracy has improved.
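Training at 300x300 is slow, so it is worth trying 150x150 inputs. Three places must change consistently (a sketch reusing the paths from above):
# 1. The model's first layer
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),
# 2. The training generator
train_generator = train_datagen.flow_from_directory(
    '/opt/data/horse_human/', target_size=(150, 150),
    batch_size=128, class_mode='binary')
# 3. The validation generator
validation_generator = validation_datagen.flow_from_directory(
    '/opt/data/horse_human_validation/', target_size=(150, 150),
    batch_size=32, class_mode='binary')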
After making these changes and retraining the model, the output is:
Epoch 1/15
8/8 [==============================] - 8s 963ms/step - loss: 0.6753 - acc: 0.5000
9/9 [==============================] - 76s 8s/step - loss: 0.7239 - acc: 0.5278 - val_loss: 0.6753 - val_acc: 0.5000
Epoch 2/15
8/8 [==============================] - 9s 1s/step - loss: 0.4213 - acc: 0.8438
9/9 [==============================] - 73s 8s/step - loss: 0.7311 - acc: 0.7050 - val_loss: 0.4213 - val_acc: 0.8438
Epoch 3/15
8/8 [==============================] - 9s 1s/step - loss: 1.0088 - acc: 0.6172
9/9 [==============================] - 68s 8s/step - loss: 0.5170 - acc: 0.8169 - val_loss: 1.0088 - val_acc: 0.6172
Epoch 4/15
8/8 [==============================] - 8s 980ms/step - loss: 0.3365 - acc: 0.8828
9/9 [==============================] - 69s 8s/step - loss: 0.5933 - acc: 0.7897 - val_loss: 0.3365 - val_acc: 0.8828
Epoch 5/15
8/8 [==============================] - 8s 976ms/step - loss: 0.4389 - acc: 0.8516
9/9 [==============================] - 72s 8s/step - loss: 0.2726 - acc: 0.8987 - val_loss: 0.4389 - val_acc: 0.8516
Epoch 6/15
8/8 [==============================] - 8s 943ms/step - loss: 1.1277 - acc: 0.8008
9/9 [==============================] - 71s 8s/step - loss: 0.1266 - acc: 0.9494 - val_loss: 1.1277 - val_acc: 0.8008
Epoch 7/15
8/8 [==============================] - 9s 1s/step - loss: 1.9571 - acc: 0.6953
9/9 [==============================] - 72s 8s/step - loss: 0.1502 - acc: 0.9464 - val_loss: 1.9571 - val_acc: 0.6953
Epoch 8/15
8/8 [==============================] - 11s 1s/step - loss: 0.7124 - acc: 0.8359
9/9 [==============================] - 80s 9s/step - loss: 0.3604 - acc: 0.8763 - val_loss: 0.7124 - val_acc: 0.8359
Epoch 9/15
8/8 [==============================] - 14s 2s/step - loss: 0.6322 - acc: 0.8320
9/9 [==============================] - 85s 9s/step - loss: 0.1740 - acc: 0.9416 - val_loss: 0.6322 - val_acc: 0.8320
Epoch 10/15
8/8 [==============================] - 8s 940ms/step - loss: 0.6428 - acc: 0.8242
9/9 [==============================] - 78s 9s/step - loss: 0.1222 - acc: 0.9640 - val_loss: 0.6428 - val_acc: 0.8242
Epoch 11/15
8/8 [==============================] - 11s 1s/step - loss: 0.8398 - acc: 0.8516
9/9 [==============================] - 72s 8s/step - loss: 0.0538 - acc: 0.9844 - val_loss: 0.8398 - val_acc: 0.8516
Epoch 12/15
8/8 [==============================] - 9s 1s/step - loss: 0.4072 - acc: 0.8242
9/9 [==============================] - 72s 8s/step - loss: 0.5111 - acc: 0.8802 - val_loss: 0.4072 - val_acc: 0.8242
Epoch 13/15
8/8 [==============================] - 8s 996ms/step - loss: 0.8312 - acc: 0.8438
9/9 [==============================] - 72s 8s/step - loss: 0.1396 - acc: 0.9426 - val_loss: 0.8312 - val_acc: 0.8438
Epoch 14/15
8/8 [==============================] - 10s 1s/step - loss: 0.8713 - acc: 0.8477
9/9 [==============================] - 72s 8s/step - loss: 0.1203 - acc: 0.9552 - val_loss: 0.8713 - val_acc: 0.8477
Epoch 15/15
8/8 [==============================] - 8s 1s/step - loss: 1.0197 - acc: 0.8516
9/9 [==============================] - 75s 8s/step - loss: 0.0227 - acc: 0.9942 - val_loss: 1.0197 - val_acc: 0.8516
Using the new model to classify the new pictures again gives:
[1.]
human_2d13d9403be7421981a325ff383a0f4a.jpg is a human
[0.]
human-4320806490398999875.jpg is a horse
[0.]
horse-image-placeholder-title.jpg is a horse
[0.]
horse-th.jpg is a horse
Comparing the two runs:
a. with a target_size of 150, training is noticeably faster;
b. with 150, the model's accuracy drops somewhat, and so does its performance on the new pictures.
Quiz questions for this week:
1. Using Image Generator, how do you label images?
2. What method on the Image Generator is used to normalize the image?
3. How did we specify the training size for the images?
4. When we specify the input_shape to be (300, 300, 3), what does that mean?
5. If your training data is close to 1.000 accuracy, but your validation data isn't, what's the risk here?
6. Convolutional Neural Networks are better for classifying images like horses and humans because:
7. After reducing the size of the images, the training results were different. Why?
My guess:
- a
- a
- b
- d
- a
- d
- b
Exercise: given a dataset of 80 pictures, 40 happy faces and 40 sad faces, train a model that reaches 99.9%+ accuracy.
Hint: three convolutional layers work best!
Starter code is below (one possible way to fill in the blanks follows the template).
Note: download the data first, or fetch it with:
wget https://storage.googleapis.com/laurencemoroney-blog.appspot.com/happy-or-sad.zip
After downloading, unzip it and move it to the /opt/data/happy_sad directory.
import tensorflow as tf
import os

DESIRED_ACCURACY = 0.999

class myCallback(# Your Code):
    # Your Code

callbacks = myCallback()

# This Code Block should Define and Compile the Model
model = tf.keras.models.Sequential([
    # Your Code Here
])
from tensorflow.keras.optimizers import RMSprop
model.compile(# Your Code Here #)

# This code block should create an instance of an ImageDataGenerator called train_datagen
# And a train_generator by calling train_datagen.flow_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = # Your Code Here
train_generator = train_datagen.flow_from_directory(
    # Your Code Here)
# Expected output: 'Found 80 images belonging to 2 classes'

# This code block should call model.fit_generator and train for
# a number of epochs.
history = model.fit_generator(
    # Your Code Here)
# Expected output: "Reached 99.9% accuracy so cancelling training!"
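One possible way to fill in the blanks (my own sketch, not the official answer, which can be downloaded below; it assumes the data was unzipped to /opt/data/happy_sad with one subdirectory per class):
import tensorflow as tf
import os
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator

DESIRED_ACCURACY = 0.999

class myCallback(tf.keras.callbacks.Callback):
    # Stop training once the desired accuracy has been reached
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('acc') is not None and logs.get('acc') > DESIRED_ACCURACY:
            print("\nReached 99.9% accuracy so cancelling training!")
            self.model.stop_training = True

callbacks = myCallback()

# Three convolutional layers, as the hint suggests
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(lr=0.001),
              metrics=['acc'])

train_datagen = ImageDataGenerator(rescale=1/255)
train_generator = train_datagen.flow_from_directory(
    '/opt/data/happy_sad/',
    target_size=(150, 150),
    batch_size=10,  # 80 images / 10 per batch = 8 steps per epoch
    class_mode='binary')

history = model.fit_generator(
    train_generator,
    steps_per_epoch=8,
    epochs=30,
    callbacks=[callbacks],
    verbose=1)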
Answer Download Here
Code Download Here:
First
Second
Third