Training on the horse-or-human dataset with an image generator

import os
import zipfile

# Unzip the dataset; raw strings keep the backslashes in the Windows paths from being read as escapes
local_zip = r'E:\PyCharm\dataset\horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall(r'E:\PyCharm\dataset\horse-or-human')
zip_ref.close()

Point to the horse and human training directories

train_horse_dir = os.path.join(r'E:\PyCharm\dataset\horse-or-human\horses')
train_human_dir = os.path.join(r'E:\PyCharm\dataset\horse-or-human\humans')

Look at the file names in the training directories

train_horse_names = os.listdir(train_horse_dir)
train_human_names = os.listdir(train_human_dir)
print(train_horse_names[:10])
print(train_human_names[:10])
['horse01-0.png', 'horse01-1.png', 'horse01-2.png', 'horse01-3.png', 'horse01-4.png', 'horse01-5.png', 'horse01-6.png', 'horse01-7.png', 'horse01-8.png', 'horse01-9.png']
['human01-00.png', 'human01-01.png', 'human01-02.png', 'human01-03.png', 'human01-04.png', 'human01-05.png', 'human01-06.png', 'human01-07.png', 'human01-08.png', 'human01-09.png']

Check the total number of images in the training set

print('total training horse images:',len(os.listdir(train_horse_dir)))
print('total training human images:',len(os.listdir(train_human_dir)))
total training horse images: 500
total training human images: 527
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# Display images in a 4x4 grid; start iterating from image index 0
nrows = 4
ncols = 4
pic_index = 0

Display a batch of 8 horse pictures and 8 human pictures

# Set up the matplotlib figure and size it to fit the 4x4 grid
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 4)

pic_index += 8
next_horse_pix = [os.path.join(train_horse_dir, fname) 
                for fname in train_horse_names[pic_index-8:pic_index]]
next_human_pix = [os.path.join(train_human_dir, fname) 
                for fname in train_human_names[pic_index-8:pic_index]]

for i, img_path in enumerate(next_horse_pix+next_human_pix):
  # Set up subplot; subplot indices start at 1
  sp = plt.subplot(nrows, ncols, i + 1)
  sp.axis('Off') # Don't show axes (or gridlines)

  img = mpimg.imread(img_path)
  plt.imshow(img) 

plt.show()

[Figure: 4x4 grid of sample horse and human training images (output_12_0.png)]

Build the model

import tensorflow as tf
model = tf.keras.models.Sequential([
    # The input images are 300x300 with 3 color channels
    # Five convolution + max-pooling blocks in total
    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300,300,3)),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    # Flatten the feature maps before feeding them into the dense layers
    tf.keras.layers.Flatten(),
    # 512 hidden neurons
    tf.keras.layers.Dense(512, activation='relu'),
    # A single sigmoid output neuron: values near 0 mean 'horse', values near 1 mean 'human'
    tf.keras.layers.Dense(1, activation='sigmoid')
])
WARNING:tensorflow:From E:\anaconda3\Anaconda\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 298, 298, 16)      448       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 149, 149, 16)      0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 147, 147, 32)      4640      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 73, 73, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 71, 71, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 35, 35, 64)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 33, 33, 64)        36928     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 16, 16, 64)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 14, 14, 64)        36928     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 3136)              0         
_________________________________________________________________
dense (Dense)                (None, 512)               1606144   
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 513       
=================================================================
Total params: 1,704,097
Trainable params: 1,704,097
Non-trainable params: 0
_________________________________________________________________
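
As a sanity check on the summary, the shapes and parameter counts can be reproduced by hand: each 3x3 'valid' convolution trims 2 pixels from the height and width, each 2x2 max-pooling halves them (rounding down), and a Conv2D layer holds (kernel_height * kernel_width * input_channels + 1) * filters parameters. The small calculation below is illustrative only and not part of the original notebook.

# Illustrative only: recompute the first Conv2D layer's output size and parameter count
kernel, filters, in_channels, in_size = 3, 16, 3, 300
out_size = in_size - (kernel - 1)                       # 300 -> 298 with 'valid' padding
params = (kernel * kernel * in_channels + 1) * filters  # (3*3*3 + 1) * 16 = 448
print(out_size, params)                                 # matches the 298 and 448 in the summary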

Set the loss function and the optimizer

from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy', optimizer=RMSprop(lr=0.001), metrics=['acc'])
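
binary_crossentropy pairs naturally with the single sigmoid output: for a label y in {0, 1} and a predicted probability p it computes -[y*log(p) + (1-y)*log(1-p)]. The NumPy snippet below is purely illustrative and only shows that a confident correct prediction gives a small loss while a confident wrong one is heavily penalized.

import numpy as np

# Illustrative only: binary cross-entropy for a single prediction
def bce(y_true, p):
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(bce(1, 0.95))  # human predicted as human with high confidence -> about 0.05
print(bce(1, 0.05))  # human predicted as horse with high confidence -> about 3.0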

Preprocess the data with an image generator

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values from [0, 255] to [0, 1]
train_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images from the directory in batches of 128
train_generator = train_datagen.flow_from_directory(
    r'E:\PyCharm\dataset\horse-or-human',
    target_size=(300,300),
    batch_size=128,
    class_mode='binary')
Found 1027 images belonging to 2 classes.
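
flow_from_directory assigns labels to the subfolders in alphabetical order, so horses should map to 0 and humans to 1, matching the comment on the output layer. To confirm the mapping and see how many batches make up one pass over the data, a quick check (not in the original notebook) looks like this:

# Quick check of the class-to-index mapping and the number of batches per epoch
print(train_generator.class_indices)  # expected: {'horses': 0, 'humans': 1}
print(len(train_generator))           # ceil(1027 / 128) = 9 batches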

Train the model

# 9 steps per epoch covers all 1027 images with a batch size of 128 (ceil(1027/128) = 9)
history = model.fit_generator(train_generator, steps_per_epoch=9, epochs=15, verbose=1)
WARNING:tensorflow:From E:\anaconda3\Anaconda\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/15
9/9 [==============================] - 34s 4s/step - loss: 1.3804 - acc: 0.5716
Epoch 2/15
9/9 [==============================] - 31s 3s/step - loss: 0.5235 - acc: 0.7244
Epoch 3/15
9/9 [==============================] - 28s 3s/step - loss: 0.6241 - acc: 0.7955
Epoch 4/15
9/9 [==============================] - 26s 3s/step - loss: 0.4100 - acc: 0.8423
Epoch 5/15
9/9 [==============================] - 27s 3s/step - loss: 0.2192 - acc: 0.9133
Epoch 6/15
9/9 [==============================] - 27s 3s/step - loss: 0.2067 - acc: 0.9396
Epoch 7/15
9/9 [==============================] - 27s 3s/step - loss: 0.1124 - acc: 0.9523
Epoch 8/15
9/9 [==============================] - 27s 3s/step - loss: 0.1809 - acc: 0.9202
Epoch 9/15
9/9 [==============================] - 26s 3s/step - loss: 0.3245 - acc: 0.8929
Epoch 10/15
9/9 [==============================] - 26s 3s/step - loss: 0.0822 - acc: 0.9679
Epoch 11/15
9/9 [==============================] - 27s 3s/step - loss: 0.0171 - acc: 0.9942
Epoch 12/15
9/9 [==============================] - 26s 3s/step - loss: 0.0070 - acc: 0.9990
Epoch 13/15
9/9 [==============================] - 26s 3s/step - loss: 0.1541 - acc: 0.9640
Epoch 14/15
9/9 [==============================] - 26s 3s/step - loss: 0.1282 - acc: 0.9484
Epoch 15/15
9/9 [==============================] - 26s 3s/step - loss: 0.0081 - acc: 0.9971
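
The accuracy climbs quickly but also dips in places (for example epochs 8-9 and 13-14), which is not unusual when each epoch sees only 9 batches. Since there is no validation set here, one simple way to inspect the run is to plot the training curves stored in history; a minimal sketch:

# Minimal sketch: plot training accuracy and loss from the history object
import matplotlib.pyplot as plt

acc = history.history['acc']
loss = history.history['loss']
epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, label='training accuracy')
plt.plot(epochs, loss, label='training loss')
plt.xlabel('epoch')
plt.legend()
plt.show()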

Run the model and use it to make a prediction

import numpy as np
from google.colab import files
from keras.preprocessing import image
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input> in <module>
      1 import numpy as np
----> 2 from google.colab import files
      3 from keras.preprocessing import image

ModuleNotFoundError: No module named 'google.colab'
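
google.colab only exists inside Google Colab, so its file-upload helper cannot be used when the notebook runs locally on Windows; that is why the import above fails. A hedged local alternative is to load a test image straight from disk and call model.predict on it. The sketch below assumes a hypothetical image at r'E:\PyCharm\dataset\test\sample.png'; replace it with any real image path.

import numpy as np
from tensorflow.keras.preprocessing import image

# Local alternative sketch; the test image path below is hypothetical
img_path = r'E:\PyCharm\dataset\test\sample.png'
img = image.load_img(img_path, target_size=(300, 300))
x = image.img_to_array(img) / 255.0   # apply the same 1/255 rescaling as the training generator
x = np.expand_dims(x, axis=0)         # shape (1, 300, 300, 3)

prediction = model.predict(x)[0][0]
if prediction > 0.5:
    print(img_path, 'is a human')
else:
    print(img_path, 'is a horse')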
