Deep Learning: Fine-tuning a ResNet50 CNN with TensorFlow 2.0

import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import sklearn
import sys
import tensorflow as tf
import time

from tensorflow import keras
from PIL import Image

%matplotlib inline

print(tf.__version__)

2.0.0-alpha0

train_dir = "./input/training"
valid_dir = "./input/validation"
label_file = "./input/monkey_labels.txt"
print(os.path.exists(train_dir))
print(os.path.exists(valid_dir))
print(os.path.exists(label_file))

print(os.listdir(train_dir))
print(os.listdir(valid_dir))

True
True
True
['n0', 'n1', 'n2', 'n3', 'n4', 'n5', 'n6', 'n7', 'n8', 'n9']
['n0', 'n1', 'n2', 'n3', 'n4', 'n5', 'n6', 'n7', 'n8', 'n9']
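
The label file is checked for existence but never inspected. A minimal sketch for reading it with the already-imported pandas, assuming monkey_labels.txt is a comma-separated table with a header row (the exact column layout is an assumption):

# Sketch: peek at the class descriptions, assuming a comma-separated
# file with a header row (column names may differ in your copy).
labels = pd.read_csv(label_file, header=0)
print(labels)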

# ResNet50 expects input images of size (224, 224)
height = 224
width = 224
channels = 3
batch_size = 24
num_classes = 10

# keras.preprocessing.image.ImageDataGenerator: image data generator, used here for data augmentation
train_datagen = keras.preprocessing.image.ImageDataGenerator(
    preprocessing_function = keras.applications.resnet50.preprocess_input,
    rotation_range = 40,    
    width_shift_range = 0.2,    
    height_shift_range = 0.2,  
    shear_range = 0.2,        
    zoom_range = 0.2,        
    horizontal_flip = True,    
    fill_mode = 'nearest',      
)
# train_datagen.flow_from_directory: reads images from a directory and yields batches of augmented data
# class_mode: controls the label format; "categorical" one-hot encodes the labels
train_generator = train_datagen.flow_from_directory(train_dir,
                                                   target_size = (height, width),
                                                   batch_size = batch_size,
                                                   seed = 7,
                                                   shuffle = True,
                                                   class_mode = "categorical") 
# The validation set gets the same preprocessing as training, but no augmentation
valid_datagen = keras.preprocessing.image.ImageDataGenerator(
    preprocessing_function = keras.applications.resnet50.preprocess_input)
valid_generator = valid_datagen.flow_from_directory(valid_dir,
                                                    target_size = (height, width),
                                                    batch_size = batch_size,
                                                    seed = 7,
                                                    shuffle = False,
                                                    class_mode = "categorical")
# generator.samples: number of samples found in the directory
train_num = train_generator.samples
valid_num = valid_generator.samples
print(train_num, valid_num)

Found 1098 images belonging to 10 classes.
Found 272 images belonging to 10 classes.
1098 272
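
To sanity-check what the generators yield, one can pull a single batch; a minimal sketch (not part of the original run):

# Sketch: fetch one batch to verify shapes; x should be
# (batch_size, height, width, channels) and y one-hot (batch_size, num_classes).
for x, y in train_generator:
    print(x.shape, y.shape)
    break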

resnet50_fine_tune = keras.models.Sequential()

resnet50_fine_tune.add(keras.applications.ResNet50(include_top = False, pooling = 'avg', weights = 'imagenet'))
resnet50_fine_tune.add(keras.layers.Dense(num_classes, activation = 'softmax'))
resnet50_fine_tune.layers[0].trainable = False
resnet50_fine_tune.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=['accuracy'])
resnet50_fine_tune.summary()

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
resnet50 (Model)             (None, 2048)              23587712  
_________________________________________________________________
dense (Dense)                (None, 10)                20490     
=================================================================
Total params: 23,608,202
Trainable params: 20,490
Non-trainable params: 23,587,712
_________________________________________________________________
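
With layers[0].trainable = False only the new Dense head is trained. An alternative fine-tuning variant keeps the last few ResNet50 layers trainable as well; a sketch of that idea, where the cutoff of 5 layers and the name resnet50_new are illustrative choices, not part of the run above:

# Sketch: freeze everything except the last 5 layers of the backbone
# (the slice [0:-5] is an arbitrary choice for illustration).
resnet50 = keras.applications.ResNet50(include_top = False,
                                       pooling = 'avg',
                                       weights = 'imagenet')
for layer in resnet50.layers[0:-5]:
    layer.trainable = False

resnet50_new = keras.models.Sequential([
    resnet50,
    keras.layers.Dense(num_classes, activation = 'softmax'),
])
resnet50_new.compile(loss = "categorical_crossentropy",
                     optimizer = "sgd", metrics = ['accuracy'])
resnet50_new.summary()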

epochs = 10
history = resnet50_fine_tune.fit_generator(train_generator,
                                           steps_per_epoch = train_num // batch_size,
                                           epochs = epochs,
                                           validation_data = valid_generator,
                                           validation_steps = valid_num // batch_size)

Epoch 1/10
45/45 [==============================] - 289s 6s/step - loss: 2.2982 - accuracy: 0.1518 - val_loss: 2.0966 - val_accuracy: 0.2386
Epoch 2/10
45/45 [==============================] - 259s 6s/step - loss: 2.0478 - accuracy: 0.2514 - val_loss: 1.8225 - val_accuracy: 0.3674
Epoch 3/10
45/45 [==============================] - 269s 6s/step - loss: 1.8017 - accuracy: 0.3994 - val_loss: 1.6067 - val_accuracy: 0.4848
Epoch 4/10
45/45 [==============================] - 269s 6s/step - loss: 1.6268 - accuracy: 0.4926 - val_loss: 1.4279 - val_accuracy: 0.5985
Epoch 5/10
45/45 [==============================] - 300s 7s/step - loss: 1.4954 - accuracy: 0.5642 - val_loss: 1.2865 - val_accuracy: 0.7008
Epoch 6/10
45/45 [==============================] - 290s 6s/step - loss: 1.3311 - accuracy: 0.6415 - val_loss: 1.1639 - val_accuracy: 0.7576
Epoch 7/10
45/45 [==============================] - 260s 6s/step - loss: 1.2305 - accuracy: 0.7020 - val_loss: 1.0586 - val_accuracy: 0.8030
Epoch 8/10
45/45 [==============================] - 254s 6s/step - loss: 1.1048 - accuracy: 0.7607 - val_loss: 0.9693 - val_accuracy: 0.8409
Epoch 9/10
45/45 [==============================] - 284s 6s/step - loss: 1.0260 - accuracy: 0.7952 - val_loss: 0.8904 - val_accuracy: 0.8674
Epoch 10/10
45/45 [==============================] - 266s 6s/step - loss: 0.9508 - accuracy: 0.7970 - val_loss: 0.8229 - val_accuracy: 0.8674
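
The history object records the per-epoch metrics printed above. A minimal sketch for plotting them with the already-imported pandas and matplotlib; plot_learning_curves is a helper defined here, not a Keras API:

# Sketch: plot a metric and its validation counterpart from history.history
# (keys in TF 2.0 are 'loss', 'accuracy', 'val_loss', 'val_accuracy').
def plot_learning_curves(history, label, epochs, min_value, max_value):
    data = {label: history.history[label],
            'val_' + label: history.history['val_' + label]}
    pd.DataFrame(data).plot(figsize = (8, 5))
    plt.grid(True)
    plt.axis([0, epochs, min_value, max_value])
    plt.show()

plot_learning_curves(history, 'accuracy', epochs, 0, 1)
plot_learning_curves(history, 'loss', epochs, 0, 2.5)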
