Training a Residual Network (ResNet) on CIFAR-10

Train residual networks (ResNet) on the CIFAR-10 dataset.


ResNet v1 [a]: Deep Residual Learning for Image Recognition

ResNet v2 [b]: Identity Mappings in Deep Residual Networks


  • ResNet v1

Model         |   n | 200-epoch accuracy | Original paper accuracy | sec/epoch (GTX1080Ti)
ResNet20 v1   |   3 | 92.16 %            | 91.25 %                 | 35
ResNet32 v1   |   5 | 92.46 %            | 92.49 %                 | 50
ResNet44 v1   |   7 | 92.50 %            | 92.83 %                 | 70
ResNet56 v1   |   9 | 92.71 %            | 93.03 %                 | 90
ResNet110 v1  |  18 | 92.65 %            | 93.39±.16 %             | 165
ResNet164 v1  |  27 | -                  | 94.07 %                 | -
ResNet1001 v1 | N/A | -                  | 92.39 %                 | -
  • ResNet v2

Model         |   n | 200-epoch accuracy | Original paper accuracy | sec/epoch (GTX1080Ti)
ResNet20 v2   |   2 | -                  | -                       | -
ResNet32 v2   | N/A | NA                 | NA                      | NA
ResNet44 v2   | N/A | NA                 | NA                      | NA
ResNet56 v2   |   6 | 93.01 %            | NA                      | 100
ResNet110 v2  |  12 | 93.15 %            | 93.63 %                 | 180
ResNet164 v2  |  18 | -                  | 94.54 %                 | -
ResNet1001 v2 | 111 | -                  | 95.08±.14 %             | -
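
How the n column maps to network depth (a small helper, not part of the original script, to make the relation explicit):

def depth_from_n(n, version):
    # v1: 1 input conv + 3 stages * n blocks * 2 convs + 1 dense = 6n + 2
    # v2: 1 input conv + 3 stages * n blocks * 3 convs + 1 dense = 9n + 2
    return n * 6 + 2 if version == 1 else n * 9 + 2

assert depth_from_n(3, version=1) == 20      # ResNet20 v1
assert depth_from_n(18, version=1) == 110    # ResNet110 v1
assert depth_from_n(111, version=2) == 1001  # ResNet1001 v2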
from __future__ import print_function
import keras
from keras.layers import Dense, Conv2D, BatchNormalization, Activation
from keras.layers import AveragePooling2D, Input, Flatten
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras.callbacks import ReduceLROnPlateau
from keras.preprocessing.image import ImageDataGenerator
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras.datasets import cifar10
import numpy as np
import os

Learning rate schedule

def lr_schedule(epoch):
    """Learning Rate Schedule

    Learning rate is scheduled to be reduced after 80, 120, 160, 180 epochs.
    Called automatically every epoch as part of callbacks during training.

    # Arguments
        epoch (int): current epoch number (starting from 0)

    # Returns
        lr (float32): learning rate
    """
    lr = 1e-3
    if epoch > 180:
        lr *= 0.5e-3
    elif epoch > 160:
        lr *= 1e-3
    elif epoch > 120:
        lr *= 1e-2
    elif epoch > 80:
        lr *= 1e-1
    print('Learning rate: ', lr)
    return lr
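
A quick sanity check of the schedule (a sketch, not part of the training script): since the comparisons are strict, each drop takes effect on the epoch after the boundary.

for e in (0, 81, 121, 161, 181):
    lr_schedule(e)
# prints 1e-3, 1e-4, 1e-5, 1e-6 and finally 5e-7 (= 1e-3 * 0.5e-3)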

Convolutional layer

def resnet_layer(inputs,
                 num_filters=16,
                 kernel_size=3,
                 strides=1,
                 activation='relu',
                 batch_normalization=True,
                 conv_first=True):
    """2D Convolution-Batch Normalization-Activation stack builder

    # Arguments
        inputs (tensor): input tensor from input image or previous layer
        num_filters (int): Conv2D number of filters
        kernel_size (int): Conv2D square kernel dimensions
        strides (int): Conv2D square stride dimensions
        activation (string): activation name
        batch_normalization (bool): whether to include batch normalization
        conv_first (bool): conv-bn-activation (True) or
            bn-activation-conv (False)

    # Returns
        x (tensor): tensor as input to the next layer
    """
    conv = Conv2D(num_filters,
                  kernel_size=kernel_size,
                  strides=strides,
                  padding='same',
                  kernel_initializer='he_normal',
                  kernel_regularizer=l2(1e-4))

    x = inputs
    if conv_first:
        x = conv(x)
        if batch_normalization:
            x = BatchNormalization()(x)
        if activation is not None:
            x = Activation(activation)(x)
    else:
        if batch_normalization:
            x = BatchNormalization()(x)
        if activation is not None:
            x = Activation(activation)(x)
        x = conv(x)
    return x
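
For illustration (a minimal sketch assuming the imports above), the same builder yields either ordering:

demo_input = Input(shape=(32, 32, 3))
v1_style = resnet_layer(inputs=demo_input, conv_first=True)   # Conv2D-BN-ReLU, used by ResNet v1
v2_style = resnet_layer(inputs=demo_input, conv_first=False)  # BN-ReLU-Conv2D, used by ResNet v2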

ResNet v1

def resnet_v1(input_shape, depth, num_classes=10):
    """ResNet Version 1 Model builder [a]

    Stacks of 2 x (3 x 3) Conv2D-BN-ReLU
    Last ReLU is after the shortcut connection.
    At the beginning of each stage, the feature map size is halved (downsampled)
    by a convolutional layer with strides=2, while the number of filters is
    doubled. Within each stage, the layers have the same number of filters and
    the same feature map sizes.
    Feature map sizes:
    stage 0: 32x32, 16
    stage 1: 16x16, 32
    stage 2:  8x8,  64
    The number of parameters is approximately the same as in Table 6 of [a]:
    ResNet20 0.27M
    ResNet32 0.46M
    ResNet44 0.66M
    ResNet56 0.85M
    ResNet110 1.7M

    # Arguments
        input_shape (tensor): shape of input image tensor
        depth (int): number of core convolutional layers
        num_classes (int): number of classes (CIFAR10 has 10)

    # Returns
        model (Model): Keras model instance
    """
    if (depth - 2) % 6 != 0:
        raise ValueError('depth should be 6n+2 (eg 20, 32, 44 in [a])')
    # Start model definition.
    num_filters = 16
    num_res_blocks = int((depth - 2) / 6)

    inputs = Input(shape=input_shape)
    x = resnet_layer(inputs=inputs)
    # Instantiate the stack of residual units
    for stack in range(3):
        for res_block in range(num_res_blocks):
            strides = 1
            if stack > 0 and res_block == 0:  # first layer but not first stack
                strides = 2  # downsample
            y = resnet_layer(inputs=x,
                             num_filters=num_filters,
                             strides=strides)
            y = resnet_layer(inputs=y,
                             num_filters=num_filters,
                             activation=None)
            if stack > 0 and res_block == 0:  # first layer but not first stack
                # linear projection residual shortcut connection to match
                # changed dims
                x = resnet_layer(inputs=x,
                                 num_filters=num_filters,
                                 kernel_size=1,
                                 strides=strides,
                                 activation=None,
                                 batch_normalization=False)
            x = keras.layers.add([x, y])
            x = Activation('relu')(x)
        num_filters *= 2

    # Add classifier on top.
    # v1 does not use BN after last shortcut connection-ReLU
    x = AveragePooling2D(pool_size=8)(x)
    y = Flatten()(x)
    outputs = Dense(num_classes,
                    activation='softmax',
                    kernel_initializer='he_normal')(y)

    # Instantiate model.
    model = Model(inputs=inputs, outputs=outputs)
    return model
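
As a quick check (a sketch, not part of the training run below): n = 3 gives depth = 6 * 3 + 2 = 20, and the resulting ResNet20 v1 reports the ~0.27M parameters listed in the docstring (274,442, matching the model summary further down).

resnet20 = resnet_v1(input_shape=(32, 32, 3), depth=20)
print(resnet20.count_params())  # 274442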

ResNet v2

def resnet_v2(input_shape, depth, num_classes=10):
    """ResNet Version 2 Model builder [b]

    Stacks of (1 x 1)-(3 x 3)-(1 x 1) BN-ReLU-Conv2D, also known as
    bottleneck layers.
    The first shortcut connection per stage is a 1 x 1 Conv2D;
    the second and subsequent shortcut connections are identity.
    At the beginning of each stage, the feature map size is halved (downsampled)
    by a convolutional layer with strides=2, while the number of filter maps is
    doubled. Within each stage, the layers have the same number of filters and
    the same feature map sizes.
    Feature map sizes:
    conv1  : 32x32,  16
    stage 0: 32x32,  64
    stage 1: 16x16, 128
    stage 2:  8x8,  256

    # Arguments
        input_shape (tensor): shape of input image tensor
        depth (int): number of core convolutional layers
        num_classes (int): number of classes (CIFAR10 has 10)

    # Returns
        model (Model): Keras model instance
    """
    if (depth - 2) % 9 != 0:
        raise ValueError('depth should be 9n+2 (eg 56 or 110 in [b])')
    # Start model definition.
    num_filters_in = 16
    num_res_blocks = int((depth - 2) / 9)

    inputs = Input(shape=input_shape)
    # v2 performs Conv2D with BN-ReLU on input before splitting into 2 paths
    x = resnet_layer(inputs=inputs,
                     num_filters=num_filters_in,
                     conv_first=True)

    # Instantiate the stack of residual units
    for stage in range(3):
        for res_block in range(num_res_blocks):
            activation = 'relu'
            batch_normalization = True
            strides = 1
            if stage == 0:
                num_filters_out = num_filters_in * 4
                if res_block == 0:  # first layer and first stage
                    activation = None
                    batch_normalization = False
            else:
                num_filters_out = num_filters_in * 2
                if res_block == 0:  # first layer but not first stage
                    strides = 2    # downsample

            # bottleneck residual unit
            y = resnet_layer(inputs=x,
                             num_filters=num_filters_in,
                             kernel_size=1,
                             strides=strides,
                             activation=activation,
                             batch_normalization=batch_normalization,
                             conv_first=False)
            y = resnet_layer(inputs=y,
                             num_filters=num_filters_in,
                             conv_first=False)
            y = resnet_layer(inputs=y,
                             num_filters=num_filters_out,
                             kernel_size=1,
                             conv_first=False)
            if res_block == 0:
                # linear projection residual shortcut connection to match
                # changed dims
                x = resnet_layer(inputs=x,
                                 num_filters=num_filters_out,
                                 kernel_size=1,
                                 strides=strides,
                                 activation=None,
                                 batch_normalization=False)
            x = keras.layers.add([x, y])

        num_filters_in = num_filters_out

    # Add classifier on top.
    # v2 has BN-ReLU before Pooling
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = AveragePooling2D(pool_size=8)(x)
    y = Flatten()(x)
    outputs = Dense(num_classes,
                    activation='softmax',
                    kernel_initializer='he_normal')(y)

    # Instantiate model.
    model = Model(inputs=inputs, outputs=outputs)
    return model
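
Likewise for v2 (a sketch): depth = 9n + 2, so n = 6 yields ResNet56 v2.

resnet56_v2 = resnet_v2(input_shape=(32, 32, 3), depth=9 * 6 + 2)
print(resnet56_v2.count_params())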

Training the 20-layer ResNet v1

# Training parameters
batch_size = 32  # orig paper trained all networks with batch_size=128
epochs = 200
data_augmentation = True
num_classes = 10

# Subtracting pixel mean improves accuracy
subtract_pixel_mean = True

# Model parameter
# ----------------------------------------------------------------------------
#           |      | 200-epoch | Orig Paper| 200-epoch | Orig Paper| sec/epoch
# Model     |  n   | ResNet v1 | ResNet v1 | ResNet v2 | ResNet v2 | GTX1080Ti
#           |v1(v2)| %Accuracy | %Accuracy | %Accuracy | %Accuracy | v1 (v2)
# ----------------------------------------------------------------------------
# ResNet20  | 3 (2)| 92.16     | 91.25     | -----     | -----     | 35 (---)
# ResNet32  | 5(NA)| 92.46     | 92.49     | NA        | NA        | 50 ( NA)
# ResNet44  | 7(NA)| 92.50     | 92.83     | NA        | NA        | 70 ( NA)
# ResNet56  | 9 (6)| 92.71     | 93.03     | 93.01     | NA        | 90 (100)
# ResNet110 |18(12)| 92.65     | 93.39+-.16| 93.15     | 93.63     | 165(180)
# ResNet164 |27(18)| -----     | 94.07     | -----     | 94.54     | ---(---)
# ResNet1001| (111)| -----     | 92.39     | -----     | 95.08+-.14| ---(---)
# ---------------------------------------------------------------------------
n = 3

# Model version
# Orig paper: version = 1 (ResNet v1), Improved ResNet: version = 2 (ResNet v2)
version = 1

# Computed depth from supplied model parameter n
if version == 1:
    depth = n * 6 + 2
elif version == 2:
    depth = n * 9 + 2

# Model name, depth and version
model_type = 'ResNet%dv%d' % (depth, version)

# Load the CIFAR10 data.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Input image dimensions.
input_shape = x_train.shape[1:]

# Normalize data.
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

# If subtract pixel mean is enabled
if subtract_pixel_mean:
    x_train_mean = np.mean(x_train, axis=0)
    x_train -= x_train_mean
    x_test -= x_train_mean
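    # Note (illustrative): the mean is computed on the training set only, and
    # the same x_train_mean must be subtracted from any image fed to the model
    # later, e.g. new_x = new_image.astype('float32') / 255 - x_train_mean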

print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print('y_train shape:', y_train.shape)

# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

if version == 2:
    model = resnet_v2(input_shape=input_shape, depth=depth)
else:
    model = resnet_v1(input_shape=input_shape, depth=depth)

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=lr_schedule(0)),
              metrics=['accuracy'])
model.summary()
print(model_type)

# Prepare model saving directory.
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'cifar10_%s_model.{epoch:03d}.h5' % model_type
if not os.path.isdir(save_dir):
    os.makedirs(save_dir)
filepath = os.path.join(save_dir, model_name)

# Prepare callbacks for model saving and for learning rate adjustment.
checkpoint = ModelCheckpoint(filepath=filepath,
                             monitor='val_acc',
                             verbose=1,
                             save_best_only=True)

lr_scheduler = LearningRateScheduler(lr_schedule)

lr_reducer = ReduceLROnPlateau(factor=np.sqrt(0.1),
                               cooldown=0,
                               patience=5,
                               min_lr=0.5e-6)

callbacks = [checkpoint, lr_reducer, lr_scheduler]
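# Note: LearningRateScheduler re-applies lr_schedule at the start of every
# epoch, so any cut made by ReduceLROnPlateau survives at most one epoch here;
# the stepwise schedule dominates, as the per-epoch log below confirms.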

# Run training, with or without data augmentation.
if not data_augmentation:
    print('Not using data augmentation.')
    model.fit(x_train, y_train,
              batch_size=batch_size,
              epochs=epochs,
              validation_data=(x_test, y_test),
              shuffle=True,
              callbacks=callbacks)
else:
    print('Using real-time data augmentation.')
    # This will do preprocessing and realtime data augmentation:
    datagen = ImageDataGenerator(
        # set input mean to 0 over the dataset
        featurewise_center=False,
        # set each sample mean to 0
        samplewise_center=False,
        # divide inputs by std of dataset
        featurewise_std_normalization=False,
        # divide each input by its std
        samplewise_std_normalization=False,
        # apply ZCA whitening
        zca_whitening=False,
        # epsilon for ZCA whitening
        zca_epsilon=1e-06,
        # randomly rotate images in the range (deg 0 to 180)
        rotation_range=0,
        # randomly shift images horizontally
        width_shift_range=0.1,
        # randomly shift images vertically
        height_shift_range=0.1,
        # set range for random shear
        shear_range=0.,
        # set range for random zoom
        zoom_range=0.,
        # set range for random channel shifts
        channel_shift_range=0.,
        # set mode for filling points outside the input boundaries
        fill_mode='nearest',
        # value used for fill_mode = "constant"
        cval=0.,
        # randomly flip images
        horizontal_flip=True,
        # randomly flip images
        vertical_flip=False,
        # set rescaling factor (applied before any other transformation)
        rescale=None,
        # set function that will be applied on each input
        preprocessing_function=None,
        # image data format, either "channels_first" or "channels_last"
        data_format=None,
        # fraction of images reserved for validation (strictly between 0 and 1)
        validation_split=0.0)

    # Compute quantities required for featurewise normalization
    # (std, mean, and principal components if ZCA whitening is applied).
    datagen.fit(x_train)

    # Fit the model on the batches generated by datagen.flow().
    model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                        validation_data=(x_test, y_test),
                        epochs=epochs, verbose=1, workers=4,
                        callbacks=callbacks)

# Score trained model.
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
Using TensorFlow backend.


Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 229s 1us/step
x_train shape: (50000, 32, 32, 3)
50000 train samples
10000 test samples
y_train shape: (50000, 1)
Learning rate:  0.001
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 32, 32, 3)    0                                            
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 32, 32, 16)   448         input_1[0][0]                    
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 32, 32, 16)   64          conv2d_1[0][0]                   
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 32, 32, 16)   0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 32, 32, 16)   2320        activation_1[0][0]               
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 32, 32, 16)   64          conv2d_2[0][0]                   
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 32, 32, 16)   0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 32, 32, 16)   2320        activation_2[0][0]               
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 32, 32, 16)   64          conv2d_3[0][0]                   
__________________________________________________________________________________________________
add_1 (Add)                     (None, 32, 32, 16)   0           activation_1[0][0]               
                                                                 batch_normalization_3[0][0]      
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 32, 32, 16)   0           add_1[0][0]                      
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 32, 32, 16)   2320        activation_3[0][0]               
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 32, 32, 16)   64          conv2d_4[0][0]                   
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 32, 32, 16)   0           batch_normalization_4[0][0]      
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 32, 32, 16)   2320        activation_4[0][0]               
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 32, 32, 16)   64          conv2d_5[0][0]                   
__________________________________________________________________________________________________
add_2 (Add)                     (None, 32, 32, 16)   0           activation_3[0][0]               
                                                                 batch_normalization_5[0][0]      
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 32, 32, 16)   0           add_2[0][0]                      
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 32, 32, 16)   2320        activation_5[0][0]               
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 32, 32, 16)   64          conv2d_6[0][0]                   
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 32, 32, 16)   0           batch_normalization_6[0][0]      
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 32, 32, 16)   2320        activation_6[0][0]               
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 32, 32, 16)   64          conv2d_7[0][0]                   
__________________________________________________________________________________________________
add_3 (Add)                     (None, 32, 32, 16)   0           activation_5[0][0]               
                                                                 batch_normalization_7[0][0]      
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 32, 32, 16)   0           add_3[0][0]                      
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 16, 16, 32)   4640        activation_7[0][0]               
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 16, 16, 32)   128         conv2d_8[0][0]                   
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 16, 16, 32)   0           batch_normalization_8[0][0]      
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 16, 16, 32)   9248        activation_8[0][0]               
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 16, 16, 32)   544         activation_7[0][0]               
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 16, 16, 32)   128         conv2d_9[0][0]                   
__________________________________________________________________________________________________
add_4 (Add)                     (None, 16, 16, 32)   0           conv2d_10[0][0]                  
                                                                 batch_normalization_9[0][0]      
__________________________________________________________________________________________________
activation_9 (Activation)       (None, 16, 16, 32)   0           add_4[0][0]                      
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 16, 16, 32)   9248        activation_9[0][0]               
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 16, 16, 32)   128         conv2d_11[0][0]                  
__________________________________________________________________________________________________
activation_10 (Activation)      (None, 16, 16, 32)   0           batch_normalization_10[0][0]     
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 16, 16, 32)   9248        activation_10[0][0]              
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 16, 16, 32)   128         conv2d_12[0][0]                  
__________________________________________________________________________________________________
add_5 (Add)                     (None, 16, 16, 32)   0           activation_9[0][0]               
                                                                 batch_normalization_11[0][0]     
__________________________________________________________________________________________________
activation_11 (Activation)      (None, 16, 16, 32)   0           add_5[0][0]                      
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 16, 16, 32)   9248        activation_11[0][0]              
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 16, 16, 32)   128         conv2d_13[0][0]                  
__________________________________________________________________________________________________
activation_12 (Activation)      (None, 16, 16, 32)   0           batch_normalization_12[0][0]     
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 16, 16, 32)   9248        activation_12[0][0]              
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 16, 16, 32)   128         conv2d_14[0][0]                  
__________________________________________________________________________________________________
add_6 (Add)                     (None, 16, 16, 32)   0           activation_11[0][0]              
                                                                 batch_normalization_13[0][0]     
__________________________________________________________________________________________________
activation_13 (Activation)      (None, 16, 16, 32)   0           add_6[0][0]                      
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, 8, 8, 64)     18496       activation_13[0][0]              
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 8, 8, 64)     256         conv2d_15[0][0]                  
__________________________________________________________________________________________________
activation_14 (Activation)      (None, 8, 8, 64)     0           batch_normalization_14[0][0]     
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, 8, 8, 64)     36928       activation_14[0][0]              
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, 8, 8, 64)     2112        activation_13[0][0]              
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 8, 8, 64)     256         conv2d_16[0][0]                  
__________________________________________________________________________________________________
add_7 (Add)                     (None, 8, 8, 64)     0           conv2d_17[0][0]                  
                                                                 batch_normalization_15[0][0]     
__________________________________________________________________________________________________
activation_15 (Activation)      (None, 8, 8, 64)     0           add_7[0][0]                      
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, 8, 8, 64)     36928       activation_15[0][0]              
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 8, 8, 64)     256         conv2d_18[0][0]                  
__________________________________________________________________________________________________
activation_16 (Activation)      (None, 8, 8, 64)     0           batch_normalization_16[0][0]     
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (None, 8, 8, 64)     36928       activation_16[0][0]              
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 8, 8, 64)     256         conv2d_19[0][0]                  
__________________________________________________________________________________________________
add_8 (Add)                     (None, 8, 8, 64)     0           activation_15[0][0]              
                                                                 batch_normalization_17[0][0]     
__________________________________________________________________________________________________
activation_17 (Activation)      (None, 8, 8, 64)     0           add_8[0][0]                      
__________________________________________________________________________________________________
conv2d_20 (Conv2D)              (None, 8, 8, 64)     36928       activation_17[0][0]              
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 8, 8, 64)     256         conv2d_20[0][0]                  
__________________________________________________________________________________________________
activation_18 (Activation)      (None, 8, 8, 64)     0           batch_normalization_18[0][0]     
__________________________________________________________________________________________________
conv2d_21 (Conv2D)              (None, 8, 8, 64)     36928       activation_18[0][0]              
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 8, 8, 64)     256         conv2d_21[0][0]                  
__________________________________________________________________________________________________
add_9 (Add)                     (None, 8, 8, 64)     0           activation_17[0][0]              
                                                                 batch_normalization_19[0][0]     
__________________________________________________________________________________________________
activation_19 (Activation)      (None, 8, 8, 64)     0           add_9[0][0]                      
__________________________________________________________________________________________________
average_pooling2d_1 (AveragePoo (None, 1, 1, 64)     0           activation_19[0][0]              
__________________________________________________________________________________________________
flatten_1 (Flatten)             (None, 64)           0           average_pooling2d_1[0][0]        
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 10)           650         flatten_1[0][0]                  
==================================================================================================
Total params: 274,442
Trainable params: 273,066
Non-trainable params: 1,376
__________________________________________________________________________________________________
ResNet20v1
Using real-time data augmentation.
Epoch 1/200
Learning rate:  0.001
1563/1563 [==============================] - 62s 40ms/step - loss: 1.5450 - acc: 0.4869 - val_loss: 1.4411 - val_acc: 0.5572

Epoch 00001: val_acc improved from -inf to 0.55720, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.001.h5
Epoch 2/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 35ms/step - loss: 1.1575 - acc: 0.6364 - val_loss: 1.2866 - val_acc: 0.5988

Epoch 00002: val_acc improved from 0.55720 to 0.59880, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.002.h5
Epoch 3/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.9936 - acc: 0.7001 - val_loss: 1.1053 - val_acc: 0.6696

Epoch 00003: val_acc improved from 0.59880 to 0.66960, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.003.h5
Epoch 4/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.9068 - acc: 0.7330 - val_loss: 1.2314 - val_acc: 0.6543

Epoch 00004: val_acc did not improve
Epoch 5/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.8399 - acc: 0.7599 - val_loss: 1.1613 - val_acc: 0.6895

Epoch 00005: val_acc improved from 0.66960 to 0.68950, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.005.h5
Epoch 6/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.7922 - acc: 0.7781 - val_loss: 0.9809 - val_acc: 0.7272

Epoch 00006: val_acc improved from 0.68950 to 0.72720, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.006.h5
Epoch 7/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.7638 - acc: 0.7903 - val_loss: 1.2414 - val_acc: 0.6659

Epoch 00007: val_acc did not improve
Epoch 8/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 35ms/step - loss: 0.7334 - acc: 0.8041 - val_loss: 0.7423 - val_acc: 0.8026

Epoch 00008: val_acc improved from 0.72720 to 0.80260, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.008.h5
Epoch 9/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.7154 - acc: 0.8112 - val_loss: 0.9000 - val_acc: 0.7604

Epoch 00009: val_acc did not improve
Epoch 10/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.6913 - acc: 0.8205 - val_loss: 1.0617 - val_acc: 0.7192

Epoch 00010: val_acc did not improve
Epoch 11/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 35ms/step - loss: 0.6799 - acc: 0.8259 - val_loss: 0.7639 - val_acc: 0.7962

Epoch 00011: val_acc did not improve
Epoch 12/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 35ms/step - loss: 0.6642 - acc: 0.8328 - val_loss: 0.9833 - val_acc: 0.7424

Epoch 00012: val_acc did not improve
Epoch 13/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.6554 - acc: 0.8336 - val_loss: 0.7743 - val_acc: 0.7950

Epoch 00013: val_acc did not improve
Epoch 14/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.6433 - acc: 0.8399 - val_loss: 1.1566 - val_acc: 0.7168

Epoch 00014: val_acc did not improve
Epoch 15/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.6337 - acc: 0.8441 - val_loss: 0.7490 - val_acc: 0.8098

Epoch 00015: val_acc improved from 0.80260 to 0.80980, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.015.h5
Epoch 16/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.6280 - acc: 0.8460 - val_loss: 0.8865 - val_acc: 0.7770

Epoch 00016: val_acc did not improve
Epoch 17/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.6197 - acc: 0.8498 - val_loss: 0.8461 - val_acc: 0.7778

Epoch 00017: val_acc did not improve
Epoch 18/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.6085 - acc: 0.8560 - val_loss: 0.7646 - val_acc: 0.8076

Epoch 00018: val_acc did not improve
Epoch 19/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.6068 - acc: 0.8560 - val_loss: 0.9842 - val_acc: 0.7631

Epoch 00019: val_acc did not improve
Epoch 20/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5933 - acc: 0.8602 - val_loss: 0.7247 - val_acc: 0.8288

Epoch 00020: val_acc improved from 0.80980 to 0.82880, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.020.h5
Epoch 21/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.5903 - acc: 0.8626 - val_loss: 0.7705 - val_acc: 0.8188

Epoch 00021: val_acc did not improve
Epoch 22/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.5885 - acc: 0.8629 - val_loss: 0.9729 - val_acc: 0.7541

Epoch 00022: val_acc did not improve
Epoch 23/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.5883 - acc: 0.8640 - val_loss: 0.7466 - val_acc: 0.8185

Epoch 00023: val_acc did not improve
Epoch 24/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.5783 - acc: 0.8669 - val_loss: 0.8232 - val_acc: 0.7970

Epoch 00024: val_acc did not improve
Epoch 25/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.5757 - acc: 0.8684 - val_loss: 0.7166 - val_acc: 0.8330

Epoch 00025: val_acc improved from 0.82880 to 0.83300, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.025.h5
Epoch 26/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.5682 - acc: 0.8710 - val_loss: 0.9105 - val_acc: 0.7804

Epoch 00026: val_acc did not improve
Epoch 27/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5720 - acc: 0.8697 - val_loss: 0.6989 - val_acc: 0.8347

Epoch 00027: val_acc improved from 0.83300 to 0.83470, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.027.h5
Epoch 28/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5602 - acc: 0.8744 - val_loss: 1.0792 - val_acc: 0.7432

Epoch 00028: val_acc did not improve
Epoch 29/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5610 - acc: 0.8732 - val_loss: 0.7994 - val_acc: 0.8054

Epoch 00029: val_acc did not improve
Epoch 30/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 35ms/step - loss: 0.5597 - acc: 0.8750 - val_loss: 0.7123 - val_acc: 0.8344

Epoch 00030: val_acc did not improve
Epoch 31/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5540 - acc: 0.8782 - val_loss: 0.7432 - val_acc: 0.8168

Epoch 00031: val_acc did not improve
Epoch 32/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 35ms/step - loss: 0.5500 - acc: 0.8787 - val_loss: 0.7249 - val_acc: 0.8253

Epoch 00032: val_acc did not improve
Epoch 33/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5486 - acc: 0.8812 - val_loss: 0.6406 - val_acc: 0.8509

Epoch 00033: val_acc improved from 0.83470 to 0.85090, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.033.h5
Epoch 34/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5472 - acc: 0.8796 - val_loss: 0.7195 - val_acc: 0.8308

Epoch 00034: val_acc did not improve
Epoch 35/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5412 - acc: 0.8822 - val_loss: 0.6515 - val_acc: 0.8556

Epoch 00035: val_acc improved from 0.85090 to 0.85560, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.035.h5
Epoch 36/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5414 - acc: 0.8824 - val_loss: 0.6497 - val_acc: 0.8488

Epoch 00036: val_acc did not improve
Epoch 37/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5379 - acc: 0.8827 - val_loss: 0.7754 - val_acc: 0.8096

Epoch 00037: val_acc did not improve
Epoch 38/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5373 - acc: 0.8844 - val_loss: 0.6920 - val_acc: 0.8396

Epoch 00038: val_acc did not improve
Epoch 39/200
Learning rate:  0.001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.5356 - acc: 0.8843 - val_loss: 0.6264 - val_acc: 0.8569

Epoch 00039: val_acc improved from 0.85560 to 0.85690, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.039.h5
Epoch 40/200
Learning rate:  0.001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.5332 - acc: 0.8845 - val_loss: 0.6754 - val_acc: 0.8395

Epoch 00040: val_acc did not improve
Epoch 41/200
Learning rate:  0.001
1563/1563 [==============================] - 52s 33ms/step - loss: 0.5285 - acc: 0.8874 - val_loss: 0.8491 - val_acc: 0.7873

Epoch 00041: val_acc did not improve
Epoch 42/200
Learning rate:  0.001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.5344 - acc: 0.8855 - val_loss: 0.7352 - val_acc: 0.8274

Epoch 00042: val_acc did not improve
Epoch 43/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5261 - acc: 0.8892 - val_loss: 0.6778 - val_acc: 0.8520

Epoch 00043: val_acc did not improve
Epoch 44/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.5258 - acc: 0.8887 - val_loss: 0.7849 - val_acc: 0.8130

Epoch 00044: val_acc did not improve
Epoch 45/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.5287 - acc: 0.8885 - val_loss: 0.7625 - val_acc: 0.8169

Epoch 00045: val_acc did not improve
Epoch 46/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5220 - acc: 0.8886 - val_loss: 0.6575 - val_acc: 0.8472

Epoch 00046: val_acc did not improve
Epoch 47/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.5241 - acc: 0.8874 - val_loss: 0.7637 - val_acc: 0.8172

Epoch 00047: val_acc did not improve
Epoch 48/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5174 - acc: 0.8914 - val_loss: 0.6850 - val_acc: 0.8447

Epoch 00048: val_acc did not improve
Epoch 49/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5182 - acc: 0.8901 - val_loss: 0.7210 - val_acc: 0.8338

Epoch 00049: val_acc did not improve
Epoch 50/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5185 - acc: 0.8908 - val_loss: 0.7338 - val_acc: 0.8394

Epoch 00050: val_acc did not improve
Epoch 51/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5175 - acc: 0.8912 - val_loss: 0.8267 - val_acc: 0.8043

Epoch 00051: val_acc did not improve
Epoch 52/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5132 - acc: 0.8929 - val_loss: 0.7177 - val_acc: 0.8431

Epoch 00052: val_acc did not improve
Epoch 53/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5127 - acc: 0.8928 - val_loss: 0.6524 - val_acc: 0.8531

Epoch 00053: val_acc did not improve
Epoch 54/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.5123 - acc: 0.8925 - val_loss: 0.7192 - val_acc: 0.8334

Epoch 00054: val_acc did not improve
Epoch 55/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5103 - acc: 0.8935 - val_loss: 0.7552 - val_acc: 0.8261

Epoch 00055: val_acc did not improve
Epoch 56/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5101 - acc: 0.8928 - val_loss: 0.6905 - val_acc: 0.8425

Epoch 00056: val_acc did not improve
Epoch 57/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5061 - acc: 0.8959 - val_loss: 0.7146 - val_acc: 0.8328

Epoch 00057: val_acc did not improve
Epoch 58/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5064 - acc: 0.8973 - val_loss: 0.6820 - val_acc: 0.8439

Epoch 00058: val_acc did not improve
Epoch 59/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5099 - acc: 0.8934 - val_loss: 0.6743 - val_acc: 0.8489

Epoch 00059: val_acc did not improve
Epoch 60/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5035 - acc: 0.8946 - val_loss: 0.7986 - val_acc: 0.8095

Epoch 00060: val_acc did not improve
Epoch 61/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 35ms/step - loss: 0.5055 - acc: 0.8943 - val_loss: 0.7580 - val_acc: 0.8172

Epoch 00061: val_acc did not improve
Epoch 62/200
Learning rate:  0.001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.5021 - acc: 0.8971 - val_loss: 0.6608 - val_acc: 0.8505

Epoch 00062: val_acc did not improve
Epoch 63/200
Learning rate:  0.001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.4998 - acc: 0.8977 - val_loss: 0.6325 - val_acc: 0.8638

Epoch 00063: val_acc improved from 0.85690 to 0.86380, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.063.h5
Epoch 64/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.4954 - acc: 0.8991 - val_loss: 0.7133 - val_acc: 0.8333

Epoch 00064: val_acc did not improve
Epoch 65/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.5006 - acc: 0.8963 - val_loss: 0.5884 - val_acc: 0.8700

Epoch 00065: val_acc improved from 0.86380 to 0.87000, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.065.h5
Epoch 66/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.4968 - acc: 0.8991 - val_loss: 0.7798 - val_acc: 0.8250

Epoch 00066: val_acc did not improve
Epoch 67/200
Learning rate:  0.001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.4973 - acc: 0.8975 - val_loss: 0.6273 - val_acc: 0.8610

Epoch 00067: val_acc did not improve
Epoch 68/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.4951 - acc: 0.8984 - val_loss: 0.6510 - val_acc: 0.8530

Epoch 00068: val_acc did not improve
Epoch 69/200
Learning rate:  0.001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.4940 - acc: 0.9002 - val_loss: 0.6601 - val_acc: 0.8486

Epoch 00069: val_acc did not improve
Epoch 70/200
Learning rate:  0.001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.4966 - acc: 0.8999 - val_loss: 0.7840 - val_acc: 0.8328

Epoch 00070: val_acc did not improve
Epoch 71/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.4920 - acc: 0.9001 - val_loss: 0.6757 - val_acc: 0.8427

Epoch 00071: val_acc did not improve
Epoch 72/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.4894 - acc: 0.9021 - val_loss: 0.6498 - val_acc: 0.8579

Epoch 00072: val_acc did not improve
Epoch 73/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.4923 - acc: 0.9017 - val_loss: 0.7005 - val_acc: 0.8454

Epoch 00073: val_acc did not improve
Epoch 74/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.4925 - acc: 0.9000 - val_loss: 0.6865 - val_acc: 0.8393

Epoch 00074: val_acc did not improve
Epoch 75/200
Learning rate:  0.001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.4863 - acc: 0.9029 - val_loss: 0.6502 - val_acc: 0.8564

Epoch 00075: val_acc did not improve
Epoch 76/200
Learning rate:  0.001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.4893 - acc: 0.9012 - val_loss: 0.7414 - val_acc: 0.8398

Epoch 00076: val_acc did not improve
Epoch 77/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.4878 - acc: 0.9022 - val_loss: 0.6993 - val_acc: 0.8454

Epoch 00077: val_acc did not improve
Epoch 78/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 35ms/step - loss: 0.4879 - acc: 0.9026 - val_loss: 0.6885 - val_acc: 0.8408

Epoch 00078: val_acc did not improve
Epoch 79/200
Learning rate:  0.001
1563/1563 [==============================] - 54s 35ms/step - loss: 0.4852 - acc: 0.9022 - val_loss: 0.7038 - val_acc: 0.8450

Epoch 00079: val_acc did not improve
Epoch 80/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.4882 - acc: 0.9018 - val_loss: 0.7610 - val_acc: 0.8337

Epoch 00080: val_acc did not improve
Epoch 81/200
Learning rate:  0.001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.4904 - acc: 0.9013 - val_loss: 0.6659 - val_acc: 0.8493

Epoch 00081: val_acc did not improve
Epoch 82/200
Learning rate:  0.0001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.4031 - acc: 0.9305 - val_loss: 0.4911 - val_acc: 0.9002

Epoch 00082: val_acc improved from 0.87000 to 0.90020, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.082.h5
Epoch 83/200
Learning rate:  0.0001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.3702 - acc: 0.9417 - val_loss: 0.4916 - val_acc: 0.9034

Epoch 00083: val_acc improved from 0.90020 to 0.90340, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.083.h5
Epoch 84/200
Learning rate:  0.0001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.3576 - acc: 0.9443 - val_loss: 0.4750 - val_acc: 0.9065

Epoch 00084: val_acc improved from 0.90340 to 0.90650, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.084.h5
Epoch 85/200
Learning rate:  0.0001
1563/1563 [==============================] - 56s 36ms/step - loss: 0.3437 - acc: 0.9479 - val_loss: 0.4818 - val_acc: 0.9051

Epoch 00085: val_acc did not improve
Epoch 86/200
Learning rate:  0.0001
1563/1563 [==============================] - 52s 33ms/step - loss: 0.3326 - acc: 0.9504 - val_loss: 0.4778 - val_acc: 0.9056

Epoch 00086: val_acc did not improve
Epoch 87/200
Learning rate:  0.0001
1563/1563 [==============================] - 52s 33ms/step - loss: 0.3224 - acc: 0.9529 - val_loss: 0.4658 - val_acc: 0.9087

Epoch 00087: val_acc improved from 0.90650 to 0.90870, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.087.h5
Epoch 88/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.3168 - acc: 0.9534 - val_loss: 0.4598 - val_acc: 0.9082

Epoch 00088: val_acc did not improve
Epoch 89/200
Learning rate:  0.0001
1563/1563 [==============================] - 52s 33ms/step - loss: 0.3106 - acc: 0.9555 - val_loss: 0.4645 - val_acc: 0.9090

Epoch 00089: val_acc improved from 0.90870 to 0.90900, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.089.h5
Epoch 90/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.3030 - acc: 0.9572 - val_loss: 0.4550 - val_acc: 0.9088

Epoch 00090: val_acc did not improve
Epoch 91/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.2973 - acc: 0.9565 - val_loss: 0.4518 - val_acc: 0.9103

Epoch 00091: val_acc improved from 0.90900 to 0.91030, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.091.h5
Epoch 92/200
Learning rate:  0.0001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.2904 - acc: 0.9581 - val_loss: 0.4672 - val_acc: 0.9072

Epoch 00092: val_acc did not improve
Epoch 93/200
Learning rate:  0.0001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.2856 - acc: 0.9602 - val_loss: 0.4597 - val_acc: 0.9103

Epoch 00093: val_acc did not improve
Epoch 94/200
Learning rate:  0.0001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.2818 - acc: 0.9604 - val_loss: 0.4600 - val_acc: 0.9099

Epoch 00094: val_acc did not improve
Epoch 95/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.2775 - acc: 0.9611 - val_loss: 0.4558 - val_acc: 0.9104

Epoch 00095: val_acc improved from 0.91030 to 0.91040, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.095.h5
Epoch 96/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.2721 - acc: 0.9627 - val_loss: 0.4542 - val_acc: 0.9122

Epoch 00096: val_acc improved from 0.91040 to 0.91220, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.096.h5
Epoch 97/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.2663 - acc: 0.9643 - val_loss: 0.4562 - val_acc: 0.9112

Epoch 00097: val_acc did not improve
Epoch 98/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.2677 - acc: 0.9615 - val_loss: 0.4438 - val_acc: 0.9135

Epoch 00098: val_acc improved from 0.91220 to 0.91350, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.098.h5
Epoch 99/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.2612 - acc: 0.9639 - val_loss: 0.4575 - val_acc: 0.9110

Epoch 00099: val_acc did not improve
Epoch 100/200
Learning rate:  0.0001
1563/1563 [==============================] - 52s 33ms/step - loss: 0.2560 - acc: 0.9643 - val_loss: 0.4570 - val_acc: 0.9109

Epoch 00100: val_acc did not improve
Epoch 101/200
Learning rate:  0.0001
1563/1563 [==============================] - 54s 34ms/step - loss: 0.2532 - acc: 0.9661 - val_loss: 0.4484 - val_acc: 0.9136

Epoch 00101: val_acc improved from 0.91350 to 0.91360, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.101.h5
Epoch 102/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.2506 - acc: 0.9653 - val_loss: 0.4632 - val_acc: 0.9094

Epoch 00102: val_acc did not improve
Epoch 103/200
Learning rate:  0.0001
1563/1563 [==============================] - 54s 35ms/step - loss: 0.2495 - acc: 0.9647 - val_loss: 0.4633 - val_acc: 0.9064

Epoch 00103: val_acc did not improve
Epoch 104/200
Learning rate:  0.0001
1563/1563 [==============================] - 52s 34ms/step - loss: 0.2434 - acc: 0.9671 - val_loss: 0.4424 - val_acc: 0.9140

Epoch 00104: val_acc improved from 0.91360 to 0.91400, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.104.h5
Epoch 105/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.2407 - acc: 0.9678 - val_loss: 0.4628 - val_acc: 0.9090

Epoch 00105: val_acc did not improve
Epoch 106/200
Learning rate:  0.0001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.2374 - acc: 0.9688 - val_loss: 0.4561 - val_acc: 0.9100

Epoch 00106: val_acc did not improve
Epoch 107/200
Learning rate:  0.0001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.2339 - acc: 0.9692 - val_loss: 0.4449 - val_acc: 0.9124

Epoch 00107: val_acc did not improve
Epoch 108/200
Learning rate:  0.0001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.2302 - acc: 0.9698 - val_loss: 0.4397 - val_acc: 0.9130

Epoch 00108: val_acc did not improve
Epoch 109/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.2282 - acc: 0.9701 - val_loss: 0.4774 - val_acc: 0.9061

Epoch 00109: val_acc did not improve
Epoch 110/200
Learning rate:  0.0001
1563/1563 [==============================] - 60s 39ms/step - loss: 0.2311 - acc: 0.9677 - val_loss: 0.4568 - val_acc: 0.9108

Epoch 00110: val_acc did not improve
Epoch 111/200
Learning rate:  0.0001
1563/1563 [==============================] - 60s 39ms/step - loss: 0.2255 - acc: 0.9702 - val_loss: 0.4464 - val_acc: 0.9109

Epoch 00111: val_acc did not improve
Epoch 112/200
Learning rate:  0.0001
1563/1563 [==============================] - 60s 38ms/step - loss: 0.2192 - acc: 0.9724 - val_loss: 0.4427 - val_acc: 0.9087

Epoch 00112: val_acc did not improve
Epoch 113/200
Learning rate:  0.0001
1563/1563 [==============================] - 60s 39ms/step - loss: 0.2198 - acc: 0.9705 - val_loss: 0.4315 - val_acc: 0.9150

Epoch 00113: val_acc improved from 0.91400 to 0.91500, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.113.h5
Epoch 114/200
Learning rate:  0.0001
1563/1563 [==============================] - 60s 38ms/step - loss: 0.2158 - acc: 0.9719 - val_loss: 0.4483 - val_acc: 0.9096

Epoch 00114: val_acc did not improve
Epoch 115/200
Learning rate:  0.0001
1563/1563 [==============================] - 60s 38ms/step - loss: 0.2139 - acc: 0.9723 - val_loss: 0.4432 - val_acc: 0.9144

Epoch 00115: val_acc did not improve
Epoch 116/200
Learning rate:  0.0001
1563/1563 [==============================] - 60s 39ms/step - loss: 0.2120 - acc: 0.9727 - val_loss: 0.4394 - val_acc: 0.9154

Epoch 00116: val_acc improved from 0.91500 to 0.91540, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.116.h5
Epoch 117/200
Learning rate:  0.0001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.2094 - acc: 0.9725 - val_loss: 0.4531 - val_acc: 0.9105

Epoch 00117: val_acc did not improve
Epoch 118/200
Learning rate:  0.0001
1563/1563 [==============================] - 52s 33ms/step - loss: 0.2079 - acc: 0.9737 - val_loss: 0.4340 - val_acc: 0.9141

Epoch 00118: val_acc did not improve
Epoch 119/200
Learning rate:  0.0001
1563/1563 [==============================] - 55s 35ms/step - loss: 0.2072 - acc: 0.9724 - val_loss: 0.4483 - val_acc: 0.9097

Epoch 00119: val_acc did not improve
Epoch 120/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.2053 - acc: 0.9722 - val_loss: 0.4489 - val_acc: 0.9111

Epoch 00120: val_acc did not improve
Epoch 121/200
Learning rate:  0.0001
1563/1563 [==============================] - 53s 34ms/step - loss: 0.2018 - acc: 0.9741 - val_loss: 0.4450 - val_acc: 0.9116

Epoch 00121: val_acc did not improve
Epoch 122/200
Learning rate:  1e-05
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1936 - acc: 0.9773 - val_loss: 0.4296 - val_acc: 0.9174

Epoch 00122: val_acc improved from 0.91540 to 0.91740, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.122.h5
Epoch 123/200
Learning rate:  1e-05
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1921 - acc: 0.9782 - val_loss: 0.4286 - val_acc: 0.9176

Epoch 00123: val_acc improved from 0.91740 to 0.91760, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.123.h5
Epoch 124/200
Learning rate:  1e-05
1563/1563 [==============================] - 59s 38ms/step - loss: 0.1896 - acc: 0.9787 - val_loss: 0.4283 - val_acc: 0.9178

Epoch 00124: val_acc improved from 0.91760 to 0.91780, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.124.h5
Epoch 125/200
Learning rate:  1e-05
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1869 - acc: 0.9802 - val_loss: 0.4280 - val_acc: 0.9177

Epoch 00125: val_acc did not improve
Epoch 126/200
Learning rate:  1e-05
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1859 - acc: 0.9801 - val_loss: 0.4285 - val_acc: 0.9174

Epoch 00126: val_acc did not improve
Epoch 127/200
Learning rate:  1e-05
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1845 - acc: 0.9805 - val_loss: 0.4290 - val_acc: 0.9160

Epoch 00127: val_acc did not improve
Epoch 128/200
Learning rate:  1e-05
1563/1563 [==============================] - 54s 35ms/step - loss: 0.1836 - acc: 0.9807 - val_loss: 0.4292 - val_acc: 0.9182

Epoch 00128: val_acc improved from 0.91780 to 0.91820, saving model to E:\src\jupyter\paper\ResNet\saved_models\cifar10_ResNet20v1_model.128.h5
Epoch 129/200
Learning rate:  1e-05
1563/1563 [==============================] - 54s 34ms/step - loss: 0.1843 - acc: 0.9806 - val_loss: 0.4259 - val_acc: 0.9174

Epoch 00129: val_acc did not improve
Epoch 130/200
Learning rate:  1e-05
1563/1563 [==============================] - 52s 34ms/step - loss: 0.1832 - acc: 0.9804 - val_loss: 0.4282 - val_acc: 0.9178

Epoch 00130: val_acc did not improve
Epoch 131/200
Learning rate:  1e-05
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1817 - acc: 0.9811 - val_loss: 0.4275 - val_acc: 0.9175

Epoch 00131: val_acc did not improve
Epoch 132/200
Learning rate:  1e-05
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1848 - acc: 0.9802 - val_loss: 0.4287 - val_acc: 0.9178

Epoch 00132: val_acc did not improve
Epoch 133/200
Learning rate:  1e-05
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1818 - acc: 0.9814 - val_loss: 0.4281 - val_acc: 0.9167

Epoch 00133: val_acc did not improve
Epoch 134/200
Learning rate:  1e-05
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1811 - acc: 0.9810 - val_loss: 0.4269 - val_acc: 0.9177

Epoch 00134: val_acc did not improve
Epoch 135/200
Learning rate:  1e-05
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1814 - acc: 0.9813 - val_loss: 0.4321 - val_acc: 0.9161

Epoch 00135: val_acc did not improve
Epoch 136/200
Learning rate:  1e-05
1563/1563 [==============================] - 54s 34ms/step - loss: 0.1805 - acc: 0.9808 - val_loss: 0.4292 - val_acc: 0.9171

Epoch 00136: val_acc did not improve
Epoch 137/200
Learning rate:  1e-05
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1789 - acc: 0.9819 - val_loss: 0.4309 - val_acc: 0.9162

Epoch 00137: val_acc did not improve
Epoch 138/200
Learning rate:  1e-05
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1817 - acc: 0.9807 - val_loss: 0.4314 - val_acc: 0.9172

Epoch 00138: val_acc did not improve
Epoch 139/200
Learning rate:  1e-05
1563/1563 [==============================] - 54s 35ms/step - loss: 0.1802 - acc: 0.9814 - val_loss: 0.4309 - val_acc: 0.9166

Epoch 00139: val_acc did not improve
Epoch 140/200
Learning rate:  1e-05
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1790 - acc: 0.9819 - val_loss: 0.4314 - val_acc: 0.9163

Epoch 00140: val_acc did not improve
Epoch 141/200
Learning rate:  1e-05
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1770 - acc: 0.9825 - val_loss: 0.4300 - val_acc: 0.9161

Epoch 00141: val_acc did not improve
Epoch 142/200
Learning rate:  1e-05
1563/1563 [==============================] - 51s 33ms/step - loss: 0.1777 - acc: 0.9815 - val_loss: 0.4293 - val_acc: 0.9169

Epoch 00142: val_acc did not improve
Epoch 143/200
Learning rate:  1e-05
1563/1563 [==============================] - 51s 33ms/step - loss: 0.1786 - acc: 0.9816 - val_loss: 0.4338 - val_acc: 0.9164

Epoch 00143: val_acc did not improve
Epoch 144/200
Learning rate:  1e-05
1563/1563 [==============================] - 51s 33ms/step - loss: 0.1774 - acc: 0.9829 - val_loss: 0.4296 - val_acc: 0.9160

Epoch 00144: val_acc did not improve
Epoch 145/200
Learning rate:  1e-05
1563/1563 [==============================] - 51s 33ms/step - loss: 0.1773 - acc: 0.9821 - val_loss: 0.4301 - val_acc: 0.9165

Epoch 00145: val_acc did not improve
Epoch 146/200
Learning rate:  1e-05
1563/1563 [==============================] - 54s 34ms/step - loss: 0.1770 - acc: 0.9817 - val_loss: 0.4327 - val_acc: 0.9163

Epoch 00146: val_acc did not improve
Epoch 147/200
Learning rate:  1e-05
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1766 - acc: 0.9815 - val_loss: 0.4306 - val_acc: 0.9168

Epoch 00147: val_acc did not improve
Epoch 148/200
Learning rate:  1e-05
1563/1563 [==============================] - 54s 35ms/step - loss: 0.1746 - acc: 0.9828 - val_loss: 0.4314 - val_acc: 0.9170

Epoch 00148: val_acc did not improve
Epoch 149/200
Learning rate:  1e-05
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1739 - acc: 0.9835 - val_loss: 0.4339 - val_acc: 0.9165

Epoch 00149: val_acc did not improve
Epoch 150/200
Learning rate:  1e-05
1563/1563 [==============================] - 54s 34ms/step - loss: 0.1748 - acc: 0.9824 - val_loss: 0.4321 - val_acc: 0.9169

Epoch 00150: val_acc did not improve
Epoch 151/200
Learning rate:  1e-05
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1729 - acc: 0.9835 - val_loss: 0.4350 - val_acc: 0.9163

Epoch 00151: val_acc did not improve
Epoch 152/200
Learning rate:  1e-05
1563/1563 [==============================] - 52s 34ms/step - loss: 0.1743 - acc: 0.9828 - val_loss: 0.4340 - val_acc: 0.9159

Epoch 00152: val_acc did not improve
Epoch 153/200
Learning rate:  1e-05
1563/1563 [==============================] - 54s 34ms/step - loss: 0.1734 - acc: 0.9833 - val_loss: 0.4356 - val_acc: 0.9155

Epoch 00153: val_acc did not improve
Epoch 154/200
Learning rate:  1e-05
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1744 - acc: 0.9826 - val_loss: 0.4352 - val_acc: 0.9158

Epoch 00154: val_acc did not improve
Epoch 155/200
Learning rate:  1e-05
1563/1563 [==============================] - 56s 36ms/step - loss: 0.1718 - acc: 0.9837 - val_loss: 0.4389 - val_acc: 0.9154

Epoch 00155: val_acc did not improve
Epoch 156/200
Learning rate:  1e-05
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1755 - acc: 0.9815 - val_loss: 0.4369 - val_acc: 0.9163

Epoch 00156: val_acc did not improve
Epoch 157/200
Learning rate:  1e-05
1563/1563 [==============================] - 52s 34ms/step - loss: 0.1741 - acc: 0.9827 - val_loss: 0.4321 - val_acc: 0.9164

Epoch 00157: val_acc did not improve
Epoch 158/200
Learning rate:  1e-05
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1725 - acc: 0.9832 - val_loss: 0.4329 - val_acc: 0.9167

Epoch 00158: val_acc did not improve
Epoch 159/200
Learning rate:  1e-05
1563/1563 [==============================] - 54s 34ms/step - loss: 0.1716 - acc: 0.9834 - val_loss: 0.4338 - val_acc: 0.9153

Epoch 00159: val_acc did not improve
Epoch 160/200
Learning rate:  1e-05
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1712 - acc: 0.9838 - val_loss: 0.4369 - val_acc: 0.9154

Epoch 00160: val_acc did not improve
Epoch 161/200
Learning rate:  1e-05
1563/1563 [==============================] - 54s 34ms/step - loss: 0.1718 - acc: 0.9827 - val_loss: 0.4356 - val_acc: 0.9160

Epoch 00161: val_acc did not improve
Epoch 162/200
Learning rate:  1e-06
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1711 - acc: 0.9832 - val_loss: 0.4353 - val_acc: 0.9160

Epoch 00162: val_acc did not improve
Epoch 163/200
Learning rate:  1e-06
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1701 - acc: 0.9841 - val_loss: 0.4358 - val_acc: 0.9149

Epoch 00163: val_acc did not improve
Epoch 164/200
Learning rate:  1e-06
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1706 - acc: 0.9840 - val_loss: 0.4345 - val_acc: 0.9156

Epoch 00164: val_acc did not improve
Epoch 165/200
Learning rate:  1e-06
1563/1563 [==============================] - 51s 33ms/step - loss: 0.1697 - acc: 0.9835 - val_loss: 0.4337 - val_acc: 0.9161

Epoch 00165: val_acc did not improve
Epoch 166/200
Learning rate:  1e-06
1563/1563 [==============================] - 51s 33ms/step - loss: 0.1690 - acc: 0.9844 - val_loss: 0.4332 - val_acc: 0.9163

Epoch 00166: val_acc did not improve
Epoch 167/200
Learning rate:  1e-06
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1698 - acc: 0.9837 - val_loss: 0.4335 - val_acc: 0.9158

Epoch 00167: val_acc did not improve
Epoch 168/200
Learning rate:  1e-06
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1713 - acc: 0.9836 - val_loss: 0.4342 - val_acc: 0.9158

Epoch 00168: val_acc did not improve
Epoch 169/200
Learning rate:  1e-06
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1702 - acc: 0.9840 - val_loss: 0.4367 - val_acc: 0.9161

Epoch 00169: val_acc did not improve
Epoch 170/200
Learning rate:  1e-06
1563/1563 [==============================] - 51s 33ms/step - loss: 0.1694 - acc: 0.9847 - val_loss: 0.4326 - val_acc: 0.9163

Epoch 00170: val_acc did not improve
Epoch 171/200
Learning rate:  1e-06
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1702 - acc: 0.9840 - val_loss: 0.4331 - val_acc: 0.9166

Epoch 00171: val_acc did not improve
Epoch 172/200
Learning rate:  1e-06
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1694 - acc: 0.9839 - val_loss: 0.4347 - val_acc: 0.9164

Epoch 00172: val_acc did not improve
Epoch 173/200
Learning rate:  1e-06
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1696 - acc: 0.9833 - val_loss: 0.4337 - val_acc: 0.9168

Epoch 00173: val_acc did not improve
Epoch 174/200
Learning rate:  1e-06
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1694 - acc: 0.9840 - val_loss: 0.4360 - val_acc: 0.9162

Epoch 00174: val_acc did not improve
Epoch 175/200
Learning rate:  1e-06
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1697 - acc: 0.9841 - val_loss: 0.4344 - val_acc: 0.9161

Epoch 00175: val_acc did not improve
Epoch 176/200
Learning rate:  1e-06
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1696 - acc: 0.9841 - val_loss: 0.4334 - val_acc: 0.9165

Epoch 00176: val_acc did not improve
Epoch 177/200
Learning rate:  1e-06
1563/1563 [==============================] - 52s 33ms/step - loss: 0.1702 - acc: 0.9837 - val_loss: 0.4346 - val_acc: 0.9163

Epoch 00177: val_acc did not improve
Epoch 178/200
Learning rate:  1e-06
1563/1563 [==============================] - 51s 33ms/step - loss: 0.1689 - acc: 0.9841 - val_loss: 0.4337 - val_acc: 0.9162

Epoch 00178: val_acc did not improve
Epoch 179/200
Learning rate:  1e-06
1563/1563 [==============================] - 53s 34ms/step - loss: 0.1689 - acc: 0.9835 - val_loss: 0.4342 - val_acc: 0.9161

Epoch 00179: val_acc did not improve
Epoch 180/200
Learning rate:  1e-06
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1689 - acc: 0.9841 - val_loss: 0.4334 - val_acc: 0.9168

Epoch 00180: val_acc did not improve
Epoch 181/200
Learning rate:  1e-06
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1687 - acc: 0.9844 - val_loss: 0.4336 - val_acc: 0.9162

Epoch 00181: val_acc did not improve
Epoch 182/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1695 - acc: 0.9839 - val_loss: 0.4327 - val_acc: 0.9163

Epoch 00182: val_acc did not improve
Epoch 183/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1697 - acc: 0.9838 - val_loss: 0.4349 - val_acc: 0.9160

Epoch 00183: val_acc did not improve
Epoch 184/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1690 - acc: 0.9847 - val_loss: 0.4347 - val_acc: 0.9164

Epoch 00184: val_acc did not improve
Epoch 185/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1700 - acc: 0.9844 - val_loss: 0.4329 - val_acc: 0.9166

Epoch 00185: val_acc did not improve
Epoch 186/200
Learning rate:  5e-07
1563/1563 [==============================] - 57s 36ms/step - loss: 0.1686 - acc: 0.9845 - val_loss: 0.4340 - val_acc: 0.9170

Epoch 00186: val_acc did not improve
Epoch 187/200
Learning rate:  5e-07
1563/1563 [==============================] - 56s 36ms/step - loss: 0.1690 - acc: 0.9840 - val_loss: 0.4339 - val_acc: 0.9163

Epoch 00187: val_acc did not improve
Epoch 188/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1706 - acc: 0.9838 - val_loss: 0.4352 - val_acc: 0.9162

Epoch 00188: val_acc did not improve
Epoch 189/200
Learning rate:  5e-07
1563/1563 [==============================] - 54s 35ms/step - loss: 0.1677 - acc: 0.9847 - val_loss: 0.4332 - val_acc: 0.9163

Epoch 00189: val_acc did not improve
Epoch 190/200
Learning rate:  5e-07
1563/1563 [==============================] - 54s 35ms/step - loss: 0.1685 - acc: 0.9847 - val_loss: 0.4339 - val_acc: 0.9161

Epoch 00190: val_acc did not improve
Epoch 191/200
Learning rate:  5e-07
1563/1563 [==============================] - 56s 36ms/step - loss: 0.1689 - acc: 0.9843 - val_loss: 0.4348 - val_acc: 0.9162

Epoch 00191: val_acc did not improve
Epoch 192/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1699 - acc: 0.9841 - val_loss: 0.4344 - val_acc: 0.9160

Epoch 00192: val_acc did not improve
Epoch 193/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1683 - acc: 0.9842 - val_loss: 0.4333 - val_acc: 0.9159

Epoch 00193: val_acc did not improve
Epoch 194/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1702 - acc: 0.9839 - val_loss: 0.4347 - val_acc: 0.9161

Epoch 00194: val_acc did not improve
Epoch 195/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1697 - acc: 0.9843 - val_loss: 0.4323 - val_acc: 0.9166

Epoch 00195: val_acc did not improve
Epoch 196/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1712 - acc: 0.9833 - val_loss: 0.4334 - val_acc: 0.9166

Epoch 00196: val_acc did not improve
Epoch 197/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1680 - acc: 0.9841 - val_loss: 0.4320 - val_acc: 0.9164

Epoch 00197: val_acc did not improve
Epoch 198/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1696 - acc: 0.9844 - val_loss: 0.4331 - val_acc: 0.9166

Epoch 00198: val_acc did not improve
Epoch 199/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1677 - acc: 0.9842 - val_loss: 0.4336 - val_acc: 0.9164

Epoch 00199: val_acc did not improve
Epoch 200/200
Learning rate:  5e-07
1563/1563 [==============================] - 55s 35ms/step - loss: 0.1705 - acc: 0.9834 - val_loss: 0.4347 - val_acc: 0.9164

Epoch 00200: val_acc did not improve
10000/10000 [==============================] - 3s 297us/step
Test loss: 0.43472515029907227
Test accuracy: 0.9164
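
The log above shows the learning rate decaying exactly as lr_schedule prescribes: the printed rate drops to 1e-4 after epoch 80, to 1e-5 after epoch 120, to 1e-6 after epoch 160, and to 5e-7 after epoch 180 (the scheduler receives a 0-based epoch index, which is why each change first appears one epoch later in the log, e.g. at Epoch 122/200). The final-epoch model reaches 91.64 % test accuracy, while the best validation checkpoint is epoch 128 (val_acc 0.9182). Below is a minimal sketch, not the author's original code, showing how that best checkpoint could be restored and evaluated on the test set; it assumes the checkpoint path matches the save path in the log and that training subtracted the per-pixel training-set mean (subtract_pixel_mean=True, the default in the Keras CIFAR10 ResNet example):

# Minimal sketch: reload the checkpoint with the highest val_acc and
# evaluate it on the CIFAR10 test set.
# Assumptions: the path below matches the ModelCheckpoint save path in the
# log, and training used subtract_pixel_mean preprocessing.
import os
import numpy as np
import keras
from keras.models import load_model
from keras.datasets import cifar10

# Same preprocessing as training: scale to [0, 1], then subtract the
# per-pixel mean computed on the training set.
(x_train, _), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
x_test -= np.mean(x_train, axis=0)
y_test = keras.utils.to_categorical(y_test, 10)

# Epoch 128 had the highest val_acc (0.9182) in the log above.
model_path = os.path.join('saved_models', 'cifar10_ResNet20v1_model.128.h5')
model = load_model(model_path)

scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])

With the epoch-128 weights restored this way, the reported test accuracy would be expected to track that checkpoint's validation accuracy rather than the 0.9164 printed above for the final-epoch model.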
