Dissecting the Keras example cifar10_resnet.py

This example trains a convolutional neural network (a ResNet) for image classification on the CIFAR-10 dataset.

It also demonstrates how to use the ModelCheckpoint, LearningRateScheduler, and ReduceLROnPlateau callbacks.

Shape of the input data:

x_train shape: (50000, 32, 32, 3)
y_train shape: (50000, 1)
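These shapes come from `keras.datasets.cifar10.load_data()`; the script then scales pixel values to [0, 1] and one-hot encodes the labels. A minimal NumPy sketch of that preprocessing (using random stand-in arrays of the same shape, so it runs without downloading the dataset):

```python
import numpy as np

# Stand-in arrays with the same shapes/dtypes cifar10.load_data() returns
# (random data here, so the sketch runs without downloading CIFAR-10).
x_train = np.random.randint(0, 256, size=(50000, 32, 32, 3), dtype=np.uint8)
y_train = np.random.randint(0, 10, size=(50000, 1), dtype=np.uint8)

# Normalize pixels to [0, 1], as the example script does.
x_train = x_train.astype("float32") / 255.0

# One-hot encode labels: (50000, 1) -> (50000, 10),
# equivalent to keras.utils.to_categorical(y_train, 10).
y_onehot = np.eye(10, dtype="float32")[y_train.squeeze()]

print("x_train shape:", x_train.shape)   # (50000, 32, 32, 3)
print("y_onehot shape:", y_onehot.shape)  # (50000, 10)
```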

The default network architecture:

____________________________________________________________________________________________________________________________________________
Layer (type)                                  Output Shape                   Param #         Connected to
============================================================================================================================================
input_1 (InputLayer)                          (None, 32, 32, 3)              0
____________________________________________________________________________________________________________________________________________
conv2d_1 (Conv2D)                             (None, 32, 32, 16)             448             input_1[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_1 (BatchNormalization)    (None, 32, 32, 16)             64              conv2d_1[0][0]
____________________________________________________________________________________________________________________________________________
activation_1 (Activation)                     (None, 32, 32, 16)             0               batch_normalization_1[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_2 (Conv2D)                             (None, 32, 32, 16)             2320            activation_1[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_2 (BatchNormalization)    (None, 32, 32, 16)             64              conv2d_2[0][0]
____________________________________________________________________________________________________________________________________________
activation_2 (Activation)                     (None, 32, 32, 16)             0               batch_normalization_2[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_3 (Conv2D)                             (None, 32, 32, 16)             2320            activation_2[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_3 (BatchNormalization)    (None, 32, 32, 16)             64              conv2d_3[0][0]
____________________________________________________________________________________________________________________________________________
add_1 (Add)                                   (None, 32, 32, 16)             0               activation_1[0][0]
                                                                                             batch_normalization_3[0][0]
____________________________________________________________________________________________________________________________________________
activation_3 (Activation)                     (None, 32, 32, 16)             0               add_1[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_4 (Conv2D)                             (None, 32, 32, 16)             2320            activation_3[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_4 (BatchNormalization)    (None, 32, 32, 16)             64              conv2d_4[0][0]
____________________________________________________________________________________________________________________________________________
activation_4 (Activation)                     (None, 32, 32, 16)             0               batch_normalization_4[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_5 (Conv2D)                             (None, 32, 32, 16)             2320            activation_4[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_5 (BatchNormalization)    (None, 32, 32, 16)             64              conv2d_5[0][0]
____________________________________________________________________________________________________________________________________________
add_2 (Add)                                   (None, 32, 32, 16)             0               activation_3[0][0]
                                                                                             batch_normalization_5[0][0]
____________________________________________________________________________________________________________________________________________
activation_5 (Activation)                     (None, 32, 32, 16)             0               add_2[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_6 (Conv2D)                             (None, 32, 32, 16)             2320            activation_5[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_6 (BatchNormalization)    (None, 32, 32, 16)             64              conv2d_6[0][0]
____________________________________________________________________________________________________________________________________________
activation_6 (Activation)                     (None, 32, 32, 16)             0               batch_normalization_6[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_7 (Conv2D)                             (None, 32, 32, 16)             2320            activation_6[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_7 (BatchNormalization)    (None, 32, 32, 16)             64              conv2d_7[0][0]
____________________________________________________________________________________________________________________________________________
add_3 (Add)                                   (None, 32, 32, 16)             0               activation_5[0][0]
                                                                                             batch_normalization_7[0][0]
____________________________________________________________________________________________________________________________________________
activation_7 (Activation)                     (None, 32, 32, 16)             0               add_3[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_8 (Conv2D)                             (None, 16, 16, 32)             4640            activation_7[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_8 (BatchNormalization)    (None, 16, 16, 32)             128             conv2d_8[0][0]
____________________________________________________________________________________________________________________________________________
activation_8 (Activation)                     (None, 16, 16, 32)             0               batch_normalization_8[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_9 (Conv2D)                             (None, 16, 16, 32)             9248            activation_8[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_10 (Conv2D)                            (None, 16, 16, 32)             544             activation_7[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_9 (BatchNormalization)    (None, 16, 16, 32)             128             conv2d_9[0][0]
____________________________________________________________________________________________________________________________________________
add_4 (Add)                                   (None, 16, 16, 32)             0               conv2d_10[0][0]
                                                                                             batch_normalization_9[0][0]
____________________________________________________________________________________________________________________________________________
activation_9 (Activation)                     (None, 16, 16, 32)             0               add_4[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_11 (Conv2D)                            (None, 16, 16, 32)             9248            activation_9[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_10 (BatchNormalization)   (None, 16, 16, 32)             128             conv2d_11[0][0]
____________________________________________________________________________________________________________________________________________
activation_10 (Activation)                    (None, 16, 16, 32)             0               batch_normalization_10[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_12 (Conv2D)                            (None, 16, 16, 32)             9248            activation_10[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_11 (BatchNormalization)   (None, 16, 16, 32)             128             conv2d_12[0][0]
____________________________________________________________________________________________________________________________________________
add_5 (Add)                                   (None, 16, 16, 32)             0               activation_9[0][0]
                                                                                             batch_normalization_11[0][0]
____________________________________________________________________________________________________________________________________________
activation_11 (Activation)                    (None, 16, 16, 32)             0               add_5[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_13 (Conv2D)                            (None, 16, 16, 32)             9248            activation_11[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_12 (BatchNormalization)   (None, 16, 16, 32)             128             conv2d_13[0][0]
____________________________________________________________________________________________________________________________________________
activation_12 (Activation)                    (None, 16, 16, 32)             0               batch_normalization_12[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_14 (Conv2D)                            (None, 16, 16, 32)             9248            activation_12[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_13 (BatchNormalization)   (None, 16, 16, 32)             128             conv2d_14[0][0]
____________________________________________________________________________________________________________________________________________
add_6 (Add)                                   (None, 16, 16, 32)             0               activation_11[0][0]
                                                                                             batch_normalization_13[0][0]
____________________________________________________________________________________________________________________________________________
activation_13 (Activation)                    (None, 16, 16, 32)             0               add_6[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_15 (Conv2D)                            (None, 8, 8, 64)               18496           activation_13[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_14 (BatchNormalization)   (None, 8, 8, 64)               256             conv2d_15[0][0]
____________________________________________________________________________________________________________________________________________
activation_14 (Activation)                    (None, 8, 8, 64)               0               batch_normalization_14[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_16 (Conv2D)                            (None, 8, 8, 64)               36928           activation_14[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_17 (Conv2D)                            (None, 8, 8, 64)               2112            activation_13[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_15 (BatchNormalization)   (None, 8, 8, 64)               256             conv2d_16[0][0]
____________________________________________________________________________________________________________________________________________
add_7 (Add)                                   (None, 8, 8, 64)               0               conv2d_17[0][0]
                                                                                             batch_normalization_15[0][0]
____________________________________________________________________________________________________________________________________________
activation_15 (Activation)                    (None, 8, 8, 64)               0               add_7[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_18 (Conv2D)                            (None, 8, 8, 64)               36928           activation_15[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_16 (BatchNormalization)   (None, 8, 8, 64)               256             conv2d_18[0][0]
____________________________________________________________________________________________________________________________________________
activation_16 (Activation)                    (None, 8, 8, 64)               0               batch_normalization_16[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_19 (Conv2D)                            (None, 8, 8, 64)               36928           activation_16[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_17 (BatchNormalization)   (None, 8, 8, 64)               256             conv2d_19[0][0]
____________________________________________________________________________________________________________________________________________
add_8 (Add)                                   (None, 8, 8, 64)               0               activation_15[0][0]
                                                                                             batch_normalization_17[0][0]
____________________________________________________________________________________________________________________________________________
activation_17 (Activation)                    (None, 8, 8, 64)               0               add_8[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_20 (Conv2D)                            (None, 8, 8, 64)               36928           activation_17[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_18 (BatchNormalization)   (None, 8, 8, 64)               256             conv2d_20[0][0]
____________________________________________________________________________________________________________________________________________
activation_18 (Activation)                    (None, 8, 8, 64)               0               batch_normalization_18[0][0]
____________________________________________________________________________________________________________________________________________
conv2d_21 (Conv2D)                            (None, 8, 8, 64)               36928           activation_18[0][0]
____________________________________________________________________________________________________________________________________________
batch_normalization_19 (BatchNormalization)   (None, 8, 8, 64)               256             conv2d_21[0][0]
____________________________________________________________________________________________________________________________________________
add_9 (Add)                                   (None, 8, 8, 64)               0               activation_17[0][0]
                                                                                             batch_normalization_19[0][0]
____________________________________________________________________________________________________________________________________________
activation_19 (Activation)                    (None, 8, 8, 64)               0               add_9[0][0]
____________________________________________________________________________________________________________________________________________
average_pooling2d_1 (AveragePooling2D)        (None, 1, 1, 64)               0               activation_19[0][0]
____________________________________________________________________________________________________________________________________________
flatten_1 (Flatten)                           (None, 64)                     0               average_pooling2d_1[0][0]
____________________________________________________________________________________________________________________________________________
dense_1 (Dense)                               (None, 10)                     650             flatten_1[0][0]
============================================================================================================================================
Total params: 274,442
Trainable params: 273,066
Non-trainable params: 1,376
____________________________________________________________________________________________________________________________________________
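The parameter counts in the summary can be verified by hand. A Conv2D layer with a k×k kernel, C_in input channels, C_out filters, and a bias has (k·k·C_in + 1)·C_out parameters; BatchNormalization has 4 per channel (gamma and beta are trainable; the moving mean and variance are the non-trainable ones). A quick sanity check against a few rows of the table above:

```python
def conv2d_params(k, c_in, c_out, bias=True):
    """Conv2D parameter count: one k*k*c_in kernel per filter, plus bias."""
    return (k * k * c_in + (1 if bias else 0)) * c_out

def bn_params(channels):
    """BatchNorm: gamma + beta (trainable) + moving mean + variance (non-trainable)."""
    return 4 * channels

def dense_params(n_in, n_out, bias=True):
    """Dense parameter count: weight matrix plus bias vector."""
    return (n_in + (1 if bias else 0)) * n_out

print(conv2d_params(3, 3, 16))    # conv2d_1: 448
print(conv2d_params(3, 16, 16))   # conv2d_2..7: 2320
print(conv2d_params(3, 16, 32))   # conv2d_8 (stride-2 downsampling): 4640
print(conv2d_params(1, 16, 32))   # conv2d_10 (1x1 shortcut projection): 544
print(bn_params(16))              # batch_normalization_1..7: 64
print(dense_params(64, 10))       # dense_1: 650

# Non-trainable params: 2 moving statistics per BN channel,
# over 7 BN layers at 16 ch, 6 at 32 ch, 6 at 64 ch.
print(2 * (7 * 16 + 6 * 32 + 6 * 64))  # 1376
```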

In principle both ReduceLROnPlateau and LearningRateScheduler can adjust the learning rate, so using the two together is odd. Adding a trivial log line (the "Learning rate:" entries below) shows that in this demo ReduceLROnPlateau never gets a chance to act: LearningRateScheduler resets the learning rate at the start of every epoch, so it alone controls the actual value.
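The example's schedule (reproduced from memory here, so treat the exact step boundaries as approximate) starts at 1e-3 and decays at epochs 80, 120, 160, and 180; the print inside it is the trivial log line that produces the "Learning rate:" entries below. Because LearningRateScheduler calls this function at the beginning of every epoch and overwrites the optimizer's learning rate with its return value, any reduction made by ReduceLROnPlateau is undone one epoch later:

```python
def lr_schedule(epoch):
    """Step-decay schedule: start at 1e-3, shrink after epochs 80/120/160/180."""
    lr = 1e-3
    if epoch > 180:
        lr *= 0.5e-3
    elif epoch > 160:
        lr *= 1e-3
    elif epoch > 120:
        lr *= 1e-2
    elif epoch > 80:
        lr *= 1e-1
    print('Learning rate: ', lr)  # the per-epoch log line seen in the output
    return lr

# Wiring sketch: LearningRateScheduler re-applies lr_schedule(epoch) at every
# epoch begin, so it overrides whatever ReduceLROnPlateau may have set:
# callbacks = [ModelCheckpoint(filepath, monitor='val_acc', save_best_only=True),
#              LearningRateScheduler(lr_schedule),
#              ReduceLROnPlateau(factor=0.5, patience=5, min_lr=0.5e-6)]
```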

Using real-time data augmentation.
Epoch 1/200
Learning rate:  0.001
2020-03-30 14:35:22.037154: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-03-30 14:35:22.308184: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-03-30 14:35:23.351692: W tensorflow/stream_executor/gpu/redzone_allocator.cc:312] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only
Relying on driver to perform ptx compilation. This message will be only logged once.
1563/1563 [==============================] - 98s 63ms/step - loss: 1.5917 - accuracy: 0.4782 - val_loss: 1.5505 - val_accuracy: 0.5016
C:\ProgramData\Miniconda3\lib\site-packages\keras\callbacks\callbacks.py:707: RuntimeWarning: Can save best model only with val_acc available, skipping.
  'skipping.' % (self.monitor), RuntimeWarning)
Epoch 2/200
Learning rate:  0.001
1563/1563 [==============================] - 94s 60ms/step - loss: 1.2313 - accuracy: 0.6161 - val_loss: 1.1633 - val_accuracy: 0.6397
Epoch 3/200
Learning rate:  0.001
1563/1563 [==============================] - 94s 60ms/step - loss: 1.0533 - accuracy: 0.6842 - val_loss: 1.0276 - val_accuracy: 0.6976
Epoch 4/200
Learning rate:  0.001
1563/1563 [==============================] - 92s 59ms/step - loss: 0.9482 - accuracy: 0.7261 - val_loss: 1.2751 - val_accuracy: 0.6241
Epoch 5/200
Learning rate:  0.001
1563/1563 [==============================] - 98s 63ms/step - loss: 0.8763 - accuracy: 0.7547 - val_loss: 1.1799 - val_accuracy: 0.6838
Epoch 6/200
Learning rate:  0.001
1563/1563 [==============================] - 93s 59ms/step - loss: 0.8290 - accuracy: 0.7732 - val_loss: 0.8725 - val_accuracy: 0.7599
Epoch 7/200
Learning rate:  0.001
1563/1563 [==============================] - 94s 60ms/step - loss: 0.7937 - accuracy: 0.7866 - val_loss: 1.1870 - val_accuracy: 0.6852
Epoch 8/200
Learning rate:  0.001
1563/1563 [==============================] - 97s 62ms/step - loss: 0.7625 - accuracy: 0.7994 - val_loss: 0.8278 - val_accuracy: 0.7790
Epoch 9/200
Learning rate:  0.001
1563/1563 [==============================] - 99s 63ms/step - loss: 0.7379 - accuracy: 0.8089 - val_loss: 0.9497 - val_accuracy: 0.7545
Epoch 10/200
Learning rate:  0.001
1563/1563 [==============================] - 91s 58ms/step - loss: 0.7212 - accuracy: 0.8143 - val_loss: 0.8850 - val_accuracy: 0.7719
Epoch 11/200
Learning rate:  0.001
1563/1563 [==============================] - 95s 60ms/step - loss: 0.6999 - accuracy: 0.8225 - val_loss: 0.8480 - val_accuracy: 0.7767
Epoch 12/200
Learning rate:  0.001
1563/1563 [==============================] - 93s 60ms/step - loss: 0.6896 - accuracy: 0.8265 - val_loss: 0.7616 - val_accuracy: 0.8063
Epoch 13/200
Learning rate:  0.001
1563/1563 [==============================] - 91s 58ms/step - loss: 0.6746 - accuracy: 0.8338 - val_loss: 0.8415 - val_accuracy: 0.7851
Epoch 14/200
Learning rate:  0.001
1563/1563 [==============================] - 98s 63ms/step - loss: 0.6642 - accuracy: 0.8378 - val_loss: 0.8608 - val_accuracy: 0.7775
Epoch 15/200
Learning rate:  0.001
1563/1563 [==============================] - 94s 60ms/step - loss: 0.6501 - accuracy: 0.8423 - val_loss: 0.8510 - val_accuracy: 0.7770
Epoch 16/200
Learning rate:  0.001
1563/1563 [==============================] - 89s 57ms/step - loss: 0.6393 - accuracy: 0.8479 - val_loss: 0.8638 - val_accuracy: 0.7814
Epoch 17/200
Learning rate:  0.001
1563/1563 [==============================] - 95s 61ms/step - loss: 0.6355 - accuracy: 0.8470 - val_loss: 0.7241 - val_accuracy: 0.8216
Epoch 18/200
Learning rate:  0.001
1563/1563 [==============================] - 91s 58ms/step - loss: 0.6221 - accuracy: 0.8524 - val_loss: 1.1078 - val_accuracy: 0.7379
Epoch 19/200
Learning rate:  0.001
1563/1563 [==============================] - 92s 59ms/step - loss: 0.6178 - accuracy: 0.8549 - val_loss: 0.7532 - val_accuracy: 0.8163
Epoch 20/200
Learning rate:  0.001
1563/1563 [==============================] - 96s 62ms/step - loss: 0.6091 - accuracy: 0.8578 - val_loss: 0.6477 - val_accuracy: 0.8508
Epoch 21/200
Learning rate:  0.001
1563/1563 [==============================] - 94s 60ms/step - loss: 0.6051 - accuracy: 0.8596 - val_loss: 0.8007 - val_accuracy: 0.8062
Epoch 22/200
Learning rate:  0.001
1563/1563 [==============================] - 94s 60ms/step - loss: 0.6003 - accuracy: 0.8621 - val_loss: 0.7801 - val_accuracy: 0.8096
Epoch 23/200
Learning rate:  0.001
1563/1563 [==============================] - 96s 62ms/step - loss: 0.6010 - accuracy: 0.8594 - val_loss: 0.7796 - val_accuracy: 0.8145
Epoch 24/200
Learning rate:  0.001
1563/1563 [==============================] - 95s 61ms/step - loss: 0.5891 - accuracy: 0.8652 - val_loss: 0.6671 - val_accuracy: 0.8459
Epoch 25/200
Learning rate:  0.001
1563/1563 [==============================] - 101s 65ms/step - loss: 0.5896 - accuracy: 0.8654 - val_loss: 0.7667 - val_accuracy: 0.8187
Epoch 26/200
Learning rate:  0.001
1563/1563 [==============================] - 103s 66ms/step - loss: 0.5820 - accuracy: 0.8691 - val_loss: 0.6568 - val_accuracy: 0.8508
Epoch 27/200
Learning rate:  0.001
1563/1563 [==============================] - 99s 63ms/step - loss: 0.5797 - accuracy: 0.8689 - val_loss: 0.7201 - val_accuracy: 0.8269
Epoch 28/200
Learning rate:  0.001
1563/1563 [==============================] - 94s 60ms/step - loss: 0.5771 - accuracy: 0.8702 - val_loss: 0.6693 - val_accuracy: 0.8477
Epoch 29/200
Learning rate:  0.001
1563/1563 [==============================] - 97s 62ms/step - loss: 0.5693 - accuracy: 0.8740 - val_loss: 1.0859 - val_accuracy: 0.7421
Epoch 30/200
Learning rate:  0.001
1563/1563 [==============================] - 101s 64ms/step - loss: 0.5706 - accuracy: 0.8738 - val_loss: 0.7400 - val_accuracy: 0.8251
Epoch 31/200
Learning rate:  0.001
1563/1563 [==============================] - 95s 61ms/step - loss: 0.5658 - accuracy: 0.8748 - val_loss: 0.7000 - val_accuracy: 0.8332
Epoch 32/200
Learning rate:  0.001
1563/1563 [==============================] - 99s 63ms/step - loss: 0.5626 - accuracy: 0.8774 - val_loss: 0.8014 - val_accuracy: 0.8078
Epoch 33/200
Learning rate:  0.001
1563/1563 [==============================] - 102s 66ms/step - loss: 0.5619 - accuracy: 0.8760 - val_loss: 0.8234 - val_accuracy: 0.7989
Epoch 34/200
Learning rate:  0.001
1563/1563 [==============================] - 93s 59ms/step - loss: 0.5573 - accuracy: 0.8785 - val_loss: 0.6726 - val_accuracy: 0.8442
Epoch 35/200
Learning rate:  0.001
1563/1563 [==============================] - 101s 65ms/step - loss: 0.5550 - accuracy: 0.8783 - val_loss: 0.9754 - val_accuracy: 0.7682
Epoch 36/200
Learning rate:  0.001
1563/1563 [==============================] - 101s 64ms/step - loss: 0.5540 - accuracy: 0.8799 - val_loss: 0.7412 - val_accuracy: 0.8246
Epoch 37/200
Learning rate:  0.001
1563/1563 [==============================] - 94s 60ms/step - loss: 0.5478 - accuracy: 0.8810 - val_loss: 0.7362 - val_accuracy: 0.8321
Epoch 38/200
Learning rate:  0.001
1563/1563 [==============================] - 94s 60ms/step - loss: 0.5461 - accuracy: 0.8832 - val_loss: 0.6396 - val_accuracy: 0.8605
Epoch 39/200
Learning rate:  0.001
1563/1563 [==============================] - 95s 61ms/step - loss: 0.5417 - accuracy: 0.8849 - val_loss: 0.6857 - val_accuracy: 0.8451
Epoch 40/200
Learning rate:  0.001
1563/1563 [==============================] - 94s 60ms/step - loss: 0.5427 - accuracy: 0.8840 - val_loss: 0.6789 - val_accuracy: 0.8469
Epoch 41/200
Learning rate:  0.001
1563/1563 [==============================] - 99s 64ms/step - loss: 0.5400 - accuracy: 0.8857 - val_loss: 0.7397 - val_accuracy: 0.8308
Epoch 42/200
Learning rate:  0.001
1563/1563 [==============================] - 101s 65ms/step - loss: 0.5460 - accuracy: 0.8836 - val_loss: 0.7218 - val_accuracy: 0.8360
Epoch 43/200
Learning rate:  0.001
1563/1563 [==============================] - 95s 61ms/step - loss: 0.5358 - accuracy: 0.8877 - val_loss: 0.6488 - val_accuracy: 0.8531
Epoch 44/200
Learning rate:  0.001
1563/1563 [==============================] - 98s 63ms/step - loss: 0.5342 - accuracy: 0.8867 - val_loss: 0.6707 - val_accuracy: 0.8507
Epoch 45/200
Learning rate:  0.001
1563/1563 [==============================] - 98s 63ms/step - loss: 0.5338 - accuracy: 0.8875 - val_loss: 0.6588 - val_accuracy: 0.8493
Epoch 46/200
Learning rate:  0.001
1563/1563 [==============================] - 96s 61ms/step - loss: 0.5307 - accuracy: 0.8870 - val_loss: 0.8961 - val_accuracy: 0.7928
Epoch 47/200
Learning rate:  0.001
1563/1563 [==============================] - 98s 63ms/step - loss: 0.5302 - accuracy: 0.8877 - val_loss: 0.7348 - val_accuracy: 0.8391
Epoch 48/200
Learning rate:  0.001
1563/1563 [==============================] - 100s 64ms/step - loss: 0.5333 - accuracy: 0.8873 - val_loss: 0.6815 - val_accuracy: 0.8498
Epoch 49/200
Learning rate:  0.001
1563/1563 [==============================] - 95s 61ms/step - loss: 0.5267 - accuracy: 0.8914 - val_loss: 0.7237 - val_accuracy: 0.8390
Epoch 50/200
Learning rate:  0.001
1563/1563 [==============================] - 101s 65ms/step - loss: 0.5304 - accuracy: 0.8882 - val_loss: 0.6860 - val_accuracy: 0.8454
Epoch 51/200
Learning rate:  0.001
1563/1563 [==============================] - 97s 62ms/step - loss: 0.5238 - accuracy: 0.8915 - val_loss: 0.6352 - val_accuracy: 0.8598
Epoch 52/200
Learning rate:  0.001
1563/1563 [==============================] - 94s 60ms/step - loss: 0.5250 - accuracy: 0.8898 - val_loss: 0.8026 - val_accuracy: 0.8173
Epoch 53/200
Learning rate:  0.001
1563/1563 [==============================] - 98s 63ms/step - loss: 0.5244 - accuracy: 0.8906 - val_loss: 0.6443 - val_accuracy: 0.8586
Epoch 54/200
Learning rate:  0.001
1563/1563 [==============================] - 99s 64ms/step - loss: 0.5236 - accuracy: 0.8918 - val_loss: 0.9139 - val_accuracy: 0.7908
Epoch 55/200
Learning rate:  0.001
1563/1563 [==============================] - 96s 61ms/step - loss: 0.5212 - accuracy: 0.8925 - val_loss: 0.6718 - val_accuracy: 0.8489
Epoch 56/200
Learning rate:  0.001
1563/1563 [==============================] - 99s 63ms/step - loss: 0.5193 - accuracy: 0.8938 - val_loss: 0.6052 - val_accuracy: 0.8678
Epoch 57/200
Learning rate:  0.001
1563/1563 [==============================] - 97s 62ms/step - loss: 0.5212 - accuracy: 0.8915 - val_loss: 0.8292 - val_accuracy: 0.8088
Epoch 58/200
Learning rate:  0.001
1563/1563 [==============================] - 96s 62ms/step - loss: 0.5177 - accuracy: 0.8933 - val_loss: 0.7282 - val_accuracy: 0.8370
Epoch 59/200
Learning rate:  0.001
1563/1563 [==============================] - 99s 63ms/step - loss: 0.5150 - accuracy: 0.8958 - val_loss: 0.6112 - val_accuracy: 0.8663
Epoch 60/200
Learning rate:  0.001
1563/1563 [==============================] - 101s 65ms/step - loss: 0.5133 - accuracy: 0.8945 - val_loss: 0.7822 - val_accuracy: 0.8178
Epoch 61/200
Learning rate:  0.001
1563/1563 [==============================] - 93s 60ms/step - loss: 0.5164 - accuracy: 0.8936 - val_loss: 0.6592 - val_accuracy: 0.8560
Epoch 62/200
Learning rate:  0.001
1563/1563 [==============================] - 100s 64ms/step - loss: 0.5141 - accuracy: 0.8953 - val_loss: 0.6427 - val_accuracy: 0.8583
Epoch 63/200
Learning rate:  0.001
1563/1563 [==============================] - 100s 64ms/step - loss: 0.5143 - accuracy: 0.8952 - val_loss: 0.6357 - val_accuracy: 0.8587
Epoch 64/200
Learning rate:  0.001
1563/1563 [==============================] - 97s 62ms/step - loss: 0.5101 - accuracy: 0.8953 - val_loss: 0.7361 - val_accuracy: 0.8427
Epoch 65/200
Learning rate:  0.001
1563/1563 [==============================] - 100s 64ms/step - loss: 0.5068 - accuracy: 0.8962 - val_loss: 0.6661 - val_accuracy: 0.8537
Epoch 66/200
Learning rate:  0.001
1563/1563 [==============================] - 101s 64ms/step - loss: 0.5126 - accuracy: 0.8959 - val_loss: 0.8921 - val_accuracy: 0.8074
Epoch 67/200
Learning rate:  0.001
1563/1563 [==============================] - 98s 62ms/step - loss: 0.5054 - accuracy: 0.8982 - val_loss: 0.6273 - val_accuracy: 0.8628
Epoch 68/200
Learning rate:  0.001
1563/1563 [==============================] - 98s 63ms/step - loss: 0.5067 - accuracy: 0.8973 - val_loss: 0.6847 - val_accuracy: 0.8505
Epoch 69/200
Learning rate:  0.001
1563/1563 [==============================] - 97s 62ms/step - loss: 0.5024 - accuracy: 0.8994 - val_loss: 0.7245 - val_accuracy: 0.8410
Epoch 70/200
Learning rate:  0.001
1563/1563 [==============================] - 96s 62ms/step - loss: 0.5029 - accuracy: 0.8985 - val_loss: 0.7776 - val_accuracy: 0.8251
Epoch 71/200
Learning rate:  0.001
1563/1563 [==============================] - 100s 64ms/step - loss: 0.5032 - accuracy: 0.8990 - val_loss: 0.8186 - val_accuracy: 0.8204
Epoch 72/200
Learning rate:  0.001
1563/1563 [==============================] - 101s 65ms/step - loss: 0.4986 - accuracy: 0.8993 - val_loss: 0.6269 - val_accuracy: 0.8642
Epoch 73/200
Learning rate:  0.001
1563/1563 [==============================] - 96s 61ms/step - loss: 0.5010 - accuracy: 0.9007 - val_loss: 0.6221 - val_accuracy: 0.8692
Epoch 74/200
Learning rate:  0.001
1563/1563 [==============================] - 97s 62ms/step - loss: 0.4985 - accuracy: 0.8998 - val_loss: 0.8424 - val_accuracy: 0.8088
Epoch 75/200
Learning rate:  0.001
1563/1563 [==============================] - 100s 64ms/step - loss: 0.5012 - accuracy: 0.8994 - val_loss: 0.7132 - val_accuracy: 0.8388
Epoch 76/200
Learning rate:  0.001
1563/1563 [==============================] - 92s 59ms/step - loss: 0.5027 - accuracy: 0.8987 - val_loss: 0.6768 - val_accuracy: 0.8448
Epoch 77/200
Learning rate:  0.001
1563/1563 [==============================] - 98s 62ms/step - loss: 0.4966 - accuracy: 0.9006 - val_loss: 0.6833 - val_accuracy: 0.8494
Epoch 78/200
Learning rate:  0.001
1563/1563 [==============================] - 96s 61ms/step - loss: 0.5039 - accuracy: 0.8976 - val_loss: 0.6121 - val_accuracy: 0.8681
Epoch 79/200
Learning rate:  0.001
1563/1563 [==============================] - 94s 60ms/step - loss: 0.4977 - accuracy: 0.9016 - val_loss: 0.7325 - val_accuracy: 0.8442
Epoch 80/200
Learning rate:  0.001
1563/1563 [==============================] - 96s 61ms/step - loss: 0.4955 - accuracy: 0.9009 - val_loss: 0.6505 - val_accuracy: 0.8569
Epoch 81/200
Learning rate:  0.001
1563/1563 [==============================] - 96s 62ms/step - loss: 0.4925 - accuracy: 0.9017 - val_loss: 0.6268 - val_accuracy: 0.8655
Epoch 82/200
Learning rate:  0.0001
1563/1563 [==============================] - 98s 62ms/step - loss: 0.4126 - accuracy: 0.9286 - val_loss: 0.5112 - val_accuracy: 0.8999
Epoch 83/200
Learning rate:  0.0001
1563/1563 [==============================] - 96s 61ms/step - loss: 0.3749 - accuracy: 0.9407 - val_loss: 0.5044 - val_accuracy: 0.9022
Epoch 84/200
Learning rate:  0.0001
1563/1563 [==============================] - 99s 63ms/step - loss: 0.3620 - accuracy: 0.9446 - val_loss: 0.4960 - val_accuracy: 0.9054
Epoch 85/200
Learning rate:  0.0001
1563/1563 [==============================] - 97s 62ms/step - loss: 0.3490 - accuracy: 0.9462 - val_loss: 0.4931 - val_accuracy: 0.9064
Epoch 86/200
Learning rate:  0.0001
1563/1563 [==============================] - 100s 64ms/step - loss: 0.3375 - accuracy: 0.9496 - val_loss: 0.4849 - val_accuracy: 0.9048
Epoch 87/200
Learning rate:  0.0001
1563/1563 [==============================] - 94s 60ms/step - loss: 0.3306 - accuracy: 0.9523 - val_loss: 0.4776 - val_accuracy: 0.9066
Epoch 88/200
Learning rate:  0.0001
1563/1563 [==============================] - 95s 60ms/step - loss: 0.3230 - accuracy: 0.9527 - val_loss: 0.4767 - val_accuracy: 0.9078
Epoch 89/200
Learning rate:  0.0001
1563/1563 [==============================] - 99s 63ms/step - loss: 0.3177 - accuracy: 0.9541 - val_loss: 0.4885 - val_accuracy: 0.9055
Epoch 90/200
Learning rate:  0.0001
1563/1563 [==============================] - 99s 63ms/step - loss: 0.3120 - accuracy: 0.9544 - val_loss: 0.4663 - val_accuracy: 0.9097
Epoch 91/200
Learning rate:  0.0001
1563/1563 [==============================] - 93s 59ms/step - loss: 0.3049 - accuracy: 0.9562 - val_loss: 0.4675 - val_accuracy: 0.9110
Epoch 92/200
Learning rate:  0.0001
1563/1563 [==============================] - 96s 61ms/step - loss: 0.2961 - accuracy: 0.9582 - val_loss: 0.4800 - val_accuracy: 0.9059
Epoch 93/200
Learning rate:  0.0001
1563/1563 [==============================] - 95s 61ms/step - loss: 0.2946 - accuracy: 0.9577 - val_loss: 0.4724 - val_accuracy: 0.9093
Epoch 94/200
Learning rate:  0.0001
1563/1563 [==============================] - 92s 59ms/step - loss: 0.2884 - accuracy: 0.9593 - val_loss: 0.4710 - val_accuracy: 0.9101
Epoch 95/200
Learning rate:  0.0001
1563/1563 [==============================] - 99s 63ms/step - loss: 0.2842 - accuracy: 0.9598 - val_loss: 0.4719 - val_accuracy: 0.9072
Epoch 96/200
Learning rate:  0.0001
1563/1563 [==============================] - 99s 63ms/step - loss: 0.2787 - accuracy: 0.9616 - val_loss: 0.4642 - val_accuracy: 0.9115
Epoch 97/200
Learning rate:  0.0001
1563/1563 [==============================] - 96s 62ms/step - loss: 0.2732 - accuracy: 0.9623 - val_loss: 0.4672 - val_accuracy: 0.9078
Epoch 98/200
Learning rate:  0.0001
1563/1563 [==============================] - 97s 62ms/step - loss: 0.2695 - accuracy: 0.9634 - val_loss: 0.4602 - val_accuracy: 0.9099
Epoch 99/200
Learning rate:  0.0001
1563/1563 [==============================] - 100s 64ms/step - loss: 0.2671 - accuracy: 0.9632 - val_loss: 0.4683 - val_accuracy: 0.9077
Epoch 100/200
Learning rate:  0.0001
1563/1563 [==============================] - 98s 62ms/step - loss: 0.2623 - accuracy: 0.9642 - val_loss: 0.4616 - val_accuracy: 0.9080
Epoch 101/200
Learning rate:  0.0001
1563/1563 [==============================] - 99s 63ms/step - loss: 0.2615 - accuracy: 0.9630 - val_loss: 0.4519 - val_accuracy: 0.9124
Epoch 102/200
Learning rate:  0.0001
1563/1563 [==============================] - 100s 64ms/step - loss: 0.2539 - accuracy: 0.9658 - val_loss: 0.4647 - val_accuracy: 0.9109
Epoch 103/200
Learning rate:  0.0001
1563/1563 [==============================] - 93s 60ms/step - loss: 0.2512 - accuracy: 0.9661 - val_loss: 0.4578 - val_accuracy: 0.9097
Epoch 104/200
Learning rate:  0.0001
1563/1563 [==============================] - 102s 65ms/step - loss: 0.2491 - accuracy: 0.9660 - val_loss: 0.4634 - val_accuracy: 0.9100
Epoch 105/200
Learning rate:  0.0001
1563/1563 [==============================] - 96s 61ms/step - loss: 0.2445 - accuracy: 0.9672 - val_loss: 0.4574 - val_accuracy: 0.9119
Epoch 106/200
Learning rate:  0.0001
1563/1563 [==============================] - 98s 63ms/step - loss: 0.2439 - accuracy: 0.9666 - val_loss: 0.4641 - val_accuracy: 0.9076
Epoch 107/200
Learning rate:  0.0001
1563/1563 [==============================] - 98s 63ms/step - loss: 0.2399 - accuracy: 0.9677 - val_loss: 0.4651 - val_accuracy: 0.9108
Epoch 108/200
Learning rate:  0.0001
1563/1563 [==============================] - 101s 65ms/step - loss: 0.2370 - accuracy: 0.9684 - val_loss: 0.4523 - val_accuracy: 0.9139
Epoch 109/200
Learning rate:  0.0001
1563/1563 [==============================] - 97s 62ms/step - loss: 0.2357 - accuracy: 0.9686 - val_loss: 0.4567 - val_accuracy: 0.9094
Epoch 110/200
Learning rate:  0.0001
1563/1563 [==============================] - 100s 64ms/step - loss: 0.2326 - accuracy: 0.9689 - val_loss: 0.4692 - val_accuracy: 0.9070
Epoch 111/200
Learning rate:  0.0001
1563/1563 [==============================] - 96s 61ms/step - loss: 0.2292 - accuracy: 0.9698 - val_loss: 0.4608 - val_accuracy: 0.9096
Epoch 112/200
Learning rate:  0.0001
1563/1563 [==============================] - 95s 61ms/step - loss: 0.2238 - accuracy: 0.9718 - val_loss: 0.4633 - val_accuracy: 0.9092
Epoch 113/200
Learning rate:  0.0001
1563/1563 [==============================] - 95s 61ms/step - loss: 0.2247 - accuracy: 0.9703 - val_loss: 0.4741 - val_accuracy: 0.9083
Epoch 114/200
Learning rate:  0.0001
1563/1563 [==============================] - 95s 61ms/step - loss: 0.2212 - accuracy: 0.9722 - val_loss: 0.4676 - val_accuracy: 0.9066
Epoch 115/200
Learning rate:  0.0001
1563/1563 [==============================] - 90s 58ms/step - loss: 0.2228 - accuracy: 0.9704 - val_loss: 0.4635 - val_accuracy: 0.9100
Epoch 116/200
Learning rate:  0.0001
1563/1563 [==============================] - 93s 59ms/step - loss: 0.2184 - accuracy: 0.9710 - val_loss: 0.4772 - val_accuracy: 0.9083
Epoch 117/200
Learning rate:  0.0001
1563/1563 [==============================] - 95s 61ms/step - loss: 0.2153 - accuracy: 0.9723 - val_loss: 0.4661 - val_accuracy: 0.9078
Epoch 118/200
Learning rate:  0.0001
1563/1563 [==============================] - 86s 55ms/step - loss: 0.2125 - accuracy: 0.9725 - val_loss: 0.4642 - val_accuracy: 0.9085
Epoch 119/200
Learning rate:  0.0001
1563/1563 [==============================] - 95s 61ms/step - loss: 0.2131 - accuracy: 0.9724 - val_loss: 0.4626 - val_accuracy: 0.9092
Epoch 120/200
Learning rate:  0.0001
1563/1563 [==============================] - 96s 61ms/step - loss: 0.2100 - accuracy: 0.9729 - val_loss: 0.4905 - val_accuracy: 0.9051
Epoch 121/200
Learning rate:  0.0001
1563/1563 [==============================] - 88s 57ms/step - loss: 0.2070 - accuracy: 0.9732 - val_loss: 0.4681 - val_accuracy: 0.9072
Epoch 122/200
Learning rate:  1e-05
1563/1563 [==============================] - 94s 60ms/step - loss: 0.1978 - accuracy: 0.9759 - val_loss: 0.4565 - val_accuracy: 0.9086
Epoch 123/200
Learning rate:  1e-05
1563/1563 [==============================] - 92s 59ms/step - loss: 0.1934 - accuracy: 0.9791 - val_loss: 0.4514 - val_accuracy: 0.9107
Epoch 124/200
Learning rate:  1e-05
1563/1563 [==============================] - 89s 57ms/step - loss: 0.1929 - accuracy: 0.9788 - val_loss: 0.4530 - val_accuracy: 0.9104
Epoch 125/200
Learning rate:  1e-05
1563/1563 [==============================] - 97s 62ms/step - loss: 0.1916 - accuracy: 0.9789 - val_loss: 0.4515 - val_accuracy: 0.9101
Epoch 126/200
Learning rate:  1e-05
1563/1563 [==============================] - 92s 59ms/step - loss: 0.1895 - accuracy: 0.9800 - val_loss: 0.4483 - val_accuracy: 0.9119
Epoch 127/200
Learning rate:  1e-05
1563/1563 [==============================] - 97s 62ms/step - loss: 0.1913 - accuracy: 0.9794 - val_loss: 0.4525 - val_accuracy: 0.9110
Epoch 128/200
Learning rate:  1e-05
1563/1563 [==============================] - 95s 61ms/step - loss: 0.1900 - accuracy: 0.9793 - val_loss: 0.4514 - val_accuracy: 0.9107
Epoch 129/200
Learning rate:  1e-05
1563/1563 [==============================] - 99s 64ms/step - loss: 0.1881 - accuracy: 0.9805 - val_loss: 0.4504 - val_accuracy: 0.9114
Epoch 130/200
Learning rate:  1e-05
1563/1563 [==============================] - 104s 66ms/step - loss: 0.1877 - accuracy: 0.9801 - val_loss: 0.4500 - val_accuracy: 0.9113
Epoch 131/200
Learning rate:  1e-05
1563/1563 [==============================] - 97s 62ms/step - loss: 0.1870 - accuracy: 0.9804 - val_loss: 0.4493 - val_accuracy: 0.9117
Epoch 132/200
Learning rate:  1e-05
1563/1563 [==============================] - 96s 62ms/step - loss: 0.1873 - accuracy: 0.9803 - val_loss: 0.4483 - val_accuracy: 0.9118
Epoch 133/200
Learning rate:  1e-05
1563/1563 [==============================] - 92s 59ms/step - loss: 0.1867 - accuracy: 0.9804 - val_loss: 0.4503 - val_accuracy: 0.9127
Epoch 134/200
Learning rate:  1e-05
1563/1563 [==============================] - 96s 61ms/step - loss: 0.1853 - accuracy: 0.9804 - val_loss: 0.4510 - val_accuracy: 0.9118
Epoch 135/200
Learning rate:  1e-05
1563/1563 [==============================] - 94s 60ms/step - loss: 0.1874 - accuracy: 0.9806 - val_loss: 0.4525 - val_accuracy: 0.9113
Epoch 136/200
Learning rate:  1e-05
1563/1563 [==============================] - 90s 58ms/step - loss: 0.1844 - accuracy: 0.9813 - val_loss: 0.4498 - val_accuracy: 0.9123
Epoch 137/200
Learning rate:  1e-05
1563/1563 [==============================] - 99s 63ms/step - loss: 0.1862 - accuracy: 0.9805 - val_loss: 0.4480 - val_accuracy: 0.9134
Epoch 138/200
Learning rate:  1e-05
1563/1563 [==============================] - 90s 58ms/step - loss: 0.1841 - accuracy: 0.9806 - val_loss: 0.4484 - val_accuracy: 0.9134
Epoch 139/200
Learning rate:  1e-05
1563/1563 [==============================] - 94s 60ms/step - loss: 0.1839 - accuracy: 0.9805 - val_loss: 0.4488 - val_accuracy: 0.9123
Epoch 140/200
Learning rate:  1e-05
1563/1563 [==============================] - 97s 62ms/step - loss: 0.1819 - accuracy: 0.9820 - val_loss: 0.4468 - val_accuracy: 0.9125
Epoch 141/200
Learning rate:  1e-05
1563/1563 [==============================] - 95s 61ms/step - loss: 0.1820 - accuracy: 0.9820 - val_loss: 0.4463 - val_accuracy: 0.9119
Epoch 142/200
Learning rate:  1e-05
1563/1563 [==============================] - 90s 58ms/step - loss: 0.1831 - accuracy: 0.9811 - val_loss: 0.4513 - val_accuracy: 0.9125
Epoch 143/200
Learning rate:  1e-05
1563/1563 [==============================] - 96s 61ms/step - loss: 0.1830 - accuracy: 0.9813 - val_loss: 0.4491 - val_accuracy: 0.9117
Epoch 144/200
Learning rate:  1e-05
1563/1563 [==============================] - 97s 62ms/step - loss: 0.1802 - accuracy: 0.9830 - val_loss: 0.4513 - val_accuracy: 0.9127
Epoch 145/200
Learning rate:  1e-05
1563/1563 [==============================] - 99s 64ms/step - loss: 0.1793 - accuracy: 0.9829 - val_loss: 0.4513 - val_accuracy: 0.9128
Epoch 146/200
Learning rate:  1e-05
1563/1563 [==============================] - 98s 63ms/step - loss: 0.1819 - accuracy: 0.9812 - val_loss: 0.4494 - val_accuracy: 0.9124
Epoch 147/200
Learning rate:  1e-05
1563/1563 [==============================] - 91s 58ms/step - loss: 0.1818 - accuracy: 0.9803 - val_loss: 0.4512 - val_accuracy: 0.9120
Epoch 148/200
Learning rate:  1e-05
1563/1563 [==============================] - 91s 58ms/step - loss: 0.1812 - accuracy: 0.9809 - val_loss: 0.4531 - val_accuracy: 0.9127
Epoch 149/200
Learning rate:  1e-05
1563/1563 [==============================] - 93s 59ms/step - loss: 0.1790 - accuracy: 0.9828 - val_loss: 0.4506 - val_accuracy: 0.9135
Epoch 150/200
Learning rate:  1e-05
1563/1563 [==============================] - 91s 59ms/step - loss: 0.1791 - accuracy: 0.9826 - val_loss: 0.4495 - val_accuracy: 0.9131
Epoch 151/200
Learning rate:  1e-05
1563/1563 [==============================] - 94s 60ms/step - loss: 0.1805 - accuracy: 0.9821 - val_loss: 0.4493 - val_accuracy: 0.9136
Epoch 152/200
Learning rate:  1e-05
1563/1563 [==============================] - 95s 61ms/step - loss: 0.1783 - accuracy: 0.9824 - val_loss: 0.4516 - val_accuracy: 0.9129
Epoch 153/200
Learning rate:  1e-05
1563/1563 [==============================] - 95s 60ms/step - loss: 0.1769 - accuracy: 0.9837 - val_loss: 0.4533 - val_accuracy: 0.9134
Epoch 154/200
Learning rate:  1e-05
1563/1563 [==============================] - 87s 56ms/step - loss: 0.1783 - accuracy: 0.9823 - val_loss: 0.4512 - val_accuracy: 0.9136
Epoch 155/200
Learning rate:  1e-05
1563/1563 [==============================] - 92s 59ms/step - loss: 0.1788 - accuracy: 0.9819 - val_loss: 0.4517 - val_accuracy: 0.9136
Epoch 156/200
Learning rate:  1e-05
1563/1563 [==============================] - 95s 61ms/step - loss: 0.1768 - accuracy: 0.9834 - val_loss: 0.4486 - val_accuracy: 0.9152
Epoch 157/200
Learning rate:  1e-05
1563/1563 [==============================] - 92s 59ms/step - loss: 0.1791 - accuracy: 0.9818 - val_loss: 0.4504 - val_accuracy: 0.9143
Epoch 158/200
Learning rate:  1e-05
1563/1563 [==============================] - 96s 62ms/step - loss: 0.1769 - accuracy: 0.9822 - val_loss: 0.4519 - val_accuracy: 0.9141
Epoch 159/200
Learning rate:  1e-05
1563/1563 [==============================] - 94s 60ms/step - loss: 0.1756 - accuracy: 0.9831 - val_loss: 0.4515 - val_accuracy: 0.9150
Epoch 160/200
Learning rate:  1e-05
1563/1563 [==============================] - 90s 58ms/step - loss: 0.1772 - accuracy: 0.9820 - val_loss: 0.4501 - val_accuracy: 0.9144
Epoch 161/200
Learning rate:  1e-05
1563/1563 [==============================] - 93s 60ms/step - loss: 0.1748 - accuracy: 0.9829 - val_loss: 0.4518 - val_accuracy: 0.9141
Epoch 162/200
Learning rate:  1e-06
1563/1563 [==============================] - 93s 60ms/step - loss: 0.1753 - accuracy: 0.9826 - val_loss: 0.4511 - val_accuracy: 0.9130
Epoch 163/200
Learning rate:  1e-06
1563/1563 [==============================] - 91s 58ms/step - loss: 0.1758 - accuracy: 0.9827 - val_loss: 0.4513 - val_accuracy: 0.9138
Epoch 164/200
Learning rate:  1e-06
1563/1563 [==============================] - 93s 59ms/step - loss: 0.1764 - accuracy: 0.9822 - val_loss: 0.4536 - val_accuracy: 0.9141
Epoch 165/200
Learning rate:  1e-06
1563/1563 [==============================] - 92s 59ms/step - loss: 0.1751 - accuracy: 0.9839 - val_loss: 0.4511 - val_accuracy: 0.9133
Epoch 166/200
Learning rate:  1e-06
1563/1563 [==============================] - 89s 57ms/step - loss: 0.1743 - accuracy: 0.9838 - val_loss: 0.4500 - val_accuracy: 0.9144
Epoch 167/200
Learning rate:  1e-06
1563/1563 [==============================] - 95s 61ms/step - loss: 0.1730 - accuracy: 0.9841 - val_loss: 0.4530 - val_accuracy: 0.9143
Epoch 168/200
Learning rate:  1e-06
1563/1563 [==============================] - 100s 64ms/step - loss: 0.1747 - accuracy: 0.9827 - val_loss: 0.4506 - val_accuracy: 0.9142
Epoch 169/200
Learning rate:  1e-06
1563/1563 [==============================] - 94s 60ms/step - loss: 0.1740 - accuracy: 0.9834 - val_loss: 0.4509 - val_accuracy: 0.9148
Epoch 170/200
Learning rate:  1e-06
1563/1563 [==============================] - 96s 61ms/step - loss: 0.1749 - accuracy: 0.9835 - val_loss: 0.4491 - val_accuracy: 0.9146
Epoch 171/200
Learning rate:  1e-06
1563/1563 [==============================] - 91s 58ms/step - loss: 0.1734 - accuracy: 0.9837 - val_loss: 0.4500 - val_accuracy: 0.9145
Epoch 172/200
Learning rate:  1e-06
1563/1563 [==============================] - 92s 59ms/step - loss: 0.1739 - accuracy: 0.9837 - val_loss: 0.4506 - val_accuracy: 0.9143
Epoch 173/200
Learning rate:  1e-06
1563/1563 [==============================] - 97s 62ms/step - loss: 0.1747 - accuracy: 0.9831 - val_loss: 0.4513 - val_accuracy: 0.9144
Epoch 174/200
Learning rate:  1e-06
1563/1563 [==============================] - 99s 63ms/step - loss: 0.1762 - accuracy: 0.9822 - val_loss: 0.4503 - val_accuracy: 0.9143
Epoch 175/200
Learning rate:  1e-06
1563/1563 [==============================] - 91s 58ms/step - loss: 0.1738 - accuracy: 0.9837 - val_loss: 0.4504 - val_accuracy: 0.9146
Epoch 176/200
Learning rate:  1e-06
1563/1563 [==============================] - 99s 63ms/step - loss: 0.1744 - accuracy: 0.9833 - val_loss: 0.4514 - val_accuracy: 0.9145
Epoch 177/200
Learning rate:  1e-06
1563/1563 [==============================] - 96s 61ms/step - loss: 0.1738 - accuracy: 0.9834 - val_loss: 0.4534 - val_accuracy: 0.9141
Epoch 178/200
Learning rate:  1e-06
1563/1563 [==============================] - 94s 60ms/step - loss: 0.1744 - accuracy: 0.9827 - val_loss: 0.4491 - val_accuracy: 0.9148
Epoch 179/200
Learning rate:  1e-06
1563/1563 [==============================] - 97s 62ms/step - loss: 0.1743 - accuracy: 0.9833 - val_loss: 0.4500 - val_accuracy: 0.9139
Epoch 180/200
Learning rate:  1e-06
1563/1563 [==============================] - 97s 62ms/step - loss: 0.1733 - accuracy: 0.9838 - val_loss: 0.4502 - val_accuracy: 0.9150
Epoch 181/200
Learning rate:  1e-06
1563/1563 [==============================] - 96s 61ms/step - loss: 0.1753 - accuracy: 0.9825 - val_loss: 0.4520 - val_accuracy: 0.9149
Epoch 182/200
Learning rate:  5e-07
1563/1563 [==============================] - 92s 59ms/step - loss: 0.1744 - accuracy: 0.9834 - val_loss: 0.4505 - val_accuracy: 0.9144
Epoch 183/200
Learning rate:  5e-07
1563/1563 [==============================] - 91s 58ms/step - loss: 0.1731 - accuracy: 0.9833 - val_loss: 0.4494 - val_accuracy: 0.9145
Epoch 184/200
Learning rate:  5e-07
1563/1563 [==============================] - 88s 56ms/step - loss: 0.1741 - accuracy: 0.9834 - val_loss: 0.4500 - val_accuracy: 0.9145
Epoch 185/200
Learning rate:  5e-07
1563/1563 [==============================] - 96s 61ms/step - loss: 0.1748 - accuracy: 0.9832 - val_loss: 0.4512 - val_accuracy: 0.9138
Epoch 186/200
Learning rate:  5e-07
1563/1563 [==============================] - 845s 541ms/step - loss: 0.1743 - accuracy: 0.9834 - val_loss: 0.4495 - val_accuracy: 0.9136
Epoch 187/200
Learning rate:  5e-07
1563/1563 [==============================] - 142s 91ms/step - loss: 0.1747 - accuracy: 0.9826 - val_loss: 0.4513 - val_accuracy: 0.9139
Epoch 188/200
Learning rate:  5e-07
1563/1563 [==============================] - 119s 76ms/step - loss: 0.1723 - accuracy: 0.9838 - val_loss: 0.4497 - val_accuracy: 0.9148
Epoch 189/200
Learning rate:  5e-07
1563/1563 [==============================] - 89s 57ms/step - loss: 0.1754 - accuracy: 0.9829 - val_loss: 0.4515 - val_accuracy: 0.9149
Epoch 190/200
Learning rate:  5e-07
1563/1563 [==============================] - 85s 54ms/step - loss: 0.1728 - accuracy: 0.9839 - val_loss: 0.4498 - val_accuracy: 0.9147
Epoch 191/200
Learning rate:  5e-07
1563/1563 [==============================] - 91s 58ms/step - loss: 0.1738 - accuracy: 0.9836 - val_loss: 0.4507 - val_accuracy: 0.9133
Epoch 192/200
Learning rate:  5e-07
1563/1563 [==============================] - 89s 57ms/step - loss: 0.1738 - accuracy: 0.9836 - val_loss: 0.4496 - val_accuracy: 0.9145
Epoch 193/200
Learning rate:  5e-07
1563/1563 [==============================] - 90s 58ms/step - loss: 0.1720 - accuracy: 0.9844 - val_loss: 0.4518 - val_accuracy: 0.9147
Epoch 194/200
Learning rate:  5e-07
1563/1563 [==============================] - 88s 56ms/step - loss: 0.1731 - accuracy: 0.9838 - val_loss: 0.4508 - val_accuracy: 0.9149
Epoch 195/200
Learning rate:  5e-07
1563/1563 [==============================] - 89s 57ms/step - loss: 0.1740 - accuracy: 0.9837 - val_loss: 0.4500 - val_accuracy: 0.9148
Epoch 196/200
Learning rate:  5e-07
1563/1563 [==============================] - 91s 58ms/step - loss: 0.1723 - accuracy: 0.9841 - val_loss: 0.4518 - val_accuracy: 0.9143
Epoch 197/200
Learning rate:  5e-07
1563/1563 [==============================] - 90s 58ms/step - loss: 0.1740 - accuracy: 0.9832 - val_loss: 0.4501 - val_accuracy: 0.9146
Epoch 198/200
Learning rate:  5e-07
1563/1563 [==============================] - 100s 64ms/step - loss: 0.1731 - accuracy: 0.9831 - val_loss: 0.4493 - val_accuracy: 0.9150
Epoch 199/200
Learning rate:  5e-07
1563/1563 [==============================] - 105s 67ms/step - loss: 0.1750 - accuracy: 0.9832 - val_loss: 0.4495 - val_accuracy: 0.9152
Epoch 200/200
Learning rate:  5e-07
1563/1563 [==============================] - 95s 61ms/step - loss: 0.1727 - accuracy: 0.9837 - val_loss: 0.4505 - val_accuracy: 0.9147
10000/10000 [==============================] - 8s 810us/step
Test loss: 0.45047624320983887
Test accuracy: 0.9146999716758728
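The learning-rate steps visible in the log above (0.001 until epoch 81, then 1e-4 from epoch 82, 1e-5 from epoch 122, 1e-6 from epoch 162, and 5e-7 from epoch 182) come from the `lr_schedule` function that the script hands to the `LearningRateScheduler` callback. A minimal sketch consistent with those transitions (epoch numbers inside the callback are 0-based, which is why the drop printed at "Epoch 82/200" fires at internal epoch 81):

```python
def lr_schedule(epoch):
    """Piecewise-constant learning-rate schedule.

    Reproduces the "Learning rate:" lines in the log: the base rate of
    1e-3 is scaled down at internal epochs 80, 120, 160, and 180.
    """
    lr = 1e-3
    if epoch > 180:
        lr *= 0.5e-3   # 5e-7, printed from "Epoch 182/200" onward
    elif epoch > 160:
        lr *= 1e-3     # 1e-6
    elif epoch > 120:
        lr *= 1e-2     # 1e-5
    elif epoch > 80:
        lr *= 1e-1     # 1e-4
    print('Learning rate: ', lr)
    return lr


# Wired into training roughly as (assuming the usual keras imports):
# from keras.callbacks import LearningRateScheduler, ReduceLROnPlateau
# lr_scheduler = LearningRateScheduler(lr_schedule)
# model.fit(..., callbacks=[checkpoint, lr_reducer, lr_scheduler])
```

`ReduceLROnPlateau` is also registered as a fallback, but with this fixed schedule in place it never has to trigger; every rate change in the log lines up with the `epoch > N` thresholds above.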
——————————————————————

Table of contents

Analysis of the keras example files
