Week 4: Monkeypox Recognition

  • This post is a learning-record entry for the 365-day Deep Learning Training Camp (365天深度学习训练营)
  • Reference article: 深度学习100例-卷积神经网络(CNN)猴痘病识别 | 第45天
  • Author: K同学啊
import os, PIL, pathlib
import tensorflow as tf  # used below by image_dataset_from_directory and friends

data_dir = "D:/jupyter notebook/45-data/"

data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*.jpg')))

print("Total number of images:", image_count)
Total number of images: 2142
Monkeypox = list(data_dir.glob('Monkeypox/*.jpg'))
PIL.Image.open(str(Monkeypox[0]))

[Figure 1: a sample image from the Monkeypox class]

batch_size = 32
img_height = 224
img_width = 224
"""
For a detailed introduction to image_dataset_from_directory(), see: https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 2142 files belonging to 2 classes.
Using 1714 files for training.
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 2142 files belonging to 2 classes.
Using 428 files for validation.
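The 1714/428 split reported above can be reproduced by hand, assuming Keras takes the validation set as the floor of `validation_split * file_count` (which matches the output):

```python
# Hypothetical sanity check of the 80/20 split reported above.
total_files = 2142
validation_split = 0.2

num_val = int(total_files * validation_split)  # floor of 20% of the files
num_train = total_files - num_val              # the remainder goes to training

print(num_train, num_val)  # 1714 428
```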
class_names = train_ds.class_names
print(class_names)
['Monkeypox', 'Others']
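`image_dataset_from_directory` infers the class names from the subdirectory names, sorted alphanumerically, so label 0 maps to `Monkeypox` and label 1 to `Others`. A minimal sketch of that ordering:

```python
# The class index is the position in the alphanumerically sorted folder list.
folders = ['Others', 'Monkeypox']   # the order on disk is irrelevant
class_names = sorted(folders)

print(class_names)                   # ['Monkeypox', 'Others']
print(class_names.index('Others'))   # 1
```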
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 10))

for images, labels in train_ds.take(1):
    for i in range(20):
        ax = plt.subplot(5, 10, i + 1)

        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        
        plt.axis("off")

[Figure 2: a grid of 20 sample training images with their class labels]

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
(32, 224, 224, 3)
(32,)
AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
from tensorflow.keras import datasets, layers, models
num_classes = 2

"""
If you're unsure how convolution output sizes are computed, see: https://blog.csdn.net/qq_38251616/article/details/114278995

layers.Dropout(0.3) helps prevent overfitting and improves the model's ability to generalize.
In the previous flower-recognition post, the large gap between training and validation accuracy was caused by overfitting.

For more on Dropout layers, see: https://mtyjkh.blog.csdn.net/article/details/115826689
"""

model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),

    layers.Conv2D(16, (3, 3), activation='relu'),  # Conv layer 1, 3x3 kernels (input shape already set by Rescaling)
    layers.AveragePooling2D((2, 2)),               # Pooling layer 1, 2x2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),  # Conv layer 2, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # Pooling layer 2, 2x2 downsampling
    layers.Dropout(0.3),
    layers.Conv2D(64, (3, 3), activation='relu'),  # Conv layer 3, 3x3 kernels
    layers.Dropout(0.3),

    layers.Flatten(),                       # Flatten layer, bridges the conv layers and the dense layers
    layers.Dense(128, activation='relu'),   # Fully connected layer for further feature extraction
    layers.Dense(num_classes)               # Output layer: raw logits, one per class
])

model.summary()  # Print the network architecture
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 rescaling (Rescaling)       (None, 224, 224, 3)       0         
                                                                 
 conv2d (Conv2D)             (None, 222, 222, 16)      448       
                                                                 
 average_pooling2d (AverageP  (None, 111, 111, 16)     0         
 ooling2D)                                                       
                                                                 
 conv2d_1 (Conv2D)           (None, 109, 109, 32)      4640      
                                                                 
 average_pooling2d_1 (Averag  (None, 54, 54, 32)       0         
 ePooling2D)                                                     
                                                                 
 dropout (Dropout)           (None, 54, 54, 32)        0         
                                                                 
 conv2d_2 (Conv2D)           (None, 52, 52, 64)        18496     
                                                                 
 dropout_1 (Dropout)         (None, 52, 52, 64)        0         
                                                                 
 flatten (Flatten)           (None, 173056)            0         
                                                                 
 dense (Dense)               (None, 128)               22151296  
                                                                 
 dense_1 (Dense)             (None, 2)                 258       
                                                                 
=================================================================
Total params: 22,175,138
Trainable params: 22,175,138
Non-trainable params: 0
_________________________________________________________________
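The numbers in the summary can be verified by hand: each "valid" 3x3 convolution shrinks the spatial size by 2, each 2x2 pooling halves it (flooring), and a conv layer has `(kh*kw*in_channels + 1) * out_channels` parameters (the +1 is the bias per filter). A small check in plain Python:

```python
# Recompute the spatial sizes and parameter counts shown in model.summary().
def conv_out(size):          # 3x3 'valid' convolution: loses one pixel per side
    return size - 2

def pool_out(size):          # 2x2 pooling with stride 2
    return size // 2

def conv_params(k, cin, cout):   # weights plus one bias per output filter
    return (k * k * cin + 1) * cout

s = 224
s = conv_out(s)   # 222 after conv2d
s = pool_out(s)   # 111 after average_pooling2d
s = conv_out(s)   # 109 after conv2d_1
s = pool_out(s)   # 54  after average_pooling2d_1
s = conv_out(s)   # 52  after conv2d_2
flat = s * s * 64             # 52 * 52 * 64 = 173056

p1 = conv_params(3, 3, 16)    # 448
p2 = conv_params(3, 16, 32)   # 4640
p3 = conv_params(3, 32, 64)   # 18496
d1 = (flat + 1) * 128         # 22151296
d2 = (128 + 1) * 2            # 258

print(flat, p1 + p2 + p3 + d1 + d2)  # 173056 22175138
```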
# Set the optimizer
opt = tf.keras.optimizers.Adam(learning_rate=1e-4)

model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
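Because the final Dense layer applies no softmax, the model outputs raw logits, and `from_logits=True` tells the loss to apply softmax internally. A hedged numpy sketch (with made-up logits) of what sparse categorical cross-entropy computes for one sample:

```python
import numpy as np

def sparse_ce_from_logits(logits, label):
    # Numerically stable log-softmax, then negative log-likelihood of the true class.
    z = logits - np.max(logits)
    log_probs = z - np.log(np.sum(np.exp(z)))
    return -log_probs[label]

logits = np.array([2.0, 0.5])              # hypothetical model output for one image
loss0 = sparse_ce_from_logits(logits, 0)   # true class 0: the confident class, small loss
loss1 = sparse_ce_from_logits(logits, 1)   # true class 1: larger loss

print(loss0 < loss1)  # True
```

Note that `exp(-loss)` recovers the predicted probability of the labeled class, so the two values above correspond to probabilities that sum to 1.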
from tensorflow.keras.callbacks import ModelCheckpoint

epochs = 50

checkpointer = ModelCheckpoint('best_model.h5',
                                monitor='val_accuracy',
                                verbose=1,
                                save_best_only=True,
                                save_weights_only=True)

history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer])
Epoch 1/50
54/54 [==============================] - ETA: 0s - loss: 0.6935 - accuracy: 0.5776
Epoch 1: val_accuracy improved from -inf to 0.58879, saving model to best_model.h5
54/54 [==============================] - 57s 1s/step - loss: 0.6935 - accuracy: 0.5776 - val_loss: 0.6587 - val_accuracy: 0.5888
Epoch 2/50
54/54 [==============================] - ETA: 0s - loss: 0.6237 - accuracy: 0.6418
Epoch 2: val_accuracy improved from 0.58879 to 0.63084, saving model to best_model.h5
54/54 [==============================] - 53s 985ms/step - loss: 0.6237 - accuracy: 0.6418 - val_loss: 0.6376 - val_accuracy: 0.6308
Epoch 3/50
54/54 [==============================] - ETA: 0s - loss: 0.6038 - accuracy: 0.6762
Epoch 3: val_accuracy improved from 0.63084 to 0.68692, saving model to best_model.h5
54/54 [==============================] - 51s 950ms/step - loss: 0.6038 - accuracy: 0.6762 - val_loss: 0.5934 - val_accuracy: 0.6869
Epoch 4/50
54/54 [==============================] - ETA: 0s - loss: 0.5544 - accuracy: 0.7211
Epoch 4: val_accuracy improved from 0.68692 to 0.70327, saving model to best_model.h5
54/54 [==============================] - 52s 962ms/step - loss: 0.5544 - accuracy: 0.7211 - val_loss: 0.5594 - val_accuracy: 0.7033
Epoch 5/50
54/54 [==============================] - ETA: 0s - loss: 0.5302 - accuracy: 0.7369
Epoch 5: val_accuracy improved from 0.70327 to 0.72897, saving model to best_model.h5
54/54 [==============================] - 50s 934ms/step - loss: 0.5302 - accuracy: 0.7369 - val_loss: 0.5155 - val_accuracy: 0.7290
Epoch 6/50
54/54 [==============================] - ETA: 0s - loss: 0.4754 - accuracy: 0.7882
Epoch 6: val_accuracy improved from 0.72897 to 0.77804, saving model to best_model.h5
54/54 [==============================] - 51s 942ms/step - loss: 0.4754 - accuracy: 0.7882 - val_loss: 0.4696 - val_accuracy: 0.7780
Epoch 7/50
54/54 [==============================] - ETA: 0s - loss: 0.4659 - accuracy: 0.7830
Epoch 7: val_accuracy did not improve from 0.77804
54/54 [==============================] - 49s 912ms/step - loss: 0.4659 - accuracy: 0.7830 - val_loss: 0.4793 - val_accuracy: 0.7710
Epoch 8/50
54/54 [==============================] - ETA: 0s - loss: 0.4385 - accuracy: 0.8057
Epoch 8: val_accuracy improved from 0.77804 to 0.81075, saving model to best_model.h5
54/54 [==============================] - 54s 999ms/step - loss: 0.4385 - accuracy: 0.8057 - val_loss: 0.4467 - val_accuracy: 0.8107
Epoch 9/50
54/54 [==============================] - ETA: 0s - loss: 0.3998 - accuracy: 0.8302
Epoch 9: val_accuracy did not improve from 0.81075
54/54 [==============================] - 50s 934ms/step - loss: 0.3998 - accuracy: 0.8302 - val_loss: 0.5211 - val_accuracy: 0.7593
Epoch 10/50
54/54 [==============================] - ETA: 0s - loss: 0.3874 - accuracy: 0.8384
Epoch 10: val_accuracy did not improve from 0.81075
54/54 [==============================] - 48s 889ms/step - loss: 0.3874 - accuracy: 0.8384 - val_loss: 0.5087 - val_accuracy: 0.7500
Epoch 11/50
54/54 [==============================] - ETA: 0s - loss: 0.3919 - accuracy: 0.8302
Epoch 11: val_accuracy did not improve from 0.81075
54/54 [==============================] - 46s 854ms/step - loss: 0.3919 - accuracy: 0.8302 - val_loss: 0.4635 - val_accuracy: 0.7850
Epoch 12/50
54/54 [==============================] - ETA: 0s - loss: 0.3428 - accuracy: 0.8646
Epoch 12: val_accuracy did not improve from 0.81075
54/54 [==============================] - 47s 871ms/step - loss: 0.3428 - accuracy: 0.8646 - val_loss: 0.4612 - val_accuracy: 0.7780
Epoch 13/50
54/54 [==============================] - ETA: 0s - loss: 0.3320 - accuracy: 0.8629
Epoch 13: val_accuracy improved from 0.81075 to 0.85047, saving model to best_model.h5
54/54 [==============================] - 48s 886ms/step - loss: 0.3320 - accuracy: 0.8629 - val_loss: 0.3994 - val_accuracy: 0.8505
Epoch 14/50
54/54 [==============================] - ETA: 0s - loss: 0.3210 - accuracy: 0.8699
Epoch 14: val_accuracy did not improve from 0.85047
54/54 [==============================] - 47s 866ms/step - loss: 0.3210 - accuracy: 0.8699 - val_loss: 0.3772 - val_accuracy: 0.8458
Epoch 15/50
54/54 [==============================] - ETA: 0s - loss: 0.2828 - accuracy: 0.8915
Epoch 15: val_accuracy did not improve from 0.85047
54/54 [==============================] - 46s 848ms/step - loss: 0.2828 - accuracy: 0.8915 - val_loss: 0.3983 - val_accuracy: 0.8271
Epoch 16/50
54/54 [==============================] - ETA: 0s - loss: 0.2596 - accuracy: 0.9037
Epoch 16: val_accuracy improved from 0.85047 to 0.85981, saving model to best_model.h5
54/54 [==============================] - 45s 843ms/step - loss: 0.2596 - accuracy: 0.9037 - val_loss: 0.3624 - val_accuracy: 0.8598
Epoch 17/50
54/54 [==============================] - ETA: 0s - loss: 0.2546 - accuracy: 0.9090
Epoch 17: val_accuracy did not improve from 0.85981
54/54 [==============================] - 44s 823ms/step - loss: 0.2546 - accuracy: 0.9090 - val_loss: 0.3762 - val_accuracy: 0.8598
Epoch 18/50
54/54 [==============================] - ETA: 0s - loss: 0.2260 - accuracy: 0.9177
Epoch 18: val_accuracy did not improve from 0.85981
54/54 [==============================] - 44s 824ms/step - loss: 0.2260 - accuracy: 0.9177 - val_loss: 0.3736 - val_accuracy: 0.8528
Epoch 19/50
54/54 [==============================] - ETA: 0s - loss: 0.2255 - accuracy: 0.9113
Epoch 19: val_accuracy did not improve from 0.85981
54/54 [==============================] - 44s 821ms/step - loss: 0.2255 - accuracy: 0.9113 - val_loss: 0.3658 - val_accuracy: 0.8598
Epoch 20/50
54/54 [==============================] - ETA: 0s - loss: 0.2122 - accuracy: 0.9189
Epoch 20: val_accuracy did not improve from 0.85981
54/54 [==============================] - 45s 829ms/step - loss: 0.2122 - accuracy: 0.9189 - val_loss: 0.4123 - val_accuracy: 0.8551
Epoch 21/50
54/54 [==============================] - ETA: 0s - loss: 0.2344 - accuracy: 0.9061
Epoch 21: val_accuracy improved from 0.85981 to 0.86449, saving model to best_model.h5
54/54 [==============================] - 45s 830ms/step - loss: 0.2344 - accuracy: 0.9061 - val_loss: 0.3776 - val_accuracy: 0.8645
Epoch 22/50
54/54 [==============================] - ETA: 0s - loss: 0.1825 - accuracy: 0.9347
Epoch 22: val_accuracy did not improve from 0.86449
54/54 [==============================] - 45s 828ms/step - loss: 0.1825 - accuracy: 0.9347 - val_loss: 0.3744 - val_accuracy: 0.8575
Epoch 23/50
54/54 [==============================] - ETA: 0s - loss: 0.1857 - accuracy: 0.9300
Epoch 23: val_accuracy improved from 0.86449 to 0.87150, saving model to best_model.h5
54/54 [==============================] - 45s 827ms/step - loss: 0.1857 - accuracy: 0.9300 - val_loss: 0.3589 - val_accuracy: 0.8715
Epoch 24/50
54/54 [==============================] - ETA: 0s - loss: 0.1794 - accuracy: 0.9265
Epoch 24: val_accuracy did not improve from 0.87150
54/54 [==============================] - 45s 828ms/step - loss: 0.1794 - accuracy: 0.9265 - val_loss: 0.3833 - val_accuracy: 0.8528
Epoch 25/50
54/54 [==============================] - ETA: 0s - loss: 0.1930 - accuracy: 0.9259
Epoch 25: val_accuracy did not improve from 0.87150
54/54 [==============================] - 46s 846ms/step - loss: 0.1930 - accuracy: 0.9259 - val_loss: 0.3579 - val_accuracy: 0.8692
Epoch 26/50
54/54 [==============================] - ETA: 0s - loss: 0.1496 - accuracy: 0.9463
Epoch 26: val_accuracy did not improve from 0.87150
54/54 [==============================] - 48s 890ms/step - loss: 0.1496 - accuracy: 0.9463 - val_loss: 0.3763 - val_accuracy: 0.8621
Epoch 27/50
54/54 [==============================] - ETA: 0s - loss: 0.1329 - accuracy: 0.9574
Epoch 27: val_accuracy improved from 0.87150 to 0.87850, saving model to best_model.h5
54/54 [==============================] - 45s 837ms/step - loss: 0.1329 - accuracy: 0.9574 - val_loss: 0.3631 - val_accuracy: 0.8785
Epoch 28/50
54/54 [==============================] - ETA: 0s - loss: 0.1282 - accuracy: 0.9557
Epoch 28: val_accuracy did not improve from 0.87850
54/54 [==============================] - 45s 835ms/step - loss: 0.1282 - accuracy: 0.9557 - val_loss: 0.3903 - val_accuracy: 0.8621
Epoch 29/50
54/54 [==============================] - ETA: 0s - loss: 0.1349 - accuracy: 0.9469
Epoch 29: val_accuracy did not improve from 0.87850
54/54 [==============================] - 47s 869ms/step - loss: 0.1349 - accuracy: 0.9469 - val_loss: 0.3715 - val_accuracy: 0.8668
Epoch 30/50
54/54 [==============================] - ETA: 0s - loss: 0.1441 - accuracy: 0.9457
Epoch 30: val_accuracy did not improve from 0.87850
54/54 [==============================] - 47s 867ms/step - loss: 0.1441 - accuracy: 0.9457 - val_loss: 0.3774 - val_accuracy: 0.8762
Epoch 31/50
54/54 [==============================] - ETA: 0s - loss: 0.1095 - accuracy: 0.9621
Epoch 31: val_accuracy improved from 0.87850 to 0.88318, saving model to best_model.h5
54/54 [==============================] - 47s 864ms/step - loss: 0.1095 - accuracy: 0.9621 - val_loss: 0.3697 - val_accuracy: 0.8832
Epoch 32/50
54/54 [==============================] - ETA: 0s - loss: 0.0998 - accuracy: 0.9656
Epoch 32: val_accuracy did not improve from 0.88318
54/54 [==============================] - 47s 878ms/step - loss: 0.0998 - accuracy: 0.9656 - val_loss: 0.3795 - val_accuracy: 0.8738
Epoch 33/50
54/54 [==============================] - ETA: 0s - loss: 0.0857 - accuracy: 0.9702
Epoch 33: val_accuracy improved from 0.88318 to 0.89252, saving model to best_model.h5
54/54 [==============================] - 48s 880ms/step - loss: 0.0857 - accuracy: 0.9702 - val_loss: 0.3805 - val_accuracy: 0.8925
Epoch 34/50
54/54 [==============================] - ETA: 0s - loss: 0.0975 - accuracy: 0.9679
Epoch 34: val_accuracy did not improve from 0.89252
54/54 [==============================] - 47s 869ms/step - loss: 0.0975 - accuracy: 0.9679 - val_loss: 0.3665 - val_accuracy: 0.8925
Epoch 35/50
54/54 [==============================] - ETA: 0s - loss: 0.0751 - accuracy: 0.9784
Epoch 35: val_accuracy did not improve from 0.89252
54/54 [==============================] - 47s 873ms/step - loss: 0.0751 - accuracy: 0.9784 - val_loss: 0.3890 - val_accuracy: 0.8902
Epoch 36/50
54/54 [==============================] - ETA: 0s - loss: 0.0697 - accuracy: 0.9790
Epoch 36: val_accuracy did not improve from 0.89252
54/54 [==============================] - 49s 910ms/step - loss: 0.0697 - accuracy: 0.9790 - val_loss: 0.3908 - val_accuracy: 0.8925
Epoch 37/50
54/54 [==============================] - ETA: 0s - loss: 0.0692 - accuracy: 0.9813
Epoch 37: val_accuracy improved from 0.89252 to 0.89486, saving model to best_model.h5
54/54 [==============================] - 47s 880ms/step - loss: 0.0692 - accuracy: 0.9813 - val_loss: 0.3982 - val_accuracy: 0.8949
Epoch 38/50
54/54 [==============================] - ETA: 0s - loss: 0.0684 - accuracy: 0.9778
Epoch 38: val_accuracy did not improve from 0.89486
54/54 [==============================] - 47s 875ms/step - loss: 0.0684 - accuracy: 0.9778 - val_loss: 0.3940 - val_accuracy: 0.8902
Epoch 39/50
54/54 [==============================] - ETA: 0s - loss: 0.0642 - accuracy: 0.9807
Epoch 39: val_accuracy did not improve from 0.89486
54/54 [==============================] - 47s 880ms/step - loss: 0.0642 - accuracy: 0.9807 - val_loss: 0.4160 - val_accuracy: 0.8855
Epoch 40/50
54/54 [==============================] - ETA: 0s - loss: 0.0556 - accuracy: 0.9842
Epoch 40: val_accuracy did not improve from 0.89486
54/54 [==============================] - 47s 872ms/step - loss: 0.0556 - accuracy: 0.9842 - val_loss: 0.4070 - val_accuracy: 0.8925
Epoch 41/50
54/54 [==============================] - ETA: 0s - loss: 0.0592 - accuracy: 0.9784
Epoch 41: val_accuracy did not improve from 0.89486
54/54 [==============================] - 47s 867ms/step - loss: 0.0592 - accuracy: 0.9784 - val_loss: 0.4227 - val_accuracy: 0.8855
Epoch 42/50
54/54 [==============================] - ETA: 0s - loss: 0.0590 - accuracy: 0.9807
Epoch 42: val_accuracy did not improve from 0.89486
54/54 [==============================] - 46s 855ms/step - loss: 0.0590 - accuracy: 0.9807 - val_loss: 0.4156 - val_accuracy: 0.8902
Epoch 43/50
54/54 [==============================] - ETA: 0s - loss: 0.0606 - accuracy: 0.9837
Epoch 43: val_accuracy did not improve from 0.89486
54/54 [==============================] - 47s 866ms/step - loss: 0.0606 - accuracy: 0.9837 - val_loss: 0.4433 - val_accuracy: 0.8808
Epoch 44/50
54/54 [==============================] - ETA: 0s - loss: 0.0535 - accuracy: 0.9854
Epoch 44: val_accuracy did not improve from 0.89486
54/54 [==============================] - 46s 860ms/step - loss: 0.0535 - accuracy: 0.9854 - val_loss: 0.4276 - val_accuracy: 0.8925
Epoch 45/50
54/54 [==============================] - ETA: 0s - loss: 0.0610 - accuracy: 0.9790
Epoch 45: val_accuracy did not improve from 0.89486
54/54 [==============================] - 47s 877ms/step - loss: 0.0610 - accuracy: 0.9790 - val_loss: 0.4782 - val_accuracy: 0.8785
Epoch 46/50
54/54 [==============================] - ETA: 0s - loss: 0.0404 - accuracy: 0.9907
Epoch 46: val_accuracy did not improve from 0.89486
54/54 [==============================] - 47s 866ms/step - loss: 0.0404 - accuracy: 0.9907 - val_loss: 0.4712 - val_accuracy: 0.8855
Epoch 47/50
54/54 [==============================] - ETA: 0s - loss: 0.0395 - accuracy: 0.9872
Epoch 47: val_accuracy did not improve from 0.89486
54/54 [==============================] - 47s 868ms/step - loss: 0.0395 - accuracy: 0.9872 - val_loss: 0.4991 - val_accuracy: 0.8668
Epoch 48/50
54/54 [==============================] - ETA: 0s - loss: 0.0441 - accuracy: 0.9872
Epoch 48: val_accuracy did not improve from 0.89486
54/54 [==============================] - 46s 855ms/step - loss: 0.0441 - accuracy: 0.9872 - val_loss: 0.4852 - val_accuracy: 0.8855
Epoch 49/50
54/54 [==============================] - ETA: 0s - loss: 0.0330 - accuracy: 0.9907
Epoch 49: val_accuracy did not improve from 0.89486
54/54 [==============================] - 46s 862ms/step - loss: 0.0330 - accuracy: 0.9907 - val_loss: 0.4739 - val_accuracy: 0.8902
Epoch 50/50
54/54 [==============================] - ETA: 0s - loss: 0.0311 - accuracy: 0.9930
Epoch 50: val_accuracy improved from 0.89486 to 0.89720, saving model to best_model.h5
54/54 [==============================] - 46s 857ms/step - loss: 0.0311 - accuracy: 0.9930 - val_loss: 0.4897 - val_accuracy: 0.8972
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

[Figure 3: training/validation accuracy (left) and loss (right) curves over 50 epochs]

# Load the weights of the best-performing model
model.load_weights('best_model.h5')
from PIL import Image
import numpy as np
from matplotlib import pyplot as plt
img = Image.open("D:/jupyter notebook/45-data/Others/NM15_02_11.jpg")  # pick the image you want to predict here
num_img = np.asarray(img)
image = tf.image.resize(num_img, [img_height, img_width])
img_array = tf.expand_dims(image, 0) 

predictions = model.predict(img_array)
1/1 [==============================] - 0s 35ms/step
print("Predicted class:", class_names[np.argmax(predictions)])
Predicted class: Others
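The model returns one logit per class, so `np.argmax` picks the index of the largest one, and looking that index up in `class_names` yields the label. A hypothetical example with made-up logits:

```python
import numpy as np

class_names = ['Monkeypox', 'Others']
predictions = np.array([[-1.3, 2.7]])   # hypothetical logits for one image

pred_index = np.argmax(predictions)     # argmax flattens the (1, 2) array, giving 1 here
print(class_names[pred_index])          # Others
```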
