Keras Machine Learning Quick-Start Tutorial, Part 2: Training a Model to Recognize People, Cats, Dogs, and More

Fine-Tuning Pre-Trained Models in Keras

Why Fine-Tune a Model?

Fine-tuning adapts a pre-trained model so that its parameters fit a new task. Training a new model from scratch requires a large amount of data so that the network can learn all of its parameters. Here we instead use a pre-trained model whose parameters have already been learned and whose weights are available.

For example, if we want to train our own model for a classification problem but only have a small amount of data, we can solve this with transfer learning plus fine-tuning.

With a pre-trained network and its weights, we do not need to train the entire network. We only need to train the final layers that solve our task, and this is what we call fine-tuning.

Preparing the Network Model

We can load any of the pre-trained models that ship with the Keras library:

  • VGG16
  • InceptionV3
  • ResNet
  • MobileNet
  • Xception
  • InceptionResNetV2

We will use the VGG16 network with ImageNet weights as our base model, and fine-tune it on images from the Kaggle Natural Images dataset to classify 8 different categories.
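Loading one of these models takes a single call to `keras.applications`. A minimal sketch (it assumes Keras is installed and that the ImageNet weights can be downloaded on first use):

```python
from keras.applications.vgg16 import VGG16

# Load VGG16 with ImageNet weights; include_top=False drops the original
# 1000-class classifier so we can attach our own 8-class head later.
base_model = VGG16(weights='imagenet',
                   include_top=False,
                   input_shape=(224, 224, 3))
print(base_model.output_shape)  # (None, 7, 7, 512)
```

With `include_top=False` the network ends at the last pooling layer, which is exactly the `block5_pool` output shown in the summary below.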

VGG16 Model Architecture

(image: VGG16 architecture diagram)


For training, we will use pictures of natural images from 8 different classes: airplane, car, cat, dog, flower, fruit, motorbike, and person.

Dataset download: https://itbooks.pipipan.com/fs/18113597-329046186
Code: https://github.com/china-testing/python-api-tesing/blob/master/practices/keras/load_kaggle_natural.py
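A common way to load such a directory-per-class dataset is Keras' `ImageDataGenerator`. The sketch below generates a tiny two-class stand-in directory so it runs anywhere; with the real Kaggle data you would point `flow_from_directory` at the dataset root instead, and it would report "Found 6899 images belonging to 8 classes."

```python
import os
import numpy as np
from PIL import Image
from keras.preprocessing.image import ImageDataGenerator

# Build a tiny stand-in for the dataset layout: one sub-folder per class.
root = 'natural_images_demo'
for cls in ('cat', 'dog'):
    os.makedirs(os.path.join(root, cls), exist_ok=True)
    arr = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
    Image.fromarray(arr).save(os.path.join(root, cls, 'sample.png'))

datagen = ImageDataGenerator(rescale=1. / 255)   # scale pixels to [0, 1]
train_generator = datagen.flow_from_directory(
    root,
    target_size=(224, 224),    # VGG16 input size
    batch_size=32,
    class_mode='categorical')  # one-hot labels, one per class folder
print(train_generator.num_classes, train_generator.samples)
```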

(Figure_1.png: sample images from the dataset)

Next we create our network model from VGG16 pre-trained on ImageNet. We freeze these layers so that they are not trainable, which helps reduce computation time.
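The frozen base plus a new classification head can be sketched as follows. The layer sizes match the two summaries printed below; the `adam` optimizer is this sketch's assumption, not something stated in the tutorial:

```python
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers import Flatten, Dense

# VGG16 convolutional base with ImageNet weights, without its classifier.
base_model = VGG16(weights='imagenet', include_top=False,
                   input_shape=(224, 224, 3))
print(base_model.summary())

# Freeze every layer of the base so its 14.7M weights stay fixed.
for layer in base_model.layers:
    layer.trainable = False

# New classification head for the 8 natural-image classes.
model = Sequential([
    base_model,
    Flatten(),
    Dense(1024, activation='relu'),
    Dense(1024, activation='relu'),
    Dense(8, activation='softmax'),
])
# Optimizer choice is an assumption of this sketch.
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
print(model.summary())
```

Note that after freezing, only the ~26.7M head parameters remain trainable, which is what the second summary reports.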

Using TensorFlow backend.
Found 6899 images belonging to 8 classes.
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
None
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
vgg16 (Model)                (None, 7, 7, 512)         14714688  
_________________________________________________________________
flatten_1 (Flatten)          (None, 25088)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 1024)              25691136  
_________________________________________________________________
dense_2 (Dense)              (None, 1024)              1049600   
_________________________________________________________________
dense_3 (Dense)              (None, 8)                 8200      
=================================================================
Total params: 41,463,624
Trainable params: 26,748,936
Non-trainable params: 14,714,688
_________________________________________________________________
None

References

  • https://www.pyimagesearch.com/2018/09/10/keras-tutorial-how-to-get-started-with-keras-deep-learning-and-python/
  • https://elitedatascience.com/keras-tutorial-deep-learning-in-python
  • https://www.learnopencv.com/keras-tutorial-using-pre-trained-imagenet-models/#why-pretrained-models
  • https://www.guru99.com/keras-tutorial.html

Training

Code: https://github.com/china-testing/python-api-tesing/blob/master/practices/keras/train_kaggle_natural.py
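The linked script does the actual training; its essential step looks like the sketch below. Random tensors replace the data generator so the snippet runs on its own, and `weights=None` avoids the ImageNet download here (the tutorial itself uses `weights='imagenet'`):

```python
import numpy as np
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers import Flatten, Dense

# Rebuild the frozen-base model; weights=None keeps this sketch offline.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False
model = Sequential([base, Flatten(),
                    Dense(1024, activation='relu'),
                    Dense(1024, activation='relu'),
                    Dense(8, activation='softmax')])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Four random images and one-hot labels stand in for the generator batches.
x = np.random.rand(4, 224, 224, 3).astype('float32')
y = np.eye(8, dtype='float32')[np.random.randint(0, 8, 4)]
history = model.fit(x, y, epochs=1, batch_size=2, verbose=0)
print(history.history['loss'])
```

With the real generator you would train for several epochs and watch the loss curve fall, which is what the plot below shows.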

(image: training loss and accuracy curves)

Our loss drops significantly and accuracy reaches almost 100%. To test the model, we pick random images from the internet and place them in a test folder covering the different classes.

Testing

Code: https://github.com/china-testing/python-api-tesing/blob/master/practices/keras/test_kaggle_natural.py
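The core of the test step is a single `predict` call. In the self-contained sketch below, a tiny stand-in model and a random array replace the fine-tuned network and a real photo; in practice you would restore the trained model (e.g. with `keras.models.load_model`) and load an image from the test folder:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Flatten, Dense

# Stand-in for the fine-tuned network.
model = Sequential([Flatten(input_shape=(224, 224, 3)),
                    Dense(8, activation='softmax')])

# Stand-in for a real test photo loaded and scaled to [0, 1].
x = np.random.rand(1, 224, 224, 3).astype('float32')

preds = model.predict(x)               # shape (1, 8): softmax scores
class_idx = int(np.argmax(preds[0]))   # index of the most likely class
print('predicted class index:', class_idx)
```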

Test results:

(Figure_1.png: test results)
