Does transfer learning actually work???

Hello everyone, your beloved Xiaoming is back again.

Quick context: this post continues the previous one, now that the earlier model-path problem has been fixed.

But does transfer learning actually work? I have my doubts. See the figure:

[Image 1]

This is some next-level nonsense: the accuracy keeps getting smaller and smaller. At this rate it will end up at zero.

[Image 2]

I've truly seen it all now. Is something wrong here?? ResNet is turning out to be a real pain.

I gave up on waiting for all 100 epochs; training like this, even 20% accuracy is a major hurdle.

Epoch 00049: val_acc did not improve from 0.21455
Epoch 50/100
15285/15285 [==============================] - 62s 4ms/step - loss: 2.7266 - acc: 0.1080 - val_loss: 2.7066 - val_acc: 0.1102

Epoch 00050: val_acc did not improve from 0.21455
Epoch 51/100
15285/15285 [==============================] - 62s 4ms/step - loss: 2.7118 - acc: 0.1129 - val_loss: 2.7510 - val_acc: 0.1054

Epoch 00051: val_acc did not improve from 0.21455
Epoch 52/100
15285/15285 [==============================] - 62s 4ms/step - loss: 2.7278 - acc: 0.1076 - val_loss: 2.7958 - val_acc: 0.1086

Epoch 00052: ReduceLROnPlateau reducing learning rate to 9.765625463842298e-07.

Epoch 00052: val_acc did not improve from 0.21455
Epoch 53/100
15285/15285 [==============================] - 62s 4ms/step - loss: 2.7284 - acc: 0.1100 - val_loss: 2.7003 - val_acc: 0.1107

Epoch 00053: val_acc did not improve from 0.21455
Epoch 54/100
15285/15285 [==============================] - 62s 4ms/step - loss: 2.6883 - acc: 0.1158 - val_loss: 2.7425 - val_acc: 0.1156

Epoch 00054: val_acc did not improve from 0.21455
Epoch 55/100
15285/15285 [==============================] - 62s 4ms/step - loss: 2.7303 - acc: 0.1102 - val_loss: 2.7388 - val_acc: 0.1109

Epoch 00055: val_acc did not improve from 0.21455
Epoch 56/100
15285/15285 [==============================] - 62s 4ms/step - loss: 2.7066 - acc: 0.1096 - val_loss: 2.7128 - val_acc: 0.1143

Epoch 00056: val_acc did not improve from 0.21455
Epoch 57/100
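The log above shows two callbacks at work: a checkpoint that tracks `val_acc` ("did not improve from 0.21455") and a `ReduceLROnPlateau` step. A minimal sketch of how such callbacks are typically wired up in Keras follows; the factor of 0.5 is consistent with the 9.77e-7 learning rate in the log (1e-3 halved ten times), but the patience and filename are guesses, not values from the actual run:

```python
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

# Save the model only when validation accuracy improves, which produces
# the "val_acc did not improve from ..." messages seen in the log.
checkpoint = ModelCheckpoint(
    "best_model.h5",        # filename is a placeholder
    monitor="val_acc",      # newer Keras versions call this "val_accuracy"
    save_best_only=True,
    verbose=1,
)

# Halve the learning rate when val_acc plateaus.  patience=3 is a guess;
# the factor 0.5 matches the log's lr of ~9.77e-7 (1e-3 / 2^10).
reduce_lr = ReduceLROnPlateau(
    monitor="val_acc",
    factor=0.5,
    patience=3,
    verbose=1,
)
```

Both objects would then be passed to `model.fit(..., callbacks=[checkpoint, reduce_lr])`.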

I don't know where I went wrong; nothing is going smoothly. Clearly the universe is jealous of me. I'm honestly out of ideas here.

Yet another sloppily written blog post that I followed, and this is the mess it produced. Time to look at other references and make some changes.

I asked a colleague and compared our models: my earlier parameter setup was wrong (almost certainly). My model printed a huge number of trainable parameters, as shown below:

[Image 3]

My colleague's looks like this:

[Image 4]
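The difference in trainable-parameter counts comes down to whether the backbone is frozen. A minimal sketch of the standard frozen-backbone setup is below; everything concrete in it (class count, optimizer, input size) is an assumption for illustration, and `weights=None` only keeps the sketch download-free. Real transfer learning needs `weights="imagenet"`, otherwise there is nothing to transfer.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

# "notop" backbone; weights=None only avoids a download in this sketch.
base = ResNet50(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the entire backbone

num_classes = 15  # placeholder; the post never states the class count

# New classification head on top of the frozen backbone.
inputs = layers.Input(shape=(224, 224, 3))
x = base(inputs, training=False)  # keep BatchNorm in inference mode
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# With the backbone frozen, only the Dense head's kernel and bias train.
print(len(base.trainable_weights), len(model.trainable_weights))  # → 0 2
```

Unfreezing later for fine-tuning is just `base.trainable = True` followed by re-compiling with a small learning rate.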

Let's see; in the near future this should be fine, right? I'll give it a try.

Not looking promising, though:

Epoch 00083: val_acc did not improve from 0.46285
Epoch 84/100
15285/15285 [==============================] - 21s 1ms/step - loss: 1.1178 - acc: 0.6306 - val_loss: 2.0733 - val_acc: 0.4456

Epoch 00084: val_acc did not improve from 0.46285
Epoch 85/100
15285/15285 [==============================] - 22s 1ms/step - loss: 1.1213 - acc: 0.6324 - val_loss: 2.0731 - val_acc: 0.4464

Epoch 00085: val_acc did not improve from 0.46285
Epoch 86/100
15285/15285 [==============================] - 21s 1ms/step - loss: 1.1162 - acc: 0.6311 - val_loss: 2.0735 - val_acc: 0.4464

Epoch 00086: val_acc did not improve from 0.46285
Epoch 87/100
15285/15285 [==============================] - 21s 1ms/step - loss: 1.1106 - acc: 0.6353 - val_loss: 2.0722 - val_acc: 0.4464

Epoch 00087: val_acc did not improve from 0.46285
Epoch 88/100
15285/15285 [==============================] - 21s 1ms/step - loss: 1.1100 - acc: 0.6362 - val_loss: 2.0726 - val_acc: 0.4461

Epoch 00088: val_acc did not improve from 0.46285
Epoch 89/100
15285/15285 [==============================] - 22s 1ms/step - loss: 1.1184 - acc: 0.6296 - val_loss: 2.0725 - val_acc: 0.4461

It can't even reach 50%. Transfer learning still isn't working for me. It doesn't even beat the 60% I got earlier from my own hacked-together model (both figures are validation accuracy), and that model had far fewer parameters than this one.

Next I kept some of the parameters frozen and let the rest train (half and half). The result is below. Keep smiling; it's still garbage:

[Image 5]
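The half-and-half split can be sketched like this. Splitting by layer index is an assumption, since the post doesn't say where the cut was made:

```python
from tensorflow.keras.applications import ResNet50

base = ResNet50(weights=None, include_top=False)

# Freeze the first half of the layers by index, train the second half.
cut = len(base.layers) // 2
for layer in base.layers[:cut]:
    layer.trainable = False
for layer in base.layers[cut:]:
    layer.trainable = True

frozen = sum(1 for layer in base.layers if not layer.trainable)
print(frozen, len(base.layers) - frozen)
```

One caveat worth knowing: in older Keras versions, frozen BatchNormalization layers still update their moving statistics in training mode, which can quietly hurt accuracy in exactly this kind of partial-freeze setup.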

Printing the layer names of the new model built on the notop transfer-learning base:

input_1
conv1_pad
conv1
bn_conv1
activation_1
pool1_pad
max_pooling2d_1
res2a_branch2a
bn2a_branch2a
activation_2
res2a_branch2b
bn2a_branch2b
activation_3
res2a_branch2c
res2a_branch1
bn2a_branch2c
bn2a_branch1
add_1
activation_4
res2b_branch2a
bn2b_branch2a
activation_5
res2b_branch2b
bn2b_branch2b
activation_6
res2b_branch2c
bn2b_branch2c
add_2
activation_7
res2c_branch2a
bn2c_branch2a
activation_8
res2c_branch2b
bn2c_branch2b
activation_9
res2c_branch2c
bn2c_branch2c
add_3
activation_10
res3a_branch2a
bn3a_branch2a
activation_11
res3a_branch2b
bn3a_branch2b
activation_12
res3a_branch2c
res3a_branch1
bn3a_branch2c
bn3a_branch1
add_4
activation_13
res3b_branch2a
bn3b_branch2a
activation_14
res3b_branch2b
bn3b_branch2b
activation_15
res3b_branch2c
bn3b_branch2c
add_5
activation_16
res3c_branch2a
bn3c_branch2a
activation_17
res3c_branch2b
bn3c_branch2b
activation_18
res3c_branch2c
bn3c_branch2c
add_6
activation_19
res3d_branch2a
bn3d_branch2a
activation_20
res3d_branch2b
bn3d_branch2b
activation_21
res3d_branch2c
bn3d_branch2c
add_7
activation_22
res4a_branch2a
bn4a_branch2a
activation_23
res4a_branch2b
bn4a_branch2b
activation_24
res4a_branch2c
res4a_branch1
bn4a_branch2c
bn4a_branch1
add_8
activation_25
res4b_branch2a
bn4b_branch2a
activation_26
res4b_branch2b
bn4b_branch2b
activation_27
res4b_branch2c
bn4b_branch2c
add_9
activation_28
res4c_branch2a
bn4c_branch2a
activation_29
res4c_branch2b
bn4c_branch2b
activation_30
res4c_branch2c
bn4c_branch2c
add_10
activation_31
res4d_branch2a
bn4d_branch2a
activation_32
res4d_branch2b
bn4d_branch2b
activation_33
res4d_branch2c
bn4d_branch2c
add_11
activation_34
res4e_branch2a
bn4e_branch2a
activation_35
res4e_branch2b
bn4e_branch2b
activation_36
res4e_branch2c
bn4e_branch2c
add_12
activation_37
res4f_branch2a
bn4f_branch2a
activation_38
res4f_branch2b
bn4f_branch2b
activation_39
res4f_branch2c
bn4f_branch2c
add_13
activation_40
res5a_branch2a
bn5a_branch2a
activation_41
res5a_branch2b
bn5a_branch2b
activation_42
res5a_branch2c
res5a_branch1
bn5a_branch2c
bn5a_branch1
add_14
activation_43
res5b_branch2a
bn5b_branch2a
activation_44
res5b_branch2b
bn5b_branch2b
activation_45
res5b_branch2c
bn5b_branch2c
add_15
activation_46
res5c_branch2a
bn5c_branch2a
activation_47
res5c_branch2b
bn5c_branch2b
activation_48
res5c_branch2c
bn5c_branch2c
add_16
activation_49
global_average_pooling2d_1
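The listing above can be reproduced with a simple loop over the model's layers. Note that newer Keras versions name the ResNet blocks differently (e.g. `conv2_block1_1_conv` instead of `res2a_branch2a`):

```python
from tensorflow.keras.applications import ResNet50

base = ResNet50(weights=None, include_top=False)

# One layer name per line, like the listing above.
for layer in base.layers:
    print(layer.name)
```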

But I want to know its actual name, the one shown in parentheses:

[Image 6]

But I can't seem to find it; get_config doesn't help either.

[Image 7]

No luck. My plan was to make only the convolutional layers trainable.
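For what it's worth, the name in parentheses in `model.summary()` is the layer's class name, which is reachable as `type(layer).__name__` rather than through `get_config`. That is also enough to implement "train only the conv layers" as a type check (a sketch; exact layer classes can differ across Keras versions):

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Conv2D

base = ResNet50(weights=None, include_top=False)

# The parenthesised name from model.summary() is just the class name.
for layer in base.layers[:5]:
    print(layer.name, type(layer).__name__)

# "Only train the conv layers" then becomes a type check:
for layer in base.layers:
    layer.trainable = isinstance(layer, Conv2D)
```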

 

Updates to follow, stay tuned. For the next installment, see: https://blog.csdn.net/SPESEG/article/details/103137352

 

For related questions, join the QQ group for discussion (there is no WeChat group).

QQ group: 868373192

Speech deep learning group

 
