Personal homepage: 研学社's blog
Welcome to this blog ❤️❤️
About the blogger: the posts aim to be rigorous in reasoning and clear in logic, for the reader's convenience.
⛳️ Motto: in a journey of a hundred li, ninety is only the halfway point.
The contents of this article are as follows:
Contents
1 Overview
2 Results
3 References
4 Python code and data
CNN-LSTM networks are used to extract deep features, which are fused by concatenation and fed into an attention mechanism that outputs the final load values.
input.py reads the raw data; the input is measurement data from a campus integrated energy system (a minimal loading sketch follows).
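input.py itself is not reproduced in this excerpt. The sketch below shows one minimal way such a loading step could look, assuming the campus data is stored in a single hourly CSV file; the file name data.csv, the column order (electricity, cooling, heating), the min-max scaling, and the 80/20 chronological split are illustrative assumptions, not the original implementation.

import numpy as np
import pandas as pd

def load_dataset(path="data.csv", time_steps=24):
    # Read the three load series and scale each column to [0, 1],
    # consistent with the sigmoid outputs used in the models below.
    df = pd.read_csv(path)  # assumed columns: elec, cool, heat
    data = df.values.astype("float32")
    data = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))

    # Sliding windows: 24 hours of history predict the next hour.
    # X has shape (samples, 24, 3); y has shape (samples, 3).
    X, y = [], []
    for i in range(len(data) - time_steps):
        X.append(data[i:i + time_steps])
        y.append(data[i + time_steps])
    X, y = np.array(X), np.array(y)

    # Chronological split without shuffling, matching shuffle=False during training.
    split = int(len(X) * 0.8)
    return X[:split], y[:split], X[split:], y[split:]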
attention_model.py takes the cooling, heating, and electrical loads as input, extracts deep features, adds an attention mechanism, and reports the results; the model diagram is saved as attention_model.png.
multi_attention_model.py uses a separate CNN-LSTM branch to extract deep features for each of the cooling, heating, and electrical loads, fuses them by concatenation, applies the attention mechanism, and outputs the final load values; the model diagram is saved as multi_attention_model.png, and bar_1.png is a bar chart of the attention weights (a sketch of this structure is given below).
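multi_attention_model.py is not shown in this excerpt either; the following is a rough sketch of the fusion structure described above, under stated assumptions: each branch mirrors cool_model() from the excerpt below, a Permute/Dense(softmax)/Multiply block stands in for the attention mechanism, a standard LSTM replaces CuDNNLSTM for portability, and a three-unit sigmoid head jointly predicts the electrical, cooling, and heating loads. Layer names such as attention_weights are illustrative only.

from keras.layers import (Input, Conv1D, Dropout, LSTM, Dense, Flatten,
                          Concatenate, Permute, Multiply)
from keras.models import Model

def branch(name, time_steps=24, lstm_units=48):
    # One CNN-LSTM feature extractor per load type (elec / cool / heat).
    inp = Input(shape=(time_steps, 1), name=name + "_input")
    x = Conv1D(filters=64, kernel_size=1, activation="relu", name=name + "_cnn")(inp)
    x = Dropout(0.3)(x)
    x = LSTM(lstm_units, return_sequences=True, name=name + "_feature")(x)
    return inp, x

def multi_attention_model(time_steps=24, lstm_units=48):
    (ei, ef), (ci, cf), (hi, hf) = branch("elec"), branch("cool"), branch("heat")
    merged = Concatenate(axis=-1, name="feature_fusion")([ef, cf, hf])  # (24, 3 * lstm_units)

    # Attention over the time axis: softmax scores reweight each time step of the fused features.
    score = Permute((2, 1))(merged)
    score = Dense(time_steps, activation="softmax")(score)
    score = Permute((2, 1), name="attention_weights")(score)
    attended = Multiply(name="attention_mul")([merged, score])

    out = Flatten()(attended)
    out = Dense(3, activation="sigmoid", name="load_output")(out)  # elec, cool, heat
    model = Model([ei, ci, hi], out)
    model.compile(loss="mae", optimizer="adam")
    return model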
Code excerpt:
# Imports assumed by this excerpt (standalone Keras with a GPU build, since CuDNNLSTM is used);
# X_train, y_train and X_test are assumed to be provided by input.py.
from keras.layers import Input, Conv1D, Dropout, CuDNNLSTM, Flatten, Dense
from keras.models import Model, load_model
import numpy as np

# # Train the electrical-load model on its own and extract the deep electrical-load features
# model, middle = elec_model()
# model = load_model("./elec.h5")
# model.summary()
# middle = load_model("./elec_feature.h5")
# elec_feature = middle.predict(X_test[0, :, 0].reshape((1, 24, 1)))
# print("------------------------")
# print("elec_feature.shape: ", elec_feature.shape)  # (712, 24, 48)
# print(elec_feature)
#
# # weight_Dense_1 = model.get_layer('elec_feature').get_weights()
# # weight_Dense_1 = np.array(weight_Dense_1)
# # print("weight_Dense_1[0].shape: ", weight_Dense_1[0].shape)
# # print("weight_Dense_1: ", weight_Dense_1)
# # # print("bias_Dense_1: ", bias_Dense_1)


def cool_model():
    # CNN-LSTM feature extractor for the cooling load: 24-hour windows of a single variable.
    time_steps = 24
    input_dim = 1
    lstm_units = 48
    epoch = 200
    batch_size = 48

    input_tensor = Input(shape=(time_steps, input_dim), name='cool_input')
    cnn_out = Conv1D(filters=64, kernel_size=1, activation='relu', name="cool_cnn")(input_tensor)
    hidden_1 = Dropout(0.3, name='cool_dropout1')(cnn_out)
    lstm_out = CuDNNLSTM(lstm_units, return_sequences=True, name='cool_feature')(hidden_1)
    hidden_2 = Dropout(0.3, name='cool_dropout2')(lstm_out)
    hidden_3 = Flatten(name='cool_flatten')(hidden_2)
    output = Dense(1, activation='sigmoid', name='cool_dense')(hidden_3)

    model = Model(input_tensor, output)
    model.summary()
    model.compile(loss='mae', optimizer='adam')
    # plot_model(model, to_file='model.png')

    # Column 1 of X_train / y_train is the cooling load; 2920 training windows are hard-coded here.
    model.fit(X_train[:, :, 1].reshape((2920, 24, 1)), y_train[:, 1],
              epochs=epoch, batch_size=batch_size, shuffle=False)

    # Second model that exposes the LSTM feature map ('cool_feature') for later fusion.
    middle = Model(input_tensor, model.get_layer('cool_feature').output)
    model.save("./cool.h5")
    middle.save("./cool_feature.h5")
    return model, middle
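To connect the pieces, here is one hedged way the hypothetical load_dataset() and multi_attention_model() sketches above could be used to train the fusion model and plot its average attention weights as a bar chart in the spirit of bar_1.png; the assumed column order (electricity, cooling, heating) and the attention_weights layer name come from those sketches, not from the author's code.

import matplotlib.pyplot as plt
from keras.models import Model

X_train, y_train, X_test, y_test = load_dataset("data.csv")

fusion = multi_attention_model()
fusion.fit([X_train[:, :, 0:1], X_train[:, :, 1:2], X_train[:, :, 2:3]], y_train,
           epochs=200, batch_size=48, shuffle=False)

# Read out the attention scores on the test set and average them over samples and channels.
att = Model(fusion.input, fusion.get_layer("attention_weights").output)
weights = att.predict([X_test[:, :, 0:1], X_test[:, :, 1:2], X_test[:, :, 2:3]])
mean_w = weights.mean(axis=(0, 2))  # one weight per time step

plt.bar(range(len(mean_w)), mean_w)
plt.xlabel("time step")
plt.ylabel("mean attention weight")
plt.savefig("bar_1.png")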
Some of the theoretical material is sourced from the internet; in case of infringement, please get in touch and it will be removed.