Viewing the Output of Each Layer of a Neural Network in Keras

@author:Heisenberg

This post shows how to use K.function() under the Keras framework to inspect the output of each layer of a neural network.

We start with the main code; a simple neural network serves as the running example.

import numpy as np
import keras.backend as K
from keras import Model
from keras.layers import *

class Normal(Layer):
    def __init__(self, **kwargs):
        super(Normal, self).__init__(**kwargs)  # required
    def build(self, input_shape):
        # add a trainable parameter
        self.kernel = self.add_weight(name='kernel', shape=(1,),
                                      initializer='zeros', trainable=True)
        self.built = True
    def call(self, x):
        # the forward computation, analogous to a Lambda layer's function;
        # keep the intermediate tensor as an attribute so it can be
        # inspected later with K.function
        self.x_normalized = K.l2_normalize(x, -1)
        return self.x_normalized * self.kernel

x_in = Input(shape=(784,))
x = x_in

x = Dense(512, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.2)(x)
normal = Normal()
x = normal(x)
x = Dense(10, activation='softmax')(x)

model = Model(x_in, x)

1. Viewing the output of a single layer

In the example above, model.layers lists all the layers of the model:

In [2]: model.layers
Out[2]:
[<keras.engine.input_layer.InputLayer at 0x1f47b0546d8>,
 <keras.layers.core.Dense at 0x1f412d024a8>,
 <keras.layers.core.Dropout at 0x1f47cdde128>,
 <keras.layers.core.Dense at 0x1f412d02780>,
 <keras.layers.core.Dropout at 0x1f412d027b8>,
 <__main__.Normal at 0x1f412d02c18>,
 <keras.layers.core.Dense at 0x1f412d02cc0>]
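
The names Keras assigned to each layer (used in section 3 with model.get_layer) can be listed with model.summary(), which prints each layer's name, output shape, and parameter count:

model.summary()  # name, output shape, and parameter count per layer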

K.function is used much like defining a new model: it directly wraps the backend's input and output operations. When defining it you specify the input and output tensors, and the outputs must be computationally connected to the inputs; the input is not required to be a layer's output, though, and arbitrary tensors are allowed. The returned fn is a callable object, convenient to invoke.

If we want to inspect the normalized tensor produced inside the Normal layer, we can define:

batch_size = 1
x_test = np.ones((batch_size,) + K.int_shape(x_in)[1:])
fn = K.function([x_in], [normal.x_normalized])
v1 = fn([x_test])
np.array(v1).shape  # (1, 1, 256)
#Out[7]:
#array([0.08498713, 0.05334507, 0.        , 0.        , 0.07348377,
#       0.01111731, 0.06810687, 0.        , 0.        , 0.        ,
#       0.        , 0.1667348 , 0.        , 0.        , 0.07852139,
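
For comparison, an ordinary layer output (unlike x_normalized, which is created inside call() and belongs to no layer output) can also be read out by wrapping it in a new Model. A minimal sketch, using the second Dense layer of the model above:

# works for layer outputs only; tensors created inside call() are not
# reachable this way, which is where K.function is needed
sub_model = Model(model.input, model.layers[3].output)
v2 = sub_model.predict(x_test)
print(v2.shape)  # (1, 256)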

2. Viewing the outputs of all layers

If we want to peel the network open and examine every single layer, we can use model.layers[index].output directly:

outputs = [layer.output for layer in model.layers]  # all layer outputs
# pass K.learning_phase() as an extra input so the Dropout layers know
# whether they run in training (1.) or test (0.) mode
functors = [K.function([model.input, K.learning_phase()], [out]) for out in outputs]

x_test = np.ones((batch_size,) + K.int_shape(x_in)[1:])
layer_outs = [func([x_test, 1.]) for func in functors]
for i in range(len(layer_outs)):
    print(np.array(layer_outs[i]).shape)
#Out[10]:
#(1, 1, 784)
#(1, 1, 512)
#(1, 1, 512)
#(1, 1, 256)
#(1, 1, 256)
#(1, 1, 256)
#(1, 1, 10)
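
To make the printout easier to read, the shapes can be paired with the layer names; a small sketch:

# pair each layer with its captured output
for layer, out in zip(model.layers, layer_outs):
    print(layer.name, np.array(out).shape)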

3. Selecting a specific layer

Pick a layer by its name, as listed in model.layers:

# the feed-forward normalization layer of a Transformer block
# (assumes a BERT-style model where bert_layers is the number of layers)
output_layer = 'Transformer-%s-FeedForward-Norm' % (bert_layers - 1)
output = model.get_layer(output_layer).output

# the Embedding layer
label_in = Input(shape=(1,))  # the label input
input = model.get_layer('Embedding-Token').output
output = model.output
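
Applied to the simple model from this post, the same pattern looks like the sketch below. The name 'normal_1' is an assumption: it is the auto-generated name Keras typically gives the first Normal instance; check model.summary() for the actual name.

# 'normal_1' is the assumed auto-generated name of the Normal layer
inter = model.get_layer('normal_1').output
fn2 = K.function([model.input, K.learning_phase()], [inter])
print(np.array(fn2([x_test, 0.])).shape)  # (1, 1, 256)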

References:

Keras 获取中间层/变量的输出

Keras, How to get the output of each layer?

“让Keras更酷一些!”:中间变量、权重滑动和安全生成器
