LRP, short for Layer-wise Relevance Propagation, is another explainability algorithm. The original LRP was mainly applied in computer vision; for NLP, where tokens are mapped to distributed word vectors (e.g., via Word2Vec) and documents are encoded with an RNN, there are few explanation methods. LIME can explain text data, but it targets tf-idf or bag-of-words representations, where each dimension of the vector corresponds to one word.
The reference here is the paper "Explaining Recurrent Neural Network Predictions in Sentiment Analysis" [1], in which the authors extend LRP to LSTMs. The task is 5-class sentiment classification: the input is a word sequence, the output is the predicted class, and LRP identifies which words in the sequence matter most for that prediction.
The code is open-sourced on GitHub; the authors implemented the LSTM in numpy and explained it with LRP. Nobody has yet ported this to PyTorch or Keras, so let us first study the internals.
For background on LSTMs, see the earlier post "LSTM详解"; to stay in sync with the authors' code, a few points are repeated here. The overall LSTM structure is shown below. [Figure: LSTM cell diagram]
At each time step the model outputs $h_t$; the transition from time $t-1$ to $t$ can be written as $C_t, h_t = \mathrm{LSTMCell}(C_{t-1}, h_{t-1}, x_t)$, where $x_t$ is the input vector and $W_f, W_i, W_o, W_g, b_f, b_i, b_o, b_g$ are model parameters (there are several common parameterizations; the authors' code differs slightly, as we will see below).
To summarize the formulas from the figure above (skip ahead if this feels redundant):

$f_t = \sigma(W_f \cdot [h_{t-1}; x_t] + b_f)$
$i_t = \sigma(W_i \cdot [h_{t-1}; x_t] + b_i)$
$o_t = \sigma(W_o \cdot [h_{t-1}; x_t] + b_o)$
$g_t = \tanh(W_g \cdot [h_{t-1}; x_t] + b_g)$
$C_t = f_t \odot C_{t-1} + i_t \odot g_t$
$h_t = o_t \odot \tanh(C_t)$
For the bidirectional LSTM, with an input sequence of length $T$, the final output $c$ is

$c = W_{left} \cdot h_{left}^T + W_{right} \cdot h_{right}^T$

where $h^T$ denotes the last hidden state of each direction.
```python
import pickle
import numpy as np


class LSTM_bidi:

    def __init__(self, model_path='./model/'):
        """
        Load trained model from file.
        """
        # vocabulary
        f_voc = open(model_path + "vocab", 'rb')
        self.voc = pickle.load(f_voc)
        f_voc.close()
        # word embeddings
        self.E = np.load(model_path + 'embeddings.npy', mmap_mode='r')  # shape V*e
        # model weights
        f_model = open(model_path + 'model', 'rb')
        model = pickle.load(f_model)
        f_model.close()
        # LSTM left encoder
        self.Wxh_Left = model["Wxh_Left"]    # shape 4d*e
        self.bxh_Left = model["bxh_Left"]    # shape 4d
        self.Whh_Left = model["Whh_Left"]    # shape 4d*d
        self.bhh_Left = model["bhh_Left"]    # shape 4d
        # LSTM right encoder
        self.Wxh_Right = model["Wxh_Right"]
        self.bxh_Right = model["bxh_Right"]
        self.Whh_Right = model["Whh_Right"]
        self.bhh_Right = model["bhh_Right"]
        # linear output layer
        self.Why_Left = model["Why_Left"]    # shape C*d
        self.Why_Right = model["Why_Right"]  # shape C*d
```
The authors implement a bidirectional LSTM, one left encoder and one right encoder; we only follow the left one. Note that the code never performs an explicit concat $[h_{t-1}; x_t]$; instead the parameters are reorganized: the earlier $W_f, W_i, W_g, W_o$ are regrouped into $W_{xh}$ and $W_{hh}$, one multiplying $x_t$ and the other multiplying $h_{t-1}$. $W_{xh}$ stacks the parts of $W_f, W_i, W_g, W_o$ that act on $x_t$, while $W_{hh}$ stacks the parts that act on $h_{t-1}$, as sketched below.
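A minimal sketch of this reorganization (the per-gate matrix names below are hypothetical, chosen only for illustration; the stacking order i, g, f, o matches the gate indices the forward pass assumes):

```python
import numpy as np

d, e = 60, 60  # hidden size and embedding size, arbitrary here
rng = np.random.default_rng(0)
# Hypothetical per-gate parameters: (d, e) blocks act on x_t, (d, d) blocks on h_{t-1}
W_i_x, W_g_x, W_f_x, W_o_x = (rng.normal(size=(d, e)) for _ in range(4))
W_i_h, W_g_h, W_f_h, W_o_h = (rng.normal(size=(d, d)) for _ in range(4))

# Stack in the gate order the forward pass assumes: i, g, f, o
Wxh = np.vstack([W_i_x, W_g_x, W_f_x, W_o_x])  # shape (4d, e), multiplies x_t
Whh = np.vstack([W_i_h, W_g_h, W_f_h, W_o_h])  # shape (4d, d), multiplies h_{t-1}
assert Wxh.shape == (4 * d, e) and Whh.shape == (4 * d, d)
```

With this layout, the forward pass below computes all four gates with a single matrix product per input.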
```python
    def forward(self):
        """
        Standard forward pass.
        Compute the hidden layer values (assuming input x/x_rev was previously set)
        """
        print(f"x.shape:{self.x.shape}")
        T = len(self.w)
        d = int(self.Wxh_Left.shape[0]/4)
        # gate indices (assuming the gate ordering in the LSTM weights is i,g,f,o):
        idx = np.hstack((np.arange(0,d), np.arange(2*d,4*d))).astype(int)  # indices of gates i,f,o together
        idx_i, idx_g, idx_f, idx_o = np.arange(0,d), np.arange(d,2*d), np.arange(2*d,3*d), np.arange(3*d,4*d)  # indices of gates i,g,f,o separately
        # initialize
        self.gates_xh_Left = np.zeros((T, 4*d))
        self.gates_hh_Left = np.zeros((T, 4*d))
        self.gates_pre_Left = np.zeros((T, 4*d))  # gates pre-activation
        self.gates_Left = np.zeros((T, 4*d))      # gates activation
        self.gates_xh_Right = np.zeros((T, 4*d))
        self.gates_hh_Right = np.zeros((T, 4*d))
        self.gates_pre_Right = np.zeros((T, 4*d))
        self.gates_Right = np.zeros((T, 4*d))
        for t in range(T):
            self.gates_xh_Left[t] = np.dot(self.Wxh_Left, self.x[t])
            self.gates_hh_Left[t] = np.dot(self.Whh_Left, self.h_Left[t-1])
            self.gates_pre_Left[t] = self.gates_xh_Left[t] + self.gates_hh_Left[t] + self.bxh_Left + self.bhh_Left
            self.gates_Left[t,idx] = 1.0/(1.0 + np.exp(- self.gates_pre_Left[t,idx]))  # i_t, f_t, o_t = sigmoid of pre-activations
            self.gates_Left[t,idx_g] = np.tanh(self.gates_pre_Left[t,idx_g])           # g_t = tanh(g_t)
            self.c_Left[t] = self.gates_Left[t,idx_f]*self.c_Left[t-1] + self.gates_Left[t,idx_i]*self.gates_Left[t,idx_g]  # c_t = f_t*c_{t-1} + i_t*g_t
            self.h_Left[t] = self.gates_Left[t,idx_o]*np.tanh(self.c_Left[t])          # h_t = o_t*tanh(c_t)
            self.gates_xh_Right[t] = np.dot(self.Wxh_Right, self.x_rev[t])
            self.gates_hh_Right[t] = np.dot(self.Whh_Right, self.h_Right[t-1])
            self.gates_pre_Right[t] = self.gates_xh_Right[t] + self.gates_hh_Right[t] + self.bxh_Right + self.bhh_Right
            self.gates_Right[t,idx] = 1.0/(1.0 + np.exp(- self.gates_pre_Right[t,idx]))
            self.gates_Right[t,idx_g] = np.tanh(self.gates_pre_Right[t,idx_g])
            self.c_Right[t] = self.gates_Right[t,idx_f]*self.c_Right[t-1] + self.gates_Right[t,idx_i]*self.gates_Right[t,idx_g]
            self.h_Right[t] = self.gates_Right[t,idx_o]*np.tanh(self.c_Right[t])
        self.y_Left = np.dot(self.Why_Left, self.h_Left[T-1])
        self.y_Right = np.dot(self.Why_Right, self.h_Right[T-1])
        self.s = self.y_Left + self.y_Right
        return self.s.copy()  # prediction scores
```
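As a toy illustration of the gate slicing used in this loop (made-up values, with d=2), a single 4d-dimensional pre-activation vector splits into the four gates via the index arrays:

```python
import numpy as np

d = 2
pre = np.linspace(-1.0, 1.0, 4 * d)        # stand-in for gates_pre at one time step
idx_i, idx_g = np.arange(0, d), np.arange(d, 2 * d)
idx_f, idx_o = np.arange(2 * d, 3 * d), np.arange(3 * d, 4 * d)
i_t = 1.0 / (1.0 + np.exp(-pre[idx_i]))    # sigmoid gates
f_t = 1.0 / (1.0 + np.exp(-pre[idx_f]))
o_t = 1.0 / (1.0 + np.exp(-pre[idx_o]))
g_t = np.tanh(pre[idx_g])                  # candidate cell state uses tanh
```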
The key part is the `for` loop, and we only follow the variables ending in `Left`. The fourth statement in the loop, `self.gates_Left[t,idx] = 1.0/(1.0 + np.exp(- self.gates_pre_Left[t,idx]))`, corresponds to $f_t = \sigma(W_f \cdot [h_{t-1}; x_t] + b_f)$, $i_t = \sigma(W_i \cdot [h_{t-1}; x_t] + b_i)$ and $o_t = \sigma(W_o \cdot [h_{t-1}; x_t] + b_o)$; the next statement, `self.gates_Left[t,idx_g] = np.tanh(self.gates_pre_Left[t,idx_g])`, corresponds to $g_t = \tanh(W_g \cdot [h_{t-1}; x_t] + b_g)$.

In NLP tasks each word is first vectorized with a distributed word embedding, so the traditional approach of assigning one relevance value per input feature does not directly apply. The authors instead assign a relevance per word by summing the relevances over all embedding dimensions. For example, with an input sequence of shape $[8, 60]$ (8 words, 60-dimensional embeddings), the LRP result also has shape $[8, 60]$; summing over the embedding axis yields shape $[8,]$, i.e., one relevance value per word.
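A toy illustration of this aggregation (random stand-in values, not real LRP output):

```python
import numpy as np

R = np.random.default_rng(0).normal(size=(8, 60))  # stand-in for an (8 words, 60 dims) LRP result
word_relevance = R.sum(axis=1)                     # shape (8,): one relevance per word
print(word_relevance.shape)
```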
The authors observe that LSTMs and GRUs contain two kinds of operations: weighted connections and multiplicative interactions.

[Figure: the red box marks the weighted-connections part of the LSTM diagram above.]
The basic operation in weighted connections is still $y = \sigma(W \cdot x + b)$. The authors avoid explicitly introducing the nonlinear activation, so many formulas in the paper omit it; where an activation exists, the post-activation value is used. Suppose the forward computation is $z_j = \sum_i z_i \omega_{ij} + b_j$ (with $z_j$ denoting the post-activation value if an activation is applied), where $i$ ranges over the predecessor neurons of $j$. Given the relevance $R_j$ of neuron $j$, the relevance propagated from $j$ back to $i$ is

$R_{i \leftarrow j} = \dfrac{z_i \omega_{ij} + \frac{\epsilon \cdot \mathrm{sign}(z_j) + \delta \cdot b_j}{N}}{z_j + \epsilon \cdot \mathrm{sign}(z_j)} \cdot R_j$

$R_i = \sum_j R_{i \leftarrow j}$

(note the stabilizer uses $\mathrm{sign}(z_j)$ in both numerator and denominator, matching `sign_out` in the code below).
The authors set $\epsilon = 0.001$ and $\delta = 1.0$; $N$ is the total number of lower-layer neurons connected to $j$, and $\mathrm{sign}$ maps non-negative values to $1$ and negative values to $-1$.
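A quick numeric check of this rule on a single output neuron with two inputs (all values made up):

```python
import numpy as np

z = np.array([2.0, -1.0])            # inputs z_i
w = np.array([0.5, 0.3])             # weights w_ij
b = 0.1                              # bias b_j
zj = z @ w + b                       # forward value z_j = 0.8
eps, delta, N, Rj = 0.001, 1.0, 2, 1.0
numer = z * w + (eps * np.sign(zj) + delta * b) / N
Ri = numer / (zj + eps * np.sign(zj)) * Rj
print(Ri, Ri.sum())                  # Ri.sum() == Rj: relevance is conserved
```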
We will write this $R_j \rightarrow R_i$ propagation as $LL$ (for LRP_Linear) below.
The concept of multiplicative interactions ties in with the gates of recurrent networks; the red box below marks the multiplicative-interactions part. [Figure: LSTM diagram with the multiplicative interactions boxed in red.]

LSTMs and GRUs also contain a second kind of computation: $z_j = z_g \cdot z_s$, usually called a gate (the part boxed in red above). One operand typically lies in $[0, 1]$ (it comes from a sigmoid) and acts as a gate; the authors call it the gate neuron $z_g$, and the other the source neuron $z_s$.
For this case the authors assign all of the incoming relevance to the source neuron and none to the gate neuron: $R_{z_s} = R_{z_j}$ and $R_{z_g} = 0$.

In effect, during the backward pass the parts marked by the green lines are zeroed out, so the computation graph no longer includes them; among the weighted connections, only the tanh branch (the one producing $g_t$) takes part in the backward computation.

So, ignoring activation functions and given $R_{C_t}$ and $R_{h_t}$, the relevance flow through one LSTM cell can be summarized as:

$R_{C_t} \mathrel{+}= R_{h_t}$ (since $h_t = o_t \odot \tanh(C_t)$ and the gate $o_t$ is ignored)
$R_{C_{t-1}} = LL(f_t \odot C_{t-1} \rightarrow C_t)$
$R_{g_t} = LL(i_t \odot g_t \rightarrow C_t)$
$R_{x_t} = LL(x_t \rightarrow g_t)$, $R_{h_{t-1}} = LL(h_{t-1} \rightarrow g_t)$
For initializing the relevance vectors: the model's final output is $c = W_{left} \cdot h_{left}^T + W_{right} \cdot h_{right}^T$. This is a 5-class task, so if the target class is 2 the output relevance is the prediction score masked to that class: the mask is $[0, 0, 1, 0, 0]$ and the output relevance is `self.s*Rout_mask` in the code; all other relevances are initialized to 0.
The `lrp` function:
```python
    def lrp(self, w, LRP_class, eps=0.001, bias_factor=0.0):
        """
        Layer-wise Relevance Propagation (LRP) backward pass.
        Compute the hidden layer relevances by performing LRP for the target class LRP_class
        (according to the papers:
            - https://doi.org/10.1371/journal.pone.0130140
            - https://doi.org/10.18653/v1/W17-5221 )
        """
        # forward pass
        self.set_input(w)
        self.forward()

        T = len(self.w)
        d = int(self.Wxh_Left.shape[0]/4)
        e = self.E.shape[1]
        C = self.Why_Left.shape[0]  # number of classes
        idx = np.hstack((np.arange(0,d), np.arange(2*d,4*d))).astype(int)  # indices of gates i,f,o together
        idx_i, idx_g, idx_f, idx_o = np.arange(0,d), np.arange(d,2*d), np.arange(2*d,3*d), np.arange(3*d,4*d)  # indices of gates i,g,f,o separately

        # initialize
        Rx = np.zeros(self.x.shape)
        Rx_rev = np.zeros(self.x.shape)
        Rh_Left = np.zeros((T+1, d))
        Rc_Left = np.zeros((T+1, d))
        Rg_Left = np.zeros((T, d))   # gate g only
        Rh_Right = np.zeros((T+1, d))
        Rc_Right = np.zeros((T+1, d))
        Rg_Right = np.zeros((T, d))  # gate g only

        Rout_mask = np.zeros((C))
        Rout_mask[LRP_class] = 1.0

        # format reminder: lrp_linear(hin, w, b, hout, Rout, bias_nb_units, eps, bias_factor)
        Rh_Left[T-1] = lrp_linear(self.h_Left[T-1], self.Why_Left.T, np.zeros((C)), self.s, self.s*Rout_mask, 2*d, eps, bias_factor, debug=False)
        Rh_Right[T-1] = lrp_linear(self.h_Right[T-1], self.Why_Right.T, np.zeros((C)), self.s, self.s*Rout_mask, 2*d, eps, bias_factor, debug=False)
        for t in reversed(range(T)):
            Rc_Left[t] += Rh_Left[t]
            Rc_Left[t-1] = lrp_linear(self.gates_Left[t,idx_f]*self.c_Left[t-1], np.identity(d), np.zeros((d)), self.c_Left[t], Rc_Left[t], 2*d, eps, bias_factor, debug=False)
            Rg_Left[t] = lrp_linear(self.gates_Left[t,idx_i]*self.gates_Left[t,idx_g], np.identity(d), np.zeros((d)), self.c_Left[t], Rc_Left[t], 2*d, eps, bias_factor, debug=False)
            Rx[t] = lrp_linear(self.x[t], self.Wxh_Left[idx_g].T, self.bxh_Left[idx_g]+self.bhh_Left[idx_g], self.gates_pre_Left[t,idx_g], Rg_Left[t], d+e, eps, bias_factor, debug=False)
            Rh_Left[t-1] = lrp_linear(self.h_Left[t-1], self.Whh_Left[idx_g].T, self.bxh_Left[idx_g]+self.bhh_Left[idx_g], self.gates_pre_Left[t,idx_g], Rg_Left[t], d+e, eps, bias_factor, debug=False)
            Rc_Right[t] += Rh_Right[t]
            Rc_Right[t-1] = lrp_linear(self.gates_Right[t,idx_f]*self.c_Right[t-1], np.identity(d), np.zeros((d)), self.c_Right[t], Rc_Right[t], 2*d, eps, bias_factor, debug=False)
            Rg_Right[t] = lrp_linear(self.gates_Right[t,idx_i]*self.gates_Right[t,idx_g], np.identity(d), np.zeros((d)), self.c_Right[t], Rc_Right[t], 2*d, eps, bias_factor, debug=False)
            Rx_rev[t] = lrp_linear(self.x_rev[t], self.Wxh_Right[idx_g].T, self.bxh_Right[idx_g]+self.bhh_Right[idx_g], self.gates_pre_Right[t,idx_g], Rg_Right[t], d+e, eps, bias_factor, debug=False)
            Rh_Right[t-1] = lrp_linear(self.h_Right[t-1], self.Whh_Right[idx_g].T, self.bxh_Right[idx_g]+self.bhh_Right[idx_g], self.gates_pre_Right[t,idx_g], Rg_Right[t], d+e, eps, bias_factor, debug=False)

        return Rx, Rx_rev[::-1,:], Rh_Left[-1].sum()+Rc_Left[-1].sum()+Rh_Right[-1].sum()+Rc_Right[-1].sum()
```
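A sketch of how these pieces could be used end to end, assuming the trained model files from the authors' repo under `./model/`, that `self.voc` is a list of words, and that the (hypothetical) example words all exist in the vocabulary:

```python
net = LSTM_bidi(model_path='./model/')                 # load trained model
words = ['this', 'movie', 'was', 'not', 'good']        # hypothetical example sentence
w = [net.voc.index(t) for t in words]                  # map words to vocabulary indices
Rx, Rx_rev, R_rest = net.lrp(w, LRP_class=0)           # relevances w.r.t. target class 0
word_relevance = np.sum(Rx + Rx_rev, axis=1)           # one relevance score per word
```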
The `lrp_linear` function:
```python
import numpy as np
from numpy import newaxis as na


def lrp_linear(hin, w, b, hout, Rout, bias_nb_units, eps, bias_factor=0.0, debug=False):
    """
    LRP for a linear layer with input dim D and output dim M.
    Args:
    - hin:            forward pass input, of shape (D,)
    - w:              connection weights, of shape (D, M)
    - b:              biases, of shape (M,)
    - hout:           forward pass output, of shape (M,) (unequal to np.dot(w.T,hin)+b if more than one incoming layer!)
    - Rout:           relevance at layer output, of shape (M,)
    - bias_nb_units:  total number of connected lower-layer units (onto which the bias/stabilizer contribution is redistributed for sanity check)
    - eps:            stabilizer (small positive number)
    - bias_factor:    set to 1.0 to check global relevance conservation, otherwise use 0.0 to ignore bias/stabilizer redistribution (recommended)
    Returns:
    - Rin:            relevance at layer input, of shape (D,)
    """
    sign_out = np.where(hout[na,:]>=0, 1., -1.)  # shape (1, M)
    numer = (w * hin[:,na]) + (bias_factor * (b[na,:]*1. + eps*sign_out*1.) / bias_nb_units)  # shape (D, M)
    # Note: here we multiply the bias_factor with both the bias b and the stabilizer eps since in fact
    # using the term (b[na,:]*1. + eps*sign_out*1.) / bias_nb_units in the numerator is only useful for sanity check
    # (in the initial paper version we were using (bias_factor*b[na,:]*1. + eps*sign_out*1.) / bias_nb_units instead)
    denom = hout[na,:] + (eps*sign_out*1.)  # shape (1, M)
    message = (numer/denom) * Rout[na,:]    # shape (D, M)
    Rin = message.sum(axis=1)               # shape (D,)
    if debug:
        print("local diff: ", Rout.sum() - Rin.sum())
    # Note:
    # - local  layer   relevance conservation if bias_factor==1.0 and bias_nb_units==D (i.e. when only one incoming layer)
    # - global network relevance conservation if bias_factor==1.0 and bias_nb_units set accordingly to the total number of lower-layer connections
    # -> can be used for sanity check
    return Rin
```
`lrp_linear` handles every linear operation $y = W \cdot x + b$ in the network: given the relevance $R_y$ at $y$, it computes the relevance $R_x$ at $x$. `numer` corresponds to $z_i \omega_{ij} + \frac{\epsilon \cdot \mathrm{sign}(z_j) + \delta \cdot b_j}{N}$ and `denom` to $z_j + \epsilon \cdot \mathrm{sign}(z_j)$.
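A small sanity check one might run (made-up shapes and values): with `bias_factor=1.0` and `bias_nb_units=D`, a single linear layer should conserve relevance exactly.

```python
import numpy as np

D, M = 4, 3
rng = np.random.default_rng(1)
hin = rng.normal(size=D)
w = rng.normal(size=(D, M))
b = rng.normal(size=M)
hout = np.dot(w.T, hin) + b                 # single incoming layer, so hout is exact
Rout = rng.uniform(size=M)
Rin = lrp_linear(hin, w, b, hout, Rout, bias_nb_units=D, eps=0.001, bias_factor=1.0)
print(Rout.sum(), Rin.sum())                # the two sums match: relevance is conserved
```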
Back in the `lrp` function: for $C_t, h_t = \mathrm{LSTMCell}(C_{t-1}, h_{t-1}, x_t)$, given the relevance value $R_{C_t}$ of $C_t$ and $R_{h_t}$ of $h_t$, we need to compute $R_{x_t}$, $R_{h_{t-1}}$ and $R_{C_{t-1}}$. The corresponding computation is:
```python
Rc_Left[t] += Rh_Left[t]  # h_t = o_t*tanh(c_t): gate o_t is ignored, so Rh_t flows entirely into Rc_t
# c_t = f_t*c_{t-1} + i_t*g_t: split Rc_t between the two summands via lrp_linear (identity weights)
Rc_Left[t-1] = lrp_linear(self.gates_Left[t,idx_f]*self.c_Left[t-1], np.identity(d), np.zeros((d)), self.c_Left[t], Rc_Left[t], 2*d, eps, bias_factor, debug=False)
Rg_Left[t] = lrp_linear(self.gates_Left[t,idx_i]*self.gates_Left[t,idx_g], np.identity(d), np.zeros((d)), self.c_Left[t], Rc_Left[t], 2*d, eps, bias_factor, debug=False)
# g_t pre-activation = Wxh[g]*x_t + Whh[g]*h_{t-1} + biases: split Rg_t between x_t and h_{t-1}
Rx[t] = lrp_linear(self.x[t], self.Wxh_Left[idx_g].T, self.bxh_Left[idx_g]+self.bhh_Left[idx_g], self.gates_pre_Left[t,idx_g], Rg_Left[t], d+e, eps, bias_factor, debug=False)
Rh_Left[t-1] = lrp_linear(self.h_Left[t-1], self.Whh_Left[idx_g].T, self.bxh_Left[idx_g]+self.bhh_Left[idx_g], self.gates_pre_Left[t,idx_g], Rg_Left[t], d+e, eps, bias_factor, debug=False)
```
Here is the LSTM diagram once more. [Figure: LSTM cell with relevance flows marked.] The red lines trace the computation of $R_{C_{t-1}}$ and $R_{g_t}$; the green lines trace that of $R_{x_t}$ and $R_{h_{t-1}}$.
The computation proceeds as follows: following the theory above, $R_{x_t}$ and $R_{h_{t-1}}$ are computed backward along the green lines; since the gate neuron $z_g$ in the red multiplication boxes is ignored, the relevance only flows along a single path.
[1] Arras, L., et al. "Explaining Recurrent Neural Network Predictions in Sentiment Analysis." EMNLP'17 Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, 2017.