PyTorch Geometric Source Code Reading and Analysis: Understanding and Using the MessagePassing Class

Contents

      • Understanding the overall design of MessagePassing
      • The unified code framework
      • Usage

Understanding the overall design of MessagePassing

The MessagePassing class is designed with the template method pattern, because an operation on a graph naturally splits into fixed steps: aggregating the feature and label information of neighbors, and passing the resulting messages on to the next layer. The template method pattern fits this well: it defines the skeleton of an algorithm and defers some steps to subclasses, so subclasses can redefine specific steps without changing the overall structure of the algorithm. Put plainly: completing the task takes a fixed number of steps, but the details of each step vary from object to object. The parent class therefore defines a single overall method that calls the per-step implementations in the required order, while the concrete implementation of each step is supplied by a subclass. The advantage is that the detailed steps are defined in subclasses, and a subclass can specify its own processing without altering the algorithm's overall structure.
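As a minimal, self-contained sketch of the pattern (the class and method names below are illustrative, not part of PyG):

from abc import ABC, abstractmethod

class GraphLayer(ABC):
    # Template method: fixes the order of the steps.
    def run(self, x):
        msgs = self.message(x)    # step implemented by the subclass
        return self.update(msgs)  # step implemented by the subclass

    @abstractmethod
    def message(self, x): ...

    @abstractmethod
    def update(self, msgs): ...

class MeanLayer(GraphLayer):
    # Concrete steps; the skeleton in run() stays untouched.
    def message(self, x):
        return x

    def update(self, msgs):
        return sum(msgs) / len(msgs)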

Python implements virtual functions through inheritance. A method that every subclass must override can be enforced by having the base-class implementation raise NotImplementedError, as in MessagePassing itself:

    def message_and_aggregate(
        self,
        adj_t: Union[SparseTensor, Tensor],
    ) -> Tensor:
        raise NotImplementedError
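A subclass that wants the fused sparse code path then overrides this hook. A hedged sketch, assuming the torch_sparse backend (the layer itself is made up for illustration):

from torch import Tensor
from torch_sparse import SparseTensor, matmul
from torch_geometric.nn import MessagePassing

class SparseMeanConv(MessagePassing):  # illustrative subclass
    def __init__(self):
        super().__init__(aggr='mean')

    def forward(self, x, adj_t):
        # Passing a SparseTensor makes propagate() dispatch to
        # message_and_aggregate() instead of message() + aggregate().
        return self.propagate(adj_t, x=x)

    def message_and_aggregate(self, adj_t: SparseTensor, x: Tensor) -> Tensor:
        # Fused message + aggregation: a single sparse-dense matmul,
        # without materializing per-edge messages.
        return matmul(adj_t, x, reduce=self.aggr)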

The unified code framework

The convolution operation on graphs is generally expressed as a neighborhood aggregation or message passing scheme.

With $\mathbf{x}^{(k-1)}_i \in \mathbb{R}^F$ denoting the node features of node $i$ in layer $(k-1)$ and $\mathbf{e}_{j,i} \in \mathbb{R}^D$ denoting (optional) edge features from node $j$ to node $i$, message passing graph neural networks can be described as

$$\mathbf{x}_i^{(k)} = \gamma^{(k)} \left( \mathbf{x}_i^{(k-1)}, \square_{j \in \mathcal{N}(i)} \, \phi^{(k)}\left(\mathbf{x}_i^{(k-1)}, \mathbf{x}_j^{(k-1)}, \mathbf{e}_{j,i}\right) \right),$$

where $\square$ denotes a differentiable, permutation-invariant function, e.g. sum, mean or max, and $\gamma$ and $\phi$ denote differentiable functions such as MLPs (multi-layer perceptrons).
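In MessagePassing these three ingredients map onto overridable hooks: $\phi$ is message(), $\square$ is the aggregation selected via the aggr argument (or a custom aggregate()), and $\gamma$ is update(). A schematic skeleton of a subclass (the layer is a placeholder, not a real PyG class):

from torch_geometric.nn import MessagePassing

class MyConv(MessagePassing):                   # illustrative skeleton
    def __init__(self):
        super().__init__(aggr='add')            # square: a permutation-invariant reduction

    def forward(self, x, edge_index):
        return self.propagate(edge_index, x=x)  # drives message -> aggregate -> update

    def message(self, x_i, x_j):                # phi(x_i, x_j, e_ji)
        return x_j - x_i

    def update(self, aggr_out):                 # gamma(aggregated messages)
        return aggr_out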

Usage

MessagePassing(aggr="add", flow="source_to_target", node_dim=-2):

Defines the aggregation scheme to use ("add", "mean" or "max") and the flow direction of message passing ("source_to_target" or "target_to_source"). In addition, the node_dim attribute indicates along which axis to propagate.
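For example, a layer that averages incoming messages along the default node dimension could be declared as follows (a minimal sketch; the layer name is made up):

from torch_geometric.nn import MessagePassing

class MeanConv(MessagePassing):  # hypothetical layer
    def __init__(self):
        # Average neighbor messages, flowing from source nodes to target nodes.
        super().__init__(aggr='mean', flow='source_to_target', node_dim=-2)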

MessagePassing.propagate(edge_index, size=None, **kwargs):

The initial call to start propagating messages. Takes the edge indices and all additional data needed to construct messages and to update node embeddings. Note that propagate() is not limited to exchanging messages in square adjacency matrices of shape [N, N]; it can also handle general sparse assignment matrices of shape [N, M], e.g. bipartite graphs, by passing size=(N, M) as an additional argument. If set to None, the assignment matrix is assumed to be square. For bipartite graphs with two independent sets of nodes and indices, where each set holds its own information, this split can be marked by passing the information as a tuple, e.g. x=(x_N, x_M).
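A hedged sketch of the bipartite case (all names and shapes here are illustrative):

import torch

# Hypothetical bipartite graph: N source nodes, M target nodes.
N, M = 4, 3
x_src = torch.randn(N, 16)              # features of the N source nodes
x_dst = torch.randn(M, 16)              # features of the M target nodes
edge_index = torch.tensor([[0, 1, 3],   # indices into the source set
                           [0, 2, 1]])  # indices into the target set

# Inside a MessagePassing subclass's forward() one would then call:
# out = self.propagate(edge_index, x=(x_src, x_dst), size=(N, M))
# out has one row per target node, i.e. shape [M, out_channels]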

MessagePassing.message(...):

Constructs messages to node $i$, i.e. applies $\phi$ for each edge $(j, i) \in \mathcal{E}$ if flow="source_to_target", or for each edge $(i, j) \in \mathcal{E}$ if flow="target_to_source". It can take any argument that was passed to propagate(). In addition, tensors passed to propagate() can be mapped to the respective nodes $i$ and $j$ by appending the suffix _i or _j to the variable name, e.g. x_i and x_j. Conventionally, $i$ refers to the central node and $j$ to the neighboring node.
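For instance, if forward() calls self.propagate(edge_index, x=x, norm=norm), the message() of that subclass may declare any of the following parameters (a sketch of the naming convention only):

    # Inside a MessagePassing subclass:
    def message(self, x_i, x_j, norm):
        # x_i: features of the central (target) node of each edge, shape [E, F]
        # x_j: features of the neighboring (source) node of each edge, shape [E, F]
        # norm: forwarded unchanged from propagate(), shape [E]
        return norm.view(-1, 1) * (x_j - x_i)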

MessagePassing.update(aggr_out, ...):

Updates node embeddings, i.e. applies $\gamma$ for each node $i \in \mathcal{V}$. Takes the output of aggregation as its first argument, together with any argument originally passed to propagate().

propagate() calls message(), aggregate() and update() internally, in that order.
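Conceptually, the control flow looks like this (a heavily simplified sketch, not the actual PyG source, which additionally handles argument inspection, sparse inputs and hooks; collect() below is a hypothetical stand-in for PyG's internal argument resolution):

def propagate(self, edge_index, **kwargs):
    msg_kwargs = self.collect(edge_index, kwargs)   # resolve _i/_j suffixes (hypothetical helper)
    out = self.message(**msg_kwargs)                # phi: one row per edge
    out = self.aggregate(out, index=edge_index[1])  # square: scatter-reduce per target node
    return self.update(out)                         # gamma: one row per node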

Example: the GCN layer is mathematically defined as

$$\mathbf{x}_i^{(k)} = \sum_{j \in \mathcal{N}(i) \cup \{ i \}} \frac{1}{\sqrt{\deg(i)} \cdot \sqrt{\deg(j)}} \cdot \left( \mathbf{W}^{\top} \cdot \mathbf{x}_j^{(k-1)} \right) + \mathbf{b},$$

The abstract framework is as follows:

[Figure 1: the abstract MessagePassing framework]

Mapped onto the general framework:

$$\phi^{(k)}\left(\mathbf{x}_i^{(k-1)}, \mathbf{x}_j^{(k-1)}, \mathbf{e}_{j,i}\right) = \frac{1}{\sqrt{\deg(i)} \cdot \sqrt{\deg(j)}} \cdot \left( \mathbf{W}^{\top} \cdot \mathbf{x}_j^{(k-1)} \right) + \mathbf{b}$$

$$\square_{j \in \mathcal{N}(i)} = \sum_{j \in \mathcal{N}(i) \cup \{ i \}}$$

$$\gamma^{(k)} = \text{identity (direct mapping)}$$

Code implementation:
Note a difference from the formula above: the linear transform $\mathbf{W}^{\top} \mathbf{x}$ is applied once to the whole node feature matrix with a single matrix multiplication, rather than separately per edge, which reduces computation.

"""
1、Add self-loops to the adjacency matrix.
2、Linearly transform node feature matrix.
3、Compute normalization coefficients.
4、Normalize node features
5、Sum up neighboring node features ("add" aggregation).Apply a final bias vector.

Steps 1-3 are typically computed before message passing takes place. Steps 4-5 can be easily processed using the MessagePassing base class. The full layer implementation is shown below:
"""
import torch
from torch.nn import Linear, Parameter
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import add_self_loops, degree

class GCNConv(MessagePassing):
    def __init__(self, in_channels, out_channels):
        super().__init__(aggr='add')  # "Add" aggregation (Step 5).
        
        # Trainable parameters.
        self.lin = Linear(in_channels, out_channels, bias=False)  # weight shape: (in_channels, out_channels)
        self.bias = Parameter(torch.empty(out_channels))

        # Reset and initialize the learnable parameters.
        self.reset_parameters()

    def reset_parameters(self):
        self.lin.reset_parameters()
        self.bias.data.zero_()

    def forward(self, x, edge_index):
        # x has shape [N, in_channels]
        # edge_index has shape [2, E]

        # Step 1: Add self-loops to the adjacency matrix.
        edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))

        # Step 2: Linearly transform node feature matrix.
        # One matmul over the whole feature matrix is cheaper than transforming per edge.
        x = self.lin(x)

        # Step 3: Compute normalization.
        row, col = edge_index
        # Node degrees and their inverse square roots.
        deg = degree(col, x.size(0), dtype=x.dtype)
        deg_inv_sqrt = deg.pow(-0.5)
        deg_inv_sqrt[deg_inv_sqrt == float('inf')] = 0        
        norm = deg_inv_sqrt[row] * deg_inv_sqrt[col]  # per-edge coefficient 1/(sqrt(deg(i))*sqrt(deg(j)))

        # Step 4-5: Start propagating messages.
        # Delegate steps 4-5 to the base class.
        out = self.propagate(edge_index, x=x, norm=norm)

        # Step 6: Apply a final bias vector.
        out += self.bias

        return out

    def message(self, x_j, norm):
        # x_j has shape [E, out_channels]: for each edge, the features of its
        # source node, i.e. the neighbors of the central node i.
        # Step 4: Normalize node features by broadcasting the per-edge coefficients.
        return norm.view(-1, 1) * x_j
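A quick sanity check on a toy graph (the shapes below are illustrative):

# Toy graph: 3 nodes, 2 undirected edges stored as 4 directed edges.
x = torch.randn(3, 8)
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])

conv = GCNConv(in_channels=8, out_channels=16)
out = conv(x, edge_index)  # shape: [3, 16]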

Example 2: Implementing the Edge Convolution

The edge convolutional layer processes graphs or point clouds and is mathematically defined as

$$\mathbf{x}_i^{(k)} = \max_{j \in \mathcal{N}(i)} h_{\mathbf{\Theta}} \left( \mathbf{x}_i^{(k-1)}, \mathbf{x}_j^{(k-1)} - \mathbf{x}_i^{(k-1)} \right),$$

where $h_{\mathbf{\Theta}}$ denotes an MLP.

Mapped onto the general framework:

$$\phi^{(k)}\left(\mathbf{x}_i^{(k-1)}, \mathbf{x}_j^{(k-1)}, \mathbf{e}_{j,i}\right) = h_{\mathbf{\Theta}} \left( \mathbf{x}_i^{(k-1)}, \mathbf{x}_j^{(k-1)} - \mathbf{x}_i^{(k-1)} \right)$$

$$\square_{j \in \mathcal{N}(i)} = \max_{j \in \mathcal{N}(i)}$$

$$\gamma^{(k)} = \text{identity (direct mapping)}$$

import torch
from torch.nn import Sequential as Seq, Linear, ReLU
from torch_geometric.nn import MessagePassing

class EdgeConv(MessagePassing):
    def __init__(self, in_channels, out_channels):
        super().__init__(aggr='max') #  "Max" aggregation.
        # The MLP h_Theta.
        self.mlp = Seq(Linear(2 * in_channels, out_channels),
                       ReLU(),
                       Linear(out_channels, out_channels))

    def forward(self, x, edge_index):
        # x has shape [N, in_channels]
        # edge_index has shape [2, E]

        return self.propagate(edge_index, x=x)

    def message(self, x_i, x_j):
        # x_i has shape [E, in_channels]
        # x_j has shape [E, in_channels]

        tmp = torch.cat([x_i, x_j - x_i], dim=1)  # tmp has shape [E, 2 * in_channels]
        return self.mlp(tmp)
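As with GCNConv, a quick usage check (toy shapes; in the DynamicEdgeConv variant from the PyG tutorial, edge_index would instead be recomputed with k-NN in every forward pass):

x = torch.randn(5, 3)                        # e.g. a point cloud of 5 points in 3D
edge_index = torch.tensor([[0, 1, 2, 3, 4],  # a simple ring of directed edges
                           [1, 2, 3, 4, 0]])

conv = EdgeConv(in_channels=3, out_channels=32)
out = conv(x, edge_index)                    # shape: [5, 32]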

References:

Creating Message Passing Networks — pytorch_geometric documentation (pytorch-geometric.readthedocs.io)

Detailed Explanation of the 23 Design Patterns (all 23) — 鬼灭之刃's blog, CSDN
