torch_geometric -- Pooling Layers

global_add_pool

Returns batch-wise graph-level outputs by summing node features across the node dimension, so that for a single graph G_i the output is computed as

r_i = Σ_{n=1}^{N_i} x_n

from torch_geometric.nn import global_mean_pool, global_max_pool, global_add_pool
import torch as th

# 5 nodes with 5-dimensional features; the first three nodes belong to graph 0, the last two to graph 1
f = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [0, 0, 1, 1, 1], [2, 2, 2, 0, 0], [3, 0, 3, 0, 3]]
batch = [0, 0, 0, 1, 1]

f_t = th.tensor(f)
batch_t = th.tensor(batch)

# sum the node features of each graph -> one output row per graph
result_add = global_add_pool(f_t, batch_t)
print(result_add)
-----------------------------------
tensor([[ 7,  9, 12, 14, 16],
        [ 5,  2,  5,  0,  3]])

global_mean_pool

Returns batch-wise graph-level outputs by averaging node features across the node dimension, so that for a single graph G_i the output is computed as

r_i = (1 / N_i) · Σ_{n=1}^{N_i} x_n

from torch_geometric.nn import global_mean_pool, global_max_pool, global_add_pool
import torch as th

# same toy batch as above: 3 nodes in graph 0, 2 nodes in graph 1
f = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [0, 0, 1, 1, 1], [2, 2, 2, 0, 0], [3, 0, 3, 0, 3]]
batch = [0, 0, 0, 1, 1]

f_t = th.tensor(f)
batch_t = th.tensor(batch)

# average the node features of each graph; with integer features the mean is
# computed in integer arithmetic (truncated) -- cast to float for an exact mean
result_mean = global_mean_pool(f_t, batch_t)
print(result_mean)
------------------------------
tensor([[2, 3, 4, 4, 5],
        [2, 1, 2, 0, 1]])

global_max_pool

Returns batch-wise graph-level outputs by taking the channel-wise maximum across the node dimension, so that for a single graph G_i the output is computed as

r_i = max_{n=1}^{N_i} x_n

from torch_geometric.nn import global_mean_pool, global_max_pool, global_add_pool
import torch as th

# same toy batch as above
f = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [0, 0, 1, 1, 1], [2, 2, 2, 0, 0], [3, 0, 3, 0, 3]]
batch = [0, 0, 0, 1, 1]

f_t = th.tensor(f)
batch_t = th.tensor(batch)

# channel-wise maximum of the node features of each graph
result_max = global_max_pool(f_t, batch_t)
print(result_max)
-------------------------------------------
tensor([[ 6,  7,  8,  9, 10],
        [ 3,  2,  3,  0,  3]])

TopKPooling

The pooling operator from the "Graph U-Nets", "Towards Sparse Hierarchical Graph Classifiers" and "Understanding Attention and Generalization in Graph Neural Networks" papers.
If min_score is None (a fixed ratio of nodes is kept per graph):

    y = X · p / ‖p‖
    i = top_k(y)
    X' = (X ⊙ tanh(y))_i
    A' = A_{i,i}

If min_score is set (only nodes whose score exceeds the threshold are kept):

    y = softmax(X · p)
    i = { j : y_j > min_score }
    X' = (X ⊙ y)_i
    A' = A_{i,i}

where p is the learnable projection vector, X the node feature matrix and A the adjacency matrix.

from torch_geometric.nn import TopKPooling
import torch as th

f = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [0, 0, 1, 1, 1], [2, 2, 2, 0, 0], [3, 0, 3, 0, 3]]
edge_index = [[0, 1], [2, 0], [3, 4]]   # edges as (source, target) pairs
batch = [0, 0, 0, 1, 1]

# edge_index must have shape [2, num_edges], so transpose the list of pairs
edge_index_t = th.tensor(edge_index, dtype=th.long).t().contiguous()
f_t = th.tensor(f, dtype=th.float)      # float features for the learnable projection
batch_t = th.tensor(batch)

# keeps ratio=0.5 of the nodes of every graph (default); the projection vector is
# randomly initialized, so scores, kept nodes and exact values vary between runs
topkp = TopKPooling(in_channels=5)
# returns the tuple (x, edge_index, edge_attr, batch, perm, score)
result = topkp(x=f_t, edge_index=edge_index_t, batch=batch_t)
print(result)
------------------------------------------------------------------
(tensor([[5.9999, 6.9998, 7.9998, 8.9998, 9.9998],
         [0.9965, 1.9930, 2.9895, 3.9860, 4.9825],
         [2.9566, 0.0000, 2.9566, 0.0000, 2.9566]], grad_fn=<MulBackward0>),
 tensor([[0],
         [1]]),
 None,
 tensor([0, 0, 1]),
 tensor([1, 0, 4]),
 tensor([1.0000, 0.9965, 0.9855], grad_fn=<IndexBackward0>))

SAGPooling

The self-attention pooling operator from the "Self-Attention Graph Pooling" and "Understanding Attention and Generalization in Graph Neural Networks" papers.
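
A minimal usage sketch on the same toy batch as above; it assumes SAGPooling returns the same six-element tuple as TopKPooling (x, edge_index, edge_attr, batch, perm, score). The scoring GNN is randomly initialized, so the kept nodes and values differ between runs.

from torch_geometric.nn import SAGPooling
import torch as th

x = th.tensor([[1., 2., 3., 4., 5.],
               [6., 7., 8., 9., 10.],
               [0., 0., 1., 1., 1.],
               [2., 2., 2., 0., 0.],
               [3., 0., 3., 0., 3.]])
edge_index = th.tensor([[0, 2, 3],      # COO format: row 0 = source nodes
                        [1, 0, 4]])     #             row 1 = target nodes
batch = th.tensor([0, 0, 0, 1, 1])

pool = SAGPooling(in_channels=5, ratio=0.5)   # scores nodes with a GNN instead of a plain linear projection
x_out, edge_index_out, edge_attr_out, batch_out, perm, score = pool(x, edge_index, batch=batch)
print(x_out.shape)   # about half of the nodes of every graph are kept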

EdgePooling

The edge pooling operator from the "Towards Graph Pooling by Edge Contraction" and "Edge Contraction Pooling for Graph Neural Networks" papers.
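
A minimal usage sketch, assuming the forward call takes (x, edge_index, batch) and returns the coarsened features, the coarsened edge_index, the new batch vector and an unpooling record:

from torch_geometric.nn import EdgePooling
import torch as th

x = th.tensor([[1., 0.], [0., 1.], [1., 1.], [2., 2.]])
edge_index = th.tensor([[0, 1, 2],
                        [1, 2, 3]])
batch = th.tensor([0, 0, 0, 0])   # a single graph with 4 nodes

pool = EdgePooling(in_channels=2)
# each contracted edge merges its two endpoints into one new node
x_out, edge_index_out, batch_out, unpool_info = pool(x, edge_index, batch)
print(x_out.shape)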

ASAPooling

The Adaptive Structure Aware Pooling operator from the "ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations" paper.
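
A minimal usage sketch; the five-element return value (x, edge_index, edge_weight, batch, perm) assumed below may differ between library versions:

from torch_geometric.nn import ASAPooling
import torch as th

x = th.tensor([[1., 0.], [0., 1.], [1., 1.], [2., 2.], [0., 2.]])
edge_index = th.tensor([[0, 1, 2, 3],
                        [1, 2, 0, 4]])
batch = th.tensor([0, 0, 0, 1, 1])

pool = ASAPooling(in_channels=2, ratio=0.5)
x_out, edge_index_out, edge_weight_out, batch_out, perm = pool(x, edge_index, batch=batch)
print(x_out.shape)   # roughly half of the nodes per graph remain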

PANPooling

The path-integral-based pooling operator from the "Path Integral Based Convolution and Pooling for Graph Neural Networks" paper.

MemPooling

The memory-based pooling layer from the "Memory-Based Graph Networks" paper, which learns a coarsened graph representation based on soft cluster assignments.
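
A minimal usage sketch, assuming a constructor MemPooling(in_channels, out_channels, heads, num_clusters) and a forward call (x, batch) that returns the pooled dense representation together with the soft cluster assignment S:

from torch_geometric.nn import MemPooling
import torch as th

x = th.tensor([[1., 0.], [0., 1.], [1., 1.], [2., 2.], [0., 2.]])
batch = th.tensor([0, 0, 0, 1, 1])

pool = MemPooling(in_channels=2, out_channels=4, heads=2, num_clusters=3)
x_out, S = pool(x, batch)
print(x_out.shape)   # one [num_clusters, out_channels] representation per graph
print(S.shape)       # soft assignment of every node to every cluster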

max_pool

Pools and coarsens the graph given by a torch_geometric.data.Data object according to the clustering defined in cluster, where all nodes within the same cluster are merged into one node whose features are the feature-wise maximum over the cluster.
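
A minimal sketch of the cluster-based coarsening pattern (requires the torch-cluster package): graclus assigns every node to a cluster, and max_pool merges each cluster into a single node of a new data object:

import torch as th
from torch_geometric.data import Data
from torch_geometric.nn import graclus, max_pool

edge_index = th.tensor([[0, 1, 1, 2, 3, 4],
                        [1, 0, 2, 1, 4, 3]])
x = th.tensor([[1., 2.], [3., 4.], [5., 6.], [7., 8.], [9., 10.]])
data = Data(x=x, edge_index=edge_index)
data.batch = th.zeros(5, dtype=th.long)   # a single graph

cluster = graclus(data.edge_index, num_nodes=data.num_nodes)   # greedy matching of neighboring nodes
coarse = max_pool(cluster, data)                               # one node per cluster
print(coarse.x.shape, coarse.edge_index.shape)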

avg_pool

Pools and coarsens the graph given by a torch_geometric.data.Data object according to the clustering defined in cluster, where all nodes within the same cluster are merged into one node whose features are the average over the cluster.

max_pool_x

Max-pools node features according to the clustering defined in cluster.
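
A minimal sketch; unlike max_pool it only pools the node feature matrix, given a cluster assignment and the batch vector, and returns the pooled features plus the new batch:

import torch as th
from torch_geometric.nn import max_pool_x

x = th.tensor([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
cluster = th.tensor([0, 0, 1, 1])   # nodes 0,1 -> cluster 0; nodes 2,3 -> cluster 1
batch = th.tensor([0, 0, 0, 0])

x_out, batch_out = max_pool_x(cluster, x, batch)
print(x_out)   # tensor([[3., 4.], [7., 8.]])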

max_pool_neighbor_x

Max-pools neighboring node features, where each feature in data.x is replaced by the maximum feature value over the central node and its neighbors (see the sketch after avg_pool_neighbor_x below).

avg_pool_x

Average-pools node features according to the clustering defined in cluster.

avg_pool_neighbor_x

Average-pools neighboring node features, where each feature in data.x is replaced by the average feature value of the central node and its neighbors.
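
A minimal sketch of the two neighborhood variants; they operate directly on a Data object and replace every node feature by the maximum / average over the node itself and its neighbors:

import torch as th
from torch_geometric.data import Data
from torch_geometric.nn import max_pool_neighbor_x, avg_pool_neighbor_x

edge_index = th.tensor([[0, 1, 1, 2],
                        [1, 0, 2, 1]])
x = th.tensor([[1., 1.], [2., 0.], [0., 3.]])

data_max = max_pool_neighbor_x(Data(x=x.clone(), edge_index=edge_index))
data_avg = avg_pool_neighbor_x(Data(x=x.clone(), edge_index=edge_index))
print(data_max.x)   # each row is the channel-wise max over the node and its neighbors
print(data_avg.x)   # each row is the mean over the node and its neighbors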

graclus

A greedy clustering algorithm from the "Weighted Graph Cuts without Eigenvectors: A Multilevel Approach" paper, which picks an unmarked vertex and matches it with one of its unmarked neighbors (the one maximizing its edge weight).

voxel_grid

Voxel grid pooling from the "Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs" paper, which overlays a regular grid of user-defined size over a point cloud and clusters all points within the same voxel.
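
A minimal sketch; the resulting cluster vector can be fed to max_pool / avg_pool / max_pool_x just like a graclus clustering (keyword arguments are used here because the positional order of size and batch has differed across versions):

import torch as th
from torch_geometric.nn import voxel_grid

pos = th.tensor([[0.1, 0.2], [0.4, 0.3], [1.2, 1.1], [1.4, 1.3]])   # 2D point coordinates
batch = th.tensor([0, 0, 0, 0])

cluster = voxel_grid(pos=pos, size=1.0, batch=batch)   # points in the same 1x1 cell share a cluster id
print(cluster)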

fps

A sampling algorithm from the "PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space" paper, which iteratively samples the most distant point with regard to the rest of the points.
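
A minimal sketch of farthest point sampling (requires torch-cluster): for every graph in the batch it keeps roughly ratio * N points, iteratively picking the point farthest from those already selected:

import torch as th
from torch_geometric.nn import fps

pos = th.tensor([[0., 0.], [1., 0.], [0., 1.], [10., 10.], [11., 10.]])
batch = th.tensor([0, 0, 0, 1, 1])

idx = fps(pos, batch, ratio=0.5)   # indices of the sampled points
print(idx)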

knn

Finds for each element in y the k nearest points in x.
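
A minimal sketch: the returned assignment has shape [2, num_pairs], where row 0 indexes y and row 1 the matched points in x:

import torch as th
from torch_geometric.nn import knn

x = th.tensor([[0., 0.], [1., 0.], [0., 1.], [5., 5.]])
y = th.tensor([[0.1, 0.1], [4.9, 5.1]])

assign_index = knn(x, y, k=2)   # the 2 nearest points in x for every point in y
print(assign_index)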

knn_graph

Computes graph edges to the nearest k points.
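
A minimal sketch: every node is connected to its k nearest neighbours inside its own graph of the batch:

import torch as th
from torch_geometric.nn import knn_graph

x = th.tensor([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
batch = th.tensor([0, 0, 0, 0])

edge_index = knn_graph(x, k=2, batch=batch, loop=False)   # loop=False: no self-loops
print(edge_index)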

radius

Finds for each element in y all points in x within distance r.
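
A minimal sketch: for every point in y it returns the points in x that lie within distance r (the number of neighbours per query is capped by a max_num_neighbors argument):

import torch as th
from torch_geometric.nn import radius

x = th.tensor([[0., 0.], [1., 0.], [0., 1.], [5., 5.]])
y = th.tensor([[0.2, 0.2]])

assign_index = radius(x, y, r=1.0)   # row 0 indexes y, row 1 the points of x inside the ball
print(assign_index)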

radius_graph

Computes graph edges between all points within a given distance.
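
A minimal sketch: it connects every pair of nodes of the same graph whose distance is at most r:

import torch as th
from torch_geometric.nn import radius_graph

x = th.tensor([[0., 0.], [1., 0.], [0., 1.], [5., 5.]])
batch = th.tensor([0, 0, 0, 0])

edge_index = radius_graph(x, r=1.5, batch=batch, loop=False)   # node 3 stays isolated
print(edge_index)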

nearest

Clusters points in x together which are nearest to a given query point in y.
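
A minimal sketch: for every point in x it returns the index of the closest point in y, effectively clustering x around the query points in y:

import torch as th
from torch_geometric.nn import nearest

x = th.tensor([[0., 0.], [1., 0.], [9., 9.], [10., 10.]])
y = th.tensor([[0.5, 0.], [9.5, 9.5]])

cluster = nearest(x, y)
print(cluster)   # tensor([0, 0, 1, 1])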
