Datawhale Team Learning, GNN Task 03: Node Representation Learning with Graph Neural Networks

  • DataWhale open-source learning materials: https://github.com/datawhalechina/team-learning-nlp/tree/master/GNN

Time to get hands-on!

Data Preparation

  • Cora is a citation network: nodes are papers, and two nodes are connected by an edge if one paper cites the other. Each node is described by a 1433-dimensional bag-of-words feature vector. Our task is to infer the category of each paper (7 classes in total).
  • To demonstrate the power of graph neural networks, we compare the node representation learning ability of an MLP against GCN and GAT (two well-known graph neural networks) on this node classification task.
from torch_geometric.datasets import Planetoid
from torch_geometric.transforms import NormalizeFeatures

dataset = Planetoid(root='./dataset/Cora', name='Cora', transform=NormalizeFeatures())
print()
print(f'Dataset: {dataset}:')
print(f'Number of graphs: {len(dataset)}')
print(f'Number of features: {dataset.num_features}')
print(f'Number of classes: {dataset.num_classes}')
data = dataset[0]
print()
print(data)
print('===========')
print(f'Number of nodes: {data.num_nodes}')
print(f'Number of edges: {data.num_edges}')
print(f'Average node degree: {data.num_edges / data.num_nodes:.2f}')
print(f'Number of training nodes: {data.train_mask.sum()}')
print(f'Training node label rate: {int(data.train_mask.sum()) / data.num_nodes:.2f}')
print(f'Contains isolated nodes: {data.contains_isolated_nodes()}')
print(f'Contains self-loops: {data.contains_self_loops()}')
print(f'Is undirected: {data.is_undirected()}')

# result:
Dataset: Cora():
Number of graphs: 1
Number of features: 1433
Number of classes: 7

Data(edge_index=[2, 10556], test_mask=[2708], train_mask=[2708], val_mask=[2708], x=[2708, 1433], y=[2708])
===========
Number of nodes: 2708
Number of edges: 10556
Average node degree: 3.90
Number of training nodes: 140
Training node label rate: 0.05
Contains isolated nodes: False
Contains self-loops: False
Is undirected: True

MLP Model

# MLP model
import torch
from torch.nn import Linear
import torch.nn.functional as F


class MLP(torch.nn.Module):
    def __init__(self, hidden_channels):
        super(MLP, self).__init__()
        torch.manual_seed(12345)
        self.lin1 = Linear(dataset.num_features, hidden_channels)
        self.lin2 = Linear(hidden_channels, dataset.num_classes)

    def forward(self, x):
        x = self.lin1(x)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.lin2(x)
        return x

model = MLP(hidden_channels=16)
print(model)

Training and Testing the MLP

  • The MLP model is quite simple and beginner-friendly.
model = MLP(hidden_channels=16)
criterion = torch.nn.CrossEntropyLoss()  # Define loss criterion.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)  # Define optimizer.

def train():
    model.train()
    optimizer.zero_grad()  # Clear gradients.
    out = model(data.x[data.train_mask])  # Perform a single forward pass.
    loss = criterion(out, data.y[data.train_mask])  # Compute the loss solely based on the training nodes.
    loss.backward()  # Derive gradients.
    optimizer.step()  # Update parameters based on gradients.
    
    # Evaluate on the validation set (no gradients needed).
    model.eval()
    with torch.no_grad():
        val_out = model(data.x[data.val_mask])
        val_loss = criterion(val_out, data.y[data.val_mask])
    return loss, val_loss

for epoch in range(1, 201):
    loss, val_loss = train()
    print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}, valid loss: {val_loss:.4f}')

def test():
    model.eval()
    out = model(data.x[data.test_mask])
    pred = out.argmax(dim=1)  # Use the class with highest probability.
    test_correct = pred == data.y[data.test_mask]  # Check against ground-truth labels.
    test_acc = int(test_correct.sum()) / int(data.test_mask.sum())  # Derive ratio of correct predictions.
    return test_acc

test_acc = test()
print(f'Test Accuracy: {test_acc:.4f}')

# result: 59%
  • As the results show, the MLP's accuracy on this task is low, and the gap between the training and validation losses is large.

  • A likely cause is overfitting: only about 5% of the nodes (140 of 2708) carry training labels.

  • Raw features: [figure omitted]
  • With MLP: [figure omitted]
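
The two figures were presumably produced by projecting the node representations down to 2-D and coloring points by class; a minimal sketch using t-SNE (an assumption — the post does not name its projection method):

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def visualize(h, color):
    # Project representations to 2-D with t-SNE and color points by class label.
    z = TSNE(n_components=2).fit_transform(h.detach().cpu().numpy())
    plt.figure(figsize=(10, 10))
    plt.scatter(z[:, 0], z[:, 1], s=70, c=color.cpu().numpy(), cmap='Set2')
    plt.show()

model.eval()
out = model(data.x)            # MLP representations for all nodes
visualize(out, color=data.y)   # compare with visualize(data.x, color=data.y) for the raw features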

GCN Model

  • Paper: Semi-Supervised Classification with Graph Convolutional Networks
  • The motivation comes from image convolution, where the kernel is multiplied element-wise with the image at each position and the products are summed:

    $(I * K)(i, j) = \sigma\left(\sum_{m}\sum_{n} I(i+m,\, j+n)\, K(m, n)\right)$

  • where $I$ is the image, $K$ is the convolution kernel, and $\sigma$ is the activation function.
  • GCN generalizes this idea to graphs; its layer-wise propagation rule (from the paper) is

    $H^{(l+1)} = \sigma\left(\hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} H^{(l)} W^{(l)}\right), \qquad \hat{A} = A + I_N,$

    where $\hat{D}$ is the degree matrix of $\hat{A}$, $H^{(l)}$ holds the node representations at layer $l$, and $W^{(l)}$ is a learnable weight matrix.
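
To make the propagation rule concrete, here is a minimal dense-matrix sketch on a hypothetical 3-node graph (GCNConv below implements the same rule with sparse operations):

import torch

# Toy undirected graph with 3 nodes and edges 0-1, 1-2 (dense adjacency).
A = torch.tensor([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
X = torch.randn(3, 4)   # node features (3 nodes, 4 dims)
W = torch.randn(4, 2)   # learnable weight matrix

A_hat = A + torch.eye(3)                             # add self-loops: A + I_N
D_inv_sqrt = torch.diag(A_hat.sum(dim=1).pow(-0.5))  # D_hat^{-1/2}
H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W          # one GCN layer, pre-activation
print(H.shape)  # torch.Size([3, 2])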
# Build the GCN node classification network
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, hidden_channels):
        super(GCN, self).__init__()
        torch.manual_seed(12345)
        self.conv1 = GCNConv(dataset.num_features, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, dataset.num_classes)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.conv2(x, edge_index)
        return x

model = GCN(hidden_channels=16)
print(model)
# result: 81.4%
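
The 81.4% figure comes from a training loop that mirrors the MLP's, with one key difference: the forward pass must run on the full graph (features plus edge_index), and the loss is then masked to the training nodes. A minimal sketch under those assumptions:

model = GCN(hidden_channels=16)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

for epoch in range(1, 201):
    model.train()
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)   # full-graph forward pass
    loss = criterion(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()

model.eval()
pred = model(data.x, data.edge_index).argmax(dim=1)
test_correct = pred[data.test_mask] == data.y[data.test_mask]
print(f'Test Accuracy: {int(test_correct.sum()) / int(data.test_mask.sum()):.4f}')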
  • Raw features: [figure omitted]
  • With GCN: [figure omitted]

GAT Model

  • Paper: Graph Attention Networks
  • Mathematically, the attention coefficient between node $i$ and neighbor $j$ is

    $\alpha_{ij} = \dfrac{\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}[\mathbf{W}\mathbf{h}_i \,\|\, \mathbf{W}\mathbf{h}_j]\right)\right)}{\sum_{k \in \mathcal{N}(i)} \exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}[\mathbf{W}\mathbf{h}_i \,\|\, \mathbf{W}\mathbf{h}_k]\right)\right)}$

    and the updated representation is $\mathbf{h}_i' = \sigma\left(\sum_{j \in \mathcal{N}(i)} \alpha_{ij} \mathbf{W}\mathbf{h}_j\right)$, where $\mathbf{W}$ is a shared linear transformation and $\mathbf{a}$ is a learnable attention vector.

  • In essence, this is neighborhood aggregation weighted by attention.
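
Before the full model, a minimal toy sketch of this attention computation for a single node (all tensors here are hypothetical):

import torch
import torch.nn.functional as F

# Hypothetical toy setup: node i with 3 neighbors, 4 input dims, 8 output dims.
W = torch.randn(8, 4)        # shared linear transformation
a = torch.randn(16)          # attention vector applied to [Wh_i || Wh_j]
h_i = torch.randn(4)         # features of node i
h_nbrs = torch.randn(3, 4)   # features of i's neighbors

Wh_i = W @ h_i                                                       # shape (8,)
Wh_j = h_nbrs @ W.T                                                  # shape (3, 8)
e = F.leaky_relu(torch.cat([Wh_i.expand(3, 8), Wh_j], dim=1) @ a)    # raw scores e_ij
alpha = torch.softmax(e, dim=0)   # normalized attention coefficients alpha_ij
h_i_new = alpha @ Wh_j            # attention-weighted aggregation (pre-activation)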
# Build the GAT node classification network (torch and F were imported above)
from torch_geometric.nn import GATConv

class GAT(torch.nn.Module):
    def __init__(self, hidden_channels):
        super(GAT, self).__init__()
        torch.manual_seed(12345)
        self.conv1 = GATConv(dataset.num_features, hidden_channels)
        self.conv2 = GATConv(hidden_channels, dataset.num_classes)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.conv2(x, edge_index)
        return x
# result: 73.8%
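
The 73.8% figure likewise comes from instantiating the model and training it with the same loop as the GCN (full-graph forward via model(data.x, data.edge_index), masked loss):

model = GAT(hidden_channels=16)
print(model)

The model above uses a single attention head per layer. GATConv also supports multi-head attention through its heads argument; a sketch of a multi-head variant (the heads value and dropout rate follow the GAT paper, not this post):

class MultiHeadGAT(torch.nn.Module):
    def __init__(self, hidden_channels, heads=8):
        super().__init__()
        torch.manual_seed(12345)
        # With the default concat=True, conv1's output has hidden_channels * heads dims.
        self.conv1 = GATConv(dataset.num_features, hidden_channels, heads=heads)
        self.conv2 = GATConv(hidden_channels * heads, dataset.num_classes, heads=1)

    def forward(self, x, edge_index):
        x = F.dropout(x, p=0.6, training=self.training)
        x = F.elu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.6, training=self.training)
        return self.conv2(x, edge_index)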
  • Raw features: [figure omitted]
  • With GAT: [figure omitted]

Summary

  • MLP: 59.0%, and the weakest clustering in the visualizations
  • GCN: 81.4%, with the clearest clustering in the visualizations
  • GAT: 73.8%
  • Homework to be filled in later~
