Graph Neural Networks (I) Graph Signal Processing and Graph Convolutional Networks (6): GCN in Practice


  • GCN in Practice
    • 1. Setup
    • 2. Data Preparation
    • 3. Graph Convolution Layer
    • 4. Model Definition
    • 5. Model Training
  • Complete Code
    • Notes on the Code
    • 1. Setup
    • 2. Data Preparation
    • 3. Graph Convolution Layer
    • 4. Model Definition
    • 5. Model Training

GCN in Practice


In this section we walk through a complete example of node classification with a GCN [1].

1. Setup

First, import the required packages:

import itertools
import os
import os.path as osp
import pickle
import urllib
from collections import namedtuple

import numpy as np
import scipy.sparse as sp
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
import torch.optim as optim
import matplotlib.pyplot as plt
%matplotlib inline

2. Data Preparation


We use the Cora dataset, which consists of 2708 papers and the 5429 citation edges between them. The papers are divided into 7 classes by topic: Neural Networks, Reinforcement Learning, Rule Learning, Probabilistic Methods, Genetic Algorithms, Theory, and Case Based. Each paper's feature vector is a 1433-dimensional bag-of-words encoding: each dimension corresponds to one word, with 1 meaning the word occurs in the paper and 0 meaning it does not.
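
To make the encoding concrete, here is a toy illustration with a hypothetical 5-word vocabulary (Cora's real vocabulary has 1433 words):

import numpy as np

# Hypothetical 5-word vocabulary, for illustration only
vocab = ["graph", "neural", "network", "kernel", "bayes"]
# A paper that mentions "graph" and "network" but none of the other words:
feature = np.array([1, 0, 1, 0, 0])   # 1 = the word occurs in the paper, 0 = it does not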

We first define a class CoraData to preprocess the data: downloading it, normalizing it, and caching it for reuse. The resulting data object has the following parts:

  • x: node features, with shape 2708×1433;
  • y: node labels, covering 7 classes;
  • adjacency: the adjacency matrix, with shape 2708×2708, stored as scipy.sparse.coo_matrix;
  • train_mask, val_mask, test_mask: boolean masks of length equal to the number of nodes, used to split the training, validation, and test sets.

The code is as follows:

Data = namedtuple('Data', ['x', 'y', 'adjacency',
                           'train_mask', 'val_mask', 'test_mask'])

def tensor_from_numpy(x, device):
    return torch.from_numpy(x).to(device)

class CoraData(object):
    filenames = ["ind.cora.{}".format(name) for name in
                 ['x', 'tx', 'allx', 'y', 'ty', 'ally', 'graph', 'test.index']]

    def __init__(self, data_root="../data/cora", rebuild=False):
        """Cora数据,包括数据下载,处理,加载等功能
        当数据的缓存文件存在时,将使用缓存文件,否则将下载、进行处理,并缓存到磁盘

        处理之后的数据可以通过属性 .data 获得,它将返回一个数据对象,包括如下几部分:
            * x: 节点的特征,维度为 2708 * 1433,类型为 np.ndarray
            * y: 节点的标签,总共包括7个类别,类型为 np.ndarray
            * adjacency: 邻接矩阵,维度为 2708 * 2708,类型为 scipy.sparse.coo.coo_matrix
            * train_mask: 训练集掩码向量,维度为 2708,当节点属于训练集时,相应位置为True,否则False
            * val_mask: 验证集掩码向量,维度为 2708,当节点属于验证集时,相应位置为True,否则False
            * test_mask: 测试集掩码向量,维度为 2708,当节点属于测试集时,相应位置为True,否则False

        Args:
        -------
            data_root: string, optional
                存放数据的目录,原始数据路径: ../data/cora
                缓存数据路径: {data_root}/ch5_cached.pkl
            rebuild: boolean, optional
                是否需要重新构建数据集,当设为True时,如果存在缓存数据也会重建数据

        """
        self.data_root = data_root
        save_file = osp.join(self.data_root, "ch5_cached.pkl")
        if osp.exists(save_file) and not rebuild:
            print("Using Cached file: {}".format(save_file))
            self._data = pickle.load(open(save_file, "rb"))
        else:
            self._data = self.process_data()
            with open(save_file, "wb") as f:
                pickle.dump(self.data, f)
            print("Cached file: {}".format(save_file))
    
    @property
    def data(self):
        """返回Data数据对象,包括x, y, adjacency, train_mask, val_mask, test_mask"""
        return self._data


Next, the raw downloaded data is processed into its normalized form, stored as matrices. This processing step is also defined inside the CoraData class.
    def process_data(self):
        """
        Process the data to obtain node features and labels, the adjacency
        matrix, and the train/validation/test splits.
        Adapted from: https://github.com/rusty1s/pytorch_geometric
        """
        print("Process data ...")
        _, tx, allx, y, ty, ally, graph, test_index = [self.read_data(
            osp.join(self.data_root, name)) for name in self.filenames]
        train_index = np.arange(y.shape[0])
        val_index = np.arange(y.shape[0], y.shape[0] + 500)
        sorted_test_index = sorted(test_index)

        x = np.concatenate((allx, tx), axis=0)
        y = np.concatenate((ally, ty), axis=0).argmax(axis=1)

        x[test_index] = x[sorted_test_index]
        y[test_index] = y[sorted_test_index]
        num_nodes = x.shape[0]

        train_mask = np.zeros(num_nodes, dtype=bool)
        val_mask = np.zeros(num_nodes, dtype=bool)
        test_mask = np.zeros(num_nodes, dtype=bool)
        train_mask[train_index] = True
        val_mask[val_index] = True
        test_mask[test_index] = True
        adjacency = self.build_adjacency(graph)
        print("Node's feature shape: ", x.shape)
        print("Node's label shape: ", y.shape)
        print("Adjacency's shape: ", adjacency.shape)
        print("Number of training nodes: ", train_mask.sum())
        print("Number of validation nodes: ", val_mask.sum())
        print("Number of test nodes: ", test_mask.sum())

        return Data(x=x, y=y, adjacency=adjacency,
                    train_mask=train_mask, val_mask=val_mask, test_mask=test_mask)

    @staticmethod
    def build_adjacency(adj_dict):
        """根据邻接表创建邻接矩阵"""
        edge_index = []
        num_nodes = len(adj_dict)
        for src, dst in adj_dict.items():
            edge_index.extend([src, v] for v in dst)
            edge_index.extend([v, src] for v in dst)
        # Remove duplicate edges
        edge_index = list(k for k, _ in itertools.groupby(sorted(edge_index)))
        edge_index = np.asarray(edge_index)
        adjacency = sp.coo_matrix((np.ones(len(edge_index)), 
                                   (edge_index[:, 0], edge_index[:, 1])),
                    shape=(num_nodes, num_nodes), dtype="float32")
        return adjacency

    @staticmethod
    def read_data(path):
        """使用不同的方式读取原始数据以进一步处理"""
        name = osp.basename(path)
        if name == "ind.cora.test.index":
            out = np.genfromtxt(path, dtype="int64")
            return out
        else:
            out = pickle.load(open(path, "rb"), encoding="latin1")
            out = out.toarray() if hasattr(out, "toarray") else out
            return out

    @staticmethod
    def normalization(adjacency):
        """Compute L = D^-0.5 * (A+I) * D^-0.5"""
        adjacency += sp.eye(adjacency.shape[0])    # add self-loops
        degree = np.array(adjacency.sum(1))
        d_hat = sp.diags(np.power(degree, -0.5).flatten())
        return d_hat.dot(adjacency).dot(d_hat).tocoo()
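
As a quick sanity check (a minimal sketch on a toy 3-node path graph, not part of the original pipeline), we can inspect what normalization produces:

# Path graph 0-1-2; after adding self-loops, entry (i,j) becomes a_ij / sqrt(d_i * d_j)
A = sp.coo_matrix((np.ones(4), ([0, 1, 1, 2], [1, 0, 2, 1])), shape=(3, 3))
L = CoraData.normalization(A)
print(np.round(L.toarray(), 3))
# [[0.5   0.408 0.   ]
#  [0.408 0.333 0.408]
#  [0.    0.408 0.5  ]]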

3. Graph Convolution Layer


The GCN layer is defined according to the formula $X' = \sigma(\tilde{L}_\text{sym} X W)$, and the code implements this definition directly. Note that the adjacency matrix is sparse, so sparse matrix multiplication is used to improve efficiency. The code is as follows:

class GraphConvolution(nn.Module):
    def __init__(self, input_dim, output_dim, use_bias=True):
        """图卷积:L*X*\theta

        Args:
        ----------
            input_dim: int
                节点输入特征的维度
            output_dim: int
                输出特征维度
            use_bias : bool, optional
                是否使用偏置
        """
        super(GraphConvolution, self).__init__()
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.use_bias = use_bias
        self.weight = nn.Parameter(torch.Tensor(input_dim, output_dim))
        if self.use_bias:
            self.bias = nn.Parameter(torch.Tensor(output_dim))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self):
        init.kaiming_uniform_(self.weight)
        if self.use_bias:
            init.zeros_(self.bias)

    def forward(self, adjacency, input_feature):
        """邻接矩阵是稀疏矩阵,因此在计算时使用稀疏矩阵乘法
    
        Args: 
        -------
            adjacency: torch.sparse.FloatTensor
                邻接矩阵
            input_feature: torch.Tensor
                输入特征
        """
        support = torch.mm(input_feature, self.weight)
        output = torch.sparse.mm(adjacency, support)
        if self.use_bias:
            output += self.bias
        return output

    def __repr__(self):
        return self.__class__.__name__ + ' (' \
            + str(self.input_dim) + ' -> ' \
            + str(self.output_dim) + ')'
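
A quick shape check for this layer (a hypothetical example with random inputs, only to exercise the sparse multiplication path):

# 4 nodes, 8 input features, 3 output features; edges 0-1 and 2-3 in both directions
conv = GraphConvolution(input_dim=8, output_dim=3)
feat = torch.randn(4, 8)
idx = torch.tensor([[0, 1, 2, 3],
                    [1, 0, 3, 2]])
adj = torch.sparse_coo_tensor(idx, torch.ones(4), (4, 4))
print(conv)                   # GraphConvolution (8 -> 3)
print(conv(adj, feat).shape)  # torch.Size([4, 3])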

4. Model Definition


With the data and the GCN layer in place, we can build the model and train it. We define a two-layer GCN: the input dimension is 1433, the hidden dimension is set to 16, and the final GCN layer maps to the 7 output classes. ReLU is used as the activation function. The model is built as follows:

class GcnNet(nn.Module):
    """
    定义一个包含两层GraphConvolution的模型
    """
    def __init__(self, input_dim=1433):
        super(GcnNet, self).__init__()
        self.gcn1 = GraphConvolution(input_dim, 16)
        self.gcn2 = GraphConvolution(16, 7)
    
    def forward(self, adjacency, feature):
        h = F.relu(self.gcn1(adjacency, feature))
        logits = self.gcn2(adjacency, h)
        return logits
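
In equation form, the model's forward pass is

$Z = \mathrm{softmax}\big(\tilde{L}_\text{sym}\,\mathrm{ReLU}(\tilde{L}_\text{sym} X W^{(0)})\,W^{(1)}\big)$

where the final softmax is omitted in the code: GcnNet returns raw logits, and nn.CrossEntropyLoss applies log-softmax internally during training.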

5. Model Training


The hyperparameters and data preparation code are as follows:

# Hyperparameters
LEARNING_RATE = 0.1
WEIGHT_DECAY = 5e-4
EPOCHS = 200
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
# Load the data and convert it to torch.Tensor
dataset = CoraData().data
node_feature = dataset.x / dataset.x.sum(1, keepdims=True)  # normalize features so each row sums to 1
tensor_x = tensor_from_numpy(node_feature, DEVICE)
tensor_y = tensor_from_numpy(dataset.y, DEVICE)
tensor_train_mask = tensor_from_numpy(dataset.train_mask, DEVICE)
tensor_val_mask = tensor_from_numpy(dataset.val_mask, DEVICE)
tensor_test_mask = tensor_from_numpy(dataset.test_mask, DEVICE)
normalize_adjacency = CoraData.normalization(dataset.adjacency)   # normalized adjacency matrix

num_nodes, input_dim = node_feature.shape
indices = torch.from_numpy(np.asarray([normalize_adjacency.row, 
                                       normalize_adjacency.col]).astype('int64')).long()
values = torch.from_numpy(normalize_adjacency.data.astype(np.float32))
tensor_adjacency = torch.sparse.FloatTensor(indices, values, 
                                            (num_nodes, num_nodes)).to(DEVICE)
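
Before the training loop can run, the model, loss function, and optimizer are instantiated (this block also appears in the complete code listing below):

# Model, loss, and optimizer
model = GcnNet(input_dim).to(DEVICE)
criterion = nn.CrossEntropyLoss().to(DEVICE)
optimizer = optim.Adam(model.parameters(),
                       lr=LEARNING_RATE,
                       weight_decay=WEIGHT_DECAY)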


With everything prepared, we can train the model following the usual neural network workflow. The code below records the training loss and validation accuracy as training progresses; after training, the model is evaluated on the test set.
# Main training loop
def train():
    loss_history = []
    val_acc_history = []
    model.train()
    train_y = tensor_y[tensor_train_mask]
    for epoch in range(EPOCHS):
        logits = model(tensor_adjacency, tensor_x)  # forward pass
        train_mask_logits = logits[tensor_train_mask]   # supervise only the training nodes
        loss = criterion(train_mask_logits, train_y)    # compute the loss
        optimizer.zero_grad()
        loss.backward()     # backpropagate to compute gradients
        optimizer.step()    # apply the gradient update
        train_acc, _, _ = test(tensor_train_mask)     # current accuracy on the training set
        val_acc, _, _ = test(tensor_val_mask)     # current accuracy on the validation set
        # Record loss and accuracy during training, for plotting
        loss_history.append(loss.item())
        val_acc_history.append(val_acc.item())
        print("Epoch {:03d}: Loss {:.4f}, TrainAcc {:.4}, ValAcc {:.4f}".format(
            epoch, loss.item(), train_acc.item(), val_acc.item()))
    
    return loss_history, val_acc_history
# Evaluation function
def test(mask):
    model.eval()
    with torch.no_grad():
        logits = model(tensor_adjacency, tensor_x)
        test_mask_logits = logits[mask]
        predict_y = test_mask_logits.max(1)[1]
        accuracy = torch.eq(predict_y, tensor_y[mask]).float().mean()
    return accuracy, test_mask_logits.cpu().numpy(), tensor_y[mask].cpu().numpy()


Calling the training and evaluation functions then produces log output like the following:
loss, val_acc = train()
test_acc, test_logits, test_label = test(tensor_test_mask)
print("Test accuracy: ", test_acc.item())
Epoch 000: Loss 1.9635, TrainAcc 0.15, ValAcc 0.1520
Epoch 001: Loss 1.9345, TrainAcc 0.2357, ValAcc 0.3580
Epoch 002: Loss 1.8967, TrainAcc 0.6643, ValAcc 0.5600
Epoch 003: Loss 1.8437, TrainAcc 0.7643, ValAcc 0.4760
Epoch 004: Loss 1.7783, TrainAcc 0.6929, ValAcc 0.4400
Epoch 005: Loss 1.6983, TrainAcc 0.75, ValAcc 0.4400
Epoch 006: Loss 1.5963, TrainAcc 0.8, ValAcc 0.5400
Epoch 007: Loss 1.4758, TrainAcc 0.8786, ValAcc 0.6580
Epoch 008: Loss 1.3422, TrainAcc 0.9214, ValAcc 0.7180
Epoch 009: Loss 1.2007, TrainAcc 0.9429, ValAcc 0.7360
Epoch 010: Loss 1.0576, TrainAcc 0.9643, ValAcc 0.7660
Epoch 011: Loss 0.9192, TrainAcc 0.9714, ValAcc 0.7900
Epoch 012: Loss 0.7939, TrainAcc 0.9714, ValAcc 0.7900
Epoch 013: Loss 0.6828, TrainAcc 0.9786, ValAcc 0.7860
Epoch 014: Loss 0.5869, TrainAcc 0.9857, ValAcc 0.7960
Epoch 015: Loss 0.5066, TrainAcc 0.9857, ValAcc 0.8040
Epoch 016: Loss 0.4419, TrainAcc 0.9857, ValAcc 0.8020
Epoch 017: Loss 0.3886, TrainAcc 0.9857, ValAcc 0.7960
Epoch 018: Loss 0.3473, TrainAcc 0.9857, ValAcc 0.8000
Epoch 019: Loss 0.3154, TrainAcc 0.9929, ValAcc 0.8060
Epoch 020: Loss 0.2907, TrainAcc 0.9929, ValAcc 0.8020
Epoch 021: Loss 0.2717, TrainAcc 1.0, ValAcc 0.8000
Epoch 022: Loss 0.2583, TrainAcc 1.0, ValAcc 0.7920
Epoch 023: Loss 0.2480, TrainAcc 1.0, ValAcc 0.7860
Epoch 024: Loss 0.2403, TrainAcc 1.0, ValAcc 0.7840
Epoch 025: Loss 0.2344, TrainAcc 1.0, ValAcc 0.7840
Epoch 026: Loss 0.2290, TrainAcc 1.0, ValAcc 0.7820
Epoch 027: Loss 0.2235, TrainAcc 1.0, ValAcc 0.7840
Epoch 028: Loss 0.2177, TrainAcc 1.0, ValAcc 0.7860
Epoch 029: Loss 0.2115, TrainAcc 1.0, ValAcc 0.7800
Epoch 030: Loss 0.2048, TrainAcc 1.0, ValAcc 0.7820
Epoch 031: Loss 0.1976, TrainAcc 1.0, ValAcc 0.7800
Epoch 032: Loss 0.1905, TrainAcc 1.0, ValAcc 0.7820
Epoch 033: Loss 0.1836, TrainAcc 1.0, ValAcc 0.7820
Epoch 034: Loss 0.1775, TrainAcc 1.0, ValAcc 0.7780
Epoch 035: Loss 0.1719, TrainAcc 1.0, ValAcc 0.7800
Epoch 036: Loss 0.1674, TrainAcc 1.0, ValAcc 0.7820
Epoch 037: Loss 0.1633, TrainAcc 1.0, ValAcc 0.7840
Epoch 038: Loss 0.1601, TrainAcc 1.0, ValAcc 0.7880
Epoch 039: Loss 0.1576, TrainAcc 1.0, ValAcc 0.7900
Epoch 040: Loss 0.1566, TrainAcc 1.0, ValAcc 0.7740
Epoch 041: Loss 0.1549, TrainAcc 1.0, ValAcc 0.7820
Epoch 042: Loss 0.1514, TrainAcc 1.0, ValAcc 0.7880
Epoch 043: Loss 0.1483, TrainAcc 1.0, ValAcc 0.7880
Epoch 044: Loss 0.1473, TrainAcc 1.0, ValAcc 0.7840
Epoch 045: Loss 0.1461, TrainAcc 1.0, ValAcc 0.7840
Epoch 046: Loss 0.1440, TrainAcc 1.0, ValAcc 0.7880
Epoch 047: Loss 0.1434, TrainAcc 1.0, ValAcc 0.7840
Epoch 048: Loss 0.1422, TrainAcc 1.0, ValAcc 0.7860
Epoch 049: Loss 0.1397, TrainAcc 1.0, ValAcc 0.7920
Epoch 050: Loss 0.1390, TrainAcc 1.0, ValAcc 0.7900
Epoch 051: Loss 0.1386, TrainAcc 1.0, ValAcc 0.7880
Epoch 052: Loss 0.1363, TrainAcc 1.0, ValAcc 0.7900
Epoch 053: Loss 0.1353, TrainAcc 1.0, ValAcc 0.7860
Epoch 054: Loss 0.1349, TrainAcc 1.0, ValAcc 0.7940
Epoch 055: Loss 0.1333, TrainAcc 1.0, ValAcc 0.7840
Epoch 056: Loss 0.1323, TrainAcc 1.0, ValAcc 0.7900
Epoch 057: Loss 0.1321, TrainAcc 1.0, ValAcc 0.7860
Epoch 058: Loss 0.1309, TrainAcc 1.0, ValAcc 0.7900
Epoch 059: Loss 0.1301, TrainAcc 1.0, ValAcc 0.7880
Epoch 060: Loss 0.1300, TrainAcc 1.0, ValAcc 0.7860
Epoch 061: Loss 0.1293, TrainAcc 1.0, ValAcc 0.7900
Epoch 062: Loss 0.1286, TrainAcc 1.0, ValAcc 0.7860
Epoch 063: Loss 0.1285, TrainAcc 1.0, ValAcc 0.7900
Epoch 064: Loss 0.1282, TrainAcc 1.0, ValAcc 0.7860
Epoch 065: Loss 0.1274, TrainAcc 1.0, ValAcc 0.7880
Epoch 066: Loss 0.1271, TrainAcc 1.0, ValAcc 0.7860
Epoch 067: Loss 0.1269, TrainAcc 1.0, ValAcc 0.7880
Epoch 068: Loss 0.1265, TrainAcc 1.0, ValAcc 0.7840
Epoch 069: Loss 0.1260, TrainAcc 1.0, ValAcc 0.7920
Epoch 070: Loss 0.1259, TrainAcc 1.0, ValAcc 0.7860
Epoch 071: Loss 0.1258, TrainAcc 1.0, ValAcc 0.7920
Epoch 072: Loss 0.1255, TrainAcc 1.0, ValAcc 0.7860
Epoch 073: Loss 0.1252, TrainAcc 1.0, ValAcc 0.7880
Epoch 074: Loss 0.1250, TrainAcc 1.0, ValAcc 0.7880
Epoch 075: Loss 0.1250, TrainAcc 1.0, ValAcc 0.7880
Epoch 076: Loss 0.1248, TrainAcc 1.0, ValAcc 0.7880
Epoch 077: Loss 0.1246, TrainAcc 1.0, ValAcc 0.7860
Epoch 078: Loss 0.1246, TrainAcc 1.0, ValAcc 0.7900
Epoch 079: Loss 0.1248, TrainAcc 1.0, ValAcc 0.7880
Epoch 080: Loss 0.1251, TrainAcc 1.0, ValAcc 0.7960
Epoch 081: Loss 0.1255, TrainAcc 1.0, ValAcc 0.7860
Epoch 082: Loss 0.1259, TrainAcc 1.0, ValAcc 0.7980
Epoch 083: Loss 0.1260, TrainAcc 1.0, ValAcc 0.7900
Epoch 084: Loss 0.1246, TrainAcc 1.0, ValAcc 0.7940
Epoch 085: Loss 0.1221, TrainAcc 1.0, ValAcc 0.7880
Epoch 086: Loss 0.1210, TrainAcc 1.0, ValAcc 0.7900
Epoch 087: Loss 0.1224, TrainAcc 1.0, ValAcc 0.7960
Epoch 088: Loss 0.1237, TrainAcc 1.0, ValAcc 0.7900
Epoch 089: Loss 0.1231, TrainAcc 1.0, ValAcc 0.7940
Epoch 090: Loss 0.1222, TrainAcc 1.0, ValAcc 0.7920
Epoch 091: Loss 0.1225, TrainAcc 1.0, ValAcc 0.7940
Epoch 092: Loss 0.1231, TrainAcc 1.0, ValAcc 0.7980
Epoch 093: Loss 0.1232, TrainAcc 1.0, ValAcc 0.7960
Epoch 094: Loss 0.1225, TrainAcc 1.0, ValAcc 0.7920
Epoch 095: Loss 0.1217, TrainAcc 1.0, ValAcc 0.7920
Epoch 096: Loss 0.1213, TrainAcc 1.0, ValAcc 0.7960
Epoch 097: Loss 0.1216, TrainAcc 1.0, ValAcc 0.7980
Epoch 098: Loss 0.1218, TrainAcc 1.0, ValAcc 0.7920
Epoch 099: Loss 0.1212, TrainAcc 1.0, ValAcc 0.7960
Epoch 100: Loss 0.1208, TrainAcc 1.0, ValAcc 0.7960
Epoch 101: Loss 0.1212, TrainAcc 1.0, ValAcc 0.7920
Epoch 102: Loss 0.1213, TrainAcc 1.0, ValAcc 0.7940
Epoch 103: Loss 0.1211, TrainAcc 1.0, ValAcc 0.7960
Epoch 104: Loss 0.1208, TrainAcc 1.0, ValAcc 0.8000
Epoch 105: Loss 0.1207, TrainAcc 1.0, ValAcc 0.7940
Epoch 106: Loss 0.1207, TrainAcc 1.0, ValAcc 0.7980
Epoch 107: Loss 0.1206, TrainAcc 1.0, ValAcc 0.7980
Epoch 108: Loss 0.1205, TrainAcc 1.0, ValAcc 0.7900
Epoch 109: Loss 0.1204, TrainAcc 1.0, ValAcc 0.7960
Epoch 110: Loss 0.1202, TrainAcc 1.0, ValAcc 0.7960
Epoch 111: Loss 0.1200, TrainAcc 1.0, ValAcc 0.7960
Epoch 112: Loss 0.1200, TrainAcc 1.0, ValAcc 0.7980
Epoch 113: Loss 0.1200, TrainAcc 1.0, ValAcc 0.7960
Epoch 114: Loss 0.1199, TrainAcc 1.0, ValAcc 0.7980
Epoch 115: Loss 0.1196, TrainAcc 1.0, ValAcc 0.7960
Epoch 116: Loss 0.1195, TrainAcc 1.0, ValAcc 0.7980
Epoch 117: Loss 0.1194, TrainAcc 1.0, ValAcc 0.7980
Epoch 118: Loss 0.1194, TrainAcc 1.0, ValAcc 0.7960
Epoch 119: Loss 0.1193, TrainAcc 1.0, ValAcc 0.8000
Epoch 120: Loss 0.1192, TrainAcc 1.0, ValAcc 0.7960
Epoch 121: Loss 0.1191, TrainAcc 1.0, ValAcc 0.8020
Epoch 122: Loss 0.1190, TrainAcc 1.0, ValAcc 0.8000
Epoch 123: Loss 0.1189, TrainAcc 1.0, ValAcc 0.7960
Epoch 124: Loss 0.1188, TrainAcc 1.0, ValAcc 0.7960
Epoch 125: Loss 0.1187, TrainAcc 1.0, ValAcc 0.7940
Epoch 126: Loss 0.1186, TrainAcc 1.0, ValAcc 0.7960
Epoch 127: Loss 0.1186, TrainAcc 1.0, ValAcc 0.7940
Epoch 128: Loss 0.1185, TrainAcc 1.0, ValAcc 0.7960
Epoch 129: Loss 0.1183, TrainAcc 1.0, ValAcc 0.7940
Epoch 130: Loss 0.1183, TrainAcc 1.0, ValAcc 0.7960
Epoch 131: Loss 0.1182, TrainAcc 1.0, ValAcc 0.7940
Epoch 132: Loss 0.1181, TrainAcc 1.0, ValAcc 0.7960
Epoch 133: Loss 0.1180, TrainAcc 1.0, ValAcc 0.7960
Epoch 134: Loss 0.1179, TrainAcc 1.0, ValAcc 0.7960
Epoch 135: Loss 0.1178, TrainAcc 1.0, ValAcc 0.7960
Epoch 136: Loss 0.1177, TrainAcc 1.0, ValAcc 0.7980
Epoch 137: Loss 0.1177, TrainAcc 1.0, ValAcc 0.7980
Epoch 138: Loss 0.1176, TrainAcc 1.0, ValAcc 0.7960
Epoch 139: Loss 0.1175, TrainAcc 1.0, ValAcc 0.7960
Epoch 140: Loss 0.1174, TrainAcc 1.0, ValAcc 0.7980
Epoch 141: Loss 0.1174, TrainAcc 1.0, ValAcc 0.7980
Epoch 142: Loss 0.1173, TrainAcc 1.0, ValAcc 0.7960
Epoch 143: Loss 0.1172, TrainAcc 1.0, ValAcc 0.7940
Epoch 144: Loss 0.1172, TrainAcc 1.0, ValAcc 0.7980
Epoch 145: Loss 0.1172, TrainAcc 1.0, ValAcc 0.7940
Epoch 146: Loss 0.1172, TrainAcc 1.0, ValAcc 0.8000
Epoch 147: Loss 0.1174, TrainAcc 1.0, ValAcc 0.7960
Epoch 148: Loss 0.1178, TrainAcc 1.0, ValAcc 0.7980
Epoch 149: Loss 0.1189, TrainAcc 1.0, ValAcc 0.7960
Epoch 150: Loss 0.1209, TrainAcc 1.0, ValAcc 0.8040
Epoch 151: Loss 0.1224, TrainAcc 1.0, ValAcc 0.7920
Epoch 152: Loss 0.1214, TrainAcc 1.0, ValAcc 0.8020
Epoch 153: Loss 0.1163, TrainAcc 1.0, ValAcc 0.7880
Epoch 154: Loss 0.1141, TrainAcc 1.0, ValAcc 0.7920
Epoch 155: Loss 0.1161, TrainAcc 1.0, ValAcc 0.7980
Epoch 156: Loss 0.1165, TrainAcc 1.0, ValAcc 0.7860
Epoch 157: Loss 0.1171, TrainAcc 1.0, ValAcc 0.7940
Epoch 158: Loss 0.1179, TrainAcc 1.0, ValAcc 0.8020
Epoch 159: Loss 0.1169, TrainAcc 1.0, ValAcc 0.7840
Epoch 160: Loss 0.1180, TrainAcc 1.0, ValAcc 0.7900
Epoch 161: Loss 0.1188, TrainAcc 1.0, ValAcc 0.8000
Epoch 162: Loss 0.1163, TrainAcc 1.0, ValAcc 0.7920
Epoch 163: Loss 0.1159, TrainAcc 1.0, ValAcc 0.7880
Epoch 164: Loss 0.1175, TrainAcc 1.0, ValAcc 0.7880
Epoch 165: Loss 0.1163, TrainAcc 1.0, ValAcc 0.8060
Epoch 166: Loss 0.1154, TrainAcc 1.0, ValAcc 0.7920
Epoch 167: Loss 0.1168, TrainAcc 1.0, ValAcc 0.7880
Epoch 168: Loss 0.1167, TrainAcc 1.0, ValAcc 0.8040
Epoch 169: Loss 0.1159, TrainAcc 1.0, ValAcc 0.7960
Epoch 170: Loss 0.1165, TrainAcc 1.0, ValAcc 0.7900
Epoch 171: Loss 0.1165, TrainAcc 1.0, ValAcc 0.8000
Epoch 172: Loss 0.1160, TrainAcc 1.0, ValAcc 0.7960
Epoch 173: Loss 0.1164, TrainAcc 1.0, ValAcc 0.8000
Epoch 174: Loss 0.1163, TrainAcc 1.0, ValAcc 0.8000
Epoch 175: Loss 0.1158, TrainAcc 1.0, ValAcc 0.7940
Epoch 176: Loss 0.1161, TrainAcc 1.0, ValAcc 0.8020
Epoch 177: Loss 0.1162, TrainAcc 1.0, ValAcc 0.7980
Epoch 178: Loss 0.1158, TrainAcc 1.0, ValAcc 0.7980
Epoch 179: Loss 0.1159, TrainAcc 1.0, ValAcc 0.7980
Epoch 180: Loss 0.1161, TrainAcc 1.0, ValAcc 0.7980
Epoch 181: Loss 0.1160, TrainAcc 1.0, ValAcc 0.8000
Epoch 182: Loss 0.1157, TrainAcc 1.0, ValAcc 0.7940
Epoch 183: Loss 0.1159, TrainAcc 1.0, ValAcc 0.7980
Epoch 184: Loss 0.1158, TrainAcc 1.0, ValAcc 0.8020
Epoch 185: Loss 0.1157, TrainAcc 1.0, ValAcc 0.7920
Epoch 186: Loss 0.1159, TrainAcc 1.0, ValAcc 0.7940
Epoch 187: Loss 0.1159, TrainAcc 1.0, ValAcc 0.8040
Epoch 188: Loss 0.1157, TrainAcc 1.0, ValAcc 0.7980
Epoch 189: Loss 0.1157, TrainAcc 1.0, ValAcc 0.7980
Epoch 190: Loss 0.1158, TrainAcc 1.0, ValAcc 0.7920
Epoch 191: Loss 0.1156, TrainAcc 1.0, ValAcc 0.8020
Epoch 192: Loss 0.1155, TrainAcc 1.0, ValAcc 0.7980
Epoch 193: Loss 0.1156, TrainAcc 1.0, ValAcc 0.7940
Epoch 194: Loss 0.1156, TrainAcc 1.0, ValAcc 0.8020
Epoch 195: Loss 0.1156, TrainAcc 1.0, ValAcc 0.7980
Epoch 196: Loss 0.1156, TrainAcc 1.0, ValAcc 0.7980
Epoch 197: Loss 0.1156, TrainAcc 1.0, ValAcc 0.7980
Epoch 198: Loss 0.1155, TrainAcc 1.0, ValAcc 0.8000
Epoch 199: Loss 0.1155, TrainAcc 1.0, ValAcc 0.7960
Test accuracy:  0.8040000200271606


We then visualize how the loss and validation accuracy evolve, shown in figure (a) below, and run t-SNE dimensionality reduction on the last layer's output for the test nodes, giving the result in figure (b). The plotting helper plot_loss_with_acc is defined in the complete code listing below.
plot_loss_with_acc(loss, val_acc)
# t-SNE visualization of the test-set outputs
from sklearn.manifold import TSNE
tsne = TSNE()
out = tsne.fit_transform(test_logits)
fig = plt.figure()
for i in range(7):
    indices = test_label == i
    x, y = out[indices].T
    plt.scatter(x, y, label=str(i))
plt.legend()


(a) Training loss and validation accuracy


(b) t-SNE visualization

Complete Code

Notes on the Code

1. This code was developed for Google Colab; the notebook is at https://colab.research.google.com/github/FighterLYL/GraphNeuralNetwork/blob/master/chapter5/GCN_Cora.ipynb;
2. Before running the code, download the data from https://github.com/kimiyoung/planetoid/tree/master/data (or see the optional download sketch after this list);
3. In Google Colab, create the folder data/cora;
4. After the download is unpacked, copy all the data files from its data folder into data/cora, as shown below:

[Screenshot: the ind.cora.* data files placed under data/cora]
5. Then simply run the notebook step by step; no code changes are required.
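
If you prefer to fetch the files programmatically rather than copying them by hand, here is a minimal sketch, assuming the raw.githubusercontent.com mirror of the planetoid repository is reachable (the file list matches CoraData.filenames):

import os
import urllib.request

# Assumed mirror URL for the planetoid raw data files
BASE = "https://raw.githubusercontent.com/kimiyoung/planetoid/master/data"
os.makedirs("data/cora", exist_ok=True)
for name in ['x', 'tx', 'allx', 'y', 'ty', 'ally', 'graph', 'test.index']:
    fname = "ind.cora.{}".format(name)  # e.g. ind.cora.x
    urllib.request.urlretrieve("{}/{}".format(BASE, fname),
                               os.path.join("data/cora", fname))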

1. Setup

import itertools
import os
import os.path as osp
import pickle
import urllib
from collections import namedtuple

import numpy as np
import scipy.sparse as sp
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
import torch.optim as optim
import matplotlib.pyplot as plt
%matplotlib inline

2. Data Preparation

Data = namedtuple('Data', ['x', 'y', 'adjacency',
                           'train_mask', 'val_mask', 'test_mask'])

def tensor_from_numpy(x, device):
    return torch.from_numpy(x).to(device)

class CoraData(object):
    filenames = ["ind.cora.{}".format(name) for name in
                 ['x', 'tx', 'allx', 'y', 'ty', 'ally', 'graph', 'test.index']]

    def __init__(self, data_root="../data/cora", rebuild=False):
        """Cora数据,包括数据下载,处理,加载等功能
        当数据的缓存文件存在时,将使用缓存文件,否则将下载、进行处理,并缓存到磁盘

        处理之后的数据可以通过属性 .data 获得,它将返回一个数据对象,包括如下几部分:
            * x: 节点的特征,维度为 2708 * 1433,类型为 np.ndarray
            * y: 节点的标签,总共包括7个类别,类型为 np.ndarray
            * adjacency: 邻接矩阵,维度为 2708 * 2708,类型为 scipy.sparse.coo.coo_matrix
            * train_mask: 训练集掩码向量,维度为 2708,当节点属于训练集时,相应位置为True,否则False
            * val_mask: 验证集掩码向量,维度为 2708,当节点属于验证集时,相应位置为True,否则False
            * test_mask: 测试集掩码向量,维度为 2708,当节点属于测试集时,相应位置为True,否则False

        Args:
        -------
            data_root: string, optional
                存放数据的目录,原始数据路径: ../data/cora
                缓存数据路径: {data_root}/ch5_cached.pkl
            rebuild: boolean, optional
                是否需要重新构建数据集,当设为True时,如果存在缓存数据也会重建数据

        """
        self.data_root = data_root
        save_file = osp.join(self.data_root, "ch5_cached.pkl")
        if osp.exists(save_file) and not rebuild:
            print("Using Cached file: {}".format(save_file))
            self._data = pickle.load(open(save_file, "rb"))
        else:
            self._data = self.process_data()
            with open(save_file, "wb") as f:
                pickle.dump(self.data, f)
            print("Cached file: {}".format(save_file))
    
    @property
    def data(self):
        """返回Data数据对象,包括x, y, adjacency, train_mask, val_mask, test_mask"""
        return self._data

    def process_data(self):
        """
        Process the data to obtain node features and labels, the adjacency
        matrix, and the train/validation/test splits.
        Adapted from: https://github.com/rusty1s/pytorch_geometric
        """
        print("Process data ...")
        _, tx, allx, y, ty, ally, graph, test_index = [self.read_data(
            osp.join(self.data_root, name)) for name in self.filenames]
        train_index = np.arange(y.shape[0])
        val_index = np.arange(y.shape[0], y.shape[0] + 500)
        sorted_test_index = sorted(test_index)

        x = np.concatenate((allx, tx), axis=0)
        y = np.concatenate((ally, ty), axis=0).argmax(axis=1)

        x[test_index] = x[sorted_test_index]
        y[test_index] = y[sorted_test_index]
        num_nodes = x.shape[0]

        train_mask = np.zeros(num_nodes, dtype=bool)
        val_mask = np.zeros(num_nodes, dtype=bool)
        test_mask = np.zeros(num_nodes, dtype=bool)
        train_mask[train_index] = True
        val_mask[val_index] = True
        test_mask[test_index] = True
        adjacency = self.build_adjacency(graph)
        print("Node's feature shape: ", x.shape)
        print("Node's label shape: ", y.shape)
        print("Adjacency's shape: ", adjacency.shape)
        print("Number of training nodes: ", train_mask.sum())
        print("Number of validation nodes: ", val_mask.sum())
        print("Number of test nodes: ", test_mask.sum())

        return Data(x=x, y=y, adjacency=adjacency,
                    train_mask=train_mask, val_mask=val_mask, test_mask=test_mask)

    @staticmethod
    def build_adjacency(adj_dict):
        """根据邻接表创建邻接矩阵"""
        edge_index = []
        num_nodes = len(adj_dict)
        for src, dst in adj_dict.items():
            edge_index.extend([src, v] for v in dst)
            edge_index.extend([v, src] for v in dst)
        # Remove duplicate edges
        edge_index = list(k for k, _ in itertools.groupby(sorted(edge_index)))
        edge_index = np.asarray(edge_index)
        adjacency = sp.coo_matrix((np.ones(len(edge_index)), 
                                   (edge_index[:, 0], edge_index[:, 1])),
                    shape=(num_nodes, num_nodes), dtype="float32")
        return adjacency

    @staticmethod
    def read_data(path):
        """使用不同的方式读取原始数据以进一步处理"""
        name = osp.basename(path)
        if name == "ind.cora.test.index":
            out = np.genfromtxt(path, dtype="int64")
            return out
        else:
            out = pickle.load(open(path, "rb"), encoding="latin1")
            out = out.toarray() if hasattr(out, "toarray") else out
            return out

    @staticmethod
    def normalization(adjacency):
        """计算 L=D^-0.5 * (A+I) * D^-0.5"""
        adjacency += sp.eye(adjacency.shape[0])    # 增加自连接
        degree = np.array(adjacency.sum(1))
        d_hat = sp.diags(np.power(degree, -0.5).flatten())
        return d_hat.dot(adjacency).dot(d_hat).tocoo()

3. Graph Convolution Layer

class GraphConvolution(nn.Module):
    def __init__(self, input_dim, output_dim, use_bias=True):
        """图卷积:L*X*\theta

        Args:
        ----------
            input_dim: int
                节点输入特征的维度
            output_dim: int
                输出特征维度
            use_bias : bool, optional
                是否使用偏置
        """
        super(GraphConvolution, self).__init__()
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.use_bias = use_bias
        self.weight = nn.Parameter(torch.Tensor(input_dim, output_dim))
        if self.use_bias:
            self.bias = nn.Parameter(torch.Tensor(output_dim))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self):
        init.kaiming_uniform_(self.weight)
        if self.use_bias:
            init.zeros_(self.bias)

    def forward(self, adjacency, input_feature):
        """邻接矩阵是稀疏矩阵,因此在计算时使用稀疏矩阵乘法
    
        Args: 
        -------
            adjacency: torch.sparse.FloatTensor
                邻接矩阵
            input_feature: torch.Tensor
                输入特征
        """
        support = torch.mm(input_feature, self.weight)
        output = torch.sparse.mm(adjacency, support)
        if self.use_bias:
            output += self.bias
        return output

    def __repr__(self):
        return self.__class__.__name__ + ' (' \
            + str(self.input_dim) + ' -> ' \
            + str(self.output_dim) + ')'

4. Model Definition

class GcnNet(nn.Module):
    """
    定义一个包含两层GraphConvolution的模型
    """
    def __init__(self, input_dim=1433):
        super(GcnNet, self).__init__()
        self.gcn1 = GraphConvolution(input_dim, 16)
        self.gcn2 = GraphConvolution(16, 7)
    
    def forward(self, adjacency, feature):
        h = F.relu(self.gcn1(adjacency, feature))
        logits = self.gcn2(adjacency, h)
        return logits

5. Model Training

# Hyperparameters
LEARNING_RATE = 0.1
WEIGHT_DECAY = 5e-4
EPOCHS = 200
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
# Load the data and convert it to torch.Tensor
dataset = CoraData().data
node_feature = dataset.x / dataset.x.sum(1, keepdims=True)  # normalize features so each row sums to 1
tensor_x = tensor_from_numpy(node_feature, DEVICE)
tensor_y = tensor_from_numpy(dataset.y, DEVICE)
tensor_train_mask = tensor_from_numpy(dataset.train_mask, DEVICE)
tensor_val_mask = tensor_from_numpy(dataset.val_mask, DEVICE)
tensor_test_mask = tensor_from_numpy(dataset.test_mask, DEVICE)
normalize_adjacency = CoraData.normalization(dataset.adjacency)   # normalized adjacency matrix

num_nodes, input_dim = node_feature.shape
indices = torch.from_numpy(np.asarray([normalize_adjacency.row, 
                                       normalize_adjacency.col]).astype('int64')).long()
values = torch.from_numpy(normalize_adjacency.data.astype(np.float32))
tensor_adjacency = torch.sparse.FloatTensor(indices, values, 
                                            (num_nodes, num_nodes)).to(DEVICE)
# Model, loss, and optimizer
model = GcnNet(input_dim).to(DEVICE)
criterion = nn.CrossEntropyLoss().to(DEVICE)
optimizer = optim.Adam(model.parameters(), 
                       lr=LEARNING_RATE, 
                       weight_decay=WEIGHT_DECAY)
# Main training loop
def train():
    loss_history = []
    val_acc_history = []
    model.train()
    train_y = tensor_y[tensor_train_mask]
    for epoch in range(EPOCHS):
        logits = model(tensor_adjacency, tensor_x)  # forward pass
        train_mask_logits = logits[tensor_train_mask]   # supervise only the training nodes
        loss = criterion(train_mask_logits, train_y)    # compute the loss
        optimizer.zero_grad()
        loss.backward()     # backpropagate to compute gradients
        optimizer.step()    # apply the gradient update
        train_acc, _, _ = test(tensor_train_mask)     # current accuracy on the training set
        val_acc, _, _ = test(tensor_val_mask)     # current accuracy on the validation set
        # Record loss and accuracy during training, for plotting
        loss_history.append(loss.item())
        val_acc_history.append(val_acc.item())
        print("Epoch {:03d}: Loss {:.4f}, TrainAcc {:.4}, ValAcc {:.4f}".format(
            epoch, loss.item(), train_acc.item(), val_acc.item()))
    
    return loss_history, val_acc_history
# Evaluation function
def test(mask):
    model.eval()
    with torch.no_grad():
        logits = model(tensor_adjacency, tensor_x)
        test_mask_logits = logits[mask]
        predict_y = test_mask_logits.max(1)[1]
        accuracy = torch.eq(predict_y, tensor_y[mask]).float().mean()
    return accuracy, test_mask_logits.cpu().numpy(), tensor_y[mask].cpu().numpy()
def plot_loss_with_acc(loss_history, val_acc_history):
    fig = plt.figure()
    ax1 = fig.add_subplot(111)
    ax1.plot(range(len(loss_history)), loss_history,
             c=np.array([255, 71, 90]) / 255.)
    plt.ylabel('Loss')
    
    ax2 = fig.add_subplot(111, sharex=ax1, frameon=False)
    ax2.plot(range(len(val_acc_history)), val_acc_history,
             c=np.array([79, 179, 255]) / 255.)
    ax2.yaxis.tick_right()
    ax2.yaxis.set_label_position("right")
    plt.ylabel('ValAcc')
    
    plt.xlabel('Epoch')
    plt.title('Training Loss & Validation Accuracy')
    plt.show()
loss, val_acc = train()
test_acc, test_logits, test_label = test(tensor_test_mask)
print("Test accuarcy: ", test_acc.item())
plot_loss_with_acc(loss, val_acc)
# t-SNE visualization of the test-set outputs
from sklearn.manifold import TSNE
tsne = TSNE()
out = tsne.fit_transform(test_logits)
fig = plt.figure()
for i in range(7):
    indices = test_label == i
    x, y = out[indices].T
    plt.scatter(x, y, label=str(i))
plt.legend()

References


  [1] 刘忠雨, 李彦霖, 周洋. 《深入浅出图神经网络: GNN原理解析》. 机械工业出版社.
