[Literature Notes] How Powerful are Graph Neural Networks?

Abstract

What do the author(s) want to know (motivation)?

  • the representational properties and limitations of GNNs
  • a theoretical framework for analyzing the expressive power of GNNs
  • Graph Isomorphism Network (GIN), an architecture provably as powerful as the WL test
  • experimental validation of the theory

1 Introduction

Insight: a GNN can be as powerful as the WL test if its aggregation scheme is sufficiently expressive and can model injective functions over multisets.

2 Preliminaries

GNN

The $k$-th layer of a GNN:

$a_v^{(k)} = \text{AGGREGATE}^{(k)}\big(\{ h_u^{(k-1)} : u \in \mathcal{N}(v) \}\big), \quad h_v^{(k)} = \text{COMBINE}^{(k)}\big( h_v^{(k-1)}, a_v^{(k)} \big)$

  • GraphSAGE (pooling variant): $a_v^{(k)} = \mathrm{MAX}\big(\{ \mathrm{ReLU}(W \cdot h_u^{(k-1)}),\ \forall u \in \mathcal{N}(v) \}\big)$
    • W: learnable matrix
    • COMBINE: concatenation followed by a linear mapping, $W \cdot [\, h_v^{(k-1)},\ a_v^{(k)} \,]$
  • GCN (AGGREGATE and COMBINE integrated): $h_v^{(k)} = \mathrm{ReLU}\big( W \cdot \mathrm{MEAN}\{ h_u^{(k-1)},\ \forall u \in \mathcal{N}(v) \cup \{v\} \}\big)$
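The GraphSAGE and GCN aggregators above can be sketched in NumPy with dense adjacency matrices; `graphsage_agg` and `gcn_agg` are hypothetical helper names, and the dense layout (plus the assumption that every node has at least one neighbor) is a simplification for illustration.

```python
import numpy as np

def graphsage_agg(H, A, W):
    """GraphSAGE pooling aggregator: a_v = elementwise max of ReLU(W h_u) over u in N(v)."""
    Z = np.maximum(H @ W, 0.0)  # ReLU(W h_u) for every node u
    # Elementwise max over each node's neighbors (assumes no isolated nodes)
    return np.stack([Z[A[v].astype(bool)].max(axis=0) for v in range(len(A))])

def gcn_agg(H, A):
    """GCN-style aggregator: elementwise mean over {v} ∪ N(v) (before the linear map + ReLU)."""
    A_hat = A + np.eye(len(A))  # add self-loops
    return A_hat @ H / A_hat.sum(axis=1, keepdims=True)

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle graph
H = np.eye(3)       # one-hot node features
W = np.ones((3, 2))
print(graphsage_agg(H, A, W).shape, gcn_agg(H, A).shape)  # (3, 2) (3, 3)
```

On the triangle with one-hot features, the GCN mean over each closed neighborhood is uniform, which already hints at how mean aggregation washes out structural detail.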

WL test (Weisfeiler-Lehman test): decides whether two graphs are topologically identical by iteratively aggregating the labels of each node's neighbors and hashing the aggregated multiset into a new label.
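The iterative relabeling can be sketched as 1-WL color refinement run on the disjoint union of the two graphs (so colors share one palette); `wl_distinguishes` is a hypothetical helper name, and a fixed round count stands in for iterating until the coloring stabilizes.

```python
from collections import Counter

def wl_distinguishes(adj1, adj2, rounds=3):
    """Return True if 1-WL color refinement tells the two graphs apart.

    adj1, adj2: dicts mapping node -> list of neighbor nodes.
    """
    # Run refinement on the disjoint union so both graphs share one palette.
    adj = {("a", v): [("a", u) for u in nbrs] for v, nbrs in adj1.items()}
    adj.update({("b", v): [("b", u) for u in nbrs] for v, nbrs in adj2.items()})
    colors = {v: 0 for v in adj}  # uniform initial labels
    for _ in range(rounds):
        # New signature = (own color, sorted multiset of neighbor colors)
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # Compress signatures to small integers (an injective "hash")
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}
    hist = lambda g: Counter(c for (side, _), c in colors.items() if side == g)
    return hist("a") != hist("b")

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # 3-cycle
path = {0: [1], 1: [0, 2], 2: [1]}            # 3-node path
print(wl_distinguishes(triangle, path))  # True: their color histograms differ
```

Note that 1-WL can fail on some non-isomorphic pairs (e.g., certain regular graphs); this is exactly the upper bound the paper places on message-passing GNNs.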

3 Theory framework: overview

A maximally powerful GNN would never map two different neighborhoods, i.e., multisets of feature vectors, to the same representation. This means its aggregation scheme must be injective. Thus, we abstract a GNN's aggregation scheme as a class of functions over multisets that its neural networks can represent, and analyze whether they are able to represent injective multiset functions.

4 Building powerful graph neural networks

GIN

GIN's update uses sum aggregation followed by an MLP, which together can represent injective multiset functions:

$h_v^{(k)} = \mathrm{MLP}^{(k)}\Big( \big(1 + \epsilon^{(k)}\big) \cdot h_v^{(k-1)} + \sum_{u \in \mathcal{N}(v)} h_u^{(k-1)} \Big)$
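The GIN update, MLP applied to $(1+\epsilon)$ times a node's own feature plus the sum of its neighbors' features, can be sketched as follows; the random 2-layer MLP is a stand-in assumption for illustration, not the paper's trained network.

```python
import numpy as np

def gin_layer(H, A, eps, mlp):
    """One GIN update: h_v = MLP((1 + eps) * h_v + sum of neighbor features).

    H: (n, d) node features; A: (n, n) 0/1 adjacency matrix without self-loops.
    """
    return mlp((1.0 + eps) * H + A @ H)  # A @ H is the neighbor sum

# Hypothetical 2-layer ReLU MLP with fixed random weights (an assumption)
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 4))
mlp = lambda X: np.maximum(X @ W1, 0.0) @ W2

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle graph
H = np.eye(3, 4)  # simple one-hot node features
out = gin_layer(H, A, eps=0.0, mlp=mlp)
print(out.shape)  # (3, 4)
```

With eps = 0 this is exactly the GIN-0 variant used in the experiments; sum aggregation (unlike mean or max) preserves multiplicities, which is what makes the multiset function injective.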

5 Less powerful but still interesting GNNs

- 1-layer perceptrons are not sufficient: even with sum aggregation, a single linear layer followed by a nonlinearity can map different multisets to the same output, which is why GIN uses an MLP

- structures that confuse mean and max-pooling

  • both aggregators are permutation invariant but not injective: they map some distinct neighborhoods (e.g., multisets differing only in element multiplicities) to the same representation

- mean learns distributions

The mean aggregator may perform well if, for the task, the statistical and distributional information in the graph is more important than the exact structure.

- max-pooling learns sets with distinct elements: it ignores multiplicities, so it suits tasks where identifying representative elements matters more than the exact structure or distribution
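A minimal numeric illustration of the failure cases above: mean and max collapse two different multisets of 1-d neighbor features, while sum keeps them apart.

```python
# Two different neighborhoods, written as multisets of 1-d node features
m1 = [1.0, 1.0, 2.0, 2.0]
m2 = [1.0, 2.0]

mean_equal = sum(m1) / len(m1) == sum(m2) / len(m2)  # both means are 1.5
max_equal = max(m1) == max(m2)                       # both maxima are 2.0
sum_equal = sum(m1) == sum(m2)                       # 6.0 vs 3.0

print(mean_equal, max_equal, sum_equal)  # True True False
```

Here mean and max cannot tell the two neighborhoods apart, but sum can, which matches the ranking of aggregator expressiveness argued in the paper.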

7 Experiments

model parameters

  • 5 GNN layers (including the input layer); all MLPs have 2 layers
  • batch normalization
  • Adam, lr = 0.01, decay by 0.5 every 50 epochs
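The learning-rate schedule above (start at 0.01, halve every 50 epochs) can be written as a small closed-form step decay; `lr_at` is a hypothetical helper name for illustration.

```python
def lr_at(epoch, base_lr=0.01, factor=0.5, step=50):
    """Step decay: multiply the learning rate by `factor` every `step` epochs."""
    return base_lr * factor ** (epoch // step)

print(lr_at(0), lr_at(49), lr_at(50), lr_at(100))
```

So epochs 0-49 train at 0.01, epochs 50-99 at 0.005, and so on.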

Results: GIN-0 (with $\epsilon$ fixed to 0) fits the training data as well as GIN-$\epsilon$ and achieves the strongest test accuracy, slightly but consistently outperforming GIN-$\epsilon$ and the less powerful GNN variants.
