Weekly Paper Roundup: Knowledge Graphs, Graph Neural Networks, and More (Part 1)

Models

Tsinghua's Sun Maosong group: 35 must-read papers for getting started with GNNs

Higher-Order

  • [Graph Neural Networks] MixHop: Higher-Order Graph Convolution Architectures via Sparsified Neighborhood Mixing
    #Open-source paper# #ICML 2019# This paper comes from the group of graph-analysis heavyweight Bryan Perozzi and was published at ICML 2019. The authors argue that existing GNN models cannot learn a general class of neighborhood-mixing representations, and propose MixHop, which mixes information from neighbors of different orders to learn node representations. MixHop is highly efficient and has a solid theoretical grounding (a connection between MixHop and delta operators). By mixing information across orders, MixHop also mitigates the GNN over-smoothing problem to some extent (over-smoothing: as the number of layers grows, the node representations learned by a GNN lose their discriminative power). Finally, the authors validate MixHop through extensive experiments: on Citeseer, Cora, and Pubmed, MixHop delivers substantial gains, and even without an attention mechanism for weighting neighbors it clearly outperforms GAT. A minimal sketch of the layer follows below.
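
Since the layer's core operation is easy to state, here is a minimal sketch of a MixHop-style layer in plain NumPy, concatenating sigma(A^p X W_p) over several adjacency powers p. It is an illustration rather than the authors' reference implementation; the dense adjacency, the power set {0, 1, 2}, and the toy dimensions are assumptions made for readability.

```python
# Minimal MixHop-style layer sketch (assumptions: dense adjacency, powers {0,1,2}).
import numpy as np

def normalize_adjacency(adj):
    """Symmetrically normalize an adjacency matrix with self-loops: D^-1/2 (A+I) D^-1/2."""
    adj = adj + np.eye(adj.shape[0])
    deg_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return adj * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]

def mixhop_layer(adj_norm, features, weights, powers=(0, 1, 2)):
    """Mix neighborhood information of different orders by concatenating
    ReLU(A^p X W_p) over the chosen powers p."""
    outputs = []
    propagated = features                       # A^0 X
    for p in range(max(powers) + 1):
        if p in powers:
            h = propagated @ weights[p]         # A^p X W_p
            outputs.append(np.maximum(h, 0.0))  # ReLU
        propagated = adj_norm @ propagated      # advance to A^(p+1) X
    return np.concatenate(outputs, axis=1)      # different orders side by side

# Toy usage: 4 nodes, 3 input features, 2 hidden units per power.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W = {p: rng.normal(size=(3, 2)) for p in (0, 1, 2)}
H = mixhop_layer(normalize_adjacency(adj), X, W)
print(H.shape)  # (4, 6): 2 units per power, 3 powers concatenated
```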

Reasoning

  • 《OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs》(ACL 2019) GitHub:

Recommendation

  • 《DKN: Deep Knowledge-Aware Network for News Recommendation》(WWW 2018)
  • 《Reinforced Negative Sampling for Recommendation with Exposure Data》(IJCAI 2019) GitHub:

Semantic Segmentation

  • 《Real-time Localized Style Transfer with Semantic Segmentation》(ICCV 2019) GitHub
  • 《CenterMask: Real-Time Anchor-Free Instance Segmentation》(2019) GitHub:

Classification

  • 《DropEdge: Towards Deep Graph Convolutional Networks on Node Classification》
  • 《Aspect-based Sentiment Classification with Aspect-specific Graph Convolutional Networks》(EMNLP 2019)
  • 《Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One》(2019) GitHub:
  • 《Self-Attention: A Better Building Block for Sentiment Analysis Neural Network Classifiers》(WASSA 2018) GitHub: (sentiment analysis)
  • 《Learning Norms from Stories: A Prior for Value Aligned Agents》S Frazier, M S A Nahian, M Riedl, B Harrison [Georgia Institute of Technology & University of Kentucky] (2019)
  • 《Twin Auxiliary Classifiers GAN》(NeurIPS 2019) GitHub:

Entity Linking

  • 《Context-aware Entity Linking with Attentive Neural Networks on Wikidata Knowledge Graph》I O Mulang, K Singh, A Vyas, S Shekarpour, A Sakor, M E Vidal, S Auer, J Lehmann [Cerence GmbH & TIB & University of Dayton & Fraunhofer IAIS] (2019)
  • [Roundup of recent trends in entity linking, disambiguation, and representation] 'Recent trend (EMNLP'19, CoNLL'19, ICLR'19 and others)' by izuna385 GitHub:

Entity Recognition

  • 《TENER: Adapting Transformer Encoder for Named Entity Recognition》

Clustering

  • 《Self-Supervised Learning by Cross-Modal Audio-Video Clustering》H Alwassel, D Mahajan, L Torresani, B Ghanem, D Tran [King Abdullah University of Science and Technology (KAUST) & Facebook AI] (2019)
  • 《TaxoGen: Unsupervised Topic Taxonomy Construction by Adaptive Term Embedding and Clustering》(KDD 2018) GitHub:

Aggregation

  • 《Multiview Aggregation for Learning Category-Specific Shape Reconstruction》(NeurIPS 2019)
  • 《Single Image Deraining using a Recurrent Multi-scale Aggregation and Enhancement Network》(ICME 2019) GitHub:

Reinforcement Learning

  • DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning
  • 《Cooperative Reasoning on Knowledge Graph and Corpus: A Multi-agent Reinforcement Learning Approach》
  • 《Learning To Reach Goals Without Reinforcement Learning》D Ghosh, A Gupta, J Fu, A Reddy, C Devine, B Eysenbach, S Levine [UC Berkeley & CMU] (2019)

Relation/Social Bias/Propagation

  • 《Inducing Relational Knowledge from BERT》Z Bouraoui, J Camacho-Collados, S Schockaert [CRIL - CNRS & Cardiff University] (2019)
  • 《Measuring Social Bias in Knowledge Graph Embeddings》
  • *《Learning Human Objectives by Evaluating Hypothetical Behavior》S Reddy, A D. Dragan, S Levine, S Legg, J Leike [UC Berkeley & DeepMind] (2019) (touches on the famous trolley problem)
  • [Attention-based relation prediction in knowledge graphs] Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs

Alignment

  • [Cross-lingual knowledge graph alignment via a graph matching neural network] Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network
    #Paper commentary# #Open-source paper# This paper is from ACL 2019. By introducing graph convolutional networks, it substantially improves the accuracy of entity alignment across cross-lingual knowledge graphs. Its main contributions are threefold: 1) It constructs topic entity graphs, which pass information between neighboring entities so that each node vector encodes its multi-hop neighborhood and preserves as much of the knowledge graph's structural information as possible, and it recasts entity alignment as a graph matching problem. 2) It builds the graph matching model with graph convolutional networks, computing similarity in the matching layer with a multi-perspective cosine matching function; experiments show that the matching layer is essential and that global information matters for entity alignment as much as local contextual information. 3) It verifies that, when handling relations in the knowledge graph, keeping only their direction and ignoring their labels improves both the model's efficiency and its accuracy. A sketch of the matching function follows below.
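
To make the multi-perspective cosine matching step concrete, here is a hedged sketch of the general technique: each perspective is a learned element-wise weighting applied to both node vectors before taking a cosine similarity, yielding one match score per perspective. This is not the paper's released code; the embedding dimension and the number of perspectives are made-up values.

```python
# Sketch of multi-perspective cosine matching between two node embeddings.
import numpy as np

def multi_perspective_cosine(v1, v2, perspectives):
    """For each perspective weight vector w_k, compute cosine(w_k * v1, w_k * v2).

    v1, v2:        node embeddings of shape (d,)
    perspectives:  weight matrix of shape (l, d), one row per perspective
    returns:       match vector of shape (l,)
    """
    a = perspectives * v1          # (l, d): re-weight v1 under each perspective
    b = perspectives * v2          # (l, d): re-weight v2 the same way
    num = (a * b).sum(axis=1)
    denom = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8
    return num / denom

# Toy usage: match one node from each topic entity graph under 4 perspectives.
rng = np.random.default_rng(0)
d, l = 8, 4
v_source = rng.normal(size=d)
v_target = rng.normal(size=d)
W = rng.normal(size=(l, d))
print(multi_perspective_cosine(v_source, v_target, W))  # (4,) match scores
```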

Math Help (for when you need it)

  • 《Dynamic Convolution: Attention over Convolution Kernels》Y Chen, X Dai, M Liu, D Chen, L Yuan, Z Liu [Microsoft] (2019)
  • 《Practical Deep Learning with Bayesian Principles》(NeurIPS 2019) GitHub:Practical Deep Learning with Bayesian Principles
  • 《Physics-Informed Neural Networks for Multiphysics Data Assimilation with Application to Subsurface Transport》Q He, D Barajas-Solano, G Tartakovsky, A M. Tartakovsky [Pacific Northwest National Laboratory Richland & INTERA Incorporated] (2019)
  • 《Why ADAM Beats SGD for Attention Models》J Zhang, S P Karimireddy, A Veit, S Kim, S J Reddi, S Kumar, S Sra [MIT & Swiss Federal Institute of Technology Lausanne & Google Research] (2019)
  • 《Manifold Markov chain Monte Carlo methods for Bayesian inference in a wide class of diffusion models》M M. Graham, A H. Thiery, A Beskos [National University of Singapore & University College London] (2019)

Something interesting

attention, connectivity, understanding… or anything else graph-related

  • 《Linear Mode Connectivity and the Lottery Ticket Hypothesis》J Frankle, G K Dziugaite, D M. Roy, M Carbin [MIT CSAIL & Element AI & University of Toronto] (2019)
  • 《Singing Synthesis: with a little help from my attention》
  • 《Extending Machine Language Models toward Human-Level Language Understanding》J L. McClelland, F Hill, M Rudolph, J Baldridge, H Schütze [Stanford University & DeepMind & Bosch Center for Artificial Intelligence & Google Research & LMU Munich] (2019)
  • 《Actional-Structural Graph Convolutional Networks for Skeleton-based Action Recognition》(CVPR 2019) GitHub:
  • [Bridging the gap between neural and symbolic representations with a new architecture based on neural symbols] 《Next-generation architectures bridge gap between neural and symbolic representations with neural symbols | Microsoft Research》
  • 《MetaInit: Initializing learning by learning to initialize》Y N. Dauphin, S S. Schoenholz [Google AI] (2019)
  • 《Machine Unlearning》L Bourtoule, V Chandrasekaran, C Choquette-Choo, H Jia, A Travers, B Zhang, D Lie, N Papernot [University of Toronto & University of Wisconsin-Madison] (2019)
  • 《Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders》(2019) GitHub:
  • 《Adversarial Attacks on Neural Networks for Graph Data》(KDD 2018) GitHub
  • 《An interpretable probabilistic machine learning method for heterogeneous longitudinal studies》J Timonen, H Mannerström, A Vehtari, H Lähdesmäki [Aalto University] (2019)
  • 《Meta-Learning without Memorization》M Yin, G Tucker, M Zhou, S Levine, C Finn [UT Austin & Google Research & UC Berkeley] (2019)
  • 《Compositional Fairness Constraints for Graph Embeddings》(ICML 2019) GitHub:
  • 《Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders》(2019) GitHub:
  • Recommended read: “Is PageRank All You Need for Scalable Graph Neural Networks?” Although it is only a workshop paper, the idea is interesting: it implements a GNN via Personalized PageRank, and the results are quite good. The idea actually has a lot in common with ProNE, which Jie Zhang from our group published this year. A small sketch of the idea follows below.
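
Since the core trick of PageRank-style GNNs is to decouple node-wise prediction from graph propagation, here is a minimal sketch in that spirit (APPNP-style): a per-node predictor produces logits, and an approximate Personalized PageRank power iteration smooths them over the graph. This is an illustration under stated assumptions rather than the paper's method verbatim; the teleport probability, iteration count, and toy graph are made-up values.

```python
# Sketch: smooth per-node predictions with approximate Personalized PageRank.
import numpy as np

def normalize_adjacency(adj):
    """Symmetrically normalize the adjacency with self-loops."""
    adj = adj + np.eye(adj.shape[0])
    deg_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return adj * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]

def ppr_propagate(adj_norm, local_logits, alpha=0.1, num_iters=10):
    """Approximate Z = alpha * (I - (1 - alpha) * A_hat)^-1 * H by power iteration:
    Z <- (1 - alpha) * A_hat @ Z + alpha * H."""
    z = local_logits
    for _ in range(num_iters):
        z = (1.0 - alpha) * adj_norm @ z + alpha * local_logits
    return z

# Toy usage: per-node logits from any feature-only predictor (here random),
# smoothed over a 4-node path graph.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))                 # pretend these are class logits
Z = ppr_propagate(normalize_adjacency(adj), H)
print(Z.shape)  # (4, 3): propagated logits per node
```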
