NLP 2020 Top-Conference Paper Roundup: Have You Done Your Reading Today?

Currently skimming papers at a furious pace.
I have found the direction I am interested in, so this list will probably no longer be updated.
Top-conference papers curated by an expert

Table of Contents

  • Classic foundational NLP papers:
  • Foundational image captioning papers:
  • NeurIPS 2020:
  • ACL 2020:
    • Best Paper (1)
    • Honorable Mentions (5)
    • Best Theme Paper (1)
    • Best Demo Paper (1)
    • Pretraining / Language Models
    • Information Extraction
    • Relation Extraction
    • Event Extraction
    • Text Generation (Non-Dialogue)
      • Summarization
      • Creative Writing
      • New Evaluation Models
      • New Loss Functions
      • Image Captioning
      • Pretrained Models
    • Core Tasks (Mostly NER)
    • Knowledge Graphs
    • Graph Convolutional Networks

Classic foundational NLP papers:

[Guided paper reading] 4. Machine translation: Neural Machine Translation by Jointly Learning to Align and Translate
[Guided paper reading] 8. Text classification: Convolutional Neural Networks for Sentence Classification

Foundational image captioning papers:

⚡ [Image captioning] Show and Tell: A Neural Image Caption Generator

NeurIPS 2020:

NeurIPS proceedings
A Graph Similarity for Deep Learning

ACL 2020:

ACL 2020 paper roundup
ACL 2020 best papers

Best Paper (1)

  1. Beyond Accuracy: Behavioral Testing of NLP Models with CheckList
    Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin and Sameer Singh
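CheckList evaluates NLP models with behavioral tests (templated Minimum Functionality Tests, invariance tests, and directional expectation tests) rather than held-out accuracy alone. As a loose illustration of the first two test types, here is a minimal sketch; `model_predict` and the templates are hypothetical stand-ins, not the paper's actual tooling:

```python
# Sketch of CheckList-style behavioral testing for a sentiment classifier.
# `model_predict` is a hypothetical placeholder mapping sentences to labels.

def model_predict(sentences):
    return ["pos" for _ in sentences]  # plug in a real model here

# MFT (Minimum Functionality Test): templated inputs with known gold labels.
names = ["Anna", "Bob"]
mft_cases = ([(f"{n} loved the movie.", "pos") for n in names]
             + [(f"{n} hated the movie.", "neg") for n in names])
preds = model_predict([s for s, _ in mft_cases])
mft_failures = [s for (s, gold), p in zip(mft_cases, preds) if p != gold]

# INV (invariance test): a label-preserving perturbation (swapping one name
# for another) should not change the prediction.
inv_pairs = [("Anna loved the movie.", "Bob loved the movie.")]
inv_failures = [pair for pair in inv_pairs
                if model_predict([pair[0]])[0] != model_predict([pair[1]])[0]]

print(f"MFT failures: {len(mft_failures)}/{len(mft_cases)}")
print(f"INV failures: {len(inv_failures)}/{len(inv_pairs)}")
```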

Honorable Mentions (5)

  1. Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
    What kinds of tasks need pretraining.

  2. Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics
    Re-evaluating how automatic machine translation evaluation metrics are themselves evaluated.

  3. How Can We Accelerate Progress Towards Human-like Linguistic Generalization?

  4. Torch-Struct: Deep Structured Prediction Library
    A deep structured prediction library (seems unrelated to my own direction; setting it aside for later).
    Author: Alexander Rush

  5. Prta: A System to Support the Analysis of Propaganda Techniques in the News
    A system to support the analysis of propaganda techniques in the news.
    Authors: Giovanni Da San Martino, Shaden Shaar, Yifan Zhang, Seunghak Yu, Alberto Barrón-Cedeño and Preslav Nakov
    Analysis of propaganda and persuasion techniques: Prta makes online readers aware of propaganda by automatically detecting both the text fragments in which propaganda techniques are being used and the type of technique in use.
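To make that description concrete, here is a minimal sketch of the input/output it implies; the function, the placeholder logic, and the technique labels are illustrative assumptions, not Prta's actual API:

```python
from typing import List, Tuple

# Illustrative technique labels; the real system covers a larger inventory.
TECHNIQUES = ["loaded_language", "name_calling", "exaggeration", "flag_waving"]

def detect_propaganda(article: str) -> List[Tuple[int, int, str]]:
    """Return (start, end, technique) character spans flagged in the article.

    Placeholder logic: a real system runs a trained span tagger; this just
    flags one hard-coded cue word to show the output shape."""
    spans = []
    cue = "disaster"
    i = article.find(cue)
    if i != -1:
        spans.append((i, i + len(cue), "loaded_language"))
    return spans

print(detect_propaganda("Critics called the plan a disaster for the nation."))
```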

Best Theme Paper (1)

Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data
The paper is by Emily M. Bender of the University of Washington and Alexander Koller of Saarland University.

Best Demo Paper (1)

Paper title: GAIA: A Fine-grained Multimedia Knowledge Extraction System
The best demo paper award went to Manling Li, Alireza Zareian, Ying Lin, Xiaoman Pan, Spencer Whitehead, Brian Chen, Bo Wu, Heng Ji, Shih-Fu Chang, Clare Voss, Daniel Napierski and Marjorie Freedman of the University of Illinois, Columbia University and the U.S. Army Research Laboratory.

Pretraining / Language Models

  1. Adaptive Compression of Word Embeddings
    Yeachan Kim, Kang-Min Kim and SangKeun Lee
  2. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
    Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov and Luke Zettlemoyer
  3. BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance
    Timo Schick and Hinrich Schütze
  4. CluBERT: A Cluster-Based Approach for Learning Sense Distributions in Multiple Languages
    Tommaso Pasini, Federico Scozzafava and Bianca Scarlini
  5. Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
    Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey and Noah A. Smith
  6. Emerging Cross-lingual Structure in Pretrained Language Models
    Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer and Veselin Stoyanov
  7. Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach
    Wenyu Du, Zhouhan Lin, Yikang Shen, Timothy J. O’Donnell, Yoshua Bengio and Yue Zhang
  8. Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning
    Joongbo Shin, Yoonhyung Lee, Seunghyun Yoon and Kyomin Jung
  9. Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders
    Yu Duan, Canwen Xu, Jiaxin Pei, Jialong Han and Chenliang Li
  10. Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models
    Dan Iter, Kelvin Guu, Larry Lansing and Dan Jurafsky
  11. Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment
    Forrest Davis and Marten van Schijndel
  12. Roles and Utilization of Attention Heads in Transformer-based Neural Language Models
    Jae-young Jo and Sung-Hyon Myaeng
  13. Unsupervised Domain Clusters in Pretrained Language Models
    Roee Aharoni and Yoav Goldberg
  14. A Two-Stage Masked LM Method for Term Set Expansion
    Guy Kushilevitz, Shaul Markovitch and Yoav Goldberg
  15. Do you have the right scissors? Tailoring Pre-trained Language Models via Monte-Carlo Methods
    Ning Miao, Yuxuan Song, Hao Zhou and Lei Li
  16. Enhancing Pre-trained Chinese Character Representation with Word-aligned Attention
    Yanzeng Li, Bowen Yu, Xue Mengge and Tingwen Liu
  17. Glyph2Vec: Learning Chinese Out-of-Vocabulary Word Embedding from Glyphs
    Hong-You Chen, Sz-Han Yu and Shou-de Lin
  18. Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly
    Nora Kassner and Hinrich Schütze
  19. Overestimation of Syntactic Representation in Neural Language Models
    Jordan Kodner and Nitish Gupta
  20. Pretrained Transformers Improve Out-of-Distribution Robustness
    Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan and Dawn Song
  21. Stolen Probability: A Structural Weakness of Neural Language Models
    David Demeter, Gregory Kimmel and Doug Downey
  22. To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks
    Sinong Wang, Madian Khabsa and Hao Ma

Information Extraction

  1. A Joint Neural Model for Information Extraction with Global Features
    Ying Lin, Heng Ji, Fei Huang and Lingfei Wu
  2. Conditional Augmentation for Aspect Term Extraction via Masked Sequence-to-Sequence Generation
    Kun Li, Chengbo Chen, Xiaojun Quan, Qing Ling and Yan Song
  3. Discourse-Aware Neural Extractive Text Summarization
    Jiacheng Xu, Zhe Gan, Yu Cheng and Jingjing Liu
  4. Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction
    Raphael Schumann, Lili Mou, Yao Lu, Olga Vechtomova and Katja Markert
  5. Effective Inter-Clause Modeling for End-to-End Emotion-Cause Pair Extraction
    Penghui Wei, Jiahao Zhao and Wenji Mao
  6. Extractive Summarization as Text Matching
    Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu and Xuanjing Huang
  7. Heterogeneous Graph Neural Networks for Extractive Document Summarization
    Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu and Xuanjing Huang
  8. IMoJIE: Iterative Memory-Based Joint Open Information Extraction
    Keshav Kolluru, Samarth Aggarwal, Vipul Rathore, Mausam and Soumen Chakrabarti
  9. Representation Learning for Information Extraction from Form-like Documents
    Bodhisattwa Prasad Majumder, Navneet Potti, Sandeep Tata, James Bradley Wendt, Qi Zhao and Marc Najork
  10. SciREX: A Challenge Dataset for Document-Level Information Extraction
    Sarthak Jain, Madeleine van Zuylen, Hannaneh Hajishirzi and Iz Beltagy
  11. Transition-based Directed Graph Construction for Emotion-Cause Pair Extraction
    Chuang Fan, Chaofa Yuan, Jiachen Du, Lin Gui, Min Yang and Ruifeng Xu

Relation Extraction

  1. A Novel Cascade Binary Tagging Framework for Relational Triple Extraction
    Zhepei Wei, Jianlin Su, Yue Wang, Yuan Tian and Yi Chang
  2. Dialogue-Based Relation Extraction
    Dian Yu, Kai Sun, Claire Cardie and Dong Yu
  3. Exploiting the Syntax-Model Consistency for Neural Relation Extraction
    Amir Pouran Ben Veyseh, Franck Dernoncourt, Dejing Dou and Thien Huu Nguyen
  4. In Layman’s Terms: Semi-Open Relation Extraction from Scientific Texts
    Ruben Kruiper, Julian Vincent, Jessica Chen-Burger, Marc Desmulliez and Ioannis Konstas
  5. Probing Linguistic Features of Sentence-Level Representations in Relation Extraction
    Christoph Alt, Aleksandra Gabryszak and Leonhard Hennig
  6. Reasoning with Latent Structure Refinement for Document-Level Relation Extraction
    Guoshun Nan, Zhijiang Guo, Ivan Sekulic and Wei Lu
  7. Relabel the Noise: Joint Extraction of Entities and Relations via Cooperative Multiagents
    Daoyuan Chen, Yaliang Li, Kai Lei and Ying Shen
  8. ZeroShotCeres: Zero-Shot Relation Extraction from Semi-Structured Webpages
    Colin Lockard, Prashant Shiralkar, Xin Luna Dong and Hannaneh Hajishirzi
  9. Relation Extraction with Explanation
    Hamed Shahbazi, Xiaoli Fern, Reza Ghaeini and Prasad Tadepalli
  10. Revisiting Unsupervised Relation Extraction
    Thy Thy Tran, Phong Le and Sophia Ananiadou

Event Extraction

  1. Cross-media Structured Common Space for Multimedia Event Extraction
    Manling Li, Alireza Zareian, Qi Zeng, Spencer Whitehead, Di Lu, Heng Ji and Shih-Fu Chang
  2. Document-Level Event Role Filler Extraction using Multi-Granularity Contextualized Encoding
    Xinya Du and Claire Cardie
  3. Improving Event Detection via Open-domain Trigger Knowledge
    Meihan Tong, Bin Xu, Shuai Wang, Yixin Cao, Lei Hou, Juanzi Li and Jun Xie
  4. A Two-Step Approach for Implicit Event Argument Detection
    Zhisong Zhang, Xiang Kong, Zhengzhong Liu, Xuezhe Ma and Eduard Hovy
  5. Towards Open Domain Event Trigger Identification using Adversarial Domain Adaptation
    Aakanksha Naik and Carolyn Rose

Text Generation (Non-Dialogue)

Summarization

A Generative Model for Joint Natural Language Understanding and Generation
Builds the connection between NLU and NLG, proposing a generative model that fuses the two.

A Study of Non-autoregressive Model for Sequence Generation
Proposes the CoMMA analysis model to measure the strength of dependencies among target tokens.

Discourse as a Function of Event: Profiling Discourse Structure in News Articles around the Main Event
Proposes a model of news article structure organized around the main event, improving discourse analysis.

Bridging the Structural Gap Between Encoding and Decoding for Data-To-Text Generation
Proposes DualEnc (a dual-encoding model that incorporates graph structure) to address the structural mismatch between encoder and decoder.

Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction
Proposes a new unsupervised sentence summarization method based on hill-climbing search and word-level extraction; see the sketch below.
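A minimal sketch of that idea, assuming a first-choice hill-climbing search over a binary keep/drop mask; the `score` objective here is a trivial stand-in, whereas the paper's objective combines language-model fluency with similarity to the source:

```python
import random

def score(summary_words):
    # Stand-in objective: prefer summaries of roughly eight words. The
    # paper instead scores fluency (via an LM) plus source similarity.
    return -abs(len(summary_words) - 8)

def hill_climb_summarize(source_words, n_steps=1000, seed=0):
    rng = random.Random(seed)
    # Random initial keep/drop mask; preserving source word order keeps the
    # summary purely extractive at the word level.
    mask = [rng.random() < 0.3 for _ in source_words]
    best = score([w for w, keep in zip(source_words, mask) if keep])
    for _ in range(n_steps):
        i = rng.randrange(len(mask))   # propose toggling one word in or out
        mask[i] = not mask[i]
        s = score([w for w, keep in zip(source_words, mask) if keep])
        if s >= best:
            best = s                   # accept the improving (or equal) move
        else:
            mask[i] = not mask[i]      # reject: revert the toggle
    return [w for w, keep in zip(source_words, mask) if keep]

sentence = "the quick brown fox jumps over the lazy dog near the river bank"
print(hill_climb_summarize(sentence.split()))
```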

Improving Adversarial Text Generation by Modeling the Distant Future
Improves adversarial text generation by modeling features of the distant future.

Logical Natural Language Generation from Open-Domain Tables
Improves logical reasoning in text generation.

Simple and Effective Retrieve-Edit-Rerank Text Generation
A simple and effective retrieve-edit-rerank approach to text generation.

Neural Data-to-Text Generation via Jointly Learning the Segmentation and Correspondence

Towards Faithful Neural Table-to-Text Generation with Content-Matching Constraints

Creative Writing

Automatic Generation of Citation Texts in Scholarly Papers: A Pilot Study
Generates the citation text describing reference B as cited in paper A.

Automatic Poetry Generation from Prosaic Text
Automatically generates poetry from prose text.

New Evaluation Models

BLEURT: Learning Robust Metrics for Text Generation
BLEURT: learning robust metrics for text generation; see the usage sketch below.
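A minimal usage sketch, assuming the reference implementation from the google-research/bleurt repository is installed and a checkpoint such as BLEURT-20 has been downloaded (the checkpoint path below is illustrative):

```python
# Score candidate generations against references with a learned metric.
from bleurt import score

scorer = score.BleurtScorer("BLEURT-20")  # path to a downloaded checkpoint
references = ["The cat sat on the mat."]
candidates = ["A cat was sitting on the mat."]
scores = scorer.score(references=references, candidates=candidates)
print(scores)  # one float per (reference, candidate) pair; higher is better
```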

GPT-too: A language-model-first approach for AMR-to-text generation
Proposes an approach that combines a strong pretrained language model with cycle-consistency-based re-scoring.
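The cycle-consistency idea can be sketched independently of the paper's exact components: generate several candidate sentences from the input AMR, parse each candidate back to a graph, and keep the candidate whose re-parsed graph best matches the input. The three callables below are hypothetical stand-ins (e.g. a beam-search generator, an AMR parser, and a Smatch-style graph similarity):

```python
def rescore_by_cycle_consistency(input_amr, generate, parse_to_amr, similarity):
    """Return the candidate whose round-trip AMR parse best matches the input.

    `generate`, `parse_to_amr`, and `similarity` are placeholder callables:
    a text generator, an AMR parser, and a graph-similarity score."""
    candidates = generate(input_amr)  # e.g. several beam-search hypotheses
    return max(candidates, key=lambda c: similarity(input_amr, parse_to_amr(c)))
```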

New Loss Functions

Improved Natural Language Generation via Loss Truncation
Improves natural language generation via loss truncation; see the sketch below.
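Loss truncation keeps the usual log loss but discards the highest-loss fraction of examples in each batch, on the premise that these are likely noisy references. A minimal PyTorch-style sketch (the truncation fraction is an illustrative choice):

```python
import torch

def truncated_loss(per_example_loss: torch.Tensor, drop_frac: float = 0.1) -> torch.Tensor:
    """Average per-example losses after dropping the highest `drop_frac`.

    `per_example_loss` holds unreduced NLL values for one batch."""
    k = max(1, int(per_example_loss.numel() * (1.0 - drop_frac)))  # how many to keep
    kept, _ = torch.topk(per_example_loss, k, largest=False)       # k smallest losses
    return kept.mean()

batch_loss = torch.tensor([0.2, 0.5, 0.3, 7.9])    # one outlier, likely noise
print(truncated_loss(batch_loss, drop_frac=0.25))  # averages the three smallest
```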

Image Captioning

Clue: Cross-modal Coherence Modeling for Caption Generation
Proposes a new task of inferring coherence relations between images and text.

Cross-modal Language Generation using Pivot Stabilization for Web-scale Language Coverage
Combines caption generation with machine translation; proposes the PLuGS model, which at run time produces an English caption together with a caption in the target language X.

Pretrained Models

Few-Shot NLG with Pre-Trained Language Model
Few-shot NLG with a pretrained language model.

Two Birds, One Stone: A Simple, Unified Model for Text Generation from Structured and Unstructured Data
Hamidreza Shahidi, Ming Li and Jimmy Lin

Core Tasks (Mostly NER)

  1. A Joint Model for Document Segmentation and Segment Labeling
    Joe Barrow, Rajiv Jain, Vlad Morariu, Varun Manjunatha, Douglas Oard and Philip Resnik
  2. A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages
    Pedro Javier Ortiz Suárez, Laurent Romary and Benoît Sagot
  3. A Unified MRC Framework for Named Entity Recognition
    Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu and Jiwei Li
  4. An Effective Transition-based Model for Discontinuous NER
    Xiang Dai, Sarvnaz Karimi, Ben Hachey and Cecile Paris
  5. Bipartite Flat-Graph Network for Nested Named Entity Recognition
    Ying Luo and Hai Zhao
  6. Breaking Through the 80% Glass Ceiling: Raising the State of the Art in Word Sense Disambiguation by Incorporating Knowledge Graph Information
    Michele Bevilacqua and Roberto Navigli
  7. Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation
    Ning Ding, Dingkun Long, Guangwei Xu, Muhua Zhu, Pengjun Xie, Xiaobin Wang and Haitao Zheng
  8. Cross-Lingual Semantic Role Labeling with High-Quality Translated Training Corpus
    Hao Fei, Meishan Zhang and Donghong Ji
  9. Improving Chinese Word Segmentation with Wordhood Memory Networks
    Yuanhe Tian, Yan Song, Fei Xia, Tong Zhang and Yonggang Wang
  10. Joint Chinese Word Segmentation and Part-of-speech Tagging via Two-way Attentions of Auto-analyzed Knowledge
    Yuanhe Tian, Yan Song, Xiang Ao, Fei Xia, Xiaojun Quan, Tong Zhang and Yonggang Wang
  11. Learning to Contextually Aggregate Multi-Source Supervision for Sequence Labeling
    Ouyu Lan, Xiao Huang, Bill Yuchen Lin, He Jiang, Liyuan Liu and Xiang Ren
  12. Multi-Cell Compositional LSTM for NER Domain Adaptation
    Chen Jia and Yue Zhang
  13. Multi-Domain Named Entity Recognition with Genre-Aware and Agnostic Inference
    Jing Wang, Mayank Kulkarni and Daniel Preotiuc-Pietro
  14. Named Entity Recognition without Labelled Data: A Weak Supervision Approach
    Pierre Lison, Jeremy Barnes, Aliaksandr Hubin and Samia Touileb
  15. NAT: Noise-Aware Training for Robust Neural Sequence Labeling
    Marcin Namysl, Sven Behnke and Joachim Köhler
  16. Pyramid: A Layered Model for Nested Named Entity Recognition
    Jue Wang, Lidan Shou, Ke Chen and Gang Chen
  17. SeqVAT: Virtual Adversarial Training for Semi-Supervised Sequence Labeling
    Luoxin Chen, Weitong Ruan, Xinyue Liu and Jianhua Lu
  18. Simplify the Usage of Lexicon in Chinese NER
    Ruotian Ma, Minlong Peng, Qi Zhang, Zhongyu Wei and Xuanjing Huang
  19. Single-/Multi-Source Cross-Lingual NER via Teacher-Student Learning on Unlabeled Data in Target Language
    Qianhui Wu, Zijia Lin, Börje Karlsson, Jian-Guang Lou and Biqing Huang
  20. Sources of Transfer in Multilingual Named Entity Recognition
    David Mueller, Nicholas Andrews and Mark Dredze
  21. Structured Tuning for Semantic Role Labeling
    Tao Li, Parth Anand Jawale, Martha Palmer and Vivek Srikumar
  22. Structure-Level Knowledge Distillation For Multilingual Sequence Labeling
    Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Fei Huang and Kewei Tu
  23. Temporally-Informed Analysis of Named Entity Recognition
    Shruti Rijhwani and Daniel Preotiuc-Pietro
  24. Bayesian Hierarchical Words Representation Learning
    Oren Barkan, Idan Rejwan, Avi Caciularu and Noam Koenigstein
  25. FLAT: Chinese NER Using Flat-Lattice Transformer
    Xiaonan Li, Hang Yan, Xipeng Qiu and Xuanjing Huang
  26. Improving Low-Resource Named Entity Recognition using Joint Sentence and Token Labeling
    Canasai Kruengkrai, Thien Hai Nguyen, Sharifah Mahani Aljunied and Lidong Bing
  27. Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition
    Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Ryuto Konno and Kentaro Inui
  28. Low Resource Sequence Tagging using Sentence Reconstruction
    Tal Perl, Sriram Chaudhury and Raja Giryes
  29. Named Entity Recognition as Dependency Parsing
    Juntao Yu, Bernd Bohnet and Massimo Poesio
  30. Soft Gazetteers for Low-Resource Named Entity Recognition
    Shruti Rijhwani, Shuyan Zhou, Graham Neubig and Jaime Carbonell
  31. TriggerNER: Learning with Entity Triggers as Explanations for Named Entity Recognition
    Bill Yuchen Lin, Dong-Ho Lee, Ming Shen, Ryan Moreno, Xiao Huang, Prashant Shiralkar and Xiang Ren

Knowledge Graphs

  1. Connecting Embeddings for Knowledge Graph Entity Typing
    Yu Zhao, Anxiang Zhang, Ruobing Xie, Kang Liu and Xiaojie Wang
  2. Knowledge Graph Embedding Compression
    Mrinmaya Sachan
  3. Low-Dimensional Hyperbolic Knowledge Graph Embeddings
    Ines Chami, Adva Wolf, Da-Cheng Juan, Frederic Sala, Sujith Ravi and Christopher Ré
  4. Orthogonal Relation Transforms with Graph Context Modeling for Knowledge Graph Embedding
    Yun Tang, Jing Huang, Guangtao Wang, Xiaodong He and Bowen Zhou
  5. ReInceptionE: Relation-Aware Inception Network with Joint Local-Global Structural Information for Knowledge Graph Embedding
  6. SEEK: Segmented Embedding of Knowledge Graphs
    Wentao Xu, Shun Zheng, Liang He, Bin Shao, Jian Yin and Tie-Yan Liu
  7. A Re-evaluation of Knowledge Graph Completion Methods
    Zhiqing Sun, Shikhar Vashishth, Soumya Sanyal, Partha Talukdar and Yiming Yang

Graph Convolutional Networks

  1. Aligned Dual Channel Graph Convolutional Network for Visual Question Answering
    Qingbao Huang, Jielong Wei, Yi Cai, Changmeng Zheng, Junying Chen, Ho-fung Leung and Qing Li
  2. Autoencoding Pixies: Amortised Variational Inference with Graph Convolutions for Functional Distributional Semantics
    Guy Emerson
  3. Integrating Semantic and Structural Information with Graph Convolutional Network for Controversy Detection
    Lei Zhong, Juan Cao, Qiang Sheng, Junbo Guo and Ziang Wang
  4. Syntax-Aware Opinion Role Labeling with Dependency Graph Convolutional Networks
    Bo Zhang, Yue Zhang, Rui Wang, Zhenghua Li and Min Zhang

ACL 2020 papers of interest on QG / QA / NLG and related topics
