ROPES | Reasoning Over Paragraph Effects in Situations
CommonsenseQA | COMMONSENSEQA: A Question Answering Challenge Targeting Commonsense Knowledge
CoQA | CoQA: A Conversational Question Answering Challenge
MultiRC | Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences
OpenBookQA | Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering
RACE | RACE: Large-scale ReAding Comprehension Dataset From Examinations
XCMRC | XCMRC: Evaluating Cross-lingual Machine Reading Comprehension
CLMRC | Cross-Lingual Machine Reading Comprehension
WINOGRANDE | WINOGRANDE: An Adversarial Winograd Schema Challenge at Scale
HellaSwag | HellaSwag: Can a Machine Really Finish Your Sentence?
McTaco | “Going on a vacation” takes longer than “Going for a walk”: A Study of Temporal Commonsense Understanding
Social IQA | SOCIAL IQA: Commonsense Reasoning about Social Interactions
Cosmos QA | Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning (EMNLP 2019)
PubMedQA | PubMedQA : A Dataset for Biomedical Research Question Answering
GeoSQA | GeoSQA: A Benchmark for Scenario-based Question Answering in the Geography Domain at High School Level
HEAD-QA | HEAD-QA: A Healthcare Dataset for Complex Reasoning
ReCoRD | ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension
c^3 | Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension
Dream | DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension
QAngaroo / WikiHop | Constructing Datasets for Multi-hop Reading Comprehension Across Documents
JEC-QA | JEC-QA: A Legal-Domain Question Answering Dataset
PIQA | PIQA: Reasoning about Physical Commonsense in Natural Language
TweetQA | TWEETQA: A Social Media Focused Question Answering Dataset
RC-QED | RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension
MLQA | MLQA: Evaluating Cross-lingual Extractive Question Answering
QuAC | QuAC: Question Answering in Context
CNN/Daily-Mail | Challenging Reading Comprehension on Daily Conversation: Passage Completion on Multiparty Dialog
Google ACL 2019 medical dialogue dataset | Extracting Symptoms and their Status from Clinical Conversations
NAACL 2019 nurse-patient dialogue dataset | Fast Prototyping a Dialogue Comprehension System for Nurse-Patient Conversations on Symptom Monitoring
DROP | DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs
HotpotQA | HOTPOTQA: A Dataset for Diverse, Explainable Multi-hop Question Answering
QuoRef | QUOREF: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning
Multi-QA | MULTIQA: An Empirical Investigation of Generalization and Transfer in Reading Comprehension
MRQA 2019 | MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension
ATOMIC | ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning
ORB | ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension
ACL 2019 dialogue paper with an accompanying dataset | Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading
emrQA | emrQA: A Large Corpus for Question Answering on Electronic Medical Records
ComQA | ComQA: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters
ConvQuestions | Look before you Hop: Conversational Question Answering over Knowledge Graphs Using Judicious Context Expansion
ShARC(Shaping Answers with Rules through Conversation) | Interpretation of Natural Language Rules in Conversational Machine Reading
DuoRC | DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension
MCScript| MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge
SQuAD2.0 | Know What You Don’t Know: Unanswerable Questions for SQuAD
RecipeQA | RecipeQA: A Challenge Dataset for Multimodal Comprehension of Cooking Recipes
SWAG | SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference
TextWorldsQA | Multi-Relational Question Answering from Narratives: Machine Reading and Reasoning in Simulated Worlds
CLOTH | Large-scale Cloze Test Dataset Created by Teachers
CliCR | CliCR: A Dataset of Clinical Case Reports for Machine Reading Comprehension
DuReader | DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications
NarrativeQA | The NarrativeQA Reading Comprehension Challenge
Who did What | Who did What: A Large-Scale Person-Centered Cloze Dataset
SearchQA | SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
TriviaQA | TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
Quasar | Quasar: Datasets for Question Answering by Search and Reading
bAbI | Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
CBT (Children's Book Test) | The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations
MS MARCO | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
NewsQA | NewsQA: A Machine Comprehension Dataset
LAMBADA | The LAMBADA dataset: Word prediction requiring a broad discourse context
SCT(Story Cloze Test) | A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories
CMRC |
ROCStories | A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories
SherLIiC | SherLIiC: A Typed Event-Focused Lexical Inference Benchmark for Evaluating Natural Language Inference
AlphaNLI | Abductive Commonsense Reasoning
BiDAF | Bidirectional Attention Flow for Machine Comprehension
QANet | QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
EPAr | Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension
model | Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs
model | Self-Assembling Modular Networks for Interpretable Multi-Hop Reasoning
DFGN | Dynamically Fused Graph Network for Multi-hop Reasoning
model | Improving the Robustness of Deep Reading Comprehension Models by Leveraging Syntax Prior
SG-Net | SG-Net: Syntax-Guided Machine Reading Comprehension
GapQA | What’s Missing: A Knowledge Gap Guided Approach for Multi-hop Question Answering
Kagnet | KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning
AMS | Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models
Incorporating Relation Knowledge into Commonsense Reading Comprehension with Multi-task Learning
| Incorporating Structured Commonsense Knowledge in Story Completion
KEAG | Incorporating External Knowledge into Machine Reading for Generative Question Answering
| Commonsense for Generative Multi-Hop Question Answering Tasks
K-Bert | K-BERT: Enabling Language Representation with Knowledge Graph
Reading: 14/15 comprehension : ADD+ 0
multi-hop 1/4 add
Reasoning:
1.Graph-Based Reasoning over Heterogeneous External Knowledge for Commonsense Question Answering
2. Differentiable Reasoning on Large Knowledge Bases and Natural Language
https://arxiv.org/pdf/1912.10824.pdf
Question Answering:
20200508
None
20200507
【6】 Harvesting and Refining Question-Answer Pairs for Unsupervised QA
标题:收集和精炼无监督QA的问答对
作者: Zhongli Li, Ke Xu
备注:Accepted by ACL-20
链接:https://arxiv.org/abs/2005.02925
20200506
【19】 Probabilistic Assumptions Matter: Improved Models for Distantly-Supervised Document-Level Question Answering
标题:概率假设很重要:用于远程监督的文档级问题回答的改进模型
作者: Hao Cheng, Kristina Toutanova
备注:ACL2020
链接:https://arxiv.org/abs/2005.01898
20200505
【12】 To Test Machine Comprehension, Start by Defining Comprehension
标题:要测试机器理解力,请从定义理解力开始
作者: Jesse Dunietz, David Ferrucci
备注:9 pages; 3 figures; 1 table. To be published in the Theme track of ACL 2020
链接:https://arxiv.org/abs/2005.01525
【27】 Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering
标题:基于无监督对齐的迭代证据检索多跳问答
作者: Vikas Yadav, Mihai Surdeanu
备注:Accepted at ACL 2020 as a long conference paper
链接:https://arxiv.org/abs/2005.01218
【44】 How Does Selective Mechanism Improve Self-Attention Networks?
标题:选择机制如何改善自我注意网络?
作者: Xinwei Geng, Zhaopeng Tu
备注:ACL 2020
链接:https://arxiv.org/abs/2005.00979
【67】 Teaching Machine Comprehension with Compositional Explanations
标题:用构图讲解进行机器理解教学
作者: Qinyuan Ye, Xiang Ren
链接:https://arxiv.org/abs/2005.0080
【68】 Treebank Embedding Vectors for Out-of-domain Dependency Parsing
标题:用于域外依赖分析的树库嵌入向量
作者: Joachim Wagner, Jennifer Foster
备注:Camera ready for ACL 2020
链接:https://arxiv.org/abs/2005.00800
【71】 Measuring and Reducing Non-Multifact Reasoning in Multi-hop Question Answering
标题:多跳问答中非多事实推理的度量和约简
作者: Harsh Trivedi, Ashish Sabharwal
链接:https://arxiv.org/abs/2005.00789
【74】 ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning
标题:ProtoQA:一个用于原型常识推理的问答数据集
作者: Michael Boratko, Andrew McCallum
链接:https://arxiv.org/abs/2005.00771
【98】 Contrastive Self-Supervised Learning for Commonsense Reasoning
标题:用于常识推理的对比自监督学习
作者: Tassilo Klein, Moin Nabi
备注:To appear at ACL2020
链接:https://arxiv.org/abs/2005.00669
【104】 Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering
标题:面向知识型问答的可扩展多跳关系推理
作者: Yanlin Feng, Xiang Ren
链接:https://arxiv.org/abs/2005.00646
【112】 Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?
标题:使用自然语言理解的预训练模型进行中间任务迁移学习:何时和为什么起作用?
作者: Yada Pruksachatkun, Samuel R. Bowman
备注:ACL 2020
链接:https://arxiv.org/abs/2005.00628
【121】 Clinical Reading Comprehension: A Thorough Analysis of the emrQA Dataset
标题:临床阅读理解:emrQA数据集的透彻分析
作者: Xiang Yue, Huan Sun
备注:Accepted by ACL 2020
链接:https://arxiv.org/abs/2005.00574
20200504
【3】 SciREX: A Challenge Dataset for Document-Level Information Extraction
标题:SciREX:一个文档级信息抽取的挑战数据集
作者: Sarthak Jain, Iz Beltagy
备注:ACL2020 Camera Ready Submission, Work done by first authors while interning at AI2
链接:https://arxiv.org/abs/2005.00512
【7】 MedType: Improving Medical Entity Linking with Semantic Type Prediction
标题:MedType:利用语义类型预测改进医学实体链接
作者: Shikhar Vashishth, Carolyn Rose
链接:https://arxiv.org/abs/2005.00460
【11】 Topological Sort for Sentence Ordering
标题:句子排序的拓扑排序
作者: Shrimai Prabhumoye, Alan W Black
备注:Will be published at the Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL) 2020
链接:https://arxiv.org/abs/2005.00432
【16】 XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
标题:XCOPA:一个用于因果常识推理的多语言数据集
作者: Edoardo Maria Ponti, Anna Korhonen
链接:https://arxiv.org/abs/2005.00333
【19】 Self-supervised Knowledge Triplet Learning for Zero-shot Question Answering
标题:基于自监督知识三元组学习的零炮问答
作者: Pratyay Banerjee, Chitta Baral
链接:https://arxiv.org/abs/2005.00316
【30】 TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions
标题:TORQUE:时序问题的阅读理解数据集
作者: Qiang Ning, Dan Roth
链接:https://arxiv.org/abs/2005.00242
【31】 Biomedical Entity Representations with Synonym Marginalization
标题:同义词边缘化的生物医学实体表征
作者: Mujeen Sung, Jaewoo Kang
备注:ACL 2020
链接:https://arxiv.org/abs/2005.00239
【34】 Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks
标题:评估神经机器理解模型对噪声输入和对抗攻击的鲁棒性
作者: Winston Wu, Svitlana Volkova
链接:https://arxiv.org/abs/2005.00190
【39】 Information Seeking in the Spirit of Learning: a Dataset for Conversational Curiosity
标题:在学习精神中寻找信息:一个对话好奇心的数据集
作者: Pedro Rodriguez, Zhiguang Wang
链接:https://arxiv.org/abs/2005.00172
【42】 Attend to Medical Ontologies: Content Selection for Clinical Abstractive Summarization
标题:关注医学本体论:临床文摘的内容选择
作者: Sajad Sotudeh, Ross W. Filice
备注:Accepted to ACL 2020
链接:https://arxiv.org/abs/2005.00163
链接:https://arxiv.org/abs/2005.00048
【60】 Progressively Pretrained Dense Corpus Index for Open-Domain Question Answering
标题:用于开放领域问答的渐进式预训练密集语料库索引
作者: Wenhan Xiong, William Yang Wang
链接:https://arxiv.org/abs/2005.00038
【67】 Bipartite Flat-Graph Network for Nested Named Entity Recognition
标题:用于嵌套命名实体识别的二部平图网络
作者: Ying Luo, Hai Zhao
备注:Accepted by ACL2020
链接:https://arxiv.org/abs/2005.00436
20200501
【41】 STARC: Structured Annotations for Reading Comprehension
标题:Starc:阅读理解的结构化注释
作者: Yevgeni Berzak, Roger Levy
备注:ACL 2020. OneStopQA dataset, STARC guidelines and human experiments data are available at this https URL
链接:https://arxiv.org/abs/2004.14797
【42】 Character-Level Translation with Self-attention
标题:自我关注的字符级翻译
作者: Yingqiang Gao, Richard H.R. Hahnloser
备注:ACL 2020
链接:https://arxiv.org/abs/2004.14788
【47】 Named Entity Recognition without Labelled Data: A Weak Supervision Approach
标题:无标记数据的命名实体识别:一种弱监督方法
作者: Pierre Lison, Samia Touileb
备注:Accepted to ACL 2020 (long paper)
链接:https://arxiv.org/abs/2004.14723
【55】 Robust Question Answering Through Sub-part Alignment
标题:通过子部分对齐进行稳健的问题回答
作者: Jifan Chen, Greg Durrett
链接:https://arxiv.org/abs/2004.14648
【61】 Look at the First Sentence: Position Bias in Question Answering
标题:看第一句:回答问题时的位置偏见
作者: Miyoung Ko, Jaewoo Kang
链接:https://arxiv.org/abs/2004.14602
【70】 RikiNet: Reading Wikipedia Pages for Natural Question Answering
标题:RikiNet:阅读维基百科自然问答页面
作者: Dayiheng Liu, Nan Duan
备注:Accepted at ACL 2020
链接:https://arxiv.org/abs/2004.14560
【84】 Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition
标题:基于实例的跨度表示学习:通过命名实体识别的案例研究
作者: Hiroki Ouchi, Kentaro Inui
备注:Accepted by ACL2020
链接:https://arxiv.org/abs/2004.14514
20200430
【17】 SubjQA: A Dataset for Subjectivity and Review Comprehension
标题:SubjQA:一个主观性和复习理解的数据集
作者: Johannes Bjerva, Isabelle Augenstein
链接:https://arxiv.org/abs/2004.14283
【22】 Towards Transparent and Explainable Attention Models
标题:走向透明和可解释的注意模型
作者: Akash Kumar Mohankumar, Balaraman Ravindran
备注:Accepted at ACL 2020
链接:https://arxiv.org/abs/2004.14243
【44】 Do Neural Language Models Show Preferences for Syntactic Formalisms?
标题:神经语言模型显示对句法形式的偏好吗?
作者: Artur Kulmizev, Joakim Nivre
备注:ACL 2020
链接:https://arxiv.org/abs/2004.14096
【46】 Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning
标题:预培训(几乎)就是你所需要的一切:常识推理的应用
作者: Alexandre Tamborrino, Louise Naudin
备注:Accepted at ACL 2020
链接:https://arxiv.org/abs/2004.14074
【47】 Enhancing Answer Boundary Detection for Multilingual Machine Reading Comprehension
标题:增强多语种机器阅读理解的答案边界检测
作者: Fei Yuan, Daxin Jiang
备注:ACL 2020
链接:https://arxiv.org/abs/2004.14069
【53】 Multi-choice Dialogue-Based Reading Comprehension with Knowledge and Key Turns
标题:基于知识和关键转折的多项选择式对话阅读理解
作者: Junlong Li, Hai Zhao
链接:https://arxiv.org/abs/2004.13988
【57】 Data Augmentation for Spoken Language Understanding via Pretrained Models
标题:通过预先训练的模型进行口语理解的数据增强
作者: Baolin Peng, Jianfeng Gao
链接:https://arxiv.org/abs/2004.13952
【89】 Every Document Owns Its Structure: Inductive Text Classification via Graph Neural Networks
标题:每个文档都有自己的结构:基于图神经网络的归纳文本分类
作者: Yufeng Zhang, Liang Wang
链接:https://arxiv.org/abs/2004.13826
【92】 Fine-tuning Multi-hop Question Answering with Hierarchical Graph Network
标题:基于层次图网络的微调多跳问答
作者: Guanming Xiong
链接:https://arxiv.org/abs/2004.13821
20200429
【11】 Event Extraction by Answering (Almost) Natural Questions
标题:通过回答(几乎)自然问题的事件提取
作者: Xinya Du, Claire Cardie
链接:https://arxiv.org/abs/2004.13625
【21】 Semantics-Aware Inferential Network for Natural Language Understanding
标题:语义感知的自然语言理解推理网络
作者: Shuailiang Zhang, Junru Zhou
链接:https://arxiv.org/abs/2004.13338
【49】 Conversational Question Answering over Passages by Leveraging Word Proximity Networks
标题:利用单词邻近网络通过段落进行会话问答
作者: Magdalena Kaiser, Gerhard Weikum
备注:SIGIR 2020 Demonstrations
链接:https://arxiv.org/abs/2004.13117
20200428
【3】 SCDE: Sentence Cloze Dataset with High Quality Distractors From Examinations
标题:SCDE:具有高质量检查干扰项的语句完形填空数据集
作者: Xiang Kong, Eduard Hovy
备注:ACL2020
链接:https://arxiv.org/abs/2004.12934
【6】 Synonyms and Antonyms: Embedded Conflict
标题:同义词和反义词:内在冲突
作者: Igor Samenko, Ivan P. Yamshchikov
链接:https://arxiv.org/abs/2004.12835
【7】 LightPAFF: A Two-Stage Distillation Framework for Pre-training and Fine-tuning
标题:LightPAFF:一个用于预训练和微调的两阶段精馏框架
作者: Kaitao Song, Tie-Yan Liu
链接:https://arxiv.org/abs/2004.12817
【24】 Masking as an Efficient Alternative to Finetuning for Pretrained Language Models
标题:掩蔽作为预培训语言模型的微调的有效替代方法
作者: Mengjie Zhao, Hinrich Schütze
链接:https://arxiv.org/abs/2004.12406
【25】 Heterogeneous Graph Neural Networks for Extractive Document Summarization
标题:用于抽取文档摘要的异构图神经网络
作者: Danqing Wang, Xuanjing Huang
备注:Accepted by ACL2020
链接:https://arxiv.org/abs/2004.12393
【28】 Relational Graph Attention Network for Aspect-based Sentiment Analysis
标题:用于基于方面的情感分析的关系图注意力网络
作者: Kai Wang, Rui Wang
备注:To appear at ACL 2020
链接:https://arxiv.org/abs/2004.12362
【37】 MCQA: Multimodal Co-attention Based Network for Question Answering
标题:MCQA:基于多模态协同注意的问答网络
作者: Abhishek Kumar, Dinesh Manocha
链接:https://arxiv.org/abs/2004.12238
【45】 A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for Question Answering Over Dynamic Contexts
标题:一种具有事实、时间和逻辑知识的异构图用于动态环境下的问题回答
作者: Wanjun Zhong, Jian Yin
链接:https://arxiv.org/abs/2004.12057
【48】 Syntactic Data Augmentation Increases Robustness to Inference Heuristics
标题:句法数据增强了推理启发式的健壮性
作者: Junghyun Min, Tal Linzen
备注:ACL 2020
链接:https://arxiv.org/abs/2004.11999
【54】 A Batch Normalized Inference Network Keeps the KL Vanishing Away
标题:批量规范化推理网络使KL消失
作者: Qile Zhu, Dapeng Wu
备注:camera-ready for ACL 2020
链接:https://arxiv.org/abs/2004.12585
20200427
【1】 Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering
标题:基于模板从检索到的句子中生成问题用于改进的无监督问题回答
作者:Alexander R. Fabbri, Bing Xiang
备注:ACL 2020
链接:https://arxiv.org/abs/2004.11892
【2】 Lite Transformer with Long-Short Range Attention
标题:具有长短距离注意的Lite变压器
作者:Zhanghao Wu, Song Han
备注:ICLR 2020. The first two authors contributed equally to this work
链接:https://arxiv.org/abs/2004.11886
【4】 Event-QA: A Dataset for Event-Centric Question Answering over Knowledge Graphs
标题:Event-QA:基于知识图的以事件为中心的问答数据集
作者:Tarcísio Souza Costa, Elena Demidova
链接:https://arxiv.org/abs/2004.11861
【6】 FLAT: Chinese NER Using Flat-Lattice Transformer
标题:平面:使用平面点阵变压器的中国NER
作者:Xiaonan Li, Xuanjing Huang
备注:Accepted to the ACL 2020
链接:https://arxiv.org/abs/2004.11795
【15】 G-DAUG: Generative Data Augmentation for Commonsense Reasoning
标题:G-DAUG:用于常识推理的生成性数据增强
作者:Yiben Yang, Doug Downey
链接:https://arxiv.org/abs/2004.11546
20200424
【1】 Rapidly Bootstrapping a Question Answering Dataset for COVID-19
标题:快速引导COVID-19的问题回答数据集
作者: Raphael Tang, Jimmy Lin
链接:https://arxiv.org/abs/2004.11339
【7】 DuReaderrobust: A Chinese Dataset Towards Evaluating the Robustness of Machine Reading Comprehension Models
标题:DuReaderRobust:一个评估机器阅读理解模型稳健性的中文数据集
作者: Hongxuan Tang, Haifeng Wang
链接:https://arxiv.org/abs/2004.11142
【15】 Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
标题:不要停止培训:使语言模型适应领域和任务
作者: Suchin Gururangan, Noah A. Smith
备注:ACL 2020
链接:https://arxiv.org/abs/2004.10964
【17】 Preserving the Hypernym Tree of WordNet in Dense Embeddings
标题:在稠密嵌入中保持WordNet的Hypernym树
作者: Canlin Zhang, Xiuwen Liu
链接:https://arxiv.org/abs/2004.10863
【18】 Syntactic Structure from Deep Learning
标题:深度学习的句法结构
作者: Tal Linzen, Marco Baroni
链接:https://arxiv.org/abs/2004.10827
20200423
【3】 AmbigQA: Answering Ambiguous Open-domain Questions
标题:AmbigQA:回答含糊的开放领域问题
作者:Sewon Min, Luke Zettlemoyer
链接:https://arxiv.org/abs/2004.10645
【6】 Contextualised Graph Attention for Improved Relation Extraction
标题:用于改进关系提取的上下文图形注意
作者:Angrosh Mandya, Frans Coenen
链接:https://arxiv.org/abs/2004.10624
20200422 None
【3】 Logic-Guided Data Augmentation and Regularization for Consistent Question Answering
标题:用于一致问题回答的逻辑引导的数据增强和正则化
作者: Akari Asai, Hannaneh Hajishirzi
备注:Published as a conference paper at ACL 2020
链接:https://arxiv.org/abs/2004.10157
【5】 Unsupervised Opinion Summarization with Noising and Denoising
标题:基于去噪和去噪的无监督意见总结
作者: Reinald Kim Amplayo, Mirella Lapata
备注:ACL 2020
链接:https://arxiv.org/abs/2004.10150
【16】 Knowledge-Driven Distractor Generation for Cloze-style Multiple Choice Questions
标题:完形填空式多项选择题的知识驱动分心因子生成
作者: Siyu Ren, Kenny Q. Zhu
链接:https://arxiv.org/abs/2004.09853
【25】 Grounding Conversations with Improvised Dialogues
标题:以即兴对话为基础的对话
作者: Hyundong Cho, Jonathan May
备注:ACL2020; 9 pages + 1 page appendix
链接:https://arxiv.org/abs/2004.09544
20200421
【3】 MPNet: Masked and Permuted Pre-training for Language Understanding
标题:MPNet:语言理解的掩蔽和置换预培训
作者: Kaitao Song, Tie-Yan Liu
链接:https://arxiv.org/abs/2004.09297
20200420
【1】 Exploring the Combination of Contextual Word Embeddings and Knowledge Graph Embeddings
标题:探索上下文词嵌入和知识图嵌入的结合
作者:Lea Dieudonat, Esteban Marquer
链接:https://arxiv.org/abs/2004.08371
【8】 Highway Transformer: Self-Gating Enhanced Self-Attentive Networks
标题:公路变压器:自选通增强型自关注网络
作者:Yekun Chai, Xinwen Hou
链接:https://arxiv.org/abs/2004.08178
【10】 Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction
标题:神经关系抽取中句子级表征的语言特征探讨
作者:Christoph Alt, Leonhard Hennig
备注:Accepted at ACL 2020
链接:https://arxiv.org/abs/2004.08134
【23】 Understanding the Difficulty of Training Transformers
标题:了解变压器培训的难度
作者:Liyuan Liu, Jiawei Han
链接:https://arxiv.org/abs/2004.08249
【24】 Geometry-aware Domain Adaptation for Unsupervised Alignment of Word Embeddings
标题:用于单词嵌入的无监督对齐的几何感知的域自适应
作者:Pratik Jawanpuria, Bamdev Mishra
备注:Accepted as a short paper in ACL 2020
链接:https://arxiv.org/abs/2004.08243
【21】 Bridging Anaphora Resolution as Question Answering
标题:桥接回指消解作为问答
作者:Yufang Hou
备注:accepted at ACL2020
链接:https://arxiv.org/abs/2004.07898
【15】 Dialogue-Based Relation Extraction
标题:基于对话的关系抽取
作者:Dian Yu, Dong Yu
备注:To appear in ACL 2020
链接:https://arxiv.org/abs/2004.08056
20200416
【10】 Coreferential Reasoning Learning for Language Representation
标题:语言表征的相关推理学习
作者: Deming Ye, Zhiyuan Liu
链接:https://arxiv.org/abs/2004.06870
【14】 A Simple Yet Strong Pipeline for HotpotQA
标题:一条简单而强大的HotpotQA管道
作者: Dirk Groeneveld, Ashish Sabharwal
链接:https://arxiv.org/abs/2004.06753
20200415
【8】 Jointly Modeling Aspect and Sentiment with Dynamic Heterogeneous Graph Neural Networks
标题:动态异构图神经网络联合建模方面和情感
作者: Shu Liu, Xu Sun
链接:https://arxiv.org/abs/2004.06427
20200414
【1】 Pretrained Transformers Improve Out-of-Distribution Robustness
标题:预先培训的变压器提高了配电网外的稳健性
作者: Dan Hendrycks, Dawn Song
备注:ACL 2020
链接:https://arxiv.org/abs/2004.06100
【2】 Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension
标题:阅读理解中的对抗性强化策略域搜索与跨语言概括
作者: Adyasha Maharana, Mohit Bansal
链接:https://arxiv.org/abs/2004.06076
【11】 From Machine Reading Comprehension to Dialogue State Tracking: Bridging the Gap
标题:从机器阅读理解到对话状态跟踪:弥合鸿沟
作者: Shuyang Gao, Dilek Hakkani-Tur
链接:https://arxiv.org/abs/2004.05827
【24】 Explaining Question Answering Models through Text Generation
标题:通过文本生成解释问答模型
作者: Veronica Latcinnik, Jonathan Berant
链接:https://arxiv.org/abs/2004.05569
【28】 Unsupervised Commonsense Question Answering with Self-Talk
标题:无人监督的自言自语常识问答
作者: Vered Shwartz, Yejin Choi
链接:https://arxiv.org/abs/2004.05483
20200413
【1】 Longformer: The Long-Document Transformer
标题:Longformer:长文档变压器
作者: Iz Beltagy, Arman Cohan
链接:https://arxiv.org/abs/2004.05150
【6】 Molweni: A Challenge Multiparty Dialogues-based Machine Reading Comprehension Dataset with Discourse Structure
标题:Molweni:一个具有语篇结构的基于多方对话的机器阅读理解数据集
作者: Jiaqi Li, Bing Qin
链接:https://arxiv.org/abs/2004.05080
【7】 Overestimation of Syntactic Representationin Neural Language Models
标题:神经语言模型中的高估句法表示
作者: Jordan Kodner, Nitish Gupta
备注:Accepted for publication at ACL 2020
链接:https://arxiv.org/abs/2004.05067
【8】 A New Dataset for Natural Language Inference from Code-mixed Conversations
标题:一种新的基于代码混合会话的自然语言推理数据集
作者: Simran Khanuja, Monojit Choudhury
备注:To appear in CALCS, LREC 2020
链接:https://arxiv.org/abs/2004.05051
【20】 Natural Perturbation for Robust Question Answering
标题:鲁棒问题回答的自然摄动
作者: Daniel Khashabi, Ashish Sabharwal
链接:https://arxiv.org/abs/2004.04849
20200410
【7】 MuTual: A Dataset for Multi-Turn Dialogue Reasoning
标题:Mutual:一个用于多回合对话推理的数据集
作者: Leyang Cui, Ming Zhou
备注:ACL 2020
链接:https://arxiv.org/abs/2004.04494
【8】 Injecting Numerical Reasoning Skills into Language Models
标题:将数值推理技能注入语言模型
作者: Mor Geva, Jonathan Berant
备注:ACL 2020
链接:https://arxiv.org/abs/2004.04487
【17】 Severing the Edge Between Before and After: Neural Architectures for Temporal Ordering of Events
标题:切断前后的边缘:事件时间顺序的神经体系结构
作者: Miguel Ballesteros, Yaser Al-Onaizan
链接:https://arxiv.org/abs/2004.04295
【20】 Asking and Answering Questions to Evaluate the Factual Consistency of Summaries
标题:提问和回答问题以评估摘要的事实一致性
作者: Alex Wang, Mike Lewis
备注:ACL 2020
链接:https://arxiv.org/abs/2004.04228
20200409
【6】 KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation
标题:KdConv:一个面向多轮知识驱动会话的中文多域对话数据集
作者: Hao Zhou, Xiaoyan Zhu
链接:https://arxiv.org/abs/2004.04100
【9】 Self-Attention Gazetteer Embeddings for Named-Entity Recognition
标题:用于命名实体识别的自我注意地名词典嵌入
作者: Stanislav Peshterliev, Imre Kiss
链接:https://arxiv.org/abs/2004.04060
【48】 Guessing What’s Plausible But Remembering What’s True: Accurate Neural Reasoning for Question-Answering
标题:猜测什么是合理的,但记住什么是真的:用于问题回答的精确神经推理
作者: Haitian Sun, William W. Cohen
链接:https://arxiv.org/abs/2004.03658
20200408
【1】 Transformers to Learn Hierarchical Contexts in Multiparty Dialogue for Span-based Question Answering
标题:Transformers用于学习基于跨度的问题回答的多方对话中的分层上下文
作者: Changmao Li, Jinho D. Choi
备注:Accepted by ACL 2020
链接:https://arxiv.org/abs/2004.03561
Entity Linking ----
【2】 Entity Linking via Dual and Cross-Attention Encoders
标题:通过双重和交叉注意编码器的实体链接
作者: Oshin Agarwal, Daniel M. Bikel
链接:https://arxiv.org/abs/2004.03555
【4】 What do Models Learn from Question Answering Datasets?
标题:模型从问题回答数据集中学到了什么?
作者: Priyanka Sen, Amir Saffari
链接:https://arxiv.org/abs/2004.03490
Medical NER
【9】 Inexpensive Domain Adaptation of Pretrained Language Models: A Case Study on Biomedical Named Entity Recognition
标题:预训练语言模型的廉价领域适配:生物医学命名实体识别的案例研究
作者: Nina Poerner, Hinrich Schütze
链接:https://arxiv.org/abs/2004.03354
【15】 Variational Question-Answer Pair Generation for Machine Reading Comprehension
标题:机器阅读理解的变分问答对生成
作者: Kazutoshi Shinoda, Akiko Aizawa
链接:https://arxiv.org/abs/2004.03238
【25】 A Sentence Cloze Dataset for Chinese Machine Reading Comprehension
标题:一种用于汉语机器阅读理解的句子完形填空数据集
作者: Yiming Cui, Guoping Hu
链接:https://arxiv.org/abs/2004.03116
【26】 Knowledge Fusion and Semantic Knowledge Ranking for Open Domain Question Answering
标题:面向开放领域问答的知识融合和语义知识排序
作者: Pratyay Banerjee, Chitta Baral
链接:https://arxiv.org/abs/2004.03101
【28】 Is Graph Structure Necessary for Multi-hop Reasoning?
标题:多跳推理是否需要图结构?
作者: Nan Shao, Guoping Hu
链接:https://arxiv.org/abs/2004.03096
【31】 Inferential Text Generation with Multiple Knowledge Sources and Meta-Learning
标题:多知识源推理文本生成与元学习
作者: Daya Guo, Ming Zhou
链接:https://arxiv.org/abs/2004.03070
【33】 Information-Theoretic Probing for Linguistic Structure
标题:语言结构的信息论探索
作者: Tiago Pimentel, Ryan Cotterell
备注:Accepted for publication at ACL 2020
链接:https://arxiv.org/abs/2004.03061
【34】 The Role of Pragmatic and Discourse Context in Determining Argument Impact
标题:语用和话语语境在决定辩论影响中的作用
作者: Esin Durmus, Claire Cardie
备注:EMNLP 2019
链接:https://arxiv.org/abs/2004.03034
【37】 Enhancing Review Comprehension with Domain-Specific Commonsense
标题:用特定领域的常识增强复习理解
作者: Aaron Traylor, Wang-Chiew Tan
链接:https://arxiv.org/abs/2004.03020
【39】 Multi-Step Inference for Reasoning Over Paragraphs
标题:段落推理的多步推理
作者: Jiangming Liu, Matt Gardner
链接:https://arxiv.org/abs/2004.02995
【48】 MedDialog: A Large-scale Medical Dialogue Dataset
标题:MedDialog:一个大规模医学对话数据集
作者: Shu Chen, Pengtao Xie
链接:https://arxiv.org/abs/2004.03329
【50】 Multi-Scale Aggregation Using Feature Pyramid Module for Text-Independent Speaker Verification
标题:使用特征金字塔模块进行文本无关说话人确认的多尺度聚合
作者: Youngmoon Jung, Hoirin Kim
备注:Submitted to Interspeech 2020
链接:https://arxiv.org/abs/2004.03194
20200407
【53】 Prerequisites for Explainable Machine Reading Comprehension: A Position Paper
标题:可解释机器阅读理解的先决条件:一份意见书
作者: Saku Sugawara, Akiko Aizawa
链接:https://arxiv.org/abs/2004.01912
【17】 Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games
标题:通过合作博弈学习恢复多跳问答的推理链
作者: Yufei Feng, Xiaodan Zhu
链接:https://arxiv.org/abs/2004.02393
20200406
【9】 R3: A Reading Comprehension Benchmark Requiring Reasoning Processes
标题:R3:需要推理过程的阅读理解基准
作者: Ran Wang, Xinyu Dai
链接:https://arxiv.org/abs/2004.01251
20200403
【1】 Causal Inference of Script Knowledge
标题:脚本知识的因果推理
作者: Noah Weber, Benjamin Van Durme
链接:https://arxiv.org/abs/2004.01174
20200402
【19】 Information Leakage in Embedding Models
标题:嵌入模型中的信息泄漏
作者:Congzheng Song, Ananth Raghunathan
链接:https://arxiv.org/abs/2004.00053
20200327
【3】 Common-Knowledge Concept Recognition for SEVA
标题:SEVA的常识概念识别
作者: Jitin Krishnan, Huzefa Rangwala
链接:https://arxiv.org/abs/2003.11687
20200326
【8】 Vector logic and counterfactuals
标题:向量逻辑与反事实
作者: Eduardo Mizraji
链接:https://arxiv.org/abs/2003.11519
20200325
【6】 ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
标题:ELECTRA:将文本编码器预先培训为鉴别器而不是生成器
作者: Kevin Clark, Christopher D. Manning
备注:ICLR 2020
链接:https://arxiv.org/abs/2003.10555
20200320
【11】 A Corpus of Adpositional Supersenses for Mandarin Chinese
标题:普通话附加上位语料库
作者: Siyao Peng, Nathan Schneider
备注:LREC 2020 camera-ready
链接:https://arxiv.org/abs/2003.08437
20200319
【5】 Pre-trained Models for Natural Language Processing: A Survey
标题:自然语言处理的预训练模型:综述
作者:Xipeng Qiu, Xuanjing Huang
链接:https://arxiv.org/abs/2003.08271
20200317
【9】 A Survey on Contextual Embeddings
标题:语境嵌入研究综述
作者: Qi Liu, Phil Blunsom
链接:https://arxiv.org/abs/2003.07278
20200316
【9】 Heterogeneous Relational Reasoning in Knowledge Graphs with Reinforcement Learning
标题:基于强化学习的知识图异构关系推理
作者:Mandana Saebi, Nitesh Chawla
链接:https://arxiv.org/abs/2003.06050
20200313
【6】 Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking
标题:用简单的神经网络端到端实体链接研究BERT中的实体知识
作者: Samuel Broscheit
备注:Published at CoNLL 2019
链接:https://arxiv.org/abs/2003.05473
20200312
【10】 Multi-task Learning with Multi-head Attention for Multi-choice Reading Comprehension
标题:多项选择式阅读理解的多头注意多任务学习
作者: Hui Wan
链接:https://arxiv.org/abs/2003.04992
【16】 Understanding the Downstream Instability of Word Embeddings
标题:理解单词嵌入的下游不稳定性
作者: Megan Leszczynski, Christopher Ré
备注:In Proceedings of the 3rd MLSys Conference, 2020
链接:https://arxiv.org/abs/2003.04983
【22】 Transformer++
标题:转换器+
作者: Prakhar Thapak, Prodip Hore
链接:https://arxiv.org/abs/2003.04974
20200311
【3】 Undersensitivity in Neural Reading Comprehension
标题:神经阅读理解中的低敏感度
作者: Johannes Welbl, Sebastian Riedel
链接:https://arxiv.org/abs/2003.04808
【8】 GenNet : Reading Comprehension with Multiple Choice Questions using Generation and Selection model
标题:GENet:使用生成和选择模型的多项选择题阅读理解
作者: Vaishali Ingale, Pushpender Singh
链接:https://arxiv.org/abs/2003.04360
【6】 A Framework for Evaluation of Machine Reading Comprehension Gold Standards
标题:机器阅读理解黄金标准评估框架
作者: Viktor Schlegel, Riza Batista-Navarro
备注:In Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC 2020)
链接:https://arxiv.org/abs/2003.04642
【11】 Neuro-symbolic Architectures for Context Understanding
标题:用于语境理解的神经符号体系结构
作者: Alessandro Oltramari, Ruwan Wickramarachchi
备注:In: Ilaria Tiddi, Freddy Lecue, Pascal Hitzler (eds.), Knowledge Graphs for eXplainable AI – Foundations, Applications and Challenges. Studies on the Semantic Web, IOS Press, Amsterdam, 2020. arXiv admin note: text overlap with arXiv:1910.14087
链接:https://arxiv.org/abs/2003.04707
20200310
【16】 Natural Language QA Approaches using Reasoning with External Knowledge
标题:使用外部知识推理的自然语言问答方法
作者: Chitta Baral, Arindam Mitra
链接:https://arxiv.org/abs/2003.03446
20200309
【4】 Practical Annotation Strategies for Question Answering Datasets
标题:一种实用的问答数据集标注策略
作者: Bernhard Kratzwald, Stefan Feuerriegel
链接:https://arxiv.org/abs/2003.03235
20200306
【8】 A Study on Efficiency, Accuracy and Document Structure for Answer Sentence Selection
标题:回答句选择的效率、准确性和文档结构研究
作者: Daniele Bonadiman, Alessandro Moschitti
链接:https://arxiv.org/abs/2003.02349
20200304
【6】 Meta-Embeddings Based On Self-Attention
标题:基于自我注意的元嵌入
作者: Qichen Li, Jian Li
链接:https://arxiv.org/abs/2003.01371
20200228
【1】 Generating Followup Questions for Interpretable Multi-hop Question Answering
标题:为可解释的多跳问题回答生成后续问题
作者: Christopher Malon, Bing Bai
链接:https://arxiv.org/abs/2002.12344
20200227
Entity Linking —
【5】 End-to-End Entity Linking and Disambiguation leveraging Word and Knowledge Graph Embeddings
标题:利用单词和知识图嵌入的端到端实体链接和歧义消除
作者: Rostislav Nedelchev, Asja Fischer
链接:https://arxiv.org/abs/2002.11143
【8】 Sparse Sinkhorn Attention
标题:稀疏Sinkhorn注意
作者: Yi Tay, Da-Cheng Juan
链接:https://arxiv.org/abs/2002.11296
20200226
【14】 Exploring BERT Parameter Efficiency on the Stanford Question Answering Dataset v2.0
标题:基于Stanford问答数据集v2.0的BERT参数效率研究
作者: Eric Hulburd
链接:https://arxiv.org/abs/2002.10670
【15】 Differentiable Reasoning over a Virtual Knowledge Base
标题:虚拟知识库上的可微推理
作者: Bhuwan Dhingra, William W. Cohen
备注:ICLR 2020
链接:https://arxiv.org/abs/2002.10640
【17】 On Feature Normalization and Data Augmentation
标题:特征归一化与数据增强
作者: Boyi Li, Kilian Q. Weinberger
链接:https://arxiv.org/abs/2002.11102
20200225
【7】 Word Embeddings Inherently Recover the Conceptual Organization of the Human Mind
标题:词的嵌入内在地恢复了人类思维的概念组织
作者: Victor Swift
链接:https://arxiv.org/abs/2002.10284
【13】 Do Multi-Hop Question Answering Systems Know How to Answer the Single-Hop Sub-Questions?
标题:多跳问答系统知道如何回答单跳子问题吗?
作者: Yixuan Tang, Anthony K.H. Tung
链接:https://arxiv.org/abs/2002.09919
【16】 Unsupervised Question Decomposition for Question Answering
标题:用于问题回答的无监督问题分解
作者: Ethan Perez, Douwe Kiela
链接:https://arxiv.org/abs/2002.09758
【25】 Training Question Answering Models From Synthetic Data
标题:从合成数据训练问答模型
作者: Raul Puri, Bryan Catanzaro
链接:https://arxiv.org/abs/2002.09599
20200221
【2】 How Much Knowledge Can You Pack Into the Parameters of a Language Model?
标题:您可以将多少知识打包到语言模型的参数中?
作者:Adam Roberts, Noam Shazeer
链接:https://arxiv.org/abs/2002.08910
【3】 REALM: Retrieval-Augmented Language Model Pre-Training
标题:领域:检索-增强的语言模型预培训
作者:Kelvin Guu, Ming-Wei Chang
链接:https://arxiv.org/abs/2002.08909
20200220
【13】 Tree-structured Attention with Hierarchical Accumulation
标题:具有分层累积的树状结构注意
作者: Xuan-Phi Nguyen, Richard Socher
备注:ICLR 2020
链接:https://arxiv.org/abs/2002.08046
20200218
【11】 Exploring Neural Models for Parsing Natural Language into First-Order Logic
标题:探索将自然语言解析为一阶逻辑的神经模型
作者: Hrituraj Singh, Balaji Krishnamurthy
链接:https://arxiv.org/abs/2002.06544
20200217
【2】 Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base
标题:符号知识库推理的可伸缩神经方法
作者:William W. Cohen, Matthew Siegler
备注:Also published at ICLR 2020 (OpenReview, noteId BJlguT4YPr)
链接:https://arxiv.org/abs/2002.06115
Dialogue Systems —
【4】 Dialogue history integration into end-to-end signal-to-concept spoken language understanding systems
标题:对话历史集成到端信号到概念口语理解系统中
作者:Natalia Tomashenko, Yannick Esteve
备注:Accepted for ICASSP 2020 (Submitted: October 21, 2019)
链接:https://arxiv.org/abs/2002.06012
【9】 Transformers as Soft Reasoners over Language
标题:变形金刚作为语言的软推理者
作者:Peter Clark, Kyle Richardson
链接:https://arxiv.org/abs/2002.05867
20200216
【4】 Sparse and Structured Visual Attention
标题:稀疏和结构化的视觉注意
作者: Pedro Henrique Martins, André Martins
链接:https://arxiv.org/abs/2002.05556
20200212
【2】 ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning
标题:ReClor:一个需要逻辑推理的阅读理解数据集
作者: Weihao Yu, Jiashi Feng
备注:to be published in ICLR 2020
链接:https://arxiv.org/abs/2002.04326
【11】 Mining Commonsense Facts from the Physical World
标题:从物理世界中挖掘常识事实
作者: Yanyan Zou
链接:https://arxiv.org/abs/2002.03149
【14】 Blank Language Models
标题:空白语言模型
作者: Tianxiao Shen, Tommi Jaakkola
链接:https://arxiv.org/abs/2002.03079
20200206
【4】 K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters
标题:K-Adapter:使用Adapter将知识注入预先训练的模型
作者: Ruize Wang, Ming Zhou
链接:https://arxiv.org/abs/2002.01808
【6】 Parsing as Pretraining
标题:解析为预培训
作者: David Vilares, Carlos Gómez-Rodríguez
备注:AAAI 2020 - The Thirty-Fourth AAAI Conference on Artificial Intelligence
链接:https://arxiv.org/abs/2002.01685
20200204
【17】 Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction
标题:预先培训的语言模型是否知道短语?简单但强大的语法归纳基线
作者: Taeuk Kim, Sang-goo Lee
备注:ICLR 2020
链接:https://arxiv.org/abs/2002.00737
【33】 Beat the AI: Investigating Adversarial Human Annotations for Reading Comprehension
标题:打败AI:研究阅读理解中的对抗性人类注释
作者: Max Bartolo, Pontus Stenetorp
备注:21 pages including appendices
20200203
【1】 Pretrained Transformers for Simple Question Answering over Knowledge Graphs
标题:知识图上简单问题回答的预训练变压器
作者: D. Lukovnikov, J. Lehmann
链接:https://arxiv.org/abs/2001.11985
【4】 Break It Down: A Question Understanding Benchmark
标题:分解它:一个问题理解基准
作者: Tomer Wolfson, Jonathan Berant
备注:Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2020. Author’s final version
链接:https://arxiv.org/abs/2001.11770
【5】 Teaching Machines to Converse
标题:教机器反转
作者: Jiwei Li
链接:https://arxiv.org/abs/2001.11701
20200228
【5】 Consciousness and Automated Reasoning
标题:意识与自动推理
作者: Ulrike Barthelmeß, Claudia Schon
链接:https://arxiv.org/abs/2001.09442
【24】 Retrospective Reader for Machine Reading Comprehension
标题:机器阅读理解回溯阅读器
作者: Zhuosheng Zhang, Hai Zhao
链接:https://arxiv.org/abs/2001.09694
20200124
【3】 A Study of the Tasks and Models in Machine Reading Comprehension
标题:机器阅读理解的任务与模式研究
作者: Chao Wang
链接:https://arxiv.org/abs/2001.08635