2021 Autumn Recruitment - Machine Reading Comprehension Notes

Machine Reading Comprehension Notes

Classic Models

Notes

  1. Machine reading comprehension in the post-BERT era

Follow-up

My own paper notes

Already categorized

  • Major labs:
  1. UCL MRC_Group:
  2. AI2:
  3. Microsoft:
  4. THU:
  5. PKU:

Dataset Papers

  1. ROPES | Reasoning Over Paragraph Effects in Situations

    • arXiv: https://arxiv.org/abs/1908.05852
    • Leadboard: https://leaderboard.allenai.org/ropes
    • Note
    • Label: EMNLP2019
  2. CommonsenseQA | CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge

    • arXiv: https://arxiv.org/pdf/1811.00937.pdf
    • Leaderboard: https://www.tau-nlp.org/csqa-leaderboard
    • Note:
    • Label: 2018
  3. CoQA | CoQA: A Conversational Question Answering Challenge

    • arXiv: https://arxiv.org/pdf/1808.07042.pdf
    • Leadboard:https://stanfordnlp.github.io/coqa/
    • Note:
    • Label: 2018
  4. MultiRC | Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences

    • ACL: https://www.aclweb.org/anthology/N18-1023/
    • Leadboard: https://cogcomp.seas.upenn.edu/multirc/
    • Note:
    • Label: NAACL2018
  5. OpenBookQA | Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering

    • arXiv: https://arxiv.org/pdf/1809.02789.pdf
    • Leadboard:
    • Note:
    • Label: 201809
  6. RACE | RACE: Large-scale ReAding Comprehension Dataset From Examinations

    • arXiv: https://arxiv.org/pdf/1704.04683.pdf
    • Leadboard: http://www.qizhexie.com//data/RACE_leaderboard
    • Note:
    • Label: 2017
  7. XCMRC | XCMRC: Evaluating Cross-lingual Machine Reading Comprehension

    • arXiv: https://arxiv.org/pdf/1908.05416.pdf
    • Leadboard/baseline: https://github.com/NLPBLCU/XCMRC
    • Note:
    • Label: cross-lingual | 20190825
  8. CLMRC | Cross-Lingual Machine Reading Comprehension

    • arXiv: https://arxiv.org/pdf/1909.00361.pdf
    • Leadboard/baseline: https://github.com/ymcui/Cross-Lingual-MRC
    • Note:
    • Label: cross-lingual
  9. WINOGRANDE | WINOGRANDE: An Adversarial Winograd Schema Challenge at Scale

    • arXiv: https://arxiv.org/pdf/1907.10641.pdf
    • Leadboard:
    • Note:
    • Label: 201911
  10. HellaSwag | HellaSwag: Can a Machine Really Finish Your Sentence?

    • arXiv: https://arxiv.org/pdf/1905.07830.pdf
    • Leadboard: https://rowanzellers.com/hellaswag/
    • Note:
    • Label: 201905
  11. McTaco | “Going on a vacation” takes longer than “Going for a walk”: A Study of Temporal Commonsense Understanding

    • arXiv: https://arxiv.org/pdf/1909.03065.pdf
    • Leadboard: https://leaderboard.allenai.org/mctaco/submissions/public
    • Note:
    • Label: EMNLP2019 | ai2
  12. Social IQA | SOCIAL IQA: Commonsense Reasoning about Social Interactions

    • arXiv: https://arxiv.org/pdf/1904.09728.pdf
    • Leadboard: https://leaderboard.allenai.org/socialiqa/submissions/public
    • Note:
    • Label: 2019
  13. CosMosQA | Cosmos QA : Machine Reading Comprehension with Contextual Commonsense Reasoning (EMNLP’2019)

    • arXiv: https://arxiv.org/pdf/1909.00277.pdf
    • Leadboard: https://wilburone.github.io/cosmos/
    • Note:
    • Label: emnlp2019
  14. PubMedQA | PubMedQA : A Dataset for Biomedical Research Question Answering

    • arXiv: https://arxiv.org/pdf/1909.06146.pdf
    • Leadboard: https://pubmedqa.github.io
    • Note:
    • Label: 201909
  15. GeoSQA | GeoSQA: A Benchmark for Scenario-based Question Answering in the Geography Domain at High School Level

    • arXiv: https://arxiv.org/pdf/1908.07855.pdf
    • Leadboard:
    • Note:
    • Label: emnlp2019
  16. HEAD-QA | HEAD-QA: A Healthcare Dataset for Complex Reasoning

    • arXiv: https://arxiv.org/abs/1906.04701
    • Leadboard:
    • Note:
    • Label: 2019
  17. ReCoRD | ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension

    • arXiv: https://arxiv.org/pdf/1810.12885.pdf
    • Leadboard: https://sheng-z.github.io/ReCoRD-explorer/
    • Note:
    • Label: 2018
  18. c^3 | Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension

    • arXiv: https://arxiv.org/pdf/1904.09679.pdf
    • Leadboard/Baseline : https://dataset.org/c3/ | https://github.com/AutoAVE/c3
    • Note:
    • Label: 2019 | Tencent, Cornell, UW, AI2
  19. Dream | DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension

    • arXiv: https://arxiv.org/pdf/1902.00164.pdf
    • Leadboard: https://dataset.org/dream/
    • Note:
    • Label: 2019 | Tencent, Cornell, UW, AI2
  20. QAngaroo (WikiHop) | Constructing Datasets for Multi-hop Reading Comprehension Across Documents

    • arXiv: https://arxiv.org/pdf/1710.06481v1.pdf
    • Leadboard: http://qangaroo.cs.ucl.ac.uk/
    • Note:
    • Label: Multi-hop
  21. JEC-QA | JEC-QA: A Legal-Domain Question Answering Dataset

    • arXiv: https://arxiv.org/pdf/1911.12011.pdf
    • Leadboard/baseline: http://jecqa.thunlp.org/
    • Note:
    • Label:
  22. PIQA | PIQA: Reasoning about Physical Commonsense in Natural Language

    • arXiv: https://arxiv.org/pdf/1911.11641.pdf
    • Leadboard: https://yonatanbisk.com/piqa/
    • Note:
    • Label:
  23. TweetQA | TWEETQA: A Social Media Focused Question Answering Dataset

    • arXiv: https://arxiv.org/pdf/1907.06292.pdf
    • Leadboard: https://tweetqa.github.io/
    • Note:
    • Label:
  24. RC-QED | RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension

    • arXiv: https://arxiv.org/pdf/1910.04601.pdf
    • Leadboard/baseline/data:https://naoya-i.github.io/rc-qed/
    • Note:
    • Label:
  25. MLQA | MLQA: Evaluating Cross-lingual Extractive Question Answering

    • arXiv: https://arxiv.org/pdf/1910.07475.pdf
    • Leadboard/baseline:https://github.com/facebookresearch/MLQA
    • Note:
    • Label: 跨语言
  26. QuAC| QuAC : Question Answering in Context

    • arXiv: https://arxiv.org/pdf/1808.07036.pdf
    • Leadboard: http://quac.ai/
    • Note:
    • Label:
  27. CNN/Daily-Mail | Challenging Reading Comprehension on Daily Conversation: Passage Completion on Multiparty Dialog

    • arXiv/acl:https://www.aclweb.org/anthology/N18-1185/
    • Leadboard/baseline:https://github.com/danqi/rc-cnn-dailymail
    • Note:
    • Label:
  28. Google ACL 2019 medical dialogue dataset | Extracting Symptoms and their Status from Clinical Conversations

    • arXiv:https://arxiv.org/pdf/1906.02239.pdf
    • Leaderboard: not open-sourced
    • Note:
    • Label:
  29. NAACL 2019 doctor-patient dialogue dataset | Fast Prototyping a Dialogue Comprehension System for Nurse-Patient Conversations on Symptom Monitoring

    • arXiv: https://arxiv.org/pdf/1903.03530.pdf
    • Leaderboard: not open-sourced
    • Note:
    • Label:
  30. DROP | DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs

    • arXiv: https://arxiv.org/pdf/1903.00161v1.pdf
    • Leadboard: https://leaderboard.allenai.org/drop/submissions/public
    • Note:
    • Label:
  31. HotpotQA | HOTPOTQA: A Dataset for Diverse, Explainable Multi-hop Question Answering

    • arXiv: https://arxiv.org/pdf/1809.09600.pdf
    • Leadboard: https://hotpotqa.github.io/
    • Note:
    • Label:
  32. QuoRef | QUOREF: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning

    • arXiv: https://arxiv.org/pdf/1908.05803.pdf
    • Leadboard: https://leaderboard.allenai.org/quoref/submissions/public
    • Note:
    • Label:
  33. Multi-QA | MULTIQA: An Empirical Investigation of Generalization and Transfer in Reading Comprehension

    • arXiv: https://arxiv.org/pdf/1905.13453.pdf
    • Leadboard:
    • Note:
    • Label: composed of multiple datasets
  34. MRQA2019| MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension

    • arXiv: https://arxiv.org/pdf/1910.09753.pdf
    • Leadboard/baseline: https://github.com/mrqa/MRQA-Shared-Task-2019
    • Note:
    • Label: composed of multiple datasets
  35. ATOMIC | ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning

    • arXiv: https://homes.cs.washington.edu/~msap/atomic/data/sap2019atomic.pdf
    • Leaderboard/data: https://homes.cs.washington.edu/~msap/atomic/
    • Note:
    • Label: external knowledge base | dataset
  36. ORB | ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension

    • arXiv: https://arxiv.org/abs/1912.12598
    • Leadboard: https://leaderboard.allenai.org/orb/submissions/public
    • Note:
    • Label: composed of multiple datasets
  37. ACL 2019 dialogue paper that also releases a dataset | Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading

    • arXiv/ACL: https://www.aclweb.org/anthology/P19-1539.pdf
    • Leadboard/baseline: https://github.com/qkaren/converse_reading_cmr
    • Note:
    • Label: Wikipedia articles + Reddit conversations --> dataset | ACL2019 | dataset
  38. emrQA | emrQA: A Large Corpus for Question Answering on Electronic Medical Records

    • arXiv: https://arxiv.org/pdf/1809.00732.pdf
    • Leadboard/Baseline: https://github.com/panushri25/emrQA
    • Note:
    • Label:
  39. ComQA | ComQA: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters

    • arXiv: https://arxiv.org/abs/1809.09528
    • Leadboard:
    • Note:
    • Label: community QA dataset
  40. ConvQuestions | Look before you Hop: Conversational Question Answering over Knowledge Graphs Using Judicious Context Expansion

    • arXiv: https://arxiv.org/pdf/1910.03262.pdf
    • Leadboard: http://qa.mpi-inf.mpg.de/convex/
    • Note:
    • Label: cikm2019
  41. ShARC(Shaping Answers with Rules through Conversation) | Interpretation of Natural Language Rules in Conversational Machine Reading

    • arXiv: https://arxiv.org/pdf/1809.01494.pdf
    • Leadboard: https://sharc-data.github.io/
    • Note:
    • Label: UCL MRC_Group |
  42. DuoRC | DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension

    • arXiv: https://arxiv.org/pdf/1804.07927.pdf
    • Leadboard: https://duorc.github.io/
    • Note:
    • Label: 2018
  43. MCScript| MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge

    • arXiv:https://arxiv.org/abs/1803.05223
    • Leadboard:
    • Note:
    • Label:
  44. SQuAD2.0 | Know What You Don’t Know: Unanswerable Questions for SQuAD

    • arXiv: https://arxiv.org/pdf/1806.03822.pdf
    • Leadboard:https://rajpurkar.github.io/SQuAD-explorer/
    • Note:
    • Label:
  45. RecipeQA | RecipeQA: A Challenge Dataset for Multimodal Comprehension of Cooking Recipes

    • arXiv:https://arxiv.org/pdf/1809.00812.pdf
    • Leadboard: https://hucvl.github.io/recipeqa/
    • Note:
    • Label:
  46. SWAG | SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference

    • arXiv: https://arxiv.org/abs/1808.05326
    • Leadboard: https://leaderboard.allenai.org/swag/submissions/public
    • Note:
    • Label:
  47. TextWordsQA | Multi-Relational Question Answering from Narratives: Machine Reading and Reasoning in Simulated Worlds

    • arXiv/ACL:https://www.aclweb.org/anthology/P18-1077.pdf
    • Leadboard: https://igorlabutov.github.io/textworldsqa.github.io/
    • Note:
    • Label:
  48. CLOTH | Large-scale Cloze Test Dataset Created by Teachers

    • arXiv: https://arxiv.org/abs/1711.03225
    • Leadboard: http://www.qizhexie.com/data/CLOTH_leaderboard.html
    • Note:
    • Label: 2017
  49. CLiCR | CliCR: A Dataset of Clinical Case Reports for Machine Reading Comprehension

    • arXiv: https://arxiv.org/abs/1803.09720
    • Leadboard/Baseline: https://github.com/clips/clicr
    • Note:
    • Label: 2018
  50. DuReader | DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications

    • arXiv:https://arxiv.org/abs/1711.05073
    • Leadboard/baseline:https://github.com/baidu/DuReader
    • Note:
    • Label: Chinese | 2017
  51. NarrativeQA | The NarrativeQA Reading Comprehension Challenge

    • arXiv:https://arxiv.org/abs/1712.07040
    • Leadboard:
    • Note:
    • Label: google 2017
  52. Who did What | Who did What: A Large-Scale Person-Centered Cloze Dataset

    • arXiv: https://arxiv.org/abs/1608.05457
    • Leadboard: https://tticnlp.github.io/who_did_what/leaderBoard.html
    • Note:
    • Label: 2016
  53. SearchQA | SearchQA: A New Q&A Dataset
    Augmented with Context from a Search Engine

    • arXiv: https://arxiv.org/pdf/1704.05179.pdf
    • Leadboard/baseline: https://github.com/nyu-dl/dl4ir-searchqA
    • Note:
    • Label: 2017
  54. TriviaQA | TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension

    • arXiv:https://arxiv.org/abs/1705.03551
    • Leadboard/baseline: http://nlp.cs.washington.edu/triviaqa/
    • Note:
    • Label: 2017
  55. Quasar| Quasar: Datasets for Question Answering by Search and Reading

    • arXiv:https://arxiv.org/abs/1707.03904
    • Leadboard/baseline: https://github.com/bdhingra/quasar
    • Note:
    • Label: 2017
  56. bAbi | TOWARDS AI-COMPLETE QUESTION ANSWERING: A SET OF PREREQUISITE TOY TASKS

    • arXiv:https://arxiv.org/pdf/1502.05698.pdf
    • Leadboard: https://github.com/facebook/bAbI-tasks
    • Note: CBT
    • Label: 2015
  57. CBT(Children’s Books Tests) | The Goldilocks Principle: Reading Children’s Books with Explicit Memory Representations

    • arXiv:https://arxiv.org/abs/1511.02301
    • Leadboard:
    • Note:
    • Label: bAbi
  58. MS_MARCO | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset

    • arXiv:https://arxiv.org/pdf/1611.09268.pdf
    • Leadboard: http://www.msmarco.org/leaders.aspx
    • Note:
    • Label: 2018
  59. NewsQA | NewsQA: A Machine Comprehension Dataset

    • arXiv:https://arxiv.org/abs/1611.09830
    • Leadboard: https://www.microsoft.com/en-us/research/project/newsqa-dataset/
    • Note:
    • Label: 2017
  60. LAMBADA | The LAMBADA dataset: Word prediction requiring a broad discourse context

    • arXiv: https://arxiv.org/abs/1606.06031
    • Leadboard:
    • Note:
    • Label: 2016
  61. SCT(Story Cloze Test) | A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories

    • arXiv:https://arxiv.org/abs/1604.01696
    • Leadboard/Baseline: https://www.cs.rochester.edu/nlp/rocstories/
    • Note:
    • Label:
  62. CMRC |

    • arXiv/URL: http://www.lrec-conf.org/proceedings/lrec2018/pdf/32.pdf
    • Leaderboard:
    • Note:
    • Label: Chinese | HIT & iFLYTEK
  63. ROCStories:A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories

    • arXiv:https://arxiv.org/abs/1604.01696?context=cs.AI
    • Leadboard: https://cs.rochester.edu/nlp/rocstories/
    • Note: A corpus of everyday-life stories with rich causal and temporal relations between events; an ideal resource for learning commonsense.
    • Label: 2016 | Microsoft |
  64. SherLIiC:SherLIiC: A Typed Event-Focused Lexical Inference Benchmark for Evaluating Natural Language Inference

    • arXiv:https://arxiv.org/pdf/1906.01393v1.pdf
    • Leadboard:https://github.com/mnschmit/SherLIiC
    • Note:
    • Label:
  65. AlphaNLI:ABDUCTIVE COMMONSENSE REASONING

  • arXiv:https://arxiv.org/pdf/1908.05739.pdf
  • Repo:
  • Leadboard: https://leaderboard.allenai.org/anli/submissions/get-started
  • Sum: Abductive reasoning: given two observations as context and a set of candidate hypotheses, choose the correct one.
  • Note:
  • Label: pass
  1. Story Commonsense: Modeling Naive Psychology of Characters in Simple Commonsense Stories
    • arXiv:https://www.aclweb.org/anthology/P18-1213.pdf
    • Leadboard: https://uwnlp.github.io/storycommonsense/
    • Note:
    • Label:
  2. ARCT: The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants
    • arXiv:https://arxiv.org/abs/1708.01425
    • Leadboard: https://github.com/UKPLab/argumentreasoning-comprehension-task
    • Note:
    • Label: NAACL2018
  3. ARC: AI2 Reasoning Challenge (ARC)
    • arXiv:https://arxiv.org/pdf/1803.05457.pdf
    • Leadboard: https://data.allenai.org/arc/
    • Note:
    • Label:
  4. ProPara | Tracking State Changes in Procedural Text: A Challenge Dataset and Models for Process Paragraph Comprehension
    • arXiv:https://arxiv.org/abs/1805.06975
    • Leadboard: https://data.allenai.org/propara/ http://data.allenai.org/propara/#leaderboard
    • Note:
    • Label:
  5. Triangle-COPA |
    • arXiv:
    • Leadboard:
    • Note:
    • Label:
  6. SciTail | SCITAIL: A Textual Entailment Dataset from Science Question Answering
    • arXiv:https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17368/16067
    • Leadboard: http://data.allenai.org/scitail
    • Note:
    • Label:
  7. OCI dataset - commonsense NLI | Ordinal Common-sense Inference
    • arXiv:https://arxiv.org/pdf/1611.00601.pdf
    • Leaderboard: http://decomp.net/
    • Note:
    • Label: NLI | JHU |
  8. CEAC| Emotion Action Detection and Emotion Inference: the Task and Dataset
    • arXiv: https://arxiv.org/pdf/1903.06901.pdf
    • Leadboard:
    • Note:
    • Label: combines emotion and events; emotion inference and emotion-action detection | Chinese
  9. ASQ | Asking the Right Question: Inferring Advice-Seeking Intentions from Personal Narratives
    • arXiv:https://arxiv.org/pdf/1904.01587.pdf
    • Leaderboard: https://github.com/CornellNLP/ASQ
    • Note:
    • Sum: Given a personal narrative, infer the advice-seeking intention behind it. Binary choice: what is the narrative actually asking about?
    • Label: Cornell University
  10. ARC dataset | Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge
    • arXiv:https://arxiv.org/pdf/1803.05457.pdf
    • Leaderboard: http://data.allenai.org/arc
    • Note:
    • Sum: Commonsense reasoning in QA form, with no supporting passage. | Q: Which property of a mineral can be determined just by looking at it? A: luster B: mass C: weight D: hardness
    • Label: AI2 | commonsense reasoning
  11. Argument reasoning task | The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants
    • arXiv:https://arxiv.org/pdf/1708.01425.pdf
    • Leaderboard: https://github.com/UKPLab/argument-reasoning-comprehension-task
    • Note:
    • Sum: Given two candidate warrants + one Reason + one Claim, choose which warrant (W0/W1) lets the Reason support the Claim (R + W --> Claim).
    • Example - R: Miss America awards scholarships; Claim: Miss America is good for women; W0 [correct]: scholarships give women the chance to study; W1: scholarships make women leave home.
    • Label:
  12. Modeling Naive Psychology of Characters in Simple Commonsense Stories
    • arXiv:https://www.aclweb.org/anthology/P18-1213.pdf
    • Leadboard:https://uwnlp.github.io/storycommonsense/
    • Note:
    • Sum: Tracks characters' emotional-state changes in stories; requires commonsense to solve.
    • Label: AI2 | ACL2018 | not very useful
  13. SP-10K: A Large-scale Evaluation Set for Selectional Preference Acquisition
    • arXiv:https://arxiv.org/pdf/1906.02123.pdf
    • Leaderboard: https://github.com/HKUST-KnowComp/SP-10K
    • Note:
    • Sum: A dataset / knowledge resource for commonsense selectional preference over common American-English verbs, nouns, and adjectives.
    • Label: HKUST | 20190314 |

Model Papers - QA & MRC

  1. BiDAF |

    • arXiv:
    • Repo:
    • Note:
    • Label:
  2. QANet |

    • arXiv:
    • Repo:
    • Note:
    • Label:
  3. EPAr | Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension

    • arXiv:https://arxiv.org/pdf/1906.05210.pdf
    • Repo:https://github.com/jiangycTarheel/EPAr
    • Note:
    • Label:
  4. model | Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs

    • arXiv:https://arxiv.org/pdf/1910.08435.pdf
    • Repo:
    • Sum: Builds a local knowledge graph for each question (a minimal sketch appears after this list).
    • Note:
    • Label:
  5. model | Self-Assembling Modular Networks for Interpretable Multi-Hop Reasoning

    • arXiv:https://arxiv.org/pdf/1909.05803.pdf
    • Repo:https://github.com/jiangycTarheel/NMN-MultiHopQA
    • Note:
    • Sum:
    • Label: HotpotQA |
  6. DFGN | Dynamically Fused Graph Network for Multi-hop Reasoning

    • arXiv:https://www.aclweb.org/anthology/P19-1617.pdf
    • Repo: https://github.com/woshiyyya/DFGN-pytorch
    • Sum:
    • Note:
    • Label: HotpotQA | acl2019 |
  7. model | Improving the Robustness of Deep Reading Comprehension Models by Leveraging Syntax Prior

    • arXiv:https://www.aclweb.org/anthology/D19-5807.pdf
    • Repo:
    • Sum: Uses linguistic prior knowledge (syntax) to improve the robustness of MRC models.
    • Note:
    • Label:
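
A minimal sketch of the "local knowledge graph per question" idea from item 4 above (Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs). This is an illustration only, not the paper's pipeline: `extract_triples` is a toy stand-in for any open-IE style extractor, and the filtering rule (keep triples that share a token with the question) is an assumption made just to keep the example self-contained.

```python
from collections import defaultdict

def extract_triples(sentence):
    # Toy stand-in for an open-IE extractor: treat "X is Y" / "X has Y" as triples.
    words = sentence.split()
    for i, w in enumerate(words):
        if w in ("is", "has") and 0 < i < len(words) - 1:
            yield (" ".join(words[:i]), w, " ".join(words[i + 1:]))

def build_local_graph(question, passages):
    """Keep only triples whose head or tail shares a token with the question."""
    q_tokens = set(question.lower().split())
    graph = defaultdict(list)  # head -> [(relation, tail), ...]
    for passage in passages:
        for sent in passage.split("."):
            for head, rel, tail in extract_triples(sent.strip()):
                if q_tokens & set(head.lower().split()) or q_tokens & set(tail.lower().split()):
                    graph[head].append((rel, tail))
    return dict(graph)

if __name__ == "__main__":
    docs = ["The Amazon is a river in South America.", "The Amazon has pink dolphins."]
    print(build_local_graph("Where is the Amazon river", docs))
```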

Models & Analysis - BERT and Pre-trained Model Analysis

  1. Bert |
    • arXiv:
    • Repo:
    • Note:
    • Label:
  2. ALBert | ALBERT: A LITE BERT FOR SELF-SUPERVISED
    LEARNING OF LANGUAGE REPRESENTATIONS
    • arXiv: https://openreview.net/pdf?id=H1eA7AEtvS
    • Repo:
    • Note:
    • Label:
  4. Bert | Commonsense Knowledge Mining from Pretrained Models
    • arXiv:https://arxiv.org/pdf/1909.00505.pdf
    • Repo:
    • Sum:
    • Note:

Knowledge & Commonsense Reasoning

  • Main groups to follow: AllenAI / AI2, UCL, Alibaba, Tencent, Baidu, THU, PKU, CAS, UW, MIT, UIUC, Stanford
  1. SG-Net | SG-Net: Syntax-Guided Machine Reading Comprehension

    • arXiv: https://arxiv.org/pdf/1908.05147.pdf
    • Repo:
    • Sum: Uses syntactic knowledge to guide the attention mask computed on top of BERT (a minimal sketch appears at the end of this section).
    • Note:
    • Label: SJTU | 20191120
  2. GapQA | What’s Missing: A Knowledge Gap Guided Approach for Multi-hop Question Answering

    • arXiv: https://arxiv.org/pdf/1909.09253.pdf
    • Repo: https://github.com/allenai/missing-fact
    • Sum: Defines the notion of a knowledge_gap and uses commonsense to fill it.
    • Note:
    • Label:allenai.org | OpenBookQA | AAAI2020 |
  3. Kagnet | KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning

    • arXiv: https://arxiv.org/pdf/1909.02151.pdf
    • Repo: https://github.com/INK-USC/KagNet
    • Sum:
    • Note:
    • Label: SJTU & USC | 20190904 | ConceptNet --> CommonsenseQA |
  4. AMS | Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models

    • arXiv: https://arxiv.org/pdf/1908.06725.pdf
    • Repo:
    • Sum:
    • Note:
    • Label: Alibaba & CAS | 20191112 | CommonsenseQA, Winograd Schema Challenge |
  5. Incorporating Relation Knowledge into Commonsense Reading Comprehension with Multi-task Learning

    • arXiv: https://arxiv.org/pdf/1908.04530.pdf
    • Repo:
    • Sum:
    • Note:
    • Label: Alibaba DAMO Academy | 20190905 | SemEval-2018 Task 11 and the Story Cloze Test
  6. | Incorporating Structured Commonsense Knowledge in Story Completion

    • arXiv: https://arxiv.org/pdf/1811.00625.pdf
    • Repo:
    • Sum:
    • Note:
    • Label: 20181101 | ROCStory Cloze Task | Tencent AI Lab & USC |
  7. KEAG | Incorporating External Knowledge into Machine Reading for Generative Question Answering

    • arXiv: https://www.aclweb.org/anthology/D19-1255.pdf
    • Repo:
    • Sum:
    • Note:
    • Label: Alibaba | EMNLP2019 | MS_MARCO |
  8. | Commonsense for Generative Multi-Hop Question Answering Tasks

    • arXiv: https://arxiv.org/pdf/1809.06309.pdf
    • Repo: https://github.com/yicheng-w/CommonSenseMultiHopQA
    • Sum:
    • Note:
    • Label: 20190701| EMNLP2018 | unc | multihop generative task (NarrativeQA) - WikihopQA |
  9. K-Bert | K-BERT: Enabling Language Representation with Knowledge Graph

  • arXiv: https://arxiv.org/pdf/1909.07606.pdf
  • Repo: https://github.com/autoliuweijie/K-BERT
  • Sum: No summary needed.
  • Note:
  • Label: AAAI2020 | Tencent & PKU |
  1. | Knowledge Infused Learning (K-IL): Towards Deep Incorporation of Knowledge in Deep Learning
  • arXiv: https://arxiv.org/pdf/1912.00512.pdf
  • Repo:
  • Sum:
  • Note:
  • Label:
  1. SKG | Machine Reading Comprehension Using Structural Knowledge Graph-aware Network
  • arXiv: https://www.aclweb.org/anthology/D19-1602.pdf
  • Repo:
  • Sum:
  • Note:
  • Label: qdl | EMNLP2019-Short | Sota-ReCoRD
  1. KAR | Explicit Utilization of General Knowledge in Machine Reading Comprehension
  • arXiv: https://arxiv.org/pdf/1809.03449.pdf
  • Repo:
  • Sum: Knowledge-aided reading comprehension.
  • Note:
  • Label: 20190520 | SQuAD | WordNet
  1. | Graph-Based Reasoning over Heterogeneous External Knowledge for Commonsense Question Answering
  • arXiv: https://arxiv.org/pdf/1909.05311.pdf
  • Repo:
  • Sum:
  • Note:
  • Label: 20190909 | Microsoft, PKU & IIE (CAS) | CommonsenseQA
  15. Cos-E / CAGE | Explain Yourself! Leveraging Language Models for Commonsense Reasoning
  • arXiv: https://www.aclweb.org/anthology/P19-1487.pdf
  • Repo: https://github.com/nazneenrajani/CoS-E
  • Sum:
  • Note:
  • Label: acl2019 | CommonSenseQA | salesforce
  1. Taxonomical hierarchy of canonicalized relations from multiple
  • arXiv: https://arxiv.org/pdf/1909.06249.pdf
  • Repo: https://github.com/akshayparakh25/relationhierarchy
  • Sum: Merges knowledge from different sources into a unified format.
  • Note:
  • Label: 20191112 | DBPedia-Wikidata |
  1. KAAS | Knowledge-Enhanced Attentive Learning for Answer Selection in Community Question Answering Systems
  • arXiv: https://arxiv.org/abs/1912.07915
  • Repo: https://sites.google.com/view/jingfengshi/home/blog/code
  • Sum:
  • Note:
  • Label: ***
  1. KRL | Integrating Graph Contextualized Knowledge into Pre-trained Language Models
  • arXiv: https://arxiv.org/pdf/1912.00147.pdf
  • Repo:
  • Sum:
  • Note:
  • Label: Huawei & CAS | 20191203 | compared with TransE | another variant along the lines of K-BERT
  1. Commonsense Reasoning Using WordNet and SUMO: a Detailed Analysis
  • arXiv: https://arxiv.org/pdf/1909.02314.pdf
  • Repo:
  • Sum:
  • Note:
  • Label: 20190906 | analysis of commonsense reasoning |
  1. Survey of commonsense reasoning | Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches
    • Original title: Commonsense Reasoning for Natural Language Understanding: A Survey of Benchmarks, Resources, and Approaches.
  • arXiv: https://arxiv.org/pdf/1904.01172.pdf
  • Repo:
  • Sum:
  • Note:
  • Label: survey of NLI / NLU / reasoning
  1. Everything Happens for a Reason: Discovering the Purpose of Actions in Procedural Text
  • arXiv: https://arxiv.org/pdf/1909.04745.pdf
  • Repo:
  • Sum:
  • Note:
  • Label: ProPara dataset
  1. | Can a Gorilla Ride a Camel? Learning Semantic Plausibility from Text
  • arXiv: https://arxiv.org/pdf/1911.05689.pdf
  • Repo:
  • Sum:
  • Note:
  • Label: builds semantic plausibility relations from text for commonsense reasoning
  1. | Event Representation Learning Enhanced with External Commonsense Knowledge
  • arXiv: https://arxiv.org/pdf/1909.05190.pdf
  • Repo: https://github.com/MagiaSN/CommonsenseERL
  • Sum: Enhances event representations with external knowledge such as emotion and sentiment; achieves good results on three event-related datasets.
  • Note:
  • Label:
  1. SenMaking-and-Explanation dataset | Does It Make Sense? And Why? A Pilot Study for Sense Making and Explanation
  • arXiv: https://arxiv.org/pdf/1906.00363.pdf
  • Repo: https://github.com/wangcunxiang/SenMaking-and-Explanation
  • Sum: Previous work only probes commonsense indirectly; this paper proposes a dataset that tests a model's grasp of commonsense directly: given a pair of statements, decide which one makes sense, then for the nonsensical one choose why it is wrong, as the explanation.
  • Note:
  • Label:
  1. AHE | Alignment over Heterogeneous Embeddings for Question Answering
  • arXiv: https://www.aclweb.org/anthology/N19-1274.pdf
  • Repo: https://github.com/vikas95/AHE
  • Sum: Aligns each word in the question and candidate answer with the most similar word in the retrieved supporting paragraphs, and weighs each alignment score by the inverse document frequency of the corresponding question/answer term (a minimal sketch appears at the end of this section).
  • Note:
  • Label: ARC dataset | studies multiple knowledge sources and ways to fuse multiple embeddings, etc.
  1. | Commonsense Reasoning Using WordNet and SUMO: a Detailed Analysis
  • arXiv: https://arxiv.org/pdf/1909.02314.pdf
  • Repo:
  • Sum: Evaluates commonsense reasoning benchmarks and the quality of the knowledge resources involved; analyzes problems with how current commonsense knowledge bases are used.
  • Note:
  • Label:
  1. | Question Answering over Knowledge Graphs via Structural Query Patterns
  • arXiv: https://arxiv.org/pdf/1910.09760.pdf
  • Repo:
  • Sum: KBQA: summarizes structural query patterns of questions to replace semantic parsing.
  • Note:
  • Label:
  1. | Dynamic Knowledge Graph Construction for Zero-shot Commonsense Question Answering
  • arXiv:
  • Repo:
  • Sum: Understanding narrative text requires reasoning about the situations, states, and effects it describes, which in turn requires social commonsense; the hard part is selecting the relevant knowledge given the context and reasoning over it. For zero-shot commonsense QA, the paper casts the task as probabilistic inference over a dynamically generated commonsense graph --> a method that uses COMET to generate contextual commonsense on the fly.
  • Note:
  • Label: Social IQA | StoryCommonSense datasets | 20191110
  1. d | Towards Generalizable Neuro-Symbolic Systems for Commonsense Question Answering
  • arXiv: https://arxiv.org/pdf/1910.14087.pdf
  • Repo:
  • Sum: Studies and summarizes existing approaches to incorporating knowledge.
  • Note:
  • Label:
  1. COMET | Commonsense Knowledge Base Completion with Structural and Semantic Context
  • arXiv: https://arxiv.org/pdf/1910.02915.pdf
  • Repo: github.com/allenai/commonsense-kg-completion
  • Sum: A method for automatic knowledge base construction and completion; automatically generates knowledge graphs.
  • Note:https://blog.csdn.net/sinat_34611224/article/details/94604097
    https://zhuanlan.zhihu.com/p/96219112
  • Label: ACL2019 | AI2 |
  1. Models | LEARNING TO RETRIEVE REASONING PATHS OVER WIKIPEDIA GRAPH FOR QUESTION ANSWERING
  • arXiv: https://arxiv.org/pdf/1911.10470.pdf
  • Repo:
  • Sum: Learns to recursively retrieve reasoning paths for answering open-domain multi-hop questions.
  • Note:
  • Label: AI2 | 20191104 |
  1. d | Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs
  • arXiv:
  • Repo:
  • Sum: Uses a GAN to learn and complete zero-shot relations in knowledge graphs.
  • Note:
  • Label: UCSB (William Yang Wang) | 20200108 |
  1. COMET : Commonsense Transformers for Automatic Knowledge Graph Construction
  • arXiv:
  • Repo:
  • Sum:
  • Note:
  • Label:
  1. Joint Reasoning for Multi-Faceted Commonsense Knowledge
  • arXiv:
  • Repo:
  • Sum:
  • Note:
  • Label:
  1. SenseBERT: Driving Some Sense into BERT
  • arXiv:
  • Repo:
  • Sum:
  • Note:
  • Label:
  1. Knowledge Enhanced Contextual Word Representations
  • arXiv:
  • Repo:
  • Sum:
  • Note:
  • Label:
  1. Improving Question Answering with External Knowledge
  • arXiv:
  • Repo:
  • Sum:
  • Note:
  • Label:
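
Two small sketches for mechanisms noted earlier in this section. Both are illustrations of the general idea under stated assumptions, not the papers' actual implementations.

First, the syntax-guided attention mask mentioned for SG-Net (item 1): restrict each token's attention to itself and its syntactic ancestors, derived from dependency heads, and apply the mask to the attention logits.

```python
import numpy as np

def syntax_mask(heads):
    """heads[i] = index of token i's dependency head (-1 for the root)."""
    n = len(heads)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        j = i
        while j != -1:      # allow attention along the path from token i up to the root
            mask[i, j] = True
            j = heads[j]
    return mask

def masked_attention(scores, mask):
    """scores: (n, n) raw attention logits; positions outside the mask get -inf."""
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)

if __name__ == "__main__":
    heads = [1, 2, -1]                      # "the" -> "cat" -> "sat" -> ROOT
    att = masked_attention(np.random.randn(3, 3), syntax_mask(heads))
    print(att.round(2))
```

Second, the IDF-weighted alignment scoring noted for AHE: align each question/answer token to its most similar supporting-passage token by cosine similarity, then weigh that similarity by the token's inverse document frequency.

```python
import numpy as np

def alignment_score(query_vecs, query_idf, passage_vecs):
    """query_vecs: (m, d) question+answer token embeddings; query_idf: (m,);
    passage_vecs: (n, d) supporting-passage token embeddings."""
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    p = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    sims = q @ p.T                 # (m, n) cosine similarities
    best = sims.max(axis=1)        # best-matching passage token for each query token
    return float((query_idf * best).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(round(alignment_score(rng.normal(size=(4, 8)),
                                np.array([1.2, 0.3, 2.0, 0.8]),
                                rng.normal(size=(10, 8))), 3))
```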

Multi-hop QA

  1. PathNet | Exploiting Explicit Paths for Multi-hop Reading Comprehension
    • arXiv: https://arxiv.org/pdf/1811.01127.pdf
    • Repo: https://github.com/allenai/PathNet
    • Sum:
    • Note:
    • Label: NUS & AllenAI | OpenBookQA | 20190708

Common Knowledge Bases:

  1. ASER: A Large-scale Eventuality Knowledge Graph
    • arXiv:https://arxiv.org/pdf/1905.00270.pdf
    • Repo: https://github.com/HKUST-KnowComp/ASER
    • Sum:
    • Leadboard:
    • Label:
  2. Event2Mind:
    • arXiv: https://arxiv.org/pdf/1805.06939.pdf
    • Leadboard/data:https://uwnlp.github.io/event2mind/
    • Sum:
    • Note:
    • label:
  3. Atomic: ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning
    • arXiv: https://homes.cs.washington.edu/~msap/atomic/data/sap2019atomic.pdf
    • Leaderboard/data: An ATlas Of MachIne Commonsense, https://homes.cs.washington.edu/~msap/atomic/
    • Sum:
    • Note:
    • Label:
  4. Does It Make Sense? And Why? A Pilot Study for Sense Making and Explanation
    • arXiv: https://arxiv.org/pdf/1906.00363.pdf
    • Leadboard/data: https://github.com/wangcunxiang/Sen-Making-and-Explanation
    • Sum: Designs a dataset that directly tests a model's sense-making ability and also tests the reason. E.g.: 1. "put an elephant into the fridge" vs "put a turkey into the fridge" - which one makes sense? 2. Why? with three candidate explanations 1) 2) 3). (A hypothetical instance format is sketched after this list.)
    • Note:
    • Label:
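
To make the sense-making task format above concrete, here is a hypothetical instance in the shape described in the note for item 4. The field names and the distractor explanations are illustrative only, not the released dataset's actual schema.

```python
instance = {
    "statements": [
        "He put a turkey into the fridge.",     # makes sense
        "He put an elephant into the fridge.",  # against common sense
    ],
    "which_makes_sense": 0,
    # For the nonsensical statement, pick the reason why it does not make sense:
    "why_options": [
        "An elephant is much bigger than a fridge.",
        "Elephants are usually gray.",
        "Fridges are used to keep food cold.",
    ],
    "why_answer": 0,
}
```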

AAAI 2020

Reading comprehension: 14/15 | ADD + 0

  1. ReCO: A Large Scale Chinese Reading Comprehension Dataset on Opinion [not yet released]
  2. DCMN+: Dual Co-Matching Network for Multi-choice Reading Comprehension
  • https://arxiv.org/pdf/1908.11511.pdf (revised 20200116)
  1. Unsupervised Domain Adaptation on Reading Comprehension
  2. Generating Well-formed Answers by Machine Reading with Stochastic Selector Networks
  3. A Robust Adversarial Training Approach to Machine Reading Comprehension
  4. TextScanner: Reading Characters in Order for Robust Scene Text Recognition
  5. SG-Net: Syntax-Guided Machine Reading Comprehension
  6. Multi-Task Learning with Generative Adversarial Training for Multi-Passage Machine Reading
    Comprehension
  8. MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension
  9. Co-Attention Hierarchical Network: Generating Coherent Long Distractors for Reading Comprehension
  • https://arxiv.org/pdf/1911.08648.pdf (new version 20191120)
  12. Assessing the Benchmarking Capacity of Machine Reading Comprehension Datasets
  • https://arxiv.org/pdf/1911.09241.pdf
  1. Translucent Answer Predictions in Multi-Hop Reading Comprehension IBM
  2. Select, Answer and Explain: Interpretable Multi-hop Reading Comprehension over Multiple Documents
  • https://arxiv.org/pdf/1911.00484.pdf 20191122

multi-hop 1/4 add

  1. Knowledge Graph Alignment Network with Gated Multi-hop Neighborhood Aggregation

reason

  1. Graph-Based Reasoning over Heterogeneous External Knowledge for Commonsense Question Answering
  2. Differentiable Reasoning on Large Knowledge Bases and Natural Language
  • https://arxiv.org/pdf/1912.10824.pdf
  3. Coordinated Reasoning for Cross-Lingual Knowledge Graph Alignment
  4. PIQA: Reasoning about Physical Commonsense in Natural Language (dataset)

Question

  1. An Empirical Study of Content Understanding in Conversational Question Answering (released)
  2. Improving Knowledge-aware Dialogue Generation via Knowledge Base Question Answering (released)
  3. Asking the Right Questions to the Right Users: Active Learning with Imperfect Oracles
  4. JEC-QA: A Legal-Domain Question Answering Dataset (dataset, released) | THU
  5. Iteratively Questioning and Answering for Interpretable Legal Judgment Prediction THU
  6. Knowledge and Cross-Pair Pattern Guided Semantic Matching for Question Answering
  7. Neural Question Generation with Answer Pivot
  8. Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks
  9. QASC: A Dataset for Question Answering via Sentence Composition | AI2
    https://arxiv.org/pdf/1910.11473.pdf

Answering:

  1. Hashing based Answer Selection
  2. Capturing Sentence Relations for Answer Sentence Selection with Multi-Perspective Graph Encoding TZX
  3. Hypothetical Answers to Continuous Queries over Data Streams
  4. Joint Learning of Answer Selection and Answer Summary Generation in Community Question Answering | Amazon

2020: not yet organized


20200508

None

20200507

【6】 Harvesting and Refining Question-Answer Pairs for Unsupervised QA
标题:收集和精炼无监督QA的问答对
作者: Zhongli Li, Ke Xu
备注:Accepted by ACL-20
链接:https://arxiv.org/abs/2005.02925

20200506

【19】 Probabilistic Assumptions Matter: Improved Models for Distantly-Supervised Document-Level Question Answering
标题:概率假设很重要:用于远程监督的文档级问题回答的改进模型
作者: Hao Cheng, Kristina Toutanova
备注:ACL2020
链接:https://arxiv.org/abs/2005.01898

20200505
【12】 To Test Machine Comprehension, Start by Defining Comprehension
标题:要测试机器理解力,请从定义理解力开始
作者: Jesse Dunietz, David Ferrucci
备注:9 pages; 3 figures; 1 table. To be published in the Theme track of ACL 2020
链接:https://arxiv.org/abs/2005.01525

【27】 Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering
标题:基于无监督对齐的迭代证据检索多跳问答
作者: Vikas Yadav, Mihai Surdeanu
备注:Accepted at ACL 2020 as a long conference paper
链接:https://arxiv.org/abs/2005.01218

【44】 How Does Selective Mechanism Improve Self-Attention Networks?
标题:选择机制如何改善自我注意网络?
作者: Xinwei Geng, Zhaopeng Tu
备注:ACL 2020
链接:https://arxiv.org/abs/2005.00979

【67】 Teaching Machine Comprehension with Compositional Explanations
标题:用构图讲解进行机器理解教学
作者: Qinyuan Ye, Xiang Ren
链接:https://arxiv.org/abs/2005.0080

【68】 Treebank Embedding Vectors for Out-of-domain Dependency Parsing
标题:用于域外依赖分析的树库嵌入向量
作者: Joachim Wagner, Jennifer Foster
备注:Camera ready for ACL 2020
链接:https://arxiv.org/abs/2005.00800

【71】 Measuring and Reducing Non-Multifact Reasoning in Multi-hop Question Answering
标题:多跳问答中非多事实推理的度量和约简
作者: Harsh Trivedi, Ashish Sabharwal
链接:https://arxiv.org/abs/2005.00789

【74】 ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning
标题:ProtoQA:一个用于原型常识推理的问答数据集
作者: Michael Boratko, Andrew McCallum
链接:https://arxiv.org/abs/2005.00771

【98】 Contrastive Self-Supervised Learning for Commonsense Reasoning
标题:用于常识推理的对比自监督学习
作者: Tassilo Klein, Moin Nabi
备注:To appear at ACL2020
链接:https://arxiv.org/abs/2005.00669

【104】 Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering
标题:面向知识型问答的可扩展多跳关系推理
作者: Yanlin Feng, Xiang Ren
链接:https://arxiv.org/abs/2005.00646

【112】 Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?
标题:使用自然语言理解的预训练模型进行中间任务迁移学习:何时和为什么起作用?
作者: Yada Pruksachatkun, Samuel R. Bowman
备注:ACL 2020
链接:https://arxiv.org/abs/2005.00628

【121】 Clinical Reading Comprehension: A Thorough Analysis of the emrQA Dataset
标题:临床阅读理解:emrQA数据集的透彻分析
作者: Xiang Yue, Huan Sun
备注:Accepted by ACL 2020
链接:https://arxiv.org/abs/2005.00574

20200504
【3】 SciREX: A Challenge Dataset for Document-Level Information Extraction
标题:SciREX:一个文档级信息抽取的挑战数据集
作者: Sarthak Jain, Iz Beltagy
备注:ACL2020 Camera Ready Submission, Work done by first authors while interning at AI2
链接:https://arxiv.org/abs/2005.00512

【7】 MedType: Improving Medical Entity Linking with Semantic Type Prediction
标题:MedType:利用语义类型预测改进医学实体链接
作者: Shikhar Vashishth, Carolyn Rose
链接:https://arxiv.org/abs/2005.00460

【11】 Topological Sort for Sentence Ordering
标题:句子排序的拓扑排序
作者: Shrimai Prabhumoye, Alan W Black
备注:Will be published at the Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL) 2020
链接:https://arxiv.org/abs/2005.00432

【16】 XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
标题:XCOPA:一个用于因果常识推理的多语言数据集
作者: Edoardo Maria Ponti, Anna Korhonen
链接:https://arxiv.org/abs/2005.00333

【19】 Self-supervised Knowledge Triplet Learning for Zero-shot Question Answering
标题:基于自监督知识三元组学习的零炮问答
作者: Pratyay Banerjee, Chitta Baral
链接:https://arxiv.org/abs/2005.00316

【30】 TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions
标题:TORQUE:时序问题的阅读理解数据集
作者: Qiang Ning, Dan Roth
链接:https://arxiv.org/abs/2005.00242

【31】 Biomedical Entity Representations with Synonym Marginalization
标题:同义词边缘化的生物医学实体表征
作者: Mujeen Sung, Jaewoo Kang
备注:ACL 2020
链接:https://arxiv.org/abs/2005.00239

【34】 Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks
标题:评估神经机器理解模型对噪声输入和对抗攻击的鲁棒性
作者: Winston Wu, Svitlana Volkova
链接:https://arxiv.org/abs/2005.00190

【39】 Information Seeking in the Spirit of Learning: a Dataset for Conversational Curiosity
标题:在学习精神中寻找信息:一个对话好奇心的数据集
作者: Pedro Rodriguez, Zhiguang Wang
链接:https://arxiv.org/abs/2005.00172

【42】 Attend to Medical Ontologies: Content Selection for Clinical Abstractive Summarization
标题:关注医学本体论:临床文摘的内容选择
作者: Sajad Sotudeh, Ross W. Filice
备注:Accepted to ACL 2020
链接:https://arxiv.org/abs/2005.00163

链接:https://arxiv.org/abs/2005.00048

【60】 Progressively Pretrained Dense Corpus Index for Open-Domain Question Answering
标题:用于开放领域问答的渐进式预训练密集语料库索引
作者: Wenhan Xiong, William Yang Wang
链接:https://arxiv.org/abs/2005.00038

【67】 Bipartite Flat-Graph Network for Nested Named Entity Recognition
标题:用于嵌套命名实体识别的二部平图网络
作者: Ying Luo, Hai Zhao
备注:Accepted by ACL2020
链接:https://arxiv.org/abs/2005.00436

20200501

【41】 STARC: Structured Annotations for Reading Comprehension
标题:Starc:阅读理解的结构化注释
作者: Yevgeni Berzak, Roger Levy
备注:ACL 2020. OneStopQA dataset, STARC guidelines and human experiments data are available at this https URL
链接:https://arxiv.org/abs/2004.14797

【42】 Character-Level Translation with Self-attention
标题:自我关注的字符级翻译
作者: Yingqiang Gao, Richard H.R. Hahnloser
备注:ACL 2020
链接:https://arxiv.org/abs/2004.14788

【47】 Named Entity Recognition without Labelled Data: A Weak Supervision Approach
标题:无标记数据的命名实体识别:一种弱监督方法
作者: Pierre Lison, Samia Touileb
备注:Accepted to ACL 2020 (long paper)
链接:https://arxiv.org/abs/2004.14723

【55】 Robust Question Answering Through Sub-part Alignment
标题:通过子部分对齐进行稳健的问题回答
作者: Jifan Chen, Greg Durrett
链接:https://arxiv.org/abs/2004.14648

【61】 Look at the First Sentence: Position Bias in Question Answering
标题:看第一句:回答问题时的位置偏见
作者: Miyoung Ko, Jaewoo Kang
链接:https://arxiv.org/abs/2004.14602

【70】 RikiNet: Reading Wikipedia Pages for Natural Question Answering
标题:RikiNet:阅读维基百科自然问答页面
作者: Dayiheng Liu, Nan Duan
备注:Accepted at ACL 2020
链接:https://arxiv.org/abs/2004.14560

【84】 Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition
标题:基于实例的跨度表示学习:通过命名实体识别的案例研究
作者: Hiroki Ouchi, Kentaro Inui
备注:Accepted by ACL2020
链接:https://arxiv.org/abs/2004.14514

20200430
【17】 SubjQA: A Dataset for Subjectivity and Review Comprehension
标题:SubjQA:一个主观性和复习理解的数据集
作者: Johannes Bjerva, Isabelle Augenstein
链接:https://arxiv.org/abs/2004.14283

【22】 Towards Transparent and Explainable Attention Models
标题:走向透明和可解释的注意模型
作者: Akash Kumar Mohankumar, Balaraman Ravindran
备注:Accepted at ACL 2020
链接:https://arxiv.org/abs/2004.14243

【44】 Do Neural Language Models Show Preferences for Syntactic Formalisms?
标题:神经语言模型显示对句法形式的偏好吗?
作者: Artur Kulmizev, Joakim Nivre
备注:ACL 2020
链接:https://arxiv.org/abs/2004.14096

【46】 Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning
标题:预培训(几乎)就是你所需要的一切:常识推理的应用
作者: Alexandre Tamborrino, Louise Naudin
备注:Accepted at ACL 2020
链接:https://arxiv.org/abs/2004.14074

【47】 Enhancing Answer Boundary Detection for Multilingual Machine Reading Comprehension
标题:增强多语种机器阅读理解的答案边界检测
作者: Fei Yuan, Daxin Jiang
备注:ACL 2020
链接:https://arxiv.org/abs/2004.14069

【53】 Multi-choice Dialogue-Based Reading Comprehension with Knowledge and Key Turns
标题:基于知识和关键转折的多项选择式对话阅读理解
作者: Junlong Li, Hai Zhao
链接:https://arxiv.org/abs/2004.13988

【57】 Data Augmentation for Spoken Language Understanding via Pretrained Models
标题:通过预先训练的模型进行口语理解的数据增强
作者: Baolin Peng, Jianfeng Gao
链接:https://arxiv.org/abs/2004.13952

【89】 Every Document Owns Its Structure: Inductive Text Classification via Graph Neural Networks
标题:每个文档都有自己的结构:基于图神经网络的归纳文本分类
作者: Yufeng Zhang, Liang Wang
链接:https://arxiv.org/abs/2004.13826

【92】 Fine-tuning Multi-hop Question Answering with Hierarchical Graph Network
标题:基于层次图网络的微调多跳问答
作者: Guanming Xiong
链接:https://arxiv.org/abs/2004.13821

20200429
【11】 Event Extraction by Answering (Almost) Natural Questions
标题:通过回答(几乎)自然问题的事件提取
作者: Xinya Du, Claire Cardie
链接:https://arxiv.org/abs/2004.13625

【21】 Semantics-Aware Inferential Network for Natural Language Understanding
标题:语义感知的自然语言理解推理网络
作者: Shuailiang Zhang, Junru Zhou
链接:https://arxiv.org/abs/2004.13338

【49】 Conversational Question Answering over Passages by Leveraging Word Proximity Networks
标题:利用单词邻近网络通过段落进行会话问答
作者: Magdalena Kaiser, Gerhard Weikum
备注:SIGIR 2020 Demonstrations
链接:https://arxiv.org/abs/2004.13117

20200428

【3】 SCDE: Sentence Cloze Dataset with High Quality Distractors From Examinations
标题:SCDE:具有高质量检查干扰项的语句完形填空数据集
作者: Xiang Kong, Eduard Hovy
备注:ACL2020
链接:https://arxiv.org/abs/2004.12934

【6】 Synonyms and Antonyms: Embedded Conflict
标题:同义词和反义词:内在冲突
作者: Igor Samenko, Ivan P. Yamshchikov
链接:https://arxiv.org/abs/2004.12835

【7】 LightPAFF: A Two-Stage Distillation Framework for Pre-training and Fine-tuning
标题:LightPAFF:一个用于预训练和微调的两阶段精馏框架
作者: Kaitao Song, Tie-Yan Liu
链接:https://arxiv.org/abs/2004.12817

【24】 Masking as an Efficient Alternative to Finetuning for Pretrained Language Models
标题:掩蔽作为预培训语言模型的微调的有效替代方法
作者: Mengjie Zhao, Hinrich Schütze
链接:https://arxiv.org/abs/2004.12406

【25】 Heterogeneous Graph Neural Networks for Extractive Document Summarization
标题:用于抽取文档摘要的异构图神经网络
作者: Danqing Wang, Xuanjing Huang
备注:Accepted by ACL2020
链接:https://arxiv.org/abs/2004.12393

【28】 Relational Graph Attention Network for Aspect-based Sentiment Analysis
标题:用于基于方面的情感分析的关系图注意力网络
作者: Kai Wang, Rui Wang
备注:To appear at ACL 2020
链接:https://arxiv.org/abs/2004.12362

【37】 MCQA: Multimodal Co-attention Based Network for Question Answering
标题:MCQA:基于多模态协同注意的问答网络
作者: Abhishek Kumar, Dinesh Manocha
链接:https://arxiv.org/abs/2004.12238

【45】 A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for Question Answering Over Dynamic Contexts
标题:一种具有事实、时间和逻辑知识的异构图用于动态环境下的问题回答
作者: Wanjun Zhong, Jian Yin
链接:https://arxiv.org/abs/2004.12057

【48】 Syntactic Data Augmentation Increases Robustness to Inference Heuristics
标题:句法数据增强了推理启发式的健壮性
作者: Junghyun Min, Tal Linzen
备注:ACL 2020
链接:https://arxiv.org/abs/2004.11999

【54】 A Batch Normalized Inference Network Keeps the KL Vanishing Away
标题:批量规范化推理网络使KL消失
作者: Qile Zhu, Dapeng Wu
备注:camera-ready for ACL 2020
链接:https://arxiv.org/abs/2004.12585

20200427

【1】 Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering
标题:基于模板从检索到的句子中生成问题用于改进的无监督问题回答
作者:Alexander R. Fabbri, Bing Xiang
备注:ACL 2020
链接:https://arxiv.org/abs/2004.11892

【2】 Lite Transformer with Long-Short Range Attention
标题:具有长短距离注意的Lite变压器
作者:Zhanghao Wu, Song Han
备注:ICLR 2020. The first two authors contributed equally to this work
链接:https://arxiv.org/abs/2004.11886

【4】 Event-QA: A Dataset for Event-Centric Question Answering over Knowledge Graphs
标题:Event-QA:基于知识图的以事件为中心的问答数据集
作者:Tarcísio Souza Costa, Elena Demidova
链接:https://arxiv.org/abs/2004.11861

【6】 FLAT: Chinese NER Using Flat-Lattice Transformer
标题:平面:使用平面点阵变压器的中国NER
作者:Xiaonan Li, Xuanjing Huang
备注:Accepted to the ACL 2020
链接:https://arxiv.org/abs/2004.11795

【15】 G-DAUG: Generative Data Augmentation for Commonsense Reasoning
标题:G-DAUG:用于常识推理的生成性数据增强
作者:Yiben Yang, Doug Downey
链接:https://arxiv.org/abs/2004.11546

20200424
【1】 Rapidly Bootstrapping a Question Answering Dataset for COVID-19
标题:快速引导COVID-19的问题回答数据集
作者: Raphael Tang, Jimmy Lin
链接:https://arxiv.org/abs/2004.11339

【7】 DuReaderrobust: A Chinese Dataset Towards Evaluating the Robustness of Machine Reading Comprehension Models
标题:DuReaderRobust:一个评估机器阅读理解模型稳健性的中文数据集
作者: Hongxuan Tang, Haifeng Wang
链接:https://arxiv.org/abs/2004.11142

【15】 Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
标题:不要停止培训:使语言模型适应领域和任务
作者: Suchin Gururangan, Noah A. Smith
备注:ACL 2020
链接:https://arxiv.org/abs/2004.10964

【17】 Preserving the Hypernym Tree of WordNet in Dense Embeddings
标题:在稠密嵌入中保持WordNet的Hypernym树
作者: Canlin Zhang, Xiuwen Liu
链接:https://arxiv.org/abs/2004.10863

【18】 Syntactic Structure from Deep Learning
标题:深度学习的句法结构
作者: Tal Linzen, Marco Baroni
链接:https://arxiv.org/abs/2004.10827

20200423

【3】 AmbigQA: Answering Ambiguous Open-domain Questions
标题:AmbigQA:回答含糊的开放领域问题
作者:Sewon Min, Luke Zettlemoyer
链接:https://arxiv.org/abs/2004.10645

【6】 Contextualised Graph Attention for Improved Relation Extraction
标题:用于改进关系提取的上下文图形注意
作者:Angrosh Mandya, Frans Coenen
链接:https://arxiv.org/abs/2004.10624

20200422 None

【3】 Logic-Guided Data Augmentation and Regularization for Consistent Question Answering
标题:用于一致问题回答的逻辑引导的数据增强和正则化
作者: Akari Asai, Hannaneh Hajishirzi
备注:Published as a conference paper at ACL 2020
链接:https://arxiv.org/abs/2004.10157

【5】 Unsupervised Opinion Summarization with Noising and Denoising
标题:基于去噪和去噪的无监督意见总结
作者: Reinald Kim Amplayo, Mirella Lapata
备注:ACL 2020
链接:https://arxiv.org/abs/2004.10150

【16】 Knowledge-Driven Distractor Generation for Cloze-style Multiple Choice Questions
标题:完形填空式多项选择题的知识驱动分心因子生成
作者: Siyu Ren, Kenny Q. Zhu
链接:https://arxiv.org/abs/2004.09853

【25】 Grounding Conversations with Improvised Dialogues
标题:以即兴对话为基础的对话
作者: Hyundong Cho, Jonathan May
备注:ACL2020; 9 pages + 1 page appendix
链接:https://arxiv.org/abs/2004.09544

20200421
【3】 MPNet: Masked and Permuted Pre-training for Language Understanding
标题:MPNet:语言理解的掩蔽和置换预培训
作者: Kaitao Song, Tie-Yan Liu
链接:https://arxiv.org/abs/2004.09297

20200420

【1】 Exploring the Combination of Contextual Word Embeddings and Knowledge Graph Embeddings
标题:探索上下文词嵌入和知识图嵌入的结合
作者:Lea Dieudonat, Esteban Marquer
链接:https://arxiv.org/abs/2004.08371

【8】 Highway Transformer: Self-Gating Enhanced Self-Attentive Networks
标题:公路变压器:自选通增强型自关注网络
作者:Yekun Chai, Xinwen Hou
链接:https://arxiv.org/abs/2004.08178

【10】 Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction
标题:神经关系抽取中句子级表征的语言特征探讨
作者:Christoph Alt, Leonhard Hennig
备注:Accepted at ACL 2020
链接:https://arxiv.org/abs/2004.08134

【23】 Understanding the Difficulty of Training Transformers
标题:了解变压器培训的难度
作者:Liyuan Liu, Jiawei Han
链接:https://arxiv.org/abs/2004.08249

【24】 Geometry-aware Domain Adaptation for Unsupervised Alignment of Word Embeddings
标题:用于单词嵌入的无监督对齐的几何感知的域自适应
作者:Pratik Jawanpuria, Bamdev Mishra
备注:Accepted as a short paper in ACL 2020
链接:https://arxiv.org/abs/2004.08243

【21】 Bridging Anaphora Resolution as Question Answering
标题:桥接回指消解作为问答
作者:Yufang Hou
备注:accepted at ACL2020
链接:https://arxiv.org/abs/2004.07898

【15】 Dialogue-Based Relation Extraction
标题:基于对话的关系抽取
作者:Dian Yu, Dong Yu
备注:To appear in ACL 2020
链接:https://arxiv.org/abs/2004.08056

20200416
【10】 Coreferential Reasoning Learning for Language Representation
标题:语言表征的相关推理学习
作者: Deming Ye, Zhiyuan Liu
链接:https://arxiv.org/abs/2004.06870

【14】 A Simple Yet Strong Pipeline for HotpotQA
标题:一条简单而强大的HotpotQA管道
作者: Dirk Groeneveld, Ashish Sabharwal
链接:https://arxiv.org/abs/2004.06753


20200415
【8】 Jointly Modeling Aspect and Sentiment with Dynamic Heterogeneous Graph Neural Networks
标题:动态异构图神经网络联合建模方面和情感
作者: Shu Liu, Xu Sun
链接:https://arxiv.org/abs/2004.06427

20200414
【1】 Pretrained Transformers Improve Out-of-Distribution Robustness
标题:预先培训的变压器提高了配电网外的稳健性
作者: Dan Hendrycks, Dawn Song
备注:ACL 2020
链接:https://arxiv.org/abs/2004.06100

【2】 Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension
标题:阅读理解中的对抗性强化策略域搜索与跨语言概括
作者: Adyasha Maharana, Mohit Bansal
链接:https://arxiv.org/abs/2004.06076

【11】 From Machine Reading Comprehension to Dialogue State Tracking: Bridging the Gap
标题:从机器阅读理解到对话状态跟踪:弥合鸿沟
作者: Shuyang Gao, Dilek Hakkani-Tur
链接:https://arxiv.org/abs/2004.05827

【24】 Explaining Question Answering Models through Text Generation
标题:通过文本生成解释问答模型
作者: Veronica Latcinnik, Jonathan Berant
链接:https://arxiv.org/abs/2004.05569

【28】 Unsupervised Commonsense Question Answering with Self-Talk
标题:无人监督的自言自语常识问答
作者: Vered Shwartz, Yejin Choi
链接:https://arxiv.org/abs/2004.05483

20200413
【1】 Longformer: The Long-Document Transformer
标题:Longformer:长文档变压器
作者: Iz Beltagy, Arman Cohan
链接:https://arxiv.org/abs/2004.05150

【6】 Molweni: A Challenge Multiparty Dialogues-based Machine Reading Comprehension Dataset with Discourse Structure
标题:Molweni:一个具有语篇结构的基于多方对话的机器阅读理解数据集
作者: Jiaqi Li, Bing Qin
链接:https://arxiv.org/abs/2004.05080

【7】 Overestimation of Syntactic Representationin Neural Language Models
标题:神经语言模型中的高估句法表示
作者: Jordan Kodner, Nitish Gupta
备注:Accepted for publication at ACL 2020
链接:https://arxiv.org/abs/2004.05067

【8】 A New Dataset for Natural Language Inference from Code-mixed Conversations
标题:一种新的基于代码混合会话的自然语言推理数据集
作者: Simran Khanuja, Monojit Choudhury
备注:To appear in CALCS, LREC 2020
链接:https://arxiv.org/abs/2004.05051

【20】 Natural Perturbation for Robust Question Answering
标题:鲁棒问题回答的自然摄动
作者: Daniel Khashabi, Ashish Sabharwal
链接:https://arxiv.org/abs/2004.04849

20200410
【7】 MuTual: A Dataset for Multi-Turn Dialogue Reasoning
标题:Mutual:一个用于多回合对话推理的数据集
作者: Leyang Cui, Ming Zhou
备注:ACL 2020
链接:https://arxiv.org/abs/2004.04494

【8】 Injecting Numerical Reasoning Skills into Language Models
标题:将数值推理技能注入语言模型
作者: Mor Geva, Jonathan Berant
备注:ACL 2020
链接:https://arxiv.org/abs/2004.04487

【17】 Severing the Edge Between Before and After: Neural Architectures for Temporal Ordering of Events
标题:切断前后的边缘:事件时间顺序的神经体系结构
作者: Miguel Ballesteros, Yaser Al-Onaizan
链接:https://arxiv.org/abs/2004.04295

【20】 Asking and Answering Questions to Evaluate the Factual Consistency of Summaries
标题:提问和回答问题以评估摘要的事实一致性
作者: Alex Wang, Mike Lewis
备注:ACL 2020
链接:https://arxiv.org/abs/2004.04228

20200409

【6】 KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation
标题:KdConv:一个面向多轮知识驱动会话的中文多域对话数据集
作者: Hao Zhou, Xiaoyan Zhu
链接:https://arxiv.org/abs/2004.04100

【9】 Self-Attention Gazetteer Embeddings for Named-Entity Recognition
标题:用于命名实体识别的自我注意地名词典嵌入
作者: Stanislav Peshterliev, Imre Kiss
链接:https://arxiv.org/abs/2004.04060

【48】 Guessing What’s Plausible But Remembering What’s True: Accurate Neural Reasoning for Question-Answering
标题:猜测什么是合理的,但记住什么是真的:用于问题回答的精确神经推理
作者: Haitian Sun, William W. Cohen
链接:https://arxiv.org/abs/2004.03658

20200408
【1】 Transformers to Learn Hierarchical Contexts in Multiparty Dialogue for Span-based Question Answering
标题:Transformers用于学习基于跨度的问题回答的多方对话中的分层上下文
作者: Changmao Li, Jinho D. Choi
备注:Accepted by ACL 2020
链接:https://arxiv.org/abs/2004.03561

Entity linking ----
【2】 Entity Linking via Dual and Cross-Attention Encoders https://arxiv.org/abs/2004.03555
标题:通过双重和交叉注意编码器的实体链接
作者: Oshin Agarwal, Daniel M. Bikel
链接:https://arxiv.org/abs/2004.03555

【4】 What do Models Learn from Question Answering Datasets?
标题:模型从问题回答数据集中学到了什么?
作者: Priyanka Sen, Amir Saffari
链接:https://arxiv.org/abs/2004.03490

Medical NER
【9】 Inexpensive Domain Adaptation of Pretrained Language Models: A Case Study on Biomedical Named Entity Recognition
标题:预训练语言模型的廉价领域适配:生物医学命名实体识别的案例研究
作者: Nina Poerner, Hinrich Schütze
链接:https://arxiv.org/abs/2004.03354

【15】 Variational Question-Answer Pair Generation for Machine Reading Comprehension
标题:机器阅读理解的变分问答对生成
作者: Kazutoshi Shinoda, Akiko Aizawa
链接:https://arxiv.org/abs/2004.03238

【25】 A Sentence Cloze Dataset for Chinese Machine Reading Comprehension
标题:一种用于汉语机器阅读理解的句子完形填空数据集
作者: Yiming Cui, Guoping Hu
链接:https://arxiv.org/abs/2004.03116

【26】 Knowledge Fusion and Semantic Knowledge Ranking for Open Domain Question Answering
标题:面向开放领域问答的知识融合和语义知识排序
作者: Pratyay Banerjee, Chitta Baral
链接:https://arxiv.org/abs/2004.03101

【28】 Is Graph Structure Necessary for Multi-hop Reasoning?
标题:多跳推理是否需要图结构?
作者: Nan Shao, Guoping Hu
链接:https://arxiv.org/abs/2004.03096

【31】 Inferential Text Generation with Multiple Knowledge Sources and Meta-Learning
标题:多知识源推理文本生成与元学习
作者: Daya Guo, Ming Zhou
链接:https://arxiv.org/abs/2004.03070

【33】 Information-Theoretic Probing for Linguistic Structure
标题:语言结构的信息论探索
作者: Tiago Pimentel, Ryan Cotterell
备注:Accepted for publication at ACL 2020
链接:https://arxiv.org/abs/2004.03061

【34】 The Role of Pragmatic and Discourse Context in Determining Argument Impact
标题:语用和话语语境在决定辩论影响中的作用
作者: Esin Durmus, Claire Cardie
备注:EMNLP 2019
链接:https://arxiv.org/abs/2004.03034

【37】 Enhancing Review Comprehension with Domain-Specific Commonsense
标题:用特定领域的常识增强复习理解
作者: Aaron Traylor, Wang-Chiew Tan
链接:https://arxiv.org/abs/2004.03020

【39】 Multi-Step Inference for Reasoning Over Paragraphs
标题:段落推理的多步推理
作者: Jiangming Liu, Matt Gardner
链接:https://arxiv.org/abs/2004.02995

【48】 MedDialog: A Large-scale Medical Dialogue Dataset
标题:MedDialog:一个大规模医学对话数据集
作者: Shu Chen, Pengtao Xie
链接:https://arxiv.org/abs/2004.03329

【50】 Multi-Scale Aggregation Using Feature Pyramid Module for Text-Independent Speaker Verification
标题:使用特征金字塔模块进行文本无关说话人确认的多尺度聚合
作者: Youngmoon Jung, Hoirin Kim
备注:Submitted to Interspeech 2020
链接:https://arxiv.org/abs/2004.03194

20200407
【53】 Prerequisites for Explainable Machine Reading Comprehension: A Position Paper
标题:可解释机器阅读理解的先决条件:一份意见书
作者: Saku Sugawara, Akiko Aizawa
链接:https://arxiv.org/abs/2004.01912

【17】 Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games
标题:通过合作博弈学习恢复多跳问答的推理链
作者: Yufei Feng, Xiaodan Zhu
链接:https://arxiv.org/abs/2004.02393

20200406
【9】 R3: A Reading Comprehension Benchmark Requiring Reasoning Processes
标题:R3:需要推理过程的阅读理解基准
作者: Ran Wang, Xinyu Dai
链接:https://arxiv.org/abs/2004.01251

20200403
【1】 Causal Inference of Script Knowledge
标题:脚本知识的因果推理
作者: Noah Weber, Benjamin Van Durme
链接:https://arxiv.org/abs/2004.01174

20200402
【19】 Information Leakage in Embedding Models
标题:嵌入模型中的信息泄漏
作者:Congzheng Song, Ananth Raghunathan
链接:https://arxiv.org/abs/2004.00053

20200327
【3】 Common-Knowledge Concept Recognition for SEVA
标题:SEVA的常识概念识别
作者: Jitin Krishnan, Huzefa Rangwala
链接:https://arxiv.org/abs/2003.11687

20200326
【8】 Vector logic and counterfactuals
标题:向量逻辑与反事实
作者: Eduardo Mizraji
链接:https://arxiv.org/abs/2003.11519

20200325

【6】 ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
标题:ELECTRA:将文本编码器预先培训为鉴别器而不是生成器
作者: Kevin Clark, Christopher D. Manning
备注:ICLR 2020
链接:https://arxiv.org/abs/2003.10555

20200320
【11】 A Corpus of Adpositional Supersenses for Mandarin Chinese
标题:普通话附加上位语料库
作者: Siyao Peng, Nathan Schneider
备注:LREC 2020 camera-ready
链接:https://arxiv.org/abs/2003.08437

20200319
【5】 Pre-trained Models for Natural Language Processing: A Survey
标题:自然语言处理的预训练模型:综述
作者:Xipeng Qiu, Xuanjing Huang
链接:https://arxiv.org/abs/2003.08271

20200317

【9】 A Survey on Contextual Embeddings
标题:语境嵌入研究综述
作者: Qi Liu, Phil Blunsom
链接:https://arxiv.org/abs/2003.07278

20200316
【9】 Heterogeneous Relational Reasoning in Knowledge Graphs with Reinforcement Learning
标题:基于强化学习的知识图异构关系推理
作者:Mandana Saebi, Nitesh Chawla
链接:https://arxiv.org/abs/2003.06050

20200313
【6】 Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking
标题:用简单的神经网络端到端实体链接研究BERT中的实体知识
作者: Samuel Broscheit
备注:Published at CoNLL 2019
链接:https://arxiv.org/abs/2003.05473

20200312
【10】 Multi-task Learning with Multi-head Attention for Multi-choice Reading Comprehension
标题:多项选择式阅读理解的多头注意多任务学习
作者: Hui Wan
链接:https://arxiv.org/abs/2003.04992

【16】 Understanding the Downstream Instability of Word Embeddings
标题:理解单词嵌入的下游不稳定性
作者: Megan Leszczynski, Christopher Ré
备注:In Proceedings of the 3rd MLSys Conference, 2020
链接:https://arxiv.org/abs/2003.04983

【22】 Transformer++
标题:转换器+
作者: Prakhar Thapak, Prodip Hore
链接:https://arxiv.org/abs/2003.04974

20200311
【3】 Undersensitivity in Neural Reading Comprehension
标题:神经阅读理解中的低敏感度
作者: Johannes Welbl, Sebastian Riedel
链接:https://arxiv.org/abs/2003.04808

【8】 GenNet : Reading Comprehension with Multiple Choice Questions using Generation and Selection model
标题:GENet:使用生成和选择模型的多项选择题阅读理解
作者: Vaishali Ingale, Pushpender Singh
链接:https://arxiv.org/abs/2003.04360

【6】 A Framework for Evaluation of Machine Reading Comprehension Gold Standards
标题:机器阅读理解黄金标准评估框架
作者: Viktor Schlegel, Riza Batista-Navarro
备注:In Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC 2020)
链接:https://arxiv.org/abs/2003.04642

【11】 Neuro-symbolic Architectures for Context Understanding
标题:用于语境理解的神经符号体系结构
作者: Alessandro Oltramari, Ruwan Wickramarachchi
备注:In: Ilaria Tiddi, Freddy Lecue, Pascal Hitzler (eds.), Knowledge Graphs for eXplainable AI – Foundations, Applications and Challenges. Studies on the Semantic Web, IOS Press, Amsterdam, 2020. arXiv admin note: text overlap with arXiv:1910.14087
链接:https://arxiv.org/abs/2003.04707

20200310

【16】 Natural Language QA Approaches using Reasoning with External Knowledge
标题:使用外部知识推理的自然语言问答方法
作者: Chitta Baral, Arindam Mitra
链接:https://arxiv.org/abs/2003.03446

20200309

【4】 Practical Annotation Strategies for Question Answering Datasets
标题:一种实用的问答数据集标注策略
作者: Bernhard Kratzwald, Stefan Feuerriegel
链接:https://arxiv.org/abs/2003.03235

20200306
【8】 A Study on Efficiency, Accuracy and Document Structure for Answer Sentence Selection
标题:回答句选择的效率、准确性和文档结构研究
作者: Daniele Bonadiman, Alessandro Moschitti
链接:https://arxiv.org/abs/2003.02349

20200304
【6】 Meta-Embeddings Based On Self-Attention
标题:基于自我注意的元嵌入
作者: Qichen Li, Jian Li
链接:https://arxiv.org/abs/2003.01371

20200228
【1】 Generating Followup Questions for Interpretable Multi-hop Question Answering
标题:为可解释的多跳问题回答生成后续问题
作者: Christopher Malon, Bing Bai
链接:https://arxiv.org/abs/2002.12344

20200227
Entity linking -
【5】 End-to-End Entity Linking and Disambiguation leveraging Word and Knowledge Graph Embeddings
标题:利用单词和知识图嵌入的端到端实体链接和歧义消除
作者: Rostislav Nedelchev, Asja Fischer
链接:https://arxiv.org/abs/2002.11143

【8】 Sparse Sinkhorn Attention
标题:稀疏Sinkhorn注意
作者: Yi Tay, Da-Cheng Juan
链接:https://arxiv.org/abs/2002.11296

20200226
【14】 Exploring BERT Parameter Efficiency on the Stanford Question Answering Dataset v2.0
标题:基于Stanford问答数据集v2.0的BERT参数效率研究
作者: Eric Hulburd
链接:https://arxiv.org/abs/2002.10670

【15】 Differentiable Reasoning over a Virtual Knowledge Base
标题:虚拟知识库上的可微推理
作者: Bhuwan Dhingra, William W. Cohen
备注:ICLR 2020
链接:https://arxiv.org/abs/2002.10640

【17】 On Feature Normalization and Data Augmentation
标题:特征归一化与数据增强
作者: Boyi Li, Kilian Q. Weinberger
链接:https://arxiv.org/abs/2002.11102

20200225
【7】 Word Embeddings Inherently Recover the Conceptual Organization of the Human Mind
标题:词的嵌入内在地恢复了人类思维的概念组织
作者: Victor Swift
链接:https://arxiv.org/abs/2002.10284

【13】 Do Multi-Hop Question Answering Systems Know How to Answer the Single-Hop Sub-Questions?
标题:多跳问答系统知道如何回答单跳子问题吗?
作者: Yixuan Tang, Anthony K.H. Tung
链接:https://arxiv.org/abs/2002.09919

【16】 Unsupervised Question Decomposition for Question Answering
标题:用于问题回答的无监督问题分解
作者: Ethan Perez, Douwe Kiela
链接:https://arxiv.org/abs/2002.09758

【25】 Training Question Answering Models From Synthetic Data
标题:从合成数据训练问答模型
作者: Raul Puri, Bryan Catanzaro
链接:https://arxiv.org/abs/2002.09599

20200221
【2】 How Much Knowledge Can You Pack Into the Parameters of a Language Model?
标题:您可以将多少知识打包到语言模型的参数中?
作者:Adam Roberts, Noam Shazeer
链接:https://arxiv.org/abs/2002.08910

【3】 REALM: Retrieval-Augmented Language Model Pre-Training
标题:领域:检索-增强的语言模型预培训
作者:Kelvin Guu, Ming-Wei Chang
链接:https://arxiv.org/abs/2002.08909

20200220
【13】 Tree-structured Attention with Hierarchical Accumulation
标题:具有分层累积的树状结构注意
作者: Xuan-Phi Nguyen, Richard Socher
备注:ICLR 2020
链接:https://arxiv.org/abs/2002.08046

20200218

【11】 Exploring Neural Models for Parsing Natural Language into First-Order Logic
标题:探索将自然语言解析为一阶逻辑的神经模型
作者: Hrituraj Singh, Balaji Krishnamurthy
链接:https://arxiv.org/abs/2002.06544

20200217

【2】 Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base
标题:符号知识库推理的可伸缩神经方法
作者:William W. Cohen, Matthew Siegler
备注:Also published in ICLR 2020, this https URL (noteId=BJlguT4YPr)
链接:https://arxiv.org/abs/2002.06115

Dialogue systems -
【4】 Dialogue history integration into end-to-end signal-to-concept spoken language understanding systems
标题:对话历史集成到端信号到概念口语理解系统中
作者:Natalia Tomashenko, Yannick Esteve
备注:Accepted for ICASSP 2020 (Submitted: October 21, 2019)
链接:https://arxiv.org/abs/2002.06012

【9】 Transformers as Soft Reasoners over Language
标题:变形金刚作为语言的软推理者
作者:Peter Clark, Kyle Richardson
链接:https://arxiv.org/abs/2002.05867

20200216
【4】 Sparse and Structured Visual Attention
标题:稀疏和结构化的视觉注意
作者: Pedro Henrique Martins, André Martins
链接:https://arxiv.org/abs/2002.05556

20200212
【2】 ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning
标题:ReClor:一个需要逻辑推理的阅读理解数据集
作者: Weihao Yu, Jiashi Feng
备注:to be published in ICLR 2020
链接:https://arxiv.org/abs/2002.04326

【11】 Mining Commonsense Facts from the Physical World
标题:从物理世界中挖掘常识事实
作者: Yanyan Zou
链接:https://arxiv.org/abs/2002.03149

【14】 Blank Language Models
标题:空白语言模型
作者: Tianxiao Shen, Tommi Jaakkola
链接:https://arxiv.org/abs/2002.03079

20200206
【4】 K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters
标题:K-Adapter:使用Adapter将知识注入预先训练的模型
作者: Ruize Wang, Ming Zhou
链接:https://arxiv.org/abs/2002.01808

【6】 Parsing as Pretraining
标题:解析为预培训
作者: David Vilares, Carlos Gómez-Rodríguez
备注:AAAI 2020 - The Thirty-Fourth AAAI Conference on Artificial Intelligence
链接:https://arxiv.org/abs/2002.01685

20200204
【17】 Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction
标题:预先培训的语言模型是否知道短语?简单但强大的语法归纳基线
作者: Taeuk Kim, Sang-goo Lee
备注:ICLR 2020
链接:https://arxiv.org/abs/2002.00737

【33】 Beat the AI: Investigating Adversarial Human Annotations for Reading Comprehension
标题:打败AI:研究阅读理解中的对抗性人类注释
作者: Max Bartolo, Pontus Stenetorp
备注:21 pages including appendices

20200203

【1】 Pretrained Transformers for Simple Question Answering over Knowledge Graphs
标题:知识图上简单问题回答的预训练变压器
作者: D. Lukovnikov, J. Lehmann
链接:https://arxiv.org/abs/2001.11985

【4】 Break It Down: A Question Understanding Benchmark
标题:分解它:一个问题理解基准
作者: Tomer Wolfson, Jonathan Berant
备注:Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2020. Author’s final version
链接:https://arxiv.org/abs/2001.11770

【5】 Teaching Machines to Converse
标题:教机器反转
作者: Jiwei Li
链接:https://arxiv.org/abs/2001.11701

20200228

【5】 Consciousness and Automated Reasoning
标题:意识与自动推理
作者: Ulrike Barthelmeß, Claudia Schon
链接:https://arxiv.org/abs/2001.09442

【24】 Retrospective Reader for Machine Reading Comprehension
标题:机器阅读理解回溯阅读器
作者: Zhuosheng Zhang, Hai Zhao
链接:https://arxiv.org/abs/2001.09694

20200124
【3】 A Study of the Tasks and Models in Machine Reading Comprehension
标题:机器阅读理解的任务与模式研究
作者: Chao Wang
链接:https://arxiv.org/abs/2001.08635
