An Improved Baseline for Sentence-level Relation Extraction


Paper link: An Improved Baseline for Sentence-level Relation Extraction

Abstract & Contribution

Current sentence-level relation extraction (RE) models still fall far short of human performance.

The paper revisits existing models and points out two overlooked aspects:

  1. A relation instance carries several kinds of entity information, such as entity names, spans, and types; existing models do not feed all of this information into the input.
  2. Because any predefined ontology is limited, some relations inevitably fall outside it and are labeled as NA, even though such instances may actually express a much wider variety of semantic relations.

The paper then proposes an improved sentence-level relation extraction model:

  1. Typed entity markers [1] are used to strengthen the entity representations;
  2. Instances that would be classified as NA are handled with confidence-based classification [2]: a confidence threshold is set, and an instance is assigned to NA only if its final score falls below that threshold.

The proposed model reaches 75.0% F1 on the TACRED dataset, 2.3% higher than the previous SOTA model.

Model for RE

The RE model in this paper mainly extends earlier Transformer-based [3] relation extraction models [4].

Entity Representation

To address these aspects of entity representation, the paper compares several representation schemes: entity mask [5], entity marker [6], entity marker (punct) [7], typed entity marker [8], and the typed entity marker (punct) proposed in this paper; concrete, illustrative examples of each format are sketched below.

[Figure 2]

The figure above shows that:

  1. The entity representation proposed in this paper performs best, reaching 74.5% F1;
  2. Introducing newly added special tokens as markers makes RoBERTa perform worse.
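To make the compared schemes concrete, here is an illustrative sketch of how one sentence might be rewritten under each scheme. The exact marker strings (e.g. `[E1]`, `@`, `#`, `<S:PERSON>`, the spelled-out type words) are assumptions based on the cited papers, not a verified reproduction of the authors' preprocessing.

```python
# Illustration (assumed formats) of the entity-representation schemes for
# subj = "Bill Gates" (PERSON) and obj = "Microsoft" (ORGANIZATION).

sentence = "Bill Gates founded Microsoft."

variants = {
    # Entity mask: replace each mention with a type-specific mask token.
    "entity_mask":
        "[SUBJ-PERSON] founded [OBJ-ORGANIZATION].",
    # Entity marker: wrap mentions with new special tokens added to the vocabulary.
    "entity_marker":
        "[E1] Bill Gates [/E1] founded [E2] Microsoft [/E2].",
    # Entity marker (punct): reuse existing punctuation, no new special tokens.
    "entity_marker_punct":
        "@ Bill Gates @ founded # Microsoft #.",
    # Typed entity marker: new special tokens that also encode the entity type.
    "typed_entity_marker":
        "<S:PERSON> Bill Gates </S:PERSON> founded <O:ORGANIZATION> Microsoft </O:ORGANIZATION>.",
    # Typed entity marker (punct), proposed here: type names spelled out as plain
    # words and wrapped in punctuation, so no untrained tokens are introduced.
    "typed_entity_marker_punct":
        "@ * person * Bill Gates @ founded # ^ organization ^ Microsoft #.",
}

for name, text in variants.items():
    print(f"{name:28s} {text}")
```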

On the other hand, consider generalization to unseen entities. Prior work [9] argued that entity names can provide shallow cues about the relation type, and that a model given only the entity pair as input can already achieve fairly high performance; it therefore suggested that RE classifiers that keep entity names (i.e., do not use entity masks) may generalize poorly to unseen entities.

However, using the entity mask discards entity information that the model could otherwise learn from [10] [11] [12]; and if entity names are ignored, the task can no longer be improved with external knowledge bases.

The paper therefore proposes a filtered evaluation setting: the test set is filtered so that only instances whose entities never appear in the training set are kept (the filtered test set). The evaluation results are shown below:

[Figure 3]

The conclusion: the typed entity marker (ours) still outperforms the entity mask on the filtered test set.
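A minimal sketch of how such a filtered test set could be built, assuming each example is a dict with (hypothetical) `subj` and `obj` entity-name fields:

```python
def build_filtered_test_set(train_examples, test_examples):
    """Keep only test instances whose subject and object entity strings
    never appear as entities in the training set."""
    train_entities = set()
    for ex in train_examples:
        train_entities.add(ex["subj"].lower())
        train_entities.add(ex["obj"].lower())

    return [
        ex for ex in test_examples
        if ex["subj"].lower() not in train_entities
        and ex["obj"].lower() not in train_entities
    ]

# Toy usage:
train = [{"subj": "Bill Gates", "obj": "Microsoft"}]
test = [
    {"subj": "Bill Gates", "obj": "Seattle"},   # removed: subject seen in training
    {"subj": "Ada Lovelace", "obj": "London"},  # kept: both entities unseen
]
print(build_filtered_test_set(train, test))
```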

NA instances

Next, the second problem: in practice, a large fraction of the data is labeled NA; in TACRED, 78.7% of the instances are NA.

The existing approach treats NA as an additional class: if the probability of NA is higher than that of every other class, the instance is classified as NA.

This paper instead uses confidence-based classification: an instance that actually expresses one of the predefined relations should receive a high confidence score, and instances whose score falls below a confidence threshold are classified as NA. The method is similar to the open-set classification of Bendale and Dhamija [13] [14] and the out-of-distribution (OOD) detection of Liang [15]. Given a sentence $x$, the model computes a class probability distribution $p \in \mathbb{R}^{|\mathcal R|}$ and a confidence score $c = \max_{r \in \mathcal R} p_r$, and predicts the relation with the highest probability.
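A minimal sketch of this decision rule, assuming the classifier outputs logits over the positive relation classes only; the threshold value here is illustrative, not taken from the paper:

```python
import torch

def predict_with_confidence(logits: torch.Tensor, threshold: float = 0.9):
    """logits: [batch, |R|] scores over the positive relation classes.
    Returns -1 (NA) whenever the confidence c = max_r p_r is below the threshold."""
    probs = torch.softmax(logits, dim=-1)   # p in R^{|R|}
    conf, pred = probs.max(dim=-1)          # c = max_r p_r and the argmax class
    pred[conf < threshold] = -1             # low confidence -> NA
    return pred, conf
```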

For this approach to work, two conditions must hold:

  1. NA instances must receive sufficiently low confidence scores;
  2. Instances of the other (non-NA) classes must receive sufficiently high confidence scores.

(Stated this way, it sounds almost self-evident.)

The latter is already achieved by the cross-entropy training loss. For the former, directly minimizing the confidence score $c$ itself (a hard max over the class probabilities) optimizes poorly, so the confidence score is replaced by a smooth surrogate built from the full probability distribution:

$$c_{sup} = \sum_{r \in \mathcal R} p^2_r$$

$$\mathcal{L}_{conf} = \log(1 - c_{sup})$$

Since every $p_r \ge 0$, the square of the largest probability is at most the sum of squares, so $c = \max_{r \in \mathcal R} p_r \leqslant \sqrt{c_{sup}}$. Driving $c_{sup}$ down on NA instances therefore also drives $c$ down, and optimizing this smooth surrogate instead of the hard max makes training more stable. Differentiating $\mathcal{L}_{conf}$ with respect to the logit $l_r$ of relation $r$ gives:

$$\frac{\partial \mathcal{L}_{conf}}{\partial l_r} = - \frac{2 p_r \left(p_r - \sum_{r \in \mathcal R} p^2_r\right)}{1 - \sum_{r \in \mathcal R} p^2_r}$$
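For completeness, this gradient can be checked with the softmax Jacobian $\partial p_j / \partial l_r = p_j(\delta_{jr} - p_r)$ (a short derivation added here, not taken from the paper):

$$\frac{\partial c_{sup}}{\partial l_r} = \sum_{j \in \mathcal R} 2 p_j \cdot p_j(\delta_{jr} - p_r) = 2 p_r \left(p_r - \sum_{j \in \mathcal R} p^2_j\right), \qquad \frac{\partial \mathcal{L}_{conf}}{\partial l_r} = \frac{-1}{1 - c_{sup}} \cdot \frac{\partial c_{sup}}{\partial l_r}$$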

Moreover:

  1. The confidence score reaches its minimum when $p_r = \frac{1}{|\mathcal{R}|}$ for every $r$, i.e. when the probability distribution is uniform;
  2. The confidence loss automatically re-weights training instances through the factor $\frac{1}{1- \sum_{r \in \mathcal R} p^2_r}$, giving larger weight to NA instances that currently receive high confidence scores (a minimal training sketch follows this list).
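Here is a minimal sketch of a training objective consistent with the description above, assuming the classifier's logits cover only the positive relation classes and NA instances are marked with a (hypothetical) label of -1. The sign convention adds $-\mathcal{L}_{conf}$ on NA instances so that minimizing the total loss pushes their confidence down; this is an assumption made for a self-consistent sketch, not a verified reproduction of the paper's implementation.

```python
import torch
import torch.nn.functional as F

NA_LABEL = -1  # hypothetical convention: NA instances carry label -1

def re_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """logits: [batch, |R|] over the positive relation classes only;
    labels: [batch], with NA instances marked by NA_LABEL."""
    is_na = labels.eq(NA_LABEL)
    loss = logits.new_zeros(())

    # Positive instances: cross-entropy raises the gold-class probability,
    # and with it the confidence score c = max_r p_r.
    if (~is_na).any():
        loss = loss + F.cross_entropy(logits[~is_na], labels[~is_na])

    # NA instances: add -L_conf = -log(1 - c_sup), with c_sup = sum_r p_r^2,
    # so that minimizing the total loss drives their confidence score down.
    if is_na.any():
        probs = torch.softmax(logits[is_na], dim=-1)
        c_sup = probs.pow(2).sum(dim=-1)
        loss = loss - torch.log1p(-c_sup).mean()

    return loss
```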

Experiments

  • Datasets: TACRED and SemEval 2010 Task 8
  • Learning rate: 3e-5, with linear warm-up followed by linear decay (a minimal scheduler sketch is given below)
  • Batch size: 64
  • Epochs: 5 (TACRED) and 10 (SemEval)
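A minimal sketch of this optimizer and learning-rate schedule; the warm-up ratio, step count, and model name are illustrative assumptions, not values from the paper:

```python
import torch
from transformers import AutoModel, get_linear_schedule_with_warmup

model = AutoModel.from_pretrained("roberta-large")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

num_training_steps = 1000                         # e.g. len(dataloader) * num_epochs
num_warmup_steps = int(0.1 * num_training_steps)  # assumed 10% warm-up

# The learning rate rises linearly from 0 to 3e-5 during warm-up,
# then decays linearly back to 0 over the remaining steps.
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps)

# Inside the training loop:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```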

The results are shown below:

[Figure 4]

[Figure 5]

The F1 is 0.5% higher than that of the baseline model [16].


  1. Zexuan Zhong and Danqi Chen. 2020. A frustratingly easy approach for joint entity and relation extraction. arXiv preprint arXiv:2010.12812. ↩︎

  2. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid Ó Séaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2019. SemEval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. arXiv preprint arXiv:1911.10422. ↩︎

  3. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. ↩︎

  4. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895–2905, Florence, Italy. Association for Computational Linguistics. ↩︎

  5. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35–45, Copenhagen, Denmark. Association for Computational Linguistics. ↩︎

  6. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895–2905, Florence, Italy. Association for Computational Linguistics. ↩︎

  7. Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Cuihong Cao, Daxin Jiang, Ming Zhou, et al. 2020. K-adapter: Infusing knowledge into pre-trained models with adapters. arXiv preprint arXiv:2002.01808. ↩︎

  8. Zexuan Zhong and Danqi Chen. 2020. A frustratingly easy approach for joint entity and relation extraction. arXiv preprint arXiv:2010.12812. ↩︎

  9. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205–2215, Brussels, Belgium. Association for Computational Linguistics. ↩︎

  10. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441–1451, Florence, Italy. Association for Computational Linguistics. ↩︎

  11. Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, V. Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In EMNLP/IJCNLP. ↩︎

  12. Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Cuihong Cao, Daxin Jiang, Ming Zhou, et al. 2020. K-adapter: Infusing knowledge into pre-trained models with adapters. arXiv preprint arXiv:2002.01808. ↩︎

  13. Abhijit Bendale and Terrance E Boult. 2016. Towards open set deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1563–1572. ↩︎

  14. Akshay Raj Dhamija, Manuel Günther, and Terrance E Boult. 2018. Reducing network agnostophobia. In NeurIPS. ↩︎

  15. Shiyu Liang, Yixuan Li, and R Srikant. 2018. Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations. ↩︎

  16. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895–2905, Florence, Italy. Association for Computational Linguistics. ↩︎
