After inserting references with NoteExpress, the in-text citations end up unlinked.
When the document is saved as a PDF, Word shows "Error! Reference source not found."
Workaround:
Cross-reference every citation manually, one by one.
For all the online praise NoteExpress gets, is there really no better way?
What a pain.
Even after cross-referencing, "Error! Reference source not found." can still appear.
Press Ctrl+A to select the entire body text, then press Ctrl+F11 to lock the fields.
Text boxes are not included in a Ctrl+A selection; select each one separately and press Ctrl+F11 on it as well.
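The two keyboard steps above can also be scripted. Here is a minimal sketch using the Word COM object model via pywin32 (Windows only); `lock_all_fields` is my own helper name, and the document path in the usage comment is hypothetical.

```python
# Minimal sketch: lock every field, including those inside text boxes,
# through the Word COM object model (requires Windows + pywin32).

def lock_all_fields(doc):
    """Lock all fields in the body and in text-box shapes; return the count."""
    locked = 0
    for field in doc.Fields:                 # what Ctrl+A + Ctrl+F11 covers
        field.Locked = True
        locked += 1
    for shape in doc.Shapes:                 # text boxes Ctrl+A cannot select
        if shape.TextFrame.HasText:
            for field in shape.TextFrame.TextRange.Fields:
                field.Locked = True
                locked += 1
    return locked

# Usage (on Windows; the path is hypothetical):
#   import win32com.client
#   word = win32com.client.Dispatch("Word.Application")
#   doc = word.Documents.Open(r"C:\thesis\main.docx")
#   lock_all_fields(doc)
#   doc.Save()
#   word.Quit()
```

Locking a field (`Field.Locked = True`) is exactly what Ctrl+F11 does: the field keeps its current result and is no longer re-evaluated, so the error text cannot come back on export.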
# -*- coding: utf-8 -*-
# @Time : 2022/11/21 18:27
# @FileName: 文章编号.py
# @Software: PyCharm
txt=''' [1] EKBAL A, BANDYOPADHYAY S. Bengali Named Entity Recognition Using Classifier Combination: IEEE, 2009.
[2] GUODONG ZHOU J S J Z. Exploring Various Knowledge in Relation Extraction: Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)[C], Ann Arbor, Michigan: Association for Computational Linguistics, 2005.
[3] ZHENG S C, XU J M, ZHOU P, et al. A neural network framework for relation extraction: Learning entity semantic and relation pattern[J]. KNOWLEDGE-BASED SYSTEMS, 2016,114: 12-23.
[4] WAN Q, WEI L N, CHEN X H, et al. A region-based hypergraph network for joint entity-relation extraction[J]. KNOWLEDGE-BASED SYSTEMS, 2021,228.
[5] TANG R X, CHEN Y P, QIN Y B, et al. Boundary assembling method for joint entity and relation extraction[J]. KNOWLEDGE-BASED SYSTEMS, 2022,250.
[6] ZHAO K, XU H, CHENG Y, et al. Representation iterative fusion based on heterogeneous graph neural network for joint entity and relation extraction[J]. KNOWLEDGE-BASED SYSTEMS, 2021,219.
[7] WANG Y, YU B, ZHANG Y, et al. TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking: Proceedings of the 28th International Conference on Computational Linguistics[C], Barcelona, Spain (Online): International Committee on Computational Linguistics, 2020.
[8] WANG Y, SUN C, WU Y, et al. UniRE: A Unified Label Space for Entity Relation Extraction: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing[C], Online: Association for Computational Linguistics, 2021.
[9] ZHONG Z, CHEN D. A Frustratingly Easy Approach for Entity and Relation Extraction: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies[C], Online: Association for Computational Linguistics, 2021.
[10] ZHANG Z J, ZHANG H Y, WAN Q, et al. LELNER: A Lightweight and Effective Low-resource Named Entity Recognition model[J]. KNOWLEDGE-BASED SYSTEMS, 2022,251.
[11] WANG J N, XU W J, FU X Y, et al. ASTRAL: Adversarial Trained LSTM-CNN for Named Entity Recognition[J]. KNOWLEDGE-BASED SYSTEMS, 2020,197.
[12] GUILLAUME LAMPLE M B S S. Neural Architectures for Named Entity Recognition: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language[C], San Diego, California: Association for Computational Linguistics, 2016.
[13] WANG H L, QIN K, LU G M, et al. Direction-sensitive relation extraction using Bi-SDP attention model[J]. KNOWLEDGE-BASED SYSTEMS, 2020,198.
[14] PENG T, HAN R D, CUI H, et al. Distantly Supervised Relation Extraction using Global Hierarchy Embeddings and Local Probability Constraints[J]. KNOWLEDGE-BASED SYSTEMS, 2022,235.
[15] LI Q, LI L L, WANG W N, et al. A comprehensive exploration of semantic relation extraction via pre-trained CNNs[J]. KNOWLEDGE-BASED SYSTEMS, 2020,194.
[16] YAN Z, ZHANG C, FU J, et al. A Partition Filter Network for Joint Entity and Relation Extraction: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing[C], Online and Punta Cana, Dominican Republic: Association for Computational Linguistics, 2021.
[17] YANG B S, WONG D F, CHAO L S, et al. Improving tree-based neural machine translation with dynamic lexicalized dependency encoding[J]. KNOWLEDGE-BASED SYSTEMS, 2020,188.
[18] ARTETXE M, LABAKA G, CASAS N, et al. Do all roads lead to Rome? Understanding the role of initialization in iterative back-translation[J]. KNOWLEDGE-BASED SYSTEMS, 2020,206.
[19] MINQING HU B L. Mining and Summarizing Customer Reviews: KDD '04: Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining[C], Online, 2004.
[20] FEI LI Z L M Z. A Span-Based Model for Joint Overlapped and Discontinuous Named Entity Recognition: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)[C], Online: Association for Computational Linguistics, 2021.
[21] WANG B, LU W. Combining Spans into Entities: A Neural Two-Stage Approach for Recognizing Discontiguous Entities: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)[C], Hong Kong, China: Association for Computational Linguistics, 2019.
[22] ZHENG S, WANG F, BAO H, et al. Joint Extraction of Entities and Relations Based on a Novel Tagging Scheme: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)[C], Vancouver, Canada: Association for Computational Linguistics, 2017.
[23] YE D, LIN Y, LI P, et al. Packed Levitated Marker for Entity and Relation Extraction: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)[C], Dublin, Ireland: Association for Computational Linguistics, 2021.
[24] ZHENG H, WEN R, CHEN X, et al. PRGC: Potential Relation and Global Correspondence Based Joint Relational Triple Extraction: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)[C], Online: Association for Computational Linguistics, 2021.
[25] REN F, ZHANG L, YIN S, et al. A Novel Global Feature-Oriented Relational Triple Extraction Model based on Table Filling: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing[C], Online and Punta Cana, Dominican Republic: Association for Computational Linguistics, 2021.
[31] HOCHREITER S S J. Long short-term memory[J]. Neural Comput, 1997,15(9(8)): 1735-1780.
[32] MIKE LEWIS Y L N G. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics[C], 2020.
[33] JACOB DEVLIN M C K L. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)[C], Minneapolis, Minnesota: Association for Computational Linguistics, 2018.
[26] BEKOULIS G, DELEU J, DEMEESTER T, et al. Joint entity recognition and relation extraction as a multi-head selection problem[J]. Expert systems with applications, 2018,114: 34-45.
[27] JIANLIN S. GlobalPointer: handle nested and not nested entity in a unified way[EB/OL]. https://spaces.ac.cn/archives/8373.
[28] CHRISTOPH ALT A G A L. Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics[C], Online: Association for Computational Linguistics, 2020.
[29] CONNEAU A, KRUSZEWSKI G, LAMPLE G, et al. What you can cram into a single vector: Probing sentence embeddings for linguistic properties: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)[C], Melbourne, Australia: Association for Computational Linguistics, 2018.
[30] ALT C, GABRYSZAK A, HENNIG L. TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics[C], Online: Association for Computational Linguistics, 2020.
'''
import re

new_texts = []
for t in txt.split('\n'):
    # Strip whatever leading "[n]" label is present. Matching the label by
    # position would fail where the numbering runs out of order
    # ([31]-[33] appear before [26] in the list above).
    stripped = re.sub(r'^\s*\[\d+\]\s*', '', t)
    new_texts.append(stripped)
    print(stripped)
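Stripping the label with a regex rather than matching it by position keeps the script correct even where the numbering runs out of order ([31]–[33] appear before [26] in the list above). A quick self-contained check on two sample lines:

```python
import re

# Strip a leading "[n]" label regardless of its value or line position.
samples = [
    " [1] EKBAL A, BANDYOPADHYAY S. Bengali Named Entity Recognition ...",
    "[31] HOCHREITER S S J. Long short-term memory[J]. Neural Comput, 1997.",
]
stripped = [re.sub(r'^\s*\[\d+\]\s*', '', s) for s in samples]
print(stripped[1])  # HOCHREITER S S J. Long short-term memory[J]. Neural Comput, 1997.
```

The pattern is anchored at the start of the line, so bracketed markers later in the entry (such as the `[J]` document-type code) are left untouched.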