Citation formats for CNN, RNN, LSTM, BERT, and other papers

The three citation formats, in order, are:

GB/T 7714 (Rules for bibliographic references, China's national standard): the codes of national standards are formed from uppercase Hanyu Pinyin letters. Mandatory national standards carry the code "GB", while recommended national standards carry the code "GB/T".

MLA (Modern Language Association) is a commonly used citation format established by the Modern Language Association of America; papers written in English generally use MLA style to keep academic writing complete and consistent.

APA (American Psychological Association): APA style is a widely accepted format for research papers, particularly in the social sciences. It standardizes how academic literature is cited and how reference lists are written, as well as how tables, figures, footnotes, and appendices are laid out.

Chinese-language papers generally cite references in GB/T style.
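
To make the differences between the three styles concrete, below is a minimal Python sketch that prints the same journal-article record (the Elman 1990 paper listed further down) in GB/T 7714, MLA, and APA form. The record fields and the format_citation() helper are illustrative assumptions, not part of any real citation-management library, and only the single-author journal-article case is covered.

```python
# Minimal sketch of the three citation styles described above.
# format_citation() and the record fields are illustrative assumptions,
# not part of any citation-management library.

def format_citation(ref: dict, style: str) -> str:
    # Only the single-author journal-article case is handled; multi-author
    # rules ("et al." in MLA, "&" before the last author in APA) are omitted.
    surname, initials = ref["author"]
    year, vol, no, pages = ref["year"], ref["volume"], ref["issue"], ref["pages"]
    if style == "gbt7714":
        # GB/T 7714: surname + initials without periods, "[J]" marks a journal article.
        return (f"{surname} {initials.replace('.', '').strip()}. {ref['title']}[J]. "
                f"{ref['journal']}, {year}, {vol}({no}): {pages}.")
    if style == "mla":
        # MLA: quoted title, then journal volume.issue (year): pages.
        return (f'{surname}, {initials} "{ref["title"]}." '
                f"{ref['journal']} {vol}.{no} ({year}): {pages}.")
    if style == "apa":
        # APA: (year) right after the author, then journal, volume(issue), pages.
        return (f"{surname}, {initials} ({year}). {ref['title']}. "
                f"{ref['journal']}, {vol}({no}), {pages}.")
    raise ValueError(f"unknown style: {style}")

elman = {
    "author": ("Elman", "J. L."),
    "title": "Finding Structure in Time",
    "journal": "Cognitive Science",
    "year": 1990, "volume": 14, "issue": 2, "pages": "179-211",
}

for style in ("gbt7714", "mla", "apa"):
    print(format_citation(elman, style))
```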

CNN

[1] Kim Y. Convolutional Neural Networks for Sentence Classification[J]. Eprint Arxiv, 2014.

[1] Kim, Y. "Convolutional Neural Networks for Sentence Classification." Eprint Arxiv (2014).

[1] Kim, Y. (2014). Convolutional neural networks for sentence classification. Eprint Arxiv.

RNN

[1] Elman J L. Finding Structure in Time[J]. Cognitive Science, 1990, 14(2): 179-211.

[1] Elman, J. L. "Finding Structure in Time." Cognitive Science 14.2 (1990): 179-211.

[1] Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179-211.

LSTM

[1] Hochreiter S, Schmidhuber J. Long Short-Term Memory[J]. Neural Computation, 1997, 9(8): 1735-1780.

[1] Hochreiter, S., and J. Schmidhuber. "Long Short-Term Memory." Neural Computation 9.8 (1997): 1735-1780.

[1] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.

GRU

[1] Chung J, Gulcehre C, Cho K H, et al. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling[J]. Eprint Arxiv, 2014.

[1] Chung, J., et al. "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling." Eprint Arxiv (2014).

[1] Chung, J., Gulcehre, C., Cho, K. H., & Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. Eprint Arxiv.

TransE

[1] Bordes A, Usunier N, Garcia-Duran A, et al. Translating Embeddings for Modeling Multi-relational Data. Curran Associates Inc., 2013.

[1] Bordes, Antoine, et al. "Translating Embeddings for Modeling Multi-relational Data." Curran Associates Inc. (2013).

[1] Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., & Yakhnenko, O. (2013). Translating embeddings for modeling multi-relational data. Curran Associates Inc.

Transformer (Attention Is All You Need)

[1] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[J]. Advances in neural information processing systems, 2017, 30.

[1] Vaswani, Ashish, et al. "Attention is all you need." Advances in neural information processing systems 30 (2017).

[1] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.

BERT

[1] Devlin J, Chang M W, Lee K, et al. Bert: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.

[1] Devlin, Jacob, et al. "Bert: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018).

[1] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
