BERT Series: Classic Paper Reading List

[1] BERT
Paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Notes: Paper Notes – BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
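
A minimal sketch (not the authors' code) of the MLM corruption rule the BERT paper describes: 15% of tokens are selected for prediction; of those, 80% are replaced with [MASK], 10% with a random token, and 10% left unchanged. The constants below follow the standard bert-base-uncased vocabulary, but treat them as illustrative.

```python
import random

MASK_ID, VOCAB_SIZE = 103, 30522   # [MASK] id and vocab size in bert-base-uncased

def mlm_corrupt(token_ids, mask_prob=0.15):
    """Apply BERT's MLM corruption: select 15% of positions; replace 80%
    of them with [MASK], 10% with a random token, and keep 10% as-is.
    Returns (corrupted ids, labels with -100 at unselected positions)."""
    corrupted, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            labels[i] = tok                                   # predict original token
            r = random.random()
            if r < 0.8:
                corrupted[i] = MASK_ID                        # 80%: [MASK]
            elif r < 0.9:
                corrupted[i] = random.randrange(VOCAB_SIZE)   # 10%: random token
            # remaining 10%: leave the token unchanged
    return corrupted, labels

print(mlm_corrupt([2023, 2003, 1037, 7953, 6251]))
```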

[2] RoBERTa
Paper: RoBERTa: A Robustly Optimized BERT Pretraining Approach
Notes: Paper Notes – RoBERTa: A Robustly Optimized BERT Pretraining Approach
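
One of RoBERTa's key changes is dynamic masking: instead of fixing the mask pattern once during preprocessing, a fresh pattern is sampled each time a sequence is served. A toy illustration (deliberately simplified: every selected token just becomes the mask id, and the masking rate is exaggerated so the contrast is visible):

```python
import random

def corrupt(ids, p=0.4, mask_id=103):
    # Simplified corruption: no 80/10/10 split, just mask selected positions.
    return [mask_id if random.random() < p else t for t in ids]

sample = [2023, 2003, 1037, 7953, 6251]
static = corrupt(sample)                 # BERT-style: one fixed pattern, reused
for epoch in range(3):
    dynamic = corrupt(sample)            # RoBERTa-style: fresh pattern each pass
    print(f"epoch {epoch}: static={static} dynamic={dynamic}")
```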

[3] SBERT
Paper: Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Notes: Paper Notes – Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
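
SBERT derives fixed-size sentence embeddings by pooling BERT's token embeddings (mean pooling works best in the paper's experiments) and compares them with cosine similarity. A self-contained sketch, using random tensors as stand-ins for real BERT outputs:

```python
import torch
import torch.nn.functional as F

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings over non-padding positions."""
    mask = attention_mask.unsqueeze(-1).float()       # (B, T, 1)
    summed = (token_embeddings * mask).sum(dim=1)     # (B, H)
    counts = mask.sum(dim=1).clamp(min=1e-9)          # (B, 1)
    return summed / counts

# Stand-in for BERT output: 2 sentences, 6 tokens each, hidden size 8.
torch.manual_seed(0)
tokens = torch.randn(2, 6, 8)
mask = torch.tensor([[1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1]])

emb = mean_pool(tokens, mask)                          # sentence embeddings
print(F.cosine_similarity(emb[0], emb[1], dim=0))      # similarity score
```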

[4] ERNIE (Baidu)
Paper: ERNIE: Enhanced Representation through Knowledge Integration
Notes: Paper Notes – ERNIE: Enhanced Representation through Knowledge Integration
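
ERNIE's knowledge masking masks whole entities (and phrases) as units rather than independent subwords, forcing the model to recover the entire span from context. A rough sketch of the input-side idea; the example sentence and entity spans are made up:

```python
import random

MASK = "[MASK]"

def entity_level_mask(tokens, entity_spans, p=0.15):
    """ERNIE-style knowledge masking, sketched: mask each annotated
    entity span as a single unit, not token by token."""
    tokens = list(tokens)
    for start, end in entity_spans:          # spans are (start, end) indices
        if random.random() < p:
            for i in range(start, end):
                tokens[i] = MASK
    return tokens

sent = ["Harry", "Potter", "is", "a", "series", "by", "J.K.", "Rowling"]
spans = [(0, 2), (6, 8)]                     # "Harry Potter", "J.K. Rowling"
print(entity_level_mask(sent, spans, p=0.5))
```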

[5] ERNIE 2.0
Paper: ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding
Notes: Paper Notes – ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding

[6] ERNIE 3.0
Paper: ERNIE 3.0: LARGE-SCALE KNOWLEDGE ENHANCED PRE-TRAINING FOR LANGUAGE UNDERSTANDING AND GENERATION
Notes: Paper Notes – ERNIE 3.0: LARGE-SCALE KNOWLEDGE ENHANCED PRE-TRAINING FOR LANGUAGE UNDERSTANDING AND GENERATION

[7] ALBERT
Paper: ALBERT: A LITE BERT FOR SELF-SUPERVISED LEARNING OF LANGUAGE REPRESENTATIONS
Notes: Paper Notes – ALBERT: A LITE BERT FOR SELF-SUPERVISED LEARNING OF LANGUAGE REPRESENTATIONS
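
One of ALBERT's two parameter-reduction techniques is factorized embedding parameterization: the V×H embedding table becomes V×E plus an E×H projection with E ≪ H. Back-of-the-envelope arithmetic for typical sizes (the 30k/768/128 values mirror the paper's setting):

```python
# BERT ties embedding size E to hidden size H: parameters = V * H.
# ALBERT factorizes into V * E + E * H with E << H.
V, H, E = 30000, 768, 128          # vocab, hidden, factorized embedding size

bert_params = V * H                # 23,040,000
albert_params = V * E + E * H      # 3,938,304
print(f"BERT-style embedding: {bert_params:,}")
print(f"ALBERT factorization: {albert_params:,}")
print(f"reduction factor:     {bert_params / albert_params:.1f}x")
```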

[8] XLNet
Paper: XLNet: Generalized Autoregressive Pretraining for Language Understanding
Notes: Paper Notes – XLNet: Generalized Autoregressive Pretraining for Language Understanding

[9] XLM
Paper: Cross-lingual Language Model Pretraining
Notes: Paper Notes – Cross-lingual Language Model Pretraining

[10] PANGU-α
Paper: PANGU-α: LARGE-SCALE AUTOREGRESSIVE PRETRAINED CHINESE LANGUAGE MODELS WITH AUTO-PARALLEL COMPUTATION
Notes: Paper Notes – PANGU-α

[11] SimCSE
Paper: SimCSE: Simple Contrastive Learning of Sentence Embeddings
Notes: Paper Notes – SimCSE: Simple Contrastive Learning of Sentence Embeddings
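
Unsupervised SimCSE encodes each sentence twice; the two dropout masks yield two "views" that form a positive pair, trained with an InfoNCE loss over in-batch negatives. A toy sketch with a stand-in encoder (the temperature 0.05 follows the paper; everything else is a placeholder):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoder standing in for BERT; dropout supplies the "augmentation".
torch.manual_seed(0)
encoder = nn.Sequential(nn.Linear(16, 32), nn.Dropout(0.1),
                        nn.ReLU(), nn.Linear(32, 8))
encoder.train()                      # dropout must be active

x = torch.randn(4, 16)               # stand-in for a batch of 4 sentences
z1, z2 = encoder(x), encoder(x)      # two passes -> two dropout masks

# InfoNCE: a sentence's two views are positives, the rest of the batch negatives.
temperature = 0.05
sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temperature
labels = torch.arange(x.size(0))     # positive pairs sit on the diagonal
loss = F.cross_entropy(sim, labels)
print(loss.item())
```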

[12] StructBERT
Paper: STRUCTBERT: INCORPORATING LANGUAGE STRUCTURES INTO PRE-TRAINING FOR DEEP LANGUAGE UNDERSTANDING
Notes: Paper Notes – STRUCTBERT: INCORPORATING LANGUAGE STRUCTURES INTO PRE-TRAINING FOR DEEP LANGUAGE UNDERSTANDING
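
StructBERT's word structural objective shuffles short spans (trigrams in the paper) and trains the model to reconstruct the original order. A sketch of the input-side perturbation only; the training head that predicts the original tokens is omitted:

```python
import random

def shuffle_trigram(tokens, k=3):
    """Pick a random span of k tokens, shuffle it in place, and return
    the perturbed sequence plus the span's start index (the target for
    order reconstruction would be the original tokens at that span)."""
    tokens = list(tokens)
    i = random.randrange(len(tokens) - k + 1)
    span = tokens[i:i + k]
    random.shuffle(span)
    tokens[i:i + k] = span
    return tokens, i

print(shuffle_trigram(["the", "cat", "sat", "on", "the", "mat"]))
```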

[13] BERT-flow
Paper: On the Sentence Embeddings from Pre-trained Language Models
Notes: Paper Notes – On the Sentence Embeddings from Pre-trained Language Models

[14] DistilBERT
Paper: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
Notes: Paper Notes – DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
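
DistilBERT's full training loss combines the MLM loss, a soft-target distillation loss, and a cosine embedding loss between hidden states. The sketch below shows only a generic Hinton-style soft/hard combination; the temperature and alpha values are illustrative, not the paper's:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Soft-target KL term (scaled by T^2, as in Hinton-style distillation)
    combined with the usual hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

torch.manual_seed(0)
teacher = torch.randn(4, 10)          # stand-in logits over 10 classes
student = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student, teacher, labels).item())
```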

[15] TinyBERT
Paper: TinyBERT: Distilling BERT for Natural Language Understanding
Notes: Paper Notes – TinyBERT: Distilling BERT for Natural Language Understanding
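
TinyBERT matches intermediate representations, not just output logits: the student's (narrower) hidden states are mapped to the teacher's width by a learned projection and matched with MSE (attention matrices are matched the same way, omitted here). A minimal sketch with random tensors:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d_student, d_teacher = 312, 768      # 4-layer TinyBERT width vs. BERT-base width
proj = nn.Linear(d_student, d_teacher, bias=False)   # learnable projection W_h

student_hidden = torch.randn(4, 16, d_student)       # (batch, seq, hidden)
teacher_hidden = torch.randn(4, 16, d_teacher)

# Hidden-state distillation term: MSE(H_student @ W_h, H_teacher).
loss_hidden = F.mse_loss(proj(student_hidden), teacher_hidden)
print(loss_hidden.item())
```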

[16] ERNIE (Tsinghua)
Paper: ERNIE: Enhanced Language Representation with Informative Entities
Notes: Paper Notes – ERNIE: Enhanced Language Representation with Informative Entities
