深度之眼 Guided Paper-Reading Notes: Contents

Contents

  • Introduction
  • NLP Intensive-Reading Paper List (completed)
  • CV List

Introduction

This paper study camp is divided into two tracks, CV and NLP. Each track further splits its papers into three categories: intensive reading, key reading, and recommended reading. Each intensive-reading paper is covered in three parts: a paper overview, a close reading, and a code walkthrough.
Since my goal is a quick grasp of the various methods and frameworks, I mostly skim the code sections and will dig into them when I hit a real problem.
The first paper is the same in both tracks.
All of the papers can be found on Google Scholar.

NLP Intensive-Reading Paper List (completed)

01. Deep Learning: Deep Learning
02. word2vec: Efficient Estimation of Word Representations in Vector Space
03. Sentence and document embeddings: Distributed Representations of Sentences and Documents
04. Machine translation: Neural Machine Translation by Jointly Learning to Align and Translate
05. Transformer: Attention Is All You Need
06. GloVe: Global Vectors for Word Representation
07. Skip-Thought: Skip-Thought Vectors
08. TextCNN: Convolutional Neural Networks for Sentence Classification
09. Character-level CNN text classification: Character-level Convolutional Networks for Text Classification
10. DCNN: A Convolutional Neural Network for Modelling Sentences
11. fastText: Bag of Tricks for Efficient Text Classification
12. HAN: Hierarchical Attention Networks for Document Classification
13. PCNN+ATT: Neural Relation Extraction with Selective Attention over Instances
14. E2ECRF: End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF
15. Multi-layer LSTM: Sequence to Sequence Learning with Neural Networks
16. Convolutional seq2seq: Convolutional Sequence to Sequence Learning
17. GNMT: Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
18. UMT: Phrase-Based & Neural Unsupervised Machine Translation
19. Pointer-generator network: Get To The Point: Summarization with Pointer-Generator Networks
20. End-to-End Memory Networks
21. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
22. Bi-directional attention: Bi-Directional Attention Flow for Machine Comprehension
23. Dialogue: Adversarial Learning for Neural Dialogue Generation
24. (missing)
25. R-GCNs: Modeling Relational Data with Graph Convolutional Networks
26. Large-scale language models: Exploring the Limits of Language Modeling
27. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
28. TCN: An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
29. ELMo: Deep contextualized word representations
30. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

CV List

The content here differs from the earlier CV paper study camp and is pending an update.
01. Deep Learning: Deep Learning
02. AlexNet: ImageNet Classification with Deep Convolutional Neural Networks
