[Paper Compilation] DL paper notes 2015-11

2015-11

NLP

  • Deep Reinforcement Learning with a Natural Language Action Space [arXiv]
  • Sequence Level Training with Recurrent Neural Networks [arXiv]
  • Teaching Machines to Read and Comprehend [arXiv]
  • Semi-supervised Sequence Learning [arXiv]
  • Multi-task Sequence to Sequence Learning [arXiv]
  • Alternative structures for character-level RNNs [arXiv]
  • Larger-Context Language Modeling [arXiv]
  • A Unified Tagging Solution: Bidirectional LSTM Recurrent Neural Network with Word Embedding [arXiv]
  • Towards Universal Paraphrastic Sentence Embeddings [arXiv]
  • BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies [arXiv]
  • Natural Language Understanding with Distributed Representation [arXiv]
  • sense2vec - A Fast and Accurate Method for Word Sense Disambiguation In Neural Word Embeddings [arXiv] (see the sketch after this list)
  • LSTM-based Deep Learning Models for Non-factoid Answer Selection [arXiv]
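
The sense2vec entry above is the easiest one here to illustrate with code. As a rough sketch (my reading of the idea, not the authors' code): tag every token with a coarse sense label such as a POS tag, then train a standard skip-gram word2vec model on the tagged corpus, so each word/sense pair gets its own vector. The toy corpus, the tag set, and the gensim hyperparameters below are assumptions for illustration.

```python
# Hedged sketch of the sense2vec idea: disambiguate word vectors by training
# ordinary skip-gram embeddings on sense-tagged tokens ("duck|NOUN" vs "duck|VERB").
# The corpus and hyperparameters are illustrative, not taken from the paper.
from gensim.models import Word2Vec

tagged_corpus = [
    ["i|PRON", "saw|VERB", "a|DET", "duck|NOUN", "by|ADP", "the|DET", "pond|NOUN"],
    ["runners|NOUN", "duck|VERB", "under|ADP", "low|ADJ", "branches|NOUN"],
]

model = Word2Vec(
    sentences=tagged_corpus,
    vector_size=50,   # embedding dimension (toy value)
    window=5,
    min_count=1,      # keep every token in this tiny corpus
    sg=1,             # skip-gram
)

# The two senses of "duck" now have separate embeddings.
print(model.wv["duck|NOUN"][:5])
print(model.wv["duck|VERB"][:5])
```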

Programs

  • Neural Random-Access Machines [arXiv]
  • Neural Programmer: Inducing Latent Programs with Gradient Descent [arXiv]
  • Neural Programmer-Interpreters [arXiv]
  • Learning Simple Algorithms from Examples [arXiv]
  • Neural GPUs Learn Algorithms [arXiv] [code]
  • On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models [arXiv]

Vision

  • ReSeg: A Recurrent Neural Network for Object Segmentation [arXiv]
  • Deconstructing the Ladder Network Architecture [arXiv]
  • Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [arXiv]
  • Multi-Scale Context Aggregation by Dilated Convolutions [arXiv] [code] (see the sketch after this list)
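
The dilated-convolutions entry lends itself to a short PyTorch sketch (my own illustration, not the linked code): a context module stacks 3x3 convolutions whose dilation grows exponentially, so the receptive field expands while the feature-map resolution is preserved. The channel count and the exact dilation schedule below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContextModule(nn.Module):
    """Stack of 3x3 dilated convolutions; padding == dilation keeps the
    spatial size constant while the receptive field grows."""
    def __init__(self, channels=64, dilations=(1, 1, 2, 4, 8, 16, 1)):
        super().__init__()
        layers = []
        for d in dilations:
            layers += [
                nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            ]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

x = torch.randn(1, 64, 32, 32)   # (batch, channels, height, width)
print(ContextModule()(x).shape)  # spatial size preserved: torch.Size([1, 64, 32, 32])
```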

General

  • Towards Principled Unsupervised Learning [arXiv]
  • Dynamic Capacity Networks [arXiv]
  • Generating Sentences from a Continuous Space [arXiv]
  • Net2Net: Accelerating Learning via Knowledge Transfer [arXiv]
  • A Roadmap towards Machine Intelligence [arXiv]
  • Session-based Recommendations with Recurrent Neural Networks [arXiv]
  • Regularizing RNNs by Stabilizing Activations [arXiv] (see the sketch after this list)
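
The activation-stabilization entry can also be sketched briefly (under my assumptions about tensor shapes, and with a toy penalty coefficient): penalize the change in the L2 norm of the recurrent hidden state between consecutive time steps and add that penalty to the task loss.

```python
import torch

def norm_stabilizer(hidden_states: torch.Tensor, beta: float = 50.0) -> torch.Tensor:
    """Penalty on changes of the hidden-state norm across time.
    hidden_states: (time, batch, hidden); beta is an illustrative coefficient."""
    norms = hidden_states.norm(dim=-1)                   # (time, batch)
    return beta * (norms[1:] - norms[:-1]).pow(2).mean()

# Usage sketch: collect the RNN hidden states for a sequence, then add the
# penalty to the task loss before calling backward().
T, B, H = 20, 8, 32
rnn = torch.nn.RNN(input_size=16, hidden_size=H)
outputs, _ = rnn(torch.randn(T, B, 16))   # outputs: (T, B, H)
task_loss = outputs.mean()                # stand-in for a real task loss
loss = task_loss + norm_stabilizer(outputs)
loss.backward()
```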
