EECS 598 Unsupervised Feature Learning

Instructor: Prof. Honglak Lee

Instructor webpage: http://www.eecs.umich.edu/~honglak/

Office hours: Th 5pm-6pm, 3773 CSE

Classroom: 1690 CSE

Time: M W 10:30am-12pm

Course Schedule

(Note: this schedule is subject to change.)

Each entry below lists the date, topic, assigned papers, and presenter.

9/8: Introduction (Presenter: Honglak)

9/13: Sparse coding (Presenter: Honglak)
- B. Olshausen and D. Field. Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images. Nature, 1996.
- H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. NIPS, 2007.

9/15: Self-taught learning; Application: computer vision (Presenter: Honglak)
- R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from unlabeled data. ICML, 2007.
- H. Lee, R. Raina, A. Teichman, and A. Y. Ng. Exponential Family Sparse Coding with Application to Self-taught Learning. IJCAI, 2009.
- J. Yang, K. Yu, Y. Gong, and T. Huang. Linear Spatial Pyramid Matching Using Sparse Coding for Image Classification. CVPR, 2009.

9/20: Neural networks and deep architectures I (Presenter: Deepak)
- Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009. Chapter 4.
- Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. NIPS, 2007.

9/22: Restricted Boltzmann machines (Presenter: Byung-soo)
- Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009. Chapter 5.

9/27: Variants of RBMs and autoencoders (Presenter: Chun-Yuen)
- P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol. Extracting and composing robust features with denoising autoencoders. ICML, 2008.
- H. Lee, C. Ekanadham, and A. Y. Ng. Sparse deep belief net model for visual area V2. NIPS, 2008.

9/29: Deep belief networks (Presenter: Anna)
- Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009. Chapter 6.
- R. Salakhutdinov. Learning Deep Generative Models. PhD thesis, University of Toronto, 2009. Chapter 2.

10/4: Convolutional deep belief networks (Presenter: Min-Yian)
- H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. ICML, 2009.

10/6: Application: audio (Presenter: Yash)
- H. Lee, Y. Largman, P. Pham, and A. Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. NIPS, 2009.
- A. R. Mohamed, G. Dahl, and G. E. Hinton. Deep belief networks for phone recognition. NIPS 2009 Workshop on Deep Learning for Speech Recognition.

10/11: Factorized models I (Presenter: Chun)
- M. Ranzato, A. Krizhevsky, and G. E. Hinton. Factored 3-Way Restricted Boltzmann Machines for Modeling Natural Images. AISTATS, 2010.

10/13: Factorized models II (Presenter: Soonam)
- M. Ranzato and G. E. Hinton. Modeling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines. CVPR, 2010.

10/18: No class (study break)

10/20: Project proposal presentations

10/25: Temporal modeling I (Presenter: Jeshua)
- G. Taylor, G. E. Hinton, and S. Roweis. Modeling Human Motion Using Binary Latent Variables. NIPS, 2007.
- G. Taylor and G. E. Hinton. Factored Conditional Restricted Boltzmann Machines for Modeling Motion Style. ICML, 2009.

10/27: Temporal modeling II (Presenter: Robert)
- G. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional Learning of Spatio-temporal Features. ECCV, 2010.

11/1: Energy-based models (Presenter: Ryan)
- K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning Invariant Features through Topographic Filter Maps. CVPR, 2009.
- K. Kavukcuoglu, M. Ranzato, and Y. LeCun. Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition. Technical Report CBLL-TR-2008-12-01, 2008.

11/3: Pooling and invariance I (Presenter: Min-Yian)
- K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the Best Multi-Stage Architecture for Object Recognition? ICCV, 2009.

11/8: Evaluating RBMs (Presenter: Jeshua)
- R. Salakhutdinov and I. Murray. On the Quantitative Analysis of Deep Belief Networks. ICML, 2008.
- R. Salakhutdinov. Learning Deep Generative Models. PhD thesis, University of Toronto, 2009. Chapter 4.

11/10: Deep Boltzmann machines (Presenter: Dae Yon)
- R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. AISTATS, 2009.

11/15: Local coordinate coding (Presenter: Robert)
- K. Yu, T. Zhang, and Y. Gong. Nonlinear Learning using Local Coordinate Coding. NIPS, 2009.
- J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Learning Locality-constrained Linear Coding for Image Classification. CVPR, 2010.

11/17: Deep architectures II (Presenter: Soonam)
- H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin. Exploring Strategies for Training Deep Neural Networks. JMLR, 2009.

11/22: Deep architectures III (Presenter: Chun)
- D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and S. Bengio. Why Does Unsupervised Pre-training Help Deep Learning? JMLR, 2010.

11/24: Application: computer vision II (Presenter: Dae Yon)
- J. Yang, K. Yu, and T. Huang. Supervised Translation-Invariant Sparse Coding. CVPR, 2010.
- Y. Boureau, F. Bach, Y. LeCun, and J. Ponce. Learning Mid-Level Features for Recognition. CVPR, 2010.

11/29: Pooling and invariance II (Presenter: Anna)
- I. J. Goodfellow, Q. V. Le, A. M. Saxe, H. Lee, and A. Y. Ng. Measuring invariances in deep networks. NIPS, 2009.
- Y. Boureau, J. Ponce, and Y. LeCun. A theoretical analysis of feature pooling in vision algorithms. ICML, 2010.

12/1: Application: natural language processing (Presenter: Guanyu)
- R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. ICML, 2008.

12/13: Project presentations I

12/15: Project presentations II

12/19: Final project report due
