Reposted from: http://www.zhizhihu.com/html/y2009/790.html
In machine learning, transfer learning is a relatively new term. Few groups in China currently work in this area; to my knowledge, only Professor Qiang Yang's group at the Hong Kong University of Science and Technology and the machine learning group at Shanghai Jiao Tong University do. In recent years they have produced a substantial body of work, publishing more than a dozen papers at top AI conferences, which I find genuinely admirable. In my coming graduate studies, I hope to follow in their footsteps.
Qiang Yang
http://www.cse.ust.hk/~qyang/
Sinno Jialin Pan
http://www.cse.ust.hk/~sinnopan/
------------------------------------------------------------
Reposted from: http://apex.sjtu.edu.cn/apex_wiki/Transfer%20Learning
Transfer Learning
Gui-Rong Xue
In the traditional machine learning framework, the task is to learn a classification model from abundant training data, and then use the learned model to classify and predict on test documents. However, machine learning algorithms face a key problem in current Web mining research: large amounts of training data are very hard to obtain in many newly emerging domains. Web applications evolve rapidly, and new domains keep appearing: from traditional news, to web pages, to images, to blogs and podcasts. Traditional machine learning requires labeling a large training set for each new domain, which costs enormous human and material effort; and without abundant labeled data, much learning-related research and many applications cannot proceed. Moreover, traditional machine learning assumes that training and test data follow the same distribution. In many situations this assumption does not hold; a common case is that the training data become outdated. This typically forces us to re-label large amounts of training data, which is expensive. From another angle, if we already have large amounts of training data drawn from different distributions, discarding them entirely is very wasteful. How to exploit such data sensibly is the main problem that transfer learning addresses: transfer learning carries knowledge over from existing data to help future learning. The goal of transfer learning is to apply knowledge learned in one environment to learning tasks in a new environment; unlike traditional machine learning, it does not assume identical distributions.
Our work on transfer learning currently falls into three parts: instance-based transfer learning in a homogeneous feature space, feature-based transfer learning in a homogeneous feature space, and transfer learning across heterogeneous feature spaces. Our research indicates that instance-based transfer offers stronger knowledge transfer, feature-based transfer offers broader knowledge transfer, and transfer across heterogeneous spaces offers broad learning and extension capability. Each approach has its own strengths.
1. Instance-based transfer learning in a homogeneous space
The basic idea of instance-based transfer learning is that, although the auxiliary training data differ more or less from the source training data, some portion of the auxiliary data should still be suitable for training an effective classification model that fits the test data. The goal is therefore to identify the auxiliary instances that suit the test data and transfer them into learning on the source training data. In this direction, we generalized the classical AdaBoost algorithm into a boosting algorithm with transfer capability, TrAdaBoost [9], which makes maximal use of the auxiliary training data to help classification on the target. The key idea is to use boosting to filter out the auxiliary instances that are least like the source training data. Boosting provides an automatic weight-adjustment mechanism: the weights of important auxiliary instances increase while the weights of unimportant ones decrease. After reweighting, the weighted auxiliary data serve as additional training data and, together with the source training data, improve the reliability of the classification model.
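The reweighting idea can be sketched as follows. This is a minimal reading of the TrAdaBoost update, not the authors' implementation: decision stumps as the weak learner, binary {-1, +1} labels, and the clamping constants are simplifications introduced for the example.

```python
import numpy as np

def stump_fit(X, y, w):
    """Weighted decision stump: the (feature, threshold, sign) with lowest weighted error."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] >= thr, sign, -sign)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best[1:]

def stump_predict(model, X):
    j, thr, sign = model
    return np.where(X[:, j] >= thr, sign, -sign)

def tradaboost(X_aux, y_aux, X_src, y_src, rounds=10):
    """Sketch of the TrAdaBoost reweighting scheme.
    X_aux/y_aux: auxiliary (differently distributed) training data.
    X_src/y_src: same-distribution training data. Labels in {-1, +1}.
    Misclassified auxiliary instances are down-weighted; misclassified
    same-distribution instances are up-weighted, AdaBoost-style."""
    n, m = len(X_aux), len(X_src)
    X = np.vstack([X_aux, X_src])
    y = np.concatenate([y_aux, y_src])
    w = np.ones(n + m)
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / rounds))
    models, betas = [], []
    for _ in range(rounds):
        p = w / w.sum()
        model = stump_fit(X, y, p)
        miss = (stump_predict(model, X) != y).astype(float)
        # error is measured on the same-distribution data only
        eps = np.sum(p[n:] * miss[n:]) / np.sum(p[n:])
        eps = min(max(eps, 1e-10), 0.49)
        beta_t = eps / (1.0 - eps)
        w[:n] *= beta ** miss[:n]        # shrink weights of "unlike" auxiliary instances
        w[n:] *= beta_t ** (-miss[n:])   # grow weights of hard same-distribution instances
        models.append(model)
        betas.append(beta_t)
    def predict(Xq):
        # as in the published algorithm, only the later half of the learners vote
        half = len(models) // 2
        score = sum(-np.log(bt) * stump_predict(mod, Xq)
                    for mod, bt in zip(models[half:], betas[half:]))
        return np.where(score >= 0, 1, -1)
    return predict
```

On toy 1-D data where the auxiliary decision boundary (0.3) is shifted away from the target boundary (0.5), the ensemble gradually discounts the auxiliary instances that contradict the small target sample.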
Instance-based transfer learning works only when the source and auxiliary data are very similar. When they differ substantially, instance-based algorithms often fail to find transferable knowledge. We found, however, that even when the source and target data share no common knowledge at the instance level, they may still overlap at the feature level. We therefore studied feature-based transfer learning, which concerns how to exploit knowledge shared at the feature level.
2. Feature-based transfer learning in a homogeneous space
For feature-based transfer learning, we proposed several algorithms, such as the CoCC algorithm [7], the TPLSA algorithm [4], a spectral-analysis algorithm [2], and a self-taught algorithm [3]. These use co-clustering to produce a common feature representation that assists the learning algorithm. The basic idea is to co-cluster the source and auxiliary data simultaneously, obtaining a shared feature representation that is superior to one based on the source data alone; representing the source data in this new space realizes the transfer. Following this idea, we proposed feature-based supervised transfer learning and feature-based unsupervised transfer learning.
2.1 Feature-based supervised transfer learning
Our work on feature-based supervised transfer learning is co-clustering-based cross-domain classification [7]. It considers the following problem: given a new, different domain in which labeled data are extremely scarce, how can the abundant labeled data in the original domain be used for transfer learning? In this work we defined a unified information-theoretic formulation for cross-domain classification, in which the co-clustering-based classification problem is cast as the optimization of an objective function. In our model, the objective is defined as the loss of mutual information among the source instances, the common feature space, and the auxiliary instances.
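The information-theoretic criterion can be illustrated on a toy joint distribution. The sketch below computes the mutual-information loss incurred when feature columns are merged into clusters, which is the kind of quantity a co-clustering step tries to minimize. It illustrates the criterion only, not the CoCC algorithm itself; the function names and toy matrix are invented for the example.

```python
import numpy as np

def mutual_information(P):
    """I(X;Y) in bits for a joint distribution P (2-D array summing to 1)."""
    px = P.sum(axis=1, keepdims=True)   # marginal over rows
    py = P.sum(axis=0, keepdims=True)   # marginal over columns
    nz = P > 0
    return float(np.sum(P[nz] * np.log2(P[nz] / (px @ py)[nz])))

def clustering_loss(P, feature_clusters):
    """Mutual-information loss when features (columns of P) are merged
    into the given clusters: I(X;Y) - I(X;Y_clustered)."""
    k = max(feature_clusters) + 1
    Pc = np.zeros((P.shape[0], k))
    for j, c in enumerate(feature_clusters):
        Pc[:, c] += P[:, j]
    return mutual_information(P) - mutual_information(Pc)
```

Merging two features with identical class-conditional behavior loses no information, while merging dissimilar features does; a good feature clustering keeps the loss small.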
2.2 Feature-based unsupervised transfer learning: self-taught clustering
Our self-taught clustering algorithm [3] is feature-based unsupervised transfer learning. The problem considered here is: in practice, even labeled auxiliary data may be hard to obtain; how can large amounts of unlabeled auxiliary data be used for transfer learning? The basic idea of self-taught clustering is to cluster the source and auxiliary data simultaneously to obtain a common feature representation. Because this new representation is derived from a large amount of auxiliary data, it is superior to a representation based on the source data alone, and thus benefits clustering.
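The "learn a representation from pooled auxiliary data" idea can be sketched very loosely. The actual algorithm in [3] is a co-clustering procedure; as a simple stand-in, this sketch pools the source data with the auxiliary data, runs plain k-means on the pool, and then re-represents the source data by its distances to the learned centers. All names and the toy setup are illustrative assumptions, not the published method.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns the cluster centers."""
    rng = np.random.RandomState(seed)
    C = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        a = d.argmin(axis=1)
        for j in range(k):
            if np.any(a == j):
                C[j] = X[a == j].mean(axis=0)
    return C

def self_taught_features(X_src, X_aux, k=2):
    """Learn centers from source + (large) auxiliary data pooled together,
    then represent each source point by its distances to those centers."""
    C = kmeans(np.vstack([X_src, X_aux]), k)
    return np.linalg.norm(X_src[:, None, :] - C[None, :, :], axis=2)
```

With abundant auxiliary data, the learned centers (and hence the new representation) reflect structure the few source points alone could not reveal.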
The two learning strategies above (feature-based supervised and unsupervised transfer learning) both address feature-based transfer when the source and auxiliary data lie in the same feature space. For the case where the source and auxiliary data lie in different feature spaces, we also studied feature-based transfer learning across feature spaces, another variant of feature-based transfer.
3. Transfer learning across heterogeneous spaces: translated learning
Our translated learning [1][5] addresses the case where the source data and the test data lie in two different feature spaces. In [1], we use abundant, easily obtained labeled text data to help classify images for which only a few labels are available, as shown in the figure in the original article. Our method uses data that have two views to build a bridge between the two feature spaces. Although such two-view data may not be usable as training data for classification, they can be used to build a translator. Through this translator, we combine nearest-neighbor classification with feature translation, translate the auxiliary data into the source feature space, and perform learning and classification with a unified language model.
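The bridge idea can be sketched in a few lines. The translator in [1] is estimated probabilistically from co-occurrence data and combined with a language model; as a minimal stand-in, this sketch row-normalizes a toy image-feature/text-feature co-occurrence matrix into a translator, maps an image vector into the text feature space, and classifies it by nearest neighbor against labeled text. The matrices and names here are invented for illustration.

```python
import numpy as np

def build_translator(cooc):
    """Row-normalize co-occurrence counts (rows = image features,
    columns = text features) into p(text feature | image feature)."""
    return cooc / cooc.sum(axis=1, keepdims=True)

def translate(x_img, T):
    """Map an image-feature vector into the text feature space."""
    return x_img @ T

def nn_classify(x, X_text, y_text):
    """1-nearest-neighbor classification against labeled text examples."""
    return y_text[np.argmin(np.linalg.norm(X_text - x, axis=1))]
```

The two-view co-occurrence data thus play the role of a dictionary: they are never used as training examples, only to carry an instance from one feature space into the other.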
References:
[1]. Wenyuan Dai, Yuqiang Chen, Gui-Rong Xue, Qiang Yang, and Yong Yu. Translated Learning: Transfer Learning across Different Feature Spaces. Advances in Neural Information Processing Systems 21 (NIPS 2008), Vancouver, British Columbia, Canada, December 8-13, 2008.
[2]. Xiao Ling, Wenyuan Dai, Gui-Rong Xue, Qiang Yang, and Yong Yu. Spectral Domain-Transfer Learning. In Proceedings of the Fourteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2008), Pages 488-496, Las Vegas, Nevada, USA, August 24-27, 2008.
[3]. Wenyuan Dai, Qiang Yang, Gui-Rong Xue and Yong Yu. Self-taught Clustering. In Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML 2008), pages 200-207, Helsinki, Finland, 5-9 July, 2008.
[4]. Gui-Rong Xue, Wenyuan Dai, Qiang Yang and Yong Yu. Topic-bridged PLSA for Cross-Domain Text Classification. In Proceedings of the Thirty-first International ACM SIGIR Conference on Research and Development on Information Retrieval (SIGIR2008), pages 627-634, Singapore, July 20-24, 2008.
[5]. Xiao Ling, Gui-Rong Xue, Wenyuan Dai, Yun Jiang, Qiang Yang and Yong Yu. Can Chinese Web Pages be Classified with English Data Source? In Proceedings the Seventeenth International World Wide Web Conference (WWW2008), Pages 969-978, Beijing, China, April 21-25, 2008.
[6]. Xiao Ling, Wenyuan Dai, Gui-Rong Xue and Yong Yu. Knowledge Transferring via Implicit Link Analysis. In Proceedings of the Thirteenth International Conference on Database Systems for Advanced Applications (DASFAA 2008), Pages 520-528, New Delhi, India, March 19-22, 2008.
[7]. Wenyuan Dai, Gui-Rong Xue, Qiang Yang and Yong Yu. Co-clustering based Classification for Out-of-domain Documents. In Proceedings of the Thirteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2007), Pages 210-219, San Jose, California, USA, Aug 12-15, 2007.
[8]. Wenyuan Dai, Gui-Rong Xue, Qiang Yang and Yong Yu. Transferring Naive Bayes Classifiers for Text Classification. In Proceedings of the Twenty-Second National Conference on Artificial Intelligence (AAAI 2007), Pages 540-545, Vancouver, British Columbia, Canada, July 22-26, 2007.
[9]. Wenyuan Dai, Qiang Yang, Gui-Rong Xue and Yong Yu. Boosting for Transfer Learning. In Proceedings of the Twenty-Fourth International Conference on Machine Learning (ICML 2007), Pages 193-200, Corvallis, Oregon, USA, June 20-24, 2007.
[10]. Dikan Xing, Wenyuan Dai, Gui-Rong Xue and Yong Yu. Bridged Refinement for Transfer Learning. In Proceedings of the Eleventh European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD 2007), Pages 324-335, Warsaw, Poland, September 17-21, 2007. (Best Student Paper Award)
[11]. Xin Zhang, Wenyuan Dai, Gui-Rong Xue and Yong Yu. Adaptive Email Spam Filtering based on Information Theory. In Proceedings of the Eighth International Conference on Web Information Systems Engineering (WISE 2007), Pages 159-170, Nancy, France, December 3-7, 2007.
------------------------------------------------------------
List of Conferences and Workshops Where Transfer Learning Papers Appear
From: http://www.cse.ust.hk/~sinnopan/conferenceTL.htm
(This webpage will be updated regularly.)
Main Conferences
Machine Learning and Artificial Intelligence Conferences
AAAI
2008
Transfer Learning via Dimensionality Reduction
Transferring Localization Models across Space
Transferring Localization Models over Time
Transferring Multi-device Localization Models using Latent Multi-task Learning
Text Categorization with Knowledge Transfer from Heterogeneous Data Sources
Zero-data Learning of New Tasks
2007
Transferring Naive Bayes Classifiers for Text Classification
Mapping and Revising Markov Logic Networks for Transfer Learning
Measuring the Level of Transfer Learning by an AP Physics Problem-Solver
2006
Using Homomorphisms to Transfer Options across Continuous Reinforcement Learning Domains
Value-Function-Based Transfer for Reinforcement Learning Using Structure Mapping
IJCAI
2009
Transfer Learning Using Task-Level Features with Application to Information Retrieval
Transfer Learning from Minimal Target Data by Mapping across Relational Domains
Domain Adaptation via Transfer Component Analysis
Knowledge Transfer on Hybrid Graph
Manifold Alignment without Correspondence
Robust Distance Metric Learning with Auxiliary Knowledge
Can Movies and Books Collaborate? Cross-Domain Collaborative Filtering for Sparsity Reduction
Exponential Family Sparse Coding with Application to Self-taught Learning
2007
Learning and Transferring Action Schemas
General Game Learning Using Knowledge Transfer
Building Portable Options: Skill Transfer in Reinforcement Learning
Transfer Learning in Real-Time Strategy Games Using Hybrid CBR/RL
An Experts Algorithm for Transfer Learning
Transferring Learned Control-Knowledge between Planners
Effective Control Knowledge Transfer through Learning Skill and Representation Hierarchies
Efficient Bayesian Task-Level Transfer Learning
ICML
2009
Deep Transfer via Second-Order Markov Logic
Feature Hashing for Large Scale Multitask Learning
A Convex Formulation for Learning Shared Structures from Multiple Tasks
EigenTransfer: A Unified Framework for Transfer Learning
Domain Adaptation from Multiple Sources via Auxiliary Classifiers
Transfer Learning for Collaborative Filtering via a Rating-Matrix Generative Model
2008
Bayesian Multiple Instance Learning: Automatic Feature Selection and Inductive Transfer
Multi-Task Learning for HIV Therapy Screening
Self-taught Clustering
Manifold Alignment using Procrustes Analysis
Automatic Discovery and Transfer of MAXQ Hierarchies
Transfer of Samples in Batch Reinforcement Learning
Hierarchical Kernel Stick-Breaking Process for Multi-Task Image Analysis
Multi-Task Compressive Sensing with Dirichlet Process Priors
A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning
2007
Boosting for Transfer Learning
Self-taught Learning: Transfer Learning from Unlabeled Data
Robust Multi-Task Learning with t-Processes
Multi-Task Learning for Sequential Data via iHMMs and the Nested Dirichlet Process
Cross-Domain Transfer for Reinforcement Learning
Learning a Meta-Level Prior for Feature Relevance from Multiple Related Tasks
Multi-Task Reinforcement Learning: A Hierarchical Bayesian Approach
The Matrix Stick-Breaking Process for Flexible Multi-Task Learning
Asymptotic Bayesian Generalization Error When Training and Test Distributions Are Different
Discriminative Learning for Differing Training and Test Distributions
2006
Autonomous Shaping: Knowledge Transfer in Reinforcement Learning
Constructing Informative Priors using Transfer Learning
NIPS
2008
Clustered Multi-Task Learning: A Convex Formulation
Multi-task Gaussian Process Learning of Robot Inverse Dynamics
Transfer Learning by Distribution Matching for Targeted Advertising
Translated Learning: Transfer Learning across Different Feature Spaces
An Empirical Analysis of Domain Adaptation Algorithms for Genomic Sequence Analysis
Domain Adaptation with Multiple Sources
2007
Learning Bounds for Domain Adaptation
Transfer Learning using Kolmogorov Complexity: Basic Theory and Empirical Evaluations
A Spectral Regularization Framework for Multi-Task Structure Learning
Multi-task Gaussian Process Prediction
Semi-Supervised Multitask Learning
Gaussian Process Models for Link Analysis and Transfer Learning
Multi-Task Learning via Conic Programming
Direct Importance Estimation with Model Selection and Its Application to Covariate Shift Adaptation
2006
Correcting Sample Selection Bias by Unlabeled Data
Dirichlet-Enhanced Spam Filtering based on Biased Samples
Analysis of Representations for Domain Adaptation
Multi-Task Feature Learning
AISTATS
2009
A Hierarchical Nonparametric Bayesian Approach to Statistical Language Model Domain Adaptation
2007
Kernel Multi-task Learning using Task-specific Features
Inductive Transfer for Bayesian Network Structure Learning
ECML/PKDD
2009
Relaxed Transfer of Different Classes via Spectral Partition
Feature Selection by Transfer Learning with Linear Regularized Models
Semi-Supervised Multi-Task Regression
2008
Actively Transfer Domain Knowledge
An Algorithm for Transfer Learning in a Heterogeneous Environment
Transferred Dimensionality Reduction
Modeling Transfer Relationships between Learning Tasks for Improved Inductive Transfer
Kernel-Based Inductive Transfer
2007
Graph-Based Domain Mapping for Transfer Learning in General Games
Bridged Refinement for Transfer Learning
Transfer Learning in Reinforcement Learning Problems Through Partial Policy Recycling
Domain Adaptation of Conditional Probability Models via Feature Subsetting
2006
Skill Acquisition via Transfer Learning and Advice Taking
COLT
2009
Online Multi-task Learning with Hard Constraints
Taking Advantage of Sparsity in Multi-Task Learning
Domain Adaptation: Learning Bounds and Algorithms
2008
Learning Coordinate Gradients with Multi-task Kernels
Linear Algorithms for Online Multitask Classification
2007
Multitask Learning with Expert Advice
2006
Online Multitask Learning
UAI
2009
Bayesian Multitask Learning with Latent Hierarchies
Multi-Task Feature Learning Via Efficient L2,1-Norm Minimization
2008
Convex Point Estimation using Undirected Bayesian Transfer Hierarchies
Data Mining Conferences
KDD
2009
Cross Domain Distribution Adaptation via Kernel Mapping
Extracting Discriminative Concepts for Domain Adaptation in Text Mining
2008
Spectral Domain-Transfer Learning
Knowledge Transfer via Multiple Model Local Structure Mapping
2007
Co-clustering based Classification for Out-of-domain Documents
2006
Reverse Testing: An Efficient Framework to Select Amongst Classifiers under Sample Selection Bias
ICDM
2008
Unsupervised Cross-domain Learning by Interaction Information Co-clustering
Using Wikipedia for Co-clustering Based Cross-domain Text Classification
SDM
2008
Type-Independent Correction of Sample Selection Bias via Structural Discovery and Re-balancing
Direct Density Ratio Estimation for Large-scale Covariate Shift Adaptation
2007
On Sample Selection Bias and Its Efficient Correction via Model Averaging and Unlabeled Examples
Probabilistic Joint Feature Selection for Multi-task Learning
Application Conferences
SIGIR
2009
Mining Employment Market via Text Block Detection and Adaptive Cross-Domain Information Extraction
Knowledge Transformation for Cross-Domain Sentiment Classification
2008
Topic-bridged PLSA for Cross-Domain Text Classification
2007
Cross-Lingual Query Suggestion Using Query Logs of Different Languages
2006
Tackling Concept Drift by Temporal Inductive Transfer
Constructing Informative Prior Distributions from Domain Knowledge in Text Classification
Building Bridges for Web Query Classification
WWW
2009
Latent Space Domain Transfer between High Dimensional Overlapping Distributions
2008
Can Chinese Web Pages be Classified with English Data Source?
ACL
2009
Transfer Learning, Feature Selection and Word Sense Disambiguation
Graph Ranking for Sentiment Transfer
Multi-Task Transfer Learning for Weakly-Supervised Relation Extraction
Cross-Domain Dependency Parsing Using a Deep Linguistic Grammar
Heterogeneous Transfer Learning for Image Clustering via the Social Web
2008
Exploiting Feature Hierarchy for Transfer Learning in Named Entity Recognition
Multi-domain Sentiment Classification
Active Sample Selection for Named Entity Transliteration
Mining Wiki Resources for Multilingual Named Entity Recognition
Multi-Task Active Learning for Linguistic Annotations
2007
Domain Adaptation with Active Learning for Word Sense Disambiguation
Frustratingly Easy Domain Adaptation
Instance Weighting for Domain Adaptation in NLP
Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification
Self-Training for Enhancement and Domain Adaptation of Statistical Parsers Trained on Small Datasets
2006
Estimating Class Priors in Domain Adaptation for Word Sense Disambiguation
Simultaneous English-Japanese Spoken Language Translation Based on Incremental Dependency Parsing and Transfer
CVPR
2009
Domain Transfer SVM for Video Concept Detection
Boosted Multi-Task Learning for Face Verification With Applications to Web Image and Video Search
2008
Transfer Learning for Image Classification with Sparse Prototype Representations
Workshops
NIPS 2005 Workshop - Inductive Transfer: 10 Years Later
NIPS 2005 Workshop - Interclass Transfer
NIPS 2006 Workshop - Learning when test and training inputs have different distributions
AAAI 2008 Workshop - Transfer Learning for Complex Tasks
|
|
Source URL of this article: http://www.sciencenet.cn/m/user_content.aspx?id=268960
From: http://www.sciencenet.cn
Transfer Learning for Structured Data (TLSD-09)
Workshop, in conjunction with NIPS 2009, Dec 7-12, 2009, Vancouver, B.C., Canada
Description
Recently, transfer learning (TL) has gained much popularity as an approach to reduce the training-data calibration effort as well as improve the generalization performance of learning tasks. Unlike traditional learning, transfer learning methods make the best use of data from one or more source tasks in order to learn a target task. Much previous work on transfer learning has focused on transferring knowledge across domains where the data are assumed to be i.i.d. In many real-world applications, such as identifying entities in social networks or classifying Web pages, data are often intrinsically non-i.i.d., which presents a major challenge to transfer learning. In this workshop, we call for papers on the topic of transfer learning for structured data. Structured data have intrinsic structure, such as network topology, and present several challenges to knowledge transfer. A first challenge is how to judge the relatedness between tasks and avoid negative transfer. Since the data are non-i.i.d., standard methods for measuring the distance between data distributions, such as KL divergence, Maximum Mean Discrepancy (MMD), and the A-distance, may not be applicable. A second challenge is that the target and source data may be heterogeneous; for example, the source domain may be a bioinformatics network while the target domain is a network of web pages. In this case, deep transfer or heterogeneous transfer approaches are required.
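As a concrete illustration of one of the distance measures named above, a biased sample estimate of the squared MMD between two samples can be sketched as follows (an RBF kernel with a fixed bandwidth is assumed for the example):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF (Gaussian) kernel matrix between the rows of A and B."""
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between samples X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())
```

For i.i.d. samples this estimate grows with the mismatch between the two distributions; the workshop's point is that for non-i.i.d. structured data such sample-based estimates may no longer be meaningful.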
Heterogeneous transfer learning for structured data is a new area of research concerned with transferring knowledge between tasks whose data are non-i.i.d. and may even be heterogeneous. It has emerged as one of the most promising areas in machine learning. In this workshop, we wish to boost research activity on knowledge transfer across structured data in the machine learning community. We welcome theoretical and applied contributions that (1) propose novel knowledge-transfer methodologies and frameworks for transfer mining across structured data, and (2) investigate effective (automated or human-machine cooperative) principles and techniques for acquiring, representing, modeling, and applying transfer learning on structured data in real-world applications.
This workshop on transfer learning for structured data will bring together active researchers in artificial intelligence, machine learning, and data mining to develop methods and systems, and to explore approaches to real-world problems that involve learning on structured data. The workshop invites researchers interested in transfer learning, statistical relational learning, and structured data mining to contribute their recent work on these topics.