Views come from (1) multiple sources or (2) different feature subsets;
Multi-view learning algorithms: (1) co-training, (2) multiple kernel learning, (3) subspace learning;
Principles: (1) the consensus principle, (2) the complementary principle;
Multi-view learning: introduces one function to model each particular view and jointly optimizes all these functions, exploiting the redundant views of the same input data to improve learning performance.
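A minimal sketch of the feature-subset setting, assuming scikit-learn (not part of the original notes): one classifier per view, with the joint-optimization step simplified to averaging the per-view posteriors as a simple consensus. The 50/50 feature split and all names are illustrative.

```python
# One model per view (feature subset); predictions are fused by
# averaging class posteriors - a simple consensus across views.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

view1, view2 = slice(0, 10), slice(10, 20)  # two feature subsets = two views
f1 = LogisticRegression(max_iter=1000).fit(X_tr[:, view1], y_tr)
f2 = LogisticRegression(max_iter=1000).fit(X_tr[:, view2], y_tr)

# Average the per-view posteriors and take the most probable class.
proba = (f1.predict_proba(X_te[:, view1]) + f2.predict_proba(X_te[:, view2])) / 2
print("combined-view accuracy:", (proba.argmax(axis=1) == y_te).mean())
```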
Co-training: trains alternately to maximize the mutual agreement between two distinct views of the unlabeled data.
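A compact co-training loop in the spirit of Blum & Mitchell (1998); a sketch assuming scikit-learn, not the exact published algorithm. The pool size `k`, the round count, and the confidence heuristic are all illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X1, X2, y, labeled, unlabeled, rounds=10, k=5):
    """Alternately grow the labeled pool with each view's most confident
    pseudo-labels. X1/X2 are two views of the same examples;
    `labeled`/`unlabeled` are index lists into them."""
    y = y.copy()
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        f1 = LogisticRegression(max_iter=1000).fit(X1[labeled], y[labeled])
        f2 = LogisticRegression(max_iter=1000).fit(X2[labeled], y[labeled])
        for f, X in ((f1, X1), (f2, X2)):   # each view labels for the other
            if not unlabeled:
                break
            proba = f.predict_proba(X[unlabeled])
            top = np.argsort(proba.max(axis=1))[-k:]  # most confident points
            for i in sorted(top, reverse=True):       # pop high indices first
                idx = unlabeled.pop(i)
                y[idx] = f.classes_[proba[i].argmax()]  # hard pseudo-label
                labeled.append(idx)
    return f1, f2
```

Each round, one view's most confident pseudo-labels become training data for both classifiers in the next round, which is what drives the mutual agreement up.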
Co-training variants:
(1) Expectation-Maximization (EM): assigns changeable probabilistic labels to unlabeled data (EM is also a clustering algorithm; OpenCV has related functions) - see the soft-label sketch after this list;
(2) a semi-supervised learning algorithm: Muslea et al.;
(3) a Bayesian undirected graphical model & Gaussian process classifiers: Yu et al.;
(4) combinative label propagation: Wang & Zhou;
(5) a data-dependent "co-regularization" norm: Sindhwani;
(6) extensions to data clustering, with effective algorithms designed by Bickel & Scheffer and by Kumar et al.;
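Variant (1) above replaces hard pseudo-labels with soft ones that are re-estimated every iteration. A sketch under that reading, assuming scikit-learn: soft labels are passed as per-sample weights, class labels are assumed to be 0..K-1 and all present in the labeled set, and every name here is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_em(X1, X2, y, labeled, unlabeled, iters=10):
    """EM-style co-training: unlabeled points carry probabilistic labels
    that are re-estimated each iteration instead of being committed once."""
    views = [X1, X2]
    f = LogisticRegression(max_iter=1000).fit(X1[labeled], y[labeled])
    src = 0                      # view the current model was trained on
    for t in range(iters):
        # E-step: current model assigns soft labels to the unlabeled pool.
        p = f.predict_proba(views[src][unlabeled])
        dst = 1 - src            # retrain the *other* view's model
        Xv, n_cls = views[dst], p.shape[1]
        # M-step: labeled data plus one weighted copy of each unlabeled
        # point per class, weighted by its current posterior probability.
        Xs = np.vstack([Xv[labeled]] + [Xv[unlabeled]] * n_cls)
        ys = np.concatenate([y[labeled]] +
                            [np.full(len(unlabeled), c) for c in range(n_cls)])
        ws = np.concatenate([np.ones(len(labeled))] +
                            [p[:, c] for c in range(n_cls)])
        f = LogisticRegression(max_iter=1000).fit(Xs, ys, sample_weight=ws)
        src = dst
    return f
```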
Co-training relies on three assumptions:
(1) sufficiency - each view is sufficient for classification on its own;
(2) compatibility - the target functions of both views predict the same labels for co-occurring features with high probability;
(3) conditional independence - the views are conditionally independent given the class label.
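Stated symbolically (a standard formalization of (2) and (3), not from the original notes), for a co-occurring pair of views (x⁽¹⁾, x⁽²⁾) with label y:

```latex
% Compatibility: the per-view target functions agree with high probability
f^{(1)}\big(x^{(1)}\big) = f^{(2)}\big(x^{(2)}\big) = y
% Conditional independence: the views factorize given the label
P\big(x^{(1)}, x^{(2)} \mid y\big) = P\big(x^{(1)} \mid y\big)\, P\big(x^{(2)} \mid y\big)
```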