106 Multiclass Classification

1. Function Overview

The functions that have not been covered before are introduced below, in order of appearance.

1.1 CCA (Canonical Correlation Analysis)

A fairly good article:
http://wenku.baidu.com/link?url=AGlaTNEen3DwykJqVNlZXmRz-tQ6ESFYLROEVYkwbzL5YVeGO_ON0jjRvCdy1ne8ARcQNXB5KayH7rPEnaNAQlGXa2u5hUJNV8Qt42UF70K
And a blog post:
http://www.cnblogs.com/jerrylead/archive/2011/06/20/2085491.html

The theory is skipped here; the focus is on the functions. Canonical correlation analysis is a supervised method. Below is a comparison of a few usages of PCA and CCA:
1) fit: PCA is unsupervised and does not need y when fitting, while CCA requires a y with shape = [n_samples, n_targets].
2) fit_transform: PCA returns a matrix of shape (n_samples, n_components); CCA returns x_scores if y is not given, otherwise (x_scores, y_scores). That is why the example code looks like this:

X = PCA(n_components=2).fit_transform(X)
X = CCA(n_components=2).fit(X, Y).transform(X)
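
To make the difference concrete, here is a minimal sketch contrasting the two calls (the data is randomly generated purely for illustration):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.RandomState(0)
X = rng.rand(100, 5)                                # shape (n_samples, n_features)
Y = rng.randint(0, 2, size=(100, 2)).astype(float)  # shape (n_samples, n_targets)

X_pca = PCA(n_components=2).fit_transform(X)        # unsupervised: no Y needed
cca = CCA(n_components=2).fit(X, Y)                 # supervised: Y is required
X_cca = cca.transform(X)                            # x_scores only
x_scores, y_scores = cca.fit_transform(X, Y)        # (x_scores, y_scores) when Y is given
print(X_pca.shape, X_cca.shape, x_scores.shape, y_scores.shape)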

1.2 SVC

Support vector classification. The first def in the code uses coef_, which holds the fitted parameters (the weight of each feature). Its documented shape is (n_class - 1, n_features), which for the two-class case here is simply (1, n_features), and it is only available when the SVC kernel is linear. intercept_ is the intercept, with shape [n_class * (n_class - 1) / 2]. Their exact roles in the example are annotated in the second part.
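
As a quick illustration, here is a minimal sketch of inspecting these attributes (synthetic two-class data, not the example's data):

import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.rand(40, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # simple linearly separable labels

clf = SVC(kernel='linear').fit(X, y)
print(clf.coef_.shape)       # (1, 2): one weight per feature in the binary case
print(clf.intercept_.shape)  # (1,)
w = clf.coef_[0]             # the weight vector, as used in plot_hyperplane below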

1.3 OneVsRestClassifier

The attribute used in the code is estimators_, which gives the list of fitted binary classifiers, one per class. Without OneVsRestClassifier we would have to fit SVC twice ourselves; in the code, classif.estimators_[0] and classif.estimators_[1] take care of that.
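
A minimal sketch of how estimators_ is exposed (the data-generation parameters are just illustrative):

from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X, Y = make_multilabel_classification(n_classes=2, n_labels=1, random_state=1)
classif = OneVsRestClassifier(SVC(kernel='linear')).fit(X, Y)
print(len(classif.estimators_))            # 2: one binary SVC per label column
print(classif.estimators_[0].coef_.shape)  # each entry is a fitted linear SVC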

1.4 np.where (condition[, x, y])

http://docs.scipy.org/doc/numpy-1.6.0/reference/generated/numpy.where.html#numpy.where
Its common usages are:
1) If only the condition is given, which is the form used in this example, it returns the indices of the non-zero (true) elements.
2) If x and y are both given, it returns elements taken from x where the condition holds and from y elsewhere, for example:

x = np.arange(9.).reshape(3, 3)
np.where(x < 5, x, -1)

which gives

array([[ 0.,  1.,  2.],
       [ 3.,  4., -1.],
       [-1., -1., -1.]])
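
The single-argument form is the one the plotting code relies on; a minimal sketch with an illustrative label matrix:

import numpy as np

Y = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
print(np.where(Y[:, 0]))  # (array([0, 2]),): rows whose first label is set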

1.5 make_multilabel_classification

Generates random multilabel samples; the official documentation is here:
http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_multilabel_classification.html#sklearn.datasets.make_multilabel_classification
The larger n_labels is, the more samples will carry multiple labels.
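
A minimal sketch of generating such a dataset and counting the multi-label samples (recent scikit-learn versions return Y as a binary indicator matrix by default):

from sklearn.datasets import make_multilabel_classification

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=True, random_state=1)
print(X.shape, Y.shape)            # (100, 20) and (100, 2) with the default sizes
print((Y.sum(axis=1) > 1).sum())   # number of samples assigned to both classes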

2. Code

The code comes from the official scikit-learn example.

import numpy as np
import matplotlib.pyplot as plt

from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import LabelBinarizer
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA



def plot_hyperplane(clf, min_x, max_x, linestyle, label):
    # get the separating hyperplane
    w = clf.coef_[0]
    # coef_ has shape (n_class - 1, n_features); with two classes here it looks like
    # [[ 1.16621927 -1.30320338]]
    # coef_[0] turns it into [ 1.16621927 -1.30320338]
    a = -w[0] / w[1]
    xx = np.linspace(min_x - 5, max_x + 5)  # make sure the line is long enough
    yy = a * xx - (clf.intercept_[0]) / w[1]
    # intercept_ has shape [n_class * (n_class - 1) / 2]
    # the original features are x and y, so x*w[0] + y*w[1] + intercept = 0
    # hence y = (-intercept - x*w[0]) / w[1]
    plt.plot(xx, yy, linestyle, label=label)


def plot_subfigure(X, Y, subplot, title, transform):
    if transform == "pca":
        X = PCA(n_components=2).fit_transform(X)
    elif transform == "cca":
        X = CCA(n_components=2).fit(X, Y).transform(X)
    else:
        raise ValueError

    min_x = np.min(X[:, 0])
    max_x = np.max(X[:, 0])

    min_y = np.min(X[:, 1])
    max_y = np.max(X[:, 1])

    classif = OneVsRestClassifier(SVC(kernel='linear'))
    classif.fit(X, Y)

    plt.subplot(2, 2, subplot)
    plt.title(title)

    zero_class = np.where(Y[:, 0])
    print(zero_class)
    one_class = np.where(Y[:, 1])
    print(one_class)
    plt.scatter(X[:, 0], X[:, 1], s=40, c='gray')
    plt.scatter(X[zero_class, 0], X[zero_class, 1], s=160, edgecolors='b',
               facecolors='none', linewidths=2, label='Class 1')
    plt.scatter(X[one_class, 0], X[one_class, 1], s=80, edgecolors='orange',
               facecolors='none', linewidths=2, label='Class 2')

    plot_hyperplane(classif.estimators_[0], min_x, max_x, 'k--',
                    'Boundary\nfor class 1')
    plot_hyperplane(classif.estimators_[1], min_x, max_x, 'k-.',
                    'Boundary\nfor class 2')
    plt.xticks(())
    plt.yticks(())

    plt.xlim(min_x - .5 * max_x, max_x + .5 * max_x)
    plt.ylim(min_y - .5 * max_y, max_y + .5 * max_y)
    if subplot == 2:
        plt.xlabel('First principal component')
        plt.ylabel('Second principal component')
       # plt.legend(loc="upper left")


plt.figure(figsize=(8, 6))

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=True,
                                      random_state=1)
print(np.shape(X))
print(Y)

plot_subfigure(X, Y, 1, "With unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 2, "With unlabeled samples + PCA", "pca")

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=False,
                                      random_state=1)
print(np.shape(X))
print(Y)
plot_subfigure(X, Y, 3, "Without unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 4, "Without unlabeled samples + PCA", "pca")

plt.subplots_adjust(.04, .02, .97, .94, .09, .2)
plt.show()
