Clearing up the tricky points of multi-class classification with sklearn's SVM

1. Parameters: decision_function_shape

Two strategies: one-vs-one ('ovo') or one-vs-rest ('ovr'). The sklearn documentation describes the parameter as follows:

decision_function_shape : 'ovo', 'ovr' or None, default=None. Whether to return a one-vs-rest ('ovr') decision function of shape (n_samples, n_classes) as all other classifiers, or the original one-vs-one ('ovo') decision function of libsvm which has shape (n_samples, n_classes * (n_classes - 1) / 2). The default of None will currently behave as 'ovo' for backward compatibility and raise a deprecation warning, but will change to 'ovr' in 0.19. New in version 0.17: decision_function_shape='ovr' is recommended. Changed in version 0.17: Deprecated decision_function_shape='ovo' and None.
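
A quick way to see the difference is to compare output shapes. Below is a minimal sketch (synthetic 4-class blobs from make_blobs, purely for illustration) showing that 'ovo' yields one column per pair of classes while 'ovr' yields one column per class:

from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# synthetic 4-class data, for illustration only
X, y = make_blobs(n_samples=200, centers=4, random_state=0)

clf_ovo = SVC(kernel='rbf', gamma=0.5, decision_function_shape='ovo').fit(X, y)
clf_ovr = SVC(kernel='rbf', gamma=0.5, decision_function_shape='ovr').fit(X, y)

print(clf_ovo.decision_function(X).shape)  # (200, 6): n_classes * (n_classes - 1) / 2 pairwise classifiers
print(clf_ovr.decision_function(X).shape)  # (200, 4): one column per class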

2. Attributes

support_: returns the indices of the support vectors within the training data (rarely useful on its own, because the indices are listed in ascending order and do not tell you which class each support vector belongs to).

support_vectors_: returns the support vectors themselves, arranged class by class.

n_support_: the number of support vectors per class; use it to index into support_vectors_. For example, if it returns [3, 4, 5], the first 3 rows of support_vectors_ are the support vectors of the first class, rows 4 through 7 belong to the second class, and the last 5 to the third class.

dual_coef_: the coefficients multiplying the kernel terms, i.e. a_n * t_n in the formula below.

intercept_: the b in the formula below.

$$y(x) = \sum_{n=1}^{N} a_n t_n \, k(x, x_n) + b$$
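
To make these attributes concrete, here is a small sketch (again on synthetic blobs, purely illustrative) that prints their shapes and splits support_vectors_ per class using n_support_:

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=150, centers=3, random_state=0)
clf = SVC(kernel='rbf', gamma=0.5).fit(X, y)

print(clf.support_.shape)          # (n_SV,): indices into the training data
print(clf.support_vectors_.shape)  # (n_SV, n_features), grouped class by class
print(clf.n_support_)              # per-class support vector counts
print(clf.dual_coef_.shape)        # (n_classes - 1, n_SV): the a_n * t_n coefficients
print(clf.intercept_.shape)        # (n_classes * (n_classes - 1) / 2,): one b per pairwise classifier

# recover each class's support vectors from n_support_
per_class_sv = np.split(clf.support_vectors_, np.cumsum(clf.n_support_)[:-1])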

3. Methods

decision_function: returns the distance from each data point to each classifier's decision boundary. Note that when the classes overlap, the distance from a support vector to its corresponding classifier is not necessarily 1: in the soft-margin formulation, support vectors satisfy t_n y(x_n) = 1 - ξ_n with slack ξ_n ≥ 0, so a support vector with ξ_n > 0 lies inside the margin or on the wrong side of the boundary.


For details, see pp. 333-334 of Bishop's PRML.
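
To see this numerically, the sketch below (deliberately overlapping synthetic blobs, an assumption for illustration) fits a binary linear SVC and prints the decision values of its support vectors; with overlap, many of them differ from ±1:

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# two heavily overlapping classes
X, y = make_blobs(n_samples=100, centers=2, cluster_std=3.0, random_state=0)

clf = SVC(kernel='linear', C=1.0).fit(X, y)
margins = clf.decision_function(X[clf.support_])  # signed distances of the support vectors

# on-margin support vectors give |margin| == 1; support vectors with
# slack (inside the margin or misclassified) give |margin| < 1
print(np.round(np.abs(margins), 3))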

Another note: in this model, support vectors are associated with classes, not with classifiers. That is, if a data point is a support vector of a class, it must lie on that class's margin boundary or be an outlier; but given a classifier, you cannot tell which of its two classes a support vector belongs to.

To understand the model better, have a look at the code from this StackOverflow answer:
http://stackoverflow.com/questions/20113206/scikit-learn-svc-decision-function-and-predict

import math
import numpy as np

# Only the linear and rbf kernels are implemented here
# sv: support vectors; nv: n_support_ above; a: dual_coef_ above; b: intercept_ above
def kernel(params, sv, X):
    if params.kernel == 'linear':
        return [np.dot(vi, X) for vi in sv]
    elif params.kernel == 'rbf':
        return [math.exp(-params.gamma * np.dot(vi - X, vi - X)) for vi in sv]

# This replicates clf.decision_function(X)
def decision_function(params, sv, nv, a, b, X):
    # calculate the kernels
    k = kernel(params, sv, X)

    # define the start and end index for support vectors for each class
    start = [sum(nv[:i]) for i in range(len(nv))]
    end = [start[i] + nv[i] for i in range(len(nv))]

    # calculate: sum(a_p * k(x_p, x)) between every 2 classes
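    # dual_coef_ has n_classes - 1 rows: for the pair (i, j) with i < j,
    # class j's support vectors take their coefficients from row i,
    # and class i's support vectors from row j - 1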
    c = [ sum(a[i][p] * k[p] for p in range(start[j], end[j])) +
          sum(a[j-1][p] * k[p] for p in range(start[i], end[i]))
          for i in range(len(nv)) for j in range(i+1, len(nv))]

    # add the intercept
    return [sum(x) for x in zip(c, b)]

# This replicates clf.predict(X)
def predict(params, sv, nv, a, b, cs, X):
    ''' params = model parameters
        sv = support vectors
        nv = # of support vectors per class
        a  = dual coefficients
        b  = intercepts 
        cs = list of class names
        X  = feature vector to predict
    '''
    decision = decision_function(params, sv, nv, a, b, X)
    votes = [(i if decision[p] > 0 else j) for p,(i,j) in enumerate((i,j) 
                                           for i in range(len(cs))
                                           for j in range(i+1,len(cs)))]

    return cs[max(set(votes), key=votes.count)]
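
As a quick sanity check, the snippet below (synthetic blobs again; note that the fitted clf itself stands in for params, since it carries .kernel and .gamma) compares the handcrafted functions with sklearn's own output; the values should match, up to tie-breaking in the vote:

from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=120, centers=3, random_state=0)
clf = SVC(kernel='rbf', gamma=0.5, decision_function_shape='ovo').fit(X, y)

x = X[0]
print(decision_function(clf, clf.support_vectors_, clf.n_support_,
                        clf.dual_coef_, clf.intercept_, x))
print(clf.decision_function([x]))  # should match the line above

print(predict(clf, clf.support_vectors_, clf.n_support_,
              clf.dual_coef_, clf.intercept_, clf.classes_, x))
print(clf.predict([x]))            # should agree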

I spent much of the past two weeks puzzling over this. The main reason was that I couldn't understand why decision_function so rarely returns a distance of exactly 1 for the support vectors. The root cause was an incomplete understanding of the model: I had overlooked that, when classes overlap, outlier data points are support vectors too.

I hope this helps a little.
That's all.
