sklearn.svm.SVC

class sklearn.svm.SVC(C=1.0, kernel='rbf', degree=3, gamma='auto_deprecated',
                      coef0=0.0, shrinking=True, probability=False, tol=0.001,
                      cache_size=200, class_weight=None, verbose=False,
                      max_iter=-1, decision_function_shape='ovr', random_state=None)

Parameters:
C : float, optional (default=1.0). Penalty parameter C of the error term.

kernel : string, optional (default='rbf'). Specifies the kernel type to be used in the algorithm. It must be one of 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable. If none is given, 'rbf' will be used. If a callable is given, it is used to pre-compute the kernel matrix from data matrices; that matrix should be an array of shape (n_samples, n_samples).
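
A minimal sketch (not part of the original docs; toy data mirrors the docstring example further below) of the kernel='precomputed' option, where the Gram matrix is computed by hand with a linear kernel:

import numpy as np
from sklearn.svm import SVC

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]], dtype=float)
y = np.array([1, 1, 2, 2])

gram_train = X @ X.T                    # (n_samples, n_samples) kernel matrix
clf = SVC(kernel='precomputed')
clf.fit(gram_train, y)

X_new = np.array([[-0.8, -1.0]])
gram_new = X_new @ X.T                  # kernel between new and training samples
print(clf.predict(gram_new))            # [1]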

degree : int, optional (default=3). Degree of the polynomial kernel function ('poly'). Ignored by all other kernels.

gamma : float, optional (default='auto'). Kernel coefficient for 'rbf', 'poly' and 'sigmoid'.

The current default, 'auto', uses 1 / n_features; if gamma='scale' is passed, 1 / (n_features * X.var()) is used as the value of gamma instead. The current default of 'auto' will change to 'scale' in version 0.22. 'auto_deprecated', a deprecated version of 'auto', is used as the default to indicate that no explicit gamma value was passed.
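
As a rough sketch (toy data assumed, not from the original post), the two conventions above can be computed explicitly and passed as numeric values of gamma:

import numpy as np
from sklearn.svm import SVC

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]], dtype=float)
y = np.array([1, 1, 2, 2])

gamma_auto = 1.0 / X.shape[1]                  # 'auto': 1 / n_features
gamma_scale = 1.0 / (X.shape[1] * X.var())     # 'scale': 1 / (n_features * X.var())

SVC(gamma=gamma_auto).fit(X, y)
SVC(gamma=gamma_scale).fit(X, y)               # matches gamma='scale' where supported
print(gamma_auto, gamma_scale)                 # 0.5 and roughly 0.286 for this data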

coef0 : float, optional (default=0.0). Independent term in the kernel function. It is only significant in 'poly' and 'sigmoid'.

shrinking : boolean, optional (default=True). Whether to use the shrinking heuristic.

probability : boolean, optional (default=False). Whether to enable probability estimates. This must be enabled prior to calling fit, and it will slow down that method.
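
A small sketch (toy data assumed) of enabling probability estimates; once fit has run with probability=True, predict_proba becomes available:

import numpy as np
from sklearn.svm import SVC

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]], dtype=float)
y = np.array([1, 1, 1, 2, 2, 2])

clf = SVC(gamma='auto', probability=True, random_state=0)
clf.fit(X, y)                              # slower: internal cross-validation is used for calibration
print(clf.predict_proba([[-0.8, -1.0]]))   # class probabilities for classes 1 and 2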

tol : float, optional (default=1e-3). Tolerance for the stopping criterion.

cache_size : float, optional. Specify the size of the kernel cache (in MB).

class_weight : {dict, 'balanced'}, optional.
Set the parameter C of class i to class_weight[i] * C for SVC. If not given, all classes are supposed to have weight one. The 'balanced' mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data, as n_samples / (n_classes * np.bincount(y)).
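
A short sketch (toy, slightly imbalanced data assumed) of the two forms described above:

import numpy as np
from sklearn.svm import SVC

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1]], dtype=float)
y = np.array([1, 1, 1, 2, 2])

# Explicit dict: C for class 2 becomes 2.0 * C, class 1 keeps C unchanged.
SVC(gamma='auto', class_weight={1: 1.0, 2: 2.0}).fit(X, y)

# 'balanced': weights derived from class frequencies in y.
SVC(gamma='auto', class_weight='balanced').fit(X, y)
print(len(y) / (2 * np.bincount(y)[1:]))   # n_samples / (n_classes * counts) = [0.833..., 1.25]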

verbose : bool, default: False
Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in libsvm which, if enabled, may not work properly in a multithreaded context.

max_iter : int, optional (default=-1). Hard limit on iterations within the solver, or -1 for no limit.

decision_function_shape : 'ovo', 'ovr', default='ovr'
Whether to return a one-vs-rest ('ovr') decision function of shape (n_samples, n_classes) as all other classifiers do, or the original one-vs-one ('ovo') decision function of libsvm, which has shape (n_samples, n_classes * (n_classes - 1) / 2). Internally, however, one-vs-one ('ovo') is always used as the multi-class strategy. A small sketch comparing the two shapes follows the version notes below.

Changed in version 0.19: decision_function_shape is 'ovr' by default.

New in version 0.17: decision_function_shape='ovr' is recommended.

Changed in version 0.17: Deprecated decision_function_shape='ovo' and None.
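
A minimal sketch (toy four-class data assumed) showing the difference between the two shapes:

import numpy as np
from sklearn.svm import SVC

X = np.array([[-2, -1], [-1, -1], [1, 1], [2, 1],
              [3, -2], [4, -2], [-3, 3], [-4, 3]], dtype=float)
y = np.array([0, 0, 1, 1, 2, 2, 3, 3])

ovr = SVC(gamma='auto', decision_function_shape='ovr').fit(X, y)
ovo = SVC(gamma='auto', decision_function_shape='ovo').fit(X, y)

print(ovr.decision_function(X).shape)   # (8, 4): one column per class
print(ovo.decision_function(X).shape)   # (8, 6): n_classes * (n_classes - 1) / 2 pairwise columns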

random_state : int, RandomState instance or None, optional (default=None)
The seed of the pseudo random number generator used when shuffling the data for probability estimates. If int, random_state is the seed used by the random number generator;
if a RandomState instance, random_state is the random number generator;
if None, the random number generator is the RandomState instance used by np.random.

>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> y = np.array([1, 1, 2, 2])
>>> from sklearn.svm import SVC
>>> clf = SVC(gamma='auto')
>>> clf.fit(X, y)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
    decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf',
    max_iter=-1, probability=False, random_state=None, shrinking=True,
    tol=0.001, verbose=False)
>>> print(clf.predict([[-0.8, -1]]))
[1]

decision_function(self, X)  Evaluate the decision function for the samples in X.
fit(self, X, y[, sample_weight])  Fit the SVM model according to the given training data.
get_params(self[, deep])  Get the parameters of this estimator.
predict(self, X)  Perform classification on samples in X.
score(self, X, y[, sample_weight])  Return the mean accuracy on the given test data and labels.
set_params(self, **params)  Set the parameters of this estimator.
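
A brief sketch (reusing the toy data from the docstring example above) exercising these methods:

import numpy as np
from sklearn.svm import SVC

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]], dtype=float)
y = np.array([1, 1, 2, 2])

clf = SVC(gamma='auto').fit(X, y)              # fit
print(clf.predict([[-0.8, -1.0]]))             # predict: [1]
print(clf.decision_function([[-0.8, -1.0]]))   # signed distance to the decision boundary
print(clf.score(X, y))                         # mean accuracy on the training data
print(clf.get_params()['C'])                   # 1.0
clf.set_params(C=10.0)                         # update a parameter in place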

Example 1: Multilabel classification

The classification is performed by projecting onto the first two principal components found by PCA and CCA for visualisation purposes, and then using the sklearn.multiclass.OneVsRestClassifier metaclassifier with two SVCs with linear kernels to learn a discriminative model for each class. Note that PCA is used to perform an unsupervised dimensionality reduction, while CCA is used to perform a supervised one.

print(__doc__)

import numpy as np
import matplotlib.pyplot as plt

from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA


def plot_hyperplane(clf, min_x, max_x, linestyle, label):
    # get the separating hyperplane
    w = clf.coef_[0]
    a = -w[0] / w[1]
    xx = np.linspace(min_x - 5, max_x + 5)  # make sure the line is long enough
    yy = a * xx - (clf.intercept_[0]) / w[1]
    plt.plot(xx, yy, linestyle, label=label)


def plot_subfigure(X, Y, subplot, title, transform):
    if transform == "pca":
        X = PCA(n_components=2).fit_transform(X)
    elif transform == "cca":
        X = CCA(n_components=2).fit(X, Y).transform(X)
    else:
        raise ValueError

    min_x = np.min(X[:, 0])
    max_x = np.max(X[:, 0])

    min_y = np.min(X[:, 1])
    max_y = np.max(X[:, 1])

    classif = OneVsRestClassifier(SVC(kernel='linear'))
    classif.fit(X, Y)

    plt.subplot(2, 2, subplot)
    plt.title(title)

    zero_class = np.where(Y[:, 0])
    one_class = np.where(Y[:, 1])
    plt.scatter(X[:, 0], X[:, 1], s=40, c='gray', edgecolors=(0, 0, 0))
    plt.scatter(X[zero_class, 0], X[zero_class, 1], s=160, edgecolors='b',
                facecolors='none', linewidths=2, label='Class 1')
    plt.scatter(X[one_class, 0], X[one_class, 1], s=80, edgecolors='orange',
                facecolors='none', linewidths=2, label='Class 2')

    plot_hyperplane(classif.estimators_[0], min_x, max_x, 'k--',
                    'Boundary\nfor class 1')
    plot_hyperplane(classif.estimators_[1], min_x, max_x, 'k-.',
                    'Boundary\nfor class 2')
    plt.xticks(())
    plt.yticks(())

    plt.xlim(min_x - .5 * max_x, max_x + .5 * max_x)
    plt.ylim(min_y - .5 * max_y, max_y + .5 * max_y)
    if subplot == 2:
        plt.xlabel('First principal component')
        plt.ylabel('Second principal component')
        plt.legend(loc="upper left")


plt.figure(figsize=(8, 6))

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=True,
                                      random_state=1)

plot_subfigure(X, Y, 1, "With unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 2, "With unlabeled samples + PCA", "pca")

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=False,
                                      random_state=1)

plot_subfigure(X, Y, 3, "Without unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 4, "Without unlabeled samples + PCA", "pca")

plt.subplots_adjust(.04, .02, .97, .94, .09, .2)
plt.show()

[Figure 1: the four subplots produced above (CCA and PCA projections, with and without unlabeled samples)]
Example 2: Explicit feature map approximation for RBF kernels
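
In the code below, an exact RBF-kernel SVC is compared with linear SVMs trained on approximate RBF feature maps produced by RBFSampler and Nystroem, using a subset of the digits dataset. Classification accuracy and training time are plotted against the number of sampled components, and the decision surfaces are then visualised on a grid spanned by the first two principal components.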

print(__doc__)

# Author: Gael Varoquaux 
#         Andreas Mueller 
# License: BSD 3 clause

# Standard scientific Python imports
import matplotlib.pyplot as plt
import numpy as np
from time import time

# Import datasets, classifiers and performance metrics
from sklearn import datasets, svm, pipeline
from sklearn.kernel_approximation import (RBFSampler,
                                          Nystroem)
from sklearn.decomposition import PCA

# The digits dataset
digits = datasets.load_digits(n_class=9)

# To apply a classifier on this data, we need to flatten the images, to
# turn the data into a (samples, features) matrix:
n_samples = len(digits.data)
data = digits.data / 16.
data -= data.mean(axis=0)

# We learn the digits on the first half of the digits
data_train, targets_train = (data[:n_samples // 2],
                             digits.target[:n_samples // 2])


# Now predict the value of the digit on the second half:
data_test, targets_test = (data[n_samples // 2:],
                           digits.target[n_samples // 2:])

# Create a classifier: a support vector classifier
kernel_svm = svm.SVC(gamma=.2)
linear_svm = svm.LinearSVC()

# create pipeline from kernel approximation
# and linear svm
feature_map_fourier = RBFSampler(gamma=.2, random_state=1)
feature_map_nystroem = Nystroem(gamma=.2, random_state=1)
fourier_approx_svm = pipeline.Pipeline([("feature_map", feature_map_fourier),
                                        ("svm", svm.LinearSVC())])

nystroem_approx_svm = pipeline.Pipeline([("feature_map", feature_map_nystroem),
                                        ("svm", svm.LinearSVC())])

# fit and predict using linear and kernel svm:

kernel_svm_time = time()
kernel_svm.fit(data_train, targets_train)
kernel_svm_score = kernel_svm.score(data_test, targets_test)
kernel_svm_time = time() - kernel_svm_time

linear_svm_time = time()
linear_svm.fit(data_train, targets_train)
linear_svm_score = linear_svm.score(data_test, targets_test)
linear_svm_time = time() - linear_svm_time

sample_sizes = 30 * np.arange(1, 10)
fourier_scores = []
nystroem_scores = []
fourier_times = []
nystroem_times = []

for D in sample_sizes:
    fourier_approx_svm.set_params(feature_map__n_components=D)
    nystroem_approx_svm.set_params(feature_map__n_components=D)
    start = time()
    nystroem_approx_svm.fit(data_train, targets_train)
    nystroem_times.append(time() - start)

    start = time()
    fourier_approx_svm.fit(data_train, targets_train)
    fourier_times.append(time() - start)

    fourier_score = fourier_approx_svm.score(data_test, targets_test)
    nystroem_score = nystroem_approx_svm.score(data_test, targets_test)
    nystroem_scores.append(nystroem_score)
    fourier_scores.append(fourier_score)

# plot the results:
plt.figure(figsize=(8, 8))
accuracy = plt.subplot(211)
# second subplot for the timings
timescale = plt.subplot(212)

accuracy.plot(sample_sizes, nystroem_scores, label="Nystroem approx. kernel")
timescale.plot(sample_sizes, nystroem_times, '--',
               label='Nystroem approx. kernel')

accuracy.plot(sample_sizes, fourier_scores, label="Fourier approx. kernel")
timescale.plot(sample_sizes, fourier_times, '--',
               label='Fourier approx. kernel')

# horizontal lines for exact rbf and linear kernels:
accuracy.plot([sample_sizes[0], sample_sizes[-1]],
              [linear_svm_score, linear_svm_score], label="linear svm")
timescale.plot([sample_sizes[0], sample_sizes[-1]],
               [linear_svm_time, linear_svm_time], '--', label='linear svm')

accuracy.plot([sample_sizes[0], sample_sizes[-1]],
              [kernel_svm_score, kernel_svm_score], label="rbf svm")
timescale.plot([sample_sizes[0], sample_sizes[-1]],
               [kernel_svm_time, kernel_svm_time], '--', label='rbf svm')

# vertical line for dataset dimensionality = 64
accuracy.plot([64, 64], [0.7, 1], label="n_features")

# legends and labels
accuracy.set_title("Classification accuracy")
timescale.set_title("Training times")
accuracy.set_xlim(sample_sizes[0], sample_sizes[-1])
accuracy.set_xticks(())
accuracy.set_ylim(np.min(fourier_scores), 1)
timescale.set_xlabel("Sampling steps = transformed feature dimension")
accuracy.set_ylabel("Classification accuracy")
timescale.set_ylabel("Training time in seconds")
accuracy.legend(loc='best')
timescale.legend(loc='best')

# visualize the decision surface, projected down to the first
# two principal components of the dataset
pca = PCA(n_components=8).fit(data_train)

X = pca.transform(data_train)

# Generate grid along first two principal components
multiples = np.arange(-2, 2, 0.1)
# steps along first component
first = multiples[:, np.newaxis] * pca.components_[0, :]
# steps along second component
second = multiples[:, np.newaxis] * pca.components_[1, :]
# combine
grid = first[np.newaxis, :, :] + second[:, np.newaxis, :]
flat_grid = grid.reshape(-1, data.shape[1])

# title for the plots
titles = ['SVC with rbf kernel',
          'SVC (linear kernel)\n with Fourier rbf feature map\n'
          'n_components=100',
          'SVC (linear kernel)\n with Nystroem rbf feature map\n'
          'n_components=100']

plt.tight_layout()
plt.figure(figsize=(12, 5))

# predict and plot
for i, clf in enumerate((kernel_svm, nystroem_approx_svm,
                         fourier_approx_svm)):
    # Plot the decision boundary. For that, we will assign a color to each
    # point in the mesh [x_min, x_max]x[y_min, y_max].
    plt.subplot(1, 3, i + 1)
    Z = clf.predict(flat_grid)

    # Put the result into a color plot
    Z = Z.reshape(grid.shape[:-1])
    plt.contourf(multiples, multiples, Z, cmap=plt.cm.Paired)
    plt.axis('off')

    # Plot also the training points
    plt.scatter(X[:, 0], X[:, 1], c=targets_train, cmap=plt.cm.Paired,
                edgecolors=(0, 0, 0))

    plt.title(titles[i])
plt.tight_layout()
plt.show()

[Figure 2: classification accuracy and training times versus the number of sampled components]
[Figure 3: decision surfaces of the exact RBF SVC and the two approximate-kernel pipelines, projected onto the first two principal components]
