Learning spectral clustering in scikit-learn: SpectralClustering

Spectral clustering can be viewed as a form of dimensionality reduction: the samples are embedded using the leading eigenvectors of the graph Laplacian built from an affinity (similarity) matrix, and a conventional clustering algorithm (typically k-means) is then run in that low-dimensional space.
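
To make the "dimensionality reduction" view concrete, here is a minimal sketch of the idea, assuming an RBF affinity and the unnormalized graph Laplacian (scikit-learn's actual implementation differs in details, e.g. it uses a normalized Laplacian); the function name spectral_clustering_sketch is made up for illustration:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def spectral_clustering_sketch(X, n_clusters=3, gamma=1.0):
    W = rbf_kernel(X, gamma=gamma)      # affinity (similarity) matrix
    D = np.diag(W.sum(axis=1))          # degree matrix
    L = D - W                           # unnormalized graph Laplacian
    _, eigvecs = np.linalg.eigh(L)      # eigenvectors sorted by ascending eigenvalue
    U = eigvecs[:, :n_clusters]         # low-dimensional embedding: first n_clusters eigenvectors
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)  # cluster in the embedding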

class sklearn.cluster.SpectralClustering()


Parameters:

n_clusters: the number of clusters, i.e. the dimensionality of the spectral embedding that the graph is projected onto before it is partitioned.

affinity: how the affinity (similarity) matrix is built. 'nearest_neighbors': k-nearest-neighbors graph; 'precomputed': a user-supplied affinity matrix; or a fully connected graph built with a kernel, commonly the Gaussian kernel 'rbf' (the default), the polynomial kernel 'poly', or the sigmoid kernel 'sigmoid'.

eigen_solver: strategy for the eigendecomposition, one of {None, 'arpack', 'lobpcg', 'amg'}.

eigen_tol: stopping tolerance for the eigendecomposition of the Laplacian when eigen_solver='arpack'.

gamma: kernel coefficient used by the fully connected (kernel) affinities 'rbf', 'poly' and 'sigmoid'; it usually has to be tuned. Ignored when affinity='nearest_neighbors'.

degree: degree of the polynomial kernel, only relevant when affinity='poly'; default 3.

coef0: zero coefficient of the 'poly' and 'sigmoid' kernels; default 1.

n_neighbors: number of neighbors used to build the graph when affinity='nearest_neighbors'.

assign_labels: strategy used to assign labels in the embedding space, one of {'kmeans', 'discretize'}.


Attributes:

affinity_matrix_: the affinity matrix used for clustering.

labels_: cluster label of each sample.
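
To make the affinity options above concrete, here is a small sketch of the three construction modes (the parameter values are illustrative, not recommendations):

from sklearn.cluster import SpectralClustering
from sklearn.datasets import load_iris
from sklearn.metrics.pairwise import rbf_kernel

X = load_iris().data

# Fully connected graph with the Gaussian kernel (the default affinity); gamma is the kernel coefficient
sc_rbf = SpectralClustering(n_clusters=3, affinity='rbf', gamma=0.1, assign_labels='kmeans')

# k-nearest-neighbors graph; n_neighbors controls how sparse the graph is
sc_knn = SpectralClustering(n_clusters=3, affinity='nearest_neighbors', n_neighbors=10,
                            assign_labels='discretize')

# Precomputed affinity: build the similarity matrix yourself and fit on it instead of on X
W = rbf_kernel(X, gamma=0.1)
sc_pre = SpectralClustering(n_clusters=3, affinity='precomputed')
labels = sc_pre.fit_predict(W)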


 

Example code:

import numpy as np
from sklearn import datasets

X = datasets.load_iris()
#print(X)

# Check how spectral clustering performs with the default parameters
# (n_clusters=8, affinity='rbf', gamma=1.0)
from sklearn.cluster import SpectralClustering
spectral = SpectralClustering()
pred_y = spectral.fit_predict(X.data)

from sklearn import metrics
print("Calinski-Harabasz Score", metrics.calinski_harabasz_score(X.data, pred_y))
"""
Calinski-Harabasz Score 438.286953256
"""

# The default affinity is the Gaussian (rbf) kernel, so n_clusters and gamma are the
# parameters to tune. Note that in this loop gamma is iterated over but never passed to
# SpectralClustering, so only n_clusters actually changes; this is why the scores in the
# output below are identical for every gamma value. A corrected variant that actually
# passes gamma is sketched after the output.
scores = []
s = dict()
for gamma in (0.01, 0.1, 1, 10):
    for k in (3, 4, 5, 6):
        pred_y = SpectralClustering(n_clusters=k).fit_predict(X.data)
        score = metrics.calinski_harabasz_score(X.data, pred_y)
        print("Calinski-Harabasz Score with gamma=", gamma, "n_cluster=", k, "score=", score)
        # The dict is keyed by score, so equal scores overwrite earlier entries: the gamma
        # reported for the best score is simply the last value tried.
        s[score] = {'gamma': gamma, 'n_cluster': k, 'score': score}
        scores.append(score)
print(np.max(scores))
print("Parameters of the best score:")
print(s.get(np.max(scores)))
"""
Calinski-Harabasz Score with gamma= 0.01 n_cluster= 3 score= 558.91617342
Calinski-Harabasz Score with gamma= 0.01 n_cluster= 4 score= 526.594543218
Calinski-Harabasz Score with gamma= 0.01 n_cluster= 5 score= 493.129509828
Calinski-Harabasz Score with gamma= 0.01 n_cluster= 6 score= 473.659126731
Calinski-Harabasz Score with gamma= 0.1 n_cluster= 3 score= 558.91617342
Calinski-Harabasz Score with gamma= 0.1 n_cluster= 4 score= 526.594543218
Calinski-Harabasz Score with gamma= 0.1 n_cluster= 5 score= 493.129509828
Calinski-Harabasz Score with gamma= 0.1 n_cluster= 6 score= 473.659126731
Calinski-Harabasz Score with gamma= 1 n_cluster= 3 score= 558.91617342
Calinski-Harabasz Score with gamma= 1 n_cluster= 4 score= 526.594543218
Calinski-Harabasz Score with gamma= 1 n_cluster= 5 score= 493.129509828
Calinski-Harabasz Score with gamma= 1 n_cluster= 6 score= 473.659126731
Calinski-Harabasz Score with gamma= 10 n_cluster= 3 score= 558.91617342
Calinski-Harabasz Score with gamma= 10 n_cluster= 4 score= 526.594543218
Calinski-Harabasz Score with gamma= 10 n_cluster= 5 score= 493.129509828
Calinski-Harabasz Score with gamma= 10 n_cluster= 6 score= 473.659126731
558.91617342
Parameters of the best score:
{'gamma': 10, 'n_cluster': 3, 'score': 558.91617342043787}

A higher Calinski-Harabasz score indicates a better clustering, so the best result here is n_cluster=3 (the gamma value in the dict is incidental, since gamma was never passed to the estimator).
"""

 
