1. Advantages and Disadvantages of the k-means Algorithm
k-means is simple, can be applied to a wide variety of data types, and is quite effective, although it usually has to be run several times. However, k-means is not suitable for every data set: it cannot handle non-globular clusters, or clusters of very different sizes and densities, and it also struggles when the data contains outliers (noise points).
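To make these limitations concrete, here is a minimal sketch (not part of the original post; it assumes scikit-learn is available) that runs k-means on a roughly spherical data set and on the non-spherical "two moons" data set. The spherical blobs are recovered almost perfectly, while the moons are cut by a straight boundary, because k-means only minimizes within-cluster squared Euclidean distance.

# Sketch only: scikit-learn is an assumption here, not used in the implementation below.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs, make_moons

X_blobs, _ = make_blobs(n_samples=300, centers=2, random_state=0)   # roughly spherical clusters
X_moons, _ = make_moons(n_samples=300, noise=0.05, random_state=0)  # non-spherical clusters

for name, X in [("blobs", X_blobs), ("moons", X_moons)]:
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(name, "cluster sizes:", [int((labels == c).sum()) for c in (0, 1)])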
https://github.com/Thinkgamer/Machine-Learning-With-Python
2. Python Implementation of the k-means Algorithm
# encoding: utf-8
from numpy import *

def loadDataSet(filename):
    dataMat = []                                  # list that will hold the data rows
    fr = open(filename)
    for line in fr.readlines():
        curLine = line.strip().split("\t")
        fltLine = list(map(float, curLine))       # convert every field on the line to float (list() needed in Python 3)
        dataMat.append(fltLine)
    return dataMat

def distEclud(vecA, vecB):                        # Euclidean distance between two vectors
    return sqrt(sum(power(vecA - vecB, 2)))

def randCent(dataSet, k):                         # build a set of k random centroids for the given data set
    n = shape(dataSet)[1]                         # number of columns (features) in dataSet
    centroids = mat(zeros((k, n)))                # k x n matrix that holds the cluster centers
    for j in range(n):
        minJ = min(dataSet[:, j])                 # minimum of column j
        rangeJ = float(max(dataSet[:, j]) - minJ)
        centroids[:, j] = minJ + rangeJ * random.rand(k, 1)   # random.rand(k,1) draws a (k,1) matrix of values in [0,1)
    return centroids

def kMeans(dataSet, k, disMeas=distEclud, createCent=randCent):
    m = shape(dataSet)[0]                         # number of rows (samples) in dataSet
    clusterAssment = mat(zeros((m, 2)))           # m x 2 matrix: column 0 = cluster index, column 1 = squared error (used to judge the clustering)
    centroids = createCent(dataSet, k)            # create k centroids via createCent()
    clusterChanged = True                         # flag: keep iterating while any assignment changes
    print("centroid positions during the update process:")
    while clusterChanged:
        clusterChanged = False
        for i in range(m):
            minDist = inf                         # inf is positive infinity
            minIndex = -1                         # index of the closest centroid
            for j in range(k):
                # find the nearest centroid
                disJI = disMeas(centroids[j, :], dataSet[i, :])   # Euclidean distance from point i to centroid j
                if disJI < minDist:
                    minDist = disJI
                    minIndex = j
            if clusterAssment[i, 0] != minIndex:
                clusterChanged = True
            clusterAssment[i, :] = minIndex, minDist ** 2
        print(centroids)
        # update the centroid positions
        for cent in range(k):
            ptsInClust = dataSet[nonzero(clusterAssment[:, 0].A == cent)[0]]  # array filtering: all points assigned to cluster cent
            # nonzero(a) returns a tuple of arrays holding the indices of the non-zero entries of a
            # e.g. b2 = array([[True, False, True], [True, False, False]])
            # print(nonzero(b2))
            # => (array([0, 0, 1]), array([0, 2, 0]))
            # print(array(nonzero(b2)))
            # => [[0, 0, 1], [0, 2, 0]]
            centroids[cent, :] = mean(ptsInClust, axis=0)   # mean of the cluster's points; axis=0 averages down the columns
    return centroids, clusterAssment              # return all centroids and the point assignments

def bikMeans(dataSet, k, disMeas=distEclud):
    m = shape(dataSet)[0]                         # number of rows (samples) in dataSet
    clusterAssment = mat(zeros((m, 2)))           # m x 2 matrix: column 0 = cluster index, column 1 = squared error (used to judge the clustering)
    # create one initial cluster containing every point
    centroid0 = mean(dataSet, axis=0).tolist()[0]
    centList = [centroid0]
    print(centList)
    print(len(centList))
    for j in range(m):
        clusterAssment[j, 1] = disMeas(mat(centroid0), dataSet[j, :]) ** 2   # squared error of each point w.r.t. the initial centroid
    while (len(centList) < k):
        lowestSSE = inf                           # inf is positive infinity
        for i in range(len(centList)):
            # try splitting each cluster in turn
            ptsInCurrCluster = dataSet[nonzero(clusterAssment[:, 0].A == i)[0], :]
            centroidMat, splitClustAss = kMeans(ptsInCurrCluster, 2, disMeas)
            sseSplit = sum(splitClustAss[:, 1])
            sseNotSplit = sum(clusterAssment[nonzero(clusterAssment[:, 0].A != i)[0], 1])
            print("sseSplit and notSplit:", sseSplit, sseNotSplit)
            if (sseSplit + sseNotSplit) < lowestSSE:
                bestCentToSplit = i
                bestNewCents = centroidMat
                bestClustAss = splitClustAss.copy()
                lowestSSE = sseSplit + sseNotSplit
        # update the cluster assignments of the split cluster
        bestClustAss[nonzero(bestClustAss[:, 0].A == 1)[0], 0] = len(centList)
        bestClustAss[nonzero(bestClustAss[:, 0].A == 0)[0], 0] = bestCentToSplit
        print("the bestCentToSplit is:", bestCentToSplit)
        print("the len of bestClustAss is:", len(bestClustAss))
        centList[bestCentToSplit] = bestNewCents[0, :].tolist()[0]   # replace the split centroid with the first new one
        centList.append(bestNewCents[1, :].tolist()[0])              # append the second new centroid
        clusterAssment[nonzero(clusterAssment[:, 0].A == bestCentToSplit)[0], :] = bestClustAss
    return mat(centList), clusterAssment          # return all centroids and the point assignments

datMat = mat(loadDataSet('data.txt'))
myCentList, myNewAssment = bikMeans(datMat, 2)
print("final centroids:\n", myCentList)
print("cluster indices and squared errors:\n", myNewAssment)