I've recently been studying machine learning algorithms and implementing them in Python.
This post records a Python implementation of the k-nearest neighbors (kNN) algorithm along with some notes on how it works.
The write-up draws on zouxy09's blog post, and the code follows Machine Learning in Action.
How the k-nearest neighbors classifier works:
1. For the item to be classified, find the K items in the training set that are closest to it; these K items are already labelled.
2. Count the class labels among those K neighbors and sort the counts in descending order.
3. The label with the highest count is the predicted class of the item (a minimal sketch of these three steps follows this list).
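As a self-contained sketch of these three steps (the 2D toy points, labels, and query below are made up just for illustration; the full listing below does the same thing with NumPy tile/argsort):

import numpy as np

def knn_predict(query, train_x, train_y, k=3):
    # Step 1: Euclidean distance from the query to every labelled sample
    dists = np.sqrt(((train_x - query) ** 2).sum(axis=1))
    # Step 2: tally the labels of the k nearest samples
    nearest = np.argsort(dists)[:k]
    votes = {}
    for idx in nearest:
        votes[train_y[idx]] = votes.get(train_y[idx], 0) + 1
    # Step 3: the label with the most votes is the prediction
    return max(votes, key=votes.get)

train_x = np.array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
train_y = ['A', 'A', 'B', 'B']
print(knn_predict(np.array([0.1, 0.2]), train_x, train_y, k=3))  # -> 'B' (two B points are nearest)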
The full code is below (it classifies unknown handwritten digits against a training set of labelled handwritten digits 0-9):
#!/usr/bin/env python
# coding=utf-8
'''
Created on Sep 16, 2010
kNN: k Nearest Neighbors

Input:      inX: vector to compare to existing dataset (1xN)
            dataSet: size m data set of known vectors (NxM)
            labels: data set labels (1xM vector)
            k: number of neighbors to use for comparison (should be an odd number)

Output:     the most popular class label

@author: pbharrin
'''
from numpy import *
import operator
from os import listdir

# Each row of dataSet corresponds to one element of labels; classify inX by its k nearest neighbors.
# k should be odd: with an even k the vote can tie (e.g. class A: 2, class B: 2) and the result is ambiguous.
def classify0(inX, dataSet, labels, k):
    dataSetSize = dataSet.shape[0]
    diffMat = tile(inX, (dataSetSize,1)) - dataSet   # difference between the query vector and every training vector
    sqDiffMat = diffMat**2
    sqDistances = sqDiffMat.sum(axis=1)
    distances = sqDistances**0.5                     # Euclidean distance
    sortedDistIndicies = distances.argsort()         # sort distances in ascending order (indices into the original arrays)
    classCount = {}
    for i in range(k):                               # labels of the k nearest samples
        voteIlabel = labels[sortedDistIndicies[i]]
        classCount[voteIlabel] = classCount.get(voteIlabel,0) + 1
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]

def createDataSet():
    group = array([[1.0,1.1],[1.0,1.0],[0,0],[0,0.1]])
    labels = ['A','A','B','B']
    return group, labels

def file2matrix(filename):
    fr = open(filename)
    numberOfLines = len(fr.readlines())   # get the number of lines in the file
    returnMat = zeros((numberOfLines,3))  # prepare matrix to return
    classLabelVector = []                 # prepare labels return
    fr = open(filename)
    index = 0
    for line in fr.readlines():
        line = line.strip()
        listFromLine = line.split('\t')
        returnMat[index,:] = listFromLine[0:3]
        classLabelVector.append(int(listFromLine[-1]))
        index += 1
    return returnMat, classLabelVector

def autoNorm(dataSet):
    minVals = dataSet.min(0)
    maxVals = dataSet.max(0)
    ranges = maxVals - minVals
    normDataSet = zeros(shape(dataSet))
    m = dataSet.shape[0]
    normDataSet = dataSet - tile(minVals, (m,1))
    normDataSet = normDataSet/tile(ranges, (m,1))   # element-wise divide
    return normDataSet, ranges, minVals

def datingClassTest():
    hoRatio = 0.50      # hold out 50% of the data for testing
    datingDataMat, datingLabels = file2matrix('datingTestSet2.txt')   # load data set from file
    normMat, ranges, minVals = autoNorm(datingDataMat)
    m = normMat.shape[0]
    numTestVecs = int(m*hoRatio)
    errorCount = 0.0
    for i in range(numTestVecs):
        classifierResult = classify0(normMat[i,:], normMat[numTestVecs:m,:], datingLabels[numTestVecs:m], 3)
        print("the classifier came back with: %d, the real answer is: %d" % (classifierResult, datingLabels[i]))
        if (classifierResult != datingLabels[i]): errorCount += 1.0
    print("the total error rate is: %f" % (errorCount/float(numTestVecs)))
    print(errorCount)

def img2vector(filename):
    returnVect = zeros((1,1024))
    fr = open(filename)
    for i in range(32):
        lineStr = fr.readline()
        for j in range(32):
            returnVect[0,32*i+j] = int(lineStr[j])
    return returnVect

def handwritingClassTest():
    hwLabels = []
    trainingFileList = listdir('trainingDigits')   # load the training set
    m = len(trainingFileList)
    trainingMat = zeros((m,1024))
    for i in range(m):
        fileNameStr = trainingFileList[i]
        fileStr = fileNameStr.split('.')[0]        # take off .txt
        classNumStr = int(fileStr.split('_')[0])
        hwLabels.append(classNumStr)               # the file name encodes the true class label
        trainingMat[i,:] = img2vector('trainingDigits/%s' % fileNameStr)
    testFileList = listdir('testDigits')           # iterate through the test set
    errorCount = 0.0
    mTest = len(testFileList)
    for i in range(mTest):
        fileNameStr = testFileList[i]
        fileStr = fileNameStr.split('.')[0]        # take off .txt
        classNumStr = int(fileStr.split('_')[0])   # true label of the test file
        vectorUnderTest = img2vector('testDigits/%s' % fileNameStr)
        classifierResult = classify0(vectorUnderTest, trainingMat, hwLabels, 3)   # classify against the labelled training set
        if (classifierResult != classNumStr):
            print("the classifier came back with: %d, the real answer is: %d" % (classifierResult, classNumStr))
            errorCount += 1.0
    print("\nthe total number of errors is: %d" % errorCount)
    print("\nthe total error rate is: %f" % (errorCount/float(mTest)))

handwritingClassTest()
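A quick sanity check of classify0 that does not require the digit data is the toy dataset from createDataSet (the expected output in the comment is my own annotation, not part of the original listing):

group, labels = createDataSet()
print(classify0([0, 0], group, labels, 3))   # should print 'B': two of the three nearest points are labelled 'B'

To run handwritingClassTest() itself, the script expects trainingDigits/ and testDigits/ directories containing 32x32 text images whose file names start with the true digit followed by an underscore (for example 0_13.txt), which is what the split('_')[0] calls rely on.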
The functions that matter here are handwritingClassTest(), classify0(), and img2vector():
handwritingClassTest: drives the whole test, loading the labelled training digits, classifying each test digit, and reporting the error count and error rate.
classify0: the core classifier; it compares the item to be classified against all labelled training vectors and returns the majority label among its K nearest neighbors.
img2vector: reads a 32x32 text image file into a 1x1024 row vector.
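To make the input format concrete: each digit file is 32 lines of 32 characters ('0' or '1'), and img2vector flattens it into a single 1x1024 row so that classify0 can compare images with Euclidean distance. A small illustrative check (dummy_digit.txt is a made-up file name, and this assumes img2vector from the listing above is in scope):

# Write a dummy 32x32 "image": all zeros except the top-left pixel.
rows = ['0' * 32 for _ in range(32)]
rows[0] = '1' + '0' * 31
with open('dummy_digit.txt', 'w') as f:
    f.write('\n'.join(rows))

vec = img2vector('dummy_digit.txt')
print(vec.shape)              # (1, 1024)
print(vec[0, 0], vec[0, 1])   # 1.0 0.0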
Notes:
1. To classify a single item, kNN must compute its distance to every labelled item, so the computational cost is high.
2. The result depends on the choice of K; different K values can yield different classifications (see the small example after this list).
3. When the classes are imbalanced, the vote tends to be dominated by the classes with more samples.
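Points 2 and 3 are easy to demonstrate with classify0 directly (the small dataset and query point below are made up for this illustration and assume the listing above, including its numpy import, has already been run):

group2 = array([[0.9, 1.0], [2.0, 2.0], [2.1, 2.1], [2.2, 2.0]])
labels2 = ['A', 'B', 'B', 'B']
print(classify0([1.2, 1.2], group2, labels2, 1))   # 'A': the single nearest neighbor is the lone A point
print(classify0([1.2, 1.2], group2, labels2, 3))   # 'B': the over-represented B class outvotes A once K grows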
References:
1. http://blog.csdn.net/zouxy09/article/details/16955347
2. Machine Learning in Action