Decision Trees (ID3 for Nominal Data)

Pros:
Computationally cheap, results are easy for humans to understand, insensitive to missing values, can handle irrelevant features.
Cons:
Prone to overfitting.
Works with:
Numeric and nominal values.
Information gain
The guiding principle for splitting a dataset: make disordered data more ordered.
The change in information before and after splitting the dataset is called information gain.
The feature that yields the highest information gain is the best choice for splitting the dataset.
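In symbols (these are the standard ID3 definitions, matching the code below), the Shannon entropy of a dataset D with class probabilities p(x_i) is

H(D) = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i)

and the information gain from splitting D on a feature A is the resulting drop in entropy:

\mathrm{Gain}(D, A) = H(D) - \sum_{v \in \mathrm{Values}(A)} \frac{|D_v|}{|D|} H(D_v)

where D_v is the subset of D in which feature A takes the value v.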

Code to compute the Shannon entropy

from math import log

def calcShannonEnt(dataSet):
    """Compute the Shannon entropy of a dataset whose last column is the class label."""
    numEntries = len(dataSet)
    labelCounts = {}
    # Count how many times each class label appears
    for featVec in dataSet:
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    # H = -sum(p * log2(p)) over all class labels
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)
    return shannonEnt

The higher the entropy, the more mixed the data.
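A quick sanity check on a hypothetical toy dataset (illustrative values in the style of the book's fish example, not from this post) shows entropy rising as a third class is mixed in:

myDat = [[1, 1, 'yes'],
         [1, 1, 'yes'],
         [1, 0, 'no'],
         [0, 1, 'no'],
         [0, 1, 'no']]
print(calcShannonEnt(myDat))   # ~0.971 for a 2-vs-3 class split

# Introducing a third class label makes the data more mixed
myDat[0][-1] = 'maybe'
print(calcShannonEnt(myDat))   # ~1.371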
Splitting the dataset

def splitDataSet(dataSet, axis, value):
    """Return the rows whose feature `axis` equals `value`, with that feature removed."""
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            # Drop the feature we split on from the returned vectors
            reducedFeatVec = featVec[:axis]
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet
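For example, splitting the toy dataset above on feature 0 (assuming the definitions above are in the same module):

myDat = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
print(splitDataSet(myDat, 0, 1))   # [[1, 'yes'], [1, 'yes'], [0, 'no']]
print(splitDataSet(myDat, 0, 0))   # [[1, 'no'], [1, 'no']]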

Choosing the best way to split the dataset

def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1          # the last column is the class label
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)
        newEntropy = 0.0
        # Expected entropy after splitting on feature i
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature
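On the same hypothetical toy dataset, splitting on feature 0 yields the larger information gain (about 0.420 vs. 0.171 bits for feature 1), so it is chosen:

myDat = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
print(chooseBestFeatureToSplit(myDat))   # 0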

Recursively building the decision tree

import operator

# When all features are used up but the classes are still not pure, take a majority vote
def majorCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount:
            classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.items(),
            key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    # Stop if every remaining example has the same class
    if classList.count(classList[0]) == len(classList):
        return classList[0]
    # Stop if no features are left; fall back to the majority class
    if len(dataSet[0]) == 1:
        return majorCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        # Lists are passed by reference; copy so recursion does not mutate the caller's labels
        subLabels = labels[:]
        myTree[bestFeatLabel][value] = createTree(
                splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree
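Building a tree from the toy dataset gives a nested dict (the feature names here are hypothetical labels, chosen to match the book's example):

myDat = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
labels = ['no surfacing', 'flippers']   # hypothetical feature names
myTree = createTree(myDat, labels)
print(myTree)
# {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}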

Storing the decision tree
Building a decision tree takes time, so we can serialize the constructed tree and load it back when needed, instead of recomputing it every time.


def storeTree(inputTree, filename):
    import pickle
    # pickle requires binary mode in Python 3
    with open(filename, 'wb') as fw:
        pickle.dump(inputTree, fw)

def grabTree(filename):
    import pickle
    with open(filename, 'rb') as fr:
        return pickle.load(fr)
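A round-trip check (this writes a hypothetical file 'classifierStorage.txt' in the current directory):

storeTree(myTree, 'classifierStorage.txt')
print(grabTree('classifierStorage.txt'))
# {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}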
