Machine Learning: Ensemble Learning with AdaBoost

Contents

  • I. AdaBoost
    • 1. Basic idea
    • 2. Data weights and weak-classifier weights
  • II. Algorithm steps
  • III. Implementation
    • 1. Weighted data set
    • 2. Abstract base classifier
    • 3. Stump classifier
    • 4. Ensemble (Booster)

Learning source: 日撸 Java 三百行 (Days 61-70: decision trees and ensemble learning)

I. AdaBoost

1. Basic idea

  • AdaBoost (Adaptive Boosting) is an iterative algorithm. Its core idea is to train a sequence of different weak classifiers on the same training set and then combine these weak classifiers into one strong classifier.
  • Weak classifier (decision stump)
    (1) AdaBoost usually uses a decision stump, i.e. a single-level decision tree, as its weak classifier. A stump is the simplest possible decision tree: it has exactly one decision point. Even if the training data has many feature dimensions, a stump can split on only one of them, and the split threshold must be chosen as well (see the formalization right after this list).
    (2) When a stump's error is computed, AdaBoost multiplies each mistake by the instance weight, i.e. it uses the weighted error.
    (3) The weight distribution influences where the stump places its decision point: heavily weighted instances receive more attention, lightly weighted ones less.
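Formally, with the ±1 labels used in Section II below, a decision stump on feature dimension $d$ with threshold $\theta$ can be written (in one of its two orientations; the other swaps the signs) as:

$$h_{d,\theta}(x) = \begin{cases} +1, & x_d \le \theta \\ -1, & x_d > \theta \end{cases}$$

Training a stump amounts to choosing $d$, $\theta$, and the orientation that minimize the weighted error.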

2. Data weights and weak-classifier weights

  • Data weights
    Used by each weak classifier to find the decision point with the smallest weighted classification error; that minimal error then determines the weak classifier's own weight.
  • Weak-classifier weights
    In the final vote, each weak classifier votes with its weight, which is computed from its classification error rate. The general rule: the lower a weak classifier's error rate, the higher its weight. A small numeric illustration follows this list.
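For a quick numeric illustration (my own numbers, anticipating Eq. (2) below):

$$\alpha = \frac{1}{2}\ln\frac{1-e}{e}: \qquad e = 0.3 \;\Rightarrow\; \alpha = \frac{1}{2}\ln\frac{0.7}{0.3} \approx 0.424, \qquad e = 0.5 \;\Rightarrow\; \alpha = 0.$$

A classifier no better than coin flipping gets no say at all, while classifiers with lower error speak louder.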

II. Algorithm steps

Input: training set $T = \{(x_1,y_1),(x_2,y_2),\dots,(x_N,y_N)\}$, where $x_i \in \mathcal{X} \subseteq \mathbb{R}^n$ and $y_i \in \mathcal{Y} = \{-1,+1\}$; number of iterations $M$.

  1. Initialize the weight distribution over the training samples: $D_1 = (w_{11}, w_{12}, \dots, w_{1N})$, $w_{1i} = 1/N$, $i = 1,2,\dots,N$.
  2. For $m = 1,2,\dots,M$:
    2.1 Learn on the training set with weight distribution $D_m$, obtaining a weak classifier $G_m(x)$.
    2.2 Compute the classification error rate of $G_m(x)$ on the training set:
    $$e_m = \sum_{i=1}^{N} w_{mi}\, I(G_m(x_i) \neq y_i) \tag{1}$$
    2.3 Compute the weight of $G_m(x)$ within the strong classifier:
    $$\alpha_m = \frac{1}{2}\log\frac{1-e_m}{e_m} \tag{2}$$
    2.4 Update the weight distribution of the training set (here $Z_m$ is a normalization factor, so that the sample weights again sum to 1):
    $$w_{(m+1)i} = \frac{w_{mi}}{Z_m}\exp(-\alpha_m y_i G_m(x_i)), \quad i = 1,2,\dots,N \tag{3}$$
    $$Z_m = \sum_{i=1}^{N} w_{mi}\exp(-\alpha_m y_i G_m(x_i)) \tag{4}$$
  3. Obtain the final classifier (note the sum runs over the $M$ weak classifiers):
    $$F(x) = \mathrm{sign}\left(\sum_{m=1}^{M}\alpha_m G_m(x)\right) \tag{5}$$
    Example:
    In a one-dimensional feature space, suppose that after 3 iterations we know each weak classifier's decision point and its say (weight); let us see how the weighted vote is carried out.
    (Figure 1: three decision stumps on a one-dimensional axis)
    As the figure shows, the 3 iterations produce 3 decision points:
  • The leftmost decision point assigns values less than (or equal to) 7 to class +1 and values greater than 7 to class -1, and the classifier's weight is 0.5. Its vote gives the region up to 7 a score of +0.5 and the region above 7 a score of -0.5.
  • The middle decision point assigns values greater than (or equal to) 13 to class +1 and values less than 13 to class -1, with weight 0.3. Its vote gives the region from 13 upward +0.3 and the region below 13 a score of -0.3.
  • The rightmost decision point assigns values less than (or equal to) 19 to class +1 and values greater than 19 to class -1, with weight 0.4. Its vote gives the region up to 19 a score of +0.4 and the region above 19 a score of -0.4.
    As in the following figure:
    (Figure 2: the per-region votes of the three stumps)
    Summing the votes per region gives:
    (Figure 3: summed votes)
    Finally, applying the sign function yields the classification result:
    (Figure 4: final class labels)
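To make the vote concrete, below is a minimal self-contained Java sketch (my own illustration, not code from the original post) that reproduces the three stumps above, with thresholds 7, 13, 19 and weights 0.5, 0.3, 0.4:

public class WeightedVoteDemo {
	// Weighted vote of the three example stumps.
	static int classify(double x) {
		double tempSum = 0;
		tempSum += 0.5 * ((x <= 7) ? +1 : -1); // leftmost stump
		tempSum += 0.3 * ((x >= 13) ? +1 : -1); // middle stump
		tempSum += 0.4 * ((x <= 19) ? +1 : -1); // rightmost stump
		return (tempSum >= 0) ? +1 : -1; // the sign function of Eq. (5)
	} // Of classify

	public static void main(String[] args) {
		for (double x : new double[] { 5, 10, 15, 21 }) {
			System.out.println("x = " + x + " -> class " + classify(x));
		} // Of for x
	} // Of main
} // Of class WeightedVoteDemo

For x = 10, for instance, the votes sum to -0.5 - 0.3 + 0.4 = -0.4, so the ensemble outputs -1 even though one stump voted +1.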

III. Implementation

1. Weighted data set

In earlier days we always used Weka's Instances class for the data set. In AdaBoost, however, every row of the data set must carry a weight, and this weight belongs to the algorithm rather than to the data itself. For convenience, we therefore extend Instances into a data-set class that additionally carries one weight per instance.

package machinelearning.adaboostig;

import java.io.FileReader;
import java.util.Arrays;

import weka.core.Instances;

/**
 * 带权数据集
 * 
 * @author Ling Lin E-mail:[email protected]
 * 
 */

public class WeightedInstances extends Instances {

	/**
	 * Just the requirement of some classes, any number is ok.
	 */
	private static final long serialVersionUID = 11087456L;

	/**
	 * Weights: one weight per instance, stored alongside the ordinary Instances table.
	 */
	private double[] weights;

	/**
	 ****************** 
	 * The first constructor.
	 * 
	 * @param paraFileReader
	 *            The given reader to read data from file.
	 ****************** 
	 */
	// Constructor from a file reader.
	public WeightedInstances(FileReader paraFileReader) throws Exception {
		super(paraFileReader);
		setClassIndex(numAttributes() - 1);

		// Initialize weights
		weights = new double[numInstances()];
		double tempAverage = 1.0 / numInstances();
		for (int i = 0; i < weights.length; i++) {
			weights[i] = tempAverage;
		} // Of for i
		System.out.println("Instances weights are: " + Arrays.toString(weights));
	} // Of the first constructor

	/**
	 ****************** 
	 * The second constructor.
	 * 
	 * @param paraInstances
	 *            The given instance.
	 ****************** 
	 */
	// Constructor from an Instances object, used for copying.
	public WeightedInstances(Instances paraInstances) {
		super(paraInstances);
		setClassIndex(numAttributes() - 1);

		// Initialize weights
		weights = new double[numInstances()];
		double tempAverage = 1.0 / numInstances();
		for (int i = 0; i < weights.length; i++) {
			weights[i] = tempAverage;
		} // Of for i
		System.out.println("Instances weights are: " + Arrays.toString(weights));
	} // Of the second constructor

	/**
	 ****************** 
	 * Getter.
	 * 
	 * @param paraIndex
	 *            The given index.
	 * @return The weight of the given index.
	 ****************** 
	 */
	public double getWeight(int paraIndex) {
		return weights[paraIndex];
	} // Of getWeight

	/**
	 ****************** 
	 * Adjust the weights.
	 * 
	 * @param paraCorrectArray
	 *            Indicate which instances have been correctly classified.
	 * @param paraAlpha
	 *            The weight of the last classifier.
	 ****************** 
	 */
	public void adjustWeights(boolean[] paraCorrectArray, double paraAlpha) {
		// Step 1. Calculate alpha.
		double tempIncrease = Math.exp(paraAlpha);

		// Step 2. Adjust.
		double tempWeightsSum = 0; // For normalization.
		for (int i = 0; i < weights.length; i++) {
			// Rows the base classifier got right have their weights decreased; rows it got wrong have them increased.
			if (paraCorrectArray[i]) {
				weights[i] /= tempIncrease;
			} else {
				weights[i] *= tempIncrease;
			} // Of if
			tempWeightsSum += weights[i];
		} // Of for i

		// Step 3. Normalize.
		// Keep the total weight at 1.0.
		for (int i = 0; i < weights.length; i++) {
			weights[i] /= tempWeightsSum;
		} // Of for i

		System.out.println("After adjusting, instances weights are: " + Arrays.toString(weights));
	} // Of adjustWeights

	/**
	 ****************** 
	 * Test the method.
	 ****************** 
	 */
	// In this weight-adjustment test, half of the instances are assumed correct and alpha = 0.3.
	public void adjustWeightsTest() {
		boolean[] tempCorrectArray = new boolean[numInstances()];
		for (int i = 0; i < tempCorrectArray.length / 2; i++) {
			tempCorrectArray[i] = true;
		} // Of for i

		// Note: despite its name, this value is passed as the classifier weight alpha.
		double tempWeightedError = 0.3;

		adjustWeights(tempCorrectArray, tempWeightedError);

		System.out.println("After adjusting");

		System.out.println(toString());
	} // Of adjustWeightsTest

	/**
	 ****************** 
	 * For display.
	 ****************** 
	 */
	// Text representation for testing output.
	@Override
	public String toString() {
		String resultString = "I am a weighted Instances object.\r\n" + "I have " + numInstances() + " instances and "
				+ (numAttributes() - 1) + " conditional attributes.\r\n" + "My weights are: " + Arrays.toString(weights)
				+ "\r\n" + "My data are: \r\n" + super.toString();

		return resultString;
	} // Of toString

	/**
	 ****************** 
	 * For unit test.
	 * 
	 * @param args
	 *            Not provided.
	 ****************** 
	 */
	public static void main(String args[]) {
		WeightedInstances tempWeightedInstances = null;
		String tempFilename = "D:/00/data/iris.arff";
		try {
			FileReader tempFileReader = new FileReader(tempFilename);
			tempWeightedInstances = new WeightedInstances(tempFileReader);
			tempFileReader.close();
		} catch (Exception exception1) {
			System.out.println("Cannot read the file: " + tempFilename + "\r\n" + exception1);
			System.exit(0);
		} // Of try

		System.out.println(tempWeightedInstances.toString());

		tempWeightedInstances.adjustWeightsTest();
	} // Of main

} // Of class WeightedInstances

  • The 0.00667 values here are exactly 1/150, since iris has 150 instances.
    (Figure 5: the weight arrays printed by the unit test)
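As a sanity check on adjustWeightsTest (my own arithmetic, not from the original post): with $N = 150$, $\alpha = 0.3$, and the first half of the instances marked correct, every correct weight is divided by $e^{0.3} \approx 1.3499$, every wrong weight multiplied by it, and the result normalized:

$$w_{\text{correct}} = \frac{(1/150)\,e^{-0.3}}{Z} \approx 0.00472, \qquad w_{\text{wrong}} = \frac{(1/150)\,e^{0.3}}{Z} \approx 0.00861, \qquad Z = \frac{75\,(e^{-0.3} + e^{0.3})}{150} \approx 1.0453.$$

These are the two values the adjusted weight array should contain.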

2. Abstract base classifier

As an abstract class, its main purpose is to provide reusable building blocks for the concrete classifiers constructed later, improving reuse when creating new classifiers.

package machinelearning.adaboostig;

import java.util.Random;

import weka.core.Instance;

/**
 * Abstract classifier.
 * 
 * @author Ling Lin E-mail:[email protected]
 * 
 */

public abstract class SimpleClassifier {

	/**
	 * The index of the current attribute.
	 */
	// The attribute selected for splitting.
	int selectedAttribute;

	/**
	 * Weighted data.
	 */
	// The weighted data set.
	WeightedInstances weightedInstances;

	/**
	 * The accuracy on the training set.
	 */
	// Training accuracy of this classifier itself.
	double trainingAccuracy;

	/**
	 * The number of classes. For binary classification it is 2.
	 */
	// Number of class labels (values of the decision attribute).
	int numClasses;

	/**
	 * The number of instances.
	 */
	// Number of instances in the data set.
	int numInstances;

	/**
	 * The number of conditional attributes.
	 */
	// Number of conditional attributes.
	int numConditions;

	/**
	 * For random number generation.
	 */
	Random random = new Random();

	/**
	 ****************** 
	 * The first constructor.
	 * 
	 * @param paraWeightedInstances
	 *            The given instances.
	 ****************** 
	 */
	public SimpleClassifier(WeightedInstances paraWeightedInstances) {
		weightedInstances = paraWeightedInstances;

		numConditions = weightedInstances.numAttributes() - 1;
		numInstances = weightedInstances.numInstances();
		numClasses = weightedInstances.classAttribute().numValues();
	}// Of the first constructor

	/**
	 ****************** 
	 * Train the classifier.
	 ****************** 
	 */
	// Training entry point; concrete subclasses build themselves here.
	public abstract void train();

	/**
	 ****************** 
	 * Classify an instance.
	 * 
	 * @param paraInstance
	 *            The given instance.
	 * @return Predicted label.
	 ****************** 
	 */
	public abstract int classify(Instance paraInstance);

	
	/**
	 ****************** 
	 * Which instances in the training set are correctly classified.
	 * 
	 * @return The correctness array.
	 ****************** 
	 */
	// Scan the training set row by row and record whether each row is classified correctly.
	public boolean[] computeCorrectnessArray() {
		boolean[] resultCorrectnessArray = new boolean[weightedInstances.numInstances()];
		for (int i = 0; i < resultCorrectnessArray.length; i++) {
			Instance tempInstance = weightedInstances.instance(i);
			if ((int) (tempInstance.classValue()) == classify(tempInstance)) {
				resultCorrectnessArray[i] = true;
			} // Of if
		} // Of for i
	
		return resultCorrectnessArray;
	}// Of computeCorrectnessArray

	/**
	 ****************** 
	 * Compute the accuracy on the training set.
	 * 
	 * @return The training accuracy.
	 ****************** 
	 */
	// Count the correct entries of computeCorrectnessArray() to obtain
	// the (unweighted) training accuracy of the whole classifier.
	public double computeTrainingAccuracy() {
		double tempCorrect = 0;
		boolean[] tempCorrectnessArray = computeCorrectnessArray();
		for (int i = 0; i < tempCorrectnessArray.length; i++) {
			if (tempCorrectnessArray[i]) {
				tempCorrect++;
			} // Of if
		} // Of for i

		double resultAccuracy = tempCorrect / tempCorrectnessArray.length;

		return resultAccuracy;
	}// Of computeTrainingAccuracy

	/**
	 ****************** 
	 * Compute the weighted error on the training set. It is at least 1e-6 to
	 * avoid NaN.
	 * 
	 * @return The weighted error.
	 ****************** 
	 */
	// Sum the weights of all misclassified rows.
	public double computeWeightedError() {
		double resultError = 0;
		boolean[] tempCorrectnessArray = computeCorrectnessArray();
		for (int i = 0; i < tempCorrectnessArray.length; i++) {
			if (!tempCorrectnessArray[i]) {
				resultError += weightedInstances.getWeight(i);
			} // Of if
		} // Of for i
		
		// Clamp overly small errors to 1e-6; otherwise the classifier weight 0.5 * log(1 / e - 1) could overflow or become NaN (a common guard).
		if (resultError < 1e-6) {
			resultError = 1e-6;
		} // Of if

		return resultError;
	}// Of computeWeightedError
} // Of class SimpleClassifier
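To see what a concrete subclass has to supply, here is a minimal illustrative subclass (hypothetical, not part of the original project): it ignores all attributes and always predicts the label with the largest total weight. It is useless as a real learner, but it shows the train()/classify() contract that StumpClassifier fulfills below.

package machinelearning.adaboostig;

import weka.core.Instance;

public class MajorityClassifier extends SimpleClassifier {

	// The label with the largest total weight in the training set.
	private int majorityLabel;

	public MajorityClassifier(WeightedInstances paraWeightedInstances) {
		super(paraWeightedInstances);
	} // Of the constructor

	@Override
	public void train() {
		// Accumulate the weight of each label.
		double[] tempWeightSums = new double[numClasses];
		for (int i = 0; i < numInstances; i++) {
			int tempLabel = (int) weightedInstances.instance(i).classValue();
			tempWeightSums[tempLabel] += weightedInstances.getWeight(i);
		} // Of for i

		// Pick the label with the largest total weight.
		majorityLabel = 0;
		for (int i = 1; i < numClasses; i++) {
			if (tempWeightSums[i] > tempWeightSums[majorityLabel]) {
				majorityLabel = i;
			} // Of if
		} // Of for i
	} // Of train

	@Override
	public int classify(Instance paraInstance) {
		return majorityLabel;
	} // Of classify
} // Of class MajorityClassifier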

3. Stump classifier

In AdaBoost, the stump classifier is just a binary classifier: it can only tell two labels apart, and after the single split there is no recursion, it simply produces two leaf children. Obviously such a classifier cannot handle a multi-label task by itself; we extend from two labels to many through the sequential combination of the many classifiers that AdaBoost builds.

  • Use a random number to select one conditional attribute as the basis for the split.
  • Split all values of the selected (numeric) attribute into two branches.
  • Split only once; no recursion.

package machinelearning.adaboostig;

import java.io.FileReader;
import java.util.Arrays;

import weka.core.Instance;

/**
 * Stump classifier.
 * 
 * @author Ling Lin E-mail:[email protected]
 * 
 */
public class StumpClassifier extends SimpleClassifier {

	/**
	 * The best cut for the current attribute on weightedInstances.
	 */
	//The best cut value. It must not coincide with any element of the value array,
	//so that the data are split into two disjoint subsets around it.
	double bestCut;

	/**
	 * The class label for attribute value less than bestCut.
	 */
	//Best label of the left leaf:
	//the label with the largest total weight among instances left of the cut.
	int leftLeafLabel;

	/**
	 * The class label for attribute value no less than bestCut.
	 */
	//Best label of the right leaf:
	//the label with the largest total weight among instances right of the cut.
	int rightLeafLabel;

	/**
	 ****************** 
	 * The only constructor.
	 * 
	 * @param paraWeightedInstances
	 *            The given instances.
	 ****************** 
	 */
	public StumpClassifier(WeightedInstances paraWeightedInstances) {
		super(paraWeightedInstances);
	}// Of the only constructor

	/**
	 ****************** 
	 * Train the classifier.
	 ****************** 
	 */
	@Override
	public void train() {
		// Step 1. Randomly choose an attribute.
		selectedAttribute = random.nextInt(numConditions);

		// Step 2. Find all attribute values and sort.
		double[] tempValuesArray = new double[numInstances];
		for (int i = 0; i < tempValuesArray.length; i++) {
			tempValuesArray[i] = weightedInstances.instance(i).value(selectedAttribute);
		} // Of for i
		Arrays.sort(tempValuesArray);

		// Step 3. Initialize, classify all instances as the same with the
		// original cut.
		int tempNumLabels = numClasses;
		double[] tempLabelCountArray = new double[tempNumLabels];
		int tempCurrentLabel;

		// Step 3.1 Scan all labels to obtain their counts.
		for (int i = 0; i < numInstances; i++) {
			// The label of the ith instance
			tempCurrentLabel = (int) weightedInstances.instance(i).classValue();
			tempLabelCountArray[tempCurrentLabel] += weightedInstances.getWeight(i);
		} // Of for i

		// Step 3.2 Find the label with the maximal count.
		double tempMaxCorrect = 0;
		int tempBestLabel = -1;
		for (int i = 0; i < tempLabelCountArray.length; i++) {
			if (tempMaxCorrect < tempLabelCountArray[i]) {
				tempMaxCorrect = tempLabelCountArray[i];
				tempBestLabel = i;
			} // Of if
		} // Of for i

		// Step 3.3 The cut is a little bit smaller than the minimal value.
		bestCut = tempValuesArray[0] - 0.1;
		leftLeafLabel = tempBestLabel;
		rightLeafLabel = tempBestLabel;

		// Step 4. Check candidate cuts one by one.
		// Step 4.1 To handle multi-class data, left and right.
		double tempCut;
		double[][] tempLabelCountMatrix = new double[2][tempNumLabels];

		for (int i = 0; i < tempValuesArray.length - 1; i++) {
			// Step 4.1 Some attribute values are identical, ignore them.
			if (tempValuesArray[i] == tempValuesArray[i + 1]) {
				continue;
			} // Of if
			tempCut = (tempValuesArray[i] + tempValuesArray[i + 1]) / 2;

			// Step 4.2 Scan all labels to obtain their counts wrt. the cut.
			// Initialize again since it is used many times.
			for (int j = 0; j < 2; j++) {
				for (int k = 0; k < tempNumLabels; k++) {
					tempLabelCountMatrix[j][k] = 0;
				} // Of for k
			} // Of for j

			for (int j = 0; j < numInstances; j++) {
				// The label of the jth instance
				tempCurrentLabel = (int) weightedInstances.instance(j).classValue();
				if (weightedInstances.instance(j).value(selectedAttribute) < tempCut) {
					tempLabelCountMatrix[0][tempCurrentLabel] += weightedInstances.getWeight(j);
				} else {
					tempLabelCountMatrix[1][tempCurrentLabel] += weightedInstances.getWeight(j);
				} // Of if
			} // Of for j

			// Step 4.3 Left leaf.
			double tempLeftMaxCorrect = 0;
			int tempLeftBestLabel = 0;
			for (int j = 0; j < tempLabelCountMatrix[0].length; j++) {
				if (tempLeftMaxCorrect < tempLabelCountMatrix[0][j]) {
					tempLeftMaxCorrect = tempLabelCountMatrix[0][j];
					tempLeftBestLabel = j;
				} // Of if
			} // Of for j

			// Step 4.4 Right leaf.
			double tempRightMaxCorrect = 0;
			int tempRightBestLabel = 0;
			for (int j = 0; j < tempLabelCountMatrix[1].length; j++) {
				if (tempRightMaxCorrect < tempLabelCountMatrix[1][j]) {
					tempRightMaxCorrect = tempLabelCountMatrix[1][j];
					tempRightBestLabel = j;
				} // Of if
			} // Of for j

			// Step 4.5 Compare with the current best.
			if (tempMaxCorrect < tempLeftMaxCorrect + tempRightMaxCorrect) {
				tempMaxCorrect = tempLeftMaxCorrect + tempRightMaxCorrect;
				bestCut = tempCut;
				leftLeafLabel = tempLeftBestLabel;
				rightLeafLabel = tempRightBestLabel;
			} // Of if
		} // Of for i

		System.out.println("Attribute = " + selectedAttribute + ", cut = " + bestCut + ", leftLeafLabel = "
				+ leftLeafLabel + ", rightLeafLabel = " + rightLeafLabel);
	}// Of train

	/**
	 ****************** 
	 * Classify an instance.
	 * 
	 * @param paraInstance
	 *            The given instance.
	 * @return Predicted label.
	 ****************** 
	 */
	@Override
	public int classify(Instance paraInstance) {
		int resultLabel = -1;
		if (paraInstance.value(selectedAttribute) < bestCut) {
			resultLabel = leftLeafLabel;
		} else {
			resultLabel = rightLeafLabel;
		} // Of if
		return resultLabel;
	}// Of classify

	/**
	 ****************** 
	 * For display.
	 ****************** 
	 */
	@Override
	public String toString() {
		String resultString = "I am a stump classifier.\r\n" + "I choose attribute #" + selectedAttribute
				+ " with cut value " + bestCut + ".\r\n" + "The left and right leaf labels are " + leftLeafLabel
				+ " and " + rightLeafLabel + ", respectively.\r\n" + "My weighted error is: " + computeWeightedError()
				+ ".\r\n" + "My weighted accuracy is : " + computeTrainingAccuracy() + ".";

		return resultString;
	}// Of toString

	/**
	 ****************** 
	 * For unit test.
	 * 
	 * @param args
	 *            Not provided.
	 ****************** 
	 */
	public static void main(String args[]) {
		WeightedInstances tempWeightedInstances = null;
		String tempFilename = "D:/data/iris.arff";
		try {
			FileReader tempFileReader = new FileReader(tempFilename);
			tempWeightedInstances = new WeightedInstances(tempFileReader);
			tempFileReader.close();
		} catch (Exception ee) {
			System.out.println("Cannot read the file: " + tempFilename + "\r\n" + ee);
			System.exit(0);
		} // Of try

		StumpClassifier tempClassifier = new StumpClassifier(tempWeightedInstances);
		tempClassifier.train();
		System.out.println(tempClassifier);

		System.out.println(Arrays.toString(tempClassifier.computeCorrectnessArray()));
	}// Of main
}// Of class StumpClassifier

  • The result of one run: the stump selected attribute #0 (sepallength) to split on, the cut value is 5.5, and the best labels of the left and right subsets are label 0 (Iris-setosa) and label 2 (Iris-virginica); the final accuracy is 0.64.
    (Figure 6: output of the StumpClassifier unit test)

4. Ensemble (Booster)

The core idea of the booster is to use each stump's misclassifications to correct the weight distribution, which in turn influences how the next stump is trained. Each round is therefore not a flat "average" split: the weights bias the split toward the labels and instances where errors occur most, continually correcting the remaining mistakes.


package machinelearning.adaboostig;

import java.io.FileReader;

import weka.core.Instance;
import weka.core.Instances;

/**
 * 
 * @author Ling Lin E-mail:[email protected]
 * 
 */
public class Booster {

	/**
	 * Classifiers.
	 */
	// An array of references to all the base classifiers used by this booster.
	SimpleClassifier[] classifiers;

	/**
	 * Number of classifiers.
	 */
	// The number of base classifiers actually built; at most classifiers.length.
	int numClassifiers;

	/**
	 * Whether or not stop after the training error is 0.
	 */
	// Whether to stop early once the training accuracy converges (the constructor sets it to true).
	boolean stopAfterConverge = false;

	/**
	 * The weights of classifiers.
	 */
	// The voting weight of each base classifier.
	double[] classifierWeights;

	/**
	 * The training data.
	 */
	// The training set.
	Instances trainingData;

	/**
	 * The testing data.
	 */
	// The testing set.
	Instances testingData;

	/**
	 ****************** 
	 * The first constructor. The testing set is the same as the training set.
	 * 
	 * @param paraTrainingFilename
	 *            The data filename.
	 ****************** 
	 */
	public Booster(String paraTrainingFilename) {
		// Step 1. Read training set.
		try {
			FileReader tempFileReader = new FileReader(paraTrainingFilename);
			trainingData = new Instances(tempFileReader);
			tempFileReader.close();
		} catch (Exception ee) {
			System.out.println("Cannot read the file: " + paraTrainingFilename + "\r\n" + ee);
			System.exit(0);
		} // Of try

		// Step 2. Set the last attribute as the class index.
		trainingData.setClassIndex(trainingData.numAttributes() - 1);

		// Step 3. The testing data is the same as the training data.
		testingData = trainingData;

		stopAfterConverge = true;

		System.out.println("****************Data**********\r\n" + trainingData);
	}// Of the first constructor

	/**
	 ****************** 
	 * Set the number of base classifier, and allocate space for them.
	 * 
	 * @param paraNumBaseClassifiers
	 *            The number of base classifier.
	 ****************** 
	 */
	public void setNumBaseClassifiers(int paraNumBaseClassifiers) {
		numClassifiers = paraNumBaseClassifiers;

		// Step 1. Allocate space (only reference) for classifiers
		classifiers = new SimpleClassifier[numClassifiers];

		// Step 2. Initialize classifier weights.
		classifierWeights = new double[numClassifiers];
	}// Of setNumBaseClassifiers

	/**
	 ****************** 
	 * Train the booster.
	 * 
	 * @see machinelearning.adaboostig.StumpClassifier#train()
	 ****************** 
	 */
	public void train() {
		// Step 1. Initialize.
		WeightedInstances tempWeightedInstances = null;
		double tempError;
		numClassifiers = 0;

		// Step 2. Build other classifiers.
		for (int i = 0; i < classifiers.length; i++) {
			// Step 2.1 Key code: Construct or adjust the weightedInstances
			if (i == 0) {
				tempWeightedInstances = new WeightedInstances(trainingData);
			} else {
				// Adjust the weights of the data.
				tempWeightedInstances.adjustWeights(classifiers[i - 1].computeCorrectnessArray(),
						classifierWeights[i - 1]);
			} // Of if

			// Step 2.2 Train the next classifier.
			classifiers[i] = new StumpClassifier(tempWeightedInstances);
			classifiers[i].train();

			tempError = classifiers[i].computeWeightedError();

			// Key code: Set the classifier weight.
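			// Note: 0.5 * log(1 / e - 1) = 0.5 * log((1 - e) / e), which is exactly Eq. (2).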
			classifierWeights[i] = 0.5 * Math.log(1 / tempError - 1);
			if (classifierWeights[i] < 1e-6) {
				classifierWeights[i] = 0;
			} // Of if

			System.out.println("Classifier #" + i + " , weighted error = " + tempError + ", weight = "
					+ classifierWeights[i] + "\r\n");

			numClassifiers++;

			// The accuracy is enough.
			if (stopAfterConverge) {
				double tempTrainingAccuracy = computeTrainingAccuray();
				System.out.println("The accuracy of the booster is: " + tempTrainingAccuracy + "\r\n");
				if (tempTrainingAccuracy > 0.999999) {
					System.out.println("Stop at the round: " + i + " due to converge.\r\n");
					break;
				} // Of if
			} // Of if
		} // Of for i
	}// Of train

	/**
	 ****************** 
	 * Classify an instance.
	 * 
	 * @param paraInstance
	 *            The given instance.
	 * @return The predicted label.
	 ****************** 
	 */
	public int classify(Instance paraInstance) {
		double[] tempLabelsCountArray = new double[trainingData.classAttribute().numValues()];
		for (int i = 0; i < numClassifiers; i++) {
			int tempLabel = classifiers[i].classify(paraInstance);
			tempLabelsCountArray[tempLabel] += classifierWeights[i];
		} // Of for i

		int resultLabel = -1;
		double tempMax = -1;
		for (int i = 0; i < tempLabelsCountArray.length; i++) {
			if (tempMax < tempLabelsCountArray[i]) {
				tempMax = tempLabelsCountArray[i];
				resultLabel = i;
			} // Of if
		} // Of for

		return resultLabel;
	}// Of classify

	/**
	 ****************** 
	 * Test the booster on the training data.
	 * 
	 * @return The classification accuracy.
	 ****************** 
	 */
	public double test() {
		System.out.println("Testing on " + testingData.numInstances() + " instances.\r\n");

		return test(testingData);
	}// Of test

	/**
	 ****************** 
	 * Test the booster.
	 * 
	 * @param paraInstances
	 *            The testing set.
	 * @return The classification accuracy.
	 ****************** 
	 */
	public double test(Instances paraInstances) {
		double tempCorrect = 0;
		paraInstances.setClassIndex(paraInstances.numAttributes() - 1);

		for (int i = 0; i < paraInstances.numInstances(); i++) {
			Instance tempInstance = paraInstances.instance(i);
			if (classify(tempInstance) == (int) tempInstance.classValue()) {
				tempCorrect++;
			} // Of if
		} // Of for i

		double resultAccuracy = tempCorrect / paraInstances.numInstances();
		System.out.println("The accuracy is: " + resultAccuracy);

		return resultAccuracy;
	} // Of test

	/**
	 ****************** 
	 * Compute the training accuracy of the booster. It is not weighted.
	 * 
	 * @return The training accuracy.
	 ****************** 
	 */
	public double computeTrainingAccuray() {
		double tempCorrect = 0;

		for (int i = 0; i < trainingData.numInstances(); i++) {
			if (classify(trainingData.instance(i)) == (int) trainingData.instance(i).classValue()) {
				tempCorrect++;
			} // Of if
		} // Of for i

		double tempAccuracy = tempCorrect / trainingData.numInstances();

		return tempAccuracy;
	}// Of computeTrainingAccuray

	/**
	 ****************** 
	 * For integration test.
	 * 
	 * @param args
	 *            Not provided.
	 ****************** 
	 */
	public static void main(String args[]) {
		System.out.println("Starting AdaBoosting...");
		Booster tempBooster = new Booster("D:/00/data/iris.arff");
		// Booster tempBooster = new Booster("src/data/smalliris.arff");

		tempBooster.setNumBaseClassifiers(100);
		tempBooster.train();

		System.out.println("The training accuracy is: " + tempBooster.computeTrainingAccuray());
		tempBooster.test();
	}// Of main

}// Of class Booster
  • This result comes from testing on the training data itself; the final accuracy reaches 0.98. When tested alone, a stump classifier's recognition rate only falls in the 0.56 to 0.67 range, yet the ensemble of a number of such weak classifiers reaches an accuracy of 0.98.
    (Figure 7: output of the Booster integration test)
