AdaBoost (adaptive boosting) is an ensemble learning algorithm.
Ensemble learning completes a learning task by building and combining multiple learners, in the spirit of the proverb "three cobblers together outdo a Zhuge Liang".
For example, suppose we have weak classifiers each with an error rate of 0.45, and assume the individual classifiers are mutually independent. Plotting the ensemble error rate against the number of individual classifiers (the original figure is omitted here; the sketch below reproduces the numbers), it is easy to see that the ensemble error rate falls as the number of individual classifiers grows.
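To make that curve concrete, here is a minimal Java sketch (mine, not part of the post's code base; the class name EnsembleErrorDemo is made up) that evaluates the majority-vote error of $T$ independent classifiers, each with error rate 0.45:
public class EnsembleErrorDemo {
	public static void main(String[] args) {
		// Sketch for illustration, not part of the original post.
		double epsilon = 0.45;
		// The ensemble errs when at most floor(T/2) members are correct.
		for (int t = 1; t <= 101; t += 20) {
			double error = 0;
			for (int k = 0; k <= t / 2; k++) {
				error += binomial(t, k) * Math.pow(1 - epsilon, k) * Math.pow(epsilon, t - k);
			}
			System.out.println("T = " + t + ", ensemble error = " + error);
		}
	}

	// Binomial coefficient C(n, k), computed in double precision.
	static double binomial(int n, int k) {
		double result = 1;
		for (int i = 1; i <= k; i++) {
			result = result * (n - i + 1) / i;
		}
		return result;
	}
}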
Expressed via Hoeffding's inequality, the relationship is:
$$P(F(\boldsymbol{x})\ne f(\boldsymbol{x}))=\sum_{k=0}^{\lfloor T/2\rfloor}\binom{T}{k}(1-\epsilon)^k\epsilon^{T-k}\le e^{-\frac{1}{2}T(1-2\epsilon)^2}$$
where $T$ is the number of individual classifiers in the ensemble, $\epsilon$ is the error rate of an individual classifier, and $F(\boldsymbol{x})$ and $f(\boldsymbol{x})$ denote the predicted label and the true label, respectively.
It is easy to see from this expression that the ensemble error rate drops exponentially as the number of individual classifiers grows, and that ensembling is meaningless when an individual classifier's error rate is 0.5 or higher (such a learner does no better than random guessing).
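To get a feel for the bound (arithmetic mine): with $\epsilon=0.45$ the exponent is $-\frac{1}{2}T(1-0.9)^2=-0.005T$, so $P(F(\boldsymbol{x})\ne f(\boldsymbol{x}))\le e^{-0.005T}$, which is about $0.61$ at $T=100$ and about $0.0067$ at $T=1000$. The bound is loose (the exact sum at $T=100$ is roughly $0.18$), but it makes the exponential decay explicit.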
The update expressions for an individual classifier's weight and for the weights of misclassified samples are given below. Both are derived by minimizing the exponential loss; the full derivation can be found in the ensemble-learning chapter of the "watermelon book" (Zhou Zhihua's Machine Learning).
Weight of the current individual classifier: $\alpha=\frac{1}{2}\ln\left(\frac{1-\epsilon}{\epsilon}\right)$. Since a classifier with error rate $\epsilon\ge 0.5$ is meaningless (and is discarded), we always have $\alpha>0$.
Update of the sample weights: $D_{t+1}(\boldsymbol{x})=\frac{D_t(\boldsymbol{x})}{Z_t}\times e^{-\alpha f(\boldsymbol x)h(\boldsymbol x)}$. When $f(\boldsymbol x)=h(\boldsymbol x)$ we have $f(\boldsymbol x)h(\boldsymbol x)=1$, otherwise it is $-1$; thus a sample's weight grows when the prediction disagrees with the true label and shrinks otherwise. $Z_t$ is a normalization factor that rescales the weights into a distribution summing to 1.
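Writing out the two cases, with a concrete (arbitrary) error rate $\epsilon=0.3$ plugged in as a worked check:
$$e^{-\alpha f(\boldsymbol x)h(\boldsymbol x)}=\begin{cases}e^{-\alpha}, & f(\boldsymbol x)=h(\boldsymbol x),\\ e^{\alpha}, & f(\boldsymbol x)\ne h(\boldsymbol x).\end{cases}$$
With $\epsilon=0.3$, $\alpha=\frac{1}{2}\ln\frac{0.7}{0.3}\approx 0.424$; before normalization, each correctly classified sample's weight is scaled by $e^{-0.424}\approx 0.65$ and each misclassified sample's weight by $e^{0.424}\approx 1.53$.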
Understanding the weights:
Why increase the weights of misclassified samples? Consider the average error rate $E(f;D)=\frac{1}{m}\sum_{i=1}^m\mathbb{I}(f(\boldsymbol x_i)\ne y_i)$. If the samples are not uniformly distributed, say the sample with attribute value $\boldsymbol x_k$ occurs $k$ times, then its occurrence probability is $p(\boldsymbol x_k)=\frac{k}{m}$ and the average error rate becomes $E(f;D)=\int_{\boldsymbol x\sim D} \mathbb{I}(f(\boldsymbol x)\ne y)\,p(\boldsymbol x)\,d\boldsymbol x$. Increasing a sample's weight is thus equivalent to increasing its frequency of occurrence, and the higher that frequency, the harder the next learner works to classify that sample correctly.
Why give each individual classifier a weight? The ensemble's final prediction is a vote over the individual classifiers' predictions. In the classifier-weight expression, $\frac{1-\epsilon}{\epsilon}$ is the odds that the classifier is correct; the larger this value, the better the classifier, and the more say it deserves in the vote.
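A concrete comparison (numbers mine): a classifier with $\epsilon=0.1$ gets $\alpha=\frac{1}{2}\ln 9\approx 1.10$, while one with $\epsilon=0.4$ gets $\alpha=\frac{1}{2}\ln 1.5\approx 0.20$, so the more accurate classifier carries roughly five times the voting weight.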
Code:
(1)
package adaboosting;
import java.io.FileReader;
import java.util.Arrays;
import weka.core.Instances;
public class WeightedInstances extends Instances {
/**
* Required for serialization; any number is OK.
*/
private static final long serialVersionUID = 11087456L;
/**
* Weights.
*/
private double[] weights;
/**
*********************
* The first constructor.
*
* @param paraFileReader The given reader to read data from file.
*********************
*/
public WeightedInstances(FileReader paraFileReader) throws Exception {
super(paraFileReader);
setClassIndex(numAttributes() - 1);
// Initialize weights
weights = new double[numInstances()];
double tempAverage = 1.0 / numInstances();
for (int i = 0; i < weights.length; i++) {
weights[i] = tempAverage;
} // Of for i
System.out.println("Instances weights are: " + Arrays.toString(weights));
}// Of the first constructor.
/**
*********************
* The second constructor.
*
* @param paraInstances The given instances.
*********************
*/
public WeightedInstances(Instances paraInstances) {
super(paraInstances);
setClassIndex(numAttributes() - 1);
// Initialize weights.
weights = new double[numInstances()];
double tempAverage = 1.0 / numInstances();
for (int i = 0; i < weights.length; i++) {
weights[i] = tempAverage;
} // Of for i
System.out.println("Instances weights are: " + Arrays.toString(weights));
}// Of the second constructor
/**
********************
* Getter.
*
* @param paraIndex The given index.
* @return The weight of the given index.
*********************
*/
public double getWeight(int paraIndex) {
return weights[paraIndex];
}// Of getWeight
/**
********************
* Adjust the weights.
*
* @param paraCorrectArray Indicate which instances have been correctly
* classified.
* @param paraAlpha The weight of the last classifier.
*********************
*/
public void adjustWeights(boolean[] paraCorrectArray, double paraAlpha) {
// Step 1. Calculate alpha.
double tempIncrease = Math.exp(paraAlpha); // Step 1. Calculate e^alpha.
// Step 2. Adjust
double tempWeightsSum = 0;// For normalization
for (int i = 0; i < weights.length; i++) {
if (paraCorrectArray[i]) {
weights[i] /= tempIncrease;
} else {
weights[i] *= tempIncrease;
} // Of if
tempWeightsSum += weights[i];
} // Of for i
// Step 3. Normalize.
for (int i = 0; i < weights.length; i++) {
weights[i] /= tempWeightsSum;
} // Of for i
System.out.println("After adjusting, instances weights are: " + Arrays.toString(weights));
}// Of adjustWeights
/**
********************
* Test the method.
*********************
*/
public void adjustWeightsTest() {
boolean[] tempCorrectArray = new boolean[numInstances()];
for (int i = 0; i < tempCorrectArray.length; i++) {
tempCorrectArray[i] = true;
} // Of for i
double tempAlpha = 0.3;
adjustWeights(tempCorrectArray, tempAlpha);
System.out.println("After adjusting");
System.out.println(toString());
}// Of adjustWeightsTest
/**
*********************
* For display.
*********************
*/
public String toString() {
String resultString = "I am a weighted Instances object.\r\n" + "I have " + numInstances() + " instances and "
+ (numAttributes() - 1) + " conditional attributes.\r\n" + "My weights are: " + Arrays.toString(weights)
+ "\r\n" + "My data are: " + super.toString();
return resultString;
}// Of toString
/**
********************
* For unit test.
*
* @param args Not provided.
*********************
*/
public static void main(String args[]) {
WeightedInstances tempWeightedInstances = null;
String tempFilename = "F:/sampledataMain/iris.arff";
try {
FileReader tempFileReader = new FileReader(tempFilename);
tempWeightedInstances = new WeightedInstances(tempFileReader);
tempFileReader.close();
} catch (Exception e) {
System.out.println("Cannot read the file: " + tempFilename + "\r\n" + e);
System.exit(0);
} // Of try
System.out.println(tempWeightedInstances.toString());
tempWeightedInstances.adjustWeightsTest();
}// Of main
}// Of class WeightedInstances
The code above defines a WeightedInstances class, extending Instances, that adds per-sample weights and the related methods. The adjustWeights method takes two parameters: paraCorrectArray records whether each sample was predicted correctly, and paraAlpha is the weight $\alpha$ of the current individual classifier. The method first computes $e^{\alpha}$, then multiplies or divides each sample's weight by that value according to paraCorrectArray while accumulating the sum of all weights, and finally normalizes, implementing $D_{t+1}(\boldsymbol{x})=\frac{D_t(\boldsymbol{x})}{Z_t}\times e^{-\alpha f(\boldsymbol x)h(\boldsymbol x)}$.
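The method's divide/multiply mirrors the two values of the exponent; with tempIncrease $=e^{\alpha}$:
$$D_{t+1}(\boldsymbol x)\propto\begin{cases}D_t(\boldsymbol x)\,/\,e^{\alpha}, & \text{paraCorrectArray[i] is true }(f(\boldsymbol x)h(\boldsymbol x)=1),\\ D_t(\boldsymbol x)\times e^{\alpha}, & \text{paraCorrectArray[i] is false }(f(\boldsymbol x)h(\boldsymbol x)=-1),\end{cases}$$
and the final normalization loop divides by the accumulated weight sum, which plays the role of $Z_t$.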
(2)
package adaboosting;
import java.util.Random;
import weka.core.Instance;
public abstract class SimpleClassifier {
/**
* The index of the current attribute.
*/
int selectedAttribute;
/**
* Weighted data.
*/
WeightedInstances weightedInstances;
/**
* The accuracy on the training set.
*/
double trainingAccuracy;
/**
* The number of classes. For binary classification it is 2.
*/
int numClasses;
/**
* The number of instances.
*/
int numInstances;
/**
* The number of conditional attributes.
*/
int numConditions;
/**
* For random number generation.
*/
Random random = new Random();
/**
******************
* Train the classifier.
******************
*/
public abstract void train();
/**
*********************
* The first constructor.
*
* @param paraWeightedInstances The given instances.
*********************
*/
public SimpleClassifier(WeightedInstances paraWeightedInstances) {
weightedInstances = paraWeightedInstances;
numConditions = weightedInstances.numAttributes() - 1;
numInstances = weightedInstances.numInstances();
numClasses = weightedInstances.classAttribute().numValues();
}// Of the first constructor
/**
********************
* Classify an instance.
*
* @param paraInstance The given instance.
* @return Predicted label.
*********************
*/
public abstract int classify(Instance paraInstance);
/**
********************
* Which instances in the training set are correctly classified.
*
* @return The correctness array.
*********************
*/
public boolean[] computeCorrectnessArray() {
boolean[] resultCorrectnessArray = new boolean[weightedInstances.numInstances()];
for (int i = 0; i < resultCorrectnessArray.length; i++) {
Instance tempInstance = weightedInstances.instance(i);
if ((int) (tempInstance.classValue()) == classify(tempInstance)) {
resultCorrectnessArray[i] = true;
} // Of if
} // Of for i
return resultCorrectnessArray;
}// Of computeCorrectnessArray
/**
********************
* Compute the accuracy on the training set.
*
* @return The training accuracy.
*********************
*/
public double computeTrainningAccuracy() {
double tempCorrect = 0;
boolean[] tempCorrectnessArray = computeCorrectnessArray();
for (int i = 0; i < tempCorrectnessArray.length; i++) {
if (tempCorrectnessArray[i]) {
tempCorrect++;
} // Of if
} // Of for i
double resultAccuracy = tempCorrect / tempCorrectnessArray.length;
return resultAccuracy;
}// Of computeTrainningAccuracy
/**
********************
* Compute the weighted error on the training set. It is at least 1e-6 to avoid
* NaN.
*
* @return The weighted error.
*********************
*/
public double computeWeightedError() {
double resultError = 0;
boolean[] tempCorrectnessArray = computeCorrectnessArray();
for (int i = 0; i < tempCorrectnessArray.length; i++) {
if (!tempCorrectnessArray[i]) {
resultError += weightedInstances.getWeight(i);
} // Of if
} // Of for i
if (resultError < 1e-6) {
resultError = 1e-6;
} // Of if
return resultError;
}// Of computeWeightedError
}// Of class SimpleClassifier
Here train() and classify() are abstract methods, implemented in the subclass StumpClassifier, so SimpleClassifier is itself an abstract class. It holds the functionality shared by all individual classifiers, such as computing the weighted error, the training accuracy, and the correctness array; the concrete classification procedure lives in the subclass's train and classify, as the sketch below shows.
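A minimal usage sketch (mine; it only strings together the constructors and methods defined in this post, and assumes the same iris.arff path used in the unit tests):
package adaboosting;
import java.io.FileReader;
public class SimpleClassifierDemo {
	public static void main(String[] args) throws Exception {
		// Sketch for illustration, not part of the original post.
		// Read the data and attach uniform weights.
		FileReader tempFileReader = new FileReader("F:/sampledataMain/iris.arff");
		WeightedInstances tempWeightedInstances = new WeightedInstances(tempFileReader);
		tempFileReader.close();
		// Program to the abstract type; only the subclass knows how to train.
		SimpleClassifier tempClassifier = new StumpClassifier(tempWeightedInstances);
		tempClassifier.train();
		System.out.println("Weighted error: " + tempClassifier.computeWeightedError());
		System.out.println("Training accuracy: " + tempClassifier.computeTrainningAccuracy());
	}
}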
(3)
package adaboosting;
import java.io.FileReader;
import java.util.Arrays;
import weka.core.Instance;
public class StumpClassifier extends SimpleClassifier {
/**
* The best cut for the current attribute on WeightedInstances.
*/
double bestCut;
/**
* The class label for attribute value less than bestCut.
*/
int leftLeafLabel;
/**
* The class label for attribute value no less than bestCut.
*/
int rightLeafLabel;
/**
*********************
* The only constructor.
*
* @param paraWeightedInstances The given instances.
*********************
*/
public StumpClassifier(WeightedInstances paraWeightedInstances) {
super(paraWeightedInstances);
}// Of the only constructor
/**
********************
* Train the classifier.
*********************
*/
public void train() {
// Step 1. Randomly choose an attribute.
selectedAttribute = random.nextInt(numConditions);
// Step 2. Find all attribute values and sort.
double[] tempValuesArray = new double[numInstances];
for (int i = 0; i < tempValuesArray.length; i++) {
tempValuesArray[i] = weightedInstances.instance(i).value(selectedAttribute);
} // Of for i
Arrays.sort(tempValuesArray);
// Step 3. Initialize: with the initial cut, classify all instances to the same (majority) label.
int tempNumLabels = numClasses;
double[] tempLabelCountArray = new double[tempNumLabels];
int tempCurrentLabel;
// Step 3.1 Scan all labels to obtain their weights.
for (int i = 0; i < numInstances; i++) {
// The label of the ith instance.
tempCurrentLabel = (int) weightedInstances.instance(i).classValue();
tempLabelCountArray[tempCurrentLabel] += weightedInstances.getWeight(i);
} // Of for i
// Step 3.2 Find the label with the maximal count.
double tempMaxCorrect = 0;
int tempBestLabel = -1;
for (int i = 0; i < tempLabelCountArray.length; i++) {
if (tempMaxCorrect < tempLabelCountArray[i]) {
tempMaxCorrect = tempLabelCountArray[i];
tempBestLabel = i;
} // Of if
} // Of for i
// Step 3.3 The cut is a little bit smaller than the minimal value.
bestCut = tempValuesArray[0] - 0.1;
leftLeafLabel = tempBestLabel;
rightLeafLabel = tempBestLabel;
// Step 4. Check candidate cuts one by one.
// Per-label weight counts for the left and right branches, to handle multi-class data.
double tempCut;
double[][] tempLabelCountMatrix = new double[2][tempNumLabels];
for (int i = 0; i < tempValuesArray.length - 1; i++) {
// Step 4.1 Some attribute values are identical, ignore them.
if (tempValuesArray[i] == tempValuesArray[i + 1]) {
continue;
} // Of if
tempCut = (tempValuesArray[i] + tempValuesArray[i + 1]) / 2;
// Step 4.2 Scan all labels to obtain their counts with the cut.
// Initialize again since it is used many times.
for (int j = 0; j < 2; j++) {
for (int k = 0; k < tempNumLabels; k++) {
tempLabelCountMatrix[j][k] = 0;
} // Of for k
} // Of for j
for (int j = 0; j < numInstances; j++) {
// The label of the jth instance
tempCurrentLabel = (int) weightedInstances.instance(j).classValue();
if (weightedInstances.instance(j).value(selectedAttribute) < tempCut) {
tempLabelCountMatrix[0][tempCurrentLabel] += weightedInstances.getWeight(j);
} else {
tempLabelCountMatrix[1][tempCurrentLabel] += weightedInstances.getWeight(j);
} // Of if
} // Of for j
// Step 4.3 Left leaf.
double tempLeftMaxCorrect = 0;
int tempLeftBestLabel = 0;
for (int j = 0; j < tempLabelCountMatrix[0].length; j++) {
if (tempLeftMaxCorrect < tempLabelCountMatrix[0][j]) {
tempLeftMaxCorrect = tempLabelCountMatrix[0][j];
tempLeftBestLabel = j;
} // Of if
} // Of for j
// Step 4.4 Right leaf.
double tempRightMaxCorrect = 0;
int tempRightBestLabel = 0;
for (int j = 0; j < tempLabelCountMatrix[1].length; j++) {
if (tempRightMaxCorrect < tempLabelCountMatrix[1][j]) {
tempRightMaxCorrect = tempLabelCountMatrix[1][j];
tempRightBestLabel = j;
} // Of if
} // Of for j
// Step 4.5 Compare with the current best.
if (tempMaxCorrect < tempLeftMaxCorrect + tempRightMaxCorrect) {
tempMaxCorrect = tempLeftMaxCorrect + tempRightMaxCorrect;
bestCut = tempCut;
leftLeafLabel = tempLeftBestLabel;
rightLeafLabel = tempRightBestLabel;
} // Of if
} // Of for i
System.out.println("Attribute = " + selectedAttribute + ", cut = " + bestCut + ", leftLeafLabel = "
+ leftLeafLabel + ", rightLeafLabel = " + rightLeafLabel);
}// Of train
/**
*********************
* Classify an instance.
*
* @param paraInstance The given instance.
* @return Predicted label.
*********************
*/
public int classify(Instance paraInstance) {
int resultLabel = -1;
if (paraInstance.value(selectedAttribute) < bestCut) {
resultLabel = leftLeafLabel;
} else {
resultLabel = rightLeafLabel;
} // Of if
return resultLabel;
}// Of classify
/**
******************
* For display.
******************
*/
public String toString() {
String resultString = "I am a stump classifier.\r\n" + "I choose attribute #" + selectedAttribute
+ " with cut value " + bestCut + ".\r\n" + "The left and right leaf labels are " + leftLeafLabel
+ " and " + rightLeafLabel + ", respectively.\r\n" + "My weighted error is: " + computeWeightedError()
+ ".\r\n" + "My weighted accuracy is : " + computeTrainningAccuracy() + ".";
return resultString;
}// Of toString
/**
******************
* For unit test.
*
* @param args Not provided.
******************
*/
public static void main(String args[]) {
WeightedInstances tempWeightedInstances = null;
String tempFilename = "F:/sampledataMain/iris.arff";
try {
FileReader tempFileReader = new FileReader(tempFilename);
tempWeightedInstances = new WeightedInstances(tempFileReader);
tempFileReader.close();
} catch (Exception ee) {
System.out.println("Cannot read the file: " + tempFilename + "\r\n" + ee);
System.exit(0);
} // Of try
StumpClassifier tempClassifier = new StumpClassifier(tempWeightedInstances);
tempClassifier.train();
System.out.println(tempClassifier);
System.out.println(Arrays.toString(tempClassifier.computeCorrectnessArray()));
}// Of main
}// Of class StumpClassifier
The training procedure here (for real-valued attributes) is:
1. Randomly pick one attribute for the current individual classifier to split on;
2. Collect that attribute's values over all samples and sort them;
3. Try a cut between each pair of adjacent distinct sorted values, splitting the samples into two groups (those with values below the cut and those with values at or above it); for each group, sum the sample weights per label and take the label with the largest weight as that group's label. The best cut is the one that maximizes the total weight of the samples that the two chosen labels classify correctly: the larger this value, the more attention the stump pays to the samples earlier learners misclassified, compensating for their deficiencies and improving the ensemble as a whole. A toy sketch of the cut generation follows.
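This sketch (mine, not part of the original classes) shows step 3's candidate cuts: midpoints between adjacent distinct values of the sorted attribute, exactly as train() generates them.
public class CutCandidateDemo {
	public static void main(String[] args) {
		// Sketch for illustration, not part of the original post.
		double[] tempValuesArray = { 4.9, 5.0, 5.0, 6.1 }; // already sorted
		for (int i = 0; i < tempValuesArray.length - 1; i++) {
			if (tempValuesArray[i] == tempValuesArray[i + 1]) {
				continue; // identical adjacent values yield no cut
			}
			double tempCut = (tempValuesArray[i] + tempValuesArray[i + 1]) / 2;
			System.out.println("Candidate cut: " + tempCut);
		}
		// Prints 4.95 and 5.55; train() scores each candidate by the total
		// weight of the samples that its two leaves would label correctly.
	}
}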
(4)
package adaboosting;
import java.io.FileReader;
import weka.core.Instance;
import weka.core.Instances;
public class Booster {
/**
* The classifiers.
*/
SimpleClassifier[] classifiers;
/**
* Number of classifiers.
*/
int numClassifiers;
/**
* Whether or not to stop once the training error reaches 0.
*/
boolean stopAfterConverge = false;
/**
* The weights of classifiers.
*/
double[] classifierWeights;
/**
* The training data.
*/
Instances trainingData;
/**
* The testing data.
*/
Instances testingData;
/**
*********************
* The first constructor.
*
* @param paraTrainingFilename The data filename.
*********************
*/
public Booster(String paraTrainingFilename) {
// Step 1. Read training set.
try {
FileReader tempFileReader = new FileReader(paraTrainingFilename);
trainingData = new Instances(tempFileReader);
tempFileReader.close();
} catch (Exception e) {
System.out.println("Cannot read the file: " + paraTrainingFilename + "\r\n" + e);
System.exit(0);
} // Of try
// Step 2. Set the last attribute as the class index.
trainingData.setClassIndex(trainingData.numAttributes() - 1);
// Step 3. The testing data is the same as the training data.
testingData = trainingData;
stopAfterConverge = true;
System.out.println("****************Data****************\r\n" + trainingData);
}// Of the first constructor.
/**
********************
* Set the number of base classifiers and allocate space for them.
*
* @param paraNumBaseclassifiers The number of base classifiers.
*********************
*/
public void setNumBaseClassifiers(int paraNumBaseclassifiers) {
numClassifiers = paraNumBaseclassifiers;
// Step 1. Allocate space (only reference) for classifiers.
classifiers = new SimpleClassifier[numClassifiers];
// Step 2. Initialize classifier weights.
classifierWeights = new double[numClassifiers];
}// Of setNumBaseClassifiers
/**
********************
* Train the booster.
*
* @see adaboosting.StumpClassifier#train()
*********************
*/
public void train() {
// Step 1. Initialize.
WeightedInstances tempWeightedInstances = null;
double tempError;
numClassifiers = 0;
// Step 2. Build the classifiers one by one.
for (int i = 0; i < classifiers.length; i++) {
// Step 2.1 Key code: Construct or adjust the weightedInstances.
if (i == 0) {
tempWeightedInstances = new WeightedInstances(trainingData);
} else {
tempWeightedInstances.adjustWeights(classifiers[i - 1].computeCorrectnessArray(),
classifierWeights[i - 1]);
} // Of if
// Step 2.2 Train the next classifier.
classifiers[i] = new StumpClassifier(tempWeightedInstances);
classifiers[i].train();
tempError = classifiers[i].computeWeightedError();
// Key code: Set the classifier weight.
classifierWeights[i] = 0.5 * Math.log(1 / tempError - 1);
if (classifierWeights[i] < 1e-6) {
classifierWeights[i] = 0;
} // Of if
System.out.println("Classifier #" + i + " , weighted error = " + tempError + ", weight = "
+ classifierWeights[i] + "\r\n");
numClassifiers++;
// Stop early if the training accuracy is high enough.
if (stopAfterConverge) {
double tempTrainingAccuracy = computeTrainingAccuray();
System.out.println("The accuracy of the booster is: " + tempTrainingAccuracy + "\r\n");
if (tempTrainingAccuracy > 0.999999) {
System.out.println("Stop at the round: " + i + " due to converge.\r\n");
break;
} // Of if
} // Of if
} // Of for i
}// Of train
/**
******************
* Classify an instance.
*
* @param paraInstance The given instance.
* @return The predicted label.
******************
*/
public int classify(Instance paraInstance) {
double[] tempLabelsCountArray = new double[trainingData.classAttribute().numValues()];
for (int i = 0; i < numClassifiers; i++) {
int tempLabel = classifiers[i].classify(paraInstance);
tempLabelsCountArray[tempLabel] += classifierWeights[i];
} // Of for i
int resultLabel = -1;
double tempMax = -1;
for (int i = 0; i < tempLabelsCountArray.length; i++) {
if (tempMax < tempLabelsCountArray[i]) {
tempMax = tempLabelsCountArray[i];
resultLabel = i;
} // Of if
} // Of for
return resultLabel;
}// Of classify
/**
******************
* Compute the training accuracy of the booster. It is not weighted.
*
* @return The training accuracy.
******************
*/
public double computeTrainingAccuray() {
double tempCorrect = 0;
for (int i = 0; i < trainingData.numInstances(); i++) {
if (classify(trainingData.instance(i)) == (int) trainingData.instance(i).classValue()) {
tempCorrect++;
} // Of if
} // Of for i
double tempAccuracy = tempCorrect / trainingData.numInstances();
return tempAccuracy;
}// Of computeTrainingAccuray
/**
******************
* Test the booster on the training data.
*
* @return The classification accuracy.
******************
*/
public double test() {
System.out.println("Testing on " + testingData.numInstances() + " instances.\r\n");
return test(testingData);
}// Of test
/**
******************
* Test the booster.
*
* @param paraInstances The testing set.
* @return The classification accuracy.
******************
*/
public double test(Instances paraInstances) {
double tempCorrect = 0;
paraInstances.setClassIndex(paraInstances.numAttributes() - 1);
for (int i = 0; i < paraInstances.numInstances(); i++) {
Instance tempInstance = paraInstances.instance(i);
if (classify(tempInstance) == (int) tempInstance.classValue()) {
tempCorrect++;
} // Of if
} // Of for i
double resultAccuracy = tempCorrect / paraInstances.numInstances();
System.out.println("The accuracy is: " + resultAccuracy);
return resultAccuracy;
} // Of test
/**
******************
* For integration test.
*
* @param args Not provided.
******************
*/
public static void main(String args[]) {
System.out.println("Starting AdaBoosting...");
Booster tempBooster = new Booster("F:/sampledataMain/iris.arff");
tempBooster.setNumBaseClassifiers(100);
tempBooster.train();
System.out.println("The training accuracy is: " + tempBooster.computeTrainingAccuray());
tempBooster.test();
}// Of main
}// Of class Booster
The Booster class above defines the ensemble. With 100 individual classifiers the test accuracy reaches about 0.98. The training accuracy as a function of the number of individual classifiers behaves as in the original plot (figure omitted here); the sketch below reproduces the experiment.
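A minimal sketch (mine) to regenerate that curve with the Booster API above; note that each constructor call re-reads and prints the dataset, and train() may stop early once the training accuracy converges:
package adaboosting;
public class BoosterSizeDemo {
	public static void main(String[] args) {
		// Sketch for illustration, not part of the original post.
		// Train ensembles of increasing size and record the training accuracy.
		for (int tempSize = 1; tempSize <= 101; tempSize += 20) {
			Booster tempBooster = new Booster("F:/sampledataMain/iris.arff");
			tempBooster.setNumBaseClassifiers(tempSize);
			tempBooster.train();
			System.out.println(tempSize + " base classifiers, training accuracy = "
					+ tempBooster.computeTrainingAccuray());
		}
	}
}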