EM_Init runs kMeans several times, keeps the best clustering result, and assigns:
m_num_clusters = bestK.numberOfClusters();
m_weights = new double[inst.numInstances()][m_num_clusters];
m_model = new DiscreteEstimator[m_num_clusters][m_num_attribs];
m_modelNormal = new double[m_num_clusters][m_num_attribs][3];
m_priors = new double[m_num_clusters];
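How the best run is selected is roughly the following. A minimal sketch, assuming Weka's SimpleKMeans API (setSeed, setNumClusters, buildClusterer, getSquaredError) and a Random field m_rr; the number of restarts and the seed handling are assumptions, not a verbatim copy of EM_Init:

SimpleKMeans bestK = null;
double bestSqE = Double.MAX_VALUE;
for (int i = 0; i < 10; i++) {           // several random restarts (count assumed)
  SimpleKMeans sk = new SimpleKMeans();
  sk.setSeed(m_rr.nextInt());            // m_rr: the clusterer's Random
  sk.setNumClusters(m_num_clusters);
  sk.buildClusterer(inst);
  if (sk.getSquaredError() < bestSqE) {  // keep the restart with the
    bestSqE = sk.getSquaredError();      // smallest squared error
    bestK = sk;
  }
}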
EM_Init then takes the m_num_clusters centroids of the best kMeans run and uses its cluster assignments to initialize:
1) for nominal attributes, m_model[numCluster][numAttrib] (one DiscreteEstimator per cluster/attribute pair);
2) for numeric attributes, m_modelNormal[numCluster][numAttrib][3], where in the third dimension [0] is the mean, [1] the standard deviation, and [2] the accumulated instance weight for that cluster/attribute pair (instance weights default to 1);
3) m_priors[numCluster] holds each cluster's prior probability as estimated from the best kMeans assignment;
4) m_weights[numInstance][numCluster] is not assigned here and stays at its initial value 0; it will hold each instance's membership weight for each cluster. A sketch of this initialization follows.
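A minimal sketch of the initialization, assuming a hypothetical assignments[] array holding each instance's cluster in the best kMeans run, and assuming the DiscreteEstimators in m_model have already been constructed:

// assignments[l] (hypothetical) = cluster of instance l in the best run
int[] counts = new int[m_num_clusters];
for (int l = 0; l < inst.numInstances(); l++) {
  Instance in = inst.instance(l);
  int c = assignments[l];
  counts[c]++;
  for (int a = 0; a < m_num_attribs; a++) {
    if (inst.attribute(a).isNominal()) {
      // nominal: count this attribute value toward cluster c
      m_model[c][a].addValue(in.value(a), in.weight());
    } else {
      // numeric: accumulate weighted sums; the mean and std dev in
      // m_modelNormal[c][a][0] and [1] are derived from these afterwards
      m_modelNormal[c][a][0] += in.weight() * in.value(a);
      m_modelNormal[c][a][2] += in.weight();
    }
  }
}
for (int c = 0; c < m_num_clusters; c++) {
  m_priors[c] = (double) counts[c] / inst.numInstances(); // cluster prior
}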
EM then iterates.
E step:
double loglk = 0.0, sOW = 0.0;
for (int l = 0; l < inst.numInstances(); l++) {
  Instance in = inst.instance(l);
  // in.weight() (the instance weight) is always 1 here, so sOW ends up
  // being just the number of instances.
  // logDensityForInstance computes the log probability density of in.
  loglk += in.weight() * logDensityForInstance(in);
  sOW += in.weight();
  // then update this instance's membership weight for every cluster:
  // double m_weights[numInstance][numCluster]
  m_weights[l] = distributionForInstance(in);
}
// reestimate priors: adjust m_priors
estimate_priors(inst);
return loglk / sOW;
The E step in detail. The call chain is:
logDensityForInstance -> logJointDensitiesForInstance -> logDensityPerClusterForInstance
logDensityPerClusterForInstance computes the log density of an instance under each cluster's model; it uses the m_model / m_modelNormal variables. A sketch follows.
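A minimal sketch of that computation, assuming a Gaussian log density for numeric attributes and DiscreteEstimator probabilities for nominal ones (the real method also handles missing values and normalization more carefully):

// log density of instance `in` under each cluster's model
double[] logDensity = new double[m_num_clusters];
for (int c = 0; c < m_num_clusters; c++) {
  double sum = 0.0;
  for (int a = 0; a < m_num_attribs; a++) {
    if (in.isMissing(a)) continue;  // skip missing values
    if (inst.attribute(a).isNominal()) {
      // log probability of this nominal value under cluster c
      sum += Math.log(m_model[c][a].getProbability(in.value(a)));
    } else {
      // Gaussian log density with mean [0] and std dev [1]
      double mean = m_modelNormal[c][a][0];
      double sd = m_modelNormal[c][a][1];
      double diff = in.value(a) - mean;
      sum += -0.5 * Math.log(2 * Math.PI) - Math.log(sd)
             - (diff * diff) / (2 * sd * sd);
    }
  }
  logDensity[c] = sum;
}
return logDensity;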
logJointDensitiesForInstance computes the log joint density of the instance with each cluster (per-cluster log density plus log prior):
// weights has one entry per cluster
double[] weights = logDensityPerClusterForInstance(inst);
// clusterPriors() is just a getter for m_priors
double[] priors = clusterPriors();
for (int i = 0; i < weights.length; i++) {
  if (priors[i] > 0) {
    weights[i] += Math.log(priors[i]);
  }
}
return weights;
logDensityForInstance computes the log of the given instance's total density, i.e. the log of the sum of its joint densities over all clusters:
double[] a = logJointDensitiesForInstance(instance);
double max = a[Utils.maxIndex(a)];
double sum = 0.0;
for (int i = 0; i < a.length; i++) {
  sum += Math.exp(a[i] - max);
}
return max + Math.log(sum);
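This is the standard log-sum-exp trick: since max + log(sum_i exp(a_i - max)) = log(sum_i exp(a_i)), subtracting the maximum before exponentiating avoids numerical underflow when the log joint densities a_i are large negative numbers.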
m_weights is recomputed from logJointDensitiesForInstance: distributionForInstance normalizes the log joint densities into posterior cluster probabilities, as sketched below.
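A minimal sketch of that recomputation, assuming Weka's Utils.logs2probs helper (it exponentiates the log values relative to their maximum and normalizes them to sum to 1):

// posterior P(cluster | instance), one entry per cluster
double[] distributionForInstance(Instance inst) throws Exception {
  return Utils.logs2probs(logJointDensitiesForInstance(inst));
}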
estimate_priors: m_priors[ci] is set to the sum of m_weights[l][ci] over all instances l, and m_priors is then normalized; a sketch follows.
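A minimal sketch of estimate_priors, assuming Weka's Utils.normalize helper (divides each entry by the array's sum):

// each cluster's prior = normalized total membership weight
for (int c = 0; c < m_num_clusters; c++) {
  m_priors[c] = 0.0;
}
for (int l = 0; l < inst.numInstances(); l++) {
  for (int c = 0; c < m_num_clusters; c++) {
    m_priors[c] += inst.instance(l).weight() * m_weights[l][c];
  }
}
Utils.normalize(m_priors);  // scale so the priors sum to 1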
M step:
Recompute m_model and m_modelNormal from m_weights; a sketch of the numeric part follows.
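A minimal sketch of the numeric part of the M step, using the standard weighted-mean / weighted-variance formulas of a Gaussian mixture (the real method also rebuilds the DiscreteEstimators for nominal attributes; the m_minStdDev floor is from Weka):

// re-estimate mean and std dev per cluster/attribute from m_weights
for (int c = 0; c < m_num_clusters; c++) {
  for (int a = 0; a < m_num_attribs; a++) {
    double sumW = 0.0, sumWX = 0.0, sumWXX = 0.0;
    for (int l = 0; l < inst.numInstances(); l++) {
      double w = m_weights[l][c];
      double x = inst.instance(l).value(a);
      sumW += w;
      sumWX += w * x;
      sumWXX += w * x * x;
    }
    double mean = sumWX / sumW;                 // assumes sumW > 0
    double var = Math.max(sumWXX / sumW - mean * mean, 0.0);
    m_modelNormal[c][a][0] = mean;
    m_modelNormal[c][a][1] = Math.max(Math.sqrt(var), m_minStdDev);
    m_modelNormal[c][a][2] = sumW;              // summed weight
  }
}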
Iteration stops when the difference between two consecutive E-step return values drops below a small convergence tolerance (m_minStdDev, by contrast, is the floor applied to the estimated standard deviations). The E step's return value can never be smaller than the previous one; this monotone improvement is guaranteed by the EM algorithm. A sketch of the overall loop follows.
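Putting it together, a hedged sketch of the overall iteration; E and M stand for the E-step and M-step routines described above (names illustrative), and the exact tolerance and m_max_iterations handling are assumptions in the spirit of Weka's EM:

// alternate E and M steps until the log-likelihood stops improving
double llkold = 0.0, llk = 0.0;
for (int i = 0; i < m_max_iterations; i++) {
  llk = E(inst);                        // avg log-likelihood of the data
  if (i > 0 && (llk - llkold) < 1e-6) { // converged: improvement is tiny
    break;
  }
  llkold = llk;
  M(inst);                              // re-estimate models from m_weights
}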