K-means is one of the most commonly used clustering algorithms based on Euclidean distance; it assumes that the smaller the distance between two objects, the greater their similarity.
Procedure:
1. Load the data, determine its dimensionality and value range, and choose the number of clusters K;
2. Initialize the K cluster centers, usually at random (because the initial centers are random, the algorithm occasionally fails to converge);
3. Compute the (Euclidean) distance from every point to each of the K centers and assign each point to the nearest center;
4. Recompute each of the K centers as the mean of the points assigned to it;
5. Repeat steps 3-4 until convergence (a. the maximum number of iterations is reached, or b. the centers / total distance no longer change appreciably).
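The steps above can be sketched in a few lines of Python (a minimal illustration, independent of the MATLAB script below; `data` is any n×d NumPy array, and the random initialization inside the data range mirrors step 2):

```python
import numpy as np

def kmeans_sketch(data, K, iter_max=100, tol=1e-4, seed=0):
    """Minimal K-means: random init in the data range, nearest-center
    assignment, mean update, stop when the centers barely move."""
    rng = np.random.default_rng(seed)
    lo, hi = data.min(axis=0), data.max(axis=0)
    centers = rng.uniform(lo, hi, size=(K, data.shape[1]))       # step 2
    for _ in range(iter_max):                                    # step 5a
        # step 3: Euclidean distance of every point to every center
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step 4: each center becomes the mean of its cluster
        # (an empty cluster keeps its old center -- the random-init
        #  pathology mentioned in step 2)
        new_centers = np.array([data[labels == k].mean(axis=0)
                                if np.any(labels == k) else centers[k]
                                for k in range(K)])
        if np.linalg.norm(new_centers - centers) <= tol:         # step 5b
            centers = new_centers
            break
        centers = new_centers
    # final assignment against the converged centers
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return centers, d.argmin(axis=1)
```

This is only a sketch of the five steps; the MATLAB implementation used for the experiment follows below.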
This experiment uses a 2-dimensional, 3-class dataset; the dataset is R3.
clc;clear;close;
tic;
% data
dataset = load("R3.txt");
dataset = dataset(:,1:2);
%% K-means
K = 3;            % number of clusters
iter_max = 1000;  % maximum number of iterations
tol = 0.0001;     % convergence tolerance on center movement
[point,total_dist] = myk_means(dataset,K,iter_max,tol); % K-means clustering is complete at this point
%% plot original dataset
color = {'r','g','b'};
% plot original points
subplot(1,2,1);
title("original points");
hold on;
scatter(dataset(1:40,1),dataset(1:40,2),'.',color{1});
scatter(dataset(41:80,1),dataset(41:80,2),'.',color{2});
scatter(dataset(81:120,1),dataset(81:120,2),'.',color{3});
legend("cluster1","cluster2","cluster3");
hold off;
%% plot clusters after K-means
subplot(1,2,2);
title("after K-means");
hold on;
for i = 1:K
    point(i).point = dataset(point(i).cluster,:); % gather this cluster's points
    scatter(point(i).point(:,1),point(i).point(:,2),".",color{i});
    scatter(point(i).mean(1),point(i).mean(2),"^",color{i});
end
toc;
final_dist = 0;
for i =1:K
for j = 1:length(point(i).cluster)
final_dist = final_dist + norm(dataset(point(i).cluster(j),:) - point(i).mean);
end
end
disp(['Optimal distance: ',num2str(final_dist)]);
figure(2)
plot(total_dist,"-b");
title(['k-means optimal distance: ',num2str(total_dist(end))]);
xlabel('iter');
ylabel("total distance");
% dataset  : input dataset
% K        : number of clusters
% iter_max : maximum number of iterations
% tol      : convergence tolerance
% point    : returned struct array containing the K cluster centers and the indices of the original data points in each cluster.
function [point,total_dist] = myk_means(dataset,K,iter_max,tol)
total_dist = [];
%% initialization: randomly generate K centers within the data range
[m,dim] = size(dataset);
max1 = max(dataset);
min1 = min(dataset);
for i = 1:K
point(i).mean = rand(1,dim).*(max1-min1) + min1;
end
iter = 1;
flag = 1;
while iter < iter_max && flag
%% assign every point to a particular cluster using min-distance-rule
count = zeros(K,1);
for i = 1:m
temp_distance = inf;
for j = 1:K
distance = norm(dataset(i,:)-point(j).mean);
if distance <temp_distance
temp_distance = distance;
index = j; % record the cluster this point belongs to
end
end
count(index) = count(index)+1;
point(index).cluster(count(index)) = i;
end
%% truncate each cluster's index list to the current count
for i = 1:K
point(i).cluster = point(i).cluster(1:count(i));
temp_mean(i,:) = point(i).mean; % record the last_mean_point
end
%% compute new_mean_point
for i = 1:K
    acc = zeros(1,dim); % named acc to avoid shadowing MATLAB's built-in sum
    for j = 1:length(point(i).cluster)
        for n = 1:dim
            acc(1,n) = acc(1,n) + dataset(point(i).cluster(j),n); % 1*dim sum of the attribute values in this cluster
        end
    end
    point(i).mean = acc./length(point(i).cluster);
end
%% compute distance between last_mean_point and new_mean_point
delta = 0;
for i =1:K
delta = delta + norm(temp_mean(i,:)-point(i).mean);
end
if delta <= tol
flag = 0;
end
iter = iter +1;
%% compute dist
tp_dist = 0;
for i =1:K
for j = 1:length(point(i).cluster)
tp_dist = tp_dist + norm(dataset(point(i).cluster(j),:) - point(i).mean);
end
end
total_dist = [total_dist;tp_dist];
end
end
Cluster centers (rounded to two decimals): [11.24,11.62]; [9.99,10.05]; [12.06,10.03]
Summary of classical K-means: it is very fast and usually converges within a few iterations, but it occasionally fails to converge because it is sensitive to the initial cluster centers.
The same dataset R3 as in classical K-means is used.
Recast as an optimization problem, K-means becomes: find the K cluster centers that minimize the sum of distances from every point to the center of its assigned cluster.
This part consists of two steps:
1. Assignment: for each candidate set of cluster centers, compute the Euclidean distance of every data point to each center and assign each point to the nearest one;
2. Total distance (fitness): sum the Euclidean distances from all points to their assigned centers and return total_dist, which is the fitness of that set of centers.
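These two steps can be sketched language-agnostically as follows (a Python illustration of the same logic; `one_pop` is a flat vector of length K_C*dim holding the K_C concatenated centers, the same encoding the MATLAB code uses):

```python
import numpy as np

def assignment_fitness(one_pop, dataset, K_C):
    """Decode a flat candidate into K_C centers, assign every point to its
    nearest center, and return the summed point-to-center distance (fitness)."""
    dataset = np.asarray(dataset, dtype=float)
    dim = dataset.shape[1]
    centers = np.asarray(one_pop, dtype=float).reshape(K_C, dim)
    # Euclidean distance from every point to every center
    d = np.linalg.norm(dataset[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)                        # min-distance rule
    total_dist = d[np.arange(len(dataset)), labels].sum()
    return total_dist, labels
```

For example, with two tight pairs of points and one center per pair, each point lies 0.5 from its center, so the fitness is 2.0.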
function [total_dist,res] = assignment(one_pop,dataset,K_C)
[data_num,dim] = size(dataset);
for k = 1:K_C
res(k).mean(1,1:dim) = one_pop(1,dim*(k-1)+1:dim*k);
end
% min-distance rule
total_dist = 0;
count = zeros(K_C,1);
for i = 1:data_num
temp_dist = inf;
dist_record = zeros(K_C,1);
for k =1:K_C
dist_record(k) = norm(dataset(i,1:dim)- res(k).mean(1,1:dim));
if(temp_dist > dist_record(k))
count_flag = k; %% record the nearest cluster
temp_dist = dist_record(k);
end
end
count(count_flag,1) = count(count_flag,1) + 1;
total_dist = total_dist + dist_record(count_flag);
res(count_flag).index(count(count_flag,1)) = i; %% record this point's index under its cluster
end
end
function fitness = Fitness(x_pop,dataset,K_C)
[pop_num,~] = size(x_pop);
fitness = zeros(pop_num,1);
% evaluate each candidate set of cluster centers
for i = 1:pop_num
[fitness(i),~] = assignment(x_pop(i,:),dataset,K_C);
end
end
Particle swarm optimization (PSO) procedure:
1. Initialize the population and velocities, and set the parameters c1, c2 and the inertia weight w;
2. Compute the fitness of the initial population and initialize a. each particle's best position, b. each particle's best fitness, c. the swarm's best position, d. the swarm's best fitness;
3. Update each particle's velocity (handling velocity bounds) and position (handling position bounds);
4. Check and update the personal best positions and fitness and the global best position and fitness;
5. Repeat steps 3-4 until convergence or until the maximum number of iterations is reached.
Notes:
1. The particle length is determined by the number of cluster centers K_C and the data dimensionality dim: K_C*dim;
2. The global best may stay unchanged between two consecutive generations, so the stopping rule is changed to: the sum of the best-fitness changes over 3 consecutive generations must fall below the tolerance, i.e. the best value must stay essentially unchanged for 3 generations before iteration stops.
3. The velocity range may need to be tuned per problem, e.g. V_max(j) = 0.05*x_max(j).
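The update in step 3, with the velocity and position clamping described above, can be sketched as follows (a generic Python sketch, not the MATLAB script below; note it draws a separate random factor per dimension, whereas the script below uses one scalar `rand` per particle):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w, c1, c2, x_min, x_max, v_min, v_max, rng):
    """One PSO iteration for a population x (pop_num x n): update the
    velocities, clamp them, move the particles, then clamp the positions."""
    r1 = rng.random(x.shape)   # per-dimension random factors in [0, 1)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, v_min, v_max)        # velocity bound handling
    x = np.clip(x + v, x_min, x_max)    # position bound handling
    return x, v
```

Steps 4-5 then compare the new fitness values against the personal and global bests, exactly as in the script that follows.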
clc;clear;close;
% Particle Swarm Optimization applied to the K-means problem
% objective: minimize the total point-to-center distance
tic;
dataset = load("R3.txt");
dataset = dataset(:,1:2);
[~,dim] = size(dataset);
K_C = 3; % number of clusters
tol = 1e-6; % convergence tolerance
% PSO parameters
pop_num = 100;
iter_max = 100;
c1 = 1.5;
c2 = 1.5;
w = 0.9; % inertia weight
pos_max = max(dataset);
pos_min = min(dataset);
V_max = [0.5,0.5];
V_min = [-0.5,-0.5];
% for other dimensionalities, scale with the data range instead:
% V_max = 0.05*(pos_max-pos_min);
% V_min = -0.05*(pos_max-pos_min);
% pack the K_C cluster centers into one flat decision vector
for k = 1:K_C
x_max(1,(k-1)*dim+1:k*dim) = pos_max;
x_min(1,(k-1)*dim+1:k*dim) = pos_min;
v_max(1,(k-1)*dim+1:k*dim) = V_max;
v_min(1,(k-1)*dim+1:k*dim) = V_min;
end
x_pop = zeros(pop_num,K_C*dim); % initialize particle positions
v_pop = zeros(pop_num,K_C*dim); % initialize particle velocities
for i = 1:pop_num
x_pop(i,1:K_C*dim) = rand(1,K_C*dim).*(x_max-x_min)+ x_min.*ones(1,K_C*dim);
v_pop(i,1:K_C*dim) = rand(1,K_C*dim).*(v_max-v_min)+ v_min.*ones(1,K_C*dim);
end
fitness = Fitness(x_pop,dataset,K_C);
[~,index] = sort(fitness);
gbest = x_pop(index(1),:); % global best position
gbest_fitness = fitness(index(1)); % global best fitness
pbest = x_pop; % personal best positions
pbest_fitness = fitness; % personal best fitness
fit_iter = zeros(iter_max,1);
fit_iter(1,1) = gbest_fitness;
iter = 2;
flag = 1;        % set to 0 once the tolerance is met
cooo = iter_max; % last recorded iteration, in case the tolerance is never met
while(iter <= iter_max && flag == 1)
% update swarm velocities and positions
for i = 1:pop_num
% update the particle's velocity
v_pop(i,:) = w*v_pop(i,:) + c1*rand*(pbest(i,:)-x_pop(i,:)) + c2*rand*(gbest-x_pop(i,:)); % rand is uniform in [0,1]
for j = 1:K_C*dim % velocity bound handling
if(v_pop(i,j)> v_max(1,j))
v_pop(i,j) = v_max(1,j);
end
if(v_pop(i,j) < v_min(1,j))
v_pop(i,j) = v_min(1,j);
end
end
% update the particle's position
x_pop(i,:) = x_pop(i,:) + 0.5 * v_pop(i,:);
for j = 1:K_C*dim % position bound handling: clamp out-of-range coordinates
if(x_pop(i,j) > x_max(1,j))
x_pop(i,j) = x_max(1,j); % clamp to the upper bound
end
if(x_pop(i,j) < x_min(1,j))
x_pop(i,j) = x_min(1,j); % clamp to the lower bound
end
end % end of particle loop
% recompute the fitness
fitness = Fitness(x_pop,dataset,K_C);
for i = 1:pop_num
% update the personal best position and fitness
if (fitness(i) < pbest_fitness(i))
pbest(i,:) = x_pop(i,:);
pbest_fitness(i) = fitness(i);
% a new personal best may also improve the global best
if (pbest_fitness(i) < gbest_fitness )
gbest = pbest(i,:);
gbest_fitness = pbest_fitness(i);
end
end
end % end of particle loop
fit_iter(iter,1) = gbest_fitness;
diff_sum = 0; % named diff_sum to avoid shadowing MATLAB's built-in sum
% sum of the best-fitness changes over the last 3 generations
if( iter > 3 )
for co = 1:3
diff_sum = diff_sum + abs(fit_iter(iter+1-co,1) - fit_iter(iter-co,1));
end
if(diff_sum < tol)
flag = 0;
cooo = iter; % record the iteration at which we stopped
end
end
iter = iter + 1;
end % end of the main iteration loop
[~,res] = assignment(gbest,dataset,K_C);
toc;
figure(1);
hold on;
plot(fit_iter(1:cooo));
xlabel("iteration");
ylabel('total distance');
title(['PSO k-means optimal distance: ',num2str(fit_iter(cooo))]);
hold off;
Cluster centers (two decimals; the order of the centers is arbitrary): [11.19,11.65], [10.01,10.01], [12.09,10.00]
Summary of PSO-based K-means:
1. Advantages: no initial centers need to be chosen, convergence is fairly stable, and the result tends toward the global optimum.
2. Drawback: the computational cost is considerably higher than the classical method.