2021-11-03 Cluster Analysis with easystats

https://easystats.github.io/parameters/articles/clustering.html

Install packages

install.packages(c("NbClust", "mclust", "pvclust", "cluster", "fpc", "dbscan"))

Introduction

There are many clustering algorithms, but they broadly fall into two families: supervised and unsupervised clustering. The former requires you to specify in advance the number of clusters to extract, while the latter estimates that number automatically as part of the algorithm. Neither family is inherently better than the other; each has its own strengths and weaknesses.

Below we use the iris dataset as an example and compare several clustering algorithms. We know that the irises actually belong to 3 species. First, let's run a PCA to project the data into two dimensions and visualize it.

library(ggplot2)
library(parameters)
library(see)

set.seed(33)  # Set random seed

# Select the first 4 numeric columns (drop the Species factor)
data <- iris[1:4]  
head(data)  # Print the first 6 rows

# Run PCA
pca <- principal_components(data, n = 2)
pca_scores <- predict(pca, names = c("PCA_1", "PCA_2"))
pca_scores$True_Clusters <- iris$Species  # Add real clusters

# Visualize
ggplot(pca_scores, aes(x = PCA_1, y = PCA_2, color = True_Clusters)) + 
  geom_point() +
  theme_modern()

As the plot shows, the Setosa group separates clearly from the other two species in the PCA space, while the remaining two species overlap and cannot be fully separated. Next, let's see how data-driven clustering performs on these three groups.
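Not shown in the original post: you can check how much variance those two components actually retain. A minimal sketch using base R's prcomp() on standardized data, which should roughly parallel principal_components():

```r
# Cumulative proportion of variance explained by the first two components
pc <- prcomp(iris[1:4], scale. = TRUE)  # standardize, then PCA
round(summary(pc)$importance["Cumulative Proportion", 1:2], 3)
```

On the standardized iris measurements the first two components retain roughly 96% of the variance, which is why the 2-D plot above is a faithful summary of the data.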

Supervised Clustering Algorithms

How many clusters should we extract?
This important question is hard to answer. Ideally you have a strong prior expectation or hypothesis that tells you roughly how many groups to expect. Without one, researchers can use data-driven methods to estimate the optimal number of clusters. Many mathematical indices have been proposed for this purpose; unfortunately, they often disagree with one another...
Since there is clearly no single best method, we can use easystats to run many of them at once and return the number of clusters that most methods agree on.

n <- n_clusters(data, package = c("easystats", "NbClust", "mclust"))
n
plot(n)

As we can see, most methods suggest 2 clusters, with 3 clusters as the runner-up. The data themselves apparently do not separate the three iris species very clearly. This gap between what algorithms find in the data and what we know from the real world is a fundamental issue in data science.
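As an illustration of one such data-driven index (not part of the original post), here is a base-R sketch of the classic "elbow" heuristic: the total within-cluster sum of squares for k = 1..6, which typically drops sharply up to a reasonable k and flattens afterwards.

```r
# Elbow heuristic: total within-cluster sum of squares for k = 1..6
d <- scale(iris[1:4])  # standardize, as cluster_analysis() does by default
set.seed(33)
wss <- sapply(1:6, function(k) kmeans(d, centers = k, nstart = 25)$tot.withinss)
round(wss, 1)
```

For k = 1 this equals the total sum of squares (596), and for k = 3 it should give about 138.9, matching the Sum_Squares_Within reported by cluster_analysis() in the k-means section.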

K-Means Clustering

rez_kmeans <- cluster_analysis(data, n = 3, method = "kmeans")

rez_kmeans  # Show results
#> # Clustering Solution
#> 
#> The 3 clusters accounted for 76.70% of the total variance of the original data.
#> 
#> Cluster | n_Obs | Sum_Squares | Sepal.Length | Sepal.Width | Petal.Length | Petal.Width
#> ---------------------------------------------------------------------------------------
#> 1       |    53 |       44.09 |        -0.05 |       -0.88 |         0.35 |        0.28
#> 2       |    47 |       47.45 |         1.13 |        0.09 |         0.99 |        1.01
#> 3       |    50 |       47.35 |        -1.01 |        0.85 |        -1.30 |       -1.25
#> 
#> # Indices of model performance
#> 
#> Sum_Squares_Total | Sum_Squares_Between | Sum_Squares_Within |    R2
#> --------------------------------------------------------------------
#> 596.000           |             457.112 |            138.888 | 0.767
#> 
#> # You can access the predicted clusters via 'predict()'.
plot(summary(rez_kmeans))  # Visualize cluster centers

Use [predict()](https://rdrr.io/r/stats/predict.html) to retrieve the cluster assigned to each flower.

predict(rez_kmeans)  # Get clusters
#>   [1] 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
#>  [38] 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 2 1 1 1 2 1 1 1 1 1 1 1 1 2 1 1 1 1 2 1 1 1
#>  [75] 1 2 2 2 1 1 1 1 1 1 1 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 2 2 2 2 1 2 2 2 2
#> [112] 2 2 1 1 2 2 2 2 1 2 1 2 1 2 2 1 2 2 2 2 2 2 1 1 2 2 2 1 2 2 2 1 2 2 2 1 2
#> [149] 2 1
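To see how well the 3-cluster solution recovers the true species, it helps to cross-tabulate the predicted clusters against Species (e.g. table(predict(rez_kmeans), iris$Species)). A self-contained sketch using base kmeans() on standardized data, which parallels cluster_analysis(..., method = "kmeans"); note that cluster labels (1/2/3) are arbitrary:

```r
set.seed(33)
km <- kmeans(scale(iris[1:4]), centers = 3, nstart = 25)
# Rows are clusters, columns the true species; read the table by rows
table(Cluster = km$cluster, Species = iris$Species)
```

One cluster should match setosa perfectly, while versicolor and virginica remain partly mixed, consistent with the PCA plot earlier.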

Hierarchical Clustering

rez_hclust <- cluster_analysis(data, n = 3, method = "hclust")

rez_hclust  # Show results

# Visualize
plot(rez_hclust) + theme_modern()  # Visualize 

Hierarchical K-Means

rez_hkmeans <- cluster_analysis(data, n = 3, method = "hkmeans")

rez_hkmeans  # Show results

# Visualize
plot(rez_hkmeans) + theme_modern()  # Visualize 

K-Medoids (PAM)

rez_pam <- cluster_analysis(data, n = 3, method = "pam")

rez_pam  # Show results

# Visualize
plot(rez_pam) + theme_modern()  # Visualize 

Unsupervised Clustering Algorithms

Bootstrapped Hierarchical Clustering

rez_hclust2 <- cluster_analysis(data, 
                        n = NULL, 
                        method = "hclust", 
                        iterations = 500,
                        ci = 0.90)

rez_hclust2  # Show results
plot(rez_hclust2) + theme_modern()  # Visualize

DBSCAN

eps <- n_clusters_dbscan(data, min_size = 0.1) 
eps
plot(eps)
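The eps estimate above is closely related to the k-nearest-neighbour distance heuristic: sort each point's distance to its k-th nearest neighbour and look for the "knee" where distances start rising steeply. A sketch using the dbscan package installed earlier (k = 5 is an illustrative choice, not necessarily the value n_clusters_dbscan() uses):

```r
library(dbscan)
# k-th nearest-neighbour distance of every (standardized) observation, sorted
knn_dist <- sort(kNNdist(scale(iris[1:4]), k = 5))
round(tail(knn_dist), 2)  # the steep tail past the "knee"
```

Points whose kNN distance lies past the knee are candidate noise points under DBSCAN.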
rez_dbscan <- cluster_analysis(data, method = "dbscan", dbscan_eps = 1.45)

rez_dbscan  # Show results
plot(rez_dbscan) + theme_modern()  # Visualize

Hierarchical DBSCAN (HDBSCAN)

rez_hdbscan <- cluster_analysis(data, method = "hdbscan")

rez_hdbscan  # Show results

# Visualize
plot(rez_hdbscan) + theme_modern()  # Visualize 

K-Medoids with estimation of number of clusters (pamk)

rez_pamk <- cluster_analysis(data, method = "pamk")

rez_pamk  # Show results

# Visualize
plot(rez_pamk) + theme_modern()  # Visualize 

Mixture

library(mclust)

rez_mixture <- cluster_analysis(data, method = "mixture")

rez_mixture  # Show results

# Visualize
plot(rez_mixture) + theme_modern()  # Visualize 
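method = "mixture" fits a Gaussian mixture model via mclust (hence the library() call above). A direct-equivalent sketch using mclust itself; here G = 3 components are fixed for illustration, whereas leaving n unspecified should let mclust choose G by BIC:

```r
library(mclust)
mod <- Mclust(iris[1:4], G = 3)  # 3-component Gaussian mixture
table(Mixture = mod$classification, Species = iris$Species)
```

On iris, the 3-component mixture is known to recover the species well, with only a handful of versicolor/virginica confusions.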

Metaclustering

list_of_results <- list(rez_kmeans, rez_hclust, rez_hkmeans, rez_pam,
                        rez_hclust2, rez_dbscan, rez_hdbscan, rez_mixture)

probability_matrix <- cluster_meta(list_of_results)

# Plot the matrix as a reordered heatmap
heatmap(probability_matrix, scale = "none", 
        col = grDevices::hcl.colors(256, palette = "inferno"))
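The probability matrix can itself be clustered to produce a final consensus partition. One possible way, sketched here under my own assumptions rather than taken from the original post: treat 1 minus the co-clustering probability as a distance and cut a hierarchical tree. A simulated 6 x 6 matrix keeps the sketch self-contained; with real results you would pass 1 - probability_matrix instead.

```r
set.seed(33)
p <- matrix(runif(36), 6, 6)
p <- (p + t(p)) / 2   # symmetric, like a co-clustering probability matrix
diag(p) <- 1          # every observation always co-clusters with itself
consensus <- hclust(as.dist(1 - p), method = "average")
cutree(consensus, k = 2)  # final consensus clusters
```

Observations that were grouped together by most of the algorithms end up close in this tree and land in the same consensus cluster.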
