PCA (principal component analysis) uses the idea of dimensionality reduction to condense many indicators into a few composite ones.

PCA simplifies a data set through a linear transformation: the data are mapped into a new coordinate system such that the direction of greatest variance lies along the first coordinate (called the first principal component), the second-greatest variance along the second coordinate (the second principal component), and so on. PCA is commonly used to reduce the dimensionality of a data set while retaining the features that contribute most to its variance. This is done by keeping the low-order principal components and discarding the high-order ones; the low-order components usually preserve the most important aspects of the data.

In other words, PCA projects the original samples into a new space. The original sample space is mapped through a coordinate-transformation matrix whose axes are the eigenvectors corresponding to the largest eigenvalues of the covariance matrix between the data dimensions. Eigenvectors belonging to small eigenvalues are dropped as non-principal components, so the retained principal components represent the original data at reduced complexity.
Organize $m$ samples of $n$-dimensional data into a matrix $X\in\mathbb{R}^{n\times m}$, one sample per column:
$$
X=\left(\begin{matrix}x_{11}&x_{12}&\cdots&x_{1m}\\x_{21}&x_{22}&\cdots&x_{2m}\\\vdots&\vdots&\ddots&\vdots\\x_{n1}&x_{n2}&\cdots&x_{nm}\end{matrix}\right)
$$
Zero-center the sample matrix $X$ by subtracting the mean sample from every column, giving the matrix $X'$:
$$
\boldsymbol{x}_{i} \leftarrow \boldsymbol{x}_{i}-\frac{1}{m} \sum_{j=1}^{m} \boldsymbol{x}_{j}
$$
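The centering step can be sketched in NumPy (Python is used here as an executable stand-in for the MATLAB experiment later in the document; the toy matrix is invented for illustration). Samples are stored as columns, matching the formula above:

```python
import numpy as np

# Toy data: n = 2 dimensions, m = 3 samples stored as columns (illustrative values)
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 6.0, 8.0]])

# Subtract the mean sample (the mean over all columns) from every column
mean_sample = X.mean(axis=1, keepdims=True)   # shape (n, 1)
X_prime = X - mean_sample

# Every dimension (row) of the centered matrix now has zero mean
print(X_prime.sum(axis=1))   # -> [0. 0.]
```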
Measure the correlation between the data dimensions with the covariance matrix $C$:
$$
C=\frac{1}{m}X'{X'}^{T}
$$
Compute the eigenvalues $\lambda_1,\lambda_2,\cdots,\lambda_t$ of the covariance matrix $C$ and their corresponding eigenvectors, and sort the eigenvalues in descending order:
$$
\left(\boldsymbol{P}_{1}, \boldsymbol{P}_{2}, \ldots, \boldsymbol{P}_{t}\right)=\left(\begin{matrix}p_{11}&p_{12}&\cdots&p_{1t}\\p_{21}&p_{22}&\cdots&p_{2t}\\\vdots&\vdots&\ddots&\vdots\\p_{n1}&p_{n2}&\cdots&p_{nt}\end{matrix}\right),\quad \lambda_1>\lambda_2>\cdots>\lambda_t
$$
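The covariance and eigendecomposition steps can be sketched as follows (a NumPy stand-in with invented random data; `np.linalg.eigh` is used because $C$ is symmetric, and its ascending output is reversed to get $\lambda_1>\lambda_2>\cdots$ as in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 100                            # 4 dimensions, 100 samples (columns)
X = rng.normal(size=(n, m))
Xp = X - X.mean(axis=1, keepdims=True)   # centered data X'

# Covariance matrix C = (1/m) X' X'^T, shape (n, n)
C = Xp @ Xp.T / m

# eigh returns eigenvalues in ascending order for a symmetric matrix;
# reverse the order so eigvals[0] is the largest
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals = eigvals[order]
eigvecs = eigvecs[:, order]              # column i is the eigenvector P_i
```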
Given the target dimension, say $k$, take the first $k$ eigenvectors to form the projection matrix $P$:
$$
P=\left(\boldsymbol{P}_{1}, \boldsymbol{P}_{2}, \ldots, \boldsymbol{P}_{k}\right)^{T},\quad P\in\mathbb{R}^{k\times n}
$$
Finally, transform the centered samples with $P$ to obtain the reduced-dimensional, principal-component representation:
$$
Y=PX',\quad Y\in\mathbb{R}^{k\times m}
$$
Computing the reconstruction error

After the projection, the samples are reconstructed from $Y$ and the information lost by the reduction is measured, typically with the following two quantities:
$$
{error}_1=\frac{1}{m}\sum_{i=1}^{m}\left\|x^{\left(i\right)}-x_{approx}^{\left(i\right)}\right\|^2
$$
$$
{error}_2=\frac{1}{m}\sum_{i=1}^{m}\left\|x^{\left(i\right)}\right\|^2
$$
where $x_{approx}^{\left(i\right)}$ is the reconstruction of sample $x^{\left(i\right)}$ from its $k$-dimensional projection. Their ratio $\eta$ is
$$
\eta=\frac{{error}_1}{{error}_2}
$$
$\eta$ measures the fraction of information lost by the dimensionality reduction.
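The two errors and their ratio can be computed as below (a NumPy sketch with invented random data, which is assumed to be already centered; for centered data, $\eta$ equals the fraction of discarded eigenvalues, a useful sanity check):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 4, 500, 2
X = rng.normal(size=(n, m))
X = X - X.mean(axis=1, keepdims=True)   # assume the data are already centered

C = X @ X.T / m
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
P = eigvecs[:, order[:k]].T             # projection matrix, shape (k, n)

Y = P @ X                               # k-dimensional projection
X_approx = P.T @ Y                      # reconstruction in the original space

error1 = np.mean(np.sum((X - X_approx) ** 2, axis=0))
error2 = np.mean(np.sum(X ** 2, axis=0))
eta = error1 / error2

# For centered data this ratio equals the fraction of discarded eigenvalues
lam = eigvals[order]
print(np.isclose(eta, lam[k:].sum() / lam.sum()))   # -> True
```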
The algorithm can be summarized as follows:

Input: sample set $D=\left\{\boldsymbol{x}_{1}, \boldsymbol{x}_{2}, \ldots, \boldsymbol{x}_{m}\right\}$; target dimension $k$.

Process: center the samples, compute the covariance matrix, take the eigenvectors of its $k$ largest eigenvalues as $P$, and project.

Output: the transformed matrix $Y=PX',\ Y\in\mathbb{R}^{k\times m}$.
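The algorithm box above can be collected into a single function (a NumPy sketch, with Python as an executable stand-in for the MATLAB experiment below; the toy data are invented):

```python
import numpy as np

def pca(X, k):
    """Reduce X (n dimensions x m samples, one sample per column) to k dimensions.

    Returns the projection matrix P (k x n) and the projected data Y (k x m).
    """
    n, m = X.shape
    Xp = X - X.mean(axis=1, keepdims=True)   # 1. center the columns
    C = Xp @ Xp.T / m                        # 2. covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)     # 3. eigendecomposition
    order = np.argsort(eigvals)[::-1]        # 4. sort eigenvalues descending
    P = eigvecs[:, order[:k]].T              # 5. top-k eigenvectors as rows
    return P, P @ Xp                         # 6. project

# Usage: 3-D points lying almost on a plane reduce to 2-D with little loss
rng = np.random.default_rng(3)
X = rng.normal(size=(3, 50))
X[2] = 0.6 * X[0] + 0.4 * X[1] + 0.01 * rng.normal(size=50)
P, Y = pca(X, 2)
print(Y.shape)   # -> (2, 50)
```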
The data set consists of the EMG1, EMG2, …, EMG8 channels under Imported Analog EMG – Voltage.
fileName = 'c:\Users\Administrator\Desktop\机器学习作业\PCA\pcaData1.csv';
X = csvread(fileName);
m = size(X,1);   % number of samples (rows)
R = size(X,2);   % number of EMG channels (columns)
% Zero-center the raw data: subtract its mean from every column
meanLine = mean(X,1);
A = X - repmat(meanLine,m,1);
% Covariance matrix of the channels
C = A'*A/m;
% Eigenvalues and eigenvectors of the covariance matrix;
% svd of the symmetric C returns them sorted in descending order
[U,S,V] = svd(C);
% Target dimension k (varied from 1 to R-1 to produce the table below)
k = 8;
% Projection matrix: the first k eigenvectors
P = U(:,1:k);
% Projected sample data Y
Y = A*P;
% Reconstruct the samples (the columns of P are orthonormal, so pinv(P) = P')
XR = Y*P' + repmat(meanLine,m,1);
% Reconstruction error, data energy, and their ratio, per the formulas above
err1 = 0;
err2 = 0;
for i = 1:m
    err1 = err1 + norm(X(i,:)-XR(i,:))^2;
    err2 = err2 + norm(X(i,:))^2;
end
eta = err1/err2
The computation yields the following eigenvalues and corresponding projection directions:

$\lambda_1=1.8493$, direction $(-0.0164, 0.0300, -0.2376, 0.4247, -0.6717, 0.2356, -0.2196, 0.4551)$

$\lambda_2=1.3836$, direction $(0.0910, 0.1724, -0.0097, -0.8267, -0.1464, 0.3599, 0.0025, 0.3570)$

$\lambda_3=0.5480$, direction $(-0.1396, -0.4457, -0.1668, 0.0870, 0.2812, 0.7696, -0.1742, -0.2115)$

$\lambda_4=0.4135$, direction $(0.0622, 0.1782, 0.3136, -0.0080, -0.5387, 0.2841, 0.3300, -0.6214)$

$\lambda_5=0.3218$, direction $(0.2126, -0.7813, 0.3136, -0.0080, -0.5387, 0.2841, 0.3300, -0.6214)$

$\lambda_6=0.1322$, direction $(-0.0959, 0.0340, -0.6943, 0.0068, 0.0269, 0.0042, 0.7119, 0.0064)$

$\lambda_7=0.0620$, direction $(0.8881, -0.0497, -0.3407, -0.0198, -0.0103, -0.0424, -0.2075, -0.2176)$

$\lambda_8=9.5959\times 10^{-17}$, direction $(0.3536, 0.3536, 0.3536, 0.3536, 0.3536, 0.3536, 0.3536, 0.3536)$
The error ratios for different values of k are:

k | error ratio η |
---|---|
1 | 0.8265 |
2 | 0.7105 |
3 | 0.6499 |
4 | 0.5940 |
5 | 0.5521 |
6 | 0.5294 |
7 | 0.5162 |
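In practice, k is often chosen as the smallest value whose error ratio falls below a threshold. A sketch of this selection rule using the eigenvalue form of $\eta$ (the helper function and the 5% threshold are illustrative choices, not part of the original experiment):

```python
import numpy as np

def smallest_k(eigvals_desc, threshold=0.05):
    """Smallest k with eta = sum(lambda_{k+1..n}) / sum(lambda) <= threshold.

    eigvals_desc must be sorted in descending order; assumes the threshold
    is reachable (the full decomposition always gives eta = 0).
    """
    lam = np.asarray(eigvals_desc, dtype=float)
    total = lam.sum()
    tail = total - np.cumsum(lam)   # tail[k-1] = sum of discarded eigenvalues
    return int(np.argmax(tail / total <= threshold)) + 1

# With the eigenvalues reported above, a 5% eigenvalue-mass threshold gives k = 5
lam = [1.8493, 1.3836, 0.5480, 0.4135, 0.3218, 0.1322, 0.0620, 9.5959e-17]
print(smallest_k(lam, threshold=0.05))   # -> 5
```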