Why Machine Learning (6): Dimensionality Reduction with LDA (Linear Discriminant Analysis)


[Figure 1]
Therefore, the procedure for LDA dimensionality reduction is:
(1) Compute the mean vector of each class and the overall mean vector.
(2) Compute the between-class scatter matrix $S_B$ and the within-class scatter matrix $S_w$ (see the formulas below).
(3) Compute the matrix product $S = S_w^{-1} S_B$.
(4) Perform an eigendecomposition of $S$ to obtain its eigenvalues and eigenvectors.
(5) To reduce to $k$ dimensions, sort the eigenvalues in descending order and take the eigenvectors of the top $k$ eigenvalues as the rows of the projection matrix $W$; the projected data is then $x_{new} = x W^T$.
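
For reference, with $m_i$ the mean vector of class $i$, $n_i$ the number of samples in class $i$, $m$ the overall mean, and $c$ the number of classes, the two scatter matrices in step (2) are

$$S_w = \sum_{i=1}^{c} \sum_{x \in D_i} (x - m_i)(x - m_i)^T, \qquad S_B = \sum_{i=1}^{c} n_i (m_i - m)(m_i - m)^T$$

The $c$ deviations $m_i - m$ are linearly dependent (weighted by $n_i$ they sum to zero), so the rank of $S_B$ is at most $c-1$ and $S_w^{-1}S_B$ has at most $c-1$ nonzero eigenvalues. For the three Iris classes this gives at most two useful discriminant directions, which is why the code below projects from 4 dimensions down to 2.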

The following code uses LDA to reduce the dimensionality of the Iris dataset:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Read the data; the four numeric feature columns are selected later with iloc
data = pd.read_csv('iris.csv')
# print(data)

# Compute the mean vector of each class and the overall mean vector
species_counts = data['Species'].value_counts()
meanVal = np.empty([len(species_counts), 4])
for i in range(len(species_counts)):
    meanVal[i] = np.mean(np.mat(data[data['Species'] == species_counts.index[i]].iloc[:, 1:5]), axis = 0)
#print(meanVal)

# Iris classes are balanced (50 samples each), so the mean of the class means equals the overall mean
meanValAll = np.mean(meanVal, axis = 0)
#print(meanValAll)

# Compute the within-class and between-class scatter matrices
S_w = np.zeros([4,4])
S_b = np.zeros([4,4])
for i in range(len(species_counts)):
    x = np.mat(data[data['Species'] == species_counts.index[i]].iloc[:, 1:5])
    S_w += np.matmul((x - meanVal[i]).T, x - meanVal[i])   # within-class scatter of class i
    n = len(x)
    m_mat = np.mat(meanVal[i] - meanValAll)
    S_b += n * np.matmul(m_mat.T, m_mat)                   # between-class contribution, weighted by class size
print(S_w)
print(S_b)

# Compute S = S_w^{-1} S_b (explicit matrix product, not elementwise multiplication)
S = np.matmul(np.linalg.inv(S_w), S_b)
# Eigenvalues and eigenvectors of S
eigVals, eigVects = np.linalg.eig(S)

print(eigVals,"\n",eigVects)

# Projection matrix for 4 -> 2: np.linalg.eig does not sort its output and stores eigenvectors
# as columns, so take the two columns belonging to the largest eigenvalues and use them as the rows of W
idx = np.argsort(np.abs(eigVals))[::-1]
W = np.real(eigVects[:, idx[:2]]).T

# Plot the projected data
fig = plt.figure()
ax1 = fig.add_subplot()
plt.xlabel('LDA1')
plt.ylabel('LDA2')
colors = ['r','g','b']
for i in range(len(species_counts)):
    x = np.mat(data[data['Species'] == species_counts.index[i]].iloc[:, 1:5])
    x_new = (x * W.T).getA()   # project the samples of class i onto the two LDA directions
    lda1 = list(x_new[:, 0])
    lda2 = list(x_new[:, 1])
    ax1.scatter(lda1, lda2, c = colors[i], label = species_counts.index[i])

ax1.legend()
plt.show()

Result:
[Figure 2: scatter plot of the projected data]
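
As a cross-check, the same 2-D projection can be obtained with scikit-learn's LinearDiscriminantAnalysis. The sketch below assumes the same iris.csv layout as above (features in columns 1 to 4, labels in 'Species'); the resulting axes may differ from the manual version by sign or scale.

import pandas as pd
import matplotlib.pyplot as plt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Assumes the same iris.csv used above: feature columns at iloc 1:5, labels in 'Species'
data = pd.read_csv('iris.csv')
X = data.iloc[:, 1:5].values
y = data['Species'].values

# n_components is capped at (number of classes - 1) = 2 for Iris
lda = LinearDiscriminantAnalysis(n_components=2)
X_new = lda.fit_transform(X, y)

for name in data['Species'].unique():
    mask = (y == name)
    plt.scatter(X_new[mask, 0], X_new[mask, 1], label=name)
plt.xlabel('LDA1')
plt.ylabel('LDA2')
plt.legend()
plt.show()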
