Derivation of the Least Squares Formula for Linear Regression

Notation

First, we write down the feature matrix X of the training samples, where N is the number of samples and p is the number of features; each row is one sample and each column is one feature dimension:
$$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ x_{N1} & x_{N2} & \cdots & x_{Np} \end{pmatrix}_{N \times p}$$

Next, we write down the label vector Y of the training samples and the weight vector w, where the weight vector collects the coefficients of the linear regression:
$$Y = \begin{pmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{N} \end{pmatrix}$$

$$w = \begin{pmatrix} w_{1} \\ w_{2} \\ \vdots \\ w_{p} \end{pmatrix}$$
To simplify the algebra, we absorb the bias b in $y_{i} = x_{i}w + b$ into w and X, by prepending a constant feature $x_{i0} = 1$ to every sample and a corresponding weight $w_{0} = b$. The notation then becomes:

$$X = \begin{pmatrix} x_{10} & x_{11} & x_{12} & \cdots & x_{1p} \\ x_{20} & x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{N0} & x_{N1} & x_{N2} & \cdots & x_{Np} \end{pmatrix}_{N \times (p+1)}, \qquad x_{i0} = 1$$

$$w = \begin{pmatrix} w_{0} \\ w_{1} \\ w_{2} \\ \vdots \\ w_{p} \end{pmatrix}$$
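In code, absorbing the bias amounts to prepending a column of ones to the feature matrix. A minimal numpy sketch (the array values and the name `X_aug` are illustrative assumptions, not from the original):

```python
import numpy as np

# Raw feature matrix: N = 3 samples, p = 2 features (illustrative values).
X = np.array([[2.0, 3.0],
              [1.0, 0.5],
              [4.0, 1.0]])

# Prepend a column of ones so that w_0 plays the role of the bias b:
# each row becomes (x_i0, x_i1, ..., x_ip) with x_i0 = 1.
X_aug = np.hstack([np.ones((X.shape[0], 1)), X])
print(X_aug.shape)  # (3, 3), i.e. N x (p + 1)
```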

Derivation

The least-squares loss is the sum of squared residuals over all samples, and the optimal weights are the ones that minimize it:
$$L(w) = \sum^{N}_{i=1} (x_{i}w - y_{i})^{2}$$
$$\hat{w} = \arg\min_{w} L(w) = \arg\min_{w} \sum^{N}_{i=1} (x_{i}w - y_{i})^{2}$$
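As a quick numerical illustration of the loss (a minimal sketch; the data and the candidate weight vector are made up):

```python
import numpy as np

# Toy data (hypothetical values): N = 4 samples, X already augmented with the
# constant column, so the columns are (x_i0 = 1, x_i1).
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
Y = np.array([1.0, 2.9, 5.1, 7.0])
w = np.array([1.0, 2.0])        # a candidate weight vector [w_0, w_1]

# L(w) = sum_i (x_i w - y_i)^2
residuals = X @ w - Y
loss = np.sum(residuals ** 2)
print(loss)
```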
In matrix form the loss becomes:
$$L(w) = (Xw - Y)^{T} (Xw - Y)$$
Why the transpose times the original? Since Y is a column vector, $(Xw - Y)$ is also a column vector, and by the definition of matrix multiplication only a row vector times a column vector yields a scalar.
Expanding:
$$L(w) = (w^{T}X^{T} - Y^{T})(Xw - Y)$$
$$L(w) = w^{T}X^{T}Xw - 2w^{T}X^{T}Y + Y^{T}Y$$
where the two cross terms combine because $Y^{T}Xw$ is a scalar and therefore equals its own transpose $w^{T}X^{T}Y$.
Setting the gradient with respect to w to zero (using the identities $\frac{\partial}{\partial w} w^{T}Aw = 2Aw$ for symmetric A and $\frac{\partial}{\partial w} w^{T}b = b$):
$$\frac{\partial L(w)}{\partial w} = 2X^{T}Xw - 2X^{T}Y = 0$$
$$\hat{w} = (X^{T}X)^{-1}X^{T}Y$$
assuming $X^{T}X$ is invertible, i.e. X has full column rank.
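A minimal numpy sketch of the closed-form solution (the toy data and variable names are assumptions). It solves the normal equations $(X^{T}X)w = X^{T}Y$ with `np.linalg.solve` rather than forming the inverse explicitly, which is the standard, numerically stabler way to evaluate $(X^{T}X)^{-1}X^{T}Y$, and cross-checks the result against `np.linalg.lstsq`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumed for illustration): N = 100 samples, p = 3 features.
N, p = 100, 3
X_raw = rng.normal(size=(N, p))
X = np.hstack([np.ones((N, 1)), X_raw])          # absorb the bias as w_0
w_true = np.array([0.5, 1.0, -2.0, 3.0])         # [b, w_1, w_2, w_3]
Y = X @ w_true + 0.01 * rng.normal(size=N)       # labels with small noise

# Normal equations: (X^T X) w = X^T Y.  Solving the linear system is more
# numerically stable than computing (X^T X)^{-1} explicitly.
w_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Cross-check against numpy's built-in least-squares solver.
w_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
assert np.allclose(w_hat, w_lstsq)
print(w_hat)   # close to w_true
```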
