LU factorization rocks!

LU Factorization

The Randomized SVD implementation covered earlier uses LU factorization. LU factorization decomposes a matrix into the product of a lower triangular matrix $L$ and an upper triangular matrix $U$.

Gaussian Elimination

First, let's work through Gaussian elimination by hand:

$$A = \begin{bmatrix} 1 & -2 & -2 & -3 \\ 3 & -9 & 0 & -9 \\ -1 & 2 & 4 & 7 \\ -3 & -6 & 26 & 2 \end{bmatrix}$$

Gaussian elimination on $A$ yields an upper triangular matrix $U$ and a lower triangular matrix $L$:

$$LU = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 3 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ -3 & 4 & -2 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & -2 & -2 & -3 \\ 0 & -3 & 6 & 0 \\ 0 & 0 & 2 & 4 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
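As a quick sanity check (a NumPy sketch), multiplying the two hand-computed factors back together should reproduce $A$:

```python
import numpy as np

L = np.array([[ 1, 0,  0, 0],
              [ 3, 1,  0, 0],
              [-1, 0,  1, 0],
              [-3, 4, -2, 1]], dtype=float)
U = np.array([[1, -2, -2, -3],
              [0, -3,  6,  0],
              [0,  0,  2,  4],
              [0,  0,  0,  1]], dtype=float)
A = np.array([[ 1, -2, -2, -3],
              [ 3, -9,  0, -9],
              [-1,  2,  4,  7],
              [-3, -6, 26,  2]], dtype=float)

# L @ U reproduces the original matrix
print(np.allclose(L @ U, A))  # → True
```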

Gaussian elimination transforms a linear system into an upper triangular one by applying linear transformations on the left. It is triangular triangularization.

Viewed another way: reducing $A$ to $U$ by Gaussian elimination amounts to applying a sequence of elementary row operations to $A$:

$$L_{m-1} \cdots L_2 L_1 A = U$$
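A minimal sketch of this viewpoint: build each elimination matrix $L_k$ explicitly, apply it on the left, and accumulate the product of the inverses, which gives the unit lower-triangular $L$ (using the example matrix $A$ from above):

```python
import numpy as np

A = np.array([[ 1, -2, -2, -3],
              [ 3, -9,  0, -9],
              [-1,  2,  4,  7],
              [-3, -6, 26,  2]], dtype=float)

n = A.shape[0]
U = A.copy()
L = np.eye(n)
for k in range(n - 1):
    # elementary matrix that zeros out column k below the pivot
    Lk = np.eye(n)
    Lk[k+1:, k] = -U[k+1:, k] / U[k, k]
    U = Lk @ U                  # apply the row operations on the left
    L = L @ np.linalg.inv(Lk)   # accumulate L = L1^-1 L2^-1 ...

print(np.allclose(L, np.tril(L)))  # L is lower triangular → True
print(np.allclose(L @ U, A))       # A = L U → True
```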

$L$ is unit lower-triangular: every diagonal entry of $L$ is 1!

import numpy as np

def LU(A):
    # work on a float copy so integer input is not truncated
    U = np.copy(A).astype(np.float64)
    m, n = A.shape
    # note: L starts out as the identity
    L = np.eye(n)

    # the last column (n) needs no elimination
    for k in range(n-1):
        # only rows below the pivot row k are processed
        for j in range(k+1, n):
            L[j,k] = U[j,k]/U[k,k]
            U[j,k:n] -= L[j,k] * U[k,k:n]
            # printing U here shows Gaussian elimination step by step
    return L, U
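For comparison (assuming SciPy is available), `scipy.linalg.lu` computes the same factorization but with partial pivoting, so it also returns a permutation matrix $P$ with $A = PLU$; the pivoting means its $L$ and $U$ may differ from the ones computed by hand above:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[ 1, -2, -2, -3],
              [ 3, -9,  0, -9],
              [-1,  2,  4,  7],
              [-3, -6, 26,  2]], dtype=float)

P, L, U = lu(A)  # partial pivoting: A = P @ L @ U
print(np.allclose(P @ L @ U, A))  # → True
```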
Complexity of Gaussian Elimination

Work for Gaussian elimination: $2 \cdot \frac{1}{3} n^3$ flops.

Memory

Above, we created two new matrices, $L$ and $U$. However, we can store the values of $L$ and $U$ in our matrix $A$ (overwriting the original matrix). Since the diagonal of $L$ is all 1s, it doesn't need to be stored. Doing factorizations or computations in-place is a common technique in numerical linear algebra to save memory.
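A sketch of this in-place variant (an assumption of mine, not code from the original post): the strict lower part of $A$ ends up holding the multipliers of $L$ (the unit diagonal is implicit), and the upper part holds $U$:

```python
import numpy as np

def LU_inplace(A):
    """LU factorization stored in A itself: strict lower part holds L
    (unit diagonal implied), upper part (incl. diagonal) holds U."""
    n = A.shape[0]
    for k in range(n - 1):
        for j in range(k + 1, n):
            A[j, k] /= A[k, k]                 # multiplier -> L[j, k]
            A[j, k+1:] -= A[j, k] * A[k, k+1:]
    return A

A = np.array([[ 1, -2, -2, -3],
              [ 3, -9,  0, -9],
              [-1,  2,  4,  7],
              [-3, -6, 26,  2]], dtype=float)
LU_inplace(A)
L = np.tril(A, -1) + np.eye(4)  # restore the implicit unit diagonal
U = np.triu(A)
```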

Note: you wouldn’t want to do this if you needed to use your original matrix $A$ again in the future.

Uses of LU Factorization

After factoring $A = LU$, solving $Ly = b$ first and then $Ux = y$ is much simpler, since each system is triangular.

Solving $Ax = b$ becomes $LUx = b$:
1. find $A = LU$
2. solve $Ly = b$
3. solve $Ux = y$
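Steps 2 and 3 are cheap because each system is triangular. A sketch of forward and back substitution, hand-written for illustration (in practice a routine like `scipy.linalg.solve_triangular` does this), using the $L$, $U$ from the worked example and an arbitrary right-hand side $b$ that I picked for the demo:

```python
import numpy as np

def forward_sub(L, b):
    # solve L y = b, L lower triangular
    n = L.shape[0]
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    # solve U x = y, U upper triangular
    n = U.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

L = np.array([[ 1, 0,  0, 0],
              [ 3, 1,  0, 0],
              [-1, 0,  1, 0],
              [-3, 4, -2, 1]], dtype=float)
U = np.array([[1, -2, -2, -3],
              [0, -3,  6,  0],
              [0,  0,  2,  4],
              [0,  0,  0,  1]], dtype=float)
b = np.array([1., 2., 3., 4.])

x = back_sub(U, forward_sub(L, b))
print(np.allclose(L @ U @ x, b))  # → True
```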

Speed up LU Factorization

  • Parallelized LU Decomposition: LU decomposition can be fully parallelized.
  • Randomized LU Decomposition (2016 article): the randomized LU is fully implemented to run on a standard GPU card without any GPU–CPU data transfer.
