Detailed Notes on Numerical Computation (3): Solving Linear Systems

Table of Contents

    • 6. Linear Systems Ax = b
      • 6.1 Basic Concepts
        • 6.1.1 LSEs
        • 6.1.2 Operations of LSEs
        • 6.1.3 Augmented Matrix
      • 6.2 Gaussian Elimination Method
        • 6.2.1 Overall Description
        • 6.2.2 Algorithm
      • 6.3 Pivoting Strategies
        • 6.3.1 Background
        • 6.3.2 Maximal Column Pivoting Technique
        • 6.3.3 Maximal Row Pivoting Technique
        • 6.3.4 Partial Pivoting Technique
        • 6.3.5 Scaled Partial Pivoting Technique
      • 6.4 LU Factorization
        • 6.4.1 The advantage of LU Factorization
        • 6.4.2 LU Factorization through Gaussian Elimination
        • 6.4.3 Direct LU Factorization
      • 6.5 Strictly Diagonally Dominant Matrix
        • 6.5.1 Definition
        • 6.5.2 Property
      • 6.6 Positive Definite Symmetric Matrix
        • 6.6.1 Definition
        • 6.6.2 Property
        • 6.6.3 Theorem
      • 6.7 $LL^T$ Factorization
        • 6.7.1 Definition
        • 6.7.2 Choleski's Algorithm
      • 6.8 $LDL^T$ Factorization
        • 6.8.1 Definition
        • 6.8.2 Algorithm
      • 6.9 Tri-diagonal Linear System
        • 6.9.1 Definition
        • 6.9.2 LU Factorization
        • 6.9.3 Remarks

6. Linear Systems Ax = b

6.1 Basic Concepts

6.1.1 LSEs

The linear system of equations (LSEs):
$$
(I)\left\{
\begin{aligned}
E_1:\ & a_{11}x_1+a_{12}x_2+\dots+a_{1n}x_n=b_1 \\
E_2:\ & a_{21}x_1+a_{22}x_2+\dots+a_{2n}x_n=b_2 \\
& \ \vdots \\
E_n:\ & a_{n1}x_1+a_{n2}x_2+\dots+a_{nn}x_n=b_n
\end{aligned}
\right.
$$

6.1.2 Operations of LSEs

Multiplication operation (scaling)

Equation $E_i$ can be multiplied by any nonzero constant $\lambda$:
$$(\lambda E_i)\rightarrow E_i$$

Multiply-and-add operation (replacement)

Equation $E_j$ can be multiplied by any nonzero constant $\lambda$ and added to equation $E_i$, with the result replacing $E_i$, denoted by
$$(\lambda E_j+E_i)\rightarrow E_i$$

Transposition operation (interchange)

Equations $E_i$ and $E_j$ can be transposed (interchanged) in order, denoted by
$$E_i \leftrightarrow E_j$$

6.1.3 Augmented Matrix

$$
\tilde{A}=[A,\mathbf{b}]=
\left(
\begin{array}{cccc:c}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} & b_n
\end{array}
\right)
$$

6.2 Gaussian Elimination Method

6.2.1 Overall Description

The key idea of the Gaussian elimination method is to reduce the coefficient matrix to an upper-triangular matrix and then compute the solution by backward substitution.

6.2.2 Algorithm

  • INPUT: the dimension $N$; the matrix $A(N,N)$; the right-hand side $B(N)$.

  • OUTPUT: the solution $x(N)$, or a message that the LSE has no unique solution.

  • Step $1$: For $k = 1,2,\dots,N-1$, do Steps 2-4.

  • Step $2$: Let $p$ be the smallest integer with $k\leq p\leq N$ and $A(p,k)\neq 0$. If no such $p$ can be found, OUTPUT "no unique solution exists"; STOP.

  • Step $3$: If $p\neq k$, perform the transposition $E_p\leftrightarrow E_k$.

  • Step $4$: For $i=k+1,\dots,N$:

    1. Set $m_{i,k}=\dfrac{A(i,k)}{A(k,k)}$;
    2. Set $B(i)=B(i)-m_{i,k}B(k)$;
    3. For $j=k+1,\dots,N$, set $A(i,j)=A(i,j)-m_{i,k}A(k,j)$.
  • Step $5$: If $A(N,N)\neq 0$, set $x(N)=\dfrac{B(N)}{A(N,N)}$; otherwise, OUTPUT "no unique solution exists"; STOP.

  • Step $6$: For $i=N-1,N-2,\dots,1$, set
    $$x(i)=\Big[B(i)-\sum_{j=i+1}^{N}A(i,j)x(j)\Big]\Big/A(i,i)$$

  • Step $7$: OUTPUT the solution $x(N)$. STOP.
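The algorithm above can be sketched in a few lines of Python; this is a minimal illustration (the function name `gaussian_elimination` and the use of NumPy are my own choices, not part of the original notes):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b with Gaussian elimination and backward substitution (Steps 1-7)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)

    # Forward elimination (Steps 1-4).
    for k in range(n - 1):
        # Step 2: smallest p >= k with A[p, k] != 0.
        p = next((i for i in range(k, n) if A[i, k] != 0), None)
        if p is None:
            raise ValueError("no unique solution exists")
        if p != k:                       # Step 3: row interchange E_p <-> E_k
            A[[p, k]] = A[[k, p]]
            b[[p, k]] = b[[k, p]]
        for i in range(k + 1, n):        # Step 4: eliminate column k below the pivot
            m = A[i, k] / A[k, k]
            b[i] -= m * b[k]
            A[i, k:] -= m * A[k, k:]

    # Backward substitution (Steps 5-6).
    if A[n - 1, n - 1] == 0:
        raise ValueError("no unique solution exists")
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

print(gaussian_elimination([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # ≈ [0.8, 1.4]
```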

6.3 Pivoting Strategies

6.3.1 Background

From the process of the Gaussian elimination method, we see that if the pivot $a_{kk}^{(k-1)}$ is too small, the roundoff error becomes large:
$$m_{i,k}=\frac{A(i,k)}{A(k,k)},\qquad x(i)=\Big[B(i)-\sum_{j=i+1}^{N}A(i,j)x(j)\Big]\Big/A(i,i)$$

Therefore, to reduce the roundoff error, we should choose a pivot $a_{kk}^{(k-1)}$ of larger magnitude.
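The effect is easy to reproduce numerically. The snippet below is an illustrative experiment of my own (not from the original notes): it eliminates a 2×2 system whose first pivot is tiny, once as given and once with the rows swapped first.

```python
import numpy as np

def solve_2x2_no_pivot(A, b):
    """Naive Gaussian elimination on a 2x2 system, no row interchange."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    m = A[1, 0] / A[0, 0]            # huge multiplier when the pivot A[0,0] is tiny
    A[1] -= m * A[0]
    b[1] -= m * b[0]
    x1 = b[1] / A[1, 1]
    x0 = (b[0] - A[0, 1] * x1) / A[0, 0]
    return np.array([x0, x1])

# The exact solution of this system is very close to x = [1, 1].
A = [[1e-17, 1.0],
     [1.0,   1.0]]
b = [1.0, 2.0]

print(solve_2x2_no_pivot(A, b))              # x0 is completely wrong (roundoff)
print(solve_2x2_no_pivot(A[::-1], b[::-1]))  # rows swapped first (pivoting): ≈ [1., 1.]
```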

6.3.2 Maximal Column Pivoting Technique


This method chooses the pivot so that $|a_{kk}^{(k-1)}|$ is the largest absolute value among the entries of column $k$ on or below the diagonal.

6.3.3 Maximal Row Pivoting Technique


This method chooses the pivot so that $|a_{kk}^{(k-1)}|$ is the largest absolute value among the entries of row $k$.

6.3.4 Partial Pivoting Technique


This method chooses the pivot so that $|a_{kk}^{(k-1)}|$ is the largest absolute value in the entire remaining (unreduced) submatrix.

6.3.5 Scaled Partial Pivoting Technique

$$s_i=\max_{1\leq j\leq n}|a_{ij}|,\qquad \frac{|a_{kk}^{(k-1)}|}{s_k}=\max_{k\leq i\leq n}\frac{|a_{ik}^{(k-1)}|}{s_i}$$

This method first assigns a scale factor $s_i$ to each row and then chooses the pivot so that $\dfrac{|a_{kk}^{(k-1)}|}{s_k}$ is maximal among the remaining rows.
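A small sketch of how the pivot row could be selected at step $k$ under this strategy (the function and the example matrix are illustrative assumptions, not taken from the original notes):

```python
import numpy as np

def choose_scaled_pivot(A, k):
    """Return the row p >= k whose ratio |A[p, k]| / s_p is maximal (scaled partial pivoting).

    In practice the scale factors s_i = max_j |a_ij| are computed once from the
    original matrix and reused; here they are recomputed for simplicity.
    """
    A = np.asarray(A, dtype=float)
    s = np.max(np.abs(A), axis=1)              # scale factor of each row
    if np.any(s[k:] == 0):
        raise ValueError("zero row: no unique solution exists")
    ratios = np.abs(A[k:, k]) / s[k:]
    return k + int(np.argmax(ratios))

# With plain column pivoting row 0 would win (|2| > |1|); the scale factors reverse that.
A = [[2.0, 100000.0],
     [1.0, 1.0]]
print(choose_scaled_pivot(A, 0))  # -> 1
```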

6.4 LU Factorization

6.4.1 The advantage of LU Factorization

$$Ax=b,\qquad A=LU,$$
$$
L=
\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
l_{21} & 1 & 0 & \cdots & 0 \\
l_{31} & l_{32} & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
l_{n1} & l_{n2} & l_{n3} & \cdots & 1
\end{pmatrix},\qquad
U=
\begin{pmatrix}
u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\
0 & u_{22} & u_{23} & \cdots & u_{2n} \\
0 & 0 & u_{33} & \cdots & u_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & u_{nn}
\end{pmatrix}
$$

We can use a two-step process to solve $LUx=b$:

  1. Let $y=Ux$, so that $LUx=b$ becomes $Ly=b$.
  2. Solve $Ly=b$ for $y$ by forward substitution.
  3. Solve $Ux=y$ for $x$ by backward substitution.
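A minimal sketch of this two-step solve, assuming $L$ and $U$ are already available (the helper names `forward_sub` and `backward_sub` are my own):

```python
import numpy as np

def forward_sub(L, b):
    """Solve Ly = b for lower-triangular L by forward substitution."""
    L, b = np.asarray(L, dtype=float), np.asarray(b, dtype=float)
    y = np.zeros(len(b))
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def backward_sub(U, y):
    """Solve Ux = y for upper-triangular U by backward substitution."""
    U, y = np.asarray(U, dtype=float), np.asarray(y, dtype=float)
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

def lu_solve(L, U, b):
    """Solve LUx = b: first Ly = b, then Ux = y."""
    return backward_sub(U, forward_sub(L, b))
```

The advantage is that once $A=LU$ is known, each additional right-hand side $b$ costs only the $O(n^2)$ substitution work instead of a full $O(n^3)$ elimination.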

6.4.2 LU Factorization through Gaussian Elimination

Theorem

If Gaussian elimination can be performed on the linear system $Ax=b$ without row interchanges, then the matrix $A$ can be factored into the product of a lower-triangular matrix $L$ and an upper-triangular matrix $U$,
$$A=LU,$$
where
$$
L=
\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
m_{21} & 1 & 0 & \cdots & 0 \\
m_{31} & m_{32} & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
m_{n1} & m_{n2} & m_{n3} & \cdots & 1
\end{pmatrix},\qquad
U=
\begin{pmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & a_{1n}^{(1)} \\
0 & a_{22}^{(2)} & a_{23}^{(2)} & \cdots & a_{2n}^{(2)} \\
0 & 0 & a_{33}^{(3)} & \cdots & a_{3n}^{(3)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & a_{nn}^{(n)}
\end{pmatrix}
$$

Proof

The multipliers of the first elimination step and the corresponding Gaussian transformation matrix are
$$
m_{j,1}=\frac{a_{j,1}}{a_{1,1}},\qquad
M^{(1)} =
\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
-m_{21} & 1 & 0 & \cdots & 0 \\
-m_{31} & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
-m_{n1} & 0 & 0 & \cdots & 1
\end{pmatrix}
$$
Thus,
$$A^{(n)}=M^{(n-1)}M^{(n-2)}\cdots M^{(1)}A.$$
Let $U=A^{(n)}$; then
$$[M^{(1)}]^{-1}\cdots[M^{(n-2)}]^{-1}[M^{(n-1)}]^{-1}U=A,$$
$$
[M^{(1)}]^{-1} =
\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
m_{21} & 1 & 0 & \cdots & 0 \\
m_{31} & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
m_{n1} & 0 & 0 & \cdots & 1
\end{pmatrix},\qquad
L = [M^{(1)}]^{-1}\cdots[M^{(n-2)}]^{-1}[M^{(n-1)}]^{-1}.
$$

6.4.3 Direct LU Factorization

$$
LU=
\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
l_{21} & 1 & 0 & \cdots & 0 \\
l_{31} & l_{32} & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
l_{n1} & l_{n2} & l_{n3} & \cdots & 1
\end{pmatrix}
\begin{pmatrix}
u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\
0 & u_{22} & u_{23} & \cdots & u_{2n} \\
0 & 0 & u_{33} & \cdots & u_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & u_{nn}
\end{pmatrix}
$$
$$
LU=A=
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{pmatrix}
$$

Algorithm
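The notes do not spell the algorithm out here; a minimal sketch of the direct (Doolittle-style) computation, obtained by equating the entries of $LU$ with those of $A$ and assuming no zero pivots are encountered, might look like this:

```python
import numpy as np

def lu_direct(A):
    """Direct LU factorization with unit diagonal in L (no pivoting).

    Matching A = LU entry by entry gives, for k = 1, ..., n:
      u_kj = a_kj - sum_{s<k} l_ks * u_sj           (row k of U)
      l_ik = (a_ik - sum_{s<k} l_is * u_sk) / u_kk  (column k of L)
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        for j in range(k, n):                      # k-th row of U
            U[k, j] = A[k, j] - L[k, :k] @ U[:k, j]
        if U[k, k] == 0:
            raise ValueError("zero pivot: factorization without interchanges fails")
        for i in range(k + 1, n):                  # k-th column of L
            L[i, k] = (A[i, k] - L[i, :k] @ U[:k, k]) / U[k, k]
    return L, U

L, U = lu_direct([[4.0, 3.0], [6.0, 3.0]])
print(L)  # [[1. , 0. ], [1.5, 1. ]]
print(U)  # [[4. , 3. ], [0. , -1.5]]
```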

6.5 Strictly Diagonally Dominant Matrix

6.5.1 Definition

An $n\times n$ matrix $A$ is said to be strictly diagonally dominant when
$$|a_{ii}|>\sum_{j=1,\,j\neq i}^{n} |a_{ij}|$$
holds for each $i=1,2,\dots,n$.
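A direct check of this definition in code (an illustrative helper of my own, not from the original notes):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Return True if |a_ii| > sum_{j != i} |a_ij| holds for every row i."""
    absA = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(absA)
    off_diag_sums = absA.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

print(is_strictly_diagonally_dominant([[7, 2, 0], [3, 5, -1], [0, 5, -6]]))   # True
print(is_strictly_diagonally_dominant([[6, 4, -3], [4, -2, 0], [-3, 0, 1]]))  # False
```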

6.5.2 Property

  1. A strictly diagonally dominant matrix $A$ is nonsingular.
  2. Moreover, in this case, Gaussian elimination can be performed on any linear system of the form $Ax=b$ to obtain its unique solution without row or column interchanges, and the computations are stable with respect to the growth of roundoff errors.

Proof of the First Property

A matrix is singular if and only if its determinant is zero, which means that its column vectors are linearly dependent.

Thus, if $A$ were singular, there would exist a nonzero vector $x$ such that $Ax=0$. Let $k$ be an index with $|x_k|=\max_{1\le j\le n}|x_j|>0$. The $k$-th equation of $Ax=0$ gives $a_{kk}x_k=-\sum_{j\neq k}a_{kj}x_j$, and therefore
$$|a_{kk}|\,|x_k|\le\sum_{j\neq k}|a_{kj}|\,|x_j|\le |x_k|\sum_{j\neq k}|a_{kj}|<|a_{kk}|\,|x_k|,$$
which is a contradiction. Hence $A$ must be nonsingular.

6.6 Positive Definite Symmetric Matrix

6.6.1 Definition

A matrix $A$ is positive definite if it is symmetric and if $x^TAx > 0$ for every $n$-dimensional column vector $x\neq 0$.
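In practice, positive definiteness is rarely verified from the definition directly. One common numerical test, shown below as a sketch of my own using NumPy (not part of the original notes), is to check symmetry and then see whether a Cholesky factorization succeeds:

```python
import numpy as np

def is_positive_definite(A, tol=1e-12):
    """Heuristic test: A is symmetric and np.linalg.cholesky succeeds."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T, atol=tol):
        return False
    try:
        np.linalg.cholesky(A)   # raises LinAlgError if A is not (numerically) positive definite
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite([[2.0, -1.0], [-1.0, 2.0]]))  # True
print(is_positive_definite([[1.0, 2.0], [2.0, 1.0]]))    # False (eigenvalues 3 and -1)
```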

6.6.2 Property

If $A$ is an $n\times n$ positive definite matrix, then

  1. $A$ is nonsingular;
  2. $a_{ii}>0$ for each $i=1,2,\dots,n$;
  3. $\max_{1\leq k,j\leq n}|a_{kj}|\leq \max_{1\leq i\leq n}|a_{ii}|$;
  4. $a_{ij}^2 < a_{ii}a_{jj}$ for each $i\neq j$.

6.6.3 Theorem

[Figures omitted: statements of the theorems characterizing positive definite matrices.]

6.7 $LL^T$ Factorization

6.7.1 Definition

For an $n\times n$ symmetric positive definite matrix $A$ of the form
$$
A=
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{12} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{13} & a_{23} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{1n} & a_{2n} & a_{3n} & \cdots & a_{nn}
\end{pmatrix},
$$
where $A^T=A$, we can factor the matrix as $A=LL^T$, where $L$ is a lower-triangular matrix of the form
$$
L=
\begin{pmatrix}
l_{11} & 0 & 0 & \cdots & 0 \\
l_{21} & l_{22} & 0 & \cdots & 0 \\
l_{31} & l_{32} & l_{33} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
l_{n1} & l_{n2} & l_{n3} & \cdots & l_{nn}
\end{pmatrix}
$$

Thus, we need to determine the entries $l_{ij}$, for $1\le j\le i\le n$, by equating the corresponding entries of $A$ and $LL^T$.

6.7.2 Choleski’s Algorithm

The entries are computed one row at a time.

To factor the positive definite $n\times n$ matrix $A$ into $LL^T$, where $L$ is lower triangular:

  • INPUT: the dimension $n$; the entries $a_{ij}$ of $A$, for $i\in[1,n]$ and $j\in[1,i]$.

  • OUTPUT: the entries $l_{ij}$ of $L$, for $i\in[1,n]$ and $j\in[1,i]$.

  • Step $1$: Set $l_{11} = \sqrt{a_{11}}$.

  • Step $2$: For $j\in[2,n]$, set $l_{j1}=\dfrac{a_{1j}}{l_{11}}$.

  • Step $3$: For $i\in[2,n-1]$, do Steps 4 and 5.

  • Step $4$: Set $l_{ii}=\Big[a_{ii}-\sum_{j=1}^{i-1}l_{ij}^2\Big]^{\frac{1}{2}}$.

  • Step $5$: For $j\in[i+1,n]$, set $l_{ji}=\dfrac{a_{ij}-\sum_{k=1}^{i-1}l_{ik}l_{jk}}{l_{ii}}$.

  • Step $6$: Set $l_{nn}=\Big[a_{nn}-\sum_{k=1}^{n-1}l_{nk}^2\Big]^{\frac{1}{2}}$.

  • Step $7$: OUTPUT $l_{ij}$ for $j\in[1,i]$ and $i\in[1,n]$. STOP.
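The steps above translate almost directly into code; a minimal sketch (the function name and the NumPy representation are my own choices):

```python
import numpy as np

def cholesky_llt(A):
    """Factor a symmetric positive definite A as A = L L^T (Steps 1-7 above)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        # Diagonal entry: l_ii = sqrt(a_ii - sum_{j<i} l_ij^2)   (Steps 1, 4, 6)
        L[i, i] = np.sqrt(A[i, i] - L[i, :i] @ L[i, :i])
        # Entries below the diagonal in column i                 (Steps 2, 5)
        for j in range(i + 1, n):
            L[j, i] = (A[i, j] - L[i, :i] @ L[j, :i]) / L[i, i]
    return L

# Small check: L @ L.T reproduces the original matrix.
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
L = cholesky_llt(A)
print(np.allclose(L @ L.T, A))  # True
```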

Example

[Figure: a worked example of Choleski's algorithm.]

6.8 $LDL^T$ Factorization

6.8.1 Definition

If $A$ is a positive definite matrix, it can be factored as
$$A = LDL^T,$$
where $L$ is a unit lower-triangular matrix and $D$ is a diagonal matrix with positive diagonal entries:
$$
L=
\begin{pmatrix}
1 & 0 & \cdots & 0 \\
l_{21} & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
l_{n1} & l_{n2} & \cdots & 1
\end{pmatrix},\qquad
D=
\begin{pmatrix}
d_1 & & & \\
 & d_2 & & \\
 & & \ddots & \\
 & & & d_n
\end{pmatrix}
$$

We can compute the entries of $L$ and $D$ one row at a time.

6.8.2 Algorithm

[Figures: the step-by-step $LDL^T$ factorization algorithm.]
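Since the algorithm is given only as figures above, here is a minimal sketch of the standard row-by-row $LDL^T$ computation (my own reconstruction of the usual formulas, not a transcription of the figures):

```python
import numpy as np

def ldlt(A):
    """Factor a symmetric positive definite A as A = L D L^T.

    L is unit lower triangular, D is diagonal; the entries satisfy
      d_j  = a_jj - sum_{k<j} l_jk^2 * d_k
      l_ij = (a_ij - sum_{k<j} l_ik * l_jk * d_k) / d_j    for i > j.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, np.diag(d)

L, D = ldlt([[4.0, 2.0], [2.0, 5.0]])
print(L)           # [[1. , 0.], [0.5, 1.]]
print(np.diag(D))  # [4., 4.]
```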

6.9 Tri-diagonal Linear System

6.9.1 Definition

An $n\times n$ matrix $A$ is called a band matrix if integers $p$ and $q$, with $1<p,q<n$, exist having the property that $a_{ij}=0$ whenever $i+p\leq j$ or $j+q\leq i$. The bandwidth of a band matrix is defined as $w=p+q-1$.

A tridiagonal matrix is a band matrix with $p=q=2$, i.e., bandwidth $w=3$: all nonzero entries lie on the main diagonal and on the diagonals immediately above and below it.

6.9.2 LU Factorization

$$A = LU$$

For a tridiagonal matrix, the factor $L$ is lower bidiagonal (nonzero entries only on the main diagonal and the first subdiagonal) and $U$ is upper bidiagonal (nonzero entries only on the main diagonal and the first superdiagonal), so the factorization requires only $O(n)$ storage and operations.

In order to solve the problem $Ax=LUx=b$, there are two steps (see the code sketch after the list):

  1. Let $z = Ux$ and solve $Lz=b$ for $z$ by forward substitution.
  2. Solve $Ux=z$ for $x$ by backward substitution.
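For a tridiagonal system this two-step LU solve reduces to the well-known Thomas algorithm; the sketch below (the array names for the sub-, main, and super-diagonals are my own convention) assumes no zero pivots are encountered:

```python
import numpy as np

def solve_tridiagonal(sub, main, sup, d):
    """Solve a tridiagonal system via LU factorization in O(n) operations.

    sub  : sub-diagonal,    length n-1  (A[i, i-1])
    main : main diagonal,   length n    (A[i, i])
    sup  : super-diagonal,  length n-1  (A[i, i+1])
    d    : right-hand side, length n
    """
    n = len(main)
    u = np.zeros(n)         # main diagonal of U (its super-diagonal equals sup)
    lsub = np.zeros(n - 1)  # sub-diagonal of L (unit main diagonal)
    z = np.zeros(n)

    # Factorization and forward substitution Lz = d in one pass.
    u[0], z[0] = main[0], d[0]
    for i in range(1, n):
        lsub[i - 1] = sub[i - 1] / u[i - 1]
        u[i] = main[i] - lsub[i - 1] * sup[i - 1]
        z[i] = d[i] - lsub[i - 1] * z[i - 1]

    # Backward substitution Ux = z.
    x = np.zeros(n)
    x[-1] = z[-1] / u[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (z[i] - sup[i] * x[i + 1]) / u[i]
    return x

# Example: diagonal 2, off-diagonals -1; the exact solution is [1, 1, 1, 1].
print(solve_tridiagonal([-1.0] * 3, [2.0] * 4, [-1.0] * 3, [1.0, 0.0, 0.0, 1.0]))
```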

6.9.3 Remarks

  1. Band matrices are usually sparse, so instead of a full two-dimensional array we should store their entries in one-dimensional arrays.
  2. Band matrices commonly arise in numerical methods for partial differential equations.
