Matrix Analysis and Application

  1. Chap 1: Linear Equations and Matrices
    1. Linear equations
    2. Gaussian elimination
      • Pivot;
      • Triangularize;
      • Back substitution;
      • Coefficient matrix, augmented matrix, row vector & column vector;
      • The meaning of A_i* (the i-th row of A) and A_*j (the j-th column of A);
      • Three possibilities for the solution set (from the viewpoint of linear equations): 0, 1, or infinitely many solutions;
      • Computational complexity: n^3/3+...;
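The pivot → triangularize → back-substitution steps above can be sketched in a few lines of NumPy (a minimal illustration, not the in-place textbook algorithm; it assumes every pivot is nonzero, since pivoting is only discussed later):

```python
import numpy as np

def gaussian_eliminate(A, b):
    """Solve Ax = b: triangularize, then back-substitute (no pivoting)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):                    # forward elimination
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]             # multiplier for row i
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 5.0])
print(gaussian_eliminate(A, b))               # same answer as np.linalg.solve(A, b)
```

The two nested elimination loops are where the ≈ n^3/3 multiplication count comes from.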
    3. Gaussian-Jordan Method
      • Computational complexity: n^3/2+...;
    4. Roundoff error
    • Form of a floating-point number: f = ± .d1 d2 ... dt × b^n (d1 ≠ 0), with t significant digits in base b;
    • Roundoff error: arises when quantities of very different magnitudes are combined, so digits of the smaller terms are lost to the fixed precision;
    • Partial pivoting: search the positions at and BELOW the pivotal position for the coefficient of maximum magnitude, and interchange rows to make it the pivot;
    • Complete pivoting: search the positions at, BELOW, and to the RIGHT of the pivotal position for the coefficient of maximum magnitude;
    • Partial vs. complete pivoting: the difference is whether elementary column operations are used. Partial pivoting is used more often because column interchanges permute the unknowns and are awkward to handle;
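The effect of partial pivoting on roundoff can be seen on a tiny 2×2 system (a sketch assuming IEEE double precision; without pivoting the huge multiplier 10^20 swamps the other entries):

```python
import numpy as np

def solve(A, b, partial_pivot):
    """Gaussian elimination; optionally swap in the largest-magnitude pivot."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        if partial_pivot:
            p = k + np.argmax(np.abs(A[k:, k]))   # max |coefficient| at or below the pivot
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[1e-20, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])                  # exact solution is very nearly (1, 1)
print(solve(A, b, partial_pivot=False))   # x1 is destroyed by roundoff: [0. 1.]
print(solve(A, b, partial_pivot=True))    # [1. 1.]
```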
  2. The ill-conditioned system
  • The solution of an ill-conditioned system is extremely sensitive to small perturbations of the coefficients;
  • Geometric view: the two lines (hyperplanes) are almost parallel, so their intersection point moves drastically when either one is moved slightly;
  • How to notice the ill-conditioning of a linear system: essentially by trial (it is not easy to tell whether a system is ill-conditioned);
  • Two ways to handle the problem: bite the bullet and compute an accurate solution, or redesign the experiment setup to avoid producing an ill-conditioned system. The latter is empirically better; recognizing an ill-conditioned system as early as possible saves much time;
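A small NumPy illustration of the "almost parallel" picture: two lines with nearly equal slopes, where a 1e-4 change in the right-hand side moves the intersection from (1, 1) to about (0, 2):

```python
import numpy as np

# Two nearly parallel lines: x + y = 2 and x + 1.0001*y = b2.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
print(np.linalg.solve(A, np.array([2.0, 2.0001])))  # close to [1, 1]
print(np.linalg.solve(A, np.array([2.0, 2.0002])))  # close to [0, 2] after a 1e-4 change
print(np.linalg.cond(A))                            # condition number around 4e4
```

The large condition number quantifies how much a relative perturbation of the data can be amplified in the solution.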
Row echelon form
  • Notation: E;
  • Cause: linear dependence among the columns, handled by a modified Gaussian elimination;
  • The echelon form (namely the positions of the pivots) is uniquely determined by the entries of A. However, the entries of E are not uniquely determined by A.
  • Basic column: the columns in A which contain the pivotal position;
  • Rank: the number of pivots = the number of nonzero rows in E = the number of basic columns in A;
  • Reduced row echelon form: produced by the Gauss-Jordan method (each basic column becomes a unit column such as [0 0 1 0]^T), denoted EA;
  • Both the form and the entries of EA are uniquely determined by A;
  • EA can show the hidden relationships among the different columns of A;
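For example, SymPy's `rref` computes EA together with the pivotal columns, exposing a hidden column relationship (here column 3 = column 1 + column 2):

```python
from sympy import Matrix

# Column 3 equals column 1 + column 2; EA makes this relationship explicit.
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 2]])
EA, pivots = A.rref()
print(EA)      # reduced row echelon form: [[1, 0, 1], [0, 1, 1], [0, 0, 0]]
print(pivots)  # indices of the basic columns of A: (0, 1)
```

The third column of EA reads (1, 1, 0)^T, i.e. column 3 of A is 1·(column 1) + 1·(column 2).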
Consistency of linear system
  • A system is consistent if it has at least one solution. Otherwise, it is inconsistent.
  • When n (the number of unknowns) is two or three, the consistency of the system can be seen geometrically: the lines or planes share a common point.
  • If n > 3, consistency can be judged by any of the following equivalent conditions:
    • In the augmented matrix [A|b], no row of the form (0 0 ... 0 | α) with α ≠ 0 appears during row reduction;
    • In [A|b], b is a nonbasic column;
    • rank([A|b]) = rank(A);
    • b is a linear combination of the basic columns of A.
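The rank test is easy to apply numerically (a sketch using NumPy's `matrix_rank`; for badly scaled data its tolerance argument matters):

```python
import numpy as np

def is_consistent(A, b):
    """Consistent  <=>  rank([A|b]) == rank(A), i.e. b adds no new pivot."""
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A)

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                      # rank 1
print(is_consistent(A, np.array([3.0, 6.0])))   # True: b lies in the column space
print(is_consistent(A, np.array([3.0, 7.0])))   # False: the rank jumps to 2
```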
Homogeneous system
  • Homogeneous and nonhomogeneous;
  • Trivial solution;
  • A homogeneous system is always consistent (x = 0 is always a solution);
  • General solution: basic variable, free variable;
Nonhomogeneous system
  • General solution;
  • The system possesses a unique solution if and only if any of the following equivalent conditions holds:
    • rank(A) = the number of unknowns;
    • there is no free variable;
    • the associated homogeneous system has only the trivial solution;
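SymPy can exhibit the structure of the general solution: a particular solution plus the general solution of the associated homogeneous system, with one free variable per nullspace basis vector (the symbol names are illustrative):

```python
from sympy import Matrix, linsolve, symbols

x1, x2, x3 = symbols('x1 x2 x3')
A = Matrix([[1, 2, 1],
            [2, 4, 3]])
b = Matrix([3, 7])

# General solution of Ax = b: here {(2 - 2*x2, x2, 1)}, with x2 free.
print(linsolve((A, b), [x1, x2, x3]))

# Nullspace of A: one basis vector <=> one free variable.
print(A.nullspace())
```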
Chap 2: Matrix Algebra
  1. Addition
    • Addition and addition inversion;
    • Addition properties;
  2. Scalar multiplication
  3. Transpose
    • Transpose and conjugate transpose;
    • Properties;
    • Symmetry;
      • Symmetric matrix, skew-symmetric matrix, hermitian matrix, skew-hermitian matrix;
  4. Multiplication
    • Linear function: f(x1+x2)=f(x1)+f(x2), f(kx)=kf(x) <=> f(kx+y)=kf(x)+f(y);
    • Affine function: translation of linear function;
    • Matrix multiplication;
    • Properties: the distributive laws (left and right) and the associative law hold, but the commutative law does not;
    • Trace
      • Definition: the sum of diagonal entries;
      • Properties: trace(AB) = trace(BA); more generally the trace is invariant under cyclic permutations, so trace(ABC) = trace(BCA) = trace(CAB), but in general ≠ trace(ACB);
    • Meaning of rows and columns in a product
      • [AB]i* = linear combination of row vectors in B based on i-th row vector in A;
      • [AB]*j = linear combination of column vectors in A based on j-th column vector in B;
      • column vector × row vector = a matrix of rank 1 (outer product);
      • row vector * column vector <=> inner product;
    • Identity matrix;
    • Power: nonnegative;
    • Block matrix multiplication;
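A quick numeric check of the trace and outer-product facts above (small integer matrices chosen so the traces are exact):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 3]])

# The trace is invariant under cyclic permutations only:
print(np.trace(A @ B @ C), np.trace(B @ C @ A), np.trace(C @ A @ B))  # 13 13 13
print(np.trace(A @ C @ B))                                            # 12: ACB is not a cyclic shift

# A column vector times a row vector (outer product) has rank 1:
u = np.array([[1], [2], [3]])
v = np.array([[4, 5, 6]])
print(np.linalg.matrix_rank(u @ v))                                   # 1
```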
  5. Inversion
    • Only square matrices have matrix inversion;
    • AB = I and BA = I (when only square matrices are involved, either one of the two equations suffices);
    • Nonsingular matrix and singular matrix;
    • When an inverse exists, it is unique (below, A' denotes the inverse of A). That means:
      • If A is nonsingular, the equation Ax=b has the unique solution x=A'b;
      • If A is nonsingular, rank(A) = n (full rank);
      • If A is nonsingular, the unknown x has no free variables;
      • If A is nonsingular, the associated homogeneous system has only the trivial solution;
    • Existence of matrix inversion: A' exists <=> rank(A)=n <=> A can be transformed to I via Gauss-Jordan Method <=> Ax=0 only has a trivial solution;
    • Computing an inversion: transforming [A|I] to [I|A'] via Gauss-Jordan Method;
    • Complexity(x=A'b) > Complexity(Gaussian Elimination):
      • C(GE) ≈ n^3/3;
      • C(x=A'b) = C(computing A') + C(A'b) ≈ n^3 + n^2 ≈ n^3, roughly three times the cost of Gaussian elimination;
    • Properties:
      • (A')' = A;
      • If A and B are nonsingular, then AB is also nonsingular;
      • (AB)' = B'A';
      • (A')T = (AT)' as well as (A')* = (A*)';
    • Inversion of sum and sensitivity:
      • There is no simple direct relationship between (A+B)' and A', B';
      • Sherman-Morrison formula: the inverse of a rank-1 perturbation, (A + cd^T)' = A' − A'cd^T A' / (1 + d^T A'c);
      • Neumann Series:
        • If lim_{n→∞} A^n = 0, then (I − A) is nonsingular and (I − A)' = I + A + A^2 + ... = Σ_i A^i;
        • To find (A+B)', write A + B = A(I − (−A'B)), so (A+B)' = (I − (−A'B))'A', and the Neumann series applies to (I − (−A'B))';
        • (A+B)' ≈ A' − A'BA': a perturbation B on A changes the inverse by about −A'BA'. When the entries of A' are large, a small perturbation changes the result a lot;
      • Condition number;
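The Sherman-Morrison formula and the first-order sensitivity estimate can both be checked numerically (a sketch; the matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
Ainv = np.linalg.inv(A)

# Sherman-Morrison: inverse of the rank-1 perturbation A + c d^T.
c = np.array([[1.0], [0.0]])
d = np.array([[0.0], [1.0]])
sm = Ainv - (Ainv @ c @ d.T @ Ainv) / (1.0 + d.T @ Ainv @ c)
print(np.allclose(sm, np.linalg.inv(A + c @ d.T)))   # True

# First-order sensitivity: (A + B)^(-1) ≈ A^(-1) - A^(-1) B A^(-1) for small B.
B = 1e-6 * np.array([[1.0, 2.0], [3.0, 4.0]])
approx = Ainv - Ainv @ B @ Ainv
print(np.allclose(approx, np.linalg.inv(A + B)))     # True
```

The neglected terms in the approximation are of order ||B||^2, so the estimate is excellent for small perturbations.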
  6. Elementary Matrices and Equivalence
  • Elementary matrix: I-uv^T, u and v are column vectors;
    • The inversion of an elementary matrix is also an elementary matrix;
    • Elementary matrices associated with three types of elementary row (or column) operation;
    • A is a nonsingular matrix <=> A is the product of elementary matrices of Type I, II and III row (or column) operation;
  • Equivalence: A~B <=> PAQ=B for nonsingular P and Q;
    • Row equivalence and column equivalence;
    • Rank normal form: if A is an m×n matrix with rank(A) = r, then A ~ Nr = [Ir 0; 0 0] (block matrix with Ir in the top-left corner and zeros elsewhere); Nr is called the rank normal form of A;
    • A~B <=> rank(A)=rank(B);
    • Corollary: Multiplication by nonsingular matrices cannot change rank;
      • rank(A^T)=rank(A);
      • rank(A*)=rank(A);
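A small check that the inverse of an elementary matrix I − uv^T is elementary of the same form, namely I + uv^T/(1 − v^T u) when v^T u ≠ 1 (the vectors are arbitrary examples):

```python
import numpy as np

u = np.array([[1.0], [2.0], [0.0]])
v = np.array([[3.0], [0.0], [1.0]])
E = np.eye(3) - u @ v.T                   # an elementary matrix I - u v^T

s = (v.T @ u).item()                      # here s = v^T u = 3, so E is nonsingular
Einv = np.eye(3) + (u @ v.T) / (1.0 - s)  # elementary matrix of the same form
print(np.allclose(E @ Einv, np.eye(3)))   # True
```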
LU factorization
  • Origin: Gaussian Elimination;
  • LU factorization: A=LU, L: lower triangular matrix, U: upper triangular matrix;
  • Observations on the factors L and U:
    • L:
      • a lower triangular matrix;
      • 1's on the diagonal: each elimination step adds a scalar multiple of one row to another row, so L is unit lower triangular;
      • the entries below the diagonal record the multipliers used to eliminate;
    • U:
      • an upper triangular matrix;
      • the result of the elimination on A;
  • *L and U are unique (for a nonsingular A that admits an LU factorization);
    • Proof: A = L1U1 = L2U2 ⇒ L2'L1 = U2U1'. The left side is unit lower triangular and the right side is upper triangular; since they are equal, both must be I, so L1 = L2 and U1 = U2.
  • *If an interchange of two rows becomes necessary during LU factorization, the triangular structure is destroyed (the PA=LU form below handles this);
  • Advantages of LU factorization:
    • If only one system Ax=b need to be solved, the Gaussian Elimination is enough;
    • If more than one system with the same coefficient matrix needs to be solved, the LU factorization is better;
    • Once the LU factors of A are known, any other system Ax=b can be solved in n^2 multiplications and n^2-n additions;
  • Existence of LU:
    • No zero pivot emerges during row reduction to upper triangular form using only Type III operations;
    • Another characterization via principal submatrices: every leading principal submatrix of A is nonsingular;
  • PLU factorization: PA=LU;
  • LDU factorization: A=LDU, D=diag(u11, u22, ..., unn);
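The reuse advantage can be sketched with SciPy (assuming SciPy is available): factor once at O(n^3) cost, then solve each new right-hand side in about n^2 operations:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
lu, piv = lu_factor(A)   # one factorization (with partial pivoting: PA = LU)

# Reuse the stored factors for many right-hand sides.
for b in (np.array([1.0, 2.0, 5.0]), np.array([0.0, 1.0, 0.0])):
    x = lu_solve((lu, piv), b)
    print(np.allclose(A @ x, b))   # True
```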
Vector Spaces
  1. Spaces and subspaces
    • Vector space;
    • Scalar field F: R for real numbers and C for complex numbers;

Reposted from: https://www.cnblogs.com/hizhaolei/p/11521411.html
