Linear Algebra Notes (NetEase Open Course)

Linear Algebra Handnote(1)

  • If L is lower triangular with 1's on the diagonal, so is L^{-1}

  • Elimination = Factorization: A = LU
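
    A quick numerical illustration of A = LU (not from the course notes; the matrix below is arbitrary and SciPy is assumed to be available):

    ```python
    import numpy as np
    from scipy.linalg import lu

    # An example 3x3 matrix (chosen arbitrarily for illustration)
    A = np.array([[ 2.,  1., 1.],
                  [ 4., -6., 0.],
                  [-2.,  7., 2.]])

    # scipy.linalg.lu returns P, L, U with A = P @ L @ U
    # (P handles any row exchanges; with no exchanges P is the identity)
    P, L, U = lu(A)

    print(L)                             # lower triangular, 1's on the diagonal
    print(U)                             # upper triangular (pivots on its diagonal)
    print(np.allclose(P @ L @ U, A))     # True: the factorization reproduces A
    ```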

  • A^T is the matrix that makes these two inner products equal for every x and y :

    (Ax)^T y = x^T (A^T y)

    Inner product of Ax with y = Inner product of x with A^T y

  • DEFINITION: The space R^n consists of all column vectors v with n components

  • DEFINITION: A subspace of a vector space is a set of vectors (including 0) that satisfies two requirements: (1) v+w is in the subspace, (2) cv is in the subspace

  • The column space consists of all linear combinations of the columns. The combinations are all possible vectors Ax . They fill the column space C(A)

    The system Ax=b is solvable if and only if b is in the column space of A

  • The nullspace of A consists of all solutions to Ax = 0 . These vectors x are in R^n . The nullspace containing all solutions of Ax = 0 is denoted by N(A)

  • the nullspace is a subspace of R^n , the column space is a subspace of R^m
  • the nullspace consists of all combinations of the special solutions
  • Nullspace (plane) perpendicular to row space (line)

  • Ax = 0 has r pivots and n − r free variables: n columns minus r pivot columns. The nullspace matrix N contains the n − r special solutions as its columns. Then AN = 0

  • Ax = 0 has r independent equations so it has n − r independent solutions.

  • x_particular : the particular solution solves A x_p = b

  • x_nullspace : the n − r special solutions solve A x_n = 0

  • Complete solution: one x_p , many x_n : x = x_p + x_n (see the sketch below)
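
    A minimal sketch of the complete solution x = x_p + x_n with NumPy/SciPy (assumed available); the rank-deficient system below is made up for illustration:

    ```python
    import numpy as np
    from scipy.linalg import lstsq, null_space

    # 2 equations, 3 unknowns, rank r = 2, so n - r = 1 special solution
    A = np.array([[1., 2., 3.],
                  [2., 4., 8.]])
    b = np.array([6., 16.])

    # One particular solution x_p (lstsq returns an exact solution here)
    x_p, *_ = lstsq(A, b)

    # The special solutions span N(A): a single column since n - r = 1
    N = null_space(A)

    # Any x = x_p + N @ c solves A x = b
    c = np.array([2.5])                  # arbitrary coefficient
    x = x_p + N @ c
    print(np.allclose(A @ x, b))         # True
    ```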

  • The four possibilities for linear equations depend on the rank r :

    • r = m and r = n : Square and invertible, Ax = b has 1 solution
    • r = m and r < n : Short and wide, Ax = b has ∞ solutions
    • r < m and r = n : Tall and thin, Ax = b has 0 or 1 solution
    • r < m and r < n : Not full rank, Ax = b has 0 or ∞ solutions
  • Independent vectors (no extra vectors)

  • Spanning a space (enough vectors to produce the rest)
  • Basis for a space (not too many or too few)
  • Dimension of a space (the number of vectors in a basis)

  • Any set of n vectors in R^m must be linearly dependent if n > m

  • The columns span the column space. The rows span the row space

    • The column space / row space of a matrix is the subspace of R^m / R^n spanned by the columns / rows.
  • A basis for a vector space is a sequence of vectors with two properties: they are linearly independent and they span the space.

    • The basis is not unique. But the combination that produces the vector is unique.
    • The columns of an n×n invertible matrix are a basis for R^n .
    • The pivot columns of A are a basis for its column space.
  • DEFINITION: The dimension of a space is the number of vectors in every basis.

The space Z contains only the zero vector. The dimension of this space is zero. The empty set (containing no vectors) is a basis for Z. We can never allow the zero vector into a basis, because then linear independence is lost.
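
A small SymPy sketch (SymPy assumed available; the matrix is arbitrary) showing that the pivot columns found by rref form a basis for the column space:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 4],
               [1, 2, 3, 5],
               [2, 4, 5, 9]])

R, pivot_cols = A.rref()        # reduced row echelon form and the pivot column indices
print(pivot_cols)               # (0, 2): columns 0 and 2 of A are a basis for C(A)
print(A.rank())                 # 2 = dimension of the column space (and of the row space)

basis = [A.col(j) for j in pivot_cols]
```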

Four Fundamental Subspaces
1. The row space C(A^T) , a subspace of R^n
2. The column space C(A) , a subspace of R^m
3. The nullspace N(A) , a subspace of R^n
4. The left nullspace N(A^T) , a subspace of R^m

  1. A has the same row space as R . Same dimension r and same basis.
  2. The column space of A has dimension r . The number of independent columns equals the number of independent rows.
  3. A has the same nullspace as R . Same dimension n − r and same basis.
  4. The left nullspace of A (the nullspace of A^T) has dimension m − r .

Fundamental Theorem of Linear Algebra, Part 1

  • The column space and row space both have dimension r .
  • The nullspaces have dimensions n − r and m − r .

  • Every rank one matrix has the special form A = u v^T = column × row.

  • The nullspace N(A) and the row space C(A^T) are orthogonal subspaces of R^n .

  • DEFINITION: The orthogonal complement of a subspace V contains every vector that is perpendicular to V .

Fundamental Theorem of Linear Algebra, Part 2
* N(A) is the orthogonal complement of the row space C(A^T) (in R^n )
* N(A^T) is the orthogonal complement of the column space C(A) (in R^m )

Projection Onto a Line
* The projection matrix onto the line through a : P = a a^T / (a^T a)
* The projection: p = x̄ a = (a^T b / a^T a) a
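
A minimal numeric sketch of these two line-projection formulas; the vectors a and b are arbitrary and NumPy is assumed:

```python
import numpy as np

a = np.array([1., 2., 2.])
b = np.array([1., 1., 1.])

x_bar = (a @ b) / (a @ a)        # the scalar a^T b / a^T a
p = x_bar * a                    # projection of b onto the line through a

P = np.outer(a, a) / (a @ a)     # projection matrix P = a a^T / (a^T a)
print(np.allclose(P @ b, p))     # True
print(np.allclose(P @ P, P))     # True: projecting twice changes nothing
```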

Projection Onto a Subspace

Problem: Find the combination p = x̄_1 a_1 + … + x̄_n a_n closest to a given vector b . The n vectors a_1, …, a_n in R^m span the column space of A . Thus the problem is to find the particular combination p = A x̄ (the projection) that is closest to b . When n = 1 , the best choice is x̄ = a^T b / (a^T a)

  • A^T (b − A x̄) = 0 , or A^T A x̄ = A^T b

  • The symmetric matrix A^T A is n×n . It is invertible if the a 's are independent.

  • The solution is x̄ = (A^T A)^{-1} A^T b
  • The projection of b onto the subspace: p = A x̄ = A (A^T A)^{-1} A^T b
  • The projection matrix: P = A (A^T A)^{-1} A^T

*A^T A is invertible if and only if A has linearly independent columns*
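
A sketch of projection onto a subspace following the formulas above (NumPy assumed; the matrix is an arbitrary full-column-rank example):

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])          # independent columns, so A^T A is invertible
b = np.array([6., 0., 0.])

ATA = A.T @ A
x_bar = np.linalg.solve(ATA, A.T @ b)      # solve A^T A x̄ = A^T b
p = A @ x_bar                              # projection of b onto C(A)
P = A @ np.linalg.solve(ATA, A.T)          # projection matrix A (A^T A)^{-1} A^T

print(np.allclose(P @ b, p))               # True
e = b - p                                  # error is perpendicular to the column space
print(np.allclose(A.T @ e, 0))             # True
```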

Least Squares Approximations

  • When Ax = b has no solution, multiply by A^T and solve A^T A x̄ = A^T b

  • The least squares solution x̄ minimizes E = ||Ax − b||² . This is the sum of squares of the errors in the m equations ( m > n )

  • The best x̄ comes from the normal equations A^T A x̄ = A^T b (see the sketch below)
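
A least-squares sketch fitting a line C + Dt to three points (toy data, not from the notes; NumPy assumed):

```python
import numpy as np

# Fit b ≈ C + D t at times t = 0, 1, 2 (toy data)
t = np.array([0., 1., 2.])
b = np.array([6., 0., 0.])
A = np.column_stack([np.ones_like(t), t])     # columns: [1, t]

# Normal equations A^T A x̄ = A^T b
x_bar = np.linalg.solve(A.T @ A, A.T @ b)

# np.linalg.lstsq minimizes ||Ax - b||^2 directly and should agree
x_lstsq, residual, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_bar, x_lstsq))            # True
```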

Orthogonal Bases and Gram-Schmidt

  • orthonormal vectors

    • A matrix with orthonormal columns is assigned the special letter Q . The matrix Q is easy to work with because Q^T Q = I
    • When Q is square, Q^T Q = I means that Q^T = Q^{-1} : transpose = inverse.
    • If the columns are only orthogonal (not unit vectors), the dot products give a diagonal matrix (not the identity matrix)
  • Every permutation matrix is an orthogonal matrix.

  • If Q has orthonormal columns ( Q^T Q = I ), it leaves lengths unchanged: ||Qx|| = ||x||

  • Orthogonal is good

  • Use Gram-Schmidt for the Factorization A=QR

$$\begin{bmatrix} a & b & c \end{bmatrix} = \begin{bmatrix} q_1 & q_2 & q_3 \end{bmatrix} \begin{bmatrix} q_1^T a & q_1^T b & q_1^T c \\ & q_2^T b & q_2^T c \\ & & q_3^T c \end{bmatrix}$$

  • (Gram-Schmidt) From independent vectors a_1, …, a_n , Gram-Schmidt constructs orthonormal vectors q_1, …, q_n . The matrices with these columns satisfy A = QR . Then R = Q^T A is upper triangular because later q 's are orthogonal to earlier a 's.

  • Least squares: R^T R x̄ = R^T Q^T b , or R x̄ = Q^T b , or x̄ = R^{-1} Q^T b
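
A classical Gram-Schmidt sketch producing A = QR (the input matrix is arbitrary; in practice np.linalg.qr would be the usual tool):

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt: the columns of A must be independent."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]     # component along an earlier q_i
            v -= R[i, j] * Q[:, i]          # subtract it off
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]               # normalize
    return Q, R

A = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [0., 1., 1.]])
Q, R = gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(3)))      # orthonormal columns
print(np.allclose(Q @ R, A))                # A = QR, with R upper triangular
```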

Determinants

  • The determinant is zero when the matrix has no inverse
  • The product of the pivots is the determinant
  • The determinant changes sign when two rows (or two columns) are exchanged
  • Determinants give A^{-1} and A^{-1}b (this formula is called Cramer's Rule)
  • When the edges of a box are the rows of A , the volume is |det A|
  • For n special numbers λ , called eigenvalues, the determinant of A − λI is zero.

The properties of the determinant

  1. The determinant of the n×n identity matrix is 1.
  2. The determinant changes sign when two rows are exchanged
  3. The determinant is a linear function of each row separately (all other rows stay fixed!)
  4. If two rows of A are equal, then detA=0
  5. Subtracting a multiple of one row from another row leaves det A unchanged.
    • det [[a, b], [c − la, d − lb]] = ad − bc
  6. A matrix with a row of zeros has detA=0
  7. If A is triangular then det A = a_11 a_22 ⋯ a_nn = product of the diagonal entries
  8. If A is singular then det A = 0 . If A is invertible then det A ≠ 0
    • Elimination goes from A to U .
    • det A = ± det U = ± (product of the pivots)
  9. The determinant of AB is det A times det B : det(AB) = (det A)(det B)
  10. The transpose A^T has the same determinant as A

Every rule for the rows can apply to the columns.
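
A quick numerical check of rules 2, 9, and 10 on random matrices (NumPy assumed, matrices arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Rule 9: det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))

# Rule 10: det(A^T) = det(A)
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))

# Rule 2: exchanging two rows flips the sign
A_swapped = A[[1, 0, 2, 3], :]
print(np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A)))
```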

Cramer’s Rule

  • If det A is not zero, Ax = b is solved by determinants:
    • x_1 = det B_1 / det A , x_2 = det B_2 / det A , … , x_n = det B_n / det A
    • The matrix B_j has the j-th column of A replaced by the vector b
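
Cramer's rule written out as code, a sketch only (np.linalg.solve is what one would actually use; the system is arbitrary):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule (assumes det(A) != 0)."""
    n = A.shape[0]
    det_A = np.linalg.det(A)
    x = np.empty(n)
    for j in range(n):
        B_j = A.copy()
        B_j[:, j] = b               # replace the j-th column of A by b
        x[j] = np.linalg.det(B_j) / det_A
    return x

A = np.array([[2., 1.], [1., 3.]])
b = np.array([3., 5.])
print(np.allclose(cramer(A, b), np.linalg.solve(A, b)))   # True
```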

Cross Product

  • ||u × v|| = ||u|| ||v|| |sin θ|
  • |u · v| = ||u|| ||v|| |cos θ|

  • The length of u×v equals the area of the parallelogram with sides u and v

  • It points by the right hand rule (along your right thumb when the fingers curl from u to v )
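
A numeric check of the cross-product facts on arbitrary vectors (NumPy assumed):

```python
import numpy as np

u = np.array([1., 2., 0.])
v = np.array([3., 1., 0.])

w = np.cross(u, v)
area = np.linalg.norm(w)                      # area of the parallelogram with sides u, v

# Compare with ||u|| ||v|| |sin θ|
cos_t = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
sin_t = np.sqrt(1 - cos_t**2)
print(np.isclose(area, np.linalg.norm(u) * np.linalg.norm(v) * sin_t))   # True

# u × v is perpendicular to both u and v
print(np.isclose(w @ u, 0), np.isclose(w @ v, 0))
```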

Eigenvalues and Eigenvectors

  • The basic equation is Ax = λx . The number λ is an eigenvalue of A

    • When A is squared, the eigenvectors stay the same. The eigenvalues are squared.
  • The projection matrix has eigenvalues λ=1 and λ=0

    • P is singular, so λ=0 is an eigenvalue
    • Each column of P adds to 1, so λ=1 is an eigenvalue
    • P is symmetric, so its eigenvectors are perpendicular
  • Permutations have all |λ|=1
  • The reflection matrix has eigenvalues 1 and -1

  • Solve the eigenvalue problem for an n×n matrix

    • Compute the determinant of A − λI . It is a polynomial in λ of degree n
    • Find the roots of this polynomial
    • For each eigenvalue λ , solve (A − λI)x = 0 to find an eigenvector x
  • Bad news: elimination does not preserve the λ ’s

  • Good news: the product of the eigenvalues equals the determinant, and the sum of the eigenvalues equals the sum of the diagonal entries (the trace)
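
A sketch checking these eigenvalue facts with NumPy on an arbitrary 2×2 matrix:

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

lam, X = np.linalg.eig(A)            # eigenvalues and eigenvectors (columns of X)

# A x = λ x for each eigenpair
for i in range(2):
    print(np.allclose(A @ X[:, i], lam[i] * X[:, i]))

# product of eigenvalues = determinant, sum of eigenvalues = trace
print(np.isclose(np.prod(lam), np.linalg.det(A)))
print(np.isclose(np.sum(lam), np.trace(A)))

# Squaring A squares the eigenvalues; the eigenvectors stay the same
lam2, X2 = np.linalg.eig(A @ A)
print(np.allclose(np.sort(lam2), np.sort(lam**2)))
```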

Diagonalizing a Matrix

  • Suppose the n×n matrix A has n linearly independent eigenvectors x_1, …, x_n . Put them into the columns of an eigenvector matrix S . Then S^{-1} A S is the eigenvalue matrix Λ :

    • S^{-1} A S = Λ = diag(λ_1, …, λ_n)
  • There is no connection between invertibility and diagonalizability:

    • Invertibility is concerned with the eigenvalues ( λ = 0 or λ ≠ 0 )
    • Diagonalizability is concerned with the eigenvectors (too few or enough for S )
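
Diagonalization in code: put the eigenvectors into S and check S^{-1} A S = Λ (same arbitrary example matrix as above; NumPy assumed):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

lam, S = np.linalg.eig(A)                 # S: eigenvector matrix
Lambda = np.linalg.inv(S) @ A @ S         # S^{-1} A S = Λ
print(np.allclose(Lambda, np.diag(lam)))  # True: diagonal with the eigenvalues

# Powers are easy through Λ: A^k = S Λ^k S^{-1}
k = 5
print(np.allclose(np.linalg.matrix_power(A, k),
                  S @ np.diag(lam**k) @ np.linalg.inv(S)))
```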

Applications to differential equations

  • One equation du/dt = λu has the solution u(t) = C e^{λt}

  • n equations du/dt = Au starting from the vector u(0) at t = 0

  • Solve linear constant-coefficient equations by exponentials e^{λt} x , when Ax = λx
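
A sketch of solving du/dt = Au by eigenvalues: expand u(0) in eigenvectors, let each coefficient grow like e^{λt}, and compare with the matrix exponential (matrix and initial vector are illustrative; NumPy/SciPy assumed):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1.],
              [1., 0.]])            # example with eigenvalues +1 and -1
u0 = np.array([2., 0.])             # initial condition u(0)

lam, X = np.linalg.eig(A)
c = np.linalg.solve(X, u0)          # expand u(0) = c_1 x_1 + c_2 x_2

def u(t):
    # u(t) = c_1 e^{λ_1 t} x_1 + c_2 e^{λ_2 t} x_2
    return X @ (c * np.exp(lam * t))

# Compare with the matrix exponential e^{At} u(0) at t = 1
print(np.allclose(u(1.0), expm(1.0 * A) @ u0))   # True
```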

Symmetric Matrices

  • A symmetric matrix has only real eigenvalues.
  • The eigenvectors can be chosen orthonormal.

  • (Spectral Theorem) Every symmetric matrix has the factorization A = QΛQ^T with real eigenvalues in Λ and orthonormal eigenvectors in S = Q :

    • Symmetric diagonalization: A = QΛQ^{-1} = QΛQ^T with Q^{-1} = Q^T
  • (Orthogonal Eigenvectors) Eigenvectors of a real symmetric matrix (when they correspond to different λ ’s) are always perpendicular.

  • product of pivots = determinant = product of eigenvalues

  • Eigenvalues vs. Pivots

  • For symmetric matrices the pivots and the eigenvalues have the same signs:

    • The number of positive eigenvalues of A=AT equals the number of positive pivots.
  • All symmetric matrices are diagonalizable
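
A numeric check of the spectral theorem with np.linalg.eigh (the symmetric matrix is arbitrary):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])                     # symmetric

lam, Q = np.linalg.eigh(A)                       # real eigenvalues, orthonormal eigenvectors
print(lam)                                       # all real
print(np.allclose(Q.T @ Q, np.eye(3)))           # True: Q is orthogonal
print(np.allclose(Q @ np.diag(lam) @ Q.T, A))    # True: A = Q Λ Q^T
```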

Positive Definite Matrices

  • Symmetric matrices that have positive eigenvalues

  • 2 × 2 matrices

    • For A = [[a, b], [b, c]], the eigenvalues are positive if and only if a > 0 and ac − b² > 0 .
  • x^T A x is positive for all nonzero vectors x

    • If A and B are symmetric positive definite, so is A+B
  • When a symmetric matrix has one of these five properties, it has them all:

    • All n pivots are positive
    • All n upper left determinants are positive
    • All n eigenvalues are positive
    • x^T A x is positive except at x = 0 . This is the energy-based definition
    • A equals R^T R for a matrix R with independent columns
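
The tests above checked in code for one small positive definite example (matrix chosen for illustration; NumPy assumed):

```python
import numpy as np

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])

# All eigenvalues positive
print(np.all(np.linalg.eigvalsh(A) > 0))                          # True

# All upper-left determinants positive
print(all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, 4)))     # True

# Cholesky succeeds exactly when A is positive definite: A = R^T R
R = np.linalg.cholesky(A).T          # cholesky returns lower L; take R = L^T
print(np.allclose(R.T @ R, A))       # True

# Energy x^T A x > 0 for a random nonzero x
rng = np.random.default_rng(1)
x = rng.standard_normal(3)
print(x @ A @ x > 0)                 # True
```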

Positive Semidefinite Matrices

Similar Matrices

  • DEFINITION: Let M be any invertible matrix. Then B = M^{-1} A M is similar to A

  • (No change in λ 's) Similar matrices A and M^{-1} A M have the same eigenvalues. If x is an eigenvector of A , then M^{-1} x is an eigenvector of B = M^{-1} A M . But two matrices can have the same repeated λ and fail to be similar.

Jordan Form

  • What is “Jordan Form”?
    • For every A , we want to choose M so that M^{-1} A M is as nearly diagonal as possible
  • J^T is similar to J ; the matrix M that produces the similarity happens to be the reverse identity

  • (Jordan form) If A has s independent eigenvectors, it is similar to a matrix J that has s Jordan blocks on its diagonal: Some matrix M puts A into Jordan form.

    • Jordan block: the eigenvalue is on the diagonal with 1 's just above it. Each block in J has one eigenvalue λ_i , one eigenvector, and 1's above the diagonal

$$M^{-1} A M = \begin{bmatrix} J_1 & & \\ & \ddots & \\ & & J_s \end{bmatrix} = J$$

$$J_i = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix}$$

  • A is similar to B if they share the same Jordan form J , and not otherwise
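
SymPy can compute the Jordan form directly; a sketch on a made-up matrix with a repeated eigenvalue and a single eigenvector:

```python
import sympy as sp

A = sp.Matrix([[5, 1],
               [0, 5]])              # repeated eigenvalue 5, only one independent eigenvector

M, J = A.jordan_form()               # A = M J M^{-1}
print(J)                             # Matrix([[5, 1], [0, 5]]): one 2x2 Jordan block
print(sp.simplify(M * J * M.inv() - A))   # zero matrix
```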

Singular Value Decomposition (SVD)

  • Two sets of singular vectors, u 's and v 's. The u 's are eigenvectors of AA^T and the v 's are eigenvectors of A^T A .

  • The singular vectors v_1, …, v_r are in the row space of A . The outputs u_1, …, u_r are in the column space of A . The singular values σ_1, …, σ_r are all positive numbers. The equations A v_i = σ_i u_i tell us:

$$A \begin{bmatrix} v_1 & \cdots & v_r \end{bmatrix} = \begin{bmatrix} u_1 & \cdots & u_r \end{bmatrix} \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_r \end{bmatrix}$$

  • We need n − r more v 's and m − r more u 's, from the nullspace N(A) and the left nullspace N(A^T) . They can be orthonormal bases for those two nullspaces. Include all the v 's and u 's in V and U , so these matrices become square.

$$A \begin{bmatrix} v_1 & \cdots & v_r & \cdots & v_n \end{bmatrix} = \begin{bmatrix} u_1 & \cdots & u_r & \cdots & u_m \end{bmatrix} \begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_r & \\ & & & \end{bmatrix}$$

V is now a square orthogonal matrix, with V^{-1} = V^T . So AV = UΣ can become A = UΣV^T . This is the Singular Value Decomposition:

$$A = U \Sigma V^T = u_1 \sigma_1 v_1^T + \cdots + u_r \sigma_r v_r^T$$

The orthonormal columns of U and V are eigenvectors of AA^T and A^T A .
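
Finally, a NumPy sketch of the SVD on an arbitrary rank-one matrix, checking A = UΣV^T and the rank-one expansion:

```python
import numpy as np

A = np.array([[3., 0., 2.],
              [3., 0., 2.]])            # rank 1 example

U, s, Vt = np.linalg.svd(A)             # A = U Σ V^T; s holds σ_1 ≥ σ_2 ≥ ... ≥ 0

# Rebuild A from the rank-one pieces u_i σ_i v_i^T
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
print(np.allclose(A_rebuilt, A))        # True

# Rows of Vt are eigenvectors of A^T A (eigenvalue σ_i^2)
print(np.allclose(A.T @ A @ Vt[0, :], s[0]**2 * Vt[0, :]))   # True
```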
