If $L$ is lower triangular with 1's on the diagonal, so is $L^{-1}$
Elimination = Factorization: $A = LU$
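A minimal numpy/scipy sketch of this factorization (the matrix `A` here is just an illustrative example; `scipy.linalg.lu` uses partial pivoting, so it returns $A = PLU$ with a permutation $P$):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2., 1., 1.],
              [4., -6., 0.],
              [-2., 7., 2.]])

P, L, U = lu(A)                          # A = P @ L @ U (partial pivoting)
print(np.allclose(A, P @ L @ U))         # True
print(np.diag(L))                        # 1's on the diagonal of L
Linv = np.linalg.inv(L)
print(np.allclose(Linv, np.tril(Linv)))  # L^{-1} is lower triangular again
print(np.diag(Linv))                     # with 1's on its diagonal
```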
$A^T$ is the matrix that makes these two inner products equal for every $x$ and $y$: $(Ax)^Ty = x^T(A^Ty)$
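A quick numerical check of this identity, with a randomly chosen $A$, $x$, $y$ (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
y = rng.standard_normal(3)

# (Ax) . y equals x . (A^T y): the defining property of the transpose
print(np.isclose((A @ x) @ y, x @ (A.T @ y)))   # True
```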
DEFINITION: The space $\mathbb{R}^n$ consists of all column vectors $v$ with $n$ components
DEFINITION: A subspace of a vector space is a set of vectors (including 0) that satisfies two requirements: (1) v+w is in the subspace, (2) cv is in the subspace
The column space consists of all linear combinations of the columns. The combinations are all possible vectors $Ax$. They fill the column space $C(A)$
The system Ax=b is solvable if and only if b is in the column space of A
The nullspace of $A$ consists of all solutions to $Ax=0$. These vectors $x$ are in $\mathbb{R}^n$. The nullspace containing all solutions of $Ax=0$ is denoted by $N(A)$
- the nullspace is a subspace of $\mathbb{R}^n$, the column space is a subspace of $\mathbb{R}^m$
- the nullspace consists of all combinations of the special solutions
Nullspace (plane) perpendicular to row space (line)
$Ax=0$ has $r$ pivots and $n-r$ free variables: $n$ columns minus $r$ pivot columns. The nullspace matrix $N$ contains the $n-r$ special solutions as its columns. Then $AN=0$
$Ax=0$ has $r$ independent equations, so it has $n-r$ independent solutions.
$x_{\text{particular}}$: the particular solution solves $Ax_p = b$
$x_{\text{nullspace}}$: the $n-r$ special solutions solve $Ax_n = 0$
Complete solution: one $x_p$, many $x_n$: $x = x_p + x_n$
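A sketch of the complete solution $x = x_p + x_n$ for a small system (the matrix and right-hand side below are made up so that $b$ lies in the column space): a particular solution from `np.linalg.lstsq`, plus any combination of nullspace vectors obtained from the SVD:

```python
import numpy as np

A = np.array([[1., 2., 2., 2.],
              [2., 4., 6., 8.],
              [3., 6., 8., 10.]])        # rank r = 2, so n - r = 2 free variables
b = np.array([1., 5., 6.])               # chosen to lie in the column space

# particular solution: any xp with A @ xp = b (lstsq returns the minimum-norm one)
xp, *_ = np.linalg.lstsq(A, b, rcond=None)

# nullspace basis: right singular vectors belonging to (near-)zero singular values
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
N = Vt[r:].T                             # n - r columns, A @ N = 0

c = np.array([2.0, -1.0])                # any coefficients for the special solutions
x = xp + N @ c                           # complete solution x = xp + xn
print(np.allclose(A @ x, b))             # True for every choice of c
```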
The four possibilities for linear equations depend on the rank $r$: $r=m=n$ (1 solution), $r=m<n$ (infinitely many solutions), $r=n<m$ (0 or 1 solution), $r<m$ and $r<n$ (0 or infinitely many solutions)
Independent vectors (no extra vectors)
Dimension of a space (the number of vectors in a basis)
Any set of $n$ vectors in $\mathbb{R}^m$ must be linearly dependent if $n>m$
The columns span the column space. The rows span the row space
A basis for a vector space is a sequence of vectors with two properties: the vectors are linearly independent and they span the space.
DEFINITION: The dimension of a space is the number of vectors in every basis.
The space Z contains only the zero vector. The dimension of this space is zero. The empty set (containing no vectors) is a basis for Z. We can never allow the zero vector into a basis, because then linear independence is lost.
Four Fundamental Subspaces
1. The row space $C(A^T)$, a subspace of $\mathbb{R}^n$
2. The column space $C(A)$, a subspace of $\mathbb{R}^m$
3. The nullspace $N(A)$, a subspace of $\mathbb{R}^n$
4. The left nullspace $N(A^T)$, a subspace of $\mathbb{R}^m$
Fundamental Theorem of Linear Algebra, Part 1
The row space and column space both have dimension $r$. The nullspaces have dimensions $n-r$ and $m-r$.
Every rank one matrix has the special form $A = uv^T$ = column × row.
The nullspace $N(A)$ and the row space $C(A^T)$ are orthogonal subspaces of $\mathbb{R}^n$.
DEFINITION: The orthogonal complement of a subspace V contains every vector that is perpendicular to V .
Fundamental Theorem of Linear Algebra, Part 2
* $N(A)$ is the orthogonal complement of the row space $C(A^T)$ (in $\mathbb{R}^n$)
* $N(A^T)$ is the orthogonal complement of the column space $C(A)$ (in $\mathbb{R}^m$)
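A small sketch (illustrative 2 × 3 matrix of rank 1) that reads off bases for the four subspaces from the SVD and checks the dimensions and the two orthogonality statements:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])             # m = 2, n = 3, rank r = 1

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))

row_space  = Vt[:r].T                    # r vectors in R^n
null_space = Vt[r:].T                    # n - r vectors in R^n
col_space  = U[:, :r]                    # r vectors in R^m
left_null  = U[:, r:]                    # m - r vectors in R^m

print(row_space.shape[1], null_space.shape[1])    # r, n - r
print(col_space.shape[1], left_null.shape[1])     # r, m - r
print(np.allclose(row_space.T @ null_space, 0))   # N(A) perpendicular to C(A^T): True
print(np.allclose(col_space.T @ left_null, 0))    # N(A^T) perpendicular to C(A): True
```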
Projection Onto a Line
* The projection matrix onto the line through $a$ is $P = \frac{aa^T}{a^Ta}$
* The projection is $p = \bar{x}a = \frac{a^Tb}{a^Ta}a$
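A short numpy check of both formulas, with made-up vectors $a$ and $b$:

```python
import numpy as np

a = np.array([1., 2., 2.])
b = np.array([1., 1., 1.])

P = np.outer(a, a) / (a @ a)          # P = a a^T / (a^T a)
p = (a @ b) / (a @ a) * a             # p = (a^T b / a^T a) a

print(np.allclose(P @ b, p))          # projecting b gives p: True
print(np.allclose(P @ P, P))          # projecting twice changes nothing: P^2 = P
print(np.isclose(a @ (b - p), 0))     # the error b - p is perpendicular to a
```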
Projection Onto a Subspace
Problem: Find the combination $p = \bar{x}_1a_1 + \cdots + \bar{x}_na_n$ closest to a given vector $b$. The $n$ vectors $a_1,\dots,a_n$ in $\mathbb{R}^m$ span the column space of $A$. Thus the problem is to find the particular combination $p = A\bar{x}$ (the projection) that is closest to $b$. When $n=1$, the best choice is $\bar{x} = \frac{a^Tb}{a^Ta}$
$A^T(b - A\bar{x}) = 0$, or $A^TA\bar{x} = A^Tb$
The symmetric matrix $A^TA$ is $n \times n$. It is invertible if the $a$'s are independent.
*$A^TA$ is invertible if and only if $A$ has linearly independent columns*
Least Squares Approximations
When $Ax=b$ has no solution, multiply by $A^T$ and solve $A^TA\bar{x} = A^Tb$
The least squares solution $\bar{x}$ minimizes $E = \|Ax - b\|^2$. This is the sum of squares of the errors in the $m$ equations ($m>n$)
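A sketch using a straight-line fit: four illustrative data points that no single line passes through, solved once by the normal equations and once by `np.linalg.lstsq` (same answer):

```python
import numpy as np

# fit C + D t to the points (0, 0), (1, 8), (3, 8), (4, 20)
t = np.array([0., 1., 3., 4.])
b = np.array([0., 8., 8., 20.])
A = np.column_stack([np.ones_like(t), t])     # m = 4 equations, n = 2 unknowns

xhat = np.linalg.solve(A.T @ A, A.T @ b)      # normal equations A^T A xhat = A^T b
xhat2, *_ = np.linalg.lstsq(A, b, rcond=None) # library least squares
print(np.allclose(xhat, xhat2))               # True
print(xhat)                                   # best C and D (here 1 and 4)
E = np.linalg.norm(A @ xhat - b) ** 2         # minimized sum of squared errors
print(E)
```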
Orthogonal Bases and Gram-Schmidt
orthonormal vectors
Every permutation matrix is an orthogonal matrix.
If $Q$ has orthonormal columns ($Q^TQ = I$), it leaves lengths unchanged
Orthogonal is good
Use Gram-Schmidt for the factorization $A = QR$
$$\begin{bmatrix} a & b & c \end{bmatrix} = \begin{bmatrix} q_1 & q_2 & q_3 \end{bmatrix}\begin{bmatrix} q_1^Ta & q_1^Tb & q_1^Tc \\ & q_2^Tb & q_2^Tc \\ & & q_3^Tc \end{bmatrix}$$
(Gram-Schmidt) From independent vectors $a_1,\dots,a_n$, Gram-Schmidt constructs orthonormal vectors $q_1,\dots,q_n$. The matrices with these columns satisfy $A = QR$. Then $R = Q^TA$ is upper triangular because later $q$'s are orthogonal to earlier $a$'s.
Least squares: $R^TR\bar{x} = R^TQ^Tb$ or $R\bar{x} = Q^Tb$ or $\bar{x} = R^{-1}Q^Tb$
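A sketch with `np.linalg.qr` (which computes $A = QR$ by Householder reflections rather than Gram-Schmidt, but yields the same kind of factorization), then solves least squares via $R\bar{x} = Q^Tb$; the matrix and vector are illustrative:

```python
import numpy as np

A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])               # independent columns
b = np.array([1., 2., 2.])

Q, R = np.linalg.qr(A)                 # reduced factorization A = Q R
print(np.allclose(A, Q @ R))                    # True
print(np.allclose(Q.T @ Q, np.eye(2)))          # orthonormal columns of Q
print(np.allclose(R, np.triu(R)))               # R is upper triangular

xhat = np.linalg.solve(R, Q.T @ b)              # R xhat = Q^T b
print(np.allclose(A.T @ A @ xhat, A.T @ b))     # same xhat as the normal equations
```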
Determinants
The properties of the determinant
Every rule for the rows can apply to the columns
Cramer’s Rule
Cross Product
$|u \cdot v| = \|u\|\,\|v\|\,|\cos\theta|$
The length of $u \times v$ equals the area of the parallelogram with sides $u$ and $v$
It points by the right hand rule (along your right thumb when the fingers curl from $u$ to $v$)
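A small check with `np.cross` (vectors chosen for easy arithmetic):

```python
import numpy as np

u = np.array([1., 2., 0.])
v = np.array([3., 1., 0.])

w = np.cross(u, v)                                   # u x v
print(w)                                             # [ 0.  0. -5.]
print(np.isclose(u @ w, 0), np.isclose(v @ w, 0))    # perpendicular to u and v

area = np.linalg.norm(w)                             # ||u x v|| = parallelogram area
print(area)                                          # 5.0
```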
Eigenvalues and Eigenvectors
The basic equation is $Ax = \lambda x$. The number $\lambda$ is an eigenvalue of $A$
The projection matrix has eigenvalues λ=1 and λ=0
The reflection matrix has eigenvalues 1 and -1
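A quick illustration of both facts with a 2 × 2 projection onto a line and the corresponding reflection $R = 2P - I$ (the line and vectors here are just an example):

```python
import numpy as np

a = np.array([1., 1.]) / np.sqrt(2)    # unit vector along the line
P = np.outer(a, a)                     # projection onto that line
R = 2 * P - np.eye(2)                  # reflection across that line

print(np.sort(np.linalg.eigvals(P)))   # approximately [0. 1.]: lambda = 1 and 0
print(np.sort(np.linalg.eigvals(R)))   # approximately [-1. 1.]: lambda = 1 and -1
```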
Solve the eigenvalue problem for an n×n matrix
Bad news: elimination does not preserve the λ ’s
Diagonalizing a Matrix
Suppose the $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors $x_1,\dots,x_n$. Put them into the columns of an eigenvector matrix $S$. Then $S^{-1}AS$ is the eigenvalue matrix $\Lambda$: $S^{-1}AS = \Lambda$, so $A = S\Lambda S^{-1}$
There is no connection between invertibility and diagonalizability:
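A sketch of diagonalization for an illustrative 2 × 2 matrix with distinct eigenvalues, checking $S^{-1}AS = \Lambda$ and the handy consequence $A^k = S\Lambda^kS^{-1}$:

```python
import numpy as np

A = np.array([[6., -2.],
              [2.,  1.]])               # eigenvalues 5 and 2 (distinct)

evals, S = np.linalg.eig(A)             # columns of S are eigenvectors
Lam = np.diag(evals)
Sinv = np.linalg.inv(S)

print(np.allclose(Sinv @ A @ S, Lam))   # S^{-1} A S = Lambda
print(np.allclose(A, S @ Lam @ Sinv))   # A = S Lambda S^{-1}
print(np.allclose(np.linalg.matrix_power(A, 3),
                  S @ Lam**3 @ Sinv))   # A^3 = S Lambda^3 S^{-1}
```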
Applications to differential equations
One equation $du/dt = \lambda u$ has the solution $u(t) = Ce^{\lambda t}$
$n$ equations $du/dt = Au$ starting from the vector $u(0)$ at $t=0$
Solve linear constant-coefficient equations by exponentials $e^{\lambda t}x$, when $Ax = \lambda x$
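A sketch solving $du/dt = Au$ by eigenvalues for an illustrative symmetric 2 × 2 matrix, compared against `scipy.linalg.expm` (the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1.],
              [1., 0.]])                 # eigenvalues +1 and -1
u0 = np.array([2., 0.])                  # initial condition u(0)

evals, X = np.linalg.eig(A)              # A x_i = lambda_i x_i
c = np.linalg.solve(X, u0)               # expand u(0) = c_1 x_1 + c_2 x_2

t = 0.7
u_eig = X @ (c * np.exp(evals * t))      # u(t) = sum of c_i e^{lambda_i t} x_i
u_exp = expm(A * t) @ u0                 # same answer from e^{At} u(0)
print(np.allclose(u_eig, u_exp))         # True
```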
Symmetric Matrices
The eigenvectors can be chosen orthonormal.
(Spectral Theorem) Every symmetric matrix has the factorization $A = Q\Lambda Q^T$ with real eigenvalues in $\Lambda$ and orthonormal eigenvectors in $S = Q$
(Orthogonal Eigenvectors) Eigenvectors of a real symmetric matrix (when they correspond to different λ ’s) are always perpendicular.
product of pivots = determinant = product of eigenvalues
Eigenvalues vs. Pivots
For symmetric matrices the pivots and the eigenvalues have the same signs:
All symmetric matrices are diagonalizable
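A check of the spectral theorem on an illustrative symmetric matrix, using `np.linalg.eigh` (numpy's routine for symmetric/Hermitian matrices):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])                          # symmetric

evals, Q = np.linalg.eigh(A)                      # real eigenvalues, ascending order
print(evals)                                      # [1. 3.]
print(np.allclose(Q.T @ Q, np.eye(2)))            # orthonormal eigenvectors
print(np.allclose(A, Q @ np.diag(evals) @ Q.T))   # A = Q Lambda Q^T
```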
Positive Definite Matrices
Symmetric matrices that have positive eigenvalues
2 × 2 matrices
$x^TAx$ is positive for all nonzero vectors $x$
When a symmetric matrix has one of these five properties, it has them all: (1) all $n$ pivots are positive, (2) all $n$ upper-left determinants are positive, (3) all $n$ eigenvalues are positive, (4) $x^TAx$ is positive for every nonzero $x$, (5) $A = R^TR$ for a matrix $R$ with independent columns
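A sketch checking several of these tests on an illustrative 2 × 2 positive definite matrix (the $A = R^TR$ test is done via `np.linalg.cholesky`, which succeeds exactly when the matrix is positive definite):

```python
import numpy as np

A = np.array([[ 2., -1.],
              [-1.,  2.]])                        # symmetric positive definite

print(np.all(np.linalg.eigvalsh(A) > 0))          # all eigenvalues positive: True
print(A[0, 0] > 0, np.linalg.det(A) > 0)          # upper-left determinants positive
L = np.linalg.cholesky(A)                         # A = L L^T, L lower triangular
print(np.allclose(A, L @ L.T))                    # the A = R^T R test: True

rng = np.random.default_rng(1)
x = rng.standard_normal((2, 100))                 # 100 random nonzero vectors
print(np.all(np.einsum('ij,ij->j', x, A @ x) > 0))  # x^T A x > 0 for each: True
```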
Positive Semidefinite Matrices
Similar Matrices
DEFINITION: Let $M$ be any invertible matrix. Then $B = M^{-1}AM$ is similar to $A$
(No change in $\lambda$'s) Similar matrices $A$ and $M^{-1}AM$ have the same eigenvalues. If $x$ is an eigenvector of $A$, then $M^{-1}x$ is an eigenvector of $B$. But two matrices can have the same repeated $\lambda$, and fail to be similar.
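A quick check with an illustrative $A$ and invertible $M$: the eigenvalues match, and $M^{-1}x$ is an eigenvector of $B$:

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])
M = np.array([[1., 1.],
              [0., 1.]])                    # any invertible M
Minv = np.linalg.inv(M)
B = Minv @ A @ M                            # B is similar to A

print(np.sort(np.linalg.eigvals(A)))        # [2. 3.]
print(np.sort(np.linalg.eigvals(B)))        # [2. 3.]  same eigenvalues

lam, X = np.linalg.eig(A)
x = X[:, 0]                                 # an eigenvector of A
print(np.allclose(B @ (Minv @ x), lam[0] * (Minv @ x)))   # M^{-1} x works for B
```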
Jordan Form
$J^T$ is similar to $J$; the matrix $M$ that produces the similarity happens to be the reverse identity
(Jordan form) If $A$ has $s$ independent eigenvectors, it is similar to a matrix $J$ that has $s$ Jordan blocks on its diagonal: some matrix $M$ puts $A$ into Jordan form.
$$M^{-1}AM = \begin{bmatrix} J_1 & & \\ & \ddots & \\ & & J_s \end{bmatrix} = J$$
$$J_i = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix}$$
Singular Value Decomposition (SVD)
Two sets of singular vectors, $u$'s and $v$'s. The $u$'s are eigenvectors of $AA^T$ and the $v$'s are eigenvectors of $A^TA$.
The singular vectors $v_1,\dots,v_r$ are in the row space of $A$. The outputs $u_1,\dots,u_r$ are in the column space of $A$. The singular values $\sigma_1,\dots,\sigma_r$ are all positive numbers. The equations $Av_i = \sigma_iu_i$ tell us:
$$A\begin{bmatrix} v_1 & \cdots & v_r \end{bmatrix} = \begin{bmatrix} u_1 & \cdots & u_r \end{bmatrix}\begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_r \end{bmatrix}$$
Completing the $v$'s and $u$'s to orthonormal bases of $\mathbb{R}^n$ and $\mathbb{R}^m$, $\Sigma$ becomes $m \times n$ with $\sigma_1,\dots,\sigma_r$ on its diagonal and zeros elsewhere:
$$A\begin{bmatrix} v_1 & \cdots & v_r & \cdots & v_n \end{bmatrix} = \begin{bmatrix} u_1 & \cdots & u_r & \cdots & u_m \end{bmatrix}\begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_r & \\ & & & \end{bmatrix}$$
$V$ is now a square orthogonal matrix, with $V^{-1} = V^T$. So $AV = U\Sigma$ can become $A = U\Sigma V^T$. This is the Singular Value Decomposition:
$$A = U\Sigma V^T$$
The orthonormal columns of $U$ and $V$ are eigenvectors of $AA^T$ and $A^TA$
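A sketch of the SVD for an illustrative 2 × 2 matrix, checking $A = U\Sigma V^T$ and that the singular vectors diagonalize $A^TA$ and $AA^T$ with eigenvalues $\sigma_i^2$:

```python
import numpy as np

A = np.array([[3., 0.],
              [4., 5.]])

U, s, Vt = np.linalg.svd(A)                 # A = U Sigma V^T
Sigma = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vt))       # True

# columns of V are eigenvectors of A^T A, columns of U of A A^T (eigenvalues sigma^2)
print(np.allclose(A.T @ A @ Vt.T, Vt.T @ Sigma**2))    # True
print(np.allclose(A @ A.T @ U, U @ Sigma**2))          # True
```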