MAT9004 Review Notes (3)

This part summarises some basic knowledge points related to linear algebra.

Important Property

The line segment joining points \(u\) and \(v\) contains exactly the points corresponding to vectors of the form (for \(0 \leq \alpha \leq 1\)) : \[\alpha u + (1-\alpha)v\]
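A tiny NumPy sketch of this (the points \(u\), \(v\) below are just illustrative) :

```python
import numpy as np

u = np.array([0.0, 0.0])
v = np.array([4.0, 2.0])

for alpha in (0.0, 0.25, 0.5, 1.0):
    point = alpha * u + (1 - alpha) * v  # on the segment for 0 <= alpha <= 1
    print(alpha, point)
# alpha = 1 gives u itself, alpha = 0 gives v
```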

Linear independence

\(v_1,...,v_n\) are linearly dependent if one of the \(v_j\) is a linear combination of the other vectors \(v_1,...,v_{j-1}, v_{j+1}, ...,v_n\). If they are not linearly dependent, they are linearly independent.
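One practical test (a sketch, with made-up vectors) : stack the vectors as columns and compare the rank with the number of vectors :

```python
import numpy as np

# Vectors are linearly independent exactly when the matrix having them
# as columns has rank equal to the number of vectors
v1, v2, v3 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([1.0, 1.0, 0])

A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))  # 2 < 3, so they are dependent (v3 = v1 + v2)
```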

The Dot Product

The dot product of two vectors is : \[v·w = v_1w_1 + v_2w_2 + ... + v_dw_d\]

The dot product is also called the scalar product or the inner product, and another notation for the dot product is \(\langle v,w \rangle\).
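A quick NumPy check of the componentwise definition (the example vectors are arbitrary) :

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, -1.0, 2.0])

# Componentwise definition versus NumPy's built-in
print(sum(vi * wi for vi, wi in zip(v, w)))  # 1*4 + 2*(-1) + 3*2 = 8.0
print(np.dot(v, w))                          # 8.0 as well
```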

The Euclidean Norm

The (Euclidean) norm of a vector is : \[||v|| = \sqrt {v_1^2+v_2^2+...+v_d^2} = \sqrt {v·v}\]

For vectors in dimensions 2 and 3, Pythagoras' theorem tells us \(||v||\) is equal to the distance from the origin of the Cartesian coordinate system to the point representing \(v\) (the length of \(v\)).
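A short sketch comparing the definition with NumPy's built-in norm :

```python
import numpy as np

v = np.array([3.0, 4.0])

print(np.sqrt(np.sum(v**2)))   # sqrt(3^2 + 4^2) = 5.0
print(np.sqrt(np.dot(v, v)))   # same value via ||v|| = sqrt(v . v)
print(np.linalg.norm(v))       # NumPy's built-in Euclidean norm
```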

Orthogonal Vectors

Vectors \(v\) and \(w\) are called orthogonal if : \[v·w = 0\]
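For example (illustrative vectors) :

```python
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([-2.0, 1.0])

# v . w = 1*(-2) + 2*1 = 0, so v and w are orthogonal
print(np.dot(v, w) == 0)  # True (use np.isclose for non-integer data)
```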

Matrix

Matrix Multiplication

Let \(A= \left( \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right)\), and \(B= \left( \begin{matrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{matrix} \right)\), then : \[AB = \left( \begin{matrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{matrix} \right)\]

We can only multiply an \(m \times n\) matrix by an \(n \times r\) matrix, and the result is \(m \times r\).
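A small NumPy illustration of both the \(2 \times 2\) product and the shape rule (matrices here are arbitrary) :

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])      # 2 x 2
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])      # 2 x 2

print(A @ B)            # [[19, 22], [43, 50]]

# Shapes must be compatible: (m x n) @ (n x r) -> (m x r)
C = np.ones((2, 3)) @ np.ones((3, 4))
print(C.shape)          # (2, 4)
```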

Rules for Add/Mul

  • \(A(BC) = (AB)C\)

  • \((k+j)A = kA+jA\)

  • \((kA)B = A(kB)\)

  • \(A(B+D) = AB+AD\) and \((E+F)G = EG+FG\)
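These rules can be spot-checked numerically; a sketch with random matrices :

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
k, j = 2.0, -1.5

print(np.allclose(A @ (B @ C), (A @ B) @ C))        # associativity
print(np.allclose((k + j) * A, k * A + j * A))      # scalar distributivity
print(np.allclose((k * A) @ B, A @ (k * B)))        # scalars move through products
print(np.allclose(A @ (B + C), A @ B + A @ C))      # distributivity over addition
```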

Gaussian Elimination

Gaussian elimination makes use of certain operations we can apply to the matrix equation that do not change the set of solutions of the linear system.

If we have : \[\left( \begin{matrix} 0 & 2 & 1 \\ 2 & -2 & 1 \\ 2 &2&-2 \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) = \left( \begin{matrix}1 \\ 5 \\ -4 \end{matrix} \right)\]

  1. Swap two rows : \[\left( \begin{matrix} 2 & -2 & 1 \\ 0 & 2 & 1 \\ 2 &2&-2 \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) = \left( \begin{matrix}5 \\ 1 \\ -4 \end{matrix} \right)\]

  2. Multiply a row by a non-zero number : \[\left( \begin{matrix} 1 & -1 & 0.5 \\ 0 & 2 & 1 \\ 2 &2&-2 \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) = \left( \begin{matrix}2.5 \\ 1 \\ -4 \end{matrix} \right)\]

  3. Add a multiple of one row to another row : Here we subtract 2 times the first row from the third : \[\left( \begin{matrix} 1 & -1 & 0.5 \\ 0 & 2 & 1 \\ 0 &4&-3 \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) = \left( \begin{matrix}2.5 \\ 1 \\ -9 \end{matrix} \right)\]

Basic Strategy of Gaussian Elimination

The idea of Gaussian elimination is to use the operations described above, one after another, to transform the linear system into upper triangular form : \[\left( \begin{matrix} * & * & * \\ 0 & * & * \\ 0&0&* \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) = \left( \begin{matrix}*\\ *\\ *\end{matrix} \right)\]
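A sketch that replays the row operations from the example above in NumPy and finishes with back substitution (one extra elimination step is needed to reach upper triangular form) :

```python
import numpy as np

M = np.array([[0.0,  2.0,  1.0],
              [2.0, -2.0,  1.0],
              [2.0,  2.0, -2.0]])
b = np.array([1.0, 5.0, -4.0])

M[[0, 1]], b[[0, 1]] = M[[1, 0]], b[[1, 0]]     # 1. swap rows 1 and 2
M[0], b[0] = 0.5 * M[0], 0.5 * b[0]             # 2. multiply row 1 by 0.5
M[2], b[2] = M[2] - 2 * M[0], b[2] - 2 * b[0]   # 3. subtract 2 x row 1 from row 3
M[2], b[2] = M[2] - 2 * M[1], b[2] - 2 * b[1]   # one more step: upper triangular

# Back substitution on the upper triangular system
z = b[2] / M[2, 2]
y = (b[1] - M[1, 2] * z) / M[1, 1]
x = (b[0] - M[0, 1] * y - M[0, 2] * z) / M[0, 0]
print(x, y, z)  # 0.8 -0.6 2.2, matching np.linalg.solve on the original system
```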

Non-square

\[\left( \begin{matrix} 1 & 2 \\ 0 & 0 \\ 0&0\end{matrix} \right) \left( \begin{matrix} x \\ y \end{matrix} \right) = \left( \begin{matrix}1\\ 0 \\ 0 \end{matrix} \right)\]

The only remaining equation is \(x+2y=1\), which gives \(x=1-2y\), so the solutions are \((x,y) \in \{(1-2y, y) : y \in \mathbb{R}\}\)
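NumPy only produces numeric answers; for a parametrised solution set like this one, a sketch using SymPy's linsolve (assuming its augmented-matrix calling convention) :

```python
from sympy import Matrix, linsolve, symbols

x, y = symbols('x y')

# Augmented matrix [1 2 | 1] for the single surviving equation x + 2y = 1
print(linsolve(Matrix([[1, 2, 1]]), x, y))  # {(1 - 2*y, y)}
```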

Identity Matrices

An identity matrix is an \(n \times n\) matrix \(I\) with \((I)_{ii} = 1\) and \((I)_{ij} = 0\) for \(i \neq j\), like : \[\left( \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0\\ 0 & 0 & 1 \end{matrix} \right) \]

  1. The multiplication with an identity matrix has no effect, i.e. \(AI=IA=A\)

  2. \(I\) behaves like the real number \(1\)

  3. The \(n \times n\) identity matrix is often denoted by \(I_n\)
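A quick check with np.eye (the matrix \(A\) is arbitrary) :

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
I = np.eye(2)          # the 2 x 2 identity matrix I_2

print(np.allclose(A @ I, A) and np.allclose(I @ A, A))  # True: AI = IA = A
```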

Matrix Inverse

A square matrix \(A\) is called invertible if there is a square matrix \(B\) such that \(BA=I\). The matrix \(B\) is called the inverse matrix of \(A\) and is denoted by \(A^{-1}\).

Determinants

The determinant of a square matrix \(A\) is a real number \(det(A)\) with the following properties :

  • \(det(AB) = det(A)det(B)\) for square matrices

  • \(det(I) = 1\)

  • \(det(A) \neq 0\) if and only if \(A\) is invertible

For a \(2 \times 2\) matrix the following formula holds for \(det(A)\) : \[det\left( \begin{matrix} a & b \\ c & d\end{matrix} \right) = ad - bc\]
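A small check of the \(2 \times 2\) formula against np.linalg.det (the matrix is made up) :

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [4.0, 2.0]])

print(A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0])  # ad - bc = 3*2 - 1*4 = 2.0
print(np.linalg.det(A))                       # ~2.0 (up to floating point)
```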

Finding Inverse

Let \(A = \left( \begin{matrix} a & b \\ c&d \end{matrix} \right)\), \(A\) is invertible if, and only if, \(det(A) \neq 0\), in which case : \[A^{-1} = {1\over det(A)}\left( \begin{matrix} d & -b \\ -c & a \end{matrix} \right)\]

If \(A\) is invertible then the linear system \(Ax=b\) has exactly one solution \(x=A^{-1}b\).
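A sketch putting the inverse formula and the one-solution fact together (matrix and right-hand side are made up) :

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [4.0, 2.0]])
b = np.array([5.0, 6.0])

det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # 2.0, non-zero so A is invertible
A_inv = (1 / det) * np.array([[ A[1, 1], -A[0, 1]],
                              [-A[1, 0],  A[0, 0]]])

print(np.allclose(A_inv, np.linalg.inv(A)))   # True
print(A_inv @ b)                              # the unique solution x = A^{-1} b
print(np.linalg.solve(A, b))                  # same answer, preferred numerically
```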

Eigenvalues

An eigenvalue of \(A\) is any real number \(\lambda\) such that \(Ax=\lambda x\) for some non-zero vector \(x\).

Finding Eigenvalues

\[Ax=\lambda x\] \[Ax=\lambda I x\] \[Ax-\lambda I x=0\] \[(A-\lambda I) x=0\]

This equation has a non-zero solution \(x\) if and only if the matrix \(A-\lambda I\) has no inverse, which means \(det(A-\lambda I)=0\).

  • The characteristic equation of a square matrix \(A\) is \(det(A-\lambda I)=0\)

  • Here \(det(A-\lambda I)\) is just a polynomial in \(\lambda\), and it is called the characteristic polynomial of \(A\)
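For a \(2 \times 2\) matrix the characteristic polynomial works out to \(\lambda^2 - (a_{11}+a_{22})\lambda + det(A)\); a sketch comparing its roots with np.linalg.eigvals (example matrix is arbitrary) :

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of lambda^2 - trace(A)*lambda + det(A), highest power first
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]  # lambda^2 - 4*lambda + 3
print(np.roots(coeffs))         # [3. 1.]
print(np.linalg.eigvals(A))     # the same eigenvalues, computed directly
```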

Eigenvectors

An eigenvector of \(A\) is any non-zero vector \(x\) which satisfies \(Ax=\lambda x\) for some real number \(\lambda\). The zero vector is not an eigenvector.
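A check that np.linalg.eig returns pairs satisfying \(Ax=\lambda x\) (eigenvectors come back as the columns of the second return value) :

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
for lam, x in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ x, lam * x))        # True: A x = lambda x for each pair
```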

Diagonal Matrices

A matrix \(D\) is called diagonal if \((D)_{ij}=0\) for all \(i \neq j\), like : \[\left( \begin{matrix} a&0&0 \\ 0&b&0 \\ 0&0&c \end{matrix} \right)\]

Diagonalisable Matrices

A square matrix \(A\) is called diagonalisable if there is an invertible matrix \(P\) and a diagonal matrix \(D\) such that : \[A=PDP^{-1}\] It then follows that \(A^n = PD^nP^{-1}\) for every positive integer \(n\).

Eigen Decomposition

Let \(D\) be the diagonal matrix with \((D)_{ii}=\lambda _i\) and \((D)_{ij}=0\) for \(i \neq j\), and let \(v_i\) be an eigenvector corresponding to the eigenvalue \(\lambda _i\). Define \(P=(v_1,...,v_n)\), the matrix whose columns are the \(v_i\). If \(P\) is invertible, then \(A=PDP^{-1}\)
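A sketch of the decomposition with NumPy (the example matrix is arbitrary) :

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors v_i
D = np.diag(eigenvalues)            # (D)_ii = lambda_i

print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # A = P D P^{-1}
print(np.allclose(np.linalg.matrix_power(A, 5),
                  P @ np.diag(eigenvalues**5) @ np.linalg.inv(P)))  # A^5 = P D^5 P^{-1}
```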

PageRank
