Linear Algebra Review¶

MATLAB is a vector/matrix language: every piece of data it stores is a scalar, a vector, or a matrix. MATLAB is designed so that you can enter mathematical operations much as you would write them when solving math by hand. Reviewing how matrices interact is a valuable way to gain insight into how MATLAB works, and how it can work for you.

Suppose the matrices A and B exist such that

$\begin{split}\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \mathbf{B} = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{m1} & b_{m2} & \cdots & b_{mn} \end{bmatrix}\end{split}$

If the matrices A and B are of equal size (m x n), then the sum of A and B is the matrix formed by adding the corresponding elements of A and B

$\begin{split}\mathbf{A+B} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1n}+b_{1n} \\ a_{21}+b_{21} & a_{22}+b_{22} & \cdots & a_{2n}+b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}+b_{m1} & a_{m2}+b_{m2} & \cdots & a_{mn}+b_{mn} \end{bmatrix}\end{split}$
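In MATLAB, this elementwise sum is simply `A + B`. As an illustration, here is a minimal plain-Python sketch (matrices stored as nested lists; `mat_add` is a hypothetical helper, not a library function):

```python
def mat_add(A, B):
    """Sum of two equally sized matrices: (A+B)[i][j] = A[i][j] + B[i][j]."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "A and B must be the same size"
    return [[a + b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
```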

The product of any scalar k with the matrix A, written $$k \cdot A$$, is obtained by multiplying each element of A by k

$\begin{split}\mathbf{k \cdot A} = \begin{bmatrix} ka_{11} & ka_{12} & \cdots & ka_{1n} \\ ka_{21} & ka_{22} & \cdots & ka_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ ka_{m1} & ka_{m2} & \cdots & ka_{mn} \end{bmatrix}\end{split}$
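MATLAB computes this with `k * A`. A plain-Python sketch of the same elementwise rule (again with a hypothetical helper name):

```python
def scalar_mul(k, A):
    """Multiply every element of the matrix A by the scalar k."""
    return [[k * a for a in row] for row in A]

print(scalar_mul(3, [[1, 2], [3, 4]]))  # [[3, 6], [9, 12]]
```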

Consider the scalar $$k=-1$$, then

$-A = (-1)A$

$A - B = A + (-B)$

Theorems¶

Let the matrices A,B,C be of equal size, then for any scalars k, k’

• $$A + B = B + A$$
• $$A + 0 = 0 + A = A$$
• $$A + (-A) = (-A) + A = 0$$
• $$(A + B) + C = A + (B + C)$$
• $$1 \cdot A = A$$
• $$(k + k')A = kA + k'A$$
• $$(kk')A = k(k'A)$$
• $$k(A + B) = kA + kB$$

Matrix Multiplication¶

Suppose the matrices A and B exist such that

$\begin{split}\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mp} \end{bmatrix} \mathbf{B} = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{p1} & b_{p2} & \cdots & b_{pn} \end{bmatrix}\end{split}$

Let A be an (m x p) matrix and let B be a (p x n) matrix. If the number of columns of A is equal to the number of rows of B, then the product AB is an (m x n) matrix whose ij-element is the dot product of the i-th row of A with the j-th column of B

$\begin{split}\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mp} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{p1} & b_{p2} & \cdots & b_{pn} \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & \vdots & c_{ij} & \vdots \\ c_{m1} & c_{m2} & \cdots & c_{mn} \end{bmatrix}\end{split}$

$c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{ip}b_{pj} = \sum_{k=1}^{p}a_{ik}b_{kj}$

If the number of columns of A does not equal the number of rows of B, then the product AB is not defined
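MATLAB's `*` operator applies the row-by-column rule directly. A minimal plain-Python sketch of that rule, computing each $$c_{ij}$$ as the sum $$\sum_k a_{ik}b_{kj}$$:

```python
def mat_mul(A, B):
    """Product of an (m x p) matrix A and a (p x n) matrix B.
    c[i][j] is the dot product of row i of A with column j of B."""
    m, p, n = len(A), len(B), len(B[0])
    assert len(A[0]) == p, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(p))
             for j in range(n)]
            for i in range(m)]

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```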

Matrix Power¶

Let A be an n x n matrix, then

• $$A^{0} = I$$
• $$AA=A^{2}$$
• $$AAA = A^{2}A = A^{3}$$
• $$A^{n}A = A^{n+1}$$
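MATLAB computes matrix powers with `A^n`. The rules above amount to repeated multiplication starting from the identity, which can be sketched in plain Python (both helpers are hypothetical names):

```python
def mat_mul(A, B):
    # c[i][j] = sum_k a_ik * b_kj; zip(*B) iterates over the columns of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_pow(A, n):
    """A^n for a square matrix A by repeated multiplication; A^0 = I."""
    result = [[1 if i == j else 0 for j in range(len(A))] for i in range(len(A))]
    for _ in range(n):
        result = mat_mul(result, A)
    return result

print(mat_pow([[1, 1], [0, 1]], 3))  # [[1, 3], [0, 1]]
```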

Theorems¶

Let A,B,C be matrices whose products are defined and let k be any scalar, then

• $$AB \neq BA$$ in general (matrix multiplication is not commutative)
• $$(AB)C = A(BC)$$
• $$A(B+C)=AB+AC$$
• $$(B+C)A = BA + CA$$
• $$k(AB) = (kA)B = A(kB)$$

Matrix Transpose¶

Let A be an (m x n) matrix of the form

$\begin{split}\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}\end{split}$

The transpose of A, or $$A^{T}$$, is obtained by any of the following:

• Writing the columns of A as rows
• Writing the rows of A as columns
• Reflecting A over the main diagonal

This results in an (n x m) matrix such that if $$A=[a_{ij}]$$, then $$A^{T}=[a_{ji}]$$

$\begin{split}\mathbf{A^{T}} = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}\end{split}$
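MATLAB writes the transpose as `A.'` (or `A'` for real matrices). A one-line plain-Python sketch of "columns become rows":

```python
def transpose(A):
    """Transpose of A: (A^T)[j][i] = A[i][j]."""
    return [list(col) for col in zip(*A)]

print(transpose([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
```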

Theorems¶

Let A and B be matrices where the sums and products are defined, and let k be any scalar, then

• $$(A+B)^{T} = A^{T} + B^{T}$$
• $$(kA)^{T} = kA^{T}$$
• $$(A^{T})^{T} = A$$
• $$(AB)^{T} = B^{T}A^{T}$$

Useful Properties of Square Matrices¶

Identity Matrix¶

An identity matrix is an (n x n) matrix with 1’s on the diagonal and 0’s everywhere else.

Let A be an (n x n) matrix such that $$[a_{ij}] = 1$$ if $$i=j$$ and $$[a_{ij}] = 0$$ if $$i \neq j$$; then A is the identity matrix I

$\begin{split}\mathbf{A} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = \mathbf{I}\end{split}$

For any (n x n) matrix B

$IB = BI = B$

For any scalar k, the matrix kI is called a scalar matrix. Multiplying any (n x n) matrix B by the scalar matrix kI has the same effect as multiplying B by the scalar k

$(kI)B = k(IB) = kB$
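MATLAB builds the identity with `eye(n)`. A plain-Python sketch of the identity and scalar-matrix properties above (`identity` and `mat_mul` are hypothetical helper names):

```python
def identity(n):
    """n x n identity matrix: ones on the diagonal, zeros elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

B = [[2, 3], [4, 5]]
I = identity(2)
print(mat_mul(I, B) == B and mat_mul(B, I) == B)   # True: IB = BI = B
print(mat_mul([[3, 0], [0, 3]], B))                # (3I)B = 3B = [[6, 9], [12, 15]]
```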

Diagonal Matrix¶

• The diagonal elements of an (n x n) matrix D are the elements $$[d_{ij}]$$ such that $$i=j$$.
• D is said to be diagonal if all of the off diagonal elements are zero, or $$[d_{ij}]=0$$ when $$i \neq j$$.
• The identity matrix is a particular case of a diagonal matrix

Trace¶

The trace of a matrix A is the sum of the diagonal elements of A

Let A and B be (n x n) matrices, then:

• $$tr(A) = a_{11} + a_{22} + \cdots + a_{nn}$$
• $$tr(A+B) = tr(A) + tr(B)$$
• $$tr(kA) = k \cdot tr(A)$$
• $$tr(A^{T}) = tr(A)$$
• $$tr(AB) = tr(BA)$$
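MATLAB provides `trace(A)`. A plain-Python sketch, including a spot-check of the $$tr(AB) = tr(BA)$$ property on one example pair:

```python
def trace(A):
    """Sum of the diagonal elements of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(trace(A))                                          # 5
print(trace(mat_mul(A, B)) == trace(mat_mul(B, A)))      # True
```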

Triangular Matrix¶

• An upper triangular matrix, sometimes just called a triangular matrix, is an (n x n) matrix whose elements below the main diagonal are all zero

• A lower triangular matrix is an (n x n) matrix whose elements above the main diagonal are all zero

Example of an upper triangular matrix:

$\begin{split}\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}\end{split}$

Example of a lower triangular matrix:

$\begin{split}\mathbf{A} = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}\end{split}$

Matrix Inverse¶

• An (n x n) matrix A is said to be invertible or nonsingular if there exists a matrix B such that
$AB = BA = I$
• A matrix B that has this property is said to be the inverse of A, and is denoted $$A^{-1}$$

• The inverse of a matrix is often used to solve the equation $$Ax=b$$ by multiplying both sides of the equation with $$A^{-1}$$
$\begin{split}Ax &= b \\ A^{-1}Ax &= A^{-1}b \\ x &= A^{-1}b\end{split}$
• A matrix A is invertible if and only if A is nonsingular, or equivalently, if the determinant of A does not equal zero

Steps for finding $$A^{-1}$$¶

Step 1 Form the (n x 2n) matrix $$[A | I]$$

Step 2 Use elementary row operations to transform $$[A | I]$$ into $$[I|B]$$

Step 3 Now, $$A^{-1} = B$$
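In practice, MATLAB's `inv(A)` (or, better for solving $$Ax=b$$, the backslash operator `A\b`) handles this. The three steps above can be sketched in plain Python as Gauss-Jordan elimination on the augmented matrix $$[A | I]$$; this is a minimal sketch with partial pivoting, not a production routine:

```python
def mat_inv(A):
    """Invert an n x n matrix by row-reducing [A | I] to [I | A^-1]."""
    n = len(A)
    # Step 1: form the (n x 2n) augmented matrix [A | I]
    M = [[float(x) for x in row] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    # Step 2: elementary row operations to turn the left half into I
    for col in range(n):
        # partial pivoting: bring the largest available pivot into place
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]          # scale pivot row so pivot = 1
        for r in range(n):                        # clear the column elsewhere
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # Step 3: the right half is now A^-1
    return [row[n:] for row in M]

print(mat_inv([[4, 7], [2, 6]]))  # [[0.6, -0.7], [-0.2, 0.4]]
```

For the example, $$\det(A) = 4 \cdot 6 - 7 \cdot 2 = 10$$, so the exact inverse is $$\frac{1}{10}\begin{bmatrix} 6 & -7 \\ -2 & 4 \end{bmatrix}$$, matching the printed result up to floating-point rounding.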

Matrix Determinant¶

• A system of equations has a unique solution if and only if the determinant of its coefficient matrix does not equal zero

• If the determinant is equal to zero, then the system either has no solution or infinite solutions

First Order Determinant¶

The determinant of a (1 x 1) matrix A

$\begin{split}\det(A) = \begin{vmatrix} a_{11} \end{vmatrix} = a_{11}\end{split}$

Second Order Determinant¶

The determinant of a (2 x 2) matrix A

$\begin{split}\det(A) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{21}a_{12}\end{split}$

Third Order Determinant¶

The determinant of a (3 x 3) matrix A

$\begin{split}\det(A) &= \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \\ \\ &= a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} \\ \\ &= a_{11}(a_{22}a_{33} - a_{23}a_{32}) -a_{12}(a_{21}a_{33} - a_{23}a_{31}) +a_{13}(a_{21}a_{32} - a_{22}a_{31})\end{split}$
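MATLAB computes determinants with `det(A)`. The cofactor expansion along the first row, shown above for the 3 x 3 case, extends recursively to any order; a plain-Python sketch (fine for small matrices, though far slower than the elimination-based methods real software uses):

```python
def det(A):
    """Determinant of a square matrix by cofactor expansion along row 0."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j, with alternating signs (-1)^j
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                        # -2
print(det([[2, 0, 1], [1, 3, -1], [0, 5, 2]]))      # 27
```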

Eigenvalues & Eigenvectors¶

For any (n x n) matrix A, a scalar $$\lambda$$ is called an eigenvalue of A if there exists a nonzero vector $$\upsilon$$ such that

$A \upsilon = \lambda \upsilon$

Any vector $$\upsilon$$ that satisfies this relationship is called an eigenvector of A belonging to the eigenvalue $$\lambda$$

Finding Eigenvalues & Eigenvectors¶

We need to find all scalars $$\lambda$$ such that the equation $$A \upsilon = \lambda \upsilon$$ has a nonzero solution $$\upsilon$$, which is equivalent to solving:

$\begin{split}A \upsilon &= \lambda \upsilon \\ A \upsilon - \lambda \upsilon &= \theta \\ (A - \lambda I)\upsilon &= \theta\end{split}$

The solution then has two parts:

1. Find all scalars $$\lambda$$ such that $$A-\lambda I$$ is singular. This is equivalent to finding the roots of the characteristic polynomial, $$\det(A-\lambda I) = 0$$

2. Given a scalar $$\lambda$$ such that $$A-\lambda I$$ is singular, find all nonzero vectors $$\upsilon$$ such that $$(A - \lambda I)\upsilon = \theta$$. After you have solved for the eigenvalues, substitute each one back in and solve for the corresponding vectors $$\upsilon$$
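In MATLAB, `eig(A)` does all of this numerically. For a 2 x 2 matrix the first step can be carried out by hand, since $$\det(A-\lambda I) = \lambda^{2} - tr(A)\lambda + \det(A) = 0$$ is just a quadratic. A plain-Python sketch for the real-eigenvalue case (hypothetical helper name, not a general eigensolver):

```python
import math

def eig2(A):
    """Eigenvalues of a real 2x2 matrix from its characteristic polynomial:
    lambda^2 - tr(A)*lambda + det(A) = 0, solved with the quadratic formula."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    assert disc >= 0, "this sketch handles real eigenvalues only"
    r = math.sqrt(disc)
    return (tr + r) / 2, (tr - r) / 2

print(eig2([[2, 1], [1, 2]]))  # (3.0, 1.0)
```

As a check of step 2, for $$\lambda = 3$$ the vector $$\upsilon = (1, 1)$$ satisfies $$A\upsilon = (3, 3) = 3\upsilon$$, so it is an eigenvector belonging to that eigenvalue.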