# Matrices and Determinants

### Matrices

A matrix is a rectangular array of elements arranged in rows and columns. $$\mathbf{A}=\begin{bmatrix}a_{11} & a_{12} & \cdots & a_{1n}\\a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} =\left[ a_{jk}\right]$$

Here, $m=$ number of rows, $n=$ number of columns, $a_{jk} =$ a general element in the matrix of dimension $m\times n$.

If $m\ne n$ the matrix is called a rectangular matrix.

If $m=n$ the matrix is called a square matrix (of order $n$).

If $n=1$ the matrix is called a column matrix (or column vector).

If $m=1$ the matrix is called a row matrix (or row vector).

### Some special matrices

#### Unit (Identity) matrix

A square matrix in which each diagonal element is $1$ (unity) and every off-diagonal element is $0$. A unit matrix of order $n$ is denoted by $\mathbf{I_n}$. $$\begin{bmatrix}1 & 0 & \cdots & 0\\0 & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\0 & 0 & \cdots & 1 \end{bmatrix}$$

#### Diagonal matrix

A square matrix in which all elements are zero except those in the main or principal diagonal. $$\begin{bmatrix}a & 0 & \cdots & 0\\0 & b & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\0 & 0 & \cdots & c \end{bmatrix}$$

#### Zero (Null) matrix

A matrix where all the elements are $0$. Zero matrices are generally denoted by $\mathbf{0}$ (with subscript dimensions if needed for clarity). $$\begin{bmatrix}0 & 0 & \cdots & 0\\0 & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\0 & 0 & \cdots & 0 \end{bmatrix}$$

#### Row matrix and column matrix

A matrix consisting of a single row is called a row matrix and a matrix consisting of a single column is called a column matrix. $$\text{ Column matrix, }\begin{bmatrix}x_1 \\x_2 \\ \vdots\\x_m \end{bmatrix}$$ $$\text{ Row matrix, }\begin{bmatrix}y_1 & y_2 & \cdots & y_n \end{bmatrix}$$
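The special matrices above are straightforward to build programmatically. A minimal sketch in plain Python, storing a matrix as a list of rows (the function names are illustrative, not from any particular library):

```python
def identity(n):
    """Unit (identity) matrix of order n: 1 on the main diagonal, 0 elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def zeros(m, n):
    """m x n zero (null) matrix."""
    return [[0] * n for _ in range(m)]

def diagonal(entries):
    """Square diagonal matrix with the given main-diagonal entries."""
    n = len(entries)
    return [[entries[i] if i == j else 0 for j in range(n)] for i in range(n)]
```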

### Basic matrix operations

#### Equal matrices

Two matrices, $\mathbf{A}$ and $\mathbf{B}$, are equal if and only if they have the same dimension and their corresponding elements are equal. $$a_{ik}=b_{ik}$$

#### Addition and subtraction of matrices

Matrices can only be added or subtracted if they are of the same dimension.

The sum of two or several $m\times n$ matrices is an $m\times n$ matrix, the elements of which are equal to the sums of the corresponding elements of the matrices being added. $$a_{ik}+b_{ik}+c_{ik}+\cdots =s_{ik}$$

The difference between two $m\times n$ matrices, $\mathbf{A}$ and $\mathbf{B}$, is an $m\times n$ matrix, $\mathbf{D}$ whose elements are the difference between the corresponding elements of $\mathbf{A}$ and $\mathbf{B}$. $$a_{ik}-b_{ik}=d_{ik}$$
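Both operations reduce to element-wise arithmetic; a minimal sketch, assuming matrices are stored as lists of rows:

```python
def mat_add(A, B):
    """Element-wise sum of two matrices of the same dimension."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "dimensions must match"
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    """Element-wise difference A - B of two matrices of the same dimension."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "dimensions must match"
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
```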

#### Multiplication of matrices by a scalar

The product of a scalar, $k$, and an $m\times n$ matrix $\mathbf{A}$ is an $m\times n$ matrix whose elements are equal to the product of the scalar with the corresponding element of $\mathbf{A}$. $$k\mathbf{A}=\left[ka_{ik}\right]$$
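In code, scalar multiplication is a single element-wise map; a sketch under the same list-of-rows representation:

```python
def scalar_mul(k, A):
    """kA: multiply every element of the matrix A by the scalar k."""
    return [[k * a for a in row] for row in A]
```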

#### Product of matrices

The product of two rectangular conformable matrices of dimension $m_1\times n_1$ and $m_2\times n_2$ is a rectangular matrix of dimension $m_1\times n_2$. The product can be formed only if $n_1=m_2$. The elements of the product are the sum of products of the inner elements of the original matrices. The following example illustrates this where the first matrix has dimension $2\times 3$ and the second is $3\times 2$. $$\begin{bmatrix}a_{11} & a_{12} & a_{13}\\a_{21} & a_{22} & a_{23}\end{bmatrix}\begin{bmatrix}b_{11} & b_{12}\\b_{21} & b_{22}\\b_{31} & b_{32}\end{bmatrix}=\begin{bmatrix}a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31} & a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32}\\a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31} & a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32}\end{bmatrix}$$
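The rule above (element $(i,k)$ of the product is the sum of products of row $i$ of the first matrix with column $k$ of the second) can be sketched as:

```python
def mat_mul(A, B):
    """Product of an m1 x n1 matrix A and an m2 x n2 matrix B.
    Requires n1 == m2; element (i, k) of the result is the
    sum over j of A[i][j] * B[j][k]."""
    n1, m2 = len(A[0]), len(B)
    assert n1 == m2, "inner dimensions must agree"
    return [[sum(A[i][j] * B[j][k] for j in range(n1))
             for k in range(len(B[0]))]
            for i in range(len(A))]
```

Multiplying a $2\times 3$ matrix by a $3\times 2$ matrix this way yields a $2\times 2$ matrix, matching the displayed formula.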

### Transpose of a matrix

The transpose, $\mathbf{A}^T$, of a matrix $\mathbf{A}$ has each row identical with the corresponding column of $\mathbf{A}$; that is, rows and columns are interchanged.

Some examples of this:

If $\mathbf{A}=\begin{bmatrix}a\\b\end{bmatrix}$, then $\mathbf{A}^T=\begin{bmatrix}a & b\end{bmatrix}$.

If $\mathbf{B}=\begin{bmatrix}a & b\\c & d\end{bmatrix}$, then $\mathbf{B}^T=\begin{bmatrix}a & c\\b & d\end{bmatrix}$.

If $\mathbf{C}=\begin{bmatrix}a & b\end{bmatrix}$, then $\mathbf{C}^T=\begin{bmatrix}a\\b\end{bmatrix}$.
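The interchange of rows and columns can be sketched in one line (Python's built-in `zip` transposes a list of rows):

```python
def transpose(A):
    """A^T: row i of A becomes column i of the result."""
    return [list(col) for col in zip(*A)]
```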

#### Some special cases of transpose

• Transpose of a transpose $$\left(\mathbf{A}^T\right)^T=\mathbf{A}.$$
• Transpose of zero matrix $$\mathbf{0}^T=\mathbf{0}.$$
• Transpose of unit matrix $$\mathbf{I}^T=\mathbf{I}.$$
• Transpose of diagonal matrix $$\mathbf{D}^T=\mathbf{D}$$ where $\mathbf{D}$ is any diagonal matrix.

### Inverse of a matrix

The inverse $\mathbf{A}^{-1}$ of a square matrix $\mathbf{A}$ is the matrix defined (uniquely) by the conditions $$\mathbf{A}^{-1}\mathbf{A}=\mathbf{I}=\mathbf{A}\mathbf{A}^{-1}$$ if such a matrix exists. It can be shown that $\mathbf{A}^{-1}$ exists if and only if $\det{\mathbf{A}}\ne 0$ (see section below for a definition of the determinant of a matrix).

#### Some special cases of inverse

• Inverse of inverse $$\left(\mathbf{A}^{-1}\right)^{-1}=\mathbf{A}$$ provided that $\mathbf{A}^{-1}$ exists.
• Inverse of $\mathbf{0}$ does not exist.
• Inverse of unit matrix $$\mathbf{I}^{-1}=\mathbf{I}.$$
• Inverse of diagonal matrix $$\begin{bmatrix}a & 0 & \cdots & 0\\0 & b & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\0 & 0 & \cdots & c \end{bmatrix}^{-1}=\begin{bmatrix}\frac1a & 0 & \cdots & 0\\0 & \frac1b & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\0 & 0 & \cdots & \frac1c \end{bmatrix}$$
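For the $2\times 2$ case the inverse has the well-known closed form $\frac{1}{ad-bc}\begin{bmatrix}d & -b\\-c & a\end{bmatrix}$, which also makes the condition $\det\mathbf{A}\ne 0$ concrete. A sketch (the function name is illustrative):

```python
def inverse_2x2(A):
    """Inverse of [[a, b], [c, d]] via (1/(ad - bc)) [[d, -b], [-c, a]].
    Raises ValueError when the determinant is zero (no inverse exists)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular; inverse does not exist")
    return [[d / det, -b / det], [-c / det, a / det]]
```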

### Converting a system of linear equations to matrices

When converting a system of linear equations to matrix form, each equation becomes a row of the corresponding matrix and each variable becomes a column, with the coefficients of the equations forming the entries. If the constant terms from the right-hand sides are included as an additional column, the result is called an augmented matrix; otherwise the matrix is called the coefficient matrix.

Consider the following system of linear equations: \begin{align*}2x+y-z&=3\\x+4y+6z&=1\\5x-2y-3z&=-4\end{align*} When converted to matrix form this becomes $$\begin{bmatrix}2 & 1 & -1\\1 & 4 & 6\\5 &-2 & -3\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix} = \begin{bmatrix}3\\1\\-4\end{bmatrix}$$ with augmented matrix $$\left[\begin{array}{ccc|c}2 & 1 & -1 & 3\\1 & 4 & 6 & 1\\5 &-2 & -3 & -4\end{array}\right]$$
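Building these objects from the example system is mechanical; a sketch using lists of rows:

```python
# Coefficient matrix and right-hand side of the example system
# 2x + y - z = 3,  x + 4y + 6z = 1,  5x - 2y - 3z = -4
A = [[2, 1, -1],
     [1, 4, 6],
     [5, -2, -3]]
b = [3, 1, -4]

# Augmented matrix: append the right-hand side as an extra column
augmented = [row + [rhs] for row, rhs in zip(A, b)]
```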

### Determinant of a matrix

The determinant of a matrix is a number obtained from the elements of a matrix by specified operations which is characteristic of the matrix. Determinants are defined only for square matrices. For a square matrix $\mathbf{A}$, the determinant is denoted by $\det \mathbf{A}$ or $|\mathbf{A}|$. The determinant of the $2\times 2$ matrix $$\mathbf{A}=\begin{bmatrix}a_{11} & a_{12}\\a_{21} & a_{22}\end{bmatrix}$$ is given by $$|\mathbf{A}|=a_{11}a_{22}-a_{12}a_{21}.$$
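The $2\times 2$ formula translates directly:

```python
def det2(A):
    """Determinant of a 2x2 matrix: a11*a22 - a12*a21."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]
```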

For a $3\times 3$ matrix $$\mathbf{A}=\begin{bmatrix}a_{11} & a_{12} & a_{13}\\a_{21} & a_{22} & a_{23}\\a_{31} & a_{32} & a_{33}\end{bmatrix}$$ the determinant is given by the cofactor expansion along the first column, $$|\mathbf{A}|=a_{11}\mathbf{A}_{11}+a_{21}\mathbf{A}_{21}+a_{31}\mathbf{A}_{31}$$ where $$\mathbf{A}_{11}=\begin{vmatrix} a_{22} & a_{23}\\a_{32} & a_{33}\end{vmatrix},\;\mathbf{A}_{21}=-\begin{vmatrix} a_{12} & a_{13}\\a_{32} & a_{33}\end{vmatrix},\;\mathbf{A}_{31}=\begin{vmatrix} a_{12} & a_{13}\\a_{22} & a_{23}\end{vmatrix}$$
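The expansion along the first column can be sketched as:

```python
def det3(A):
    """Determinant of a 3x3 matrix by cofactor expansion along the
    first column: |A| = a11*A11 + a21*A21 + a31*A31, with cofactor
    signs +, -, + down the column."""
    def det2(M):
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]
    A11 = det2([[A[1][1], A[1][2]], [A[2][1], A[2][2]]])
    A21 = -det2([[A[0][1], A[0][2]], [A[2][1], A[2][2]]])
    A31 = det2([[A[0][1], A[0][2]], [A[1][1], A[1][2]]])
    return A[0][0] * A11 + A[1][0] * A21 + A[2][0] * A31
```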

### Solution of linear equations by determinants

Consider a system of linear equations in two variables $x$ and $y$, \begin{align}a_1x+b_1y&=c_1\\a_2x+b_2y&=c_2\end{align}

To solve the above equations using conventional techniques, multiply equation (1) by $b_2$ and equation (2) by $b_1$ and subtract to obtain \begin{align*} x(a_1b_2-a_2b_1)&=b_2c_1-b_1c_2\\x&=\frac{b_2c_1-b_1c_2}{a_1b_2-a_2b_1}.\end{align*}

To solve for $y$, multiply equation (1) by $a_2$ and equation (2) by $a_1$ and subtract the results to get \begin{align*} y(a_2b_1-a_1b_2)&=a_2c_1-a_1c_2\\[3 pt]y&=\frac{a_2c_1-a_1c_2}{a_2b_1-a_1b_2}\\[3 pt]&=\frac{a_1c_2-a_2c_1}{a_1b_2-a_2b_1}.\end{align*}

The solutions for $x$ and $y$ of the system of equations (1) and (2) can be written directly in terms of determinants without any algebraic operations as \begin{align*}x&=\frac{\begin{vmatrix}c_1 & b_1\\c_2 & b_2\end{vmatrix}}{\begin{vmatrix}a_1 & b_1\\a_2 & b_2\end{vmatrix}}\\[3 pt]y&=\frac{\begin{vmatrix}a_1 & c_1\\a_2 & c_2\end{vmatrix}}{\begin{vmatrix}a_1 & b_1\\a_2 & b_2\end{vmatrix}}\end{align*}

This result is called Cramer's rule.

Here $\begin{vmatrix}a_1 & b_1\\a_2 & b_2\end{vmatrix} = |\mathbf{A}|$ is the determinant of the coefficient matrix of equations (1) and (2). If $\begin{vmatrix}c_1 & b_1\\c_2 & b_2\end{vmatrix} = |\mathbf{A}_x|$ and $\begin{vmatrix}a_1 & c_1\\a_2 & c_2\end{vmatrix} = |\mathbf{A}_y|$, then $$x=\frac{|\mathbf{A}_x|}{|\mathbf{A}|}\text{ and }y=\frac{|\mathbf{A}_y|}{|\mathbf{A}|}$$
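Cramer's rule for the two-variable system can be sketched directly from these determinants:

```python
def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1          # |A|, determinant of the coefficient matrix
    if det == 0:
        raise ValueError("coefficient determinant is zero; Cramer's rule does not apply")
    x = (c1 * b2 - c2 * b1) / det    # |A_x| / |A|
    y = (a1 * c2 - a2 * c1) / det    # |A_y| / |A|
    return x, y
```

For instance, the system $x+y=3$, $x-y=1$ gives $x=2$, $y=1$.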