Inverse of a Matrix

Consider the $ n \times n $ matrix $ A $. The inverse of $ A $, written $ A^{-1} $, is the matrix satisfying $ AA^{-1} = A^{-1}A = I_n $, where $ I_n $ is the $ n \times n $ identity matrix.

For example, let

$ A= \begin{bmatrix} 1 & 4 \\ 1 & 3 \end{bmatrix} $

Then the inverse of $ A $ is

$ A^{-1} = \begin{bmatrix} -3 & 4 \\ 1 & -1 \end{bmatrix} $

This can be verified by matrix multiplication.

$ AA^{-1}=\begin{bmatrix}1 & 4 \\1 & 3 \end{bmatrix}\begin{bmatrix} -3 & 4 \\ 1 & -1 \end{bmatrix}=\begin{bmatrix}1(-3)+4(1) & 1(4)+4(-1) \\1(-3)+3(1) & 1(4)+3(-1) \end{bmatrix}=\begin{bmatrix}1 & 0\\0 & 1 \end{bmatrix} $

Although not shown, multiplying in the other order, $ A^{-1}A $, also yields the identity.
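For a quick numerical check, here is a minimal sketch in Python, assuming NumPy is available (any linear algebra library would do):

```python
import numpy as np

A = np.array([[1, 4],
              [1, 3]])
A_inv = np.array([[-3, 4],
                  [1, -1]])

# Both products should equal the 2x2 identity matrix.
print(A @ A_inv)   # [[1 0]
                   #  [0 1]]
print(A_inv @ A)   # [[1 0]
                   #  [0 1]]
```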

Ways to Find the Inverse

There is no trivial way to find the inverse of an arbitrary matrix. In fact, a matrix that is not square cannot have an inverse. Even a square matrix is invertible only if its reduced row echelon form is the identity matrix. Provided that the matrix is invertible, you can use one of the following methods to find the inverse.
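Before applying either method, you can confirm invertibility numerically. A minimal sketch, again assuming NumPy: a square matrix is invertible exactly when it has full rank, which is equivalent to its reduced row echelon form being the identity.

```python
import numpy as np

A = np.array([[1, 4],
              [1, 3]])

# Full rank (rank equal to the number of rows) is equivalent to
# the reduced row echelon form being the identity matrix.
print(np.linalg.matrix_rank(A) == A.shape[0])  # True
```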

  • Use the Determinant Trick

For a simple $ 2 \times 2 $ matrix with nonzero determinant, you can find the inverse by swapping the upper-left and lower-right elements, multiplying the upper-right and lower-left elements by negative one, and dividing everything by the determinant. Observe:

Let

$ A=\begin{bmatrix}a & b \\c & d \end{bmatrix} $

$ A^{-1}=\frac{1}{ad-bc}\begin{bmatrix}d & -b \\-c & a \end{bmatrix} $

Then

$ \begin{array}{lcl}AA^{-1}&=&\begin{bmatrix}a & b \\c & d \end{bmatrix}\frac{1}{ad-bc}\begin{bmatrix}d & -b \\-c & a \end{bmatrix}\\ &=&\frac{1}{ad-bc}\begin{bmatrix}ad+b(-c) & a(-b)+b(a) \\cd+d(-c) & c(-b)+d(a) \end{bmatrix}\\ &=&\frac{1}{ad-bc}\begin{bmatrix}ad-bc & 0 \\0 & ad-bc \end{bmatrix}\\ &=&\begin{bmatrix}1 & 0\\0 & 1 \end{bmatrix}\end{array} $

Although not shown, the product in the other order, $ A^{-1}A $, also yields the identity.
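Here is the same trick as a small Python function (the name inverse_2x2 is my own, for illustration):

```python
def inverse_2x2(a, b, c, d):
    """Invert [[a, b], [c, d]] by the determinant trick."""
    det = a * d - b * c
    if det == 0:
        # A zero determinant means the matrix is not invertible.
        raise ValueError("matrix is singular")
    # Swap a and d, negate b and c, divide everything by det.
    return [[d / det, -b / det],
            [-c / det, a / det]]

# The example matrix from above: det = 1*3 - 4*1 = -1.
print(inverse_2x2(1, 4, 1, 3))  # [[-3.0, 4.0], [1.0, -1.0]]
```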

  • Identity-Extended Matrix and Row Operations

You can augment the matrix with an appropriately sized identity matrix, forming $ [\,A \mid I\,] $, and perform row operations until the left half is the identity matrix. The right half is then the inverse, leaving $ [\,I \mid A^{-1}\,] $.
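For instance, starting from the example matrix above:

$ \left[\begin{array}{cc|cc}1 & 4 & 1 & 0\\1 & 3 & 0 & 1\end{array}\right] \xrightarrow{R_2 \to R_2 - R_1} \left[\begin{array}{cc|cc}1 & 4 & 1 & 0\\0 & -1 & -1 & 1\end{array}\right] \xrightarrow{R_2 \to -R_2} \left[\begin{array}{cc|cc}1 & 4 & 1 & 0\\0 & 1 & 1 & -1\end{array}\right] \xrightarrow{R_1 \to R_1 - 4R_2} \left[\begin{array}{cc|cc}1 & 0 & -3 & 4\\0 & 1 & 1 & -1\end{array}\right] $

The right half is exactly the $ A^{-1} $ found earlier.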

Purpose of the Inverse

Provided a matrix has an inverse, the equation $ A\vec v = \vec b $ it defines has a unique solution $ \vec v $ for every $ \vec b $ in the codomain.

In other words, you can multiply both sides of the matrix equation by the inverse. On the side with the original matrix, the two cancel and leave the identity matrix (hurrah!), and on the other side you get your solution.

Suppose we use the first example matrix above to define a linear transformation:

$ T(\vec v)=A\vec v =\vec b $

$ \begin{bmatrix}1 & 4 \\1 & 3 \end{bmatrix} \vec v = \begin{bmatrix}1\\2\end{bmatrix} $

$ \begin{bmatrix} -3 & 4 \\ 1 & -1 \end{bmatrix}\begin{bmatrix}1 & 4 \\1 & 3 \end{bmatrix} \vec v = \begin{bmatrix} -3 & 4 \\ 1 & -1 \end{bmatrix}\begin{bmatrix}1\\2\end{bmatrix} $

$ \begin{bmatrix}1 & 0\\0 & 1 \end{bmatrix} \vec v = \begin{bmatrix}-3(1)+4(2)\\1(1)+(-1)(2)\end{bmatrix} $

$ \vec v = \begin{bmatrix}5\\-1\end{bmatrix} $
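The same solution as a Python sketch, once more assuming NumPy:

```python
import numpy as np

A = np.array([[1, 4],
              [1, 3]])
b = np.array([1, 2])

# Multiply both sides of A v = b by the inverse of A.
v = np.linalg.inv(A) @ b
print(v)  # [ 5. -1.]
```

In practice, np.linalg.solve(A, b) is preferred over forming the inverse explicitly, since it is faster and more numerically stable.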

So inverses of matrices are really nice for solving systems of linear equations.


