
 Chapter 6: Determinants

I will work through several problems that involve finding a determinant, illustrating the different methods for doing so.

6.1

2. For any 2x2 matrix A = [[a  b][c  d]], det(A) = ad - bc, so det(A) = (2)(5) - (3)(4) = -2. Since this is not 0, A is invertible.
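
If you want to double-check this on a computer, here is a quick numpy sketch. It assumes A = [[2  3][4  5]], which is what the numbers in the computation above imply:

 import numpy as np

 # Matrix implied by the computation above: a = 2, b = 3, c = 4, d = 5
 A = np.array([[2, 3],
               [4, 5]])
 print(np.linalg.det(A))   # approximately -2.0; nonzero, so A is invertible
 print(2*5 - 3*4)          # ad - bc = -2, matching the formula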

5. Let's use Laplace Expansion and expand across the first row. Remember to alternate signs.

     det(A) = (1)(2)det(A11) + (-1)(5)det(A12) + (1)(7)det(A13)

                 = (2 * 55) + (-5 * 0) + (7 * 0) = 110          The matrix is invertible
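
Since the matrix for this problem is not reproduced here, the following is a generic Python sketch of cofactor (Laplace) expansion along the first row, tried out on a made-up 3x3 matrix:

 import numpy as np

 def laplace_det(M):
     # Determinant by cofactor expansion along the first row (signs alternate)
     M = np.asarray(M, dtype=float)
     n = M.shape[0]
     if n == 1:
         return M[0, 0]
     total = 0.0
     for j in range(n):
         minor = np.delete(np.delete(M, 0, axis=0), j, axis=1)  # drop row 0 and column j
         total += (-1) ** j * M[0, j] * laplace_det(minor)
     return total

 B = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]      # made-up example
 print(laplace_det(B), np.linalg.det(B))     # both are -3.0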

6. The determinant of an upper- or lower-triangular matrix is simply the product of its diagonal entries.

    det(A) = (6)(4)(1) = 24          The matrix is invertible
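
A quick check of this fact on a made-up upper-triangular matrix with the same diagonal (the off-diagonal entries of the actual problem are not shown above):

 import numpy as np

 A = np.array([[6, 2, 5],
               [0, 4, 7],
               [0, 0, 1]])
 print(np.prod(np.diag(A)))   # 24, the product of the diagonal entries
 print(np.linalg.det(A))      # approximately 24.0, the same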

8. For any 3x3 matrix A with column vectors u, v, w, the determinant of A is u · (v x w).

   det(A) = [1  1  3] · ([2  1  2] x [3  1  1])

               = [1  1  3] · [-1  4  -1]

               = 0               The matrix is not invertible
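
Here is the same computation in numpy, using the vectors given above:

 import numpy as np

 u = np.array([1, 1, 3])
 v = np.array([2, 1, 2])
 w = np.array([3, 1, 1])
 print(np.cross(v, w))                             # [-1  4 -1]
 print(np.dot(u, np.cross(v, w)))                  # 0, so the matrix is not invertible
 print(np.linalg.det(np.column_stack([u, v, w])))  # approximately 0 as well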

41. Remember, det(A) = Σ (sgn P)(prod P). In this matrix, two nonzero patterns exist: one with entries 2, 3, 1, 2, 4 and 5 inversions, and one with entries 2, 3, 3, 2, 2 and 8 inversions.

       det(A) = (-1)^5(2 * 3 * 1 * 2 * 4) + (-1)^8(2 * 3 * 3 * 2 * 2)

                   = (-48) + (72) = 24
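
The matrix for this problem is not reproduced here, but the pattern formula itself is easy to code. Below is a generic Python sketch that sums (sgn P)(prod P) over all patterns, tried on a made-up 4x4 matrix that has a single nonzero pattern:

 from itertools import permutations
 import numpy as np

 def det_by_patterns(M):
     # Sum over all patterns: sign from the inversion count, times the product of the entries
     M = np.asarray(M)
     n = M.shape[0]
     total = 0
     for p in permutations(range(n)):
         inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
         prod = 1
         for row, col in enumerate(p):
             prod *= M[row, col]
         total += (-1) ** inversions * prod
     return total

 A = [[0, 2, 0, 0], [0, 0, 3, 0], [1, 0, 0, 0], [0, 0, 0, 4]]   # made-up example
 print(det_by_patterns(A))                              # 24
 print(round(np.linalg.det(np.array(A, dtype=float))))  # 24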


6.3

22. Cramer's Rule states that, in the system Ax = b where A is an invertible n x n matrix, the components xi of the solution vector are xi = det(Ab,i)/det(A), where Ab,i is the matrix obtained by replacing the ith column of A with b.

x1 = det([[1  7][3 11]])/det(A) = (-10) / (5) = -2

x2 = det([[3  1][4  3]])/det(A) = (5) / (5) = 1

x = [-2  1]
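
Here is Cramer's Rule carried out in numpy. The system is inferred from the determinants above: A = [[3  7][4  11]] and b = [1  3]:

 import numpy as np

 A = np.array([[3, 7],
               [4, 11]])
 b = np.array([1, 3])
 x = np.empty(2)
 for i in range(2):
     Ai = A.copy()
     Ai[:, i] = b                  # replace the ith column of A with b
     x[i] = np.linalg.det(Ai) / np.linalg.det(A)
 print(x)                          # [-2.  1.]
 print(np.linalg.solve(A, b))      # same answer, as a sanity check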


Chapter 7: Eigenvalues and Eigenvectors

7.1

16. Since this transformation takes a vector and rotates it 180 degrees (reflects it through the origin), -1 is the only eigenvalue, and all nonzero vectors in R^2 are eigenvectors.
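
To see this numerically: rotation by 180 degrees is multiplication by -I, so every nonzero vector is sent to -1 times itself.

 import numpy as np

 A = np.array([[-1, 0],
               [0, -1]])           # rotation by 180 degrees in R^2
 print(np.linalg.eig(A)[0])        # [-1. -1.]
 v = np.array([2.0, 5.0])          # any nonzero vector
 print(A @ v)                      # [-2. -5.] = (-1) * v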


7.2

2. Since the eigenvalues of a triangular matrix are its diagonal entries, and each of the two distinct diagonal entries appears twice, λ = 1 (algebraic multiplicity 2) and λ = 2 (algebraic multiplicity 2).

3. The characteristic polynomial of a 2x2 matrix is λ^2 - (tr A)λ + det(A) = λ^2 - 4λ + 3 = 0

           (λ - 3)(λ - 1) = 0

           λ = 1 (algebraic multiplicity 1), 3 (algebraic multiplicity 1)
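
A small sympy check of the trace/determinant shortcut. The matrix from this problem is not reproduced above, so the sketch uses a made-up 2x2 matrix with trace 4 and determinant 3:

 import sympy as sp

 A = sp.Matrix([[2, 1],
                [1, 2]])                     # made-up: trace 4, determinant 3
 lam = sp.symbols('lambda')
 p = lam**2 - A.trace()*lam + A.det()        # lambda^2 - 4*lambda + 3
 print(sp.factor(p))                         # (lambda - 1)*(lambda - 3)
 print(A.eigenvals())                        # each eigenvalue (1 and 3) has algebraic multiplicity 1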

12. fA(λ) = det(λI4 - A) = det([[ (λ - 2)  2  0  0 ][ -1  (λ + 1)  0  0 ][ 0  0  (λ - 3)  4 ][ 0  0  -2  (λ + 3) ]]). We can split this matrix into 2x2 blocks, and since only the diagonal blocks have nonzero entries, the determinant of the original matrix will be the product of the determinants of the diagonal blocks.

                   det(λI4 - A) = det([[ (λ - 2)  2 ][ -1  (λ + 1) ]]) * det([[ (λ - 3)  4 ][ -2  (λ + 3) ]])

                                       = ((λ - 2)(λ + 1) + 2) * ((λ - 3)(λ + 3) + 8)

                                        = (λ^2 - λ)(λ^2 - 1) = λ(λ - 1)^2(λ + 1) = 0

                                              λ1 = 0 (algebraic multiplicity 1)

                                      λ2 = λ3 = 1 (algebraic multiplicity 2)

                                              λ4 = -1 (algebraic multiplicity 1)
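
You can verify the block computation with sympy, reading A off from the matrix λI4 - A written out above:

 import sympy as sp

 A = sp.Matrix([[2, -2, 0, 0],
                [1, -1, 0, 0],
                [0, 0, 3, -4],
                [0, 0, 2, -3]])
 lam = sp.symbols('lambda')
 print(sp.factor((lam*sp.eye(4) - A).det()))   # lambda*(lambda - 1)**2*(lambda + 1)
 print(A.eigenvals())                          # eigenvalues 0, 1, -1 with algebraic multiplicities 1, 2, 1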


7.3

1. fA(λ) = λ^2 - 16λ + 63 = 0

                  (λ - 9)(λ - 7) = 0

                   λ1 = 7   λ2 = 9 (both with algebraic multiplicity 1)

E7 = ker(7I2 - A) = ker [[ 0  -8 ][ 0  -2 ]] = span{ [1  0] } (geometric multiplicity 1)

E9 = ker(9I2 - A) = ker [[ 2  -8 ][ 0  0 ]] = span{ [4  1] } (geometric multiplicity 1)

Since the sum of the geometric multiplicities is equal to the sum of the algebraic multiplicities, an eigenbasis exists.

Eigenbasis: { [1  0], [4  1] }
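
A sympy check of the eigenvalues, eigenvectors, and multiplicities. The matrix is inferred from the kernels above: 7I2 - A = [[0  -8][0  -2]] gives A = [[7  8][0  9]].

 import sympy as sp

 A = sp.Matrix([[7, 8],
                [0, 9]])
 # eigenvects() returns (eigenvalue, algebraic multiplicity, basis of the eigenspace)
 for value, alg_mult, basis in A.eigenvects():
     print(value, alg_mult, [list(v) for v in basis])
 # 7 1 [[1, 0]]
 # 9 1 [[4, 1]]
 print(A.is_diagonalizable())   # True, so an eigenbasis exists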

15. fA(λ) = det(A - λI3). Two nonzero patterns exist, so det(A - λI3) = (-1)^0(-1 - λ)(-λ)(3 - λ) + (-1)^3(-4)(-λ)(1) = 0

           (-λ)(λ - 1)^2 = 0

           λ1 = 0 (algebraic multiplicity 1)

           λ2 = 1 (algebraic multiplicity 2)

E0 = ker(A) = span { [0  1  0] } (geometric multiplicity 1)

E1 = ker(A - I3) = span { [1  -1  2] } (geometric multiplicity 1)

Since the geometric multiplicities add up to 2, but the algebraic multiplicities add up to 3, no eigenbasis exists.

17. fA(λ) = det(A - λI4). Only one nonzero pattern exists, so det(A - λI4) = (-λ)^2(1 - λ)^2 = 0

           λ1 = 0 (algebraic multiplicity 2)

           λ2 = 1 (algebraic multiplicity 2)

E0 = ker(A) = span { [1  0  0  0], [0  1  -1  0] } (geometric multiplicity 2)

E1 = ker(A - I4) = span { [0  1  0  0], [0  0  0  1] } (geometric multiplicity 2)

Since the sum of the geometric multiplicities equals the sum of the algebraic multiplicities, an eigenbasis exists.

Eigenbasis: { [1  0  0  0], [0  1  -1  0], [0  1  0  0], [0  0  0  1] }


7.4

10. fA(λ) = λ^2 - 2λ + 1 = 0

           (λ - 1)^2 = 0

           λ = 1 (algebraic multiplicity 2)

E1 = ker(A - I2) = span { [-2  1] } (geometric multiplicity 1)

Since the geometric multiplicities sum to 1, but the algebraic multiplicities sum to 2, no eigenbasis exists and the matrix is not diagonalizable.
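
The matrix for this problem is not reproduced above, so here is the same multiplicity check in sympy on a made-up matrix that is consistent with the data (trace 2, determinant 1, and A - I2 has kernel spanned by [-2  1]):

 import sympy as sp

 A = sp.Matrix([[3, 4],
                [-1, -1]])                      # made-up, but consistent with the problem data
 lam = sp.symbols('lambda')
 print(sp.factor((A - lam*sp.eye(2)).det()))    # (lambda - 1)**2
 print((A - sp.eye(2)).nullspace())             # one basis vector, [-2, 1]: geometric multiplicity 1
 print(A.is_diagonalizable())                   # False, so no eigenbasis exists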

16. fA(λ) = det(A - λI3). There are two nonzero patterns, so det(A - λI3) = (-1)^0(4 - λ)(1 - λ)^2 + (-1)^3(1)(1 - λ)(-2) = 0

           (1 - λ)(λ^2 - 5λ + 6) = (1 - λ)(λ - 2)(λ - 3) = 0

           λ1 = 1 (algebraic multiplicity 1)

           λ2 = 2 (algebraic multiplicity 1)

           λ3 = 3 (algebraic multiplicity 1)

E1 = ker(A - I3) = span { [0  1  0] } (geometric multiplicity 1)

E2 = ker(A - 2I3) = span { [1  0  1] } (geometric multiplicity 1)

E3 = ker(A - 3I3) = span { [2  0  1] } (geometric multiplicity 1)

Since the geometric multiplicities add up to the same result as the algebraic multiplicities, an eigenbasis exists and the matrix is diagonalizable.

So, the matrix S is formed by concatenating the basis vectors of the eigenspaces (its column vectors form an eigenbasis).

S = [[0  1  2][1  0  0][0  1  1]]

To get the diagonalized matrix D, we find S^-1.

S^-1 = [[0  1  0][-1  0  2][1  0  -1]]

D = S^-1AS = [[1  0  0][0  2  0][0  0  3]]

Notice that D is a diagonal matrix with the eigenvalues of A as its diagonal entries.
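
You can reproduce the whole computation in sympy. Since the original matrix A is not written out in this problem, it is reconstructed here as S D S^-1 from the S and D above (an assumption; it gives A = [[4  0  -2][0  1  0][1  0  1]]):

 import sympy as sp

 S = sp.Matrix([[0, 1, 2],
                [1, 0, 0],
                [0, 1, 1]])
 D = sp.diag(1, 2, 3)
 A = S * D * S.inv()              # reconstructed A = [[4, 0, -2], [0, 1, 0], [1, 0, 1]]
 print(S.inv())                   # Matrix([[0, 1, 0], [-1, 0, 2], [1, 0, -1]])
 print(S.inv() * A * S)           # diag(1, 2, 3), as expected
 for value, alg_mult, basis in A.eigenvects():
     print(value, [list(v) for v in basis])   # 1: [0, 1, 0]   2: [1, 0, 1]   3: [2, 0, 1]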



I hope this study guide helped everybody who used it. Good luck on the final!


by Dan Coroian (dcoroian)
