Homework 3 collaboration area

MA527 Fall 2013


Question from James Down Under (Jayling):

For page 329, Question 11: am I meant to calculate all the eigenvalues and eigenvectors, or just calculate the eigenvector corresponding to the given eigenvalue of 3?

Answer from Steve Bell:

Yes, you are only supposed to find the eigenvector for lambda=3. (The idea here is to spare you from finding the roots of a rather nasty 3rd degree polynomial.)

Oops! I reread the instructions for p. 329, #11 just now, and I think they give you that lambda = 3 hint so that you can factor out (lambda - 3) from the characteristic polynomial and find the other two roots via the quadratic formula. Now I think they really do want you to find all three roots and as many eigenvectors as you can. Since there has been some confusion about this question, I will not ask the graders to grade it. However, doing it will be good for you. Steve Bell

Jayling: thanks Steve, I did try the hard way first but then started to drown in the algebra.
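
A quick way to act on Steve's suggestion is to divide the characteristic polynomial by (lambda - 3) and then apply the quadratic formula to what is left. Here is a minimal NumPy sketch; the cubic below is a made-up stand-in, not the actual characteristic polynomial of #11.

<pre>
import numpy as np

# Hypothetical characteristic polynomial lambda^3 - 6 lambda^2 + 11 lambda - 6,
# a stand-in for the one in #11 (coefficients listed highest degree first).
char_poly = [1, -6, 11, -6]

# Factor out the known root lambda = 3 by dividing by (lambda - 3).
quotient, remainder = np.polydiv(char_poly, [1, -3])

print(remainder)           # ~[0], confirming lambda = 3 really is a root
print(np.roots(quotient))  # the remaining two roots, here 2 and 1
</pre>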


Question from a student:

Let 3x+4y+2z = 0; 2x+5z = 0 be the system for which I have to find the basis.

When row reduced, the above system gives [ 1 0 2.5 0 ; 0 1 -1.375 0 ].

Rank = number of nonzero rows = 2, so dim(row space) = 2; nullity = number of free variables = 1.

Q1: Aren't [ 1 0 2.5] and [0 1 -1.375] called the basis of the system?

A1 from Steve Bell:

Those two vectors form a basis for the ROW SPACE.

The solution space is only 1 dimensional (since the number of free variables is only 1).

Q2: Why is it that we get a basis by considering the free variable as some "parameter" and reducing further (and get one vector in this case)? Isn't that the solution of the system?

A2 from Steve Bell:

If the system row reduces to

[ 1 0  2.5   0 ]
[ 0 1 -1.375 0 ]

then z is the free variable. Let it be t. The top equation gives

x = -2.5 t

and the second equation gives

y = 1.375 t

and of course,

z = t.

So the general solution is

[ x ]   [ -2.5   ]
[ y ] = [  1.375 ] t
[ z ]   [  1     ]

Thus, you can find the solution from the row echelon matrix, but I wouldn't say that you can read it off from there -- not without practice, at least.
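
If you want to double-check this kind of computation, here is a small NumPy sketch using the coefficient matrix from the question above; it confirms that the rank is 2 and that the vector Steve derives really spans the solution space.

<pre>
import numpy as np

# Coefficient matrix of 3x + 4y + 2z = 0 and 2x + 5z = 0
A = np.array([[3.0, 4.0, 2.0],
              [2.0, 0.0, 5.0]])

print(np.linalg.matrix_rank(A))  # 2, so nullity = 3 - 2 = 1

# Basis vector of the solution space read off from the row echelon form
v = np.array([-2.5, 1.375, 1.0])

print(A @ v)  # ~[0, 0], so v solves the homogeneous system
</pre>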


Question from a student:

On problem 11, I swapped rows 1 and 2 during row reduction and my final solution has x1 and x2 swapped. Do I need to swap back any row swaps or did I make a mistake along the way? Tlouvar

Eun Young discussed this issue here in a way that is slightly beyond the scope of our course, so I've moved it here:

Remark from Eun Young

Remark from Steve Bell:

Step 1: Find the eigenvalues from det(A - lambda I)=0.

Step 2: Choose an eigenvalue lambda and plug it into the system

(A - lambda I) a = 0

and solve the system for the eigenvector a. Swapping rows does not change the answer, so you are safe here.

Sometimes you might think you are swapping entries of a vector when you are really multiplying by -1. For example, if [1, -1] is an eigenvector, so is [-1, 1].
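
Here is a minimal sketch of both points; the 2x2 matrix is purely illustrative, not one of the homework matrices. Any nonzero scalar multiple of an eigenvector (including -1 times it) is still an eigenvector, and swapping the rows of A - lambda I does not change the solutions.

<pre>
import numpy as np

# Illustrative matrix with eigenvalue 3 and eigenvector [1, -1]
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
lam = 3.0

# Both v and -v satisfy A v = lambda v
for v in (np.array([1.0, -1.0]), np.array([-1.0, 1.0])):
    print(np.allclose(A @ v, lam * v))  # True, True

# Swapping the rows of (A - lambda I) leaves the solution set unchanged
M = A - lam * np.eye(2)
v = np.array([1.0, -1.0])
print(np.allclose(M @ v, 0), np.allclose(M[::-1] @ v, 0))  # True True
</pre>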


Question from Dalec

For #2 on page 351, I found my spectrum to be lambda = 2i and -i. For the case where lambda = 2i, I am trying to find the eigenvectors, and I get a matrix

[ -i    1+i  | 0 ]
[ -1+i  -2i  | 0 ]

Is there a way to get a 0 in the bottom left, or is this simply overconstrained?

- Chris

Suggestions from Shawn Whitman

In one step: multiply row 1 by (1+i) and add to row 2.

In two easier steps: Multiply row 1 by i,

[1, (-1+i)]

[(-1+i), -2i]

then multiply row 1 by (1-i) and add to row 2.

[1, (-1+i)]

[0, 0]


RESPONSE from Mrhoade: Dalec,

The easiest way to clear complex denominators is to multiply numerator and denominator by the conjugate. I like to solve these problems by getting a '1' in the leading column for all rows by dividing each row by the value in column one. If you have any complex denominators, clear them by conjugate multiplication. Then you can kill all leading ones other than the pivot by subtracting row 1 from the row you are clearing. You shouldn't have any complex denominators now. For a 3x3 you can repeat the process by scaling rows 2 and 3 and then subtracting row 2 from row 3. From your eigenvalues, you should obtain the eigenvectors in this problem to be a_{-i} = [i-1, 2]^T and a_{2i} = [1-i, 1]^T. -Mick
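
A quick numerical check of those eigenpairs. The matrix A below is reconstructed by adding 2i I back to the (A - 2i I) block quoted above, so treat it as an inference from this thread rather than the textbook's statement of #2.

<pre>
import numpy as np

# Inferred from the thread: A - 2i I = [[-i, 1+i], [-1+i, -2i]]
A = np.array([[1j, 1 + 1j],
              [-1 + 1j, 0]])

print(np.linalg.eigvals(A))  # ~[2i, -i] in some order

# Check the eigenpairs suggested above: A v = lambda v
for lam, v in ((-1j, np.array([1j - 1, 2])),
               (2j, np.array([1 - 1j, 1]))):
    print(np.allclose(A @ v, lam * v))  # True, True
</pre>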



I have questions about determinants. For a homogeneous system, a non-zero determinant means we have only the trivial solution, while a zero determinant means we have infinitely many solutions. For a non-homogeneous system, when the determinant is non-zero we have exactly one solution. 1. What happens if a non-homogeneous system has a zero determinant? 2. From the determinant of a non-homogeneous system, can we know when the system doesn't have any solution?

- Farhan

Suggestion from Ryan Russon

Here is what I understand:

For question 1) If we are thinking of a system of equations, then by looking at the determinant, we are only looking at the left-hand side (LHS) of the system. If the determinant of that system is zero, it means that one or more of those equations are dependent on the others. Said differently, one or more of those expressions can be put together by combining the other expressions from the LHS of the system. This also means that any non-homogeneous system formed from the components of the LHS expressions may have more than one way to be combined to get the desired solution (i.e. $ \bar{x} $ is not unique). Now if the expanded system looks like:

[1  4  1  | 4] 
[0  2  0  | 1] 
[0  0  0  | 3] 

where you have a statement that "0 = 3", this is obviously a bad system.

For 2) From the determinant alone, it would not be possible to determine if the system has no solutions. If it is zero, the system may have infinitely many solutions or it may be inconsistent and have none.

Please others chime in and correct me if I am flawed in my thinking.
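
One concrete way to settle question 2 (a sketch, reusing the augmented matrix from the example above): when det(A) = 0, compare the rank of A with the rank of the augmented matrix [A | b]. Equal ranks mean infinitely many solutions; a larger augmented rank means no solution. The determinant alone cannot distinguish the two cases.

<pre>
import numpy as np

A = np.array([[1.0, 4.0, 1.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
b_bad  = np.array([4.0, 1.0, 3.0])  # the "0 = 3" system above: no solution
b_good = np.array([4.0, 1.0, 0.0])  # same A, but consistent: infinitely many

print(np.linalg.det(A))  # 0 in both cases, so det alone can't tell them apart

for b in (b_bad, b_good):
    rank_A  = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    print(rank_A, rank_Ab)  # 2 3 -> inconsistent; 2 2 -> infinitely many
</pre>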


Question from Ryan Russon:

About p. 338, #3, 6, and 8: are we supposed to be finding eigenvectors here? I noticed that they put them in the back of the book, although it only asks to find the spectrum of each, which was defined in 8.1 as the set of eigenvalues. I understand that we are using Thms 1-5 to prove our results, and it seems like #3 doesn't require finding eigenvectors to prove that it isn't any of the listed matrices. I hope I am not way off-base here. Thanks!

Follow-up question: On p. 338, #6, are we only to consider $ A \in \mathbb{R}^{n \times n} $ or are we to consider complex matrices as well? Thanks again!


Response from Jake Eppehimer:

I found that #8 is orthogonal, according to theorem 5. It took quite a bit of manipulation with trig identities, but I believe my answer is reasonable. For number 6, I am not exactly sure how to find the eigenvalues. I am considering substituting a couple prime numbers for k and a, but I am unsure if that is the correct way to do it. It doesn't say anything about eigenvectors, and you don't need them to determine what kind of matrix it is.

Response from Ryan Russon

I think I am a little brain dead today as I can answer my own follow-up: We are obviously not considering $ A \in \mathbb{C}^{n \times n} $ because we are talking about 'symmetric, skew-symmetric, and orthogonal' matrices which are only classes of real-valued matrices.

Response from Jayling: Ryan, I was confused with the definition of spectrum also. But Steve did state in the last lecture that it is the set of all eigenvalues of A. Also, I found via the index in the text that the spectrum is indeed this (see the first paragraph of page 324). In summary, there is no need to calculate eigenvectors if the question is asking for the spectrum.

Also, with Question 6 I am getting a very nasty looking characteristic equation, so I am not too sure how to solve for the algebraic roots.

Response from Hzillmer: Maybe I'm overthinking things, but for Question 3 here I got the e-vals to be 2+8i and 2-8i, which fails theorem 5's requirement that the absolute value be 1. Does anyone have a thought as to what I'm missing here?

Response from Kees

It's not orthogonal, so there is nothing to prove in theorem five. It can fail theorem 5 since it has no reason to pass it. Easy question: since it is none of the three, I do not have to prove any theorems, correct? I only have to prove, for example, the orthogonality theorems when it is orthogonal, etc., not every theorem every time?

Response from Jayling: If you just do the calculation AA^T you will see that you do not get the identity matrix, and therefore it is not orthogonal. If your eigenvalues are real then the matrix is symmetric, if your eigenvalues are pure imaginary or zero then the matrix is skew-symmetric, and if your eigenvalues are real or come in complex conjugate pairs with absolute value 1 then you have an orthogonal matrix.

From Steve Bell: James, you are mistaken about something in the paragraph above. A symmetric matrix has real eigenvalues, but the reverse is not a true statement, i.e., it is not true that having real eigenvalues forces a matrix to be symmetric. Same with the other types. These are one-way implications only.

From James Ayling: Thanks for the clarification, Steve.
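
Following Steve's point, the safe way to classify a matrix is to test the defining properties directly rather than to infer the class from the eigenvalues. A minimal sketch of those direct tests (the rotation matrix is illustrative, not one of the p. 338 problems):

<pre>
import numpy as np

def classify(A, tol=1e-10):
    """Test the definitions directly: A = A^T, A = -A^T, A A^T = I."""
    labels = []
    if np.allclose(A, A.T, atol=tol):
        labels.append("symmetric")
    if np.allclose(A, -A.T, atol=tol):
        labels.append("skew-symmetric")
    if np.allclose(A @ A.T, np.eye(A.shape[0]), atol=tol):
        labels.append("orthogonal")
    return labels or ["none of the three"]

# A rotation matrix: orthogonal, but neither symmetric nor skew-symmetric
t = 0.3
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(classify(R))  # ['orthogonal']
</pre>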

On 7.5 #2, I determined the eigenvalues to be -i and 2i.  I can't seem to clean up the math when putting these values back into A to determine the eigenvectors.  Any tips would be appreciated.  Tlouvar

from Jayling: it is probably just a factoring issue. Some tricks that I use: factor the denominator and numerator with i. You are not changing the answer because you are just multiplying by 1. Does this help?

As a sanity check, you can always hit your calculated eigenvector with the matrix A and see if you indeed get your calculated eigenvalue multiplied by your eigenvector. If you don't, then you know that your calculation is incorrect. You are not alone; I do get a bit cross-eyed with complex numbers, but if you stick with it, remembering that i^2 = -1 and 1/i = -i, you should be able to navigate through the minefield of the algebra.

from Ryan Leemhuis:

In regards to question 6, you do get a somewhat intimidating characteristic equation. However, if you keep the numbers in order and look for trig identities that can simplify the math, the equation works out rather nicely. Specifically, the formula cos^2(x) + sin^2(x) = 1 came in handy for me.

Response from Ryan Russon 18:47, 9 September 2013 (UTC):

With regards to Ryan Leemhuis's response, how did you use a trig identity in #6? I sure needed them for #8. Were you referring to #8?

Response from T. Roe:

While working on #3 on pg. 338 I calculated AA^T and got:

[68  0] 
[0  68]

Can that be reduced to the identity matrix?

From Michael Nesteroff:

I am also having difficulty with problem 6. My characteristic equation looks nasty, but when I visualize the matrix, I see that a probable eigenvalue would be 'a - k'. This is because it makes the 3x3 matrix all k's and thus reduces to one row being k k k and everything else zero. Is this correct? Are there other eigenvalues? Is there an easy way to factor the eigenvalues?

Response from Steve Bell :

Michael, that is a very clever thing to notice. It means that

λ − (a − k)

is a factor in the characteristic polynomial and you can factor it out, leaving a quadratic term.

Here's how I found the roots. Recall that adding a constant times one column to another does not change the determinant. Same thing for rows. I subtracted column 2 from column 1 to kill the k in the 3,1 spot. Then I noticed that the thing in the 2,1 spot was minus the thing in the 1,1 spot. So I added row 1 to row 2 to get another zero in column 1. Next, I expanded along column one. That leaves the factor above right out front, and the quadratic that is to the right is rather easy to factor. I did that by eyeballing it, but you could use the quadratic formula too.

Response from Mrhoade:
That's exactly how I did it. I got the three eigenvalues to be (a - k), (a - k), and (a + 2k). I checked these with Matlab using some sample values of a and k and then the eig() function. It appears to be correct. The way I achieved this solution was to do a row and a column operation on the characteristic matrix. Take R3 = R3 - R2 and then C2 = C2 + C3, and then do a cofactor expansion on row 3. The first eigenvalue of (a - k) pops right out at you as the cofactor from a_33. You can then divide the cofactor out of both sides and come up with a quadratic that will reduce to ((2a + k) +/- 3k)/2. This gives the repeated root (a - k) and the third root (a + 2k). - Mick
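
A quick numerical check of those roots. Assuming #6 is the 3x3 matrix with a on the diagonal and k everywhere else (that is how the thread describes it; check against your copy of the book), the spectrum should be a - k twice and a + 2k once.

<pre>
import numpy as np

# Assumed form of the matrix in #6: a on the diagonal, k off the diagonal
a, k = 5.0, 2.0
A = (a - k) * np.eye(3) + k * np.ones((3, 3))

print(np.sort(np.linalg.eigvals(A)))  # [a-k, a-k, a+2k] = [3, 3, 9]
</pre>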


From Kees:

Dr. Bell, on pg. 338, for 3, 6, and 8, do you expect us to write down the theorems that are proven, or does the act of finding the spectrum "illustrate Theorems 1-5?" I'm fine doing either one; I just wanted to clarify what work was expected.

From Steve Bell 21:31, 10 September 2013 (UTC):

I would say something like this: "The eigenvalues are _____ and this is consistent with Theorem ____ because the matrix is ____."


Also from Kees:

Can a matrix be more than one of symmetric, skew-symmetric, or orthogonal? I think intuitively that a matrix cannot be both symmetric and skew-symmetric, but I'm not sure if there is a special case I am not considering. I also am not sure if, for example, a matrix could be symmetric AND orthogonal. Anyone have any ideas?


RESPONSE from Mickey Rhoades (Mrhoade)

A matrix can be symmetric or skew-symmetric and also orthogonal. In fact, HW sect 8.5 #5 is skew-Hermitian and unitary. For a matrix to be symmetric and orthogonal, an n x n matrix A must satisfy A = A^T for symmetry. Then A^T = A^{-1} for orthogonality. This leads to the conclusion A = A^T = A^{-1}; the matrix is its own inverse. This is an involutory matrix, such as the identity matrix, where A^2 = I.

For a matrix to be skew-symmetric and orthogonal, it would have to satisfy the property that -A = A^T = A^{-1}. Consider the matrix A = [0 1];[-1 0]. This is skew-symmetric and orthogonal.

A matrix cannot be both symmetric and skew-symmetric. For symmetry, A = A^T, and for skew-symmetry, A^T = -A. To satisfy both, A = -A, and only the zero matrix satisfies this property.

 - Mick
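
A short check of Mick's examples (a sketch; the matrices come straight from the paragraph above):

<pre>
import numpy as np

# Mick's example of a skew-symmetric, orthogonal matrix
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(np.allclose(A.T, -A))             # True: skew-symmetric
print(np.allclose(A @ A.T, np.eye(2)))  # True: orthogonal

# The identity is symmetric, orthogonal, and involutory (A^2 = I)
I = np.eye(2)
print(np.allclose(I, I.T), np.allclose(I @ I, np.eye(2)))  # True True
</pre>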

Response to Mick... This all makes sense, but how are you finding #5 to be unitary? I have that the determinant is i, meaning the inverse has 1's and is therefore not equal to the conjugate transpose. Just thought I'd point that out, unless I'm missing something. Jones947


RESPONSE to Jones947 from Mickey Rhoades (Mrhoade)


Sorry I couldn't reply earlier, but here is what I understand. For skew-Hermitian: $ \overline{A}^T=-A $, and for unitary: $ \overline{A}^T=A^{-1} $.

Also, section 8.5 theorem 1(b) says that for a skew-Hermitian matrix the eigenvalues will be purely imaginary, and theorem 1(c) says that for a unitary matrix they will have absolute value 1. When you work out the complex transpose, the inverse, and negative A, you should come up with:

[-i   0   0] 
[0   0  -i] 
[0  -i   0] 

These can easily be checked in Matlab if you input A and then do -A, inv(A), ctranspose(A). These satisfy the conditions for skew-Hermitian and unitary matrices. You can give yourself a final check using theorem 1. The eigenvalues are +i, +i, -i. These all have absolute value 1, satisfying the unitary condition, and they are purely imaginary, satisfying the skew-Hermitian condition. -Mick
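
The same check in NumPy (a sketch; the matrix A is inferred by negating the -A block Mick shows above, so confirm it against your copy of #5):

<pre>
import numpy as np

# Inferred from the thread: -A is the block shown above, so
A = np.array([[1j, 0, 0],
              [0, 0, 1j],
              [0, 1j, 0]])

print(np.allclose(A.conj().T, -A))                # True: skew-Hermitian
print(np.allclose(A.conj().T, np.linalg.inv(A)))  # True: unitary
print(np.linalg.eigvals(A))                       # ~[i, i, -i] in some order
</pre>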


Not sure how to progress with sec 8.5 #13. I missed the class on 09/09 and am travelling, so I don't have access to the video either, and am wondering whether some hint was provided in class or not. - Farhan

Response from T. Roe 00:09, 11 September 2013 (UTC)

If I understand #13 correctly, it is not as intimidating as it appears. All it takes is some facts we learned about matrices in this chapter,

$ Hermitian: \overline{A}^T=A $

$ Skew Hermitian: \overline{A}^T=-A $

$ Unitary: \overline{A}^T=A^{-1} $

combined with some matrix multiplication rules we learned back in chapter 7.
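
One chapter 7 rule that often does the heavy lifting in problems like this is that a conjugate transpose reverses a product: $ \overline{(AB)}^T = \overline{B}^T \overline{A}^T $. A small numerical illustration with random complex matrices (just to make the identity concrete; how it applies to #13 depends on what the problem asks you to show):

<pre>
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Conjugate transpose reverses products: (AB)^H = B^H A^H
lhs = (A @ B).conj().T
rhs = B.conj().T @ A.conj().T
print(np.allclose(lhs, rhs))  # True

# One consequence: a product of unitary matrices is unitary, since
# (UV)^H (UV) = V^H (U^H U) V = V^H V = I
</pre>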

Back to MA527, Fall 2013
