[[Category:math]]
[[Category:math squad]]
[[Category:tutorial]]
[[Category:linear algebra]]
[[Category:MA351]]
[[Category:basis]]
[[Category:problem solving]]


='''Basis Problems'''=

by [[user:ruanj | Joseph Ruan]], proud member of the [[Math squad|Math Squad]]
----
  
=='''Example #1: Testing whether a set of vectors is a basis for a given space'''==


----

==='''Part 1'''===

Is the set of polynomials <math> x^2, x, 1 </math> a basis for the set of all polynomials of degree two or less?
 
YES. The first step in approaching this problem is to write down what we know. Our given space V is the set of all polynomials of degree two or less, and <math> x^2, x, 1 </math> are the vectors we would like to test as a basis for V. There are two conditions: <math> x^2, x, 1 </math> must be linearly independent, and <math> x^2, x, 1 </math> must span V. To be concise, let's call these three vectors, respectively, <math> \vec v_1, \vec v_2, \vec v_3 </math>.
 
The first condition is straightforward, since you can't make any one of these terms by combining the others.
  
As for the second condition, here's a simple trick: every polynomial in V (let's just call it p(x)) can be written in the form <math> p(x)=a*x^2+ b*x+ c*1 </math>. However, if we look more carefully, a, b, and c are all constants! So we can rewrite this as

<math> p(x)=c_1*x^2+ c_2*x+ c_3*1 </math>.

This can also be rewritten as

<math> p(x)=c_1*\vec v_1+ c_2*\vec v_2+ c_3*\vec v_3 </math>.

Notice something? This is exactly the statement that our three vectors span the space! Therefore the three vectors do indeed span the space and are linearly independent, which means they form a basis.
  
 
As a takeaway from this problem, notice that to test whether a set of vectors spans the space, you write the ambiguous form of an arbitrary element of the space (in this case, <math> p(x)=a*x^2+ b*x+ c*1 </math>) and then manipulate it to see whether it can be written as an arbitrary linear combination of the given vectors. This is how you test for span.
  
NOTE: It wasn't absolutely necessary to rewrite the basis vectors in the form <math>\vec v_1, \vec v_2, \vec v_3 </math>, but doing so helps make the span more familiar. The following parts will not rewrite the basis vectors in this vector notation.
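
As a quick side check (a sketch in Python/NumPy, not part of the original argument): if we identify each polynomial <math> a*x^2+b*x+c </math> with its coefficient vector (a, b, c), then testing whether three candidate polynomials form a basis of this 3-dimensional space reduces to checking that the matrix of their coefficient vectors has rank 3.

<pre>
import numpy as np

# Identify a*x^2 + b*x + c with the coefficient vector (a, b, c) in R^3.
# Candidate basis for P_2 (polynomials of degree two or less): x^2, x, 1.
candidates = np.array([
    [1, 0, 0],   # x^2
    [0, 1, 0],   # x
    [0, 0, 1],   # 1
])

# dim(P_2) = 3, so three vectors form a basis exactly when they are
# linearly independent, i.e. when this matrix has rank 3.
print(np.linalg.matrix_rank(candidates) == 3)   # True
</pre>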
  
 
----
 
  
  
==='''Part 2'''===

Is the set of polynomials <math> 3x^2, x, 1 </math> a basis for the set of all polynomials of degree two or less?


YES.

Similar to Part 1, we have the same space V, and we know that the vectors are linearly independent. Moreover, when we write <math> p(x)=c_1*x^2+ c_2*x+ c_3*1 </math>, we know that <math>3x^2=3*(x^2)</math>, so <math> p(x)=(c_1 /3)*(3x^2)+ c_2*x+ c_3*1 </math>. Since <math> c_1 /3</math> is a constant, we can rewrite this as <math> p(x)=(k_1)*(3x^2)+ k_2*x+ k_3*1 </math>, which means the set of vectors spans the space.
  
 
Another way of seeing this is that this basis can form the previous basis, by simply dividing the first vector by three. Therefore, since the previous basis spanned the space, this one must too. That would save a lot of work.
 
----

==='''Part 3'''===

Is the set of polynomials <math> 3x^2 + x, x, 1 </math> a basis for the set of all polynomials of degree two or less?

YES.

They are definitely linearly independent, because <math> 3x^2 + x </math> cannot be made without an <math> x^2 </math> term, <math> x </math> cannot be made without removing the <math> x^2 </math> term from <math> 3x^2 + x </math>, and 1 cannot be made from the first two.

Since we know that the space V is still the same as in Parts 1 and 2, let's write p(x) as <math> p(x)=c_1*x^2+ c_2*x+ c_3*1 </math>. Notice that <math>3x^2+x = 3*(x^2)+1*x</math>.

The rigorous solution is as follows:

<math> p(x)=c_1*x^2+ c_2*x+ c_3*1 = (c_1 /3)*(3x^2)+c_2*x+c_3*1=(c_1 /3)*(3x^2)+(c_1 /3)*x+(c_2-c_1/3)*x+c_3*1=(c_1 /3)(3x^2+x) + (c_2-c_1 /3)*x + c_3*1</math>.

Since <math> (c_1 /3), (c_2-c_1/3)</math> are still both constants, we can rewrite the expression as

<math> p(x)=(k_1)(3x^2+x) + (k_2)*x + k_3*1</math>.

This is the definition of span using our three vectors, meaning that the vectors span the space. Thus, since they span the space and are linearly independent, they are a basis.

The simple solution is as follows: each vector in <math> 3x^2 + x, x, 1 </math> is a linear combination of the initial basis <math> x^2, x, 1 </math>, and the three vectors are still linearly independent. Three linearly independent vectors sitting inside the three-dimensional space spanned by the initial basis must span that same space, so this set spans the entire space.
  
 
----

==='''Part 4'''===

Is the set of polynomials <math> 3x^2+x+1, 2x+1, 2 </math> a basis for the set of all polynomials of degree two or less?

YES.

This problem is more or less identical to Part 3, just with more terms. The same simple-solution and rigorous arguments can be made. However, I will show you a different way of thinking about it. Instead of starting with p(x), we'll start with the span of the three vectors. An arbitrary element of the span (let's just call it "S") is

<math> S= c_1*(3x^2+x+1) +c_2*(2x+1)+c_3*2=3*(c_1)*x^2+c_1*x+c_1*1+c_2*2*x+c_2*1+c_3*2=(3*c_1)*x^2+(c_1+2*c_2)*x+(c_1+c_2+2*c_3)*1</math>.

But wait! <math> (3*c_1), (c_1+2*c_2), (c_1+c_2+2*c_3)</math> are constants!

This means that S can be rewritten as <math>(k_1)*x^2+(k_2)*x+(k_3)*1</math>. Since <math> k_1, k_2, k_3 </math> have no bounds (given any desired values of <math> k_1, k_2, k_3 </math>, we can solve back for <math> c_1, c_2, c_3 </math>), this expression is equivalent to our ambiguous expression

<math>p(x)=d_1*x^2+d_2*x+d_3</math>.

Therefore, the span of our three vectors is exactly the space we were given, and so the three vectors form a basis for the space.
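
If you want to double-check this numerically (again just a sketch, not part of the original argument): under the same identification of <math> a*x^2+b*x+c </math> with (a, b, c), the three candidate vectors become the columns of a 3x3 matrix, and the independence/span test becomes a rank test. Solving the resulting linear system also produces the coordinates of any particular p(x) in this basis; the polynomial chosen below is an arbitrary example.

<pre>
import numpy as np

# Columns are the coefficient vectors (a, b, c) of the candidate basis.
B = np.column_stack([
    [3, 1, 1],   # 3x^2 + x + 1
    [0, 2, 1],   # 2x + 1
    [0, 0, 2],   # 2
])

print(np.linalg.matrix_rank(B) == 3)   # True -> independent, hence a basis of P_2

# Span in action: coordinates c_1, c_2, c_3 of an arbitrary p(x) = 5x^2 - 4x + 7.
d = np.array([5, -4, 7])
c = np.linalg.solve(B, d)
print(np.allclose(B @ c, d))           # True -> p = c_1*(3x^2+x+1) + c_2*(2x+1) + c_3*2
</pre>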
  
 
----

==='''Part 5'''===

Is the set of polynomials <math> x^3, x, 1 </math> a basis for the set of all polynomials of degree two or less?

NO.

<math> x^3 </math> is a polynomial of degree THREE, so it is not even an element of the given space. The span of this set is therefore not equal to the space of polynomials of degree two or less, and the set cannot be a basis for it.
  
  
 
----

==='''Part 6'''===

Is the set of polynomials <math> x^2, 3x^2, x, 1 </math> a basis for the set of all polynomials of degree two or less?

NO.

The first thing to note is that the dimension of the space of polynomials of degree two or less is 3 (one dimension for x^2, one for x, and one for 1). Since there are four vectors, they cannot all be linearly independent. More concretely, 3x^2 is a scalar multiple of x^2, so the set is linearly dependent.
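
For a numerical illustration of the dependence (a sketch only, using the same coefficient-vector identification as before): four coefficient vectors in a 3-dimensional space have rank at most 3, so they can never be linearly independent.

<pre>
import numpy as np

# Coefficient vectors (a, b, c) for x^2, 3x^2, x, 1.
vectors = np.array([
    [1, 0, 0],   # x^2
    [3, 0, 0],   # 3x^2
    [0, 1, 0],   # x
    [0, 0, 1],   # 1
])

# Rank is at most 3, which is less than the 4 vectors, so the set is dependent.
print(np.linalg.matrix_rank(vectors))   # 3
</pre>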
  
 
----

==='''Part 7'''===

Is the set of polynomials <math> x^2, 3x^2 + 1, x, 1 </math> a basis for the set of all polynomials of degree two or less?

NO.

Same reasoning as Part 6: there are four vectors when only 3 are needed. In particular, 3x^2+1 is a linear combination of x^2 and 1, so the set is linearly dependent.
 
----

==='''Part 8'''===

Is the set of polynomials <math> x^2, x, 1 </math> a basis for the set of all polynomials of degree THREE or less?

NO.

They cannot produce any polynomial containing an <math> x^3 </math> term, and therefore do not span the space.
 
----

==='''Part 9'''===

Is the set of polynomials <math> x^2, x, 1 </math> a basis for the set of all polynomials of degree ONE or less?

NO.

Their span contains polynomials (such as x^2) that lie outside the space of polynomials of degree one or less, so the span is not equal to the given space, and the set is not a basis for it.
 
----

==='''Part 10'''===

Is the set of polynomials <math> x^2, x, 1 </math> a basis for the set of all polynomials of EXACTLY degree TWO?

NO.

It is not a basis for the set of all polynomials of exactly degree two: for example, <math> 0*x^2+0*x+0*1=0</math>, and 0 is not a polynomial of degree two. So the span of this set reaches outside the given set of polynomials.
 
----

==='''Part 11'''===

Is it possible to make a basis for the set of all polynomials of EXACTLY degree TWO?

Nope.

Every basis can produce <math>\vec 0</math> by putting constants of 0 in front of the basis vectors and summing them. In other words, the span of any set of vectors always contains 0, and 0 is not a polynomial of exactly degree two. In fact, it is impossible to make a basis for the polynomials of any exact degree greater than 0, since none of those sets contain zero (and so none of them are vector spaces).
 
----

==='''Part 12'''===

Is the set of polynomials <math> x^2, x, 1 </math> a basis for the set of ALL POLYNOMIALS?

Nope.

The set of all polynomials includes polynomials of degree three or higher. As seen in Part 8, this set of vectors cannot produce those, so it isn't a basis for this space.
 
----

==='''Part 13'''===

Is it possible to make a finite basis for the set of ALL POLYNOMIALS?

Nope.

To summarize the proof: a finite basis contains polynomials of at most some degree n, so its span contains only polynomials of degree at most n. But the set of all polynomials includes polynomials of degree n+1, so no finite basis can suffice.

----

=='''Example #2: Finding a basis of a given space.'''==


For all of these problems, the basic procedure is as follows:
*write the arbitrary expression that represents a general element of the space,
*then use the constraints defining the space to remove as many of the ambiguous coefficients as possible,
*then manipulate the expression so that it is in the form <math> V=a_1*\vec v_1 +a_2*\vec v_2 +a_3*\vec v_3 ...+a_n*\vec v_n </math>, where V is the arbitrary element of the subspace and the vectors are our basis. In other words, manipulate the expression so that it is written as a span of the basis vectors.


Another procedure is as follows:
*write out the span of a candidate set of basis vectors,
*then manipulate it so that the span matches the arbitrary expression for an element of the space exactly.
  
 
----

==='''Part 1'''===


Let our space of polynomials be all polynomials p of degree less than or equal to 2 such that p(x)=x*p'(x). What is a set of basis vectors?
  
Here's how we approach this. Let's write an arbitrary expression for a p(x) of degree two or less, which is

<math>p(x) = c_1*x^2+c_2*x+c_3</math>.

Now let's plug this into our equation <math> p(x)=x*p'(x) </math>:

<math> c_1*x^2+c_2*x+c_3 = x*(2*c_1*x+c_2)</math>.

Therefore <math> c_1*x^2+c_2*x+c_3 = 2*c_1*x^2+c_2*x </math>.

Comparing coefficients, this means that <math> c_1=2*c_1</math>, so <math>c_1 = 0</math>, and also <math> c_3=0 </math>.

In the end, our p(x) can only be written as <math> p(x)=c_2*x </math>.
 
 
This means every p(x) in the space can be written as a linear combination (in fact, a scalar multiple) of x. So x spans the space and, being a single nonzero vector, is trivially linearly independent. Therefore, x is the basis vector.
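
A symbolic check of the same computation (a sketch only, assuming SymPy and the constraint p(x) = x*p'(x) used above):

<pre>
import sympy as sp

x, c1, c2, c3 = sp.symbols('x c1 c2 c3')
p = c1*x**2 + c2*x + c3

# Impose p(x) = x*p'(x): every coefficient of p(x) - x*p'(x) must vanish.
constraint = sp.Poly(p - x*sp.diff(p, x), x)
print(sp.solve(constraint.coeffs(), [c1, c2, c3], dict=True))
# [{c1: 0, c3: 0}]  -> only c2 survives, so p(x) = c2*x and {x} is a basis
</pre>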
  
 
----

==='''Part 2'''===

Let our space of polynomials be all polynomials of degree less than or equal to 3 such that p(-x)=-p(x). What is a set of basis vectors?


Again, we approach this by writing the arbitrary expression for p(x) and then attempting to remove unnecessary constants. So p(x) in this case is <math> p(x) = c_0+c_1*x+c_2*x^2+c_3*x^3</math>. Now, let's plug this into our constraint. The left-hand side is <math> c_0+c_1*(-x)+c_2*(-x)^2+c_3*(-x)^3 </math>, which simplifies to <math> c_0 -c_1*x+c_2*x^2-c_3*x^3</math>. The right-hand side of the equation is <math> -c_0-c_1*x-c_2*x^2-c_3*x^3</math>. By setting these two equal and comparing coefficients, we find that <math> c_0=-c_0, c_1=c_1, c_2=-c_2, c_3=c_3</math>. This means that <math> c_0=0, c_2=0 </math>. Therefore our expression for p(x) simplifies to <math> p(x)= c_1*x+c_3*x^3</math>. Notice that p(x) is now written as an arbitrary linear combination of two linearly independent vectors, and the space is therefore spanned by x and x^3. This means x and x^3 are our basis.
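
The same coefficient comparison can be done symbolically (a sketch assuming SymPy):

<pre>
import sympy as sp

x, c0, c1, c2, c3 = sp.symbols('x c0 c1 c2 c3')
p = c0 + c1*x + c2*x**2 + c3*x**3

# Impose p(-x) = -p(x): every coefficient of p(-x) + p(x) must vanish.
constraint = sp.Poly(p.subs(x, -x) + p, x)
print(sp.solve(constraint.coeffs(), [c0, c1, c2, c3], dict=True))
# [{c0: 0, c2: 0}]  -> p(x) = c1*x + c3*x^3, so {x, x^3} is a basis
</pre>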
  
 
----

==='''Part 3'''===


The majority of this page has dealt with polynomials. However, the same concepts can be applied to matrices and vectors. For example, let's find a basis for the space of all 2x2 diagonal matrices.
  
Let's first write the ambiguous form of a general 2x2 matrix. For any matrix M2x2,

<math> M2x2 = \begin{pmatrix}a_1 & a_2 \\a_3 & a_4 \end{pmatrix} </math>.

Since our matrices are diagonal matrices,

<math> a_2=0, a_3=0 </math>.

So, rewriting M2x2,

<math> M2x2=\begin{pmatrix}a_1 & 0 \\0& a_4 \end{pmatrix} </math>.

However, remember that matrices can be split up by addition. So M2x2 can again be rewritten as

<math> M2x2=\begin{pmatrix}a_1 & 0 \\0& 0 \end{pmatrix}+\begin{pmatrix}0 & 0 \\0& a_4 \end{pmatrix} =a_1*\begin{pmatrix}1 & 0 \\0& 0 \end{pmatrix}+a_4*\begin{pmatrix}0 & 0 \\0& 1 \end{pmatrix}</math>.

Notice that M2x2 has been written as an arbitrary linear combination of two linearly independent vectors:

<math>\begin{pmatrix}1 & 0 \\0& 0 \end{pmatrix} </math> and <math>\begin{pmatrix}0 & 0 \\0& 1 \end{pmatrix}</math>.

Therefore, we have our basis:

<math>\begin{pmatrix}1 & 0 \\0& 0 \end{pmatrix} </math>

and

<math>\begin{pmatrix}0 & 0 \\0& 1 \end{pmatrix}</math>
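
A tiny numerical sanity check (just a sketch): any diagonal matrix diag(a_1, a_4) is exactly a_1 times the first basis matrix plus a_4 times the second. The values 3 and -7 below are arbitrary.

<pre>
import numpy as np

E11 = np.array([[1, 0], [0, 0]])   # first basis matrix
E22 = np.array([[0, 0], [0, 1]])   # second basis matrix

a1, a4 = 3, -7                      # arbitrary diagonal entries
M = np.diag([a1, a4])
print(np.array_equal(M, a1*E11 + a4*E22))   # True
</pre>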
  
 
----
 
  
==='''Part 4'''===

For one more problem, let's find a basis for the set of vectors

<math> \vec x </math>

such that

<math> A\vec x = 0 </math>,

where

<math> A= \begin{pmatrix}1 & 2 &3 \\4& 5 & 6\\7&8&9 \end{pmatrix} </math>.

Here 0 is the 3x1 column vector filled with zeroes.

The first step in solving this is to write the ambiguous form of a vector in this space. Since A is a 3x3 matrix, <math> \vec x </math> must be a 3x1 column vector. So in the general case <math> \vec x = \begin{pmatrix}a_1 \\a_2\\a_3 \end{pmatrix} </math>.

Now let's add on the constraints. We have our equation

<math> A*\vec x =0 </math>.

As we've previously learned from Gauss-Jordan elimination, row operations on A do not change the solution set of this equation. Therefore, we can replace A with its row-reduced echelon form:

Rref(A)=

<math> \begin{pmatrix}1 & 0 &-1 \\0& 1 & 2\\0&0&0 \end{pmatrix} </math>

Therefore our new equation is just

<math> \begin{pmatrix}1 & 0 &-1 \\0& 1 & 2\\0&0&0 \end{pmatrix}*\begin{pmatrix}a_1 \\a_2\\a_3 \end{pmatrix} = \begin{pmatrix}0 \\0\\0 \end{pmatrix} </math>

Now comes the trickier part. We still don't know the constraints on

<math> a_1, a_2, a_3 </math>.

However, remember that this matrix equation is just another representation of a system of equations. So let's multiply out the matrix and see what we get:

<math> a_1 + 0*a_2 - 1*a_3=0 </math>

<math> 0*a_1+a_2+2*a_3=0 </math>

<math> 0=0 </math>

We can rewrite these equations and drop the third one. We'll rewrite them specifically to remove as many unnecessary coefficients as possible:

<math> a_1 = a_3 </math>

<math> a_2=-2*a_3 </math>

Now, substitute that into the ambiguous form of <math> \vec x </math>:

<math> \vec x = \begin{pmatrix}a_1 \\a_2\\a_3 \end{pmatrix}=\begin{pmatrix}a_3 \\-2*a_3\\a_3 \end{pmatrix} </math>

Remember, <math> a_3 </math> is an arbitrary constant, so let's factor it out:

<math> \vec x = a_3*\begin{pmatrix}1 \\-2\\1 \end{pmatrix} </math>

Notice that our vector x is now written in the form of a span (that is, as an arbitrary linear combination of a single vector)! Thus, this vector is our basis vector:

<math> \begin{pmatrix}1 \\-2\\1 \end{pmatrix} </math>

Note: this whole method found a basis for the nullspace of the matrix A.
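
If you'd like to confirm this with a computer algebra system (a sketch assuming SymPy), the row reduction and the nullspace basis come out the same:

<pre>
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])

print(A.rref()[0])     # the row-reduced echelon form used above
print(A.nullspace())   # [Matrix([[1], [-2], [1]])] -> the same basis vector
</pre>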

----

==='''Part 5'''===

I will present one last example: find a basis for the set of column vectors such that

<math> A*\vec x = k \vec x </math>,

where

<math> A = \begin{pmatrix}2&1 \\2&3\end{pmatrix} </math> and k is not equal to zero.

First, we know that <math> \vec x </math> is a column vector, and because A can multiply it, it must be a 2x1 column vector. So our arbitrary expression for it is

<math> \vec x = \begin{pmatrix} a_1\\a_2\end{pmatrix}</math>

Now let's rearrange the equation slightly:

<math> A*\vec x - k*I*\vec x=0, (A-k*I)*\vec x =0 </math>.

Notice something? This is now the same kind of equation as in Part 4. However, we have an issue: last time we knew all the entries of our "A", whereas here we don't know all the entries of "A-kI", since we don't know what k is. However, we know two things:

*First, <math> \vec x = 0 </math> always works, since 0=0. So if "A-kI" is invertible, the only solution is <math> \vec x =0 </math>.
*Second, the interesting case is therefore when "A-kI" is NOT invertible.

In this second case, the determinant of A-kI is equal to 0. Writing out this determinant we get:

<math> det(A-kI)=(2-k)(3-k)-2*1=k^2-5*k+6-2=k^2-5k+4=(k-4)(k-1)=0, </math> so k=4 or k=1.

Now we just do exactly what we did back in Part 4. I'll skip the exact work, but basically we plug each value of k into <math> (A-k*I)*\vec x =0 </math> and find the arbitrary form of <math> \vec x </math>.

For k=1, we get that <math> \vec x </math> is any multiple of

<math> \begin{pmatrix}1\\-1\end{pmatrix}</math>

For k=4, we get that <math> \vec x </math> is any multiple of

<math> \begin{pmatrix}1 \\2 \end{pmatrix}</math>

These two vectors are our basis vectors.

Note: this method finds a basis for each eigenspace of A.
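
As a cross-check (a sketch assuming SymPy): the eigenvalues and one basis vector per eigenspace can be read off from eigenvects(). SymPy may scale the eigenvectors differently, but they lie on the same lines as the vectors found above.

<pre>
import sympy as sp

A = sp.Matrix([[2, 1],
               [2, 3]])

# Each entry is (eigenvalue, multiplicity, basis of that eigenspace).
for k, mult, vecs in A.eigenvects():
    print(k, [list(v) for v in vecs])
# eigenvalue 1: a multiple of (1, -1); eigenvalue 4: a multiple of (1, 2)
</pre>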
----
 
=Conclusion:=


The basic way of approaching any basis problem is to write the arbitrary form of an element of the space and check whether you can rewrite that expression as a linear combination of the candidate basis vectors.

----

==Questions and comments==

If you have any questions, comments, etc. please, please please post them below:

* Comment / question 1
* Comment / question 2
  
 
----

[[MA351|Back to MA351]]

[[Math_squad|Back to Math Squad page]]


<div style="font-family: Verdana, sans-serif; font-size: 14px; text-align: justify; width: 70%; margin: auto; border: 1px solid #aaa; padding: 2em;">
The Spring 2013 Math Squad was supported by an anonymous [https://www.projectrhea.org/learning/donate.php gift] to [https://www.projectrhea.org/learning/about_Rhea.php Project Rhea]. If you enjoyed reading these tutorials, please help Rhea "help students learn" with a [https://www.projectrhea.org/learning/donate.php donation] to this project. Your [https://www.projectrhea.org/learning/donate.php contribution] is greatly appreciated.
</div>
