Latest revision as of 08:09, 11 April 2013


Inner Products and Orthogonality

Student project for MA265



Primarily, it is necessary to begin with the basic definition of an inner product. An inner product is defined by Bernard Kolman in his Elementary Linear Algebra book as "a function V that assigns to each ordered pair of vectors u,v in V a real number (u,v) satisfying the following properties." The notation for an inner product is the two vectors, separated by a comma and enclosed in parentheses: (u,v). On R^n, the standard inner product is computed as the dot product of the two vectors: multiply the corresponding entries of the vectors and add up all of the products.

There are four properties taken from Elementary Linear Algebra book that inner products must follow:

1) (u,u) ≥ 0, and (u,u) = 0 if and only if u is the zero vector

2) (v,u) = (u,v) for any u, v in V

3) (u+v,w) = (u,w) + (v,w) for any u, v, w in V

4) (cu,v) = c(u,v) for any u, v in V and any real scalar c
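The four properties above can be checked directly for the standard dot product. Below is a minimal sketch in Python; the function name `inner` and the sample vectors are illustrative choices, not from the book:

```python
# Standard inner product (dot product) on R^n, checked against
# the four inner-product axioms for a few sample vectors.
def inner(u, v):
    """Sum of products of corresponding entries."""
    return sum(ui * vi for ui, vi in zip(u, v))

u, v, w = [1.0, 2.0], [3.0, -1.0], [0.5, 4.0]
c = 2.5

# 1) (u,u) >= 0, and (0,0) = 0
assert inner(u, u) >= 0 and inner([0.0, 0.0], [0.0, 0.0]) == 0
# 2) symmetry: (v,u) = (u,v)
assert inner(v, u) == inner(u, v)
# 3) additivity in the first slot: (u+v, w) = (u,w) + (v,w)
u_plus_v = [a + b for a, b in zip(u, v)]
assert inner(u_plus_v, w) == inner(u, w) + inner(v, w)
# 4) homogeneity: (cu, v) = c(u,v)
cu = [c * a for a in u]
assert inner(cu, v) == c * inner(u, v)
```

Any function satisfying these four axioms is an inner product; the dot product is simply the most familiar instance.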

The inner product also lives in a vector space, which can be represented by V. This is called an inner product space. An inner product space is defined simply as a vector space (http://mathworld.wolfram.com/VectorSpace.html) that contains an inner product. As a side note, if the vector space is R^n, it is referred to as Euclidean space (http://mathworld.wolfram.com/EuclideanSpace.html), which is finite-dimensional as well.


The inner product is useful in computing various other items in mathematics as well. By knowing the inner product, one can then in turn figure out the angle between two vectors. This simple equation for determining the angle, theta, is given by:

cos(θ) = (u,v) / (||u|| ||v||)

where the denominator is the product of the lengths of vectors u and v. A caution: this formula only works when neither u nor v is the zero vector. The fraction above always lies between -1 and 1 (this is the Cauchy-Schwarz inequality), so the resulting angle satisfies 0 ≤ θ ≤ π.
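The angle formula can be sketched in a few lines of Python; the helper names `inner` and `angle` are illustrative assumptions:

```python
import math

def inner(u, v):
    """Standard inner product (dot product)."""
    return sum(a * b for a, b in zip(u, v))

def angle(u, v):
    """Angle theta between nonzero vectors u and v, with 0 <= theta <= pi."""
    nu = math.sqrt(inner(u, u))   # ||u||
    nv = math.sqrt(inner(v, v))   # ||v||
    if nu == 0 or nv == 0:
        raise ValueError("angle is undefined for the zero vector")
    return math.acos(inner(u, v) / (nu * nv))

# Perpendicular vectors give a right angle.
print(math.degrees(angle([1, 0], [0, 1])))  # 90.0
```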

The second thing that the inner product is useful for is that it creates an inequality known as the Triangle Inequality. The property is that ||u+v|| ≤ ||u||+||v||.
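The Triangle Inequality is easy to check numerically; the vectors below are an arbitrary sample:

```python
import math

def norm(u):
    """Length of u: ||u|| = sqrt((u,u))."""
    return math.sqrt(sum(x * x for x in u))

u, v = [3.0, 4.0], [1.0, -2.0]
u_plus_v = [a + b for a, b in zip(u, v)]

# ||u+v|| <= ||u|| + ||v||
assert norm(u_plus_v) <= norm(u) + norm(v)
```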


  • It may also be helpful to look at other explanations of inner products. These links will bring you to other people explaining inner products:

[3] The standard inner product

More on inner products: http://www.youtube.com/watch?v=ibqvxMO47ww&feature=related


A simple definition of Orthogonality, as given in the book, is: "Two vectors u and v in V are orthogonal if (u,v)=0." The orthogonal vectors must also be contained in an inner product space, as described above. That is, if the dot product of one vector with another equals zero, the two vectors are orthogonal.

For example using variables:

u = [a;b] v = [c;d]

(u,v) = u · v = ac + bd = 0 => orthogonal vectors

For example using numbers:

u = [1;0] v = [0;1]

(u,v) = u · v = 1(0) + 0(1) = 0 => orthogonal vectors

An example to watch out for is the zero vector. The zero vector happens to be orthogonal to every vector, because the inner product of the zero vector with any vector is equal to 0.
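Both worked examples above, and the zero-vector case, can be verified with a small Python check; the function name `orthogonal` is an illustrative assumption:

```python
def inner(u, v):
    """Standard inner product (dot product)."""
    return sum(a * b for a, b in zip(u, v))

def orthogonal(u, v):
    """Two vectors are orthogonal when their inner product is zero."""
    return inner(u, v) == 0

assert orthogonal([1, 0], [0, 1])     # the numeric example above
assert orthogonal([0, 0], [5, -3])    # zero vector: orthogonal to everything
assert not orthogonal([1, 1], [1, 2])
```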


Orthogonal vectors are critical in finding an orthonormal set of vectors: an orthonormal set consists of orthogonal vectors that also have unit length. To compute an orthonormal set from a given orthogonal set, one first needs each vector's length (norm), given by ||u|| = √(u,u). After finding the length, all that remains is to divide each entry of the vector by that length. The resulting set of vectors is orthonormal and is contained in the same inner product space.
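The normalization step can be sketched as follows; the sample orthogonal set is an arbitrary choice (two perpendicular vectors of length 5):

```python
import math

def normalize(u):
    """Divide each entry of u by the vector's length ||u|| = sqrt((u,u))."""
    length = math.sqrt(sum(x * x for x in u))
    return [x / length for x in u]

orthogonal_set = [[3.0, 4.0], [-4.0, 3.0]]   # orthogonal, each of length 5
orthonormal_set = [normalize(u) for u in orthogonal_set]
print(orthonormal_set)  # [[0.6, 0.8], [-0.8, 0.6]]
```

The resulting vectors remain orthogonal (normalization only rescales them) and each now has length 1.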


Back to MA265 Fall 2011 Prof. Walther
