

Work in Progress



Linear Algebra the Conceptual Way

by Kevin LaMaster, proud Member of the Math Squad.


Introduction

Many students are able to skate by in linear algebra by following equations and systems without understanding the intuitive nature of matrices and vectors and their operations. This tutorial is not meant as a replacement for the course but should rather be used as a supplement, to understand why the operations work as they do. It is intended to be accessible to a wide range of readers, including past linear algebra students wanting review, present students seeking help, and my friends that I inevitably force to read my work.


Contents

  1. Vectors
    1. Addition/Subtraction
    2. Scalar Multiplication
    3. Unit Vectors
    4. Dot Products
    5. Cross Products

Vectors

For computer science students, vectors can be seen as ordered lists; for engineering students focused on physics, they can be seen as a direction and a length. For linear algebra they can be approached from any and every angle (given $ 0\leq\theta<2\pi $, of course).

For the purposes of this tutorial, think of a vector as a way to move a point (normally at the origin) to another point.

As a warning, most of this page will be movement oriented, and I will try my best to demonstrate that graphically.

So, for example, the vector written $ \begin{bmatrix} 1\\ 2\end{bmatrix} $ will move a point from the origin to the point (1,2).


If we want vectors to have all the properties of numbers, then what should a vector plus a vector result in?

What if we make it one movement and then the other? This way $ \begin{bmatrix} 1\\ 2\end{bmatrix}+\begin{bmatrix} -3\\ 2\end{bmatrix} $ will be the result of moving right 1 and up 2, followed by moving left 3 and up 2.

[Animation: vector addition]

As displayed by the animation, this is the same as adding the x components of the two vectors and the y components of the two vectors.

In this way $ \begin{bmatrix} 1\\ 2\end{bmatrix}+\begin{bmatrix} -3\\ 2\end{bmatrix}=\begin{bmatrix}1-3\\2+2\end{bmatrix}=\begin{bmatrix}-2\\4\end{bmatrix} $

Vector subtraction is built on much the same process, just in reverse, as you might expect.

One way to derive the exact method is to imagine $ \vec{u}-\vec{v}=\vec{w} $ as $ \vec{u}=\vec{w}+\vec{v} $.

We know from before that $ \begin{bmatrix} 1\\ 2\end{bmatrix}+\begin{bmatrix} -3\\ 2\end{bmatrix}=\begin{bmatrix}-2\\4\end{bmatrix} $. So in that case $ \begin{bmatrix} 1\\ 2\end{bmatrix}=\begin{bmatrix}-2\\4\end{bmatrix}-\begin{bmatrix} -3\\ 2\end{bmatrix} $, or $ \begin{bmatrix}-2\\4\end{bmatrix}-\begin{bmatrix} -3\\ 2\end{bmatrix}=\begin{bmatrix} -2-(-3)\\ 4-2\end{bmatrix}=\begin{bmatrix}1\\2\end{bmatrix} $.

So vector subtraction works very much the way that we would expect as well.
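
If you'd rather see the componentwise rule as code, here is a minimal Python sketch (the helper names `add` and `sub` are mine, not from any standard library):

```python
# Componentwise vector addition and subtraction on plain Python lists.
def add(u, v):
    return [a + b for a, b in zip(u, v)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

print(add([1, 2], [-3, 2]))   # [-2, 4]
print(sub([-2, 4], [-3, 2]))  # [1, 2], recovering our original vector
```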

You may be thinking that $ \begin{bmatrix} 1\\ 2\end{bmatrix}=\begin{bmatrix}-2\\4\end{bmatrix}-\begin{bmatrix} -3\\ 2\end{bmatrix} $ could also be proven by writing it as $ \begin{bmatrix} 1\\ 2\end{bmatrix}=\begin{bmatrix}-2\\4\end{bmatrix}+(-1)*\begin{bmatrix} -3\\ 2\end{bmatrix} $.

We haven't defined scalar multiplication yet, but imagine it as adding a vector to itself, so $ 2*\begin{bmatrix} 1\\2\end{bmatrix} $ is the same as $ \begin{bmatrix} 1\\2\end{bmatrix}+\begin{bmatrix} 1\\2\end{bmatrix} $.

This would just be $ \begin{bmatrix} 1+1\\2+2\end{bmatrix} $, or $ \begin{bmatrix} 2*1\\2*2\end{bmatrix} $.

So multiplying a vector by a scalar is the same as multiplying the individual components of the vector by that scalar.
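
The same componentwise idea covers scalar multiplication; here is a minimal sketch (the name `scale` is made up) checking that doubling a vector matches adding it to itself:

```python
# Scalar multiplication: multiply every component by the scalar.
def scale(k, v):
    return [k * a for a in v]

# 2 * [1, 2] should match [1, 2] + [1, 2] added componentwise.
doubled = [a + b for a, b in zip([1, 2], [1, 2])]
print(scale(2, [1, 2]))  # [2, 4]
print(doubled)           # [2, 4]
```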

Unit Vectors

Since we seem to be using this specific example vector a lot, why don't we combine these two concepts to start writing $ \begin{bmatrix}1\\2\end{bmatrix} $ as $ \begin{bmatrix}1\\0\end{bmatrix}+2*\begin{bmatrix}0\\1\end{bmatrix} $?

It seems like we'll use the two vectors $ \begin{bmatrix}1\\0\end{bmatrix} $ and $ \begin{bmatrix}0\\1\end{bmatrix} $ a lot, so let's name them $ \vec{i} $ and $ \vec{j} $ respectively.

Now, instead of typing out all the code required to show $ \begin{bmatrix}1\\2\end{bmatrix} $ $ _{\text{which is a lot actually, I don't know why I even bother}} $, I can just say $ \vec{i}+2\vec{j} $.

This also conveniently helps us with vector addition and subtraction. $ \begin{bmatrix} 1\\ 2\end{bmatrix}+\begin{bmatrix} -3\\ 2\end{bmatrix} $ becomes $ \vec{i}+2\vec{j}-3\vec{i}+2\vec{j} $.

Combining like terms in exactly the way you would expect, you get $ -2\vec{i}+4\vec{j} $, or $ \begin{bmatrix}-2\\4\end{bmatrix} $.

This process of writing a vector as a sum of scaled vectors is a vital one; the result is called a Linear Combination.

We can describe any vector in our system as a linear combination of $ \vec{i} $ and $ \vec{j} $.

Don't forget about this process, because it becomes important later.
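
As a quick sketch of this decomposition in code (the helper `as_ij` is a name I made up for illustration):

```python
# Express a 2D vector [x, y] as the linear combination x*i + y*j,
# where i = [1, 0] and j = [0, 1].
def as_ij(v):
    x, y = v
    return f"{x}*i + {y}*j"

print(as_ij([1, 2]))   # 1*i + 2*j
print(as_ij([-2, 4]))  # -2*i + 4*j
```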

Linear?

You may be confused about what exactly the "linear" in "linear combination" means.

Think of it in terms of a linear function from your basic algebra course.

Let's say we have a function $ F(x)=2x $.

For any $ x $ that I choose to put in, if I instead put in $ 2*x $, I will get out $ 2*2x $, or $ 2*F(x) $.

This holds true for any constant $ a $, meaning $ F(ax)=aF(x) $

Also, if we put in $ x+2 $, we will get out $ F(x+2)=2x+2*2=F(x)+F(2) $.

Again, this holds true in general: $ F(x+y)=F(x)+F(y) $.

These are our two essential conditions of linearity:

$ F(x+y)=F(x)+F(y) $ and $ F(ax)=aF(x) $
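
If you'd like a numerical sanity check, here is a minimal Python sketch (sample points chosen arbitrarily) testing both conditions for $ F(x)=2x $:

```python
# Spot-check the two linearity conditions for F(x) = 2x.
def F(x):
    return 2 * x

for x, y, a in [(1, 2, 3), (-4, 5, 0.5), (0, 7, -2)]:
    assert F(x + y) == F(x) + F(y)  # additivity
    assert F(a * x) == a * F(x)     # homogeneity
print("both conditions hold at these sample points")
```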

If you didn't understand the general concept, that's okay; we'll now apply it to the vectors we've been dealing with.


Let's label the function that breaks a vector down into its unit components as $ LC(\vec{x}) $, for "linear combination".

So, for example, $ LC(\begin{bmatrix}1\\2\end{bmatrix})=\vec{i}+2\vec{j} $.

We would hope that this Linear combination has the same properties of linearity.

Checking the scaling condition, we find that $ LC(k*\begin{bmatrix}1\\2\end{bmatrix})=LC(\begin{bmatrix}k\\2k\end{bmatrix})=k\vec{i}+2k\vec{j}=k*LC(\begin{bmatrix}1\\2\end{bmatrix}) $

Checking the addition condition, we find that $ LC(\begin{bmatrix}3\\1\end{bmatrix}+\begin{bmatrix}1\\2\end{bmatrix})=LC(\begin{bmatrix}4\\3\end{bmatrix})=4\vec{i}+3\vec{j}=3\vec{i}+\vec{j}+\vec{i}+2\vec{j}=LC(\begin{bmatrix}3\\1\end{bmatrix})+LC(\begin{bmatrix}1\\2\end{bmatrix}) $

Since both characteristics hold true our linear combinations are by definition linear.
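
As a sketch of the same check in code, treating $ LC $ as a function that returns the pair of coefficients on $ \vec{i} $ and $ \vec{j} $ (all names here are mine):

```python
# LC maps a 2D vector to its coefficients on i and j; with the standard
# basis, those coefficients are just the vector's own components.
def LC(v):
    return (v[0], v[1])

u, w, k = [3, 1], [1, 2], 5
summed = [u[0] + w[0], u[1] + w[1]]   # u + w, componentwise
scaled = [k * w[0], k * w[1]]         # k * w, componentwise

# Additivity: LC(u + w) equals LC(u) + LC(w), coefficient by coefficient.
assert LC(summed) == (LC(u)[0] + LC(w)[0], LC(u)[1] + LC(w)[1])
# Homogeneity: LC(k * w) equals k * LC(w).
assert LC(scaled) == (k * LC(w)[0], k * LC(w)[1])
print("LC passes both linearity checks on these samples")
```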

Matrices as Transformations

Let's say we get tired of our base units of $ \vec{i} $ and $ \vec{j} $ and want to switch things up.

Let's switch our base units to my favorite two vectors, $ \begin{bmatrix}1\\1\end{bmatrix} $ and $ \begin{bmatrix}-2\\3\end{bmatrix} $.

So how in the world do we rewrite the vector $ \begin{bmatrix}1\\6\end{bmatrix} $ in our fancy new base system?

Well to write it in our new base we would have to find a linear combination of $ \begin{bmatrix}1\\1\end{bmatrix} $ and $ \begin{bmatrix}-2\\3\end{bmatrix} $ such that $ c_1\begin{bmatrix}1\\1\end{bmatrix}+c_2\begin{bmatrix}-2\\3\end{bmatrix}=\begin{bmatrix}1\\6\end{bmatrix} $

We then have to solve the system of equations

$ \begin{matrix} c_1-2*c_2=1 \\ c_1+3*c_2=6 \end{matrix} $

At the risk of ruining the fun of solving it yourself: the solution of this system is $ c_1=3 $, $ c_2=1 $. That means that we can multiply $ \begin{bmatrix}1\\1\end{bmatrix} $ by three and add it to $ \begin{bmatrix}-2\\3\end{bmatrix} $ to get $ \begin{bmatrix}1\\6\end{bmatrix} $.
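
If you'd rather let the computer ruin the fun entirely, here is a sketch using NumPy's linear solver on that same system (assuming NumPy is installed):

```python
import numpy as np

# Columns of A are the new base vectors [1, 1] and [-2, 3];
# we solve c1*[1, 1] + c2*[-2, 3] = [1, 6] for (c1, c2).
A = np.array([[1, -2],
              [1,  3]])
b = np.array([1, 6])

c = np.linalg.solve(A, b)
print(c)  # [3. 1.], i.e. c1 = 3 and c2 = 1
```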


Dot Products

So we've multiplied a vector by a scalar already, but that's kinda dull; I want to smash two vectors together in ways other than addition.

The dot product is one of the ways we can do this, and it has its roots in integer multiplication.

It may seem very pedantic, but let's go over regular multiplication in relation to the number line real quick.


Work in Progress
