The perceptron algorithm maps an input to a single binary output value. For a proof of the Perceptron convergence theorem, see Perceptron Convergence Theorem_OldKiwi.

The perceptron algorithm was first introduced in Lecture_9_-_Linear_Discriminant_Functions_OldKiwi. The gradient descent procedure it uses is discussed in [Lecture 10].

Gradient Descent

Main article: Gradient Descent_OldKiwi

Consider the cost function $ J_p(\vec{c}) = \sum -\vec{c} \cdot y_i $, where the sum runs over the misclassified samples $ y_i $.

We use the gradient descent procedure to minimize $ J_p(\vec{c}) $.

Compute $ \nabla J_p(\vec{c}) = \ldots = - \sum y_i $, where the sum again runs over the misclassified samples.

Follow the basic gradient descent procedure:

- Start with an initial guess $ \vec{c_1} $

- Then, update $ \vec{c_2} = \vec{c_1} - \eta(1) \nabla J_p(\vec{c_1}) $, where $ \eta(1) $ is the step size

- Iterate $ \vec{c_{k+1}} = \vec{c_{k}} - \eta(k) \nabla J_p(\vec{c_k}) $ until it "converges", e.g. when $ \|\eta(k) \nabla J_p(\vec{c_k})\| < $ threshold, as sketched below.
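Below is a minimal sketch of this loop in Python (assuming a constant step size and a simple norm test as the stopping criterion; the names grad_Jp and eta are illustrative, not part of the original notes).

 import numpy as np
 
 def gradient_descent(grad_Jp, c1, eta=0.1, threshold=1e-6, max_iter=1000):
     """Generic gradient descent: c_{k+1} = c_k - eta(k) * grad J_p(c_k)."""
     c = np.asarray(c1, dtype=float)
     for k in range(max_iter):
         step = eta * grad_Jp(c)               # eta(k) taken constant here
         if np.linalg.norm(step) < threshold:  # "converged": the update is negligible
             break
         c = c - step
     return c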


Gradient Descent in the Perceptron Algorithm

$ \vec{c_{k+1}} = \vec{c_k} + cst \sum y_i $, where the sum runs over the misclassified samples $ y_i $ and cst is a constant step size, terminates after a finite number of steps provided the data are linearly separable (this is the Perceptron convergence theorem).
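As a rough illustration, here is a sketch of this batch update on made-up, linearly separable toy data (samples are assumed to be label-normalized, i.e. already multiplied by their class label, so a sample is misclassified exactly when $ \vec{c} \cdot y_i \le 0 $).

 import numpy as np
 
 def batch_perceptron(Y, cst=1.0, max_iter=1000):
     """Batch perceptron update: c_{k+1} = c_k + cst * sum of misclassified y_i."""
     c = np.zeros(Y.shape[1])
     for _ in range(max_iter):
         misclassified = Y[Y @ c <= 0]         # rows y_i with c . y_i <= 0
         if len(misclassified) == 0:           # separable case: finite termination
             break
         c = c + cst * misclassified.sum(axis=0)
     return c
 
 # Hypothetical label-normalized samples, augmented with a bias component
 Y = np.array([[ 0.0,  1.0,  1.0],
               [ 1.0,  0.0,  1.0],
               [-2.0, -2.0, -1.0]])
 c = batch_perceptron(Y)                       # satisfies c . y_i > 0 for every row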

But, in practice, the data are often not linearly separable. So instead, we use the Least Squares Procedure.

We want $ \vec{c} \cdot y_i > 0 $ for all samples $ y_i $. This is a system of linear inequalities, which is usually hard to solve. Therefore, we convert it into a system of linear equalities.

We choose $ b_i>0 $ and solve $ \vec{c} \cdot y_i = b_i $, for all $ i $.

The matrix equation has the following form:

$ \begin{bmatrix}y_1^T\\y_2^T\\\vdots\\y_d^T\end{bmatrix}_{d\times n}c_{n\times1} = \begin{bmatrix}b_1\\b_2\\\vdots\\b_d\end{bmatrix}_{d\times1} $.

This can also be written as $ \vec{Y} \cdot \vec{c} = \vec{b} $.

If $ d=n $, and $ \vec{y_1},\ldots,\vec{y_d} $ are "generic" (i.e. the determinant of $ \vec{Y} $ is not 0), then we "can" solve by matrix inversion.
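A small sketch of this square case with made-up data (the choice $ b_i = 1 $ is just one convenient example of $ b_i > 0 $):

 import numpy as np
 
 # Hypothetical d = n = 3 system: three augmented samples in 3 dimensions
 Y = np.array([[ 0.0,  1.0,  1.0],
               [ 1.0,  0.0,  1.0],
               [-2.0, -2.0, -1.0]])
 b = np.ones(3)                   # any b_i > 0 works; b_i = 1 is a common choice
 
 # "Generic" samples: det(Y) is nonzero, so Y c = b is solved by inversion
 c = np.linalg.solve(Y, b)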

If $ d>n $, the system is over-constrained (there is no exact solution in the generic case). This is the case where there is more data than unknowns, and the information is contradictory. In this case, we seek to minimize $ \|Y \vec{c} - \vec{b}\|_{L_2} $. The solution is given by $ \vec{c} = (Y^{\top}Y)^{-1}Y^{\top}b $, provided $ |Y^{\top}Y| \ne 0 $.

If $ |Y^{\top}Y| = 0 $, the limit $ \vec{c} = \lim_{\epsilon \to 0} (Y^{\top}Y + \epsilon I)^{-1}Y^{\top}b $ always exists!
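A sketch of the over-constrained case on made-up data (d > n); np.linalg.lstsq gives the same least-squares answer in a numerically safer way, and the epsilon-regularized form covers the singular case.

 import numpy as np
 
 # Hypothetical over-constrained system: d = 4 samples, n = 3 unknowns
 Y = np.array([[ 0.0,  1.0,  1.0],
               [ 1.0,  0.0,  1.0],
               [ 1.0,  1.0,  1.0],
               [-2.0, -2.0, -1.0]])
 b = np.ones(4)
 
 # Least-squares solution c = (Y^T Y)^{-1} Y^T b, valid when |Y^T Y| != 0
 c = np.linalg.solve(Y.T @ Y, Y.T @ b)
 
 # Equivalent, numerically preferable formulation
 c_lstsq, *_ = np.linalg.lstsq(Y, b, rcond=None)
 
 # If |Y^T Y| = 0, take the limit of the regularized solution as eps -> 0
 eps = 1e-8
 c_reg = np.linalg.solve(Y.T @ Y + eps * np.eye(Y.shape[1]), Y.T @ b)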
