The perceptron algorithm maps an input to a single binary output value. For a proof of the perceptron convergence theorem, see [Perceptron Convergence Theorem_Old Kiwi].
First introduced in [Lecture 9]. The gradient descent algorithm used is discussed in [Lecture 10].
Gradient Descent
================
Main article: [Gradient Descent]
Consider the cost function $ J_p(\vec{c}) = \sum -\vec{c} \cdot y_i $, where the sum is taken over the misclassified samples $ y_i $.
We use the gradient descent procedure to minimize $ J_p(\vec{c}) $.
Compute $ \nabla J_p(\vec{c}) = \sum \nabla_{\vec{c}} (-\vec{c} \cdot y_i) = - \sum y_i $, since each misclassified term is linear in $ \vec{c} $.
Follow the basic gradient descent procedure (a small Python sketch follows the list):

- Start with an initial guess $ \vec{c_1} $.
- Then update $ \vec{c_2} = \vec{c_1} - \eta(1) \nabla J_p(\vec{c_1}) $, where $ \eta(1) $ is the step size.
- Iterate $ \vec{c_{k+1}} = \vec{c_k} - \eta(k) \nabla J_p(\vec{c_k}) $ until it "converges", e.g. when $ \| \eta(k) \nabla J_p(\vec{c_k}) \| < $ threshold.
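To make the procedure concrete, here is a minimal Python sketch of the batch perceptron trained by gradient descent (not from the original notes; the function name, the fixed step size, and the toy data are illustrative assumptions)::

    import numpy as np

    def batch_perceptron(X, labels, eta=1.0, max_iter=1000, tol=1e-6):
        """Gradient descent on J_p(c) = sum over misclassified samples of -c . y_i.
        A sketch; the argument names and the fixed step size eta are illustrative."""
        # "Normalize" the samples: replace each sample by label * sample,
        # so that correct classification means c . y_i > 0.
        Y = labels[:, None] * X
        c = np.zeros(Y.shape[1])                      # initial guess c_1
        for k in range(max_iter):
            misclassified = Y[Y @ c <= 0]             # rows with c . y_i <= 0
            if misclassified.shape[0] == 0:           # nothing misclassified: J_p is minimized
                break
            update = eta * misclassified.sum(axis=0)  # -eta(k) * grad J_p(c_k) = eta(k) * sum y_i
            c = c + update                            # c_{k+1} = c_k - eta(k) * grad J_p(c_k)
            if np.linalg.norm(update) < tol:          # stop when the update is below the threshold
                break
        return c

    # Toy usage: two linearly separable classes in 2-D, with a constant bias feature appended.
    X = np.array([[1.0, 2.0, 1.0], [2.0, 1.0, 1.0], [-1.0, -2.0, 1.0], [-2.0, -1.0, 1.0]])
    labels = np.array([1.0, 1.0, -1.0, -1.0])
    c = batch_perceptron(X, labels)
    print(c, np.sign(X @ c))                          # the signs should match the labels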
Gradient Descent in the Perceptron Algorithm
============================================
**Theorem:** If the samples are linearly separable, then the "batch [perceptron]" iterative algorithm $ \vec{c_{k+1}} = \vec{c_k} + cst \sum y_i $, where the sum is over the misclassified samples $ y_i $, terminates after a finite number of steps. The proof of this theorem, PerceptronConvergenceTheorem, is due to Novikoff (1962).
But in practice the data are usually not linearly separable, so we use the Least Squares Procedure instead.
We want $ \vec{c} \cdot y_i > 0 $ for all samples $ y_i $. This is a linear inequality problem, which is usually hard to solve. Therefore, we convert it into a linear equality problem: we choose $ b_i > 0 $ and solve $ \vec{c} \cdot y_i = b_i $ for all $ i $.
The matrix equation has the following form:
.. image:: equation111.jpg
This can also be written as $ Y \vec{c} = \vec{b} $.
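As a sketch of the stacked form (assuming, as the notation suggests, that each row of $ Y $ is one sample and each entry of $ \vec{b} $ is the corresponding margin): $ Y = (\vec{y_1}, \ldots, \vec{y_d})^{\top} $ and $ \vec{b} = (b_1, \ldots, b_d)^{\top} $, so the $ i $-th row of $ Y \vec{c} = \vec{b} $ is just $ \vec{c} \cdot \vec{y_i} = b_i $.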
If $ d = n $ and $ \vec{y_1}, \ldots, \vec{y_d} $ are "generic" (i.e. the determinant of $ Y $ is not 0), then we "can" solve by matrix inversion.

If $ d > n $, the system is over-constrained and, in the generic case, has no exact solution. This is the case where there is more data than unknowns, and the information is contradictory. In this case, we seek to minimize $ \| Y \vec{c} - \vec{b} \|_{L_2} $. The solution is given by $ \vec{c} = (Y^{\top}Y)^{-1}Y^{\top}\vec{b} $, provided $ |Y^{\top}Y| \ne 0 $.

If $ |Y^{\top}Y| = 0 $, the limit $ \vec{c} = \lim_{\epsilon \to 0} (Y^{\top}Y + \epsilon I)^{-1}Y^{\top}\vec{b} $ always exists!
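A minimal numerical sketch of these formulas in Python (not from the original notes; the sample matrix Y and the margins b below are illustrative)::

    import numpy as np

    # Illustrative "normalized" samples as rows of Y (d = 4 samples, n = 3 features)
    # and margins b_i > 0 chosen arbitrarily.
    Y = np.array([[1.0, 2.0, 1.0], [2.0, 1.0, 1.0], [1.0, 2.0, -1.0], [2.0, 1.0, -1.0]])
    b = np.ones(Y.shape[0])

    # Over-constrained case (d > n): minimize || Y c - b ||_2.
    c_lstsq, *_ = np.linalg.lstsq(Y, b, rcond=None)

    # Same solution via the normal equations, valid when |Y^T Y| != 0:
    # c = (Y^T Y)^{-1} Y^T b.
    c_normal = np.linalg.solve(Y.T @ Y, Y.T @ b)

    # When |Y^T Y| = 0, the limit (Y^T Y + eps*I)^{-1} Y^T b as eps -> 0 still exists;
    # the Moore-Penrose pseudoinverse computes that limit directly.
    c_pinv = np.linalg.pinv(Y) @ b

    print(c_lstsq, c_normal, c_pinv)   # all three agree here, since Y^T Y is invertible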