Revision as of 10:35, 1 May 2014
Support Vector Machine and its Applications in Classification Problems
A slecture by Xing Liu
Partially based on the ECE662 Spring 2014 lecture material of Prof. Mireille Boutin.
Outline of the slecture
- Linear discriminant functions
- Summary
- References
Linear Classification Problem Statement
In a linear classification problem, the feature space can be divided into different regions by hyperplanes. In this lecture, we take the two-category case to illustrate. Given training samples $ \vec{y}_1,\vec{y}_2,...,\vec{y}_n \in \mathbb{R}^p $, each $ \vec{y}_i $ is a p-dimensional vector belonging to either class $ w_1 $ or class $ w_2 $. The goal is to find the maximum-margin hyperplane that separates the points in the feature space belonging to class $ w_1 $ from those belonging to class $ w_2 $. The discriminant function can be written as
$ g(\vec{y}) = c\cdot\vec{y} $
Then a test point $ \vec{y}_i $ is labelled $ w_1 $ if $ g(\vec{y}_i) > b $ and $ w_2 $ otherwise. The separating hyperplane can be written as $ c\cdot \vec{y}=b $, where $ \cdot $ denotes the dot product, $ c $ determines the orientation of the hyperplane, and $ b $ determines its position relative to the origin.
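The decision rule above can be sketched in a few lines of Python. This is a minimal illustration, not a trained classifier: the orientation vector `c` and offset `b` are hypothetical values chosen for the example, whereas in practice they would be learned from the training samples.

```python
import numpy as np

# Hypothetical hyperplane parameters for a 2-dimensional feature space;
# in a real problem these are learned from the training data.
c = np.array([1.0, 2.0])  # orientation of the hyperplane
b = 1.0                   # offset term

def g(y):
    """Linear discriminant function g(y) = c . y."""
    return np.dot(c, y)

def classify(y):
    """Label a test point w1 if c . y > b, otherwise w2."""
    return "w1" if g(y) > b else "w2"

print(classify(np.array([2.0, 2.0])))  # c . y = 6.0 > 1.0, so w1
print(classify(np.array([0.0, 0.0])))  # c . y = 0.0 <= 1.0, so w2
```

Points with $ c\cdot\vec{y} = b $ lie exactly on the separating hyperplane; the two half-spaces on either side correspond to the two class labels.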