Latest revision as of 06:18, 8 May 2014


Back to ECE662, Spring 2014


Questions and Comments for: Support Vector Machine

A slecture by student Tao Jiang


Please leave me a comment below if you have any questions, if you notice any errors, or if you would like to discuss a topic further.



Question/comment


Review

This slecture was reviewed by Robert Ness.

The student gives an in-depth introduction to SVM. He pays specific attention to the concept of slack variables, the kernel trick, and extensions from binary to multiclass classification. He also gives a concise breakdown of SVM in the language of optimization.

Strong points:

Breaks down SVM in terms of the optimization problem. I expect most who use SVM in practice don't concern themselves too much with this, but going through the steps outlined here gives one a good intuition of SVM. This is especially true for the kernel trick, when one sees it in the context of optimization.
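For readers of this review, the point about seeing the kernel trick through the optimization lens can be made concrete: in the dual problem, the data enter only through inner products, so a kernel function can replace an explicit (possibly high-dimensional) feature map. A minimal numeric sketch, using a hypothetical degree-2 polynomial kernel on 2-D inputs:

```python
import numpy as np

def poly_feature_map(x):
    # Explicit degree-2 feature map phi for 2-D input x = (x1, x2),
    # chosen so that phi(x) . phi(z) == (x . z + 1)**2
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

def poly_kernel(x, z):
    # Kernel trick: the same inner product, computed without ever
    # forming the 6-dimensional feature vectors
    return (np.dot(x, z) + 1.0) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
explicit = np.dot(poly_feature_map(x), poly_feature_map(z))
via_kernel = poly_kernel(x, z)
print(explicit, via_kernel)  # both equal 4.0
```

Since the dual objective and the resulting decision function depend on the training points only through such inner products, swapping in the kernel changes no other part of the optimization.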

He also extends his explanation to classifying multiple categories, which went above and beyond what was covered in the class.
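One common binary-to-multiclass extension, shown here purely as an illustration of the idea (not necessarily the exact scheme the slecture uses), is one-vs-rest: train one binary SVM per class and predict the class whose classifier returns the largest decision value.

```python
import numpy as np

# Hypothetical decision values: one row per sample, one column per
# binary "class k vs rest" SVM
scores = np.array([
    [ 1.2, -0.3, -0.8],
    [-0.5,  0.9,  0.1],
    [-1.0, -0.2,  0.7],
])
predictions = scores.argmax(axis=1)  # pick the most confident classifier
print(predictions)  # [0 1 2]
```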

Critiques:

At time 10:58, you state that you need the condition that t*y > 0. This will confuse people who have not seen it before. In the discussion of slack variables, it would be useful to explain why one doesn't simply map the data into a higher dimension until a linear separation is obtained.
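For readers unfamiliar with the condition, a tiny numeric sketch of what t*y > 0 encodes: with labels t in {-1, +1} and signed discriminant values y, the product is positive exactly when the classifier's sign agrees with the label, i.e. when the point is correctly classified. (The data below are hypothetical toy values.)

```python
import numpy as np

t = np.array([+1, -1, +1, -1])        # class labels
y = np.array([2.3, -0.7, -0.1, 1.5])  # discriminant values, e.g. w.x + b

correct = t * y > 0   # True exactly where sign(y) matches the label t
print(correct)        # [ True  True False False] -- last two misclassified
```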

The discussion of the choice of parameters around time 50:00 could have been enhanced with a mention of cross-validation, which is often how such parameters are chosen.
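To make the cross-validation suggestion concrete: the data are split into k folds, each fold serves once as the validation set, and each candidate parameter setting (e.g. the penalty C and a kernel parameter) is scored by its average validation accuracy. A minimal sketch of the fold construction, assuming a dataset of n samples (in practice one would typically reach for a library helper such as scikit-learn's model selection tools):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    # Shuffle the n sample indices, then split them into k roughly
    # equal folds (a minimal sketch of k-fold cross-validation)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return np.array_split(idx, k)

folds = kfold_indices(10, 5)
# For each candidate parameter pair, train on k-1 folds, validate on
# the held-out fold, and keep the pair with the best average accuracy.
print([len(f) for f in folds])  # [2, 2, 2, 2, 2]
```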


