[[Category:2010 Spring ECE 662 mboutin]]

=Details of Lecture 24, ECE662 Spring 2010=

April 16, 2010

<span style="color:green"> This is a make-up lecture, 1:30-2:30 in EE117. </span>

In Lecture 24, we continued discussing [[Support_Vector_Machines|Support Vector Machines]]. We discussed the use of slack variables to define "soft margins" when the data is not linearly separable. We noted that the optimization problem in that case involves inner products between the training samples, but not the training samples' coordinates themselves. We then introduced the "kernel trick" for computing inner products in the extended feature space without actually extending the space. We defined the notion of a kernel and presented two examples of well-known kernels. More generally, we discussed the existence of an underlying feature space extension for a given kernel function.
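
For reference, the standard soft-margin formulation reads as follows (a sketch in common notation; the notation used in class may differ). The slack variables <math>\xi_i \ge 0</math> let individual samples violate the margin, at a cost set by a parameter <math>C > 0</math>:

<math>\min_{w,b,\xi}\; \frac{1}{2}\|w\|^2 + C\sum_{i=1}^N \xi_i,</math>

subject to <math>y_i(w \cdot x_i + b) \ge 1 - \xi_i</math> and <math>\xi_i \ge 0</math> for all <math>i</math>. The dual problem depends on the training samples only through their inner products:

<math>\max_{\alpha}\; \sum_{i=1}^N \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j \,(x_i \cdot x_j),</math>

subject to <math>0 \le \alpha_i \le C</math> and <math>\sum_{i=1}^N \alpha_i y_i = 0</math>. This is what makes the kernel trick possible: each inner product <math>x_i \cdot x_j</math> can be replaced by <math>K(x_i, x_j) = \varphi(x_i) \cdot \varphi(x_j)</math> without ever computing the feature space extension <math>\varphi</math> explicitly.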
  
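The two example kernels from the lecture are not recorded on this page; the polynomial and Gaussian (radial basis function) kernels are the usual ones. Below is a minimal numerical sketch (plain Python/NumPy; the function names are illustrative, not from the course) checking that the degree-2 polynomial kernel <math>K(x,y) = (x \cdot y)^2</math> returns the same value as an explicit inner product in the extended feature space, while never constructing that space itself:

<pre>
import numpy as np

def poly2_kernel(x, y):
    # Degree-2 (homogeneous) polynomial kernel: K(x, y) = (x . y)^2.
    # Operates directly on the original coordinates.
    return np.dot(x, y) ** 2

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel; its implicit feature space is
    # infinite-dimensional, so no explicit map can be written out.
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def phi_poly2(x):
    # Explicit feature space extension for the degree-2 polynomial
    # kernel in 2D: phi(x1, x2) = (x1^2, x2^2, sqrt(2) x1 x2).
    x1, x2 = x
    return np.array([x1 ** 2, x2 ** 2, np.sqrt(2.0) * x1 * x2])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

# Both routes give the same inner product, but the kernel never
# builds phi(x): this is the "kernel trick".
print(poly2_kernel(x, y))                  # (1*3 + 2*(-1))^2 = 1.0
print(np.dot(phi_poly2(x), phi_poly2(y)))  # 1.0 as well
print(gaussian_kernel(x, y))               # a kernel with no finite phi
</pre>

The Gaussian kernel illustrates the more general point from the lecture: a valid kernel is guaranteed to correspond to some underlying feature space extension, even when that space cannot be constructed explicitly.
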
Meanwhile, a student created [[Fisher_discriminant_under_nonlinear_data|an interesting page]] on the use of [[Fisher_Linear_Discriminant|Fisher's linear discriminant]] when the data is not linearly separable.
  
 
Previous: [[Lecture23ECE662S10|Lecture 23]] Next: [[Lecture25ECE662S10|Lecture 25]]


Back to course outline

Back to 2010 Spring ECE 662 mboutin

Back to ECE662
