Revision as of 17:15, 16 March 2008 by Ynimmaga (Talk)

Support Vector Machines

1. Training requires solving a quadratic programming (QP) problem, which can be computationally intensive.
2. Finding the right kernel function, which SVMs need in order to make the data linearly separable in feature space, is a non-trivial task.
3. Choosing a kernel function and optimizing the cost function are done as separate steps. (Neural networks, where these are done simultaneously, claim this as an advantage over SVMs.)
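A small illustration of point 2 (a pure-Python sketch, not from the original notes): XOR-style data is not linearly separable, but a dual (kernelized) perceptron with an RBF kernel separates it, because the Gaussian kernel's Gram matrix on distinct points is positive definite. The kernel functions and the toy data below are assumptions chosen for the example.

```python
import math

# Toy XOR-style data: not linearly separable in the input space.
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
y = [-1, 1, 1, -1]

def linear_kernel(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def rbf_kernel(a, b, gamma=1.0):
    return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def train_kernel_perceptron(X, y, kernel, epochs=50):
    # Dual (kernelized) perceptron: alpha[i] counts mistakes on example i.
    alpha = [0] * len(X)
    for _ in range(epochs):
        for i, xi in enumerate(X):
            s = sum(alpha[j] * y[j] * kernel(X[j], xi) for j in range(len(X)))
            if y[i] * s <= 0:
                alpha[i] += 1
    return alpha

def predict(alpha, X, y, kernel, x):
    s = sum(alpha[j] * y[j] * kernel(X[j], x) for j in range(len(X)))
    return 1 if s > 0 else -1

# With the RBF kernel the data is separable in feature space,
# so training converges; no linear kernel can fit XOR.
alpha = train_kernel_perceptron(X, y, rbf_kernel)
rbf_preds = [predict(alpha, X, y, rbf_kernel, x) for x in X]
```

Swapping in `linear_kernel` above leaves at least one point misclassified no matter how long training runs, which is the sense in which kernel choice is non-trivial.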

Perceptron (with FLD)

1. Requires the data to be linearly separable. If the classification accuracy of the perceptron is poor, kernel methods (e.g., SVMs) may be required.
2. If the required class means and covariances are not known, they can be estimated from the training set, using parameter-estimation methods such as the maximum likelihood (ML) or maximum a posteriori (MAP) estimate.
3. Regularization may be required (when computing the matrix inverse) to avoid overfitting issues.
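Points 2 and 3 above can be sketched together (a minimal pure-Python example, assuming 2-D data and a ridge-style regularizer): the class means and within-class scatter are estimated from the training set, and a small multiple of the identity is added before inverting, giving the Fisher direction w = (S_W + λI)⁻¹(m₁ − m₀). The data and λ below are illustrative assumptions.

```python
def mean(pts):
    n = len(pts)
    return [sum(p[k] for p in pts) / n for k in range(2)]

def scatter(pts, m):
    # 2x2 within-class scatter: sum of outer products of deviations.
    S = [[0.0, 0.0], [0.0, 0.0]]
    for p in pts:
        d = [p[0] - m[0], p[1] - m[1]]
        for i in range(2):
            for j in range(2):
                S[i][j] += d[i] * d[j]
    return S

def fld_direction(class0, class1, lam=1e-3):
    # ML estimates of the means and within-class scatter from the training
    # set; lam * I regularizes the inverse (point 3 above).
    m0, m1 = mean(class0), mean(class1)
    S0, S1 = scatter(class0, m0), scatter(class1, m1)
    Sw = [[S0[i][j] + S1[i][j] + (lam if i == j else 0.0)
           for j in range(2)] for i in range(2)]
    det = Sw[0][0] * Sw[1][1] - Sw[0][1] * Sw[1][0]
    inv = [[Sw[1][1] / det, -Sw[0][1] / det],
           [-Sw[1][0] / det, Sw[0][0] / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    return w, m0, m1

def classify(x, w, m0, m1):
    # Project onto w and threshold at the midpoint of the projected means.
    proj = w[0] * x[0] + w[1] * x[1]
    t = (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1])) / 2
    return 1 if proj > t else 0

class0 = [(0, 0), (1, 0), (0, 1), (1, 1)]
class1 = [(3, 0), (4, 0), (3, 1), (4, 1)]
w, m0, m1 = fld_direction(class0, class1)
```

Without the λI term, a singular or near-singular scatter matrix (e.g., when features are collinear or samples are few) makes the inverse blow up, which is exactly the overfitting issue regularization guards against.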

KNN Classification

1. This classification method gives very good results when a large amount of training data is available.
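For reference, a minimal k-nearest-neighbors classifier fits in a few lines (a pure-Python sketch using Euclidean distance and majority vote; the toy data is an assumption for the example):

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    # Sort training indices by Euclidean distance to x,
    # then take a majority vote among the k nearest labels.
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

train = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ['a', 'a', 'a', 'b', 'b', 'b']
```

Note that every prediction scans the whole training set, so the method's accuracy with huge training data comes at the cost of query-time computation (often mitigated with k-d trees or similar index structures).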

From: yamini.nimmagadda.1
Date: Fri, 07 Mar 2008 17:03:13 -0500
Subject: Distance Metric Learning

1) For given input data with no a priori knowledge, choosing an appropriate distance metric is very important. Distance metrics are used in density estimation methods (Parzen windows), clustering (k-means), instance-based classification methods (nearest neighbors), etc. Euclidean distance is used in most cases, but when the relationship between data points is non-linear, selecting a distance metric is a challenge. Here is a reference addressing this issue: [1]
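A concrete way to see why the metric matters (a pure-Python sketch; the points and variances below are illustrative assumptions): a Mahalanobis-style distance with a diagonal covariance rescales each axis by its variance, which can reverse which of two points is "nearest" compared with plain Euclidean distance.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def mahalanobis_diag(a, b, var):
    # Diagonal-covariance Mahalanobis distance:
    # sqrt( sum_i (a_i - b_i)^2 / var_i )
    return math.sqrt(sum((ai - bi) ** 2 / v for ai, bi, v in zip(a, b, var)))

x = (0.0, 0.0)
p = (3.0, 0.0)        # farther in raw units, but along a high-variance axis
q = (0.0, 2.0)        # closer in raw units, but along a low-variance axis
var = (100.0, 1.0)    # axis 0 varies a lot in this data, axis 1 very little
```

Under the Euclidean metric q is the nearer point, while under the variance-scaled metric p is, so a nearest-neighbor or k-means step would behave differently depending on which metric is chosen.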
