Revision as of 04:08, 6 May 2010 by Zge (Talk | contribs)


This is an informal discussion of the k-nearest-neighbor (KNN) and nearest-neighbor (NN) estimation rules in the k = 1 case. My conclusion is that KNN and NN classification are essentially the same when k = 1.

At any test point, suppose the point is assigned to category k by NN, i.e., the nearest training point belongs to category k. Now switch to KNN with k = 1: grow a volume around the test point until it captures one training point from each category. The volume needed for category k is the smallest, so category k yields the largest density estimate (and hence the largest discriminant value), and the test point is again assigned to category k by KNN.

Based on the above discussion, I conclude that classification by KNN and by NN is identical when k = 1.
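The argument above can be checked numerically. Below is a small sketch of my own (not from the original note, and the toy data and function names are illustrative): with priors P(c) = n_c/n, the posterior from the k = 1 KNN density estimate is proportional to 1/V_c(x), where V_c(x) is the smallest ball around x containing one class-c sample, so both rules end up picking the class of the overall nearest training point.

```python
import math
import random

def nn_classify(x, train):
    """Plain nearest-neighbor rule: label of the closest training point."""
    return min(train, key=lambda p: math.dist(x, p[0]))[1]

def knn_density_classify(x, train):
    """k = 1 KNN density estimate per class, combined with priors n_c/n.
    Posterior(c) is proportional to p(x|c) P(c) = [1/(n_c V_c)] [n_c/n],
    i.e. to 1/V_c, where V_c grows with r_c, the distance from x to the
    nearest class-c sample; so we pick the class with the smallest r_c."""
    classes = {label for _, label in train}
    r = {c: min(math.dist(x, p) for p, label in train if label == c)
         for c in classes}
    return min(r, key=r.get)

# Toy 2-D data: two Gaussian clusters with unequal class sizes.
random.seed(0)
train = [((random.gauss(0, 1), random.gauss(0, 1)), 'a') for _ in range(20)] + \
        [((random.gauss(3, 1), random.gauss(3, 1)), 'b') for _ in range(30)]
tests = [(random.uniform(-2, 5), random.uniform(-2, 5)) for _ in range(200)]

agree = all(nn_classify(x, train) == knn_density_classify(x, train)
            for x in tests)
print(agree)  # True: the two rules agree at every test point
```

Note that the class sizes need not be equal: the factor n_c cancels between the density estimate and the prior, which is why only the nearest-neighbor radius per class matters.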

zge
