[[Category:discussion]]
[[Category:decision theory]]
[[Category:nearest neighbor]]
[[Category:k nearest neighbors]]
[[Category:ECE662]]
  
Latest revision as of 09:47, 22 March 2012


= Informal Comparison of KNN and NN in case of k = 1 =

This is an informal discussion of the k<sub>n</sub>-nearest-neighbor (KNN) and nearest-neighbor (NN) estimation methods in the k = 1 case. My conclusion is that KNN and NN are essentially the same when k = 1.

Consider any test point, and suppose NN assigns it to category k; that is, when we grow a volume around the test point, the first training point captured belongs to category k. If we instead use KNN and enlarge the volume around this same test point, the volume needed to capture a sample from category k is the smallest among all categories, giving category k the largest discriminant (density-estimate) value, so the point is still assigned to category k under KNN.

Based on the above discussion, I conclude that the classifications produced by KNN and NN with k = 1 are the same.
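The equivalence can be checked numerically. Below is a minimal sketch on hypothetical two-class data (not from this page); it assumes equal class sample sizes, so comparing the per-class density estimates k/(n<sub>c</sub>V<sub>c</sub>) reduces to comparing the capture radii V<sub>c</sub>. The NN rule takes the label of the single nearest training point, while the KNN rule with k = 1 grows a ball around the test point for each category and picks the category whose ball is smallest (largest density estimate).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class training data with equal class sizes.
X_train = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y_train = np.array([0] * 30 + [1] * 30)
X_test = rng.normal(1.5, 2, (50, 2))

def nn_classify(x):
    """NN rule: label of the single nearest training point."""
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(d)]

def knn_density_classify(x, k=1):
    """KNN density rule: for each category, grow a ball around x until it
    captures k samples of that category; the category needing the smallest
    ball (hence the largest density estimate) wins."""
    best_class, best_radius = None, np.inf
    for c in np.unique(y_train):
        d = np.sort(np.linalg.norm(X_train[y_train == c] - x, axis=1))
        r = d[k - 1]  # radius of the ball capturing k samples of category c
        if r < best_radius:
            best_class, best_radius = c, r
    return best_class

nn_pred = np.array([nn_classify(x) for x in X_test])
knn_pred = np.array([knn_density_classify(x, k=1) for x in X_test])
print(np.array_equal(nn_pred, knn_pred))  # the two rules agree when k = 1
```

The agreement is not a coincidence of the data: the overall nearest training point belongs to some category c, and its distance is both category c's capture radius and a lower bound on every other category's radius, so both rules must pick c.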

zge

*Answer/comment here.

----
[[ECE662|Back to ECE662]]
