Parzen windows are very similar to k-nearest neighbors (kNN). Both methods can generate very complex decision boundaries. The main difference is that instead of looking at the k closest points to a piece of training data, all points within a fixed distance are considered. In practice, one consequence is that datasets with large gaps are treated very differently: kNN will still pick up far-away points, but a relatively small Parzen window may enclose zero points. In that case, only the priors can be used to classify. Unfortunately, this dataset had many such holes at the fringes.

The Parzen-window density estimate using $n$ training samples and the window function $\varphi$ is defined by

$ p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{V_n} \varphi\left(\frac{x - x_i}{h_n}\right) $

The estimate $ p_n(x) $ is an average of window functions. Usually the window function attains its maximum at the origin, and its values decrease as we move further from the origin; each training sample then contributes to the estimate in accordance with its distance from $x$.
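As a minimal sketch of the ideas above, the Python snippet below estimates $p_n(x)$ with a hypercube window of side $h$ (so $V_n = h^d$ and $\varphi$ is an indicator function), and classifies with Bayes' rule over the class-conditional Parzen estimates, falling back to the priors alone when the window encloses zero points. The function names `parzen_density` and `classify`, the hypercube window, and the toy two-class data are all illustrative assumptions, not part of the original text.

    import numpy as np

    def parzen_density(x, samples, h):
        """Parzen-window density estimate p_n(x) with a hypercube window.

        phi(u) = 1 if all |u_j| <= 1/2 else 0, so (1/V_n) * phi((x - x_i)/h)
        counts sample x_i when it lies inside a hypercube of side h at x.
        """
        n, d = samples.shape
        V = h ** d                             # volume of the hypercube window
        u = np.abs(x - samples) / h            # scaled offsets, shape (n, d)
        inside = np.all(u <= 0.5, axis=1)      # samples falling in the window
        return inside.sum() / (n * V)

    def classify(x, class_samples, priors, h):
        """Bayes classification from Parzen class-conditional densities.

        If no training point of any class falls inside the window (the
        'hole in the dataset' case discussed above), all likelihoods are
        zero and we fall back to the class priors alone.
        """
        posteriors = np.array([parzen_density(x, s, h) * p
                               for s, p in zip(class_samples, priors)])
        if posteriors.sum() == 0:              # empty window: priors only
            return int(np.argmax(priors))
        return int(np.argmax(posteriors))

    # Toy usage: two Gaussian classes in 2-D (hypothetical data)
    rng = np.random.default_rng(0)
    class0 = rng.normal(loc=[0, 0], scale=1.0, size=(100, 2))
    class1 = rng.normal(loc=[3, 3], scale=1.0, size=(100, 2))
    print(classify(np.array([0.5, 0.5]), [class0, class1], [0.5, 0.5], h=1.0))
    print(classify(np.array([50.0, 50.0]), [class0, class1], [0.6, 0.4], h=1.0))

The second query point sits in a gap far from all training data, so both Parzen estimates are zero and the decision comes from the priors, exactly the failure mode contrasted with kNN above.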

