[[Image:Lec15_comparison_OldKiwi.PNG]]
== Parzen Window Method ==

**Step 1:** Choose the "shape" of your window by introducing a "window function".

e.g., if <math>R_i</math> is a hypercube in <math>\mathbb{R}^n</math> with side-length <math>h_i</math>, then the window function is <math>\varphi</math>, where

<math>\varphi(\vec{u})=\varphi(u_1, u_2, \ldots, u_n)=1</math> if <math>|u_i|<\frac{1}{2}, \forall i</math>, and 0 otherwise.
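In code, the hypercube window above can be sketched as follows (a minimal NumPy sketch; the function name `phi` is our own choice):

```python
import numpy as np

def phi(u):
    """Hypercube window: 1 if |u_j| < 1/2 for every coordinate, else 0."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    return 1.0 if np.all(np.abs(u) < 0.5) else 0.0

print(phi([0.1, -0.3]))  # 1.0 (inside the unit hypercube)
print(phi([0.6, 0.0]))   # 0.0 (outside)
```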
Examples of Parzen windows:

[[Image:Lec15_square_OldKiwi.jpg]]

[[Image:Lec15_square3D_OldKiwi.jpg]]
Given the shape of the Parzen window defined by <math>\varphi</math>, we can scale and shift it as required by the method.

<math>\varphi\left(\frac{\vec{x}-\vec{x_0}}{h_i}\right)</math> is the window centered at <math>\vec{x_0}</math>, scaled by a factor <math>h_i</math>, i.e., its side-length is <math>h_i</math>.
[[Image:shiftWindow.jpg]]
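As a quick sketch of the shift and scale (assuming the hypercube window; the variable names are ours), <math>\varphi\left(\frac{\vec{x}-\vec{x_0}}{h_i}\right)</math> tests membership in the cube of side <math>h_i</math> centered at <math>\vec{x_0}</math>:

```python
import numpy as np

def phi(u):
    # hypercube window: 1 inside the unit cube centered at the origin
    return 1.0 if np.all(np.abs(np.atleast_1d(u)) < 0.5) else 0.0

x0, h = np.array([2.0, 2.0]), 4.0
# phi((x - x0)/h) is 1 exactly when x lies in the cube of side h centered at x0
print(phi((np.array([3.5, 0.5]) - x0) / h))  # 1.0 (inside)
print(phi((np.array([4.5, 2.0]) - x0) / h))  # 0.0 (outside: |4.5 - 2| = 2.5 >= h/2)
```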
**Step 2:** Write the density estimate of <math>p(\vec{x})</math> at <math>\vec{x_0} \in R_i</math> using the window function; denote it by <math>p_i(\vec{x_0})</math>.
Let <math>K_i</math> denote the number of samples from <math>\{\vec{x_1}, \vec{x_2}, \ldots, \vec{x_i}\}</math> that fall inside <math>R_i</math>:

<math>K_i = \sum_{l=1}^{i}\varphi\left(\frac{\vec{x_l}-\vec{x_0}}{h_i}\right)</math>

So,

<math>p_i(\vec{x_0})=\frac{K_i}{iV_i}=\frac{1}{iV_i}\sum_{l=1}^{i}\varphi\left(\frac{\vec{x_l}-\vec{x_0}}{h_i}\right)</math>

Let <math>\delta_i(\vec{u})=\frac{1}{V_i}\varphi\left(\frac{\vec{u}}{h_i}\right)</math>. Then

<math>p_i(\vec{x_0})=\frac{1}{i}\sum_{l=1}^{i}\delta_i(\vec{x_l}-\vec{x_0})</math>
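The estimate <math>p_i(\vec{x_0})</math> can be sketched directly in code (a hypothetical helper of ours, assuming the hypercube window, so <math>V_i = h_i^n</math>):

```python
import numpy as np

def phi(u):
    # hypercube window function
    return 1.0 if np.all(np.abs(np.atleast_1d(u)) < 0.5) else 0.0

def parzen_estimate(x0, samples, h):
    """p_i(x0) = 1/(i*V_i) * sum_l phi((x_l - x0)/h), with V_i = h**n."""
    samples = np.asarray(samples, dtype=float)
    if samples.ndim == 1:
        samples = samples[:, None]          # treat 1-D data as column vectors
    i, n = samples.shape
    V = h ** n
    K = sum(phi((xl - np.atleast_1d(x0)) / h) for xl in samples)
    return K / (i * V)

# Uniform(0,1) samples: the true density at 0.5 is 1
rng = np.random.default_rng(0)
est = parzen_estimate(0.5, rng.uniform(0, 1, 2000), h=0.2)
print(est)  # roughly 1
```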
This last equation is an average over impulses: for any <math>l</math>, <math>\lim_{h_i \rightarrow 0}\delta_i(\vec{x_l}-\vec{x_0})</math> is a [Dirac delta Function]. We do not want to average over Dirac delta functions; our objective is that <math>p_i(\vec{x_0})</math> should converge to the true value <math>p(\vec{x_0})</math> as <math>i\rightarrow \infty</math>.
[[Image:dirac.jpg]]
**What does convergence mean here?**

Observe that <math>\{p_i(\vec{x_0})\}</math> is a sequence of random variables, since <math>p_i(\vec{x_0})</math> depends on the random samples <math>\{\vec{x_1}, \vec{x_2}, \ldots, \vec{x_i}\}</math>. What do we mean by convergence of a sequence of random variables? (There are many definitions.) We pick convergence in the "mean square" sense, i.e.

If <math>\lim_{i\rightarrow \infty}E\{p_i(\vec{x_0})\}=p(\vec{x_0})</math>

and <math>\lim_{i\rightarrow \infty}Var\{p_i(\vec{x_0})\}=0</math>,

then we say <math>p_i(\vec{x_0}) \longrightarrow p(\vec{x_0})</math> in mean square as <math>i\rightarrow \infty</math>.
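A small numerical illustration (a toy setup of ours, not from the lecture): with Uniform(0,1) data and <math>h_i = 1/\sqrt{i}</math>, the estimate at <math>x_0 = 0.5</math> gets close to the true density value 1 as <math>i</math> grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def parzen_at(x0, samples, h):
    # 1-D hypercube window: count samples with |x_l - x0| < h/2; V_i = h
    k = np.sum(np.abs(samples - x0) < h / 2)
    return k / (len(samples) * h)

true_p = 1.0  # Uniform(0,1) density at x0 = 0.5
for i in (100, 10_000, 1_000_000):
    h_i = 1 / np.sqrt(i)   # h_i -> 0 while i*V_i = sqrt(i) -> infinity
    est = parzen_at(0.5, rng.uniform(0, 1, i), h_i)
    print(i, est)          # estimates cluster around 1 for large i
```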
**First condition:**

From the previous result,

<math>\displaystyle p_i (\vec{x}_0) = \frac{1}{i} \sum_{l=1}^{i} \delta_i (\vec{x}_l - \vec{x}_0)</math>

<math>\displaystyle E[p_i(\vec{x}_0)] = \frac{1}{i} \sum_{l=1}^{i} E[ \delta_i (\vec{x}_l - \vec{x}_0) ] = \frac{1}{i} \sum_{l=1}^{i} \int \delta_i (\vec{x}_l - \vec{x}_0) p(\vec{x}_l) d\vec{x}_l \rightarrow p(\vec{x}_0)</math>

We do not need an infinite number of samples to make <math>E(p_i(\vec{x}_0))</math> converge to <math>p(\vec{x}_0)</math> as <math>i\to\infty</math>; we just need <math>h_i \to 0</math> (i.e., <math>V_i \to 0</math>), so that <math>\delta_i</math> approaches a Dirac delta.
**To make sure that** <math>Var(p_i(\vec{x}_0)) \rightarrow 0</math>, what should we do?

<math>\displaystyle Var(p_i(\vec{x}_0)) = Var\left(\sum_{l=1}^{i} \frac{1}{i} \delta_i(\vec{x}_l - \vec{x}_0)\right) = \sum_{l=1}^{i} Var\left(\frac{1}{i} \delta_i(\vec{x}_l - \vec{x}_0)\right)</math>

(by independence of the samples)

<math>\displaystyle = \sum_{l=1}^{i} E \left[ \left( \frac{\delta_i(\vec{x}_l - \vec{x}_0)}{i} - E\left[ \frac{\delta_i(\vec{x}_l - \vec{x}_0)}{i} \right] \right)^2 \right] = \sum_{l=1}^{i} \left\{ E \left[ \left( \frac{\delta_i(\vec{x}_l - \vec{x}_0)}{i} \right)^2 \right] - \left( E\left[ \frac{\delta_i(\vec{x}_l - \vec{x}_0)}{i} \right] \right)^2 \right\}</math>

Since the second term is non-negative, we can write

<math>\displaystyle Var(p_i(\vec{x}_0)) \le \sum_{l=1}^{i} E \left[ \left( \frac{\delta_i(\vec{x}_l - \vec{x}_0)}{i} \right)^2 \right]</math>

<math>\displaystyle \Rightarrow Var(p_i(\vec{x}_0)) \le \sum_{l=1}^{i} \int \left( \frac{\delta_i(\vec{x}_l - \vec{x}_0)}{i} \right)^2 p(\vec{x}_l) d\vec{x}_l</math>

Writing <math>\delta_i(\vec{u})=\frac{1}{V_i}\varphi\left(\frac{\vec{u}}{h_i}\right)</math>,

<math>\displaystyle \Rightarrow Var(p_i(\vec{x}_0)) \le \sum_{l=1}^{i} \frac{1}{i^2} \int \frac{\varphi\left( \frac{\vec{x}_l - \vec{x}_0}{h_i}\right)}{V_i} \cdot \frac{\varphi\left( \frac{\vec{x}_l - \vec{x}_0}{h_i}\right)}{V_i} p(\vec{x}_l) d\vec{x}_l</math>

Bounding one of the two factors by <math>\frac{\sup\varphi}{V_i}</math>,

<math>\displaystyle \Rightarrow Var(p_i(\vec{x}_0)) \le \frac{\sup\varphi}{i V_i} \cdot \frac{1}{i} \sum_{l=1}^{i} \int \delta_i (\vec{x}_l - \vec{x}_0) p(\vec{x}_l) d\vec{x}_l</math>

<math>\displaystyle \therefore Var(p_i(\vec{x}_0)) \le \frac{\sup\varphi}{i V_i} E [p_i(\vec{x}_0)]</math>
For fixed <math>i</math>, as <math>V_i</math> increases, this bound on <math>Var(p_i(\vec{x_0}))</math> decreases.

But if <math>i V_i \rightarrow \infty</math> as <math>i \rightarrow \infty</math>

(for example, if <math>V_i= \frac{1}{\sqrt i}</math>, <math>V_i=\frac{13}{\sqrt i}</math>, or <math>V_i=\frac{17}{\sqrt i}</math>),

then <math>Var(p_i(\vec{x_0})) \rightarrow 0</math> as <math>i \rightarrow \infty</math>.
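To illustrate this conclusion numerically (a toy experiment of ours, not from the lecture), the empirical variance of <math>p_i(\vec{x_0})</math> across repeated trials drops as <math>i</math> grows when <math>V_i = 1/\sqrt{i}</math>:

```python
import numpy as np

rng = np.random.default_rng(1)

def parzen_at(x0, samples, h):
    # 1-D hypercube window; V_i = h
    return np.sum(np.abs(samples - x0) < h / 2) / (len(samples) * h)

def empirical_var(i, trials=200):
    h_i = 1 / np.sqrt(i)   # so i*V_i = sqrt(i) -> infinity
    ests = [parzen_at(0.5, rng.uniform(0, 1, i), h_i) for _ in range(trials)]
    return np.var(ests)

v_small, v_large = empirical_var(100), empirical_var(10_000)
print(v_small, v_large)  # the variance is markedly smaller for larger i
```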
Here are some useful links on "Parzen-window Density Estimation":

http://www.cs.utah.edu/~suyash/Dissertation_html/node11.html

http://en.wikipedia.org/wiki/Parzen_window

http://www.personal.rdg.ac.uk/~sis01xh/teaching/CY2D2/Pattern2.pdf

http://www.eee.metu.edu.tr/~alatan/Courses/Demo/AppletParzen.html
== Lectures ==
[http://balthier.ecn.purdue.edu/index.php/Lecture_1_-_Introduction 1] [http://balthier.ecn.purdue.edu/index.php/Lecture_2_-_Decision_Hypersurfaces 2] [http://balthier.ecn.purdue.edu/index.php/Lecture_3_-_Bayes_classification 3]
[http://balthier.ecn.purdue.edu/index.php/Lecture_4_-_Bayes_Classification 4] [http://balthier.ecn.purdue.edu/index.php/Lecture_5_-_Discriminant_Functions 5] [http://balthier.ecn.purdue.edu/index.php/Lecture_6_-_Discriminant_Functions 6] [http://balthier.ecn.purdue.edu/index.php/Lecture_7_-_MLE_and_BPE 7] [http://balthier.ecn.purdue.edu/index.php/Lecture_8_-_MLE%2C_BPE_and_Linear_Discriminant_Functions 8] [http://balthier.ecn.purdue.edu/index.php/Lecture_9_-_Linear_Discriminant_Functions 9]
[http://balthier.ecn.purdue.edu/index.php/Lecture_10_-_Batch_Perceptron_and_Fisher_Linear_Discriminant 10] [http://balthier.ecn.purdue.edu/index.php/Lecture_11_-_Fischer%27s_Linear_Discriminant_again 11] [http://balthier.ecn.purdue.edu/index.php/Lecture_12_-_Support_Vector_Machine_and_Quadratic_Optimization_Problem 12] [http://balthier.ecn.purdue.edu/index.php/Lecture_13_-_Kernel_function_for_SVMs_and_ANNs_introduction 13] [http://balthier.ecn.purdue.edu/index.php/Lecture_14_-_ANNs%2C_Non-parametric_Density_Estimation_%28Parzen_Window%29 14] [http://balthier.ecn.purdue.edu/index.php/Lecture_15_-_Parzen_Window_Method 15] [http://balthier.ecn.purdue.edu/index.php/Lecture_16_-_Parzen_Window_Method_and_K-nearest_Neighbor_Density_Estimate 16] [http://balthier.ecn.purdue.edu/index.php/Lecture_17_-_Nearest_Neighbors_Clarification_Rule_and_Metrics 17] [http://balthier.ecn.purdue.edu/index.php/Lecture_18_-_Nearest_Neighbors_Clarification_Rule_and_Metrics%28Continued%29 18]
Revision as of 15:07, 20 March 2008