Bayes rule in practice: definition and parameter estimation

A slecture by ECE student Chuohao Tang

Partly based on the ECE662 Spring 2014 lecture material of Prof. Mireille Boutin.



Content:

1 Bayes rule for Gaussian data
2 Procedure
3 Parameter estimation
4 Example
5 Conclusion


1 Bayes rule for Gaussian data

Assume we have two classes $ c_1, c_2 $, and we want to classify the D-dimensional data $ \left\{{\mathbf{x}}\right\} $ into these two classes. To do so, we investigate the posterior probabilities of the two classes given the data, which are given by $ p(c_i|\mathbf{x}) $, where $ i=1,2 $ for a two-class classification. Using Bayes' theorem, these probabilities can be expressed in the form

$ p(c_i|{\mathbf{x}})=\frac{p({\mathbf{x}}|c_i)P(c_i)}{p({\mathbf{x}})} $

where $ P(c_i) $ is the prior probability for class $ c_i $.

We will classify the data as $ c_1 $ if

$ \begin{align} p(c_1|\mathbf{x}) &\geq p(c_2|\mathbf{x}) \\ \Leftrightarrow \frac{p(\mathbf{x}|c_1)P(c_1)}{p(\mathbf{x})} &\geq \frac{p(\mathbf{x}|c_2)P(c_2)}{p(\mathbf{x})} \\ \Leftrightarrow p(\mathbf{x}|c_1)P(c_1) &\geq p(\mathbf{x}|c_2)P(c_2) \\ \Leftrightarrow g(\mathbf{x}) = \ln\left[p(\mathbf{x}|c_1)P(c_1)\right] &- \ln\left[p(\mathbf{x}|c_2)P(c_2)\right] \geq 0 \end{align} $

and as $ c_2 $ otherwise. Here $ g(\mathbf{x}) $ is the discriminant function.
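
As a quick sanity check, here is a minimal Python sketch (the 1-D class parameters are hypothetical values of our own choosing, not from the slecture) showing that comparing the quantities $ p(\mathbf{x}|c_i)P(c_i) $ gives the same decision as the sign of $ g(\mathbf{x}) $, since the evidence $ p(\mathbf{x}) $ cancels:

import numpy as np
from scipy.stats import norm

# Hypothetical 1-D class-conditional densities and priors
mu1, sigma1, prior1 = 0.0, 1.0, 0.6   # class c_1
mu2, sigma2, prior2 = 2.0, 1.5, 0.4   # class c_2

x = 1.2
# Unnormalized posteriors p(x|c_i) P(c_i); the evidence p(x) cancels
score1 = norm.pdf(x, loc=mu1, scale=sigma1) * prior1
score2 = norm.pdf(x, loc=mu2, scale=sigma2) * prior2
# Discriminant g(x) = ln p(x|c_1)P(c_1) - ln p(x|c_2)P(c_2)
g = np.log(score1) - np.log(score2)

assert (score1 >= score2) == (g >= 0)
print("decide c_1" if g >= 0 else "decide c_2")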


For a D-dimensional vector $ \mathbf{x} $, the multivariate Gaussian distribution takes the form

$ p(\mathbf{x}|\mathbf{\mu}, \mathbf{\Sigma}) = \frac{1}{(2\pi)^{D/2}}\frac{1}{|\mathbf{\Sigma}|^{1/2}}\exp\left[-\frac{1}{2}(\mathbf{x} - \mathbf{\mu})^T\mathbf{\Sigma}^{-1}(\mathbf{x} - \mathbf{\mu})\right] $

where $ \mathbf{\mu} $ is a D-dimensional mean vector, $ \mathbf{\Sigma} $ is a D×D covariance matrix, and $ |\mathbf{\Sigma}| $ denotes the determinant of $ \mathbf{\Sigma} $.

Then the discriminant function becomes

$ \begin{align} g(\mathbf{x}) &= \ln\left[p(\mathbf{x}|c_1)P(c_1)\right] - \ln\left[p(\mathbf{x}|c_2)P(c_2)\right] \\ &= \left[-\frac{1}{2}(\mathbf{x} - \mathbf{\mu}_1)^T\mathbf{\Sigma}_1^{-1}(\mathbf{x} - \mathbf{\mu}_1) - \ln|\mathbf{\Sigma}_1|^{1/2} + \ln P(c_1)\right] - \left[-\frac{1}{2}(\mathbf{x} - \mathbf{\mu}_2)^T\mathbf{\Sigma}_2^{-1}(\mathbf{x} - \mathbf{\mu}_2) - \ln|\mathbf{\Sigma}_2|^{1/2} + \ln P(c_2)\right], \end{align} $

where the $ -\frac{D}{2}\ln(2\pi) $ terms cancel between the two classes.

So if $ g(\mathbf{x}) \geq 0 $, decide $ c_1 $; if $ g(\mathbf{x}) < 0 $, decide $ c_2 $.
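
A short NumPy sketch of this quadratic discriminant (the function and variable names are our own, not from the slecture):

import numpy as np

def log_score(x, mu, Sigma, prior):
    # ln p(x|c) + ln P(c), dropping the -(D/2) ln(2*pi) term
    # that cancels between the two classes
    d = x - mu
    return (-0.5 * d @ np.linalg.solve(Sigma, d)
            - 0.5 * np.log(np.linalg.det(Sigma))
            + np.log(prior))

def g(x, mu1, Sigma1, prior1, mu2, Sigma2, prior2):
    # g(x) = ln p(x|c_1)P(c_1) - ln p(x|c_2)P(c_2); decide c_1 when g(x) >= 0
    return (log_score(x, mu1, Sigma1, prior1)
            - log_score(x, mu2, Sigma2, prior2))

Using np.linalg.solve rather than explicitly inverting $ \mathbf{\Sigma} $ is a standard numerical choice.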

For a k-class classification problem, we assign the data to class $ c_i $, where

$ i = \arg\max\limits_{j=1,\dots,k} p(\mathbf{x}|c_j)P(c_j). $
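
A self-contained sketch of this k-class rule, using scipy.stats.multivariate_normal for the class-conditional densities (the parameter lists are hypothetical placeholders):

import numpy as np
from scipy.stats import multivariate_normal

def classify(x, means, covs, priors):
    # ln p(x|c_j) + ln P(c_j) for each class j; logs avoid numerical underflow
    scores = [multivariate_normal.logpdf(x, mean=m, cov=S) + np.log(p)
              for m, S, p in zip(means, covs, priors)]
    return int(np.argmax(scores))   # 0-based index of the decided class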

2 Procedure

  • Obtain training and testing data

          We divide the sample data into a training set and a testing set. Training data is used to estimate the model parameters, and testing data is used to evaluate the accuracy of the classifier.

  • Fit a Gaussian model to each class
    • Parameter estimation for the mean and covariance
    • Estimate class priors

          We will discuss the details of estimating these parameters in the following section.

  • Calculate and decide

          After obtaining the estimated parameters, we can calculate and decide which class a testing sample belongs to using Bayes rule (see the sketch after this list).
          $ \begin{align} \text{if } p(\mathbf{x}|c_1)P(c_1) &> p(\mathbf{x}|c_2)P(c_2), \text{ decide } c_1 \\ \text{if } p(\mathbf{x}|c_1)P(c_1) &< p(\mathbf{x}|c_2)P(c_2), \text{ decide } c_2 \end{align} $
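
Putting the procedure together, here is a hedged end-to-end sketch on synthetic 2-D data (the class parameters, names, and the 80/20 split are all our own assumptions, not part of the original slecture):

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
# Step 1: obtain data -- synthetic 2-D Gaussian samples for two classes
X1 = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=100)
X2 = rng.multivariate_normal([2.0, 2.0], 1.5 * np.eye(2), size=100)

def split(X, frac=0.8):
    n = int(frac * len(X))
    return X[:n], X[n:]

(X1_tr, X1_te), (X2_tr, X2_te) = split(X1), split(X2)

# Step 2: fit a Gaussian to each class (ML estimates, see Section 3)
N = len(X1_tr) + len(X2_tr)
params = [(Xc.mean(axis=0),
           np.cov(Xc, rowvar=False, bias=True),   # bias=True -> 1/N (ML)
           len(Xc) / N)                           # class prior N_c / N
          for Xc in (X1_tr, X2_tr)]

# Step 3: calculate and decide with Bayes rule
def decide(x):
    scores = [multivariate_normal.logpdf(x, mean=m, cov=S) + np.log(p)
              for m, S, p in params]
    return int(np.argmax(scores))

correct = [decide(x) == 0 for x in X1_te] + [decide(x) == 1 for x in X2_te]
print(f"test accuracy: {np.mean(correct):.2f}")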


3 Parameter estimation

Given a data set $ \mathbf{X}=(\mathbf{x}_1,...,\mathbf{x}_N)^T $ in which the observations $ \{\mathbf{x}_n\} $ are assumed to be drawn independently from a D-dimensional multivariate Gaussian distribution, we can estimate the parameters of the distribution by maximum likelihood. The log likelihood function is given by

$ \ln p(\mathbf{X}|\mathbf{\mu}, \mathbf{\Sigma}) = -\frac{ND}{2}\ln(2\pi)-\frac{N}{2}\ln|\mathbf{\Sigma}|-\frac{1}{2}\sum\limits_{n=1}^{N}(\mathbf{x}_n - \mathbf{\mu})^T\mathbf{\Sigma}^{-1}(\mathbf{x}_n - \mathbf{\mu}). $

By simple rearrangement, we see that the likelihood function depends on the data set only through the two quantities

$ \sum\limits_{n=1}^{N}\mathbf{x}_n \quad \text{and} \quad \sum\limits_{n=1}^{N}\mathbf{x}_n\mathbf{x}_n^T. $

These are the sufficient statistics for the Gaussian distribution. The derivative of the log likelihood with respect to $ \mathbf{\mu} $ is

$ \frac{\partial}{\partial\mathbf{\mu}} \ln p(\mathbf{X}|\mathbf{\mu}, \mathbf{\Sigma})= \sum\limits_{n=1}^{N}\mathbf{\Sigma}^{-1}(\mathbf{x}_n - \mathbf{\mu}) $

and setting this derivative to zero, we obtain the solution for the maximum likelihood estimate of the mean

$ {\mathbf{\mu}}_{ML}=\frac{1}{N} \sum\limits_{n=1}^{N} {\mathbf{x}}_n. $
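
Similarly, the derivative of the log likelihood with respect to $ \mathbf{\Sigma} $ (a standard matrix-calculus identity, stated here for completeness) is

$ \frac{\partial}{\partial\mathbf{\Sigma}} \ln p(\mathbf{X}|\mathbf{\mu}, \mathbf{\Sigma})= -\frac{N}{2}\mathbf{\Sigma}^{-1} + \frac{1}{2}\mathbf{\Sigma}^{-1}\left[\sum\limits_{n=1}^{N}(\mathbf{x}_n - \mathbf{\mu})(\mathbf{x}_n - \mathbf{\mu})^T\right]\mathbf{\Sigma}^{-1}. $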

Setting this derivative to zero and solving, we obtain the solution for the maximum likelihood estimate of the covariance

$ {\mathbf{\Sigma}}_{ML}=\frac{1}{N} \sum\limits_{n=1}^{N}({\mathbf{x}}_n - {\mathbf{\mu}}_{ML})({\mathbf{x}}_n - {\mathbf{\mu}}_{ML})^T. $
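
A short NumPy sketch of these ML estimates (names are our own), including a check that they depend on the data only through the two sufficient statistics above:

import numpy as np

def fit_gaussian_ml(X):
    """X: (N, D) array of i.i.d. samples; returns (mu_ML, Sigma_ML)."""
    mu = X.mean(axis=0)              # (1/N) sum_n x_n
    d = X - mu
    Sigma = (d.T @ d) / len(X)       # (1/N) sum_n (x_n - mu)(x_n - mu)^T
    return mu, Sigma

# Check: both estimates are functions of sum_n x_n and sum_n x_n x_n^T only
X = np.random.default_rng(1).normal(size=(50, 3))    # hypothetical data
N = len(X)
s1, s2 = X.sum(axis=0), X.T @ X                      # sufficient statistics
mu, Sigma = fit_gaussian_ml(X)
assert np.allclose(mu, s1 / N)
assert np.allclose(Sigma, s2 / N - np.outer(mu, mu))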

To estimate the class priors, we count how many training samples fall in each class, so with $ N_{c_i} $ the number of samples in class $ c_i $ out of $ N $ total,

$ \begin{align}P(c_1)&\approx N_{c_1}/N\\ P(c_2)&\approx N_{c_2}/N \end{align} $
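
A corresponding one-line sketch for the priors (labels is a hypothetical array of class indices for the training data):

import numpy as np

def estimate_priors(labels):
    # P(c) ~= N_c / N for each class label c present in the training set
    labels = np.asarray(labels)
    return {c: float(np.mean(labels == c)) for c in np.unique(labels)}

For example, estimate_priors([0, 0, 0, 1, 1]) gives {0: 0.6, 1: 0.4}.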

4 Example

  • 1-D case


  • 2-D case

5 Conclusion



Questions and comments

If you have any questions, comments, etc., please post them on this page.


