Revision as of 09:50, 30 April 2014


Bayes rule in practice: definition and parameter estimation

A slecture by ECE student Chuohao Tang

Partly based on the ECE662 Spring 2014 lecture material of Prof. Mireille Boutin.



Contents:


1 Bayes rule for Gaussian data


2 Procedure


3 Parameter estimation

Given a data set $ \mathbf{X}=(\mathbf{x}_1,...,\mathbf{x}_N)^T $ whose observations $ \{\mathbf{x}_n\} $ are assumed to be drawn independently from a $ D $-dimensional multivariate Gaussian distribution, we can estimate the parameters of the distribution by maximum likelihood. The log likelihood function is given by

$ \ln p(\mathbf{X}|\mathbf{\mu}, \mathbf{\Sigma}) = -\frac{ND}{2}\ln(2\pi)-\frac{N}{2}\ln|\mathbf{\Sigma}|-\frac{1}{2}\sum\limits_{n=1}^{N}(\mathbf{x}_n - \mathbf{\mu})^T\mathbf{\Sigma}^{-1}(\mathbf{x}_n - \mathbf{\mu}). $
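The formula above can be evaluated directly. The following is a minimal NumPy sketch (the data set and parameter values are made up for illustration); `slogdet` is used for $ \ln|\mathbf{\Sigma}| $ to avoid overflow in the determinant:

```python
import numpy as np

def gaussian_log_likelihood(X, mu, Sigma):
    """Log likelihood of N i.i.d. samples from a D-dimensional Gaussian."""
    N, D = X.shape
    diff = X - mu                                    # (N, D) residuals x_n - mu
    # sum over n of (x_n - mu)^T Sigma^{-1} (x_n - mu)
    quad = np.einsum('nd,de,ne->', diff, np.linalg.inv(Sigma), diff)
    _, logdet = np.linalg.slogdet(Sigma)             # numerically stable log|Sigma|
    return -0.5 * N * D * np.log(2 * np.pi) - 0.5 * N * logdet - 0.5 * quad

# hypothetical example data drawn from a known Gaussian
rng = np.random.default_rng(0)
mu_true = np.array([1.0, -2.0])
Sigma_true = np.array([[2.0, 0.5], [0.5, 1.0]])
X = rng.multivariate_normal(mu_true, Sigma_true, size=500)
ll = gaussian_log_likelihood(X, mu_true, Sigma_true)
```

As a sanity check, plugging in the sample mean and (biased) sample covariance can only increase the log likelihood, since those are the maximizing values derived below.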

By simple rearrangement, we see that the likelihood function depends on the data set only through the two quantities

$ \sum\limits_{n=1}^{N}\mathbf{x}_n \quad \text{and} \quad \sum\limits_{n=1}^{N}\mathbf{x}_n\mathbf{x}_n^T. $

These are the sufficient statistics for the Gaussian distribution. The derivative of the log likelihood with respect to $ \mathbf{\mu} $ is

$ \frac{\partial}{\partial\mathbf{\mu}} \ln p(\mathbf{X}|\mathbf{\mu}, \mathbf{\Sigma}) = \sum\limits_{n=1}^{N}\mathbf{\Sigma}^{-1}(\mathbf{x}_n - \mathbf{\mu}) $

and setting this derivative to zero, we obtain the solution for the maximum likelihood estimate of the mean

$ {\mathbf{\mu}}_{ML}=\frac{1}{N} \sum\limits_{n=1}^{N} {\mathbf{x}}_n. $
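The claim that the sample mean zeroes the gradient can be checked numerically. A small sketch with made-up data (any fixed positive definite $ \mathbf{\Sigma} $ works, since it only scales the gradient):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2)) + np.array([3.0, -1.0])  # hypothetical data
Sigma = np.array([[1.5, 0.2], [0.2, 0.8]])                 # any fixed SPD covariance

mu_ml = X.mean(axis=0)  # the candidate maximizer: the sample mean

# gradient of the log likelihood w.r.t. mu, evaluated at mu_ml:
# Sigma^{-1} * sum_n (x_n - mu_ml)
grad = np.linalg.inv(Sigma) @ (X - mu_ml).sum(axis=0)
```

The residuals $ \mathbf{x}_n - \mathbf{\mu}_{ML} $ sum to zero by construction, so `grad` vanishes up to floating-point error.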

Similarly, setting the derivative of the log likelihood with respect to $ \mathbf{\Sigma} $ to zero, we obtain the maximum likelihood estimate of the covariance

$ {\mathbf{\Sigma}}_{ML}=\frac{1}{N} \sum\limits_{n=1}^{N}({\mathbf{x}}_n - {\mathbf{\mu}}_{ML})({\mathbf{x}}_n - {\mathbf{\mu}}_{ML})^T. $
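Both estimates can be computed from the two sufficient statistics alone, without a second pass over the data, since $ \mathbf{\Sigma}_{ML} = \frac{1}{N}\sum_n \mathbf{x}_n\mathbf{x}_n^T - \mathbf{\mu}_{ML}\mathbf{\mu}_{ML}^T $. A sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(2)
D, N = 3, 1000
mu_true = np.array([0.5, -1.0, 2.0])
A = rng.standard_normal((D, D))
Sigma_true = A @ A.T + D * np.eye(D)  # symmetric positive definite by construction
X = rng.multivariate_normal(mu_true, Sigma_true, size=N)

# the two sufficient statistics
s1 = X.sum(axis=0)   # sum_n x_n,        shape (D,)
s2 = X.T @ X         # sum_n x_n x_n^T,  shape (D, D)

# ML estimates recovered from the sufficient statistics alone
mu_ml = s1 / N
Sigma_ml = s2 / N - np.outer(mu_ml, mu_ml)  # = (1/N) sum_n (x_n - mu_ml)(x_n - mu_ml)^T
```

These match the sample mean and the biased (divide-by-$N$) sample covariance computed directly from the data.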



4 Example


5 Conclusion



Questions and comments

If you have any questions, comments, etc. please post them on this page.


