In Lecture 11, we continued our discussion of Parametric Density Estimation techniques. We discussed the Maximum Likelihood Estimation (MLE) method and looked at a couple of one-dimensional examples for the case when a feature in the dataset follows a Gaussian distribution. First, we looked at the case where the mean parameter is unknown but the variance parameter is known. Then we followed with another example where both the mean and the variance are unknown. Finally, we looked at the slight "bias" problem that arises when calculating the variance.
Below are the notes from the lecture.
Maximum Likelihood Estimation (MLE)
General Principles: Given vague knowledge about a situation and some training data (i.e., feature vector values for which the class is known) $ \vec{x}_l, \qquad l=1,\ldots,N \text{ (hopefully a large number)} $,
we want to estimate $ p(\vec{x}|\omega_i), \qquad i=1,\ldots,k $.
Assume a parametric form for $ p(\vec{x}|\omega_i), \qquad i=1,\ldots,k $, and use the training data to estimate the parameters of $ p(\vec{x}|\omega_i) $; e.g., if you assume $ p(\vec{x}|\omega_i)=\mathcal{N}(\mu,\Sigma) $, then the parameters to estimate are the mean $ \mu $ and the covariance matrix $ \Sigma $.
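For the one-dimensional Gaussian examples discussed in lecture, the standard closed-form MLE results (stated here for reference; they follow by setting the derivative of the log-likelihood to zero) are: given i.i.d. samples $ x_1,\ldots,x_N $ from $ \mathcal{N}(\mu,\sigma^2) $,

$ \hat{\mu} = \frac{1}{N}\sum_{l=1}^{N} x_l, \qquad \hat{\sigma}^2 = \frac{1}{N}\sum_{l=1}^{N}(x_l-\hat{\mu})^2 $

The variance estimate is slightly biased, since $ E[\hat{\sigma}^2] = \frac{N-1}{N}\sigma^2 $, which is the "bias" problem mentioned above.

As a minimal numerical sketch of this bias (Python with NumPy; the code, sample size, and true parameter values are illustrative assumptions, not part of the lecture), the following averages the MLE estimates over many repeated small samples and compares the result against the $ \frac{N-1}{N} $ factor:

 import numpy as np
 
 # Sketch: MLE for a 1-D Gaussian with both mean and variance unknown.
 # True parameters below are arbitrary choices for illustration.
 mu_true, sigma_true = 2.0, 1.5
 rng = np.random.default_rng(0)
 
 N = 10           # small sample size so the bias is visible
 trials = 100000  # number of repeated experiments
 
 mu_hats = np.empty(trials)
 var_hats = np.empty(trials)
 for t in range(trials):
     x = rng.normal(mu_true, sigma_true, size=N)
     mu_hats[t] = x.mean()                     # MLE of the mean
     var_hats[t] = ((x - x.mean())**2).mean()  # MLE of the variance (divides by N)
 
 print("average mu_hat: ", mu_hats.mean())    # close to mu_true (unbiased)
 print("average var_hat:", var_hats.mean())   # close to (N-1)/N * sigma^2 (biased)
 print("(N-1)/N*sigma^2:", (N - 1) / N * sigma_true**2)

Multiplying the MLE variance by $ \frac{N}{N-1} $ gives the familiar unbiased sample variance, which is the correction discussed at the end of the lecture.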