
Tutorial on Maximum Likelihood Estimation: A Parametric Density Estimation Method





Motivation


Suppose one wishes to determine just how biased an unfair coin is. Call the probability of
tossing a HEAD p. The goal then is to determine p.

Also suppose the coin is tossed 80 times: i.e., the sample might be something like x1 = H,
x2 = T, …, x80 = T, and the count of the number of HEADS, "H", is observed.

The probability of tossing TAILS is 1 − p. Suppose the outcome is 49 HEADS and 31 TAILS,
and suppose the coin was taken from a box containing three coins: one which gives HEADS
with probability p = 1 / 3, one which gives HEADS with probability p = 1 / 2 and another which
gives HEADS with probability p = 2 / 3. The coins have lost their labels, so which one it was is
unknown. Clearly, the probability mass function for this experiment is a binomial distribution with
sample size equal to 80 and number of successes equal to 49, but with different values of p. We have
the following probability mass functions for each of the above-mentioned cases:

$ Pr(H = 49 | p = 1/3) = \binom{80}{49}(1/3)^{49}(1 - 1/3)^{31} \approx 0.000 $

$ Pr(H = 49 | p = 1/2) = \binom{80}{49}(1/2)^{49}(1 - 1/2)^{31} \approx 0.012 $

$ Pr(H = 49 | p = 2/3) = \binom{80}{49}(2/3)^{49}(1 - 2/3)^{31} \approx 0.054 $

Based on the above equations, we can conclude that the coin with p = 2 / 3 is the one most likely
to have produced the observed outcome.
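
The three probabilities above can be checked numerically. The following short Python sketch (not part of the original tutorial; it simply evaluates the binomial probability mass function for each candidate coin) reproduces the values:

    from math import comb

    # Pr(H = 49 | p) = C(80, 49) * p^49 * (1 - p)^31 for each candidate coin
    n, heads = 80, 49
    for p in (1/3, 1/2, 2/3):
        prob = comb(n, heads) * p**heads * (1 - p)**(n - heads)
        print(f"p = {p:.3f}: Pr(H = 49 | p) = {prob:.3f}")

    # prints approximately 0.000, 0.012 and 0.054, matching the equations above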



Definition


The generic situation is that we observe an n-dimensional random vector X with probability
density (or mass) function f(x / θ). It is assumed that θ is a fixed, unknown constant
belonging to the set $ \Theta \subset \mathbb{R}^{n} $.

For $ x \in \mathbb{R}^{n} $, the likelihood function of θ is defined as 

L(θ / x) = f(x / θ)

x is regarded as fixed, and θ is regarded as the variable for L. The log-likelihood function
is defined as l(θ / x) = log L(θ / x).

The Maximum Likelihood Estimate (or MLE) is the value $ \hat{\theta} = \hat{\theta}(x) \in \Theta $
maximizing L(θ / x), provided it exists:

$ \hat{\theta} = \underset{\theta \in \Theta}{\arg\max}\; L(\theta/x) $
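
As an illustration of this definition, the sketch below (an illustrative example using hypothetical coin-toss data, not code from the tutorial) evaluates the likelihood L(θ / x) of a Bernoulli parameter over a grid of candidate values and picks the maximizer:

    import numpy as np

    x = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])       # hypothetical tosses, 1 = HEAD

    thetas = np.linspace(0.01, 0.99, 99)
    # L(theta / x) = prod_j theta^{x_j} * (1 - theta)^{1 - x_j}, with the data x held fixed
    likelihood = np.array([np.prod(t**x * (1 - t)**(1 - x)) for t in thetas])

    theta_hat = thetas[np.argmax(likelihood)]
    print(theta_hat)                                    # 0.7, the sample proportion of HEADS

Here the data x are fixed and only θ varies, which is exactly the distinction between a probability and a likelihood discussed in the next section.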



What is a Likelihood Function?

If the probability of an event X dependent on model parameters p is written as
P(X | p)

then we talk about the likelihood

L(p | X)

that is the likelihood of the parameters given the data.

For most sensible models, we will find that certain data are more probable than other data. The aim of maximum likelihood estimation is to find the parameter value(s) that make the observed data most likely. This is because the likelihood of the parameters given the data is defined to be equal to the probability of the data given the parameters.

If we were in the business of making predictions based on a set of solid assumptions, then we would be interested in probabilities - the probability of certain outcomes occurring or not occurring.

However, in the case of data analysis, we have already observed all the data: once they have been observed they are fixed, and there is no 'probabilistic' part to them anymore (the word data comes from the Latin word meaning 'given'). We are much more interested in the likelihood of the model parameters that underlie the fixed data.

The following is the relation between the likelihood and the probability spaces:

Probability:

                  Knowing Parameters $ \rightarrow $  Prediction of Outcomes

Likelihood:

                  Observation of Data $ \rightarrow $  Estimation of Parameters




Method

Maximum likelihood (ML) estimates need not exist nor be unique. In this section, we show how to compute ML
estimates when they exist and are unique. For computational convenience, the ML estimate
is obtained by maximizing the log-likelihood function, log L(θ / x). This is because the
two functions log L(θ / x) and L(θ / x) are monotonically related to each other,
so the same ML estimate is obtained by maximizing either one. Assuming that the log-likelihood
function is differentiable, if θMLE exists, it must satisfy the following equation,
known as the likelihood equation:

$ \frac{d}{d\theta}\left( log L(\theta/x) \right) = 0 $

at θ = θMLE. This is because a maximum or minimum of a continuously differentiable
function implies that its first derivative vanishes at such a point.


The likelihood equation represents a necessary condition for the existence of an ML estimate.
An additional condition must also be satisfied to ensure that log L(θ / x) is a maximum and not a
minimum, since the first derivative cannot reveal this. To be a maximum, the shape of the log-likelihood
function should be concave in the neighborhood of θMLE. This can be checked by
calculating the second derivatives of the log-likelihood and showing that they are all negative
at θ = θMLE:


$ \frac{d^{2}}{d\theta^{2}}\left( log L(\theta/x) \right) < 0 $
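
For the coin experiment from the Motivation section, the likelihood equation can be solved symbolically. The following sketch (an illustration assuming the SymPy library is available; it is not part of the original text) drops the constant binomial coefficient, solves d/dp log L(p) = 0, and verifies the second-derivative condition:

    import sympy as sp

    p = sp.symbols('p', positive=True)
    n, h = 80, 49                                     # 80 tosses, 49 HEADS

    log_L = h * sp.log(p) + (n - h) * sp.log(1 - p)   # log-likelihood, constant term dropped

    p_hat = sp.solve(sp.diff(log_L, p), p)[0]         # likelihood equation: d/dp log L = 0
    print(p_hat)                                      # 49/80

    print(sp.diff(log_L, p, 2).subs(p, p_hat) < 0)    # True, so p_hat is a maximum

The resulting estimate, p̂ = 49/80 ≈ 0.61, is simply the sample proportion of HEADS, and among the three candidate coins of the Motivation section it is closest to p = 2/3.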




Properties

Some general properties (advantages and disadvantages) of the Maximum Likelihood Estimate are as follows:

  • For large data samples (large N) the likelihood function L approaches a Gaussian distribution.
  • Maximum Likelihood estimates are usually consistent.
  • For large N the estimates converge to the true value of the parameters being estimated (illustrated by the simulation sketch at the end of this section).
  • Maximum Likelihood Estimates are usually unbiased, and asymptotically unbiased in general: for large N the parameter of interest is recovered correctly on average.
  • Maximum Likelihood Estimates are asymptotically efficient: for large N the estimates attain the smallest possible variance.
  • The Maximum Likelihood Estimate is sufficient: it uses all the information in the observations.
  • The ML solution, when it exists, is often unique (though, as noted in the Method section, it need not be).

On the other hand, we must know the correct probability distribution for the problem at hand.
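
The consistency and convergence properties listed above can be illustrated with a small simulation. The sketch below (an illustrative example with an assumed true value p = 2/3; it is not part of the original text) draws Bernoulli samples of increasing size and shows the ML estimate, the sample proportion of HEADS, approaching the true value:

    import numpy as np

    rng = np.random.default_rng(0)
    true_p = 2 / 3                               # assumed true HEADS probability

    for N in (10, 100, 1_000, 10_000, 100_000):
        tosses = rng.random(N) < true_p          # each toss is HEAD with probability 2/3
        p_hat = tosses.mean()                    # ML estimate of p for Bernoulli data
        print(f"N = {N:>6}: p_hat = {p_hat:.4f}")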



Numerical Examples using Maximum Likelihood Estimates

In the following section, we discuss the applications of MLE procedure in estimating unknown
parameters of various common density distributions.


Estimating prior probability using MLE

Consider a two-class classification problem, with classes (ω1, ω2) and let
Prob(ω1) = p and Prob(ω2) = 1 − p (here p is the unknown parameter). By using
MLE, we can estimate p as follows:

Let the sample be $ \mathcal{D} = (x_{1}, x_{2}, \ldots, x_{N}) $. Let $ \omega_{i_j} $ be the class of the feature vector $ x_{j} $
(N is the sample size and N1 is the number of feature vectors belonging to class ω1). Also assume that the
samples $ x_{1}, x_{2}, \ldots, x_{N} $ are independent.

$ Prob(\mathcal{D}/p) = \prod_{j=1}^{N} Prob(\omega_{i_j} / p) = p^{N_1} (1-p)^{N - N_1} $, by independence of the samples.
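
To check this example numerically, the sketch below (using hypothetical counts N = 200 and N1 = 140; it is not part of the original text) maximizes Prob(D / p) over a grid of p values and compares the result with the closed-form maximizer N1 / N, which is the ML estimate of the prior probability:

    import numpy as np

    N, N1 = 200, 140                              # hypothetical sample size and class-1 count

    p_grid = np.linspace(0.001, 0.999, 999)
    log_prob = N1 * np.log(p_grid) + (N - N1) * np.log(1 - p_grid)   # log of Prob(D / p)

    print(p_grid[np.argmax(log_prob)])            # 0.7
    print(N1 / N)                                 # 0.7, i.e. p_hat = N1 / N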
