We are given $ n $ independent samples and are asked to find the maximum likelihood estimator of the parameter $ a $.

Let's start with the case where we are given just one sample, $ x_i $. Then,

$ f_{X}(x_i;a)=\begin{cases} 0, & \text{if } |a| < |x_i| \\ \frac{1}{a}, & \text{if } |a| \geq |x_i| \end{cases} $

$ \hat a_{ML} $ is obtained by maximizing $ f_{X}(x_i;a) $ over $ a $:

$ \hat a_{ML} = \arg\max_a f_{X}(x_i;a) = |x_i| $

(Since $ \frac{1}{a} $ decreases as $ a $ grows, the likelihood is maximized at the smallest positive $ a $ for which it is nonzero, namely $ a = |x_i| $.)
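As a quick sanity check with a made-up value (not from the original problem), suppose the single observed sample is $ x_i = 3 $. Then $ f_{X}(3;a) = 0 $ whenever $ |a| < 3 $ and $ f_{X}(3;a) = \frac{1}{a} $ whenever $ |a| \geq 3 $, so the largest likelihood is attained at $ a = 3 = |x_i| $.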


Now, likewise, for $ n $ samples $ x_1, \ldots, x_n $:

$ f_{X_1, X_2, ..., X_n}(x_1, x_2, ..., x_n;a)=f_{X_1}(x_1;a)f_{X_2}(x_2;a)...f_{X_n}(x_n;a) $

$ = \begin{cases} 0, & \text{if } \exists\, i \text{ such that } |a| < |x_i| \\ \left(\frac{1}{a}\right)^n, & \text{if } |a| \geq |x_i| \text{ for all } i \end{cases} $

Maximizing this function over $ a $, we obtain $ \hat a_{ML} = \max_i |x_i| $.
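As a numerical sketch (the sample values below are hypothetical, not part of the original problem), suppose $ n = 3 $ with observations $ x_1 = 1.2 $, $ x_2 = 2.5 $, $ x_3 = 0.7 $. The joint density is $ 0 $ for $ |a| < 2.5 $ and $ \left(\frac{1}{a}\right)^3 $ for $ |a| \geq 2.5 $, so the likelihood is maximized at $ \hat a_{ML} = \max_i |x_i| = 2.5 $.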

(Intuitively, for the first part: if our guess for $ a $ (call this Ga) is smaller in magnitude than some sample $ x_i $, then the function $ f_{X_1, X_2, ..., X_n}(x_1, x_2, ..., x_n;a) $ equals 0, since $ f_{X_i}(x_i;a) $ equals 0. So Ga needs to be at least as big as the largest $ |x_i| $. But what if we made Ga bigger than that? Then every individual $ f_{X_i}(x_i;a) $ would be smaller than if Ga were equal to $ \max_i |x_i| $. Since we want the Ga that produces the largest likelihood (and hence is best supported by the observed samples as a guess for the actual $ a $), we need Ga $ = \max_i |x_i| $.)
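Continuing the hypothetical samples above: choosing $ a = 2.5 $ gives likelihood $ \left(\frac{1}{2.5}\right)^3 = 0.064 $, while overshooting with $ a = 4 $ gives only $ \left(\frac{1}{4}\right)^3 \approx 0.016 $, which illustrates why the smallest admissible value of $ a $ wins.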
