Covariance

  • $ COV(X,Y)=E[(X-E[X])(Y-E[Y])]\! $
  • $ COV(X,Y)=E[XY]-E[X]E[Y]\! $
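
The two forms agree because, by linearity of expectation,

$ E[(X-E[X])(Y-E[Y])] = E[XY] - E[X]E[Y] - E[X]E[Y] + E[X]E[Y] = E[XY]-E[X]E[Y]\! $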

Correlation Coefficient

$ \rho(X,Y)= \frac {cov(X,Y)}{\sqrt{var(X)} \sqrt{var(Y)}} \, $
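
By the Cauchy-Schwarz inequality, the correlation coefficient always satisfies $ -1 \leq \rho(X,Y) \leq 1 \, $; X and Y are called uncorrelated when $ \rho(X,Y)=0 \, $.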

Markov Inequality

Loosely speaking: if a nonnegative RV has a small mean, then the probability that it takes a large value must also be small.

  • $ P(X \geq a) \leq E[X]/a\! $

for all a > 0
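
For example (with illustrative numbers): if X is nonnegative with E[X] = 2, then $ P(X \geq 10) \leq 2/10 = 0.2 \! $.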

Chebyshev Inequality

"Any RV is likely to be close to its mean"

$ \Pr(\left|X-E[X]\right|\geq C)\leq\frac{var(X)}{C^2}. $
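
For example (with illustrative numbers): if var(X) = 4, then $ \Pr(\left|X-E[X]\right|\geq 10)\leq\frac{4}{100} = 0.04 \, $.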

ML Estimation Rule

$ \hat a_{ML} = \arg\max_a f_{X}(x_i;a) \, $ (for a continuous RV)

$ \hat a_{ML} = \arg\max_a \Pr(x_i;a) \, $ (for a discrete RV)
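
As an illustrative sketch of how the rule is applied (example chosen here, assuming a single observation from a Gaussian with unknown mean a and known variance $ \sigma^2 $): maximizing $ f_{X}(x_i;a) = \frac{1}{\sqrt{2\pi}\sigma}e^{-(x_i-a)^2/(2\sigma^2)} $ over a gives $ \hat a_{ML} = x_i \, $, since the exponent is largest when $ a = x_i $.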

MAP Estimation Rule
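
A minimal statement of the rule (assuming, as is standard, that the parameter is modeled as a random variable A with prior $ f_A(a) $):

$ \hat a_{MAP} = \arg\max_a ( f_{X|A}(x_i|a) f_A(a) ) \, $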

Bias of an Estimator, and Unbiased estimators
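
In the usual definitions: the bias of an estimator $ \hat a $ of a parameter a is $ E[\hat a] - a \, $, and $ \hat a $ is said to be unbiased when $ E[\hat a] = a \, $ for every value of a.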

Confidence Intervals, and how to get them via Chebyshev
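
A sketch of the Chebyshev approach (assuming n i.i.d. samples with mean $ \mu $ and variance $ \sigma^2 $): the sample mean $ \bar{X}_n $ has variance $ \sigma^2/n $, so

$ \Pr(\left|\bar{X}_n-\mu\right|\geq \epsilon)\leq\frac{\sigma^2}{n\epsilon^2} \, $

Setting the right-hand side equal to a target level such as 0.05 and solving for $ \epsilon $ gives an interval $ [\bar{X}_n - \epsilon,\ \bar{X}_n + \epsilon] $ that contains $ \mu $ with probability at least 95%.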
