
Covariance

  • $ COV(X,Y)=E[(X-E[X])(Y-E[Y])]\! $
  • $ COV(X,Y)=E[XY]-E[X]E[Y]\! $

Correlation Coefficient

$ \rho(X,Y)= \frac {cov(X,Y)}{\sqrt{var(X)} \sqrt{var(Y)}} \, $
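As a numerical sanity check, here is a minimal NumPy sketch (the sample data below is made up: Y = 2X + noise) showing that both covariance identities agree and computing ρ from them:

  import numpy as np

  rng = np.random.default_rng(0)
  x = rng.normal(size=100_000)
  y = 2 * x + rng.normal(size=100_000)   # hypothetical correlated data

  # COV(X,Y) = E[(X - E[X])(Y - E[Y])]
  cov1 = np.mean((x - x.mean()) * (y - y.mean()))
  # COV(X,Y) = E[XY] - E[X]E[Y]
  cov2 = np.mean(x * y) - x.mean() * y.mean()
  rho = cov1 / (np.sqrt(x.var()) * np.sqrt(y.var()))
  print(cov1, cov2, rho)  # cov1 ≈ cov2 ≈ 2, rho ≈ 2/sqrt(5) ≈ 0.894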

Markov Inequality

Loosely speaking: if a nonnegative RV has a small mean, then the probability that it takes a large value must also be small.

  • $ P(X \geq a) \leq E[X]/a\! $

for all a > 0
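A minimal sketch checking the bound on an exponential RV with mean 1 (the choice of distribution is just an example):

  import numpy as np

  rng = np.random.default_rng(0)
  x = rng.exponential(scale=1.0, size=1_000_000)  # nonnegative, E[X] = 1
  a = 5.0
  print((x >= a).mean())  # true tail ≈ exp(-5) ≈ 0.0067
  print(x.mean() / a)     # Markov bound: 0.2, much looser but valid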

Chebyshev Inequality

"Any RV is likely to be close to its mean"

$ \Pr(\left|X-E[X]\right|\geq C)\leq\frac{var(X)}{C^2}. $
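The same kind of check for Chebyshev, on a standard normal (any distribution with finite variance works):

  import numpy as np

  rng = np.random.default_rng(0)
  x = rng.normal(size=1_000_000)  # E[X] = 0, var(X) = 1
  c = 2.0
  print((np.abs(x - x.mean()) >= c).mean())  # true value ≈ 0.0455
  print(x.var() / c**2)                      # Chebyshev bound: 0.25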

Weak Law of Large Numbers

The weak law of large numbers states that the sample average of i.i.d. random variables converges in probability to the expected value:

$ \overline{X}_n \, \xrightarrow{P} \, \mu \qquad\textrm{for}\qquad n \to \infty. $
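A minimal sketch: sample means of exponential draws with μ = 3 settle toward μ as n grows (exact numbers depend on the seed):

  import numpy as np

  rng = np.random.default_rng(0)
  for n in (10, 1_000, 100_000):
      print(n, rng.exponential(scale=3.0, size=n).mean())  # -> 3 as n grows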

ML Estimation Rule

$ \hat a_{ML} = \text{argmax}_a ( f_{X}(x_i;a)) $ continuous

$ \hat a_{ML} = \text{argmax}_a ( \Pr(x_i;a)) $ discrete
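A minimal sketch, assuming i.i.d. samples from the exponential density f(x; a) = a e^{-ax} (a hypothetical model choice): maximizing the log-likelihood over a grid recovers the known closed form 1/x̄:

  import numpy as np

  rng = np.random.default_rng(0)
  x = rng.exponential(scale=2.0, size=10_000)  # true a = 0.5

  # log f(x; a) summed over i.i.d. samples: n*log(a) - a*sum(x)
  a_grid = np.linspace(0.01, 2.0, 2_000)
  loglik = len(x) * np.log(a_grid) - a_grid * x.sum()
  print(a_grid[np.argmax(loglik)], 1 / x.mean())  # both ≈ 0.5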

MAP Estimation Rule

$ \hat \theta_{MAP} = \text{argmax}_\theta ( f_{\theta|X}(\theta|x)) $

By Bayes' rule, the posterior $ f_{\theta|X}(\theta|x) $ is proportional to $ f_{X|\theta}(x|\theta)f_{\theta}(\theta) $ (the denominator $ f_{X}(x) $ does not depend on $ \theta $), so the rule can be rewritten as:

$ \hat \theta_{MAP} = \text{argmax}_\theta ( f_{X|\theta}(x|\theta)f_{\theta}(\theta)) $
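A minimal sketch, assuming Bernoulli(θ) observations and a Beta(2, 2) prior for $ f_{\theta} $ (both choices are made up for illustration): the grid argmax of likelihood × prior matches the known Beta posterior mode (k + α − 1)/(n + α + β − 2):

  import numpy as np

  rng = np.random.default_rng(0)
  x = rng.random(100) < 0.7          # Bernoulli(theta = 0.7) samples
  k, n = x.sum(), len(x)
  alpha, beta = 2.0, 2.0             # assumed Beta prior on theta

  grid = np.linspace(1e-3, 1 - 1e-3, 10_000)
  # log of f(x|theta) * f(theta), up to an additive constant
  logpost = (k + alpha - 1) * np.log(grid) + (n - k + beta - 1) * np.log(1 - grid)
  print(grid[np.argmax(logpost)], (k + alpha - 1) / (n + alpha + beta - 2))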

Bias of an Estimator, and Unbiased estimators

An estimator $ \hat a $ of a parameter $ a $ is unbiased if $ E[\hat a] = a $ for all values of a.
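A minimal empirical check: the sample mean is unbiased for μ, while the ML variance estimator (which divides by n) is biased low by a factor (n − 1)/n:

  import numpy as np

  rng = np.random.default_rng(0)
  mu, var, n = 5.0, 4.0, 10
  means, vars_ml = [], []
  for _ in range(20_000):
      x = rng.normal(mu, np.sqrt(var), size=n)
      means.append(x.mean())
      vars_ml.append(x.var())  # ML estimator: divides by n, not n - 1
  print(np.mean(means))    # ≈ 5.0, unbiased
  print(np.mean(vars_ml))  # ≈ 3.6 = (n-1)/n * var, biased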

Confidence Intervals, and how to get them via Chebyshev

  • $ \theta $ is unknown and fixed
  • $ \hat \theta $ is random and should be close to $ \theta $ most of the time
  • If $ \Pr[|\hat \theta - \theta| \leq E] \geq 1-a $, then we say we have $ (1-a) $ confidence in the interval $ [\hat \theta - E, \hat \theta + E] $

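A minimal sketch, assuming $ \theta $ is a mean $ \mu $ estimated by the sample mean $ \overline{X}_n $ of n i.i.d. draws with known variance $ \sigma^2 $. Chebyshev gives $ \Pr[|\overline{X}_n - \mu| \geq E] \leq \sigma^2/(nE^2) $, so choosing $ E = \sigma/\sqrt{na} $ makes the right side equal a and yields a $ (1-a) $ confidence interval $ [\overline{X}_n - E, \overline{X}_n + E] $:

  import numpy as np

  rng = np.random.default_rng(0)
  mu, sigma, n, a = 5.0, 2.0, 100, 0.05
  E = sigma / np.sqrt(n * a)  # Chebyshev half-width for 95% confidence

  hits = 0
  for _ in range(10_000):
      xbar = rng.normal(mu, sigma, size=n).mean()
      hits += (xbar - E <= mu <= xbar + E)
  print(hits / 10_000)  # >= 0.95 guaranteed; ≈ 1 since Chebyshev is loose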
