='''1.6 Continuous Random Variables'''=
 
From the [[ECE_600_Prerequisites|ECE600 Pre-requisites notes]] of  [[user:han84|Sangchun Han]], [[ECE]] PhD student.
----
'''1.6.1 Gaussian distribution (normal distribution)''' <math class="inline">\mathcal{N}\left(\mu,\sigma^{2}\right)</math>
  
<math class="inline">f_{\mathbf{X}}(x)=\frac{1}{\sqrt{2\pi}\sigma}\cdot e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}</math>  
  
<math class="inline">\Phi_{\mathbf{X}}\left(\omega\right)=e^{i\mu\omega}e^{-\frac{1}{2}\sigma^{2}\omega^{2}}</math>  
  
<math class="inline">\phi_{\mathbf{X}}\left(s\right)=e^{\mu s}e^{\frac{1}{2}\sigma^{2}s^{2}}</math>  
  
<math class="inline">E\left[\mathbf{X}\right]=\mu</math>  
  
<math class="inline">Var\left[\mathbf{X}\right]=\sigma^{2}</math>  
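
As a quick numerical illustration (not part of the original notes), one can draw samples from <math class="inline">\mathcal{N}\left(\mu,\sigma^{2}\right)</math> and check the mean, variance, and density formulas above. This is a minimal sketch assuming NumPy and SciPy are available, with arbitrary parameter values.
<pre>
import numpy as np
from scipy.stats import norm

mu, sigma = 2.0, 1.5          # illustrative (hypothetical) parameter values

rng = np.random.default_rng(0)
samples = rng.normal(loc=mu, scale=sigma, size=100000)

# Sample mean and variance should be close to mu and sigma^2.
print(samples.mean(), mu)          # approximately 2.0
print(samples.var(), sigma**2)     # approximately 2.25

# The density f_X(x) above agrees with SciPy's normal pdf (scale = sigma).
x = 1.0
print(norm.pdf(x, loc=mu, scale=sigma))
print(np.exp(-(x - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma))
</pre>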
  
'''1.6.2 Log-normal distribution <math class="inline">\ln\mathcal{N}\left(\mu,\sigma^{2}\right)</math>'''
  
A log-normal random variable is one whose logarithm is normally distributed: if <math class="inline">\mathbf{X}</math> is a random variable with a log-normal distribution, then <math class="inline">\mathbf{Y}=\ln\mathbf{X}</math> is a random variable with a Gaussian distribution. The distribution is characterized by two parameters, <math class="inline">\mu</math> and <math class="inline">\sigma</math>, which are the mean and standard deviation of <math class="inline">\mathbf{Y}</math> rather than of <math class="inline">\mathbf{X}</math>.
  
<math class="inline">f_{\mathbf{X}}\left(x\right)=\frac{1}{x\sqrt{2\pi}\sigma}\cdot e^{-\frac{(\ln x-\mu)^{2}}{2\sigma^{2}}}</math>.  
  
<math class="inline">E\left[\mathbf{X}\right]=e^{\mu+\sigma^{2}/2}</math>.  
  
<math class="inline">Var\left[\mathbf{X}\right]=\left(e^{\sigma^{2}}-1\right)e^{2\mu+\sigma^{2}}</math>.  
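
A quick numerical check of these two formulas (a sketch with arbitrary parameter values, assuming NumPy; recall that <math class="inline">\mu</math> and <math class="inline">\sigma</math> refer to <math class="inline">\ln\mathbf{X}</math>):
<pre>
import numpy as np

mu, sigma = 0.5, 0.8          # mean and std. dev. of Y = ln X (arbitrary)

rng = np.random.default_rng(0)
x = rng.lognormal(mean=mu, sigma=sigma, size=500000)

# Empirical moments versus the closed-form expressions above.
print(x.mean(), np.exp(mu + sigma**2 / 2))
print(x.var(), (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2))
</pre>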
 
  
 
MLE of log-normal distribution
  
<math class="inline">\hat{\mu}=\frac{\sum_{k}\ln x_{k}}{n}</math>.  
  
<math class="inline">\hat{\sigma}^{2}=\frac{\sum_{k}\left(\ln x_{k}-\hat{\mu}\right)^{2}}{n}</math>.  
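
These estimators are simply the sample mean and the (biased) sample variance of <math class="inline">\ln x_{k}</math>. A minimal sketch of computing them from simulated data (assuming NumPy, with arbitrary true parameters):
<pre>
import numpy as np

mu_true, sigma_true = 1.0, 0.5     # parameters of ln X (illustrative values)

rng = np.random.default_rng(1)
x = rng.lognormal(mean=mu_true, sigma=sigma_true, size=200000)

log_x = np.log(x)
mu_hat = log_x.mean()                        # (sum_k ln x_k) / n
sigma2_hat = ((log_x - mu_hat)**2).mean()    # sum_k (ln x_k - mu_hat)^2 / n

print(mu_hat, mu_true)
print(sigma2_hat, sigma_true**2)
</pre>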
  
'''1.6.3 Exponential distribution with mean <math class="inline">\mu</math>'''
  
 
It describes the time until the first arrival (success) in a Poisson process, and it has the memoryless property.
  
<math class="inline">F_{\mathbf{X}}\left(x\right)=\left(1-e^{-\frac{x}{\mu}}\right)\cdot\mathbf{1}_{\left[0,\infty\right)}\left(x\right)</math>.  
  
<math class="inline">f_{\mathbf{X}}\left(x\right)=\frac{1}{\mu}e^{-\frac{x}{\mu}}\cdot\mathbf{1}_{\left[0,\infty\right)}\left(x\right)</math>.  
  
<math class="inline">\Phi_{\mathbf{X}}\left(\omega\right)=\int_{0}^{\infty}\frac{1}{\mu}e^{-\frac{x}{\mu}}\cdot e^{i\omega x}dx=\frac{1}{\mu}\int_{0}^{\infty}e^{x\left(i\omega-\frac{1}{\mu}\right)}dx=\frac{1}{\mu}\cdot\frac{e^{x\left(i\omega-\frac{1}{\mu}\right)}}{i\omega-\frac{1}{\mu}}\Biggl|_{0}^{\infty}=\frac{\frac{1}{\mu}}{\frac{1}{\mu}-i\omega}=\frac{1}{1-i\mu\omega}</math>.  
  
<math class="inline">E\left[\mathbf{X}\right]=\mu</math>.  
  
<math class="inline">Var\left[\mathbf{X}\right]=\mu^{2}</math>.  
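
The memoryless property means <math class="inline">P\left(\mathbf{X}>s+t\,|\,\mathbf{X}>s\right)=P\left(\mathbf{X}>t\right)</math>. A quick empirical check of this property and of the mean and variance (a sketch with arbitrary values of <math class="inline">\mu</math>, <math class="inline">s</math>, and <math class="inline">t</math>, assuming NumPy):
<pre>
import numpy as np

mu = 2.0            # mean of the exponential distribution (arbitrary)
s, t = 1.0, 3.0     # arbitrary time offsets

rng = np.random.default_rng(0)
x = rng.exponential(scale=mu, size=1000000)   # scale parameter = mean = mu

# Memorylessness: P(X > s + t | X > s) equals P(X > t) = exp(-t/mu).
print(np.mean(x[x > s] > s + t))
print(np.mean(x > t))
print(np.exp(-t / mu))

# Mean and variance match mu and mu^2.
print(x.mean(), x.var())
</pre>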
  
 
Consider a Poisson process with rate <math class="inline">\lambda</math>.
  
The probability that the first arrival occurs in the interval <math class="inline">\left[t,t+dt\right]</math> is <math class="inline">f_{\mathbf{X}}\left(t\right)dt=\lambda e^{-\lambda t}dt</math>, so the waiting time until the first arrival has the exponential distribution. In fact, <math class="inline">\lambda=1/\mu</math>.
  
 
Moment generating function
  
<math class="inline">\phi_{\mathbf{X}}\left(s\right)=E\left[e^{s\mathbf{X}}\right]=\int_{0}^{\infty}e^{sx}\cdot\lambda e^{-\lambda x}dx=\lambda\int_{0}^{\infty}e^{x\left(s-\lambda\right)}dx=\frac{\lambda}{s-\lambda}e^{x\left(s-\lambda\right)}\biggl|_{0}^{\infty}=\frac{\lambda}{\lambda-s}=\frac{1}{1-s\cdot\mu}</math> for <math class="inline">s<\lambda</math>.  
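
One way to sanity-check this result is Monte Carlo: for <math class="inline">s<\lambda</math>, the sample average of <math class="inline">e^{s\mathbf{X}}</math> should approach <math class="inline">\frac{1}{1-s\mu}</math> (a sketch with arbitrary values, assuming NumPy):
<pre>
import numpy as np

mu = 2.0     # mean, so lambda = 1/mu = 0.5
s = 0.2      # must satisfy s < lambda for the MGF to exist

rng = np.random.default_rng(0)
x = rng.exponential(scale=mu, size=1000000)

# Monte Carlo estimate of E[exp(sX)] versus the closed form 1/(1 - s*mu).
print(np.exp(s * x).mean())
print(1 / (1 - s * mu))        # = 1/(1 - 0.4), about 1.667
</pre>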
  
 
'''1.6.4 Erlang distribution'''
  
If <math class="inline">\mathbf{X}_{1},\mathbf{X}_{2},\cdots</math> are i.i.d. exponential random variables with rate <math class="inline">\lambda</math>, then the Erlang random variable is defined as <math class="inline">\mathbf{Z}_{m}=\mathbf{X}_{1}+\mathbf{X}_{2}+\cdots+\mathbf{X}_{m}</math>. This is the time until the <math class="inline">m</math>-th arrival (success). <math class="inline">f_{\mathbf{Z}_{m}}\left(t\right)=\frac{\left(\lambda t\right)^{m-1}e^{-\lambda t}}{\left(m-1\right)!}\cdot\lambda,\quad t\geq0</math>.  
  
<math class="inline">E\left[\mathbf{Z}_{m}\right]=E\left[\mathbf{X}_{1}+\mathbf{X}_{2}+\cdots+\mathbf{X}_{m}\right]=\sum_{k=1}^{m}E\left[\mathbf{X}_{k}\right]=m\cdot\frac{1}{\lambda}=\frac{m}{\lambda}</math>.  
  
<math class="inline">Var\left[\mathbf{Z}_{m}\right]=Var\left[\mathbf{X}_{1}+\mathbf{X}_{2}+\cdots+\mathbf{X}_{m}\right]=\sum_{k=1}^{m}Var\left[\mathbf{X}_{k}\right]=m\cdot\frac{1}{\lambda^{2}}=\frac{m}{\lambda^{2}}</math>. There is one important relationship that is useful for solving several problems: <math class="inline">P\left(\mathbf{Z}_{m}\leq t\right)=P\left(\mathbf{N}\left(t\right)\geq m\right)</math>, where <math class="inline">\mathbf{N}\left(t\right)</math> is the number of arrivals of the Poisson process up to time <math class="inline">t</math>.
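
The identity says that the <math class="inline">m</math>-th arrival has occurred by time <math class="inline">t</math> exactly when at least <math class="inline">m</math> arrivals have been counted by time <math class="inline">t</math>. A short numerical check (a sketch assuming SciPy; the Erlang distribution is the Gamma distribution with integer shape <math class="inline">m</math> and scale <math class="inline">1/\lambda</math>, and the parameter values are arbitrary):
<pre>
from scipy.stats import gamma, poisson

lam, m, t = 1.5, 4, 2.0     # arbitrary rate, arrival index, and time

# P(Z_m <= t): Erlang(m, lambda) is Gamma with shape m and scale 1/lambda.
p_left = gamma.cdf(t, a=m, scale=1 / lam)

# P(N(t) >= m), where N(t) ~ Poisson(lambda * t).
p_right = 1 - poisson.cdf(m - 1, mu=lam * t)

print(p_left, p_right)      # the two probabilities agree
</pre>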
  
 
Moment generating function
  
<math class="inline">\phi_{\mathbf{Z}_{m}}\left(s\right)=E\left[e^{s\mathbf{Z}_{m}}\right]=E\left[e^{s\left(\mathbf{X}_{1}+\mathbf{X}_{2}+\cdots+\mathbf{X}_{m}\right)}\right]=\left(\phi_{\mathbf{X}}\left(s\right)\right)^{m}=\left(\frac{\lambda}{\lambda-s}\right)^{m}</math>.  
  
 
Gamma distribution
  
In the Erlang distribution, <math class="inline">m</math> is a natural number. If <math class="inline">m</math> is allowed to be any positive real number, the distribution becomes the Gamma distribution, with <math class="inline">\left(m-1\right)!</math> replaced by the Gamma function <math class="inline">\Gamma\left(m\right)=\int_{0}^{\infty}e^{-x}x^{m-1}dx</math>.
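
For natural numbers the two forms agree, since <math class="inline">\Gamma\left(m\right)=\left(m-1\right)!</math>; a short check (assuming SciPy):
<pre>
from math import factorial
from scipy.special import gamma as gamma_fn

# For natural numbers m, Gamma(m) = (m-1)!, so the Gamma pdf with integer
# shape m reduces to the Erlang pdf above.
for m in range(1, 7):
    print(m, gamma_fn(m), factorial(m - 1))

# Gamma(m) is also defined for non-integer shapes, e.g. m = 3.5.
print(gamma_fn(3.5))
</pre>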
----
[[ECE600|Back to ECE600]]
 
[[ECE 600 Prerequisites|Back to ECE 600 Prerequisites]]
