
7.14 QE 2008 August

1

The Weak Law of Large Numbers states that if $ \mathbf{X}_{1},\mathbf{X}_{2},\mathbf{X}_{3},\cdots $ is a sequence of i.i.d. random variables with finite mean $ E\left[\mathbf{X}_{i}\right]=\mu $ for every $ i $ , then the sample mean $ \mathbf{Y}_{n}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{X}_{i} $ converges to $ \mu $ in probability. Suppose that instead of being i.i.d., $ \mathbf{X}_{1},\mathbf{X}_{2},\mathbf{X}_{3},\cdots $ each have finite mean $ \mu $ , and the covariance function of the sequence $ \mathbf{X}_{n} $ is $ Cov\left(\mathbf{X}_{i},\mathbf{X}_{j}\right)=\sigma^{2}\rho^{\left|i-j\right|} $ , where $ \left|\rho\right|<1 $ and $ \sigma^{2}>0 $ .

a. (13 points)

Find the mean and variance of $ \mathbf{Y}_{n} $ .

$ E\left[\mathbf{Y}_{n}\right]=E\left[\frac{1}{n}\sum_{i=1}^{n}\mathbf{X}_{i}\right]=\frac{1}{n}\sum_{i=1}^{n}E\left[\mathbf{X}_{i}\right]=\frac{n\mu}{n}=\mu. $

$ Cov\left(\mathbf{X}_{i},\mathbf{X}_{j}\right)=E\left[\left(\mathbf{X}_{i}-\mu\right)\left(\mathbf{X}_{j}-\mu\right)\right]=E\left[\mathbf{X}_{i}\mathbf{X}_{j}\right]-\mu^{2}. $

$ E\left[\mathbf{X}_{i}\mathbf{X}_{j}\right]=\begin{cases} \begin{array}{lll} \sigma^{2}+\mu^{2} & & ,\; i=j\\ \sigma^{2}\rho^{\left|i-j\right|}+\mu^{2} & & ,\; i\neq j. \end{array}\end{cases} $

$ E\left[\mathbf{Y}_{n}^{2}\right]=E\left[\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbf{X}_{i}\mathbf{X}_{j}\right]=\frac{1}{n^{2}}\left[\sum_{i=1}^{n}E\left[\mathbf{X}_{i}^{2}\right]+\underset{_{i\neq j}}{\sum_{i=1}^{n}\sum_{j=1}^{n}}E\left[\mathbf{X}_{i}\mathbf{X}_{j}\right]\right] $$ =\frac{1}{n^{2}}\left[n\left(\sigma^{2}+\mu^{2}\right)+2\sum_{i=1}^{n-1}\left(n-i\right)\left(\sigma^{2}\rho^{i}+\mu^{2}\right)\right]=\frac{1}{n^{2}}\left[n\sigma^{2}+2\sigma^{2}\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}\right]+\mu^{2}. $

$ Var\left[\mathbf{Y}_{n}\right]=E\left[\mathbf{Y}_{n}^{2}\right]-E\left[\mathbf{Y}_{n}\right]^{2}=\frac{1}{n^{2}}\left[n\sigma^{2}+2\sigma^{2}\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}\right]+\mu^{2}-\mu^{2}=\frac{1}{n^{2}}\left[n\sigma^{2}+2\sigma^{2}\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}\right]. $

Note

If the element $ m_{ij} $ of an $ n\times n $ matrix is defined as $ \left|i-j\right| $ , then the matrix is

$ \left[\begin{array}{ccccc} 0 & 1 & 2 & \cdots & n-1\\ 1 & 0 & 1 & \ddots & \vdots\\ 2 & 1 & 0 & \ddots & 2\\ \vdots & \ddots & \ddots & \ddots & 1\\ n-1 & \cdots & 2 & 1 & 0 \end{array}\right]. $
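This matrix makes the counting in the double sum explicit: off the diagonal, each value $ k\in\left\{ 1,\ldots,n-1\right\} $ appears exactly $ 2\left(n-k\right) $ times, which gives the identity used in computing $ E\left[\mathbf{Y}_{n}^{2}\right] $ above:

$ \underset{_{i\neq j}}{\sum_{i=1}^{n}\sum_{j=1}^{n}}\rho^{\left|i-j\right|}=\sum_{k=1}^{n-1}2\left(n-k\right)\rho^{k}=2\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}. $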

b. (12 points)

Does the sample mean still converge to $ \mu $ in probability? You must justify your answer.

ref. You can see the definition of convergence in probability.

$ \mathbf{Y}_{n}\rightarrow\mu $ in probability means that for any $ \epsilon>0 $ , $ P\left(\left\{ \left|\mathbf{Y}_{n}-\mu\right|>\epsilon\right\} \right)\rightarrow0 $ as $ n\rightarrow\infty $ . According to the Chebyshev inequality, $ P\left(\left\{ \left|\mathbf{Y}_{n}-\mu\right|>\epsilon\right\} \right)\leq\frac{Var\left[\mathbf{Y}_{n}\right]}{\epsilon^{2}}. $ Since $ \left|\rho\right|<1 $ , $ \lim_{n\rightarrow\infty}Var\left[\mathbf{Y}_{n}\right]=0 $ , and therefore $ \lim_{n\rightarrow\infty}P\left(\left\{ \left|\mathbf{Y}_{n}-\mu\right|>\epsilon\right\} \right)=0 $ . Thus the sample mean $ \mathbf{Y}_{n} $ still converges to $ \mu $ in probability.
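To make the limit $ \lim_{n\rightarrow\infty}Var\left[\mathbf{Y}_{n}\right]=0 $ explicit, a short bound using only $ \left|\rho\right|<1 $ :

$ Var\left[\mathbf{Y}_{n}\right]=\frac{1}{n^{2}}\left[n\sigma^{2}+2\sigma^{2}\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}\right]\leq\frac{1}{n^{2}}\left[n\sigma^{2}+2\sigma^{2}n\sum_{i=1}^{\infty}\left|\rho\right|^{i}\right]=\frac{\sigma^{2}}{n}\left(1+\frac{2\left|\rho\right|}{1-\left|\rho\right|}\right)\rightarrow0 $ as $ n\rightarrow\infty $ .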

2

Let $ \mathbf{X}_{1},\mathbf{X}_{2},\mathbf{X}_{3},\cdots $ be a sequence of i.i.d. Bernoulli random variables with $ p=1/2 $ , and let $ \mathbf{Y}_{n}=2^{n}\mathbf{X}_{1}\mathbf{X}_{2}\cdots\mathbf{X}_{n} $ .

a. (15 points)

Does the sequence $ \mathbf{Y}_{n} $ converge to $ 0 $ almost everywhere?

• You can see the definition of convergence almost everywhere.

• range of $ \mathbf{X}_{i}=\left\{ 0,1\right\} $ .

• range of $ \mathbf{Y}_{n}=\left\{ 0,2^{n}\right\} $ .

• the probability that $ \mathbf{Y}_{n}=2^{n} $ is $ P\left(\mathbf{Y}_{n}=2^{n}\right)=P\left(\left\{ \mathbf{X}_{1}=1\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}=1\right\} \right)=\left(\frac{1}{2}\right)^{n}. $

$ \lim_{n\rightarrow\infty}P\left(\mathbf{Y}_{n}=2^{n}\right)=0 $ . $ \lim_{n\rightarrow\infty}P\left(\mathbf{Y}_{n}=0\right)=1 $ .

• Moreover, once any $ \mathbf{X}_{i}=0 $ , the product $ \mathbf{Y}_{n} $ remains $ 0 $ for all later $ n $ ; the only sample points where $ \mathbf{Y}_{n} $ does not eventually equal $ 0 $ are those with $ \mathbf{X}_{i}=1 $ for every $ i $ , an event of probability $ \lim_{n\rightarrow\infty}\left(\frac{1}{2}\right)^{n}=0 $ . Thus, $ \left\{ \mathbf{Y}_{n}\right\} \rightarrow0 $ with probability 1.
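An alternative justification, as a sketch using the first Borel-Cantelli lemma: since

$ \sum_{n=1}^{\infty}P\left(\mathbf{Y}_{n}\neq0\right)=\sum_{n=1}^{\infty}\left(\frac{1}{2}\right)^{n}=1<\infty, $

the event $ \left\{ \mathbf{Y}_{n}\neq0\right\} $ occurs for only finitely many $ n $ with probability 1, so $ \mathbf{Y}_{n}\rightarrow0 $ almost everywhere.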

b. (15 points)

Does the sequence $ \mathbf{Y}_{n} $ converge to 0 in the mean square sense?

Note

You can see the definition of convergence in mean-square.

$ E\left[\mathbf{X}^{2}\right]=\sum_{k=0}^{1}k^{2}\left(\frac{1}{2}\right)=0^{2}\cdot\frac{1}{2}+1^{2}\cdot\frac{1}{2}=\frac{1}{2}. $

$ E\left[\left|\mathbf{Y}_{n}-0\right|^{2}\right]=E\left[\mathbf{Y}_{n}^{2}\right]=E\left[2^{2n}\mathbf{X}_{1}^{2}\mathbf{X}_{2}^{2}\cdots\mathbf{X}_{n}^{2}\right]=2^{2n}E\left[\mathbf{X}_{1}^{2}\mathbf{X}_{2}^{2}\cdots\mathbf{X}_{n}^{2}\right]=\left(4E\left[\mathbf{X}^{2}\right]\right)^{n}=2^{n}. $

$ \lim_{n\rightarrow\infty}E\left[\mathbf{Y}_{n}^{2}\right]=\lim_{n\rightarrow\infty}2^{n}=\infty. $

Thus, $ \mathbf{Y}_{n} $ does not converge to $ 0 $ in the mean square sense.
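For contrast, $ \mathbf{Y}_{n} $ does converge to $ 0 $ in probability, since for any $ \epsilon>0 $ , $ P\left(\left|\mathbf{Y}_{n}\right|>\epsilon\right)\leq P\left(\mathbf{Y}_{n}=2^{n}\right)=\left(\frac{1}{2}\right)^{n}\rightarrow0 $ . This sequence is a standard example of convergence almost everywhere and in probability without convergence in the mean square sense.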

3

Consider a random process $ \mathbf{X}\left(t\right) $ that assumes values $ \pm1 $ . Suppose that $ \mathbf{X}\left(0\right)=\pm1 $ with probability $ 1/2 $ , and suppose that $ \mathbf{X}\left(t\right) $ then changes polarity with each occurrence of an event in a Poisson process of rate $ \lambda $ .

Note:

You might find the equations $ \frac{1}{2}\left(e^{x}+e^{-x}\right)=\sum_{j=0}^{\infty}\frac{x^{2j}}{\left(2j\right)!} $ and $ \frac{1}{2}\left(e^{x}-e^{-x}\right)=\sum_{j=0}^{\infty}\frac{x^{2j+1}}{\left(2j+1\right)!} $ helpful.

a. (15 points)

Find the probability mass function of $ \mathbf{X}\left(t\right) $ .

$ P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} |\left\{ \mathbf{X}\left(0\right)=1\right\} \right)=P\left(\left\{ \mathbf{N}\left(t\right)=\text{even}\right\} \right)=\sum_{n=0}^{\infty}e^{-\lambda t}\frac{\left(\lambda t\right)^{2n}}{\left(2n\right)!}=e^{-\lambda t}\sum_{n=0}^{\infty}\frac{\left(\lambda t\right)^{2n}}{\left(2n\right)!} $$ =e^{-\lambda t}\cdot\frac{1}{2}\left(e^{\lambda t}+e^{-\lambda t}\right)=\frac{1}{2}\left(1+e^{-2\lambda t}\right). $

$ P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} |\left\{ \mathbf{X}\left(0\right)=-1\right\} \right)=P\left(\left\{ \mathbf{N}\left(t\right)=\text{odd}\right\} \right)=\sum_{n=0}^{\infty}e^{-\lambda t}\frac{\left(\lambda t\right)^{2n+1}}{\left(2n+1\right)!}=e^{-\lambda t}\sum_{n=0}^{\infty}\frac{\left(\lambda t\right)^{2n+1}}{\left(2n+1\right)!} $$ =e^{-\lambda t}\cdot\frac{1}{2}\left(e^{\lambda t}-e^{-\lambda t}\right)=\frac{1}{2}\left(1-e^{-2\lambda t}\right). $

$ P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} \right)=P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} |\left\{ \mathbf{X}\left(0\right)=1\right\} \right)P\left(\left\{ \mathbf{X}\left(0\right)=1\right\} \right)+P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} |\left\{ \mathbf{X}\left(0\right)=-1\right\} \right)P\left(\left\{ \mathbf{X}\left(0\right)=-1\right\} \right) $$ =\frac{1}{2}\left(1+e^{-2\lambda t}\right)\cdot\frac{1}{2}+\frac{1}{2}\left(1-e^{-2\lambda t}\right)\cdot\frac{1}{2}=\frac{1}{2}. $

Thus, $ P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} \right)=P\left(\left\{ \mathbf{X}\left(t\right)=-1\right\} \right)=\frac{1}{2} $ .

b. (15 points)

Find the autocovariance function of the random process $ \mathbf{X}\left(t\right) $ .

Note

The explanation of the autocovariance function can be found here.

$ E\left[\mathbf{X}\left(t\right)\right]=1\cdot\frac{1}{2}+\left(-1\right)\cdot\frac{1}{2}=-\frac{1}{2}+\frac{1}{2}=0. $

$ C_{\mathbf{XX}}\left(t_{1},t_{2}\right)=E\left[\left(\mathbf{X}\left(t_{1}\right)-\mu_{\mathbf{X}}\left(t_{1}\right)\right)\left(\mathbf{X}\left(t_{2}\right)-\mu_{\mathbf{X}}\left(t_{2}\right)\right)^{*}\right]=R_{\mathbf{XX}}\left(t_{1},t_{2}\right)-\mu_{\mathbf{X}}\left(t_{1}\right)\mu_{\mathbf{X}}\left(t_{2}\right)^{*} $$ =R_{\mathbf{XX}}\left(t_{1},t_{2}\right)=1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=1\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=1\right\} \right)-1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=-1\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=1\right\} \right) $$ -1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=1\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=-1\right\} \right)+1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=-1\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=-1\right\} \right) $$ =1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=\mathbf{X}\left(t_{2}\right)\right\} \right)-1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)\neq\mathbf{X}\left(t_{2}\right)\right\} \right). $

$ P\left(\left\{ \mathbf{X}\left(t_{1}\right)=\mathbf{X}\left(t_{2}\right)\right\} \right)=P\left(\left\{ \text{number of events occurring within }\left(t_{1},t_{2}\right)\text{ is even}\right\} \right) $

$ =\sum_{n=0}^{\infty}e^{-\lambda\left(t_{2}-t_{1}\right)}\frac{\left[\lambda\left(t_{2}-t_{1}\right)\right]^{2n}}{\left(2n\right)!}=\frac{1}{2}e^{-\lambda\left(t_{2}-t_{1}\right)}\left(e^{\lambda\left(t_{2}-t_{1}\right)}+e^{-\lambda\left(t_{2}-t_{1}\right)}\right) $$ =\frac{1}{2}\left(1+e^{-2\lambda\left(t_{2}-t_{1}\right)}\right). $

By using the same method above, we can get $ P\left(\left\{ \mathbf{X}\left(t_{1}\right)\neq\mathbf{X}\left(t_{2}\right)\right\} \right)=\frac{1}{2}\left(1-e^{-2\lambda\left(t_{2}-t_{1}\right)}\right) $ . Thus,

$ C_{\mathbf{XX}}\left(t_{1},t_{2}\right)=\frac{1}{2}\left(1+e^{-2\lambda\left(t_{2}-t_{1}\right)}\right)-\frac{1}{2}\left(1-e^{-2\lambda\left(t_{2}-t_{1}\right)}\right)=e^{-2\lambda\left(t_{2}-t_{1}\right)}. $
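The derivation above takes $ t_{2}\geq t_{1} $ . Exchanging the roles of $ t_{1} $ and $ t_{2} $ gives the same expression with $ t_{1}-t_{2} $ in the exponent, so for arbitrary $ t_{1},t_{2} $ the autocovariance can be written as

$ C_{\mathbf{XX}}\left(t_{1},t_{2}\right)=e^{-2\lambda\left|t_{2}-t_{1}\right|}. $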

4 (15 points)

Messages arrive at a message center according to a Poisson process of rate $ \lambda $ messages per hour. Every hour the messages that have arrived during the previous hour are forwarded to their destination. Find the expected value of the total time waited by all messages that arrive during the hour.
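A sketch of one standard approach, assuming the fact that, conditioned on $ \mathbf{N}\left(1\right)=n $ arrivals in the hour, the $ n $ arrival times are i.i.d. uniform on $ \left(0,1\right) $ : a message arriving at time $ \mathbf{S}_{i} $ (in hours) waits $ 1-\mathbf{S}_{i} $ until the end of the hour, so the expected total waiting time is

$ E\left[\sum_{i=1}^{\mathbf{N}\left(1\right)}\left(1-\mathbf{S}_{i}\right)\right]=\sum_{n=0}^{\infty}P\left(\mathbf{N}\left(1\right)=n\right)\cdot n\cdot E\left[1-\mathbf{U}\right]=E\left[\mathbf{N}\left(1\right)\right]\cdot\frac{1}{2}=\frac{\lambda}{2}\;\text{hours}, $

where $ \mathbf{U} $ is uniform on $ \left(0,1\right) $ .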


Back to ECE600

Back to ECE 600 QE
