7.14 QE 2008 August

'''1'''
  
The Weak Law of Large Numbers states that if <math class="inline">\mathbf{X}_{1},\mathbf{X}_{2},\mathbf{X}_{3},\cdots</math> is a sequence of i.i.d. random variables with finite mean <math class="inline">E\left[\mathbf{X}_{i}\right]=\mu</math> for every <math class="inline">i</math>, then the sample mean <math class="inline">\mathbf{Y}_{n}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{X}_{i}</math> converges to <math class="inline">\mu</math> in probability. Suppose that instead of being i.i.d., <math class="inline">\mathbf{X}_{1},\mathbf{X}_{2},\mathbf{X}_{3},\cdots</math> each have finite mean <math class="inline">\mu</math>, and the covariance function of the sequence <math class="inline">\mathbf{X}_{n}</math> is <math class="inline">Cov\left(\mathbf{X}_{i},\mathbf{X}_{j}\right)=\sigma^{2}\rho^{\left|i-j\right|}</math>, where <math class="inline">\left|\rho\right|<1</math> and <math class="inline">\sigma^{2}>0</math>.
  
 
a. (13 points)
  
Find the mean and variance of <math class="inline">\mathbf{Y}_{n}</math>.
  
<math class="inline">E\left[\mathbf{Y}_{n}\right]=E\left[\frac{1}{n}\sum_{i=1}^{n}\mathbf{X}_{i}\right]=\frac{1}{n}\sum_{i=1}^{n}E\left[\mathbf{X}_{i}\right]=\frac{n\mu}{n}=\mu.</math>
  
<math class="inline">Cov\left(\mathbf{X}_{i},\mathbf{X}_{j}\right)=E\left[\left(\mathbf{X}_{i}-\mu\right)\left(\mathbf{X}_{j}-\mu\right)\right]=E\left[\mathbf{X}_{i}\mathbf{X}_{j}\right]-\mu^{2}.</math>
  
<math class="inline">E\left[\mathbf{X}_{i}\mathbf{X}_{j}\right]=\begin{cases}
\sigma^{2}+\mu^{2} & ,\; i=j\\
\sigma^{2}\rho^{\left|i-j\right|}+\mu^{2} & ,\; i\neq j.
\end{cases}</math>
  
<math class="inline">E\left[\mathbf{Y}_{n}^{2}\right]=E\left[\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbf{X}_{i}\mathbf{X}_{j}\right]=\frac{1}{n^{2}}\left[\sum_{i=1}^{n}E\left[\mathbf{X}_{i}^{2}\right]+\sum_{i\neq j}E\left[\mathbf{X}_{i}\mathbf{X}_{j}\right]\right]=\frac{1}{n^{2}}\left[n\left(\sigma^{2}+\mu^{2}\right)+2\sum_{i=1}^{n-1}\left(n-i\right)\left(\sigma^{2}\rho^{i}+\mu^{2}\right)\right]=\frac{1}{n^{2}}\left[n\sigma^{2}+2\sigma^{2}\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}\right]+\mu^{2}.</math>
  
<math class="inline">Var\left[\mathbf{Y}_{n}\right]=E\left[\mathbf{Y}_{n}^{2}\right]-E\left[\mathbf{Y}_{n}\right]^{2}=\frac{1}{n^{2}}\left[n\sigma^{2}+2\sigma^{2}\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}\right]+\mu^{2}-\mu^{2}=\frac{1}{n^{2}}\left[n\sigma^{2}+2\sigma^{2}\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}\right].</math>
  
 
Note
  
If the element <math class="inline">m_{ij}</math> of an <math class="inline">n\times n</math> matrix is defined as <math class="inline">m_{ij}=\left|i-j\right|</math>, then the matrix is
  
<math class="inline">\left[\begin{array}{ccccc}
0 & 1 & 2 & \cdots & n-1\\
1 & 0 & 1 & \ddots & \vdots\\
2 & 1 & 0 & \ddots & 2\\
\vdots & \ddots & \ddots & \ddots & 1\\
n-1 & \cdots & 2 & 1 & 0
\end{array}\right].</math>
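This matrix is useful because it shows that exactly <math class="inline">2\left(n-k\right)</math> of the off-diagonal pairs satisfy <math class="inline">\left|i-j\right|=k</math>, which is where the factor <math class="inline">2\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}</math> above comes from. A minimal check of that count (illustrative Python; the size <math class="inline">n=6</math> is arbitrary):

<pre>
import numpy as np

n = 6  # small illustrative size
M = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # M[i, j] = |i - j|

# Off-diagonal distance k appears exactly 2*(n - k) times,
# which is the multiplicity used in the variance sum above.
for k in range(1, n):
    assert np.count_nonzero(M == k) == 2 * (n - k)
print("entry counts match 2*(n - k) for every k")
</pre>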
 
b. (12 points)
  
Does the sample mean still converge to <math class="inline">\mu</math> in probability? You must justify your answer.
  
 
ref. You can see the definition of [[ECE 600 Convergence|convergence in probability]].
  
We must show that for any <math class="inline">\epsilon>0</math>, <math class="inline">P\left(\left\{ \left|\mathbf{Y}_{n}-\mu\right|>\epsilon\right\} \right)\rightarrow0</math> as <math class="inline">n\rightarrow\infty</math>. According to the Chebyshev inequality [CS1ChebyshevInequality], <math class="inline">P\left(\left\{ \left|\mathbf{Y}_{n}-\mu\right|>\epsilon\right\} \right)\leq\frac{Var\left[\mathbf{Y}_{n}\right]}{\epsilon^{2}}.</math> Since <math class="inline">\left|\rho\right|<1</math>, <math class="inline">\left|\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}\right|\leq n\sum_{i=1}^{\infty}\left|\rho\right|^{i}=\frac{n\left|\rho\right|}{1-\left|\rho\right|}</math>, so <math class="inline">Var\left[\mathbf{Y}_{n}\right]\leq\frac{\sigma^{2}}{n}\left(1+\frac{2\left|\rho\right|}{1-\left|\rho\right|}\right)\rightarrow0</math> as <math class="inline">n\rightarrow\infty</math>. Therefore <math class="inline">\lim_{n\rightarrow\infty}P\left(\left\{ \left|\mathbf{Y}_{n}-\mu\right|>\epsilon\right\} \right)=0</math>, and the sample mean still converges to <math class="inline">\mu</math> in probability.
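Concretely, the exact variance from part (a) and the bound above can be tabulated to see the <math class="inline">1/n</math> decay (a small numerical sketch; <math class="inline">\sigma</math> and <math class="inline">\rho</math> are arbitrary):

<pre>
import numpy as np

sigma, rho = 1.5, 0.6  # arbitrary values satisfying |rho| < 1, sigma^2 > 0
for n in (10, 100, 1000, 10000):
    k = np.arange(1, n)
    var_Yn = (n * sigma**2 + 2 * sigma**2 * np.sum((n - k) * rho**k)) / n**2
    bound = sigma**2 / n * (1 + 2 * abs(rho) / (1 - abs(rho)))  # bound derived above
    print(f"n={n:6d}  Var[Y_n]={var_Yn:.6f}  bound={bound:.6f}")
</pre>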
  
 
'''2'''
  
Let <math class="inline">\mathbf{X}_{1},\mathbf{X}_{2},\mathbf{X}_{3},\cdots</math> be a sequence of i.i.d. Bernoulli random variables with <math class="inline">p=1/2</math>, and let <math class="inline">\mathbf{Y}_{n}=2^{n}\mathbf{X}_{1}\mathbf{X}_{2}\cdots\mathbf{X}_{n}</math>.
  
 
a. (15 points)
  
Does the sequence <math class="inline">\mathbf{Y}_{n}</math> converge to <math class="inline">0</math> almost everywhere?
  
 
• You can see the definition of [[ECE 600 Convergence|convergence almost everywhere]].
  
• The range of <math class="inline">\mathbf{X}_{i}</math> is <math class="inline">\left\{ 0,1\right\}</math>.
  
• The range of <math class="inline">\mathbf{Y}_{n}</math> is <math class="inline">\left\{ 0,2^{n}\right\}</math>.
  
• The probability that <math class="inline">\mathbf{Y}_{n}=2^{n}</math> is <math class="inline">P\left(\mathbf{Y}_{n}=2^{n}\right)=P\left(\left\{ \mathbf{X}_{1}=1\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}=1\right\} \right)=\left(\frac{1}{2}\right)^{n}.</math>
  
• <math class="inline">\lim_{n\rightarrow\infty}P\left(\mathbf{Y}_{n}=2^{n}\right)=0</math> and <math class="inline">\lim_{n\rightarrow\infty}P\left(\mathbf{Y}_{n}=0\right)=1</math>.
  
• Moreover, once some <math class="inline">\mathbf{X}_{i}=0</math>, the product is <math class="inline">0</math> for every <math class="inline">n\geq i</math>; the only sample path on which <math class="inline">\mathbf{Y}_{n}</math> does not converge to <math class="inline">0</math> is the all-ones path, which has probability <math class="inline">\lim_{n\rightarrow\infty}\left(\frac{1}{2}\right)^{n}=0</math>. Thus <math class="inline">\left\{ \mathbf{Y}_{n}\right\} \rightarrow0</math> with probability 1.
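A quick simulation illustrates this absorption at <math class="inline">0</math> (a sketch; the path count and horizon are arbitrary):

<pre>
import numpy as np

rng = np.random.default_rng(1)
n, paths = 60, 100_000

X = rng.integers(0, 2, size=(paths, n))      # i.i.d. Bernoulli(1/2)
prod = np.cumprod(X, axis=1)                 # X_1 * ... * X_k along each path
Y = 2.0 ** np.arange(1, n + 1) * prod        # Y_k = 2^k * X_1 * ... * X_k

# Once any X_i = 0, the path is absorbed at 0 forever.
for k in (1, 5, 10, 20):
    frac = np.mean(Y[:, k - 1] == 0)
    print(f"k={k:2d}  P(Y_k = 0) ~ {frac:.5f}  (exact {1 - 0.5**k:.5f})")
</pre>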
  
 
b. (15 points)
  
Does the sequence <math class="inline">\mathbf{Y}_{n}</math> converge to 0 in the mean square sense?
  
 
Note
 
You can see the definition of [[ECE 600 Convergence|convergence in mean-square]].
  
<math class="inline">E\left[\mathbf{X}^{2}\right]=\sum_{k=0}^{1}k^{2}\left(\frac{1}{2}\right)=0^{2}\cdot\frac{1}{2}+1^{2}\cdot\frac{1}{2}=\frac{1}{2}.</math>
  
<math class="inline">E\left[\left|\mathbf{Y}_{n}-0\right|^{2}\right]=E\left[\mathbf{Y}_{n}^{2}\right]=E\left[2^{2n}\mathbf{X}_{1}^{2}\mathbf{X}_{2}^{2}\cdots\mathbf{X}_{n}^{2}\right]=2^{2n}\prod_{i=1}^{n}E\left[\mathbf{X}_{i}^{2}\right]=\left(4E\left[\mathbf{X}^{2}\right]\right)^{n}=2^{n},</math> where the expectation factors by independence.
  
<math class="inline">\lim_{n\rightarrow\infty}E\left[\mathbf{Y}_{n}^{2}\right]=\lim_{n\rightarrow\infty}2^{n}=\infty.</math>
  
Thus, <math class="inline">\mathbf{Y}_{n}</math> does not converge to <math class="inline">0</math> in the mean square sense.
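The divergence can also be seen numerically; a Monte Carlo sketch (estimates are noisy since <math class="inline">\mathbf{Y}_{n}^{2}</math> is nonzero only on one rare event):

<pre>
import numpy as np

rng = np.random.default_rng(2)
trials = 500_000
for n in (2, 5, 10):
    X = rng.integers(0, 2, size=(trials, n))
    Yn = 2.0**n * X.prod(axis=1)
    # E[Y_n^2] = 4^n * 2^{-n} = 2^n, so the second moment diverges with n.
    print(f"n={n:2d}  E[Y_n^2] ~ {np.mean(Yn**2):9.1f}  (exact {2.0**n:.0f})")
</pre>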
  
 
'''3'''
  
Consider a random process <math class="inline">\mathbf{X}\left(t\right)</math> that assumes values <math class="inline">\pm1</math>. Suppose that <math class="inline">\mathbf{X}\left(0\right)=\pm1</math> with probability <math class="inline">1/2</math>, and suppose that <math class="inline">\mathbf{X}\left(t\right)</math> then changes polarity with each occurrence of an event in a Poisson process of rate <math class="inline">\lambda</math>.
  
 
Note:
  
You might find the equations <math class="inline">\frac{1}{2}\left(e^{x}+e^{-x}\right)=\sum_{j=0}^{\infty}\frac{x^{2j}}{\left(2j\right)!}</math> and <math class="inline">\frac{1}{2}\left(e^{x}-e^{-x}\right)=\sum_{j=0}^{\infty}\frac{x^{2j+1}}{\left(2j+1\right)!}</math> helpful.
  
 
a. (15 points)
  
Find the probability mass function of <math class="inline">\mathbf{X}\left(t\right)</math>.
  
<math class="inline">P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} |\left\{ \mathbf{X}\left(0\right)=1\right\} \right)=P\left(\left\{ \mathbf{N}\left(t\right)\text{ is even}\right\} \right)=\sum_{n=0}^{\infty}e^{-\lambda t}\frac{\left(\lambda t\right)^{2n}}{\left(2n\right)!}=e^{-\lambda t}\cdot\frac{1}{2}\left(e^{\lambda t}+e^{-\lambda t}\right)=\frac{1}{2}\left(1+e^{-2\lambda t}\right),</math> where <math class="inline">\mathbf{N}\left(t\right)</math> denotes the number of polarity changes in <math class="inline">\left[0,t\right]</math>.
  
<math class="inline">P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} |\left\{ \mathbf{X}\left(0\right)=-1\right\} \right)=P\left(\left\{ \mathbf{N}\left(t\right)\text{ is odd}\right\} \right)=\sum_{n=0}^{\infty}e^{-\lambda t}\frac{\left(\lambda t\right)^{2n+1}}{\left(2n+1\right)!}=e^{-\lambda t}\cdot\frac{1}{2}\left(e^{\lambda t}-e^{-\lambda t}\right)=\frac{1}{2}\left(1-e^{-2\lambda t}\right).</math>
  
<math class="inline">P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} \right)=P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} |\left\{ \mathbf{X}\left(0\right)=1\right\} \right)P\left(\left\{ \mathbf{X}\left(0\right)=1\right\} \right)+P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} |\left\{ \mathbf{X}\left(0\right)=-1\right\} \right)P\left(\left\{ \mathbf{X}\left(0\right)=-1\right\} \right)=\frac{1}{2}\left(1+e^{-2\lambda t}\right)\cdot\frac{1}{2}+\frac{1}{2}\left(1-e^{-2\lambda t}\right)\cdot\frac{1}{2}=\frac{1}{2}.</math>
  
Thus, <math class="inline">P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} \right)=P\left(\left\{ \mathbf{X}\left(t\right)=-1\right\} \right)=\frac{1}{2}</math>.
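These probabilities are easy to check by simulation, sampling the number of polarity switches as a Poisson count (a sketch; <math class="inline">\lambda</math> and <math class="inline">t</math> are arbitrary):

<pre>
import numpy as np

rng = np.random.default_rng(3)
lam, t, trials = 0.7, 1.3, 1_000_000  # arbitrary rate and time

X0 = rng.choice([-1, 1], size=trials)    # X(0) = +/-1 with probability 1/2
N = rng.poisson(lam * t, size=trials)    # number of polarity switches in [0, t]
Xt = X0 * (-1) ** N                      # each event flips the sign once

p11 = np.mean(Xt[X0 == 1] == 1)
print(f"P(X(t)=1 | X(0)=1) ~ {p11:.4f}  (exact {0.5 * (1 + np.exp(-2 * lam * t)):.4f})")
print(f"P(X(t)=1)          ~ {np.mean(Xt == 1):.4f}  (exact 0.5000)")
</pre>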
  
 
b. (15 points)
  
Find the autocovariance function of the random process <math class="inline">\mathbf{X}\left(t\right)</math>.
  
 
Note
 
The explanation about the autocovariance function is [[ECE 600 General Concepts of Stochastic Processes Definitions|here]].
  
<math class="inline">E\left[\mathbf{X}\left(t\right)\right]=1\cdot\frac{1}{2}+\left(-1\right)\cdot\frac{1}{2}=\frac{1}{2}-\frac{1}{2}=0.</math>
  
<math class="inline">C_{\mathbf{XX}}\left(t_{1},t_{2}\right)=E\left[\left(\mathbf{X}\left(t_{1}\right)-\mu_{\mathbf{X}}\left(t_{1}\right)\right)\left(\mathbf{X}\left(t_{2}\right)-\mu_{\mathbf{X}}\left(t_{2}\right)\right)^{*}\right]=R_{\mathbf{XX}}\left(t_{1},t_{2}\right)-\mu_{\mathbf{X}}\left(t_{1}\right)\mu_{\mathbf{X}}\left(t_{2}\right)^{*}=R_{\mathbf{XX}}\left(t_{1},t_{2}\right)</math> <math class="inline">=1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=1\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=1\right\} \right)-1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=-1\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=1\right\} \right)-1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=1\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=-1\right\} \right)+1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=-1\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=-1\right\} \right)</math> <math class="inline">=P\left(\left\{ \mathbf{X}\left(t_{1}\right)=\mathbf{X}\left(t_{2}\right)\right\} \right)-P\left(\left\{ \mathbf{X}\left(t_{1}\right)\neq\mathbf{X}\left(t_{2}\right)\right\} \right).</math>
  
For <math class="inline">t_{2}>t_{1}</math>, <math class="inline">P\left(\left\{ \mathbf{X}\left(t_{1}\right)=\mathbf{X}\left(t_{2}\right)\right\} \right)=P\left(\left\{ \text{number of events occurring within }\left(t_{1},t_{2}\right)\text{ is even}\right\} \right)</math>
  
<math class="inline">=\sum_{n=0}^{\infty}e^{-\lambda\left(t_{2}-t_{1}\right)}\frac{\left[\lambda\left(t_{2}-t_{1}\right)\right]^{2n}}{\left(2n\right)!}=\frac{1}{2}e^{-\lambda\left(t_{2}-t_{1}\right)}\left(e^{\lambda\left(t_{2}-t_{1}\right)}+e^{-\lambda\left(t_{2}-t_{1}\right)}\right)=\frac{1}{2}\left(1+e^{-2\lambda\left(t_{2}-t_{1}\right)}\right).</math>
  
By the same method, <math class="inline">P\left(\left\{ \mathbf{X}\left(t_{1}\right)\neq\mathbf{X}\left(t_{2}\right)\right\} \right)=\frac{1}{2}\left(1-e^{-2\lambda\left(t_{2}-t_{1}\right)}\right)</math>. Thus,
  
<math class="inline">C_{\mathbf{XX}}\left(t_{1},t_{2}\right)=\frac{1}{2}\left(1+e^{-2\lambda\left(t_{2}-t_{1}\right)}\right)-\frac{1}{2}\left(1-e^{-2\lambda\left(t_{2}-t_{1}\right)}\right)=e^{-2\lambda\left(t_{2}-t_{1}\right)}.</math> Since the autocovariance is symmetric in its arguments, for arbitrary <math class="inline">t_{1},t_{2}</math> this reads <math class="inline">C_{\mathbf{XX}}\left(t_{1},t_{2}\right)=e^{-2\lambda\left|t_{2}-t_{1}\right|}</math>.
  
 
'''4''' (15 points)
  
Messages arrive at a message center according to a Poisson process of rate <math class="inline">\lambda</math> messages per hour. Every hour the messages that have arrived during the previous hour are forwarded to their destination. Find the expected value of the total time waited by all messages that arrive during the hour.
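A sketch of one standard derivation uses the fact that, conditioned on <math class="inline">\mathbf{N}\left(1\right)=n</math>, the <math class="inline">n</math> arrival times are i.i.d. uniform over the hour. A message arriving at time <math class="inline">\mathbf{S}_{i}</math> (in hours) waits <math class="inline">1-\mathbf{S}_{i}</math> until the forwarding instant, so the total wait is <math class="inline">\mathbf{T}=\sum_{i=1}^{\mathbf{N}\left(1\right)}\left(1-\mathbf{S}_{i}\right)</math>, and

<math class="inline">E\left[\mathbf{T}\right]=E\left[E\left[\mathbf{T}|\mathbf{N}\left(1\right)\right]\right]=E\left[\mathbf{N}\left(1\right)\cdot E\left[1-\mathbf{U}\right]\right]=E\left[\mathbf{N}\left(1\right)\right]\cdot\frac{1}{2}=\frac{\lambda}{2}\text{ hours},</math>

where <math class="inline">\mathbf{U}</math> is uniform on <math class="inline">\left(0,1\right)</math>.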
  
 