==7.12 [[ECE_PhD_Qualifying_Exams|QE]] 2006 August==
  
'''1'''
  
Let <math class="inline">\mathbf{U}_{n}</math>  be a sequence of independent, identically distributed zero-mean, unit-variance Gaussian random variables. The sequence <math class="inline">\mathbf{X}_{n}</math> , <math class="inline">n\geq1</math> , is given by <math class="inline">\mathbf{X}_{n}=\frac{1}{2}\mathbf{U}_{n}+\left(\frac{1}{2}\right)^{2}\mathbf{U}_{n-1}+\cdots+\left(\frac{1}{2}\right)^{n}\mathbf{U}_{1}.</math>  
  
'''(a) (15 points)'''
  
Find the mean and variance of <math class="inline">\mathbf{X}_{n}</math> .
  
i)  Find <math class="inline">E\left[\mathbf{X}_{n}\right]</math>  
  
<math class="inline">\mathbf{X}_{n}=\sum_{k=0}^{n-1}\left(\frac{1}{2}\right)^{k+1}\mathbf{U}_{n-k}. E\left[\mathbf{X}_{n}\right]=E\left(\sum_{k=0}^{n-1}\left(\frac{1}{2}\right)^{k+1}\mathbf{U}_{n-k}\right)=\sum_{k=0}^{n-1}\left(\frac{1}{2}\right)^{k+1}E\left[\mathbf{U}_{n-k}\right]=0.</math>  
  
ii)  Find <math class="inline">E\left[\mathbf{X}_{n}^{2}\right]</math>  
  
<math class="inline">E\left[\mathbf{X}_{n}^{2}\right]=E\left[\left(\sum_{k=0}^{n-1}\left(\frac{1}{2}\right)^{k+1}\mathbf{U}_{n-k}\right)^{2}\right]=E\left[\sum_{k=0}^{n-1}\sum_{j=0}^{n-1}\left(\frac{1}{2}\right)^{k+1}\left(\frac{1}{2}\right)^{j+1}\mathbf{U}_{n-k}\mathbf{U}_{n-j}\right]</math><math class="inline">=E\left[\sum_{k=0}^{n-1}\left(\frac{1}{2}\right)^{2k+2}\mathbf{U}_{n-k}^{2}+\underset{k\neq j}{\sum_{k=0}^{n-1}\sum_{j=0}^{n-1}}\left(\frac{1}{2}\right)^{k+1}\left(\frac{1}{2}\right)^{j+1}\mathbf{U}_{n-k}\mathbf{U}_{n-j}\right]</math><math class="inline">=\sum_{k=0}^{n-1}\left(\frac{1}{2}\right)^{2k+2}E\left[\mathbf{U}_{n-k}^{2}\right]+\underset{k\neq j}{\sum_{k=0}^{n-1}\sum_{j=0}^{n-1}}\left(\frac{1}{2}\right)^{k+1}\left(\frac{1}{2}\right)^{j+1}E\left[\mathbf{U}_{n-k}\right]E\left[\mathbf{U}_{n-j}\right]</math><math class="inline">=\sum_{k=0}^{n-1}\left(\frac{1}{2}\right)^{2k+2}=\sum_{k=1}^{n}\left(\frac{1}{2}\right)^{2k}=\frac{\left(\frac{1}{2}\right)^{2}\left(1-\left(\frac{1}{2}\right)^{2n}\right)}{1-\left(\frac{1}{2}\right)^{2}}=\frac{1}{3}\left(1-\left(\frac{1}{2}\right)^{2n}\right).</math>  
  
iii)  Find <math class="inline">Var\left[\mathbf{X}_{n}\right]</math>  
  
<math class="inline">Var\left[\mathbf{X}_{n}\right]=E\left[\mathbf{X}_{n}^{2}\right]-\left(E\left[\mathbf{X_{n}}\right]\right)^{2}=\frac{1}{3}\left(1-\left(\frac{1}{2}\right)^{2n}\right).</math>  
  
'''(b) (15 points)'''
  
Find the characteristic function of <math class="inline">\mathbf{X}_{n}</math> .
  
Since <math class="inline">\mathbf{U}_{n}</math>  is a sequence of i.i.d.  Gaussian random variables, <math class="inline">\mathbf{X}_{n}</math>  is a sequence of Gaussian random variables with zero mean and variance <math class="inline">\sigma_{\mathbf{X}_{n}}^{2}=\frac{1}{3}\left(1-\left(\frac{1}{2}\right)^{2n}\right)</math> . Hence the characteristic function of <math class="inline">\mathbf{X}_{n}</math>  is <math class="inline">\Phi_{\mathbf{X}_{n}}\left(\omega\right)=\exp\left(i\mu_{\mathbf{X}_{n}}\omega-\frac{1}{2}\sigma_{\mathbf{X}_{n}}^{2}\omega^{2}\right)=\exp\left(-\frac{\omega^{2}}{6}\left(1-\left(\frac{1}{2}\right)^{2n}\right)\right).</math>  
  
'''(c) (10 points)'''
  
Does the sequence <math class="inline">\mathbf{X}_{n}</math>  converge in distribution? A simple yes or no answer is not sufficient. You must justify your answer.
  
<math class="inline">\Phi=F_{\mathbf{X}_{n}}\left(x\right)=\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}\sigma_{\mathbf{X}_{n}}}\exp\left(-\frac{x'^{2}}{2\sigma_{\mathbf{X}_{n}}^{2}}\right)dx'</math>  where <math class="inline">\sigma_{\mathbf{X}_{n}}^{2}=\frac{1}{3}\left(1-\left(\frac{1}{2}\right)^{2n}\right)</math> .  
  
Since <math class="inline">\lim_{n\rightarrow\infty}\sigma_{\mathbf{X}_{n}}^{2}=\frac{1}{3} , \lim_{n\rightarrow\infty}F_{\mathbf{X}_{n}}=\int_{-\infty}^{x}\frac{1}{\sqrt{\frac{2\pi}{3}}}\exp\left(-\frac{x'^{2}}{2\sigma_{\mathbf{X}_{n}}^{2}}\right)dx'=F_{\mathbf{X}}\left(x\right).</math>  
  
<math class="inline">\therefore</math>  The squance <math class="inline">\mathbf{X}_{n}</math>  converges in distribution.
  
'''2'''
  
Let <math class="inline">\Phi</math>  be the standard normal distribution, i.e., the distribution function of a zero-mean, unit-variance Gaussian random variable. Let <math class="inline">\mathbf{X}</math>  be a normal random variable with mean <math class="inline">\mu</math>  and variance 1 . We want to find <math class="inline">E\left[\Phi\left(\mathbf{X}\right)\right]</math> .
  
'''(a) (10 points)'''
  
First show that <math class="inline">E\left[\Phi\left(\mathbf{X}\right)\right]=P\left(\mathbf{Z}\leq\mathbf{X}\right)</math> , where <math class="inline">\mathbf{Z}</math>  is a standard normal random variable independent of <math class="inline">\mathbf{X}</math> . Hint: Use an intermediate random variable <math class="inline">\mathbf{I}</math>  defined as  
  
<math class="inline">\mathbf{I}=\left\{ \begin{array}{lll}
1 &  & \text{if }\mathbf{Z}\leq\mathbf{X}\\
0 &  & \text{if }\mathbf{Z}>\mathbf{X}.
\end{array}\right.</math>
  
<math class="inline">P\left(\mathbf{Z}\leq\mathbf{X}\right)=\int_{-\infty}^{\infty}P\left(\mathbf{Z}\leq x|\mathbf{X}=x\right)\cdot f_{\mathbf{X}}\left(x\right)dx=\int_{-\infty}^{\infty}\Phi\left(x\right)\cdot f_{\mathbf{X}}\left(x\right)dx=E\left[\Phi\left(\mathbf{X}\right)\right].</math>  
  
'''(b) (10 points)'''
  
Now use the result from Part (a) to show that <math class="inline">E\left[\Phi\left(\mathbf{X}\right)\right]=\Phi\left(\frac{\mu}{\sqrt{2}}\right)</math> .
  
Let <math class="inline">\mathbf{Y}=\mathbf{Z}-\mathbf{X}</math> . Since <math class="inline">\mathbf{Z}</math>  and <math class="inline">\mathbf{X}</math>  are Gaussian random variables, <math class="inline">\mathbf{Y}</math>  is also a Gaussian random variable.  
  
<math class="inline">E\left[\mathbf{Y}\right]=E\left[\mathbf{Z}\right]-E\left[\mathbf{X}\right]=-\mu.</math>  
  
<math class="inline">Var\left[\mathbf{Y}\right]=E\left[\left(\mathbf{Y}-E\left[\mathbf{Y}\right]\right)^{2}\right]=E\left[\left(\mathbf{Z}-\left(\mathbf{X}-\mu\right)\right)^{2}\right]=E\left[\mathbf{Z}^{2}\right]+E\left[\left(\mathbf{X}-\mu\right)^{2}\right]-2E\left[\mathbf{Z}\right]E\left[\mathbf{X}-\mu\right]</math><math class="inline">=E\left[\mathbf{Z}^{2}\right]-E\left[\mathbf{Z}\right]E\left[\mathbf{X}-\mu\right]+E\left[\left(\mathbf{X}-\mu\right)^{2}\right]-E\left[\mathbf{Z}\right]E\left[\mathbf{X}-\mu\right]</math><math class="inline">=E\left[\mathbf{Z}^{2}\right]-\left(E\left[\mathbf{Z}\right]\right)^{2}+E\left[\left(\mathbf{X}-\mu\right)^{2}\right]-\left(E\left[\mathbf{X}-\mu\right]\right)^{2}=Var\left[\mathbf{Z}\right]+Var\left[\mathbf{X}\right]=2.</math>  
  
<math class="inline">E\left[\Phi\left(\mathbf{X}\right)\right]=P\left(\left\{ \mathbf{Z}\leq\mathbf{X}\right\} \right)=P\left(\left\{ \mathbf{Y}\leq0\right\} \right)=\Phi\left(\frac{0-\left(-\mu\right)}{\sqrt{2}}\right)=\Phi\left(\frac{\mu}{\sqrt{2}}\right).</math>
  
'''3 (15 points)'''
 
Let <math class="inline">\mathbf{Y}(t)</math> be the output of linear system with impulse response <math class="inline">h\left(t\right)</math> and input <math class="inline">\mathbf{X}\left(t\right)+\mathbf{N}\left(t\right)</math> , where <math class="inline">\mathbf{X}\left(t\right)</math> and <math class="inline">\mathbf{N}\left(t\right)</math> are jointly wide-sense stationary independent random processes. If <math class="inline">\mathbf{Z}\left(t\right)=\mathbf{X}\left(t\right)-\mathbf{Y}\left(t\right)</math> , find the power spectral density <math class="inline">S_{\mathbf{Z}}\left(\omega\right)</math> in terms of <math class="inline">S_{\mathbf{X}}\left(\omega\right) , S_{\mathbf{N}}\left(\omega\right) , m_{\mathbf{X}}=E\left[\mathbf{X}\right]</math> , and <math class="inline">m_{\mathbf{Y}}=E\left[\mathbf{Y}\right]</math> .
  
 
Solution
  
Let <math class="inline">\mathbf{M}\left(t\right)=\mathbf{X}\left(t\right)+\mathbf{N}\left(t\right)</math> . Since <math class="inline">\mathbf{X}\left(t\right)</math> and <math class="inline">\mathbf{N}\left(t\right)</math> are jointly wide-sense stationary. <math class="inline">\mathbf{M}\left(t\right)</math> is also a wide-sense stationary random process.  
  
<math class="inline">\mathbf{Y}\left(t\right)=\mathbf{M}\left(t\right)*h\left(t\right).</math>
  
<math class="inline">R_{\mathbf{Y}}\left(\tau\right)=\left(R_{\mathbf{M}}*h*\tilde{h}\right)\left(\tau\right)\text{ where }\left(\tilde{h}\left(t\right)=h\left(-t\right)\right).</math>
  
<math class="inline">R_{\mathbf{M}}\left(\tau\right)=E\left[\mathbf{M}\left(t\right)\mathbf{M}\left(t+\tau\right)\right]</math><math class="inline">=E\left[\mathbf{X}\left(t\right)\mathbf{X}\left(t+\tau\right)\right]+E\left[\mathbf{X}\left(t\right)\right]E\left[\mathbf{N}\left(t+\tau\right)\right]+E\left[\mathbf{X}\left(t+\tau\right)\right]E\left[\mathbf{N}\left(t\right)\right]+E\left[\mathbf{N}\left(t\right)\mathbf{N}\left(t+\tau\right)\right]</math><math class="inline">=R_{\mathbf{X}}\left(\tau\right)+2m_{\mathbf{X}}m_{\mathbf{N}}+R_{\mathbf{N}}\left(\tau\right)</math>
  
<math class="inline">R_{\mathbf{XY}}\left(\tau\right)=E\left[\mathbf{X}\left(t\right)\mathbf{Y}\left(t+\tau\right)\right]</math><math class="inline">=E\left[\mathbf{X}\left(t\right)\int_{-\infty}^{\infty}\left(\mathbf{X}\left(t+\tau-\alpha\right)+\mathbf{N}\left(t+\tau-\alpha\right)\right)h\left(\alpha\right)d\alpha\right]</math><math class="inline">=\int_{-\infty}^{\infty}\left(R_{\mathbf{X}}\left(\tau-\alpha\right)+E\left[\mathbf{X}\left(t\right)\right]E\left[\mathbf{N}\left(t+\tau-\alpha\right)\right]\right)h\left(\alpha\right)d\alpha</math><math class="inline">=R_{\mathbf{X}}\left(\tau\right)*h\left(\tau\right)+m_{\mathbf{X}}m_{\mathbf{N}}*h\left(\tau\right).</math>
  
<math class="inline">R_{\mathbf{Z}}\left(\tau\right)=E\left[\mathbf{Z}\left(t\right)\mathbf{Z}\left(t+\tau\right)\right]=E\left[\left(\mathbf{X}\left(t\right)-\mathbf{Y}\left(t\right)\right)\left(\mathbf{X}\left(t+\tau\right)-\mathbf{Y}\left(t+\tau\right)\right)\right]</math><math class="inline">=R_{\mathbf{X}}\left(\tau\right)-R_{\mathbf{YX}}\left(\tau\right)-R_{\mathbf{XY}}\left(\tau\right)+R_{\mathbf{YY}}\left(\tau\right).</math>
  
<math class="inline">S_{\mathbf{Z}}\left(\omega\right)=S_{\mathbf{X}}\left(\omega\right)-S_{\mathbf{YX}}\left(\omega\right)-S_{\mathbf{XY}}\left(\omega\right)+S_{\mathbf{Y}}\left(\omega\right)=S_{\mathbf{X}}\left(\omega\right)-S_{\mathbf{XY}}^{*}\left(\omega\right)-S_{\mathbf{XY}}\left(\omega\right)+S_{\mathbf{Y}}\left(\omega\right)</math><math class="inline">=S_{\mathbf{X}}\left(\omega\right)-2\Re\left\{ S_{\mathbf{XY}}\left(\omega\right)\right\} +S_{\mathbf{M}}\left(\omega\right)\Bigl|H\left(\omega\right)\Bigr|^{2}</math><math class="inline">=S_{\mathbf{X}}\left(\omega\right)-2\Re\left\{ S_{\mathbf{X}}\left(\omega\right)H\left(\omega\right)+2\pi m_{\mathbf{X}}m_{\mathbf{N}}\delta\left(\omega\right)H\left(\omega\right)\right\} +\left\{ S_{\mathbf{X}}\left(\omega\right)+2\pi m_{\mathbf{X}}m_{\mathbf{N}}\delta\left(\omega\right)+S_{\mathbf{N}}\left(\omega\right)\right\} \Bigl|H\left(\omega\right)\Bigr|^{2}</math><math class="inline">=S_{\mathbf{X}}\left(\omega\right)-2\Re\left\{ S_{\mathbf{X}}\left(\omega\right)H\left(\omega\right)+2\pi m_{\mathbf{X}}\left(m_{\mathbf{Y}}-m_{\mathbf{X}}H\left(0\right)\right)\delta\left(\omega\right)\right\} +</math><math class="inline">\left\{ S_{\mathbf{X}}\left(\omega\right)+S_{\mathbf{N}}\left(\omega\right)\right\} \Bigl|H\left(\omega\right)\Bigr|^{2}+2\pi m_{\mathbf{X}}\left(m_{\mathbf{Y}}-m_{\mathbf{X}}H\left(0\right)\right)H\left(0\right)\delta\left(\omega\right).</math>
  
<math class="inline">\because m_{\mathbf{Y}}=m_{\mathbf{M}}*h\left(t\right)=\int_{-\infty}^{\infty}\left(m_{\mathbf{X}}+m_{\mathbf{N}}\right)h\left(t\right)dt=\left(m_{\mathbf{X}}+m_{\mathbf{N}}\right)H\left(0\right)\Rightarrow m_{\mathbf{N}}H\left(0\right)=m_{\mathbf{Y}}-m_{\mathbf{X}}H\left(0\right).</math>
  
'''4'''
  
Suppose customer orders arrive according to an i.i.d.  Bernoulli random process <math class="inline">\mathbf{X}_{n}</math> with parameter <math class="inline">p</math> . Thus, an order arrives at time index <math class="inline">n</math>  (i.e., <math class="inline">\mathbf{X}_{n}=1</math> ) with probability <math class="inline">p</math> ; if an order does not arrive at time index <math class="inline">n</math> , then <math class="inline">\mathbf{X}_{n}=0</math> . When an order arrives, its size is an exponential random variable with parameter <math class="inline">\lambda</math> . Let <math class="inline">\mathbf{S}_{n}</math> be the total size of all orders up to time <math class="inline">n</math> .
  
'''(a) (20 points)'''
 
Find the mean and autocorrelation function of <math class="inline">\mathbf{S}_{n}</math> .
 
Let <math class="inline">\mathbf{Y}_{n}</math>  be the size of an order at time index <math class="inline">n</math> , then <math class="inline">\mathbf{Y}_{n}</math>  is a sequence of i.i.d.  exponential random variables.
 
<math class="inline">\mathbf{S}_{n}=\sum_{k=1}^{n}\mathbf{X}_{n}\mathbf{Y}_{n}.</math>
 
<math class="inline">E\left[\mathbf{S}_{n}\right]=\sum_{k=1}^{n}E\left[\mathbf{X}_{n}\right]E\left[\mathbf{Y}_{n}\right]=\sum_{k=1}^{n}p\cdot\frac{1}{\lambda}=\frac{np}{\lambda}.</math>
 
<math class="inline">R_{\mathbf{S}}\left(n,m\right)=E\left[\mathbf{S}_{n}\mathbf{S}_{m}\right]=\sum_{k=1}^{n}\sum_{l=1}^{m}E\left[\mathbf{X}_{n}\right]E\left[\mathbf{X}_{m}\right]E\left[\mathbf{Y}_{n}\right]E\left[\mathbf{Y}_{m}\right]=\sum_{k=1}^{n}\sum_{l=1}^{m}\frac{p^{2}}{\lambda^{2}}=nm\frac{p^{2}}{\lambda^{2}}.</math>
 
'''(b) (5 points)'''
 
Is <math class="inline">\mathbf{S}_{n}</math>  a stationary random process? Explain.
 
• Approach 1: <math class="inline">\mathbf{S}_{n}</math> is not a stationary random process since <math class="inline">R_{\mathbf{S}}\left(n,m\right)</math> does not depend only on <math class="inline">m-n</math>.
 
• Approach 2: <math class="inline">\mathbf{S}_{n}</math> is not a stationary random process since <math class="inline">E\left[\mathbf{S}_{n}\right]</math> is not constant.
  
 
----
 
 
[[ECE600|Back to ECE600]]
 
  
[[ECE 600 QE|Back to my ECE 600 QE page]]
 
[[ECE_PhD_Qualifying_Exams|Back to the general ECE PHD QE page]] (for problem discussion)
