=6.2 MRB 1994 Final=
  
'''1.''' (15 pts.)
  
 
Three boxes that appear identical contain the following combinations of coins: Box X - 2 quarters; Box Y - 1 quarter, 2 dimes; Box Z - 1 quarter, 1 dime. One of the boxes is selected at random, and a coin is selected at random from that box. The coin selected is a quarter. What is the probability that the box selected contains at least one dime?
  
=Solution=
  
 
• We can define the events as

– A = Box selected at random contains at least one dime.

– Q = Coin drawn from box selected is a quarter.

– X = Box X is selected.

– Y = Box Y is selected.

– Z = Box Z is selected.
  
• From the given information, we have <math class="inline">P\left(X\right)=P\left(Y\right)=P\left(Z\right)=1/3</math> and <math class="inline">P\left(Q|X\right)=1,P\left(Q|Y\right)=1/3,P\left(Q|Z\right)=1/2</math>.
  
• By using Bayes' theorem, <math class="inline">P\left(A|Q\right)</math>  is  
  
<math class="inline">P\left(A|Q\right)=\frac{P\left(A\cap Q\right)}{P\left(Q\right)}=\frac{P\left(\left(Y\cup Z\right)\cap Q\right)}{P\left(Q\right)}=\frac{P\left(Y\cap Q\right)+P\left(Z\cap Q\right)}{P\left(Q\cap X\right)+P\left(Q\cap Y\right)+P\left(Q\cap Z\right)}</math><math class="inline">=\frac{P\left(Y\cap Q\right)+P\left(Z\cap Q\right)}{P\left(Q|X\right)P\left(X\right)+P\left(Q|Y\right)P\left(Y\right)+P\left(Q|Z\right)P\left(Z\right)}</math><math class="inline">=\frac{P\left(Q|Y\right)P\left(Y\right)+P\left(Q|Z\right)P\left(Z\right)}{P\left(Q|X\right)P\left(X\right)+P\left(Q|Y\right)P\left(Y\right)+P\left(Q|Z\right)P\left(Z\right)}</math><math class="inline">=\frac{P\left(Q|Y\right)+P\left(Q|Z\right)}{P\left(Q|X\right)+P\left(Q|Y\right)+P\left(Q|Z\right)}=\frac{\frac{1}{3}+\frac{1}{2}}{1+\frac{1}{3}+\frac{1}{2}}=\frac{\frac{5}{6}}{\frac{11}{6}}=\frac{5}{11}.</math>

(The numerator splits into a sum because <math class="inline">Y</math> and <math class="inline">Z</math> are mutually exclusive, and the denominator expands <math class="inline">P\left(Q\right)</math> by the total probability theorem.)
  
'''2.''' (20 pts.)
  
 
Multiple Choice Problems: Select the single best answer to each of the following four multiple choice questions by circling the letter in front of the answer. There is space to work out the problems on the next page if needed.
  
=I.=
  
Consider a random experiment with probability space <math class="inline">\left(\mathcal{S},\mathcal{F},P\right)</math> and let <math class="inline">A\in\mathcal{F}</math> and <math class="inline">B\in\mathcal{F}</math>. Let <math class="inline">P\left(A\right)=1/3</math>, <math class="inline">P\left(B\right)=1/3</math>, and <math class="inline">P\left(A\cap B\right)=1/4</math>. Find <math class="inline">P\left(A|\bar{B}\right)</math>.
  
 
A. 1/12  B. 1/8  C. 1/6  D. 1/4  E. 1/3  
  
=Solution=
  
<math class="inline">P\left(A|\bar{B}\right)=\frac{P\left(A\cap\bar{B}\right)}{P\left(\bar{B}\right)}=\frac{P\left(A\cap\bar{B}\right)}{1-P\left(B\right)}=\frac{P\left(A\right)-P\left(A\cap B\right)}{1-P\left(B\right)}=\frac{\frac{1}{3}-\frac{1}{4}}{1-\frac{1}{3}}=\frac{\frac{1}{12}}{\frac{2}{3}}=\frac{1}{12}\cdot\frac{3}{2}=\frac{1}{8}.</math>  
  
=II.=
  
Let <math class="inline">\mathbf{X}</math>  and <math class="inline">\mathbf{Y}</math>  be two jointly distributed random variables with joint pdf
  
<math class="inline">f\left(x,y\right)=\left\{ \begin{array}{ll}
e^{-\left(x+y\right)} & \text{for }x\geq0\text{ and }y\geq0,\\
0 & \text{elsewhere.}
\end{array}\right.</math>
  
Find <math class="inline">P\left(\left\{ \mathbf{X}>\mathbf{Y}\right\} |\left\{ \mathbf{X}>1/2\right\} \right)</math> .
  
A. <math class="inline">1/\sqrt{e}</math>  B. <math class="inline">\left(2\sqrt{e}-1\right)/2\sqrt{e}</math>  C. <math class="inline">\left(1-2\sqrt{e}\right)/2\sqrt{e}</math>  D. <math class="inline">2\sqrt{e}/\left(2\sqrt{e}-1\right)</math>  E. <math class="inline">2\sqrt{e}/\left(1-2\sqrt{e}\right)</math>  
  
=Solution=
  
 
By the definition of conditional probability,
  
<math class="inline">P\left(\left\{ \mathbf{X}>\mathbf{Y}\right\} |\left\{ \mathbf{X}>\frac{1}{2}\right\} \right)=\frac{P\left(\left\{ \mathbf{X}>\mathbf{Y}\right\} \cap\left\{ \mathbf{X}>\frac{1}{2}\right\} \right)}{P\left(\left\{ \mathbf{X}>\frac{1}{2}\right\} \right)}.</math>  
  
<math class="inline">P\left(\left\{ \mathbf{X}>\mathbf{Y}\right\} \cap\left\{ \mathbf{X}>\frac{1}{2}\right\} \right)=\int_{1/2}^{\infty}\int_{0}^{x}e^{-\left(x+y\right)}dydx=\int_{1/2}^{\infty}-e^{-\left(x+y\right)}\Bigl|_{0}^{x}dx</math><math class="inline">=\int_{1/2}^{\infty}\left[e^{-x}-e^{-2x}\right]dx=-e^{-x}+\frac{1}{2}e^{-2x}\Bigl|_{1/2}^{\infty}=e^{-\frac{1}{2}}-\frac{1}{2}e^{-1}.</math>  
  
<math class="inline">P\left(\left\{ \mathbf{X}>\frac{1}{2}\right\} \right)=\int_{1/2}^{\infty}\int_{0}^{\infty}e^{-\left(x+y\right)}dydx=\int_{1/2}^{\infty}e^{-x}dx=-e^{-x}\Bigl|_{1/2}^{\infty}=e^{-\frac{1}{2}}.</math>
  
<math class="inline">P\left(\left\{ \mathbf{X}>\mathbf{Y}\right\} |\left\{ \mathbf{X}>\frac{1}{2}\right\} \right)=\frac{e^{-\frac{1}{2}}-\frac{1}{2}e^{-1}}{e^{-\frac{1}{2}}}=\frac{2\sqrt{e}-1}{2\sqrt{e}}.</math>  
  
=III.=
  
Let <math class="inline">\mathbf{X}</math> be a random variable with mean 2, variance 8, and moment generating function <math class="inline">\phi_{\mathbf{X}}\left(s\right)=E\left\{ e^{s\mathbf{X}}\right\}</math>. Find the first three terms in the series expansion of <math class="inline">\phi_{\mathbf{X}}\left(s\right)</math> about zero. (Hint: The moment generating theorem.)
  
A. <math class="inline">2s+2s^{2}</math>  B. <math class="inline">1+2s+6s^{2}</math>  C. <math class="inline">1+2s+2s^{2}</math>  D. <math class="inline">1+2s+4s^{2}</math>  E. <math class="inline">1+2s+12s^{2}</math>  
  
 
=Recall=
 
  
According to the series expansion, <math class="inline">e^{\lambda}=\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}</math> .
  
=Solution=
  
<math class="inline">\phi_{\mathbf{X}}\left(s\right)=E\left[e^{s\mathbf{X}}\right]=E\left[1+s\mathbf{X}+\frac{s^{2}\mathbf{X}^{2}}{2}+\cdots\right]=1+E\left[\mathbf{X}\right]s+\frac{E\left[\mathbf{X}^{2}\right]s^{2}}{2}+\cdots=1+2s+6s^{2}.</math>  
  
Here <math class="inline">E\left[\mathbf{X}^{2}\right]=Var\left[\mathbf{X}\right]+\left(E\left[\mathbf{X}\right]\right)^{2}=8+4=12</math> was used, so the coefficient of <math class="inline">s^{2}</math> is <math class="inline">12/2=6</math>.
  
=IV.=
  
Find the characteristic function <math class="inline">\Phi\left(\omega\right)</math>  of an exponentially distributed random variable with mean <math class="inline">\mu</math> .
  
A. <math class="inline">\exp\left\{ \mu\left(e^{i\omega}-1\right)\right\}</math>  B. <math class="inline">\exp\left\{ \mu e^{i\omega}\right\} -1</math>  C. <math class="inline">\left(1-i\omega\mu\right)^{-1}</math>  D. <math class="inline">\left(1+i\omega\mu\right)^{-2}</math>  E. <math class="inline">e^{i\omega\mu}e^{-\frac{1}{2}\omega^{2}\mu^{2}}</math>  
  
=Solution=
  
<math class="inline">\Phi\left(\omega\right)=E\left[e^{i\omega\mathbf{X}}\right]=\int_{0}^{\infty}\frac{1}{\mu}e^{-\frac{x}{\mu}}e^{i\omega x}dx=\frac{1}{\mu}\int_{0}^{\infty}e^{-x\left(1/\mu-i\omega\right)}dx=\frac{1}{\mu}\cdot\frac{e^{-x\left(1/\mu-i\omega\right)}}{-\left(1/\mu-i\omega\right)}\biggl|_{0}^{\infty}</math><math class="inline">=\frac{1}{\mu}\cdot\frac{1}{\left(1/\mu-i\omega\right)}=\frac{1}{1-i\omega\mu}=\left(1-i\omega\mu\right)^{-1}.</math>
  
The integral converges because the real part of <math class="inline">1/\mu-i\omega</math>, namely <math class="inline">1/\mu</math>, is positive.
  
'''3.''' (15 pts.)
  
Let <math class="inline">\left\{ t_{k}\right\}</math> be a set of Poisson points with parameter <math class="inline">\lambda</math> on the positive real line such that if <math class="inline">\mathbf{N}\left(t_{1},t_{2}\right)</math> is defined as the number of points in the interval <math class="inline">\left(t_{1},t_{2}\right]</math>, then
  
<math class="inline">P\left(\left\{ \mathbf{N}\left(t_{1},t_{2}\right)=k\right\} \right)=\frac{\left[\lambda\left(t_{2}-t_{1}\right)\right]^{k}e^{-\lambda\left(t_{2}-t_{1}\right)}}{k!},\quad k=0,1,2,\cdots,\quad t_{2}>t_{1}\geq0.</math>  
  
Let <math class="inline">\mathbf{X}\left(t\right)=\mathbf{N}\left(0,t\right)</math> be the Poisson counting process associated with these points for <math class="inline">t>0\;\left(n.b.,\;\mathbf{X}\left(0\right)=0\right)</math>.
  
(a) Find the mean of <math class="inline">\mathbf{X}\left(t\right)</math> .
  
<math class="inline">\Phi_{\mathbf{X}\left(t\right)}\left(\omega\right)=E\left[e^{i\omega\mathbf{X}\left(t\right)}\right]=\sum_{k=0}^{\infty}e^{i\omega k}\frac{\left(\lambda t\right)^{k}e^{-\lambda t}}{k!}=e^{-\lambda t}\sum_{k=0}^{\infty}\frac{\left[e^{i\omega}\lambda t\right]^{k}}{k!}=e^{\lambda t\left[e^{i\omega}-1\right]}.</math>  
  
Replacing <math class="inline">i\omega</math> by <math class="inline">s</math> gives the moment generating function <math class="inline">\phi_{\mathbf{X}}\left(s\right)=e^{\lambda t\left[e^{s}-1\right]}.</math>
  
<math class="inline">E\left[\mathbf{X}\left(t\right)\right]=\frac{d\phi_{\mathbf{X}}\left(s\right)}{ds}\biggl|_{s=0}=e^{\lambda t\left[e^{s}-1\right]}\lambda te^{s}\biggl|_{s=0}=\lambda t.</math>  
  
Alternative solution: <math class="inline">\mathbf{X}\left(t\right)</math> is a Poisson process with parameter <math class="inline">\lambda t</math>, so <math class="inline">E\left[\mathbf{X}\left(t\right)\right]=Var\left[\mathbf{X}\left(t\right)\right]=\lambda t</math>.
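The two derivatives used here and in part (b) below can be checked with SymPy (a sketch):

<pre>
import sympy as sp

s, lam, t = sp.symbols('s lambda t', positive=True)
mgf = sp.exp(lam * t * (sp.exp(s) - 1))  # mgf of the Poisson count X(t)

print(sp.diff(mgf, s).subs(s, 0))                # lambda*t, the mean
print(sp.expand(sp.diff(mgf, s, 2).subs(s, 0)))  # lambda**2*t**2 + lambda*t
</pre>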
  
(b) Find the variance of <math class="inline">\mathbf{X}\left(t\right)</math> .
  
<math class="inline">E\left[\mathbf{X}\left(t\right)^{2}\right]=\frac{d^{2}\phi_{\mathbf{X}\left(t\right)}\left(s\right)}{ds^{2}}\left|_{s=0}\right.=\frac{d}{ds}\left[e^{\lambda t\left[e^{s}-1\right]}\lambda te^{s}\right]\left|_{s=0}\right.=\lambda te^{s}e^{\lambda t\left[e^{s}-1\right]}\lambda te^{s}+\lambda te^{s}e^{\lambda t\left[e^{s}-1\right]}\left|_{s=0}\right.=\left(\lambda t\right)^{2}+\lambda t.</math>  
  
<math class="inline">Var\left[\mathbf{X}\left(t\right)\right]=E\left[\mathbf{X}\left(t\right)^{2}\right]-\left(E\left[\mathbf{X}\left(t\right)\right]\right)^{2}=\left(\lambda t\right)^{2}+\lambda t-\left(\lambda t\right)^{2}=\lambda t.</math>  
  
Alternative solution: <math class="inline">\mathbf{X}\left(t\right)</math> is a Poisson process with parameter <math class="inline">\lambda t</math>, so <math class="inline">E\left[\mathbf{X}\left(t\right)\right]=Var\left[\mathbf{X}\left(t\right)\right]=\lambda t</math>.
  
(c) Derive an expression for the autocorrelation function of <math class="inline">\mathbf{X}\left(t\right)</math> .
  
'''Assume''' <math class="inline">t_{2}>t_{1}</math>.
  
<math class="inline">R_{\mathbf{XX}}\left(t_{1},t_{2}\right)=E\left[\mathbf{X}\left(t_{1}\right)\mathbf{X}\left(t_{2}\right)\right]=E\left[\mathbf{X}\left(t_{1}\right)\left[\mathbf{X}\left(t_{2}\right)-\mathbf{X}\left(t_{1}\right)+\mathbf{X}\left(t_{1}\right)\right]\right]</math><math class="inline">=E\left[\mathbf{X}\left(t_{1}\right)\left[\mathbf{X}\left(t_{2}\right)-\mathbf{X}\left(t_{1}\right)\right]\right]+E\left[\mathbf{X}^{2}\left(t_{1}\right)\right]</math><math class="inline">=E\left[\mathbf{X}\left(t_{1}\right)\right]E\left[\mathbf{X}\left(t_{2}\right)-\mathbf{X}\left(t_{1}\right)\right]+E\left[\mathbf{X}^{2}\left(t_{1}\right)\right]</math><math class="inline">=\left(\lambda t_{1}\right)\lambda\left(t_{2}-t_{1}\right)+\lambda^{2}t_{1}^{2}+\lambda t_{1}=\lambda^{2}t_{1}t_{2}-\lambda^{2}t_{1}^{2}+\lambda^{2}t_{1}^{2}+\lambda t_{1}=\lambda^{2}t_{1}t_{2}+\lambda t_{1}.</math>  
  
Similarly, for <math class="inline">t_{2}<t_{1}</math> , <math class="inline">R_{\mathbf{XX}}\left(t_{1},t_{2}\right)=\lambda^{2}t_{1}t_{2}+\lambda t_{2}</math> .
  
<math class="inline">\therefore R_{\mathbf{XX}}\left(t_{1},t_{2}\right)=\lambda^{2}t_{1}t_{2}+\lambda\min\left(t_{1},t_{2}\right).</math>  
  
<math class="inline">\because</math>  Recall: <math class="inline">Var\left[\mathbf{X}\left(t_{1}\right)\right]=E\left[\mathbf{X}^{2}\left(t_{1}\right)\right]-\left(E\left[\mathbf{X}\left(t_{1}\right)\right]\right)^{2}\Longrightarrow E\left[\mathbf{X}^{2}\left(t_{1}\right)\right]=Var\left[\mathbf{X}\left(t_{1}\right)\right]+\left(E\left[\mathbf{X}\left(t_{1}\right)\right]\right)^{2}</math> .
  
(d) Is <math class="inline">\mathbf{X}\left(t\right)</math>  wide-sense stationary? Explain your answer.
  
No, <math class="inline">\mathbf{X}\left(t\right)</math>  is not WSS, because <math class="inline">E\left[\mathbf{X}\left(t\right)\right]=\lambda t</math>  is not constant.
  
'''4.''' (15 pts.)
  
Let <math class="inline">\mathbf{X}</math> be a continuous random variable with pdf <math class="inline">f_{\mathbf{X}}\left(x\right)</math>, mean <math class="inline">\mu</math>, and variance <math class="inline">\sigma^{2}</math>. Prove the Chebyshev Inequality: <math class="inline">P\left(\left|\mathbf{X}-\mu\right|\geq\epsilon\right)\leq\frac{\sigma^{2}}{\epsilon^{2}}</math>, where <math class="inline">\epsilon</math> is any positive constant.
  
=Solution=
  
 
You can find the proof of the Chebyshev inequality in [CS1ChebyshevInequality].
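The bound is easy to see in action numerically; a NumPy sketch, with an exponential distribution chosen only as a convenient test case:

<pre>
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)  # mean 1, variance 1
mu, sigma2 = 1.0, 1.0

for eps in (1.0, 2.0, 3.0):
    tail = np.mean(np.abs(x - mu) >= eps)       # empirical P(|X - mu| >= eps)
    print(eps, tail, sigma2 / eps**2)           # tail never exceeds the bound
</pre>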
  
'''5.''' (15 pts.)
  
Let <math class="inline">\mathbf{X}\left(t\right)</math>  be a zero-mean wide-sense stationary Gaussian white noise process with autocorrelation function <math class="inline">R_{\mathbf{XX}}\left(\tau\right)=S_{0}\delta\left(\tau\right)</math> . Suppose that <math class="inline">\mathbf{X}\left(t\right)</math>  is the input to a linear time invariant system with impulse response <math class="inline">h\left(t\right)=e^{-\alpha t}\cdot1_{\left[0,\infty\right)}\left(t\right)</math>, where <math class="inline">\alpha</math>  is a positive constant. Let <math class="inline">\mathbf{Y}\left(t\right)</math>  be the output of the system and assume that the input has been applied to the system for all time.
  
(a) What is the mean of <math class="inline">\mathbf{Y}\left(t\right)</math> ?
  
<math class="inline">E\left[\mathbf{Y}\left(t\right)\right]=E\left[\int_{-\infty}^{\infty}h\left(\tau\right)\mathbf{X}\left(t-\tau\right)d\tau\right]=\int_{-\infty}^{\infty}h\left(\tau\right)E\left[\mathbf{X}\left(t-\tau\right)\right]d\tau=\int_{-\infty}^{\infty}h\left(\tau\right)\cdot0d\tau=0.</math>  
  
(b) What is the power spectral density of <math class="inline">\mathbf{Y}\left(t\right)</math> ?
  
<math class="inline">S_{\mathbf{XX}}\left(\omega\right)=\int_{-\infty}^{\infty}S_{0}\delta\left(\tau\right)e^{-i\omega\tau}d\tau=S_{0}.</math>  
  
<math class="inline">H\left(\omega\right)=\int_{-\infty}^{\infty}h\left(t\right)e^{-i\omega t}dt=\int_{0}^{\infty}e^{-\alpha t}e^{-i\omega t}dt=\int_{0}^{\infty}e^{-\left(\alpha+i\omega\right)t}dt=\frac{e^{-\left(\alpha+i\omega\right)t}}{-\left(\alpha+i\omega\right)}\biggl|_{0}^{\infty}=\frac{1}{\alpha+i\omega}.</math>  
  
<math class="inline">S_{\mathbf{YY}}\left(\omega\right)=S_{\mathbf{XX}}\left(\omega\right)\left|H\left(\omega\right)\right|^{2}=S_{\mathbf{XX}}\left(\omega\right)H\left(\omega\right)H^{*}\left(\omega\right)=S_{0}\cdot\frac{1}{\alpha+i\omega}\cdot\frac{1}{\alpha-i\omega}=\frac{S_{0}}{\alpha^{2}+\omega^{2}}.</math>  
  
(c) What is the autocorrelation function of <math class="inline">\mathbf{Y}\left(t\right)</math> ?
  
<math class="inline">S_{\mathbf{YY}}\left(\omega\right)=\frac{S_{0}}{\alpha^{2}+\omega^{2}}=\left(\frac{S_{0}}{2\alpha}\right)\frac{2\alpha}{\alpha^{2}+\omega^{2}}\leftrightarrow\left(\frac{S_{0}}{2\alpha}\right)e^{-\alpha\left|\tau\right|}=R_{\mathbf{YY}}\left(\tau\right).</math>  
  
<math class="inline">\because e^{-\alpha\left|\tau\right|}\leftrightarrow\frac{2\alpha}{\alpha^{2}+\omega^{2}}\text{ (on the table given)}.</math>  
  
 
If there is no table, then  
 
  
<math class="inline">R_{\mathbf{YY}}\left(\tau\right)=\frac{1}{2\pi}\int_{-\infty}^{\infty}S_{\mathbf{YY}}\left(\omega\right)e^{i\omega\tau}d\omega=\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{S_{0}}{\alpha^{2}+\omega^{2}}\cdot e^{i\omega\tau}d\omega.</math>  
  
(d) Write an expression for the second-order density <math class="inline">f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right)</math>  of <math class="inline">\mathbf{Y}\left(t\right)</math> .
  
<math class="inline">\mathbf{Y}\left(t\right)</math> is a WSS Gaussian random process with <math class="inline">E\left[\mathbf{Y}\left(t\right)\right]=0</math> and <math class="inline">\sigma_{\mathbf{Y}\left(t\right)}^{2}=R_{\mathbf{YY}}\left(0\right)=\frac{S_{0}}{2\alpha}</math>.
  
<math class="inline">r_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}=r\left(t_{1}-t_{2}\right)=\frac{C_{\mathbf{YY}}\left(t_{1}-t_{2}\right)}{\sqrt{\sigma_{\mathbf{Y}\left(t_{1}\right)}^{2}\sigma_{\mathbf{Y}\left(t_{2}\right)}^{2}}}=\frac{R_{\mathbf{YY}}\left(t_{1}-t_{2}\right)}{R_{\mathbf{YY}}\left(0\right)}=e^{-\alpha\left|t_{1}-t_{2}\right|}.</math>  
  
<math class="inline">f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right)=\frac{1}{2\pi\sigma_{\mathbf{Y}\left(t_{1}\right)}\sigma_{\mathbf{Y}\left(t_{2}\right)}\sqrt{1-r^{2}}}\exp\left\{ \frac{-1}{2\left(1-r^{2}\right)}\left[\frac{y_{1}^{2}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}^{2}}-\frac{2ry_{1}y_{2}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}\sigma_{\mathbf{Y}\left(t_{2}\right)}}+\frac{y_{2}^{2}}{\sigma_{\mathbf{Y}\left(t_{2}\right)}^{2}}\right]\right\} </math><math class="inline">=\frac{1}{2\pi\frac{S_{0}}{2\alpha}\sqrt{1-e^{-2\alpha\left|t_{1}-t_{2}\right|}}}\exp\left\{ \frac{-1}{2\left(1-e^{-2\alpha\left|t_{1}-t_{2}\right|}\right)}\left[\frac{y_{1}^{2}}{S_{0}/2\alpha}-\frac{2y_{1}y_{2}e^{-\alpha\left|t_{1}-t_{2}\right|}}{S_{0}/2\alpha}+\frac{y_{2}^{2}}{S_{0}/2\alpha}\right]\right\} </math><math class="inline">=\frac{\alpha}{\pi S_{0}\sqrt{1-e^{-2\alpha\left|t_{1}-t_{2}\right|}}}\exp\left\{ \frac{-\alpha}{S_{0}\left(1-e^{-2\alpha\left|t_{1}-t_{2}\right|}\right)}\left[y_{1}^{2}-2y_{1}y_{2}e^{-\alpha\left|t_{1}-t_{2}\right|}+y_{2}^{2}\right]\right\}</math> .  
  
'''6.''' (20 pts.)
  
 
(a)
 
  
Let A, B, and C be three events defined on a random experiment. If <math class="inline">P\left(A\cap B\cap C\right)=P\left(A\right)P\left(B\right)P\left(C\right)</math>, then A, B, and C are statistically independent.
  
=Recall=
  
Two events A  and B  are independent iff <math class="inline">P\left(A\cap B\right)=P\left(A\right)P\left(B\right)</math> .
  
=Solution=
  
''False.'' One must also have <math class="inline">P\left(A\cap B\right)=P\left(A\right)P\left(B\right)</math>, <math class="inline">P\left(B\cap C\right)=P\left(B\right)P\left(C\right)</math>, and <math class="inline">P\left(C\cap A\right)=P\left(C\right)P\left(A\right)</math>.
  
 
(b)
 
  
If the autocorrelation function <math class="inline">R_{\mathbf{X}}\left(t_{1},t_{2}\right)</math>  of random process <math class="inline">\mathbf{X}\left(t\right)</math>  can be written as a function of the time difference <math class="inline">t_{2}-t_{1}</math> , then <math class="inline">\mathbf{X}\left(t\right)</math>  is wide-sense stationary.
  
=Solution=
  
''False.'' <math class="inline">E\left[\mathbf{X}\left(t\right)\right]</math>  must also be constant.
  
 
(c)
 
 
All stationary random processes are wide-sense stationary.
 
  
=Solution=
  
''True.'' A strict-sense stationary process has time-invariant finite-dimensional distributions, so (provided its second moments exist) its mean is constant and its autocorrelation depends only on the time difference.
  
 
(d)
 
  
The autocorrelation function <math class="inline">R_{\mathbf{XX}}\left(\tau\right)</math>  of a real wide-sense stationary random process <math class="inline">\mathbf{X}\left(t\right)</math>  is nonnegative for all <math class="inline">\tau</math> .
  
=Solution=
  
''False.'' <math class="inline">R_{\mathbf{XX}}\left(\tau\right)</math> is non-negative definite, but that does not mean it is nonnegative for every <math class="inline">\tau</math>. For example, a random-phase sinusoid has <math class="inline">R_{\mathbf{XX}}\left(\tau\right)=\frac{1}{2}\cos\left(\omega_{0}\tau\right)</math>, which takes negative values.
  
 
(e)
 
  
Let <math class="inline">\mathbf{X}\left(t\right)</math>  and <math class="inline">\mathbf{Y}\left(t\right)</math>  be two zero-mean statistically independent, jointly wide-sense stationary random processes. Then the cross-correlation function <math class="inline">R_{\mathbf{XY}}\left(\tau\right)=0</math>  for all <math class="inline">\tau</math> .
  
=Solution=
  
''True.''
  
<math class="inline">R_{\mathbf{XY}}\left(t_{1},t_{2}\right)=E\left[\mathbf{X}\left(t_{1}\right)\mathbf{Y}^{*}\left(t_{2}\right)\right]=E\left[\mathbf{X}\left(t_{1}\right)\right]E\left[\mathbf{Y}^{*}\left(t_{2}\right)\right]=0\cdot0=0.</math>  
  
 
(f)
 
  
The cross-correlation function <math class="inline">R_{\mathbf{XY}}\left(\tau\right)</math> of two real, jointly wide-sense stationary random processes <math class="inline">\mathbf{X}\left(t\right)</math> and <math class="inline">\mathbf{Y}\left(t\right)</math> has its peak value at <math class="inline">\tau=0</math>.
  
=Solution=
  
''False.'' Consider <math class="inline">\mathbf{Y}\left(t\right)=\mathbf{X}\left(t-\delta\right)</math> where <math class="inline">\delta\neq0</math>; then <math class="inline">R_{\mathbf{XY}}\left(\tau\right)</math> peaks at lag <math class="inline">\left|\tau\right|=\left|\delta\right|</math>, not at <math class="inline">\tau=0</math>.
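A small simulation makes the counterexample concrete (a NumPy sketch; the smoothing filter and the value of <math class="inline">\delta</math> are arbitrary illustrative choices):

<pre>
import numpy as np

rng = np.random.default_rng(0)
n, delta = 100_000, 5                       # delay measured in samples
# Smooth white noise so that the correlation function has some width.
x = np.convolve(rng.standard_normal(n), np.ones(10) / 10, mode='same')
y = np.roll(x, delta)                       # Y[k] = X[k - delta]

for tau in (0, delta):
    r = np.mean(x[:n - delta] * y[tau:tau + n - delta])  # E[X(t)Y(t+tau)]
    print(tau, r)                           # larger at tau = delta than at 0
</pre>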
  
 
(g)
 
  
The power spectral density of a real, wide-sense stationary random process <math class="inline">\mathbf{X}\left(t\right)</math>  is a non-negative even function of <math class="inline">\omega</math> .
  
=Solution=
  
''True.'' For a real WSS process, <math class="inline">R_{\mathbf{XX}}\left(\tau\right)</math> is real and even, so <math class="inline">S_{\mathbf{XX}}\left(\omega\right)</math> is real and even, and it is nonnegative because it is a power density.
  
 
(h)
 
  
If <math class="inline">\mathbf{X}</math>  and <math class="inline">\mathbf{Y}</math>  are two statistically independent random variables, then <math class="inline">f_{\mathbf{X}}\left(x|y\right)=f_{\mathbf{X}}\left(x\right)</math> .
  
=Solution=
  
''True.'' <math class="inline">P\left(\mathbf{\left\{ X=x\right\} }|\left\{ \mathbf{Y}=y\right\} \right)=\frac{P\left(\left\{ \mathbf{X}=x\right\} \cap\left\{ \mathbf{Y}=y\right\} \right)}{P\left(\left\{ \mathbf{Y}=y\right\} \right)}=\frac{P\left(\left\{ \mathbf{X}=x\right\} \right)\cdot P\left(\left\{ \mathbf{Y}=y\right\} \right)}{P\left(\left\{ \mathbf{Y}=y\right\} \right)}=P\left(\left\{ \mathbf{X}=x\right\} \right).</math>  
  
<math class="inline">f_{\mathbf{X}}\left(x|y\right)=\frac{f_{\mathbf{XY}}\left(x,y\right)}{f_{\mathbf{Y}}\left(y\right)}=\frac{f_{\mathbf{X}}\left(x\right)\cdot f_{\mathbf{Y}}\left(y\right)}{f_{\mathbf{Y}}\left(y\right)}=f_{\mathbf{X}}\left(x\right).</math>  
  
 
(i)
 
  
If <math class="inline">\mathbf{X}</math>  and <math class="inline">\mathbf{Y}</math>  are two random variables, and <math class="inline">f_{X}\left(x|y\right)=f_{X}\left(x\right)</math> , then <math class="inline">\mathbf{X}</math>  and <math class="inline">\mathbf{Y}</math>  are statistically independent.
  
=Solution=
  
''True.'' Then <math class="inline">f_{\mathbf{XY}}\left(x,y\right)=f_{\mathbf{X}}\left(x|y\right)f_{\mathbf{Y}}\left(y\right)=f_{\mathbf{X}}\left(x\right)f_{\mathbf{Y}}\left(y\right)</math>, which is the definition of statistical independence.
  
 
(j)
 
  
If <math class="inline">\left\{ \mathbf{X}_{n}\right\}</math> is a sequence of random variables that converges to a random variable <math class="inline">\mathbf{X}</math> as <math class="inline">n\rightarrow\infty</math>, then <math class="inline">\left\{ \mathbf{X}_{n}\right\}</math> converges to <math class="inline">\mathbf{X}</math> in the mean-square sense.
  
=Solution=
  
''False.'' Convergence as stated (pointwise, for each outcome) is convergence almost everywhere, and neither mode implies the other: <math class="inline">\left(a.e.\right)\nRightarrow\left(m.s.\right)</math> and <math class="inline">\left(m.s.\right)\nRightarrow\left(a.e.\right)</math>.
  
 