7.5 [[ECE_PhD_Qualifying_Exams|QE]] 2002 August
  
 
1. (25 Points)
  
Consider a random experiment in which a point is selected at random from the unit square (sample space <math class="inline">S=\left[0,1\right]\times\left[0,1\right]</math>). Assume that all points in <math class="inline">S</math> are equally likely to be selected. Let the random variable <math class="inline">\mathbf{X}\left(\omega\right)</math> be the distance from the outcome <math class="inline">\omega</math> to the origin (the lower left corner of the unit square). Find the cumulative distribution function (cdf) <math class="inline">F_{\mathbf{X}}\left(x\right)=P\left(\left\{ \mathbf{X}\leq x\right\} \right)</math> of the random variable <math class="inline">\mathbf{X}</math>. Make sure to specify your answer for all <math class="inline">x\in\mathbf{R}</math>.
  
 
[[Image:pasted20.png]]
  
<math class="inline">F_{\mathbf{X}}\left(x\right)=P\left(\left\{ \mathbf{X}\leq x\right\} \right)=P\left(\left\{ w:\mathbf{X}\left(w\right)\leq x\right\} \right).</math>  
  
• <math class="inline">i)\; x<0,\; F_{\mathbf{X}}\left(x\right)=0</math>  
  
• <math class="inline">ii)\;0\leq x\leq1,\; F_{\mathbf{X}}\left(x\right)=\frac{\pi}{4}x^{2}</math>  
  
• <math class="inline">iii)\;1<x<\sqrt{2},  F_{\mathbf{X}}\left(x\right)=2\left(\frac{1}{2}\times1\times\sqrt{x^{2}-1}\right)+\pi x^{2}\times\frac{\frac{\pi}{2}-2\theta}{2\pi}</math><math class="inline">=\sqrt{x^{2}-1}+\frac{\pi}{4}x^{2}-\theta x^{2}=\sqrt{x^{2}-1}+\frac{\pi}{4}x^{2}-x^{2}\cos^{-1}\frac{1}{x}</math><math class="inline">=\sqrt{x^{2}-1}+\left(\frac{\pi}{4}-\cos^{-1}\frac{1}{x}\right)x^{2}.</math>  
  
• <math class="inline">iv)\; x\geq\sqrt{2},\; F_{\mathbf{X}}\left(x\right)=1</math>   
  
<math class="inline">\therefore\; F_{\mathbf{X}}\left(x\right)=\begin{cases}
\begin{array}{lll}
0 &  & ,\; x<0\\
\frac{\pi}{4}x^{2} &  & ,\;0\leq x\leq1\\
\sqrt{x^{2}-1}+\left(\frac{\pi}{4}-\cos^{-1}\frac{1}{x}\right)x^{2} &  & ,\;1<x<\sqrt{2}\\
1 &  & ,\; x\geq\sqrt{2}.
\end{array}\end{cases}</math>
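The cdf above can be sanity-checked numerically. The following short Monte Carlo sketch (not part of the original exam solution; sample size and test points are arbitrary choices) draws uniform points in the unit square and compares the empirical cdf of the distance to the origin against the closed form.

```python
import math
import random

def cdf_X(x):
    """Closed-form cdf of the distance from a uniform point in [0,1]^2 to (0,0)."""
    if x < 0:
        return 0.0
    if x <= 1:
        return math.pi / 4 * x * x          # quarter circle of radius x
    if x < math.sqrt(2):
        # quarter circle minus the two circular segments outside the square
        return math.sqrt(x * x - 1) + (math.pi / 4 - math.acos(1 / x)) * x * x
    return 1.0

random.seed(0)
n = 200_000
samples = [math.hypot(random.random(), random.random()) for _ in range(n)]
for x in (0.5, 1.0, 1.2, 1.4):
    empirical = sum(s <= x for s in samples) / n
    assert abs(empirical - cdf_X(x)) < 0.01, (x, empirical, cdf_X(x))
```

Note that the two middle branches agree at <math class="inline">x=1</math>, where both give <math class="inline">\pi/4</math>, so the cdf is continuous.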
 
2. (25 Points)
  
Let <math class="inline">\mathbf{X}</math>  and <math class="inline">\mathbf{Y}</math>  be two jointly distributed Gaussian random variables. The random variable <math class="inline">\mathbf{X}</math>  has mean <math class="inline">\mu_{\mathbf{X}}</math>  and variance <math class="inline">\sigma_{\mathbf{X}}^{2}</math> . The correlation coefficient between <math class="inline">\mathbf{X}</math>  and <math class="inline">\mathbf{Y}</math>  is <math class="inline">r</math> . Define a new random variable <math class="inline">\mathbf{Z}</math>  by <math class="inline">\mathbf{Z}=a\mathbf{X}+b\mathbf{Y}</math>, where <math class="inline">a</math>  and <math class="inline">b</math>  are real numbers.
  
 
(a)
  
Prove that <math class="inline">\mathbf{Z}</math>  is a Gaussian random variable.
  
<math class="inline">\Phi_{\mathbf{Z}}\left(\omega\right)=E\left[e^{i\omega\mathbf{Z}}\right]=E\left[e^{i\omega\left(a\mathbf{X}+b\mathbf{Y}\right)}\right]=\Phi_{\mathbf{XY}}\left(a\omega,b\omega\right).</math>  
  
<math class="inline">\Phi_{\mathbf{XY}}\left(\omega_{1},\omega_{2}\right)=\exp\left[i\left(\mu_{\mathbf{X}}\omega_{1}+\mu_{\mathbf{Y}}\omega_{2}\right)-\frac{1}{2}\left(\sigma_{\mathbf{X}}^{2}\omega_{1}^{2}+2r\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}\omega_{1}\omega_{2}+\sigma_{\mathbf{Y}}^{2}\omega_{2}^{2}\right)\right].</math>  
  
<math class="inline">\Phi_{\mathbf{Z}}\left(\omega\right)=\Phi_{\mathbf{XY}}\left(a\omega,b\omega\right)=\exp\left[i\left(a\mu_{\mathbf{X}}+b\mu_{\mathbf{Y}}\right)\omega-\frac{1}{2}\left(a^{2}\sigma_{\mathbf{X}}^{2}+2rab\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}+b^{2}\sigma_{\mathbf{Y}}^{2}\right)\omega^{2}\right],</math>
  
which is the characteristic function of a Gaussian random variable with mean <math class="inline">a\mu_{\mathbf{X}}+b\mu_{\mathbf{Y}}</math>  and variance <math class="inline">a^{2}\sigma_{\mathbf{X}}^{2}+2rab\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}+b^{2}\sigma_{\mathbf{Y}}^{2}</math> .
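The mean and variance read off from the characteristic function can be verified by simulation. This sketch (not part of the original solution; all parameter values are arbitrary assumptions chosen for the check) draws jointly Gaussian pairs and compares the sample moments of <math class="inline">\mathbf{Z}=a\mathbf{X}+b\mathbf{Y}</math> with the formulas.

```python
import numpy as np

# Hypothetical parameters, chosen only for this numerical check.
mu_x, mu_y = 1.0, -2.0
sig_x, sig_y = 2.0, 0.5
r, a, b = 0.6, 3.0, -1.0

rng = np.random.default_rng(0)
cov = np.array([[sig_x**2, r * sig_x * sig_y],
                [r * sig_x * sig_y, sig_y**2]])
xy = rng.multivariate_normal([mu_x, mu_y], cov, size=500_000)
z = a * xy[:, 0] + b * xy[:, 1]

mean_pred = a * mu_x + b * mu_y                                                # E[Z]
var_pred = a**2 * sig_x**2 + 2 * r * a * b * sig_x * sig_y + b**2 * sig_y**2   # Var[Z]
assert abs(z.mean() - mean_pred) < 0.05
assert abs(z.var() - var_pred) < 0.5
```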
  
 
(b)
  
Find the mean of <math class="inline">\mathbf{Z}</math> . Express your answer in terms of the parameters <math class="inline">\mu_{\mathbf{X}}</math> , <math class="inline">\sigma_{\mathbf{X}}^{2}</math> , <math class="inline">\mu_{\mathbf{Y}}</math> , <math class="inline">\sigma_{\mathbf{Y}}^{2}</math> , <math class="inline">r</math> , <math class="inline">a</math> , and <math class="inline">b</math> .
  
<math class="inline">E\left[\mathbf{Z}\right]=a\mu_{\mathbf{X}}+b\mu_{\mathbf{Y}}.</math>  
  
 
(c)
  
Find the variance of <math class="inline">\mathbf{Z}</math> . Express your answer in terms of the parameters <math class="inline">\mu_{\mathbf{X}}</math> , <math class="inline">\sigma_{\mathbf{X}}^{2}</math> , <math class="inline">\mu_{\mathbf{Y}}</math> , <math class="inline">\sigma_{\mathbf{Y}}^{2}</math> , <math class="inline">r</math> , <math class="inline">a</math> , and <math class="inline">b</math> .
  
<math class="inline">Var\left[\mathbf{Z}\right]=a^{2}\sigma_{\mathbf{X}}^{2}+2rab\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}+b^{2}\sigma_{\mathbf{Y}}^{2}.</math>  
  
 
3. (25 Points)
  
Let <math class="inline">\mathbf{X}\left(t\right)</math> be a wide-sense stationary Gaussian random process with mean <math class="inline">\mu_{\mathbf{X}}</math> and autocorrelation function <math class="inline">R_{\mathbf{XX}}\left(\tau\right)</math>. Let <math class="inline">\mathbf{Y}\left(t\right)=c_{1}\mathbf{X}\left(t\right)-c_{2}\mathbf{X}\left(t-\tau\right),</math> where <math class="inline">c_{1}</math> and <math class="inline">c_{2}</math> are real numbers. What is the probability that <math class="inline">\mathbf{Y}\left(t\right)</math> is less than or equal to a real number <math class="inline">\gamma</math>? Express your answer in terms of the “phi-function” <math class="inline">\Phi\left(x\right)=\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}e^{-z^{2}/2}dz.</math>
  
 
Solution
  
Since <math class="inline">\mathbf{X}\left(t\right)</math>  is a WSS Gaussian random process, <math class="inline">\mathbf{Y}\left(t\right)</math>  is a Gaussian process.
  
<math class="inline">E\left[\mathbf{Y}\left(t\right)\right]=c_{1}E\left[\mathbf{X}\left(t\right)\right]-c_{2}E\left[\mathbf{X}\left(t-\tau\right)\right]=\left(c_{1}-c_{2}\right)\mu_{\mathbf{X}}.</math>  
  
<math class="inline">E\left[\mathbf{Y}^{2}\left(t\right)\right]=E\left[\left(c_{1}\mathbf{X}\left(t\right)-c_{2}\mathbf{X}\left(t-\tau\right)\right)^{2}\right]</math><math class="inline">=c_{1}^{2}E\left[\mathbf{X}^{2}\left(t\right)\right]-2c_{1}c_{2}E\left[\mathbf{X}\left(t\right)\mathbf{X}\left(t-\tau\right)\right]+c_{2}^{2}E\left[\mathbf{X}^{2}\left(t-\tau\right)\right]</math><math class="inline">=\left(c_{1}^{2}+c_{2}^{2}\right)R_{\mathbf{X}}\left(0\right)-2c_{1}c_{2}R_{\mathbf{X}}\left(-\tau\right).</math>  
  
<math class="inline">Var\left[\mathbf{Y}\left(t\right)\right]=E\left[\mathbf{Y}^{2}\left(t\right)\right]-E\left[\mathbf{Y}\left(t\right)\right]^{2}</math><math class="inline">=\left(c_{1}^{2}+c_{2}^{2}\right)R_{\mathbf{X}}\left(0\right)-2c_{1}c_{2}R_{\mathbf{X}}\left(-\tau\right)-\left(\left(c_{1}-c_{2}\right)\mu_{\mathbf{X}}\right)^{2}</math><math class="inline">=\left(c_{1}^{2}+c_{2}^{2}\right)R_{\mathbf{X}}\left(0\right)-2c_{1}c_{2}R_{\mathbf{X}}\left(-\tau\right)-\left(c_{1}^{2}+c_{2}^{2}\right)\mu_{\mathbf{X}}^{2}+2c_{1}c_{2}\mu_{\mathbf{X}}^{2}</math><math class="inline">=\left(c_{1}^{2}+c_{2}^{2}\right)\left(R_{\mathbf{X}}\left(0\right)-\mu_{\mathbf{X}}^{2}\right)+2c_{1}c_{2}\left(\mu_{\mathbf{X}}^{2}-R_{\mathbf{X}}\left(-\tau\right)\right).</math>  
  
<math class="inline">P\left(\left\{ \mathbf{Y}\left(t\right)\leq\gamma\right\} \right)=\Phi\left(\frac{\gamma-\left(c_{1}-c_{2}\right)\mu_{\mathbf{X}}}{\sqrt{\left(c_{1}^{2}+c_{2}^{2}\right)\left(R_{\mathbf{X}}\left(0\right)-\mu_{\mathbf{X}}^{2}\right)+2c_{1}c_{2}\left(\mu_{\mathbf{X}}^{2}-R_{\mathbf{X}}\left(-\tau\right)\right)}}\right).</math>  
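The final formula can be checked by direct simulation. Since only the pair <math class="inline">\left(\mathbf{X}\left(t\right),\mathbf{X}\left(t-\tau\right)\right)</math> enters, it suffices to draw that pair as a bivariate Gaussian with the matching mean and covariance. This sketch is not part of the original solution; the numerical values of <math class="inline">\mu_{\mathbf{X}}</math>, <math class="inline">R_{\mathbf{X}}\left(0\right)</math>, <math class="inline">R_{\mathbf{X}}\left(\tau\right)</math>, <math class="inline">c_{1}</math>, <math class="inline">c_{2}</math>, and <math class="inline">\gamma</math> are arbitrary assumptions.

```python
import math
import numpy as np

# Hypothetical values chosen only for this check.
mu = 1.0
R0, Rtau = 2.0, 1.5            # R_X(0) and R_X(tau); Var[X] = R0 - mu^2
c1, c2, gamma = 2.0, 1.0, 0.5

def phi(x):
    # Standard normal cdf written via the error function.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

mean_Y = (c1 - c2) * mu
var_Y = (c1**2 + c2**2) * (R0 - mu**2) + 2 * c1 * c2 * (mu**2 - Rtau)
pred = phi((gamma - mean_Y) / math.sqrt(var_Y))

# (X(t), X(t - tau)) is bivariate Gaussian; covariance of the pair is R(tau) - mu^2.
rng = np.random.default_rng(1)
cov = np.array([[R0 - mu**2, Rtau - mu**2],
                [Rtau - mu**2, R0 - mu**2]])
x = rng.multivariate_normal([mu, mu], cov, size=500_000)
y = c1 * x[:, 0] - c2 * x[:, 1]
assert abs((y <= gamma).mean() - pred) < 0.01
```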
  
 
4. (25 Points)
 
Assume that the distribution of stars within a galaxy is accurately modeled by a 3-dimensional homogeneous Poisson process for which the following two facts are known to be true:
  
• The number of stars in a region of volume <math class="inline">V</math> is a Poisson random variable with mean <math class="inline">\lambda V</math>, where <math class="inline">\lambda>0</math>.
  
 
• The numbers of stars in any two disjoint regions are statistically independent.
Assume you are located at an arbitrary position near the center of the galaxy.


(a)
 
Find the probability density function (pdf) of the distance to the nearest star.
  
Let <math class="inline">\mathbf{R}</math> be the distance to the nearest star.
  
<math class="inline">F_{\mathbf{R}}\left(r\right)=P\left(\left\{ \mathbf{R}\leq r\right\} \right).</math>  
  
• <math class="inline">i)\; r<0,\; F_{\mathbf{R}}\left(r\right)=0.</math>  
  
• <math class="inline">ii)\; r\geq0,\; F_{\mathbf{R}}\left(r\right)=P\left(\left\{ \text{there exists at least one star in the sphere of radius }r\right\} \right)</math><math class="inline">=1-P\left(\left\{ \text{no star exists in the sphere of radius }r\right\} \right)</math><math class="inline">=1-e^{-\frac{4}{3}\pi r^{3}\lambda}.</math>  
  
<math class="inline">\therefore f_{\mathbf{R}}\left(r\right)=\begin{cases}
\begin{array}{lll}
4\pi r^{2}\lambda e^{-\frac{4}{3}\pi r^{3}\lambda} &  & ,\; r\geq0\\
0 &  & ,\; r<0.
\end{array}\end{cases}</math>
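This pdf can be verified by simulating the Poisson process directly. The sketch below (not part of the original solution; the density <math class="inline">\lambda</math> and the truncation radius are arbitrary assumptions) draws a Poisson number of stars uniformly in a large ball and compares the empirical cdf of the nearest-star distance with <math class="inline">1-e^{-\frac{4}{3}\pi r^{3}\lambda}</math>.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, R_max = 1.0, 2.0          # hypothetical density; truncation radius R_max
# P(nearest star > R_max) = exp(-lam * 4/3 * pi * R_max^3), negligible here.
V = 4 / 3 * np.pi * R_max**3

trials = 100_000
counts = rng.poisson(lam * V, size=trials)
nearest = np.full(trials, np.inf)
for i, n in enumerate(counts):
    if n > 0:
        # For points uniform in a ball, radius = R_max * U^(1/3), U ~ Unif(0,1),
        # so the nearest distance is R_max * (min U)^(1/3).
        nearest[i] = R_max * rng.random(n).min() ** (1 / 3)

for r in (0.3, 0.6, 0.9):
    pred = 1 - np.exp(-4 / 3 * np.pi * r**3 * lam)
    emp = (nearest <= r).mean()
    assert abs(emp - pred) < 0.01, (r, emp, pred)
```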

(b)
 
Find the most likely distance to the nearest star.
  
<math class="inline">\frac{df_{\mathbf{R}}\left(r\right)}{dr}=8\pi r\lambda e^{-\frac{4}{3}\pi r^{3}\lambda}-\left(4\pi r^{2}\lambda\right)^{2}e^{-\frac{4}{3}\pi r^{3}\lambda}=0</math>

<br>
<math class="inline">e^{-\frac{4}{3}\pi r^{3}\lambda}\left(8\pi r\lambda-\left(4\pi r^{2}\lambda\right)^{2}\right)=0</math>

<br>
<math class="inline">8\pi r\lambda-16\pi^{2}r^{4}\lambda^{2}=0</math>

<br>
<math class="inline">1-2\pi r^{3}\lambda=0</math>

<br>

<math class="inline">\therefore r=\left(\frac{1}{2\pi\lambda}\right)^{\frac{1}{3}}.</math>  
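As a quick numerical confirmation of the mode (not part of the original solution; the value of <math class="inline">\lambda</math> is an arbitrary assumption), the grid maximum of the pdf should coincide with the closed-form <math class="inline">\left(\frac{1}{2\pi\lambda}\right)^{1/3}</math>.

```python
import numpy as np

lam = 1.0                                      # hypothetical star density
r = np.linspace(1e-6, 2.0, 2_000_001)          # fine grid over the support
pdf = 4 * np.pi * r**2 * lam * np.exp(-4 / 3 * np.pi * r**3 * lam)
r_mode_grid = r[np.argmax(pdf)]                # numerical maximizer
r_mode = (1 / (2 * np.pi * lam)) ** (1 / 3)    # closed-form mode
assert abs(r_mode_grid - r_mode) < 1e-5
```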
  
 
----

[[ECE600|Back to ECE600]]
  
[[ECE 600 QE|Back to my ECE 600 QE page]]
[[ECE_PhD_Qualifying_Exams|Back to the general ECE PHD QE page]] (for problem discussion)

Latest revision as of 07:32, 27 June 2012
