==7.11 [[ECE_PhD_Qualifying_Exams|QE]] 2006 January==

1 (33 points)
Let <math class="inline">\mathbf{X}</math> and <math class="inline">\mathbf{Y}</math> be two jointly distributed random variables having joint pdf

<math class="inline">f_{\mathbf{XY}}\left(x,y\right)=\left\{ \begin{array}{lll}
1, &  & \text{ for }0\leq x\leq1\text{ and }0\leq y\leq1\\
0, &  & \text{ elsewhere. }
\end{array}\right.</math>
 
(a)

Are <math class="inline">\mathbf{X}</math> and <math class="inline">\mathbf{Y}</math> statistically independent? Justify your answer.

<math class="inline">f_{\mathbf{X}}\left(x\right)=\int_{-\infty}^{\infty}f_{\mathbf{XY}}\left(x,y\right)dy=\int_{0}^{1}dy=1\text{ for }0\leq x\leq1.</math>

<math class="inline">f_{\mathbf{Y}}\left(y\right)=\int_{-\infty}^{\infty}f_{\mathbf{XY}}\left(x,y\right)dx=\int_{0}^{1}dx=1\text{ for }0\leq y\leq1.</math>

Since <math class="inline">f_{\mathbf{XY}}\left(x,y\right)=f_{\mathbf{X}}\left(x\right)f_{\mathbf{Y}}\left(y\right)</math> for all <math class="inline">x</math> and <math class="inline">y</math>, <math class="inline">\mathbf{X}</math> and <math class="inline">\mathbf{Y}</math> are statistically independent.
  
 
(b)

Let <math class="inline">\mathbf{Z}</math> be a new random variable defined as <math class="inline">\mathbf{Z}=\mathbf{X}+\mathbf{Y}</math>. Find the cdf of <math class="inline">\mathbf{Z}</math>.

<math class="inline">F_{\mathbf{Z}}\left(z\right)=P\left(\left\{ \mathbf{Z}\leq z\right\} \right)=P\left(\left\{ \mathbf{X}+\mathbf{Y}\leq z\right\} \right).</math>
  
• i) if <math class="inline">z<0</math>, then <math class="inline">F_{\mathbf{Z}}\left(z\right)=0</math>.

• ii) if <math class="inline">z\geq2</math>, then <math class="inline">F_{\mathbf{Z}}\left(z\right)=1</math>.

• iii) if <math class="inline">0\leq z\leq1</math>, then the region <math class="inline">\left\{ x+y\leq z\right\}</math> inside the unit square is the triangle with vertices <math class="inline">\left(0,0\right),\left(z,0\right),\left(0,z\right)</math>, so <math class="inline">F_{\mathbf{Z}}\left(z\right)=\iint f_{\mathbf{XY}}\left(x,y\right)dxdy=\iint1\cdot dxdy=\frac{1}{2}z^{2}</math>.

• iv) if <math class="inline">1<z<2</math>, then the complementary region <math class="inline">\left\{ x+y>z\right\}</math> is a triangle of area <math class="inline">\frac{1}{2}\left(2-z\right)^{2}</math>, so <math class="inline">F_{\mathbf{Z}}\left(z\right)=\iint f_{\mathbf{XY}}\left(x,y\right)dxdy=\iint1\cdot dxdy=1-\frac{1}{2}\left(2-z\right)^{2}</math>.
  
<math class="inline">\therefore F_{\mathbf{Z}}\left(z\right)=\left\{ \begin{array}{lll}
0 &  & ,z<0\\
\frac{1}{2}z^{2} &  & ,0\leq z\leq1\\
1-\frac{1}{2}\left(2-z\right)^{2} &  & ,1<z<2\\
1 &  & ,z\geq2
\end{array}\right.</math>
  
[[Image:003.png]]
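As a numerical sanity check (not part of the original exam solution), the Python sketch below estimates the cdf of <math class="inline">\mathbf{Z}=\mathbf{X}+\mathbf{Y}</math> by Monte Carlo and compares it with the piecewise formula above. The seed, sample size, and test points are arbitrary choices.

<pre>
import numpy as np

rng = np.random.default_rng(0)          # arbitrary seed
n = 10**6                               # arbitrary sample size
z_samples = rng.uniform(0, 1, n) + rng.uniform(0, 1, n)   # Z = X + Y

def F_Z(z):
    """Piecewise cdf of Z derived above."""
    if z < 0:
        return 0.0
    if z <= 1:
        return 0.5 * z**2
    if z < 2:
        return 1.0 - 0.5 * (2.0 - z)**2
    return 1.0

for z in [0.25, 0.5, 1.0, 1.5, 1.75]:
    print(f"z={z}: empirical {np.mean(z_samples <= z):.4f}, analytic {F_Z(z):.4f}")
</pre>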
  
 
(c)

Find the variance of <math class="inline">\mathbf{Z}</math>.

<math class="inline">f_{\mathbf{Z}}\left(z\right)=\left\{ \begin{array}{lll}
z &  & ,0\leq z\leq1\\
2-z &  & ,1<z<2\\
0 &  & ,\text{otherwise.}
\end{array}\right.</math>
  
<math class="inline">E\left[\mathbf{Z}\right]=\int_{-\infty}^{\infty}z\cdot f_{\mathbf{Z}}\left(z\right)dz=\int_{0}^{1}z^{2}dz+\int_{1}^{2}\left(2z-z^{2}\right)dz=\frac{1}{3}z^{3}\Bigl|_{0}^{1}+z^{2}-\frac{1}{3}z^{3}\Bigl|_{1}^{2}=\frac{1}{3}+3-\frac{7}{3}=1.</math>

<math class="inline">E\left[\mathbf{Z}^{2}\right]=\int_{-\infty}^{\infty}z^{2}\cdot f_{\mathbf{Z}}\left(z\right)dz=\int_{0}^{1}z^{3}dz+\int_{1}^{2}\left(2z^{2}-z^{3}\right)dz=\frac{1}{4}z^{4}\Bigl|_{0}^{1}+\frac{2}{3}z^{3}-\frac{1}{4}z^{4}\Bigl|_{1}^{2}=\frac{1}{4}+\frac{14}{3}-\frac{15}{4}=\frac{7}{6}.</math>

<math class="inline">Var\left[\mathbf{Z}\right]=E\left[\mathbf{Z}^{2}\right]-\left(E\left[\mathbf{Z}\right]\right)^{2}=\frac{7}{6}-1=\frac{1}{6}.</math>
  
 
2 (33 points)

Suppose that <math class="inline">\mathbf{X}</math> and <math class="inline">\mathbf{N}</math> are two jointly distributed random variables, with <math class="inline">\mathbf{X}</math> being a continuous random variable that is uniformly distributed on the interval <math class="inline">\left(0,1\right)</math> and <math class="inline">\mathbf{N}</math> being a discrete random variable taking on values <math class="inline">0,1,2,\cdots</math> and having conditional probability mass function <math class="inline">p_{\mathbf{N}}\left(n|\left\{ \mathbf{X}=x\right\} \right)=x^{n}\left(1-x\right),\quad n=0,1,2,\cdots</math>.
  
 
(a)

Find the probability that <math class="inline">\mathbf{N}=n</math>.

<math class="inline">f_{\mathbf{X}}\left(x\right)=\left\{ \begin{array}{lll}
1 &  & ,0\leq x\leq1\\
0 &  & ,\text{otherwise.}
\end{array}\right.</math>

<math class="inline">P\left(\left\{ \mathbf{N}=n\right\} \right)=\int_{-\infty}^{\infty}p_{\mathbf{N}}\left(n|\left\{ \mathbf{X}=x\right\} \right)f_{\mathbf{X}}\left(x\right)dx=\int_{0}^{1}x^{n}\left(1-x\right)dx</math>

<math class="inline">=\frac{1}{n+1}x^{n+1}-\frac{1}{n+2}x^{n+2}\Bigl|_{0}^{1}=\frac{1}{n+1}-\frac{1}{n+2}=\frac{1}{\left(n+1\right)\left(n+2\right)}.</math>
  
 
(b)

Find the conditional density of <math class="inline">\mathbf{X}</math> given <math class="inline">\left\{ \mathbf{N}=n\right\}</math>.

By Bayes' theorem,

<math class="inline">f_{\mathbf{X}}\left(x|\left\{ \mathbf{N}=n\right\} \right)=\frac{p_{\mathbf{N}}\left(n|\left\{ \mathbf{X}=x\right\} \right)f_{\mathbf{X}}\left(x\right)}{p_{\mathbf{N}}\left(n\right)}=\left\{ \begin{array}{lll}
\left(n+1\right)\left(n+2\right)x^{n}\left(1-x\right) &  & ,0\leq x\leq1\\
0 &  & ,\text{otherwise.}
\end{array}\right.</math>
 
(c)

Find the minimum mean-square error estimator of <math class="inline">\mathbf{X}</math> given <math class="inline">\left\{ \mathbf{N}=n\right\}</math>.

The MMSE estimator is the conditional mean:

<math class="inline">E\left[\mathbf{X}|\left\{ \mathbf{N}=n\right\} \right]=\int_{-\infty}^{\infty}x\cdot f_{\mathbf{X}}\left(x|\left\{ \mathbf{N}=n\right\} \right)dx=\int_{0}^{1}\left(n+1\right)\left(n+2\right)x^{n+1}\left(1-x\right)dx</math> <math class="inline">=\left(n+1\right)\left(n+2\right)\left(\frac{1}{n+2}x^{n+2}-\frac{1}{n+3}x^{n+3}\right)\biggl|_{0}^{1}=\left(n+1\right)\left(n+2\right)\left(\frac{1}{n+2}-\frac{1}{n+3}\right)</math> <math class="inline">=\frac{\left(n+1\right)\left(n+2\right)}{\left(n+2\right)\left(n+3\right)}=\frac{n+1}{n+3}.</math>
  
 
3 (34 points)

Assume that the locations of cellular telephone towers can be accurately modeled by a 2-dimensional homogeneous Poisson process for which the following two facts are known to be true:

1. The number of towers in a region of area <math class="inline">A</math> is a Poisson random variable with mean <math class="inline">\lambda A</math>, where <math class="inline">\lambda>0</math>.

2. The numbers of towers in any two disjoint regions are statistically independent.

Assume you are located at a point we will call the origin within this 2-dimensional region, and let <math class="inline">R_{\left(1\right)}<R_{\left(2\right)}<R_{\left(3\right)}<\cdots</math> be the ordered distances between the origin and the towers.
  
 
(a)

Show that <math class="inline">R_{\left(1\right)}^{2},R_{\left(2\right)}^{2},R_{\left(3\right)}^{2},\cdots</math> are the points of a one-dimensional homogeneous Poisson process.

The annulus between the circles of radii <math class="inline">R_{\left(k\right)}</math> and <math class="inline">\sqrt{R_{\left(k\right)}^{2}+r}</math> has area <math class="inline">\pi r</math>, so

<math class="inline">P\left(R_{\left(k+1\right)}^{2}-R_{\left(k\right)}^{2}>r\right)=P\left(\text{there is no tower in an area of }\pi r\right)=\frac{\left(\lambda\pi r\right)^{0}}{0!}e^{-\lambda\pi r}=e^{-\lambda\pi r}.</math>

<math class="inline">P\left(R_{\left(k+1\right)}^{2}-R_{\left(k\right)}^{2}\leq r\right)=P\left(\left\{ \text{there is at least one tower in an area of }\pi r\right\} \right)</math> <math class="inline">=1-e^{-\lambda\pi r}\text{: the CDF of an exponential random variable}.</math>

Ref.: see the expressions for the exponential distribution in [CS1ExponentialDistribution].

<math class="inline">R_{\left(k+1\right)}^{2}-R_{\left(k\right)}^{2}</math> is an exponential random variable with parameter <math class="inline">\lambda\pi</math>, and the increments are independent by fact 2.

<math class="inline">\therefore R_{\left(1\right)}^{2},R_{\left(2\right)}^{2},R_{\left(3\right)}^{2},\cdots</math> are the points of a one-dimensional homogeneous Poisson process.
  
 
(b)

What is the rate of the Poisson process in part (a)? The rate is <math class="inline">\lambda\pi</math>.

cf. The mean of the corresponding exponential inter-arrival random variable is <math class="inline">\frac{1}{\lambda\pi}</math>.
  
 
(c)

Determine the density function of <math class="inline">R_{\left(k\right)}</math>, the distance to the <math class="inline">k</math>-th nearest cell tower.

Let <math class="inline">N\left(0,x\right)</math> denote the number of towers within distance <math class="inline">x</math> of the origin. Then

<math class="inline">F_{k}\left(x\right)\triangleq P\left(R_{\left(k\right)}\leq x\right)=P\left(\text{there are at least }k\text{ towers within distance }x\text{ of the origin}\right)</math> <math class="inline">=P\left(N\left(0,x\right)\geq k\right)=1-P\left(N\left(0,x\right)\leq k-1\right)=1-\sum_{j=0}^{k-1}\frac{\left(\lambda\pi x^{2}\right)^{j}}{j!}e^{-\lambda\pi x^{2}}.</math>

<math class="inline">f_{k}\left(x\right)=\frac{dF_{k}\left(x\right)}{dx}=-\sum_{j=0}^{k-1}\left\{ \frac{j\left(\lambda\pi x^{2}\right)^{j-1}\left(2\lambda\pi x\right)}{j!}e^{-\lambda\pi x^{2}}+\frac{\left(\lambda\pi x^{2}\right)^{j}}{j!}e^{-\lambda\pi x^{2}}\left(-2\lambda\pi x\right)\right\} </math> <math class="inline">=\left(2\lambda\pi x\right)e^{-\lambda\pi x^{2}}\cdot\left\{ -\sum_{j=1}^{k-1}\frac{\left(\lambda\pi x^{2}\right)^{j-1}}{\left(j-1\right)!}+\sum_{j=0}^{k-1}\frac{\left(\lambda\pi x^{2}\right)^{j}}{j!}\right\} </math> <math class="inline">=\left(2\lambda\pi x\right)e^{-\lambda\pi x^{2}}\cdot\left\{ -\sum_{j=0}^{k-2}\frac{\left(\lambda\pi x^{2}\right)^{j}}{j!}+\sum_{j=0}^{k-1}\frac{\left(\lambda\pi x^{2}\right)^{j}}{j!}\right\} </math> <math class="inline">=\left(2\lambda\pi x\right)e^{-\lambda\pi x^{2}}\cdot\frac{\left(\lambda\pi x^{2}\right)^{k-1}}{\left(k-1\right)!},\quad x\geq0.</math>
  
 
----

[[ECE600|Back to ECE600]]

[[ECE 600 QE|Back to my ECE 600 QE page]]

[[ECE_PhD_Qualifying_Exams|Back to the general ECE PHD QE page]] (for problem discussion)
