7.9 QE 2004 August
1. (20 pts.)
A probability space $ \left(\mathcal{S},\mathcal{F},\mathcal{P}\right) $ has a sample space consisting of all pairs of positive integers: $ \mathcal{S}=\left\{ \left(k,m\right):\; k=1,2,\cdots;\; m=1,2,\cdots\right\} $ . The event space $ \mathcal{F} $ is the power set of $ \mathcal{S} $ , and the probability measure $ \mathcal{P} $ is specified by the pmf $ p\left(k,m\right)=p^{2}\left(1-p\right)^{k+m-2},\qquad p\in\left(0,1\right) $.
(a)
Find $ P\left(\left\{ \left(k,m\right):\; k\geq m\right\} \right) $ .
$ P\left(\left\{ \left(k,m\right):\; k\geq m\right\} \right)=\sum_{k=1}^{\infty}\sum_{m=1}^{k}p\left(k,m\right)=\sum_{k=1}^{\infty}\sum_{m=1}^{k}p^{2}\left(1-p\right)^{k+m-2}=\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{k=1}^{\infty}\left(1-p\right)^{k}\sum_{m=1}^{k}\left(1-p\right)^{m} $$ =\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{k=1}^{\infty}\left(1-p\right)^{k}\cdot\frac{\left(1-p\right)\left(1-\left(1-p\right)^{k}\right)}{1-\left(1-p\right)}=\frac{p}{1-p}\cdot\sum_{k=1}^{\infty}\left(1-p\right)^{k}\cdot\left(1-\left(1-p\right)^{k}\right) $$ =\frac{p}{1-p}\cdot\left[\sum_{k=1}^{\infty}\left(1-p\right)^{k}-\sum_{k=1}^{\infty}\left(1-p\right)^{2k}\right]=\frac{p}{1-p}\cdot\left[\frac{1-p}{1-\left(1-p\right)}-\frac{\left(1-p\right)^{2}}{1-\left(1-p\right)^{2}}\right] $$ =\frac{p}{1-p}\cdot\left[\frac{1-p}{p}-\frac{\left(1-p\right)^{2}}{p\left(2-p\right)}\right]=1-\frac{1-p}{2-p}=\frac{2-p-1+p}{2-p}=\frac{1}{2-p}. $
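As a cross-check, the same value follows from symmetry: since $ p\left(k,m\right)=p\left(m,k\right) $, the events $ \left\{ k\geq m\right\} $ and $ \left\{ m\geq k\right\} $ are equally likely and overlap only on $ \left\{ k=m\right\} $, where $ P\left(\left\{ k=m\right\} \right)=\sum_{k=1}^{\infty}p^{2}\left(1-p\right)^{2k-2}=\frac{p^{2}}{1-\left(1-p\right)^{2}}=\frac{p}{2-p} $, so $ 2P\left(\left\{ k\geq m\right\} \right)-\frac{p}{2-p}=1 $, giving $ P\left(\left\{ k\geq m\right\} \right)=\frac{1}{2-p} $ again.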
(b)
Find $ P\left(\left\{ \left(k,m\right):\; k+m=r\right\} \right) $ , for $ r=2,3,\cdots $ .
$ P\left(\left\{ \left(k,m\right):\; k+m=r\right\} \right)=\sum_{k=1}^{r-1}p\left(k,r-k\right)=\sum_{k=1}^{r-1}p^{2}\left(1-p\right)^{r-2}=\left(r-1\right)p^{2}\left(1-p\right)^{r-2},\qquad r=2,3,\cdots. $
As a check, these probabilities sum to one: $ \sum_{r=2}^{\infty}\left(r-1\right)p^{2}\left(1-p\right)^{r-2}=\frac{p^{2}}{1-p}\cdot\sum_{r=1}^{\infty}r\left(1-p\right)^{r}=\frac{p^{2}}{1-p}\cdot\frac{1-p}{\left(1-\left(1-p\right)\right)^{2}}=1. $
Note
We use the series identity $ \sum_{r=1}^{\infty}r\left(1-p\right)^{r}=\frac{1-p}{\left(1-\left(1-p\right)\right)^{2}}=\frac{1-p}{p^{2}} $, obtained by differentiating the geometric series $ \sum_{r=0}^{\infty}x^{r}=\frac{1}{1-x} $ and setting $ x=1-p $.
(c)
Find $ P\left(\left\{ \left(k,m\right):\; k\text{ is an odd number}\right\} \right) $ .
$ P\left(\left\{ \left(k,m\right):\; k\text{ is an odd number}\right\} \right)=1-P\left(\left\{ \left(k,m\right):\; k\text{ is an even number}\right\} \right) $$ =1-\sum_{i=1}^{\infty}\sum_{m=1}^{\infty}p\left(2i,m\right)=1-\sum_{i=1}^{\infty}\sum_{m=1}^{\infty}p^{2}\left(1-p\right)^{2i+m-2} $$ =1-\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{i=1}^{\infty}\left(1-p\right)^{2i}\sum_{m=1}^{\infty}\left(1-p\right)^{m}=1-\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{i=1}^{\infty}\left(1-p\right)^{2i}\cdot\frac{1-p}{1-\left(1-p\right)} $$ =1-\frac{p}{1-p}\cdot\sum_{i=1}^{\infty}\left(1-p\right)^{2i}=1-\frac{p}{1-p}\cdot\frac{\left(1-p\right)^{2}}{1-\left(1-p\right)^{2}}=1-\frac{p}{1-p}\cdot\frac{\left(1-p\right)^{2}}{p\left(2-p\right)} $$ =1-\frac{1-p}{2-p}=\frac{2-p-1+p}{2-p}=\frac{1}{2-p}. $
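Since each part reduces to a closed form, the answers can also be sanity-checked numerically by truncating the sums. A minimal sketch, assuming Python is available; the truncation limit N, the test value p = 0.3, and the choice r = 5 are arbitrary and not part of the exam:
<pre>
# Numerical check of Problem 1 by truncating the sums over (k, m).
p = 0.3          # arbitrary test value in (0, 1)
N = 2000         # truncation limit; the neglected tails are geometrically small

def pmf(k, m):
    return p**2 * (1 - p)**(k + m - 2)

# (a) P(k >= m) should equal 1 / (2 - p)
pa = sum(pmf(k, m) for k in range(1, N + 1) for m in range(1, k + 1))
print(pa, 1 / (2 - p))

# (b) P(k + m = r) should equal (r - 1) p^2 (1 - p)^(r - 2); test r = 5
r = 5
pb = sum(pmf(k, r - k) for k in range(1, r))
print(pb, (r - 1) * p**2 * (1 - p)**(r - 2))

# (c) P(k odd) should also equal 1 / (2 - p)
pc = sum(pmf(k, m) for k in range(1, N + 1, 2) for m in range(1, N + 1))
print(pc, 1 / (2 - p))
</pre>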
2. (20 pts.)
Let $ \mathbf{X} $ and $ \mathbf{Y} $ be two independent identically distributed exponential random variables having mean $ \mu $ . Let $ \mathbf{Z}=\mathbf{X}+\mathbf{Y} $ . Find $ f_{\mathbf{X}}\left(x|\mathbf{Z}=z\right) $ , the conditional pdf of $ \mathbf{X} $ given the event $ \left\{ \mathbf{Z}=z\right\} $ .
Note
This problem is very similar to the example, except that it deals with exponential random variables rather than Poisson random variables.
Solution
By using Bayes' theorem,
$ f_{\mathbf{X}}\left(x|\mathbf{Z}=z\right)=\frac{f_{\mathbf{XZ}}\left(x,z\right)}{f_{\mathbf{Z}}\left(z\right)}=\frac{f_{\mathbf{Z}}\left(z|\mathbf{X}=x\right)f_{\mathbf{X}}\left(x\right)}{f_{\mathbf{Z}}\left(z\right)}=\frac{f_{\mathbf{Y}}\left(z-x\right)f_{\mathbf{X}}\left(x\right)}{f_{\mathbf{Z}}\left(z\right)}, $ so we still need $ f_{\mathbf{Z}}\left(z\right) $, which we obtain below from the characteristic function of $ \mathbf{Z} $.
According to the definition of the exponential distribution, $ f_{\mathbf{X}}\left(x\right)=\frac{1}{\mu}e^{-\frac{x}{\mu}}\text{ and }f_{\mathbf{Y}}\left(y\right)=\frac{1}{\mu}e^{-\frac{y}{\mu}} $ for $ x,y\geq0 $ (and zero otherwise).
$ \Phi_{\mathbf{X}}\left(\omega\right)=\Phi_{\mathbf{Y}}\left(\omega\right)=\frac{1}{1-i\mu\omega}. $
$ \Phi_{\mathbf{Z}}\left(\omega\right)=E\left[e^{i\omega\mathbf{Z}}\right]=E\left[e^{i\omega\left(\mathbf{X}+\mathbf{Y}\right)}\right]=E\left[e^{i\omega\mathbf{X}}\right]E\left[e^{i\omega\mathbf{Y}}\right]=\Phi_{\mathbf{X}}\left(\omega\right)\Phi_{\mathbf{Y}}\left(\omega\right)=\frac{1}{1-i\mu\omega}\cdot\frac{1}{1-i\mu\omega}=\frac{1}{\left(1-i\mu\omega\right)^{2}}. $
This is the characteristic function of an Erlang-2 (Gamma) random variable, so $ f_{\mathbf{Z}}\left(z\right)=\frac{z}{\mu^{2}}e^{-\frac{z}{\mu}},\; z\geq0 $. Therefore, for $ 0\leq x\leq z $,
$ f_{\mathbf{X}}\left(x|\mathbf{Z}=z\right)=\frac{\frac{1}{\mu}e^{-\frac{z-x}{\mu}}\cdot\frac{1}{\mu}e^{-\frac{x}{\mu}}}{\frac{z}{\mu^{2}}e^{-\frac{z}{\mu}}}=\frac{1}{z}, $
i.e., given $ \left\{ \mathbf{Z}=z\right\} $, $ \mathbf{X} $ is uniformly distributed on $ \left[0,z\right] $.
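The convolution and the resulting conditional density can also be checked symbolically. A minimal sketch, assuming SymPy is installed; the symbol names are arbitrary:
<pre>
# Symbolic check of Problem 2: f_Z by convolution, then f_X(x | Z = z).
import sympy as sp

x, z, mu = sp.symbols('x z mu', positive=True)

f_X = sp.exp(-x / mu) / mu         # exponential pdf of X (mean mu)
f_Y = sp.exp(-(z - x) / mu) / mu   # pdf of Y evaluated at z - x

# f_Z(z) = convolution of the two densities over 0 <= x <= z
f_Z = sp.integrate(f_X * f_Y, (x, 0, z))
print(sp.simplify(f_Z))            # z*exp(-z/mu)/mu**2  (Erlang-2)

# Conditional pdf should reduce to 1/z on 0 <= x <= z
print(sp.simplify(f_X * f_Y / f_Z))   # 1/z
</pre>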
3. (25 pts.)
Let $ \mathbf{X}_{1},\cdots,\mathbf{X}_{n} $ be independent identically distributed (i.i.d.) random variables uniformly distributed over the interval $ \left[0,1\right] $ .
(a)
Find the probability density function of $ \mathbf{Y}=\max\left\{ \mathbf{X}_{1},\cdots,\mathbf{X}_{n}\right\} $ .
ref.
This problem is almost identical to the example.
Solution
$ F_{\mathbf{Y}}(y)=P\left(\left\{ \mathbf{Y}\leq y\right\} \right)=P\left(\left\{ \max\left\{ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\right\} \leq y\right\} \right)=P\left(\left\{ \mathbf{X}_{1}\leq y\right\} \cap\left\{ \mathbf{X}_{2}\leq y\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}\leq y\right\} \right) $$ =P\left(\left\{ \mathbf{X}_{1}\leq y\right\} \right)P\left(\left\{ \mathbf{X}_{2}\leq y\right\} \right)\cdots P\left(\left\{ \mathbf{X}_{n}\leq y\right\} \right)=\left(F_{\mathbf{X}}\left(y\right)\right)^{n} $
where $ f_{\mathbf{X}}(x)=\mathbf{1}_{\left[0,1\right]}(x) $ and $ F_{\mathbf{X}}\left(x\right)=\left\{ \begin{array}{ll} 0 & \quad,\; x<0\\ x & \quad,\;0\leq x<1\\ 1 & \quad,\; x\geq1. \end{array}\right. $
$ f_{\mathbf{Y}}\left(y\right)=\frac{dF_{\mathbf{Y}}\left(y\right)}{dy}=n\left[F_{\mathbf{X}}\left(y\right)\right]^{n-1}\cdot f_{\mathbf{X}}\left(y\right)=n\cdot y^{n-1}\cdot\mathbf{1}_{\left[0,1\right]}(y). $
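As a quick check, $ \int_{0}^{1}ny^{n-1}dy=1 $, and $ E\left[\mathbf{Y}\right]=\int_{0}^{1}y\cdot ny^{n-1}dy=\frac{n}{n+1} $, which approaches 1 as $ n $ grows, as expected for the maximum of more and more uniform samples.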
(b)
Find the probability density function of $ \mathbf{Z}=\min\left\{ \mathbf{X}_{1},\cdots,\mathbf{X}_{n}\right\} $ .
Solution
$ F_{\mathbf{Z}}(z)=P\left(\left\{ \mathbf{Z}\leq z\right\} \right)=1-P\left(\left\{ \mathbf{Z}>z\right\} \right)=1-P\left(\left\{ \min\left\{ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\right\} >z\right\} \right) $$ =1-P\left(\left\{ \mathbf{X}_{1}>z\right\} \cap\left\{ \mathbf{X}_{2}>z\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}>z\right\} \right)=1-\left(1-F_{\mathbf{X}}(z)\right)^{n}. $
$ f_{\mathbf{Z}}(z)=\frac{dF_{\mathbf{Z}}(z)}{dz}=n\left(1-F_{\mathbf{X}}(z)\right)^{n-1}\cdot f_{\mathbf{X}}(z)=n\left(1-z\right)^{n-1}\cdot\mathbf{1}_{\left[0,1\right]}\left(z\right). $
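Both densities are easy to confirm by simulation, comparing empirical CDFs of the max and the min with $ y^{n} $ and $ 1-\left(1-z\right)^{n} $. A minimal sketch, assuming NumPy is available; the choices n = 5, 10^5 trials, and the seed are arbitrary:
<pre>
# Monte Carlo check of Problem 3: CDFs of the max and min of n iid U[0,1].
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 100_000
samples = rng.random((trials, n))

y = samples.max(axis=1)            # Y = max of each row
z = samples.min(axis=1)            # Z = min of each row

for t in (0.2, 0.5, 0.8):
    print(t,
          (y <= t).mean(), t**n,               # empirical F_Y(t) vs t^n
          (z <= t).mean(), 1 - (1 - t)**n)     # empirical F_Z(t) vs 1-(1-t)^n
</pre>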
4. (35 pts.)
Assume that $ \mathbf{X}\left(t\right) $ is a zero-mean, continuous-time, Gaussian white noise process with autocorrelation function $ R_{\mathbf{XX}}\left(t_{1},t_{2}\right)=\frac{N_{0}}{2}\delta\left(t_{1}-t_{2}\right). $ Let $ \mathbf{Y}\left(t\right) $ be a new random process defined as the output of a linear time-invariant system with impulse response $ h\left(t\right)=\frac{1}{T}e^{-t/T}\cdot u\left(t\right), $ where $ u\left(t\right) $ is the unit step function and $ T>0 $ .
(a)
What is the mean of $ \mathbf{Y}\left(t\right) $?
$ E\left[\mathbf{Y}\left(t\right)\right]=E\left[\int_{-\infty}^{\infty}h\left(\tau\right)\mathbf{X}\left(t-\tau\right)d\tau\right]=\int_{-\infty}^{\infty}h\left(\tau\right)E\left[\mathbf{X}\left(t-\tau\right)\right]d\tau=\int_{-\infty}^{\infty}h\left(\tau\right)\cdot0d\tau=0. $
(b)
What is the autocorrelation function of $ \mathbf{Y}\left(t\right) $ ?
$ S_{\mathbf{XX}}\left(\omega\right)=\int_{-\infty}^{\infty}\frac{N_{0}}{2}\delta\left(\tau\right)e^{-i\omega\tau}d\tau=\frac{N_{0}}{2}. $
Let $ \alpha=\frac{1}{T} $ .
$ H\left(\omega\right)=\int_{-\infty}^{\infty}h\left(t\right)e^{-i\omega t}dt=\int_{0}^{\infty}\alpha e^{-\alpha t}\cdot e^{-i\omega t}dt=\alpha\int_{0}^{\infty}e^{-\left(\alpha+i\omega\right)t}dt=\alpha\frac{e^{-\left(\alpha+i\omega\right)t}}{-\left(\alpha+i\omega\right)}\biggl|_{0}^{\infty}=\frac{\alpha}{\alpha+i\omega}. $
$ S_{\mathbf{YY}}\left(\omega\right)=S_{\mathbf{XX}}\left(\omega\right)\left|H\left(\omega\right)\right|^{2}=S_{\mathbf{XX}}\left(\omega\right)H\left(\omega\right)H^{*}\left(\omega\right)=\frac{N_{0}}{2}\cdot\frac{\alpha}{\alpha+i\omega}\cdot\frac{\alpha}{\alpha-i\omega}=\frac{\alpha^{2}N_{0}}{2\left(\alpha^{2}+\omega^{2}\right)}. $
$ S_{\mathbf{YY}}\left(\omega\right)=\frac{\alpha^{2}N_{0}}{2\left(\alpha^{2}+\omega^{2}\right)}=\left(\frac{\alpha N_{0}}{4}\right)\frac{2\alpha}{\alpha^{2}+\omega^{2}}\leftrightarrow\left(\frac{\alpha N_{0}}{4}\right)e^{-\alpha\left|\tau\right|}=R_{\mathbf{YY}}\left(\tau\right). $
$ \because e^{-\alpha\left|\tau\right|}\leftrightarrow\frac{2\alpha}{\alpha^{2}+\omega^{2}}\text{ (from the given transform table)}. $
$ \therefore R_{\mathbf{YY}}\left(\tau\right)=\left(\frac{\alpha N_{0}}{4}\right)e^{-\alpha\left|\tau\right|}=\left(\frac{N_{0}}{4T}\right)e^{-\frac{\left|\tau\right|}{T}}. $
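The same answer follows directly in the time domain, which serves as a check on the transform-pair step: $ R_{\mathbf{YY}}\left(\tau\right)=\frac{N_{0}}{2}\int_{-\infty}^{\infty}h\left(s\right)h\left(s+\left|\tau\right|\right)ds=\frac{N_{0}}{2}\int_{0}^{\infty}\frac{1}{T^{2}}e^{-s/T}e^{-\left(s+\left|\tau\right|\right)/T}ds=\frac{N_{0}}{2T^{2}}e^{-\left|\tau\right|/T}\cdot\frac{T}{2}=\frac{N_{0}}{4T}e^{-\left|\tau\right|/T}. $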
(c)
Write an expression for the $ n $-th order characteristic function of $ \mathbf{Y}\left(t\right) $ sampled at times $ t_{1},t_{2},\cdots,t_{n} $. Simplify as much as possible.
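Since $ \mathbf{Y}\left(t\right) $ is the output of an LTI system driven by a Gaussian process, it is a zero-mean, jointly Gaussian, WSS process, so the $ n $-th order characteristic function is determined entirely by the autocorrelation found in part (b):
$ \Phi_{\mathbf{Y}\left(t_{1}\right)\cdots\mathbf{Y}\left(t_{n}\right)}\left(\omega_{1},\cdots,\omega_{n}\right)=\exp\left\{ -\frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n}\omega_{j}\omega_{k}R_{\mathbf{YY}}\left(t_{j}-t_{k}\right)\right\} =\exp\left\{ -\frac{N_{0}}{8T}\sum_{j=1}^{n}\sum_{k=1}^{n}\omega_{j}\omega_{k}e^{-\left|t_{j}-t_{k}\right|/T}\right\}. $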
(d)
Write an expression for the second-order pdf $ f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right) $ of $ \mathbf{Y}\left(t\right) $. Simplify as much as possible.
$ \mathbf{Y}\left(t\right) $ is a WSS Gaussian random process with $ E\left[\mathbf{Y}\left(t\right)\right]=0 $ and $ \sigma_{\mathbf{Y}\left(t\right)}^{2}=R_{\mathbf{YY}}\left(0\right)=\frac{\alpha N_{0}}{4}=\frac{N_{0}}{4T} $ .
$ r_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}=r\left(t_{1}-t_{2}\right)=\frac{C_{\mathbf{YY}}\left(t_{1}-t_{2}\right)}{\sqrt{\sigma_{\mathbf{Y}\left(t_{1}\right)}^{2}\sigma_{\mathbf{Y}\left(t_{2}\right)}^{2}}}=\frac{R_{\mathbf{YY}}\left(t_{1}-t_{2}\right)}{R_{\mathbf{YY}}\left(0\right)}=e^{-\alpha\left|t_{1}-t_{2}\right|}. $
$ f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right)=\frac{1}{2\pi\sigma_{\mathbf{Y}\left(t_{1}\right)}\sigma_{\mathbf{Y}\left(t_{2}\right)}\sqrt{1-r^{2}}}\exp\left\{ \frac{-1}{2\left(1-r^{2}\right)}\left[\frac{y_{1}^{2}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}^{2}}-\frac{2ry_{1}y_{2}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}\sigma_{\mathbf{Y}\left(t_{2}\right)}}+\frac{y_{2}^{2}}{\sigma_{\mathbf{Y}\left(t_{2}\right)}^{2}}\right]\right\} $$ =\frac{1}{2\pi\frac{N_{0}}{4T}\sqrt{1-e^{-2\alpha\left|t_{1}-t_{2}\right|}}}\exp\left\{ \frac{-1}{2\left(1-e^{-2\alpha\left|t_{1}-t_{2}\right|}\right)}\left[\frac{y_{1}^{2}}{N_{0}/\left(4T\right)}-\frac{2y_{1}y_{2}e^{-\alpha\left|t_{1}-t_{2}\right|}}{N_{0}/\left(4T\right)}+\frac{y_{2}^{2}}{N_{0}/\left(4T\right)}\right]\right\} $$ =\frac{2T}{\pi N_{0}\sqrt{1-e^{-2\alpha\left|t_{1}-t_{2}\right|}}}\exp\left\{ \frac{-2T}{N_{0}\left(1-e^{-2\alpha\left|t_{1}-t_{2}\right|}\right)}\left[y_{1}^{2}-2y_{1}y_{2}e^{-\alpha\left|t_{1}-t_{2}\right|}+y_{2}^{2}\right]\right\} $ .
(e)
Find the minimum mean-square estimate of $ \mathbf{Y}\left(t_{2}\right) $ given that $ \mathbf{Y}\left(t_{1}\right)=y_{1} $. Simplify your answer as much as possible.
$ \widehat{y_{2}}_{MMS}\left(y_{1}\right)=E\left[\mathbf{Y}\left(t_{2}\right)|\mathbf{Y}\left(t_{1}\right)=y_{1}\right]=\int_{-\infty}^{\infty}y_{2}\cdot f_{\mathbf{Y}\left(t_{2}\right)}\left(y_{2}|\mathbf{Y}\left(t_{1}\right)=y_{1}\right)dy_{2} $
$ \text{where }f_{\mathbf{Y}\left(t_{2}\right)}\left(y_{2}|\mathbf{Y}\left(t_{1}\right)=y_{1}\right)=\frac{f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1,}y_{2}\right)}{f_{\mathbf{Y}\left(t_{1}\right)}\left(y_{1}\right)}. $
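For zero-mean, jointly Gaussian random variables the conditional mean is linear in the observation, so with $ r=e^{-\alpha\left|t_{2}-t_{1}\right|} $ from part (d) and $ \sigma_{\mathbf{Y}\left(t_{1}\right)}=\sigma_{\mathbf{Y}\left(t_{2}\right)} $:
$ \widehat{y_{2}}_{MMS}\left(y_{1}\right)=E\left[\mathbf{Y}\left(t_{2}\right)\right]+r\frac{\sigma_{\mathbf{Y}\left(t_{2}\right)}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}}\left(y_{1}-E\left[\mathbf{Y}\left(t_{1}\right)\right]\right)=r\, y_{1}=e^{-\left|t_{2}-t_{1}\right|/T}y_{1}. $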