Latest revision as of 06:17, 1 December 2010
6.1 MRB 1992 Final
1. (15 pts.)
Let $ \mathbf{X}_{1},\cdots,\mathbf{X}_{n},\cdots $ be a sequence of independent, identically distributed random variables, each having pdf
$ f_{\mathbf{X}}\left(x\right)=\frac{1}{\mu}\exp\left\{ -\frac{x}{\mu}\right\} \cdot\mathbf{1}_{\left[0,\infty\right)}\left(x\right). $ Let $ \mathbf{Y}_{n} $ be a new random variable defined by
$ \mathbf{Y}_{n}=\min\left\{ \mathbf{X}_{1},\cdots,\mathbf{X}_{n}\right\} $ .
Note that $ \mathbf{X} $ is an exponentially distributed random variable with mean $ \mu $ .
(a)
Find the pdf of $ \mathbf{Y}_{n} $ .
• The cdf of $ \mathbf{Y}_{n} $ is
$ F_{\mathbf{Y}_{n}}\left(y\right)=P\left(\left\{ \mathbf{Y}_{n}\leq y\right\} \right)=1-P\left(\left\{ \mathbf{Y}_{n}>y\right\} \right) $$ =1-P\left(\left\{ \mathbf{X}_{1}>y\right\} \cap\left\{ \mathbf{X}_{2}>y\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}>y\right\} \right) $$ =1-\prod_{k=1}^{n}\left(1-F_{\mathbf{X}_{k}}\left(y\right)\right)=1-\left(1-F_{\mathbf{X}}\left(y\right)\right)^{n} $$ =\left[1-\exp\left\{ -y/\left(\mu/n\right)\right\} \right]\cdot\mathbf{1}_{\left[0,\infty\right)}\left(y\right). $
• The pdf of $ \mathbf{Y}_{n} $ that is derived from the cdf of $ \mathbf{Y}_{n} $ is $ f_{\mathbf{Y}_{n}}\left(y\right)=\frac{dF_{\mathbf{Y}_{n}}\left(y\right)}{dy}=\frac{1}{\left(\mu/n\right)}\exp\left\{ \frac{-y}{\left(\mu/n\right)}\right\} \cdot\mathbf{1}_{\left[0,\infty\right)}\left(y\right). $
• $ \mathbf{Y}_{n} $ is exponentially distributed with mean $ \mu/n $ .
• $ E\left[\mathbf{Y}_{n}\right]=\mu/n $ , $ E\left[\mathbf{Y}_{n}^{2}\right]=\int_{0}^{\infty}\frac{y^{2}}{\left(\mu/n\right)}\exp\left\{ \frac{-y}{\left(\mu/n\right)}\right\} dy=\frac{2\mu^{2}}{n^{2}} $ , and $ \sigma_{\mathbf{Y}_{n}}^{2}=E\left[\mathbf{Y}_{n}^{2}\right]-\left(E\left[\mathbf{Y}_{n}\right]\right)^{2}=\frac{2\mu^{2}}{n^{2}}-\frac{\mu^{2}}{n^{2}}=\frac{\mu^{2}}{n^{2}}. $
– In fact, we directly know that $ \sigma_{\mathbf{Y}_{n}}^{2}=\frac{\mu^{2}}{n^{2}} $ because we know that $ \mathbf{Y}_{n} $ is exponentially distributed.
• Thus as $ n\rightarrow\infty $ , $ E\left[\mathbf{Y}_{n}\right]\rightarrow0 $ and $ \sigma_{\mathbf{Y}_{n}}^{2}\rightarrow0 $ . So we suspect that $ \mathbf{Y}_{n}\rightarrow0 $ . Let us check whether this is true in probability $ \left(p\right) $ and in the mean-square sense $ \left(m.s.\right) $ .
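The computations in part (a) can be checked with a quick Monte Carlo sketch; the values of mu, n, and trials below are illustrative choices, not part of the exam.

```python
import numpy as np

# Monte Carlo sketch: the minimum of n i.i.d. Exponential(mu) variables
# should again be exponential, with mean mu/n and variance (mu/n)^2.
# mu = 2.0, n = 5, and trials = 200_000 are illustrative choices.
rng = np.random.default_rng(0)
mu, n, trials = 2.0, 5, 200_000

x = rng.exponential(scale=mu, size=(trials, n))  # rows of (X_1, ..., X_n)
y = x.min(axis=1)                                # Y_n = min{X_1, ..., X_n}

mean_y, var_y = y.mean(), y.var()
print(mean_y)  # should be close to mu/n = 0.4
print(var_y)   # should be close to (mu/n)^2 = 0.16
```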
(b)
Does the sequence of random variables $ \left\{ \mathbf{Y}_{n}\right\} $ converge in probability? Justify your answer.
• If $ \mathbf{Y}_{n}\rightarrow\left(p\right)\rightarrow0 $ , then for any $ \epsilon>0 $ , $ P\left(\left\{ \left|\mathbf{Y}_{n}-0\right|>\epsilon\right\} \right)\rightarrow0 $ as $ n\rightarrow\infty $ .
• This is true if for each $ \epsilon>0 $ and $ \delta>0 $ , there exists an $ n_{0} $ such that $ P\left(\left\{ \left|\mathbf{Y}_{n}-0\right|>\epsilon\right\} \right)<\delta $ for all $ n>n_{0} $ .
• Note that $ P\left(\left\{ \left|\mathbf{Y}_{n}-0\right|>\epsilon\right\} \right)=P\left(\left\{ \mathbf{Y}_{n}>\epsilon\right\} \right)=\left(1-F_{\mathbf{X}}\left(\epsilon\right)\right)^{n}=e^{-\frac{n\epsilon}{\mu}} $ .
• If we take $ n_{0}=\left\lceil \frac{\mu}{\epsilon}\ln\left(\frac{1}{\delta}\right)\right\rceil $ , then $ e^{-\frac{n\epsilon}{\mu}}<\delta $ for all $ n>n_{0} $ , so this condition holds.
• Thus $ \mathbf{Y}_{n}\rightarrow\left(p\right)\rightarrow0 $ .
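The tail identity and the choice of $ n_{0} $ are easy to sanity-check numerically; the sketch below uses illustrative values for mu, eps, and delta.

```python
import numpy as np

# Sanity check of P({Y_n > eps}) = e^{-n*eps/mu} and of the choice
# n0 = ceil((mu/eps) * ln(1/delta)). All parameter values are illustrative.
rng = np.random.default_rng(1)
mu, eps, delta, trials = 2.0, 0.5, 0.01, 100_000

n0 = int(np.ceil((mu / eps) * np.log(1.0 / delta)))  # here n0 = 19
y = rng.exponential(scale=mu, size=(trials, n0)).min(axis=1)
tail = (y > eps).mean()          # empirical P({Y_{n0} > eps})

# empirical tail should track e^{-n0*eps/mu}, which is below delta
print(n0, tail, np.exp(-n0 * eps / mu))
```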
(c)
Does the sequence of random variables $ \left\{ \mathbf{Y}_{n}\right\} $ converge in the mean-square sense? Justify your answer.
• If $ \mathbf{Y}_{n}\rightarrow\left(m.s.\right)\rightarrow0 $ , then $ E\left[\left|\mathbf{Y}_{n}-0\right|^{2}\right]\rightarrow0 $ as $ n\rightarrow\infty $ .
• This is true if, for every $ \epsilon>0 $ , there exists an $ n_{0} $ such that $ E\left[\left|\mathbf{Y}_{n}-0\right|^{2}\right]<\epsilon $ for all $ n>n_{0} $ .
• Note that $ E\left[\left|\mathbf{Y}_{n}-0\right|^{2}\right]=E\left[\mathbf{Y}_{n}^{2}\right]=\frac{2\mu^{2}}{n^{2}} $ .
• If we take $ n_{0}=\left\lceil \mu\sqrt{\frac{2}{\epsilon}}\right\rceil $ , then $ \frac{2\mu^{2}}{n^{2}}<\epsilon $ for all $ n>n_{0} $ , so this condition holds.
• Thus $ \mathbf{Y}_{n}\rightarrow\left(m.s.\right)\rightarrow0 $ .
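A small numerical sketch of the second moment decaying like $ 2\mu^{2}/n^{2} $ (mu and the sample sizes below are illustrative):

```python
import numpy as np

# Empirical second moment E[|Y_n - 0|^2] versus the closed form 2*mu^2/n^2.
# mu, trials, and the values of n are illustrative choices.
rng = np.random.default_rng(2)
mu, trials = 2.0, 100_000

ms = {}
for n in (1, 10, 100):
    y = rng.exponential(scale=mu, size=(trials, n)).min(axis=1)
    ms[n] = np.mean(y ** 2)              # empirical E[Y_n^2]
    print(n, ms[n], 2 * mu ** 2 / n ** 2)
```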
4. (14 pts.)
Let $ \mathbf{X}\left(t\right) $ be a zero-mean wide-sense stationary Gaussian white noise process with autocorrelation function $ R_{\mathbf{XX}}\left(\tau\right)=S_{0}\delta\left(\tau\right) $ . Suppose that $ \mathbf{X}\left(t\right) $ is the input to a linear time-invariant system with impulse response $ h\left(t\right)=\frac{\sin\sigma\left(t-b\right)}{\sigma\left(t-b\right)},\quad-\infty<t<\infty, $ where $ b $ is a positive constant. Let $ \mathbf{Y}\left(t\right) $ be the output of the system and assume that the input has been applied to the system for all time.
(a)
What is the mean of $ \mathbf{Y}\left(t\right) $ ?
$ E\left[\mathbf{Y}\left(t\right)\right]=E\left[\int_{-\infty}^{\infty}h\left(\tau\right)\mathbf{X}\left(t-\tau\right)d\tau\right]=\int_{-\infty}^{\infty}h\left(\tau\right)E\left[\mathbf{X}\left(t-\tau\right)\right]d\tau=\int_{-\infty}^{\infty}h\left(\tau\right)\cdot0d\tau=0. $
(b)
What is the power spectral density of $ \mathbf{Y}\left(t\right) $ ?
$ S_{\mathbf{XX}}\left(\omega\right)=\int_{-\infty}^{\infty}S_{0}\delta\left(\tau\right)e^{-i\omega\tau}d\tau=S_{0}. $
Substituting $ u=t-b $ , $ H\left(\omega\right)=\int_{-\infty}^{\infty}h\left(t\right)e^{-i\omega t}dt=\int_{-\infty}^{\infty}\frac{\sin\sigma\left(t-b\right)}{\sigma\left(t-b\right)}e^{-i\omega t}dt=e^{-i\omega b}\int_{-\infty}^{\infty}\frac{\sin\sigma u}{\sigma u}e^{-i\omega u}du=e^{-i\omega b}\left(\frac{\pi}{\sigma}\right)\cdot1_{\left[-\sigma,\sigma\right]}\left(\omega\right). $
$ \left|H\left(\omega\right)\right|^{2}=H\left(\omega\right)H^{*}\left(\omega\right)=\left(\frac{\pi}{\sigma}\right)^{2}\cdot1_{\left[-\sigma,\sigma\right]}\left(\omega\right). $
$ S_{\mathbf{YY}}\left(\omega\right)=S_{\mathbf{XX}}\left(\omega\right)\left|H\left(\omega\right)\right|^{2}=S_{0}\cdot\left(\frac{\pi}{\sigma}\right)^{2}\cdot1_{\left[-\sigma,\sigma\right]}\left(\omega\right). $
(c)
What is the autocorrelation function of $ \mathbf{Y}\left(t\right) $ ?
$ R_{\mathbf{YY}}\left(\tau\right)=\frac{1}{2\pi}\int_{-\infty}^{\infty}S_{\mathbf{YY}}\left(\omega\right)e^{i\omega\tau}d\omega=\frac{S_{0}}{2\pi}\left(\frac{\pi}{\sigma}\right)^{2}\int_{-\sigma}^{\sigma}e^{i\omega\tau}d\omega=\frac{S_{0}\pi}{2\sigma^{2}}\cdot\frac{e^{i\tau\sigma}-e^{-i\tau\sigma}}{i\tau} $$ =\frac{S_{0}\pi}{\tau\sigma^{2}}\cdot\frac{e^{i\tau\sigma}-e^{-i\tau\sigma}}{2i}=\frac{S_{0}\pi}{\tau\sigma^{2}}\cdot\sin\tau\sigma=S_{0}\left(\frac{\pi}{\sigma}\right)\cdot\frac{\sin\sigma\tau}{\sigma\tau}. $
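The inverse-transform step can be verified numerically by integrating $ S_{\mathbf{YY}}\left(\omega\right)e^{i\omega\tau} $ over $ \left[-\sigma,\sigma\right] $ on a grid; the values of S0 and sigma below are arbitrary.

```python
import numpy as np

# Numerically invert S_YY(omega) = S0*(pi/sigma)^2 on [-sigma, sigma] and
# compare with the closed form S0*(pi/sigma)*sin(sigma*tau)/(sigma*tau).
# S0 and sigma are illustrative values.
S0, sigma = 3.0, 2.0
omega = np.linspace(-sigma, sigma, 20_001)
dw = omega[1] - omega[0]

r_num, r_closed = [], []
for tau in (0.0, 0.3, 1.0, 2.5):
    integrand = S0 * (np.pi / sigma) ** 2 * np.exp(1j * omega * tau)
    # trapezoidal rule for (1/2pi) * integral of S_YY(omega) e^{i omega tau}
    val = ((integrand[:-1] + integrand[1:]).sum() * dw / 2).real / (2 * np.pi)
    r_num.append(val)
    # np.sinc(x) = sin(pi*x)/(pi*x), so sinc(sigma*tau/pi) = sin(sigma*tau)/(sigma*tau)
    r_closed.append(S0 * (np.pi / sigma) * np.sinc(sigma * tau / np.pi))

print(r_num)
print(r_closed)
```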
(d)
Write an expression for the second-order density $ f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right) $ of $ \mathbf{Y}\left(t\right) $ .
Because $ \mathbf{X}\left(t\right) $ is a WSS Gaussian random process and the system is LTI, $ \mathbf{Y}\left(t\right) $ is also a WSS Gaussian random process, with $ E\left[\mathbf{Y}\left(t\right)\right]=0 $ and $ \sigma_{\mathbf{Y}\left(t\right)}^{2}=R_{\mathbf{YY}}\left(0\right)=S_{0}\left(\frac{\pi}{\sigma}\right) $ .
$ r_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}=r\left(t_{1}-t_{2}\right)=\frac{R_{\mathbf{YY}}\left(t_{1}-t_{2}\right)}{\sqrt{\sigma_{\mathbf{Y}\left(t_{1}\right)}^{2}\sigma_{\mathbf{Y}\left(t_{2}\right)}^{2}}}=\frac{R_{\mathbf{YY}}\left(t_{1}-t_{2}\right)}{R_{\mathbf{YY}}\left(0\right)}=\frac{\sin\sigma\tau}{\sigma\tau}, $ where $ \tau=t_{1}-t_{2} $ .

$ f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right)=\frac{1}{2\pi\sigma_{\mathbf{Y}\left(t_{1}\right)}\sigma_{\mathbf{Y}\left(t_{2}\right)}\sqrt{1-r^{2}\left(t_{1}-t_{2}\right)}}\exp\left\{ \frac{-1}{2\left(1-r^{2}\left(t_{1}-t_{2}\right)\right)}\left[\frac{y_{1}^{2}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}^{2}}-\frac{2r\left(t_{1}-t_{2}\right)y_{1}y_{2}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}\sigma_{\mathbf{Y}\left(t_{2}\right)}}+\frac{y_{2}^{2}}{\sigma_{\mathbf{Y}\left(t_{2}\right)}^{2}}\right]\right\} $$ =\frac{1}{2\pi R_{\mathbf{YY}}\left(0\right)\sqrt{1-r^{2}\left(t_{1}-t_{2}\right)}}\exp\left\{ \frac{-1}{2R_{\mathbf{YY}}\left(0\right)\left(1-r^{2}\left(t_{1}-t_{2}\right)\right)}\left[y_{1}^{2}-2r\left(t_{1}-t_{2}\right)y_{1}y_{2}+y_{2}^{2}\right]\right\} . $
$ \therefore \left(\mathbf{Y}\left(t_{1}\right),\mathbf{Y}\left(t_{2}\right)\right) $ has distribution $ N\left[0,0,\sqrt{S_{0}\frac{\pi}{\sigma}},\sqrt{S_{0}\frac{\pi}{\sigma}},\frac{\sin\sigma\left(t_{1}-t_{2}\right)}{\sigma\left(t_{1}-t_{2}\right)}\right] $ .
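As a cross-check, the simplified density above agrees with the generic bivariate normal pdf built from the covariance matrix of $ \left(\mathbf{Y}\left(t_{1}\right),\mathbf{Y}\left(t_{2}\right)\right) $ ; the sketch below evaluates both at one arbitrary point (all numeric values are illustrative).

```python
import numpy as np

# Evaluate the simplified second-order density at one test point and
# cross-check it against the generic bivariate normal pdf computed
# from the covariance matrix. All numeric values are illustrative.
S0, sigma = 3.0, 2.0
t1, t2 = 1.0, 0.4
tau = t1 - t2
var = S0 * np.pi / sigma                 # R_YY(0), the common variance
r = np.sinc(sigma * tau / np.pi)         # sin(sigma*tau)/(sigma*tau)

y1, y2 = 0.5, -1.0
f_closed = (1.0 / (2 * np.pi * var * np.sqrt(1 - r ** 2))
            * np.exp(-(y1 ** 2 - 2 * r * y1 * y2 + y2 ** 2)
                     / (2 * var * (1 - r ** 2))))

C = var * np.array([[1.0, r], [r, 1.0]]) # covariance of (Y(t1), Y(t2))
yv = np.array([y1, y2])
f_generic = (np.exp(-0.5 * yv @ np.linalg.solve(C, yv))
             / (2 * np.pi * np.sqrt(np.linalg.det(C))))
print(f_closed, f_generic)
```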