
Latest revision as of 00:53, 31 March 2015


ECE Ph.D. Qualifying Exam

Communication, Networking, Signal and Image Processing (CS)

Question 1: Probability and Random Processes

August 2011



Part 4

Jump to Part 1,2,3


Assume that $ \mathbf{X}(t) $ is a zero-mean continuous-time Gaussian white noise process with autocorrelation function

                $ R_{\mathbf{XX}}(t_1,t_2)=\delta(t_1-t_2). $

Let $ \mathbf{Y}(t) $ be a new random process obtained by passing $ \mathbf{X}(t) $ through a linear time-invariant system with impulse response $ h(t) $ whose Fourier transform $ H(\omega) $ has the ideal low-pass characteristic

               $ H(\omega) = \begin{cases} 1, & \mbox{if } |\omega|\leq\Omega,\\ 0, & \mbox{elsewhere,} \end{cases} $

where $ \Omega>0 $.

a) Find the mean of $ \mathbf{Y}(t) $.

b) Find the autocorrelation function of $ \mathbf{Y}(t) $.

c) Find the joint pdf of $ \mathbf{Y}(t_1) $ and $ \mathbf{Y}(t_2) $ for any two arbitrary sample times $ t_1 $ and $ t_2 $.

d) What is the minimum time difference $ t_1-t_2 $ such that $ \mathbf{Y}(t_1) $ and $ \mathbf{Y}(t_2) $ are statistically independent?


Solution 1:

a)

$ \begin{align} E[Y(t)] &= E[X(t)\ast h(t)] &\\ &= E\left[\int_{-\infty}^\infty X(\alpha)h(t-\alpha)d\alpha\right] &\\ &= \int_{-\infty}^\infty E[X(\alpha)]h(t-\alpha)d\alpha &\\ &= \int_{-\infty}^\infty 0\cdot h(t-\alpha)d\alpha &\\ &= 0 & \end{align} $

b)

$ R_{XX}(t_1,t_2)=\delta(t_1-t_2)\Rightarrow X(t) $ is wide-sense stationary

$ \Rightarrow R_{XX}(\tau)= R_{XX}(t,t+\tau)=\delta(\tau) $

$ \Rightarrow S_{XX}(\omega)= \int_{-\infty}^\infty R_{XX}(\tau)e^{-i\omega\tau}d\tau = \int_{-\infty}^\infty \delta(\tau)e^{-i\omega\tau}d\tau = 1 $

$ \Rightarrow S_{YY}(\omega)= S_{XX}(\omega)\cdot\vert H(\omega)\vert^2 = 1\cdot H(\omega)\cdot H^\ast(\omega) = H(\omega) $

$ \begin{align} \Rightarrow R_{YY}(\tau)&=\frac{1}{2\pi}\int_{-\infty}^\infty S_{YY}(\omega)e^{i\omega\tau}d\omega &\\ &= \frac{1}{2\pi}\int_{-\Omega}^\Omega e^{i\omega\tau}d\omega &\\ &= \frac{1}{2\pi}\cdot\frac{1}{i\tau}e^{i\omega\tau}\vert_{\omega=-\Omega}^\Omega &\\ &= \frac{1}{\pi\tau}\left(\frac{1}{2i}\left(e^{i\Omega\tau}-e^{-i\Omega\tau}\right)\right) &\\ &= \frac{\sin\Omega\tau}{\pi\tau} & \end{align} $
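The inverse transform above can be sanity-checked numerically. The sketch below (not part of the original solution; $ \Omega $ and $ \tau $ are arbitrary example values) approximates $ \frac{1}{2\pi}\int_{-\Omega}^{\Omega}e^{i\omega\tau}d\omega $ by a trapezoidal sum and compares it with the closed form $ \frac{\sin\Omega\tau}{\pi\tau} $:

```python
import numpy as np

# Sanity check: the inverse Fourier transform of the ideal low-pass
# H(omega) should match R_YY(tau) = sin(Omega*tau)/(pi*tau).
# Omega and tau are arbitrary example values.
Omega = 2.0
tau = 0.7

# Trapezoidal approximation of (1/2pi) * integral_{-Omega}^{Omega} e^{i w tau} dw.
# The imaginary part integrates to zero by symmetry, so only cos(w*tau) is kept.
w = np.linspace(-Omega, Omega, 200001)
f = np.cos(w * tau)
numeric = np.sum((f[:-1] + f[1:]) / 2) * (w[1] - w[0]) / (2 * np.pi)

closed_form = np.sin(Omega * tau) / (np.pi * tau)
print(abs(numeric - closed_form) < 1e-8)  # True
```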

c)

Since $ \mathbf{X}(t) $ is a Gaussian random process and $ h(t) $ is the impulse response of a linear time-invariant system,

$ \Rightarrow \mathbf{Y}(t) $ is also a Gaussian random process.

$ \Rightarrow \mathbf{Y}(t_1) $ and $ \mathbf{Y}(t_2) $ are jointly Gaussian distributed for any $ t_1 $, $ t_2 $, $ t_1\neq t_2 $.

$ \Rightarrow $ The joint pdf

$ f_{Y(t_1)Y(t_2)}(y_1,y_2)=\frac{1}{2\pi\sigma_{Y_1}\sigma_{Y_2}\sqrt{1-r^2}}\text{exp}\left\{\frac{-1}{2(1-r^2)}\left(\frac{(y_1-\eta_{Y_1})^2}{\sigma_{Y_1}^2}-2r\frac{(y_1-\eta_{Y_1})(y_2-\eta_{Y_2})}{\sigma_{Y_1}\sigma_{Y_2}}+\frac{(y_2-\eta_{Y_2})^2}{\sigma_{Y_2}^2}\right)\right\} $,

where

$ \sigma_{Y_1}=\sigma_{Y_2}=\sqrt{R_{YY}(0)}=\sqrt{\lim_{\tau \to 0}\frac{\sin\Omega\tau}{\pi\tau}}=\sqrt{\frac{\Omega}{\pi}} $,

$ \eta_{Y_1}=\eta_{Y_2}=E[Y(t)]=0 $,

$ r=\frac{\text{cov}(Y(t_1),Y(t_2))}{\sigma_{Y_1}\sigma_{Y_2}}=\frac{R_{YY}(t_1-t_2)}{\sigma_{Y_1}\sigma_{Y_2}}=\frac{\frac{\sin\Omega(t_1-t_2)}{\pi(t_1-t_2)}}{\frac{\Omega}{\pi}}=\frac{\sin\Omega(t_1-t_2)}{\Omega(t_1-t_2)} $.
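The parameters $ \sigma_{Y_1}^2=\frac{\Omega}{\pi} $ and $ r $ can be checked by simulation. The sketch below (illustrative only; $ \Omega $, the time step, and the sample count are arbitrary choices) filters discrete approximate white noise through the ideal low-pass and compares the empirical variance and lag correlation with the derived formulas:

```python
import numpy as np

# Monte Carlo check of sigma^2 = Omega/pi and r = sin(Omega*tau)/(Omega*tau).
# Omega, dt, and N are arbitrary example values.
rng = np.random.default_rng(0)
dt = 0.01            # sampling step
N = 1 << 18          # number of samples
Omega = 5.0          # low-pass cutoff (rad/s)

# x[n] with variance 1/dt approximates unit-intensity continuous white noise
x = rng.standard_normal(N) / np.sqrt(dt)

# ideal low-pass filtering in the frequency domain
w = 2 * np.pi * np.fft.fftfreq(N, d=dt)
y = np.fft.ifft(np.fft.fft(x) * (np.abs(w) <= Omega)).real

print(y.var(), Omega / np.pi)             # empirical vs. theoretical variance

k = int(round(np.pi / (2 * Omega) / dt))  # lag near tau = pi/(2*Omega)
tau = k * dt
r_emp = np.mean(y[:-k] * y[k:]) / y.var()
r_th = np.sin(Omega * tau) / (Omega * tau)
print(r_emp, r_th)                        # r_th ~ 0.645; r_emp should be close
```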

d)

$ \mathbf{Y}(t_1) $ and $ \mathbf{Y}(t_2) $ are jointly Gaussian random variables.

$ \Rightarrow \mathbf{Y}(t_1) $ and $ \mathbf{Y}(t_2) $ are statistically independent if and only if $ \text{cov}(Y(t_1),Y(t_2))=0 $.

$ \Rightarrow R_{YY}(t_1-t_2)=0 $

$ \Rightarrow \frac{\sin\Omega(t_1-t_2)}{\pi(t_1-t_2)}=0 \Rightarrow \sin\Omega(t_1-t_2)=0 \text{ and } t_1\neq t_2 $

$ \Rightarrow \Omega(t_1-t_2)=n\pi \text{, } n\in\mathbb{Z} \text{ and } t_1\neq t_2 $

$ \Rightarrow t_1-t_2=n\frac{\pi}{\Omega} \text{, } n\in\mathbb{Z}\setminus\{0\} $

$ \Rightarrow $ The minimum positive time difference is $ \frac{\pi}{\Omega} $, attained at $ n=1 $.
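The conclusion can be verified numerically. The sketch below ($ \Omega $ is an arbitrary example value) checks that $ R_{YY}(\tau)=\frac{\sin\Omega\tau}{\pi\tau} $ vanishes at $ \tau=n\frac{\pi}{\Omega} $ and stays strictly positive on $ (0,\frac{\pi}{\Omega}) $, so no smaller positive time difference gives independence:

```python
import numpy as np

# R_YY(tau) = sin(Omega*tau)/(pi*tau) is zero exactly at tau = n*pi/Omega,
# and strictly positive before the first zero. Omega is an example value.
Omega = 3.0

def R_yy(tau):
    return np.sin(Omega * tau) / (np.pi * tau)

zeros = np.arange(1, 4) * np.pi / Omega   # tau = pi/Omega, 2pi/Omega, 3pi/Omega
print(np.allclose(R_yy(zeros), 0.0))      # True

# no zero strictly between 0 and pi/Omega
grid = np.linspace(1e-9, np.pi / Omega - 1e-9, 10000)
print(bool(np.all(R_yy(grid) > 0)))       # True
```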


Solution 2:

a)

$ h(t) $ is an LTI system and $ \mathbf{X}(t) $ is WSS because $ R_{XX}(t_1,t_2)=\delta(t_1-t_2) $,

where $ R_{XX} $ only depends on $ \tau $.

$ \mu_{y}(t)=\int_{-\infty}^{\infty}\mu_x(t-\alpha)h(\alpha)d\alpha=\mu_x\int_{-\infty}^{\infty}(t-\alpha)h(\alpha)d\alpha=0 $

Comments:

  • It would be better to have a clear definition of $ \tau $.
  • When computing the mean of $ \mathbf{Y} $, it would be clearer to add a description about why the first equality is true.
  • The second equality is ambiguous. $ \mu_x $ should not be moved out alone from the integration.

b)

Because this is an LTI system,

$ S_y(\omega)=S_x(\omega)|H(\omega)|^2=\begin{cases}1, & \text{for } |\omega|\leq\Omega\\0, &\text{elsewhere}\end{cases} $.

Therefore,

$ R_y(\tau)=\frac{1}{2\pi}\int_{-\infty}^{\infty}S_y(\omega)e^{j\omega\tau}d\omega=\frac{1}{2\pi}\int_{-\Omega}^{\Omega}e^{j\omega\tau}d\omega=\frac{\sin(\Omega\tau)}{\pi\tau} $.

Comments:

  • It would be nicer to illustrate that the power spectral density of $ \mathbf{X} $, $ S_x(\omega) $, is $ 1 $.

c)

$ \mathbf{X} $ is a Gaussian process with variance $ \sigma^2=1 $, $ \mu_x=0 $. It can be shown that if $ \mathbf{X} $ is a Gaussian process and $ H $ is an LTI system, then the output $ \mathbf{Y} $ is also a Gaussian process. The joint Gaussian pdf of $ \mathbf{Y}(t_1) $ and $ \mathbf{Y}(t_2) $ is

$ f_{yy}=\frac{1}{2\pi\sigma_y^2}\text{exp}\left[2(1-r)\frac{y^2}{\sigma_y^2}\left(\frac{-1}{2(1-r)}\right)\right] $,

where $ r=\frac{R_y(\tau)}{\sigma_y^2} $.

Comments:

  • The concept is right, but the final equation is not the joint Gaussian pdf of $ \mathbf{Y}(t_1) $ and $ \mathbf{Y}(t_2) $. The arguments of the pdf should be $ y(t_1) $ and $ y(t_2) $, instead of $ y $.

d)

If $ \mathbf{Y}(t_1) $ and $ \mathbf{Y}(t_2) $ are independent, then $ r=0 $. That is

$ \frac{\sin(\Omega\tau)}{\pi\tau}=0 $, when $ \Omega\tau=n\pi $, where $ n=1,2,3,... $.

Therefore, the minimum $ \tau $ is $ \frac{\pi}{\Omega} $.


Related Problems:

a) Find the autocovariance of $ [Y(t) Y(2t) Y(3t) \cdots Y(nt)]^T $, where $ t=\frac{2\pi}{\Omega} $.

b) If $ H(\omega)=\begin{cases}\vert \omega\vert^{1/2}, & \vert \omega\vert\leq\Omega\\0, &\text{elsewhere}\end{cases} $, redo the original problem (b) and (c).

