Communication, Networking, Signal and Image Processing (CS)
Question 1: Probability and Random Processes
August 2013
Part 2
Let $ X_1,X_2,... $ be a sequence of jointly Gaussian random variables with covariance
$ Cov(X_i,X_j) = \left\{ \begin{array}{ll} {\sigma}^2, & i=j\\ \rho{\sigma}^2, & |i-j|=1\\ 0, & otherwise \end{array} \right. $
Suppose we take two consecutive samples from this sequence to form a vector $ X $, which is then linearly transformed to form a 2-dimensional random vector $ Y=AX $. Find a matrix $ A $ so that the components of $ Y $ are independent random variables. You must justify your answer.
Solution 1
Suppose
$ A=\left(\begin{array}{cc} a & b\\ c & d \end{array} \right) $.
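Since the two samples are consecutive, $ |i-j|=1 $, so the covariance matrix of $ X=\left(\begin{array}{c}X_i \\ X_j\end{array} \right) $ is

$ Cov(X)=\left(\begin{array}{cc} \sigma^2 & \rho\sigma^2\\ \rho\sigma^2 & \sigma^2 \end{array}\right). $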
Then the new 2-D random vector can be expressed as
$ Y=\left(\begin{array}{c}Y_1 \\ Y_2\end{array} \right)=A\left(\begin{array}{c}X_i \\ X_j\end{array} \right)=\left(\begin{array}{c}aX_i+bX_j \\ cX_i+dX_j\end{array} \right) $
Therefore,
$ \begin{array}{l}Cov(Y_1,Y_2)=E[(aX_i+bX_j-E(aX_i+bX_j))(cX_i+dX_j-E(cX_i+dX_j))] \\ =E[(aX_i+bX_j-aE(X_i)-bE(X_j))(cX_i+dX_j-cE(X_i)-dE(X_j))] \\ =E[acX_i^2+adX_iX_j-acX_iE(X_i)-adX_iE(X_j)+bcX_iX_j+bdX_j^2-bcX_jE(X_i)\\ -bdX_jE(X_j)-acX_iE(X_i)-adX_jE(X_i)+acE(X_i)^2+adE(X_i)E(X_j)\\ -bcX_iE(X_j)-bdX_jE(X_j)+bcE(X_i)E(X_j)+bdE(X_j)^2]\\ =E[ac(X_i-E(X_i))^2+(ad+bc)(X_i-E(X_i))(X_j-E(X_j))+bd(X_j-E(X_j))^2]\\ =ac\,Cov(X_i,X_i)+(ad+bc)Cov(X_i,X_j)+bd\,Cov(X_j,X_j)\\ =ac\sigma^2+(ad+bc)\rho\sigma^2+bd\sigma^2 \end{array} $
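The same result can be checked more compactly with the covariance transformation identity $ Cov(Y)=A\,Cov(X)\,A^T $:

$ Cov(Y)=\sigma^2\left(\begin{array}{cc} a & b\\ c & d \end{array}\right)\left(\begin{array}{cc} 1 & \rho\\ \rho & 1 \end{array}\right)\left(\begin{array}{cc} a & c\\ b & d \end{array}\right), $

whose off-diagonal entry is again $ ac\sigma^2+(ad+bc)\rho\sigma^2+bd\sigma^2 $.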
Setting the above expression equal to 0 with $ a=b=d=1 $ gives $ c=-1 $. Since $ Y=AX $ is a linear transformation of a jointly Gaussian vector, $ Y_1 $ and $ Y_2 $ are jointly Gaussian, so zero covariance implies independence.
Therefore, one valid choice is
$ A=\left(\begin{array}{cc} 1 & 1\\ -1 & 1 \end{array} \right) $.
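As an optional numerical sanity check (a minimal sketch assuming NumPy is available; the particular values of $ \sigma $ and $ \rho $ below are arbitrary illustrations), one can confirm that $ A\,Cov(X)\,A^T $ is diagonal for this $ A $:

```python
import numpy as np

# Illustrative values; any sigma > 0 and |rho| < 1 would do (assumed for this check).
sigma, rho = 2.0, 0.6

# Covariance matrix of two consecutive samples X = (X_i, X_{i+1})
Sigma_X = sigma**2 * np.array([[1.0, rho],
                               [rho, 1.0]])

# Transform proposed in the solution above
A = np.array([[ 1.0, 1.0],
              [-1.0, 1.0]])

# Covariance of Y = A X
Sigma_Y = A @ Sigma_X @ A.T
print(Sigma_Y)  # off-diagonal entries are (numerically) zero
assert np.allclose(Sigma_Y[0, 1], 0.0) and np.allclose(Sigma_Y[1, 0], 0.0)
```

The off-diagonal entries vanish for any $ \sigma $ and $ \rho $ because they equal $ (1)(-1)\sigma^2+(1\cdot 1+1\cdot(-1))\rho\sigma^2+(1)(1)\sigma^2=0 $.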
Solution 2
For $ n $ flips, there are at most $ n-1 $ changeovers. Let $ k_i $ be the indicator random variable of a changeover between flips $ i $ and $ i+1 $; then
$ p({k_i}=1)=p(1-p)+(1-p)p=2p(1-p) $
$ E(k)=\sum_{i=1}^{n-1}p(k_i=1)=2(n-1)p(1-p) $
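A quick Monte Carlo sketch of this formula (assuming the intended setup is $ n $ independent flips, each heads with probability $ p $; the values of n, p, and the trial count below are arbitrary):

```python
import numpy as np

# Monte Carlo check of E[# changeovers] = 2(n-1)p(1-p)
# for n independent flips, each heads with probability p (assumed setup).
rng = np.random.default_rng(0)
n, p, trials = 10, 0.3, 200_000

flips = rng.random((trials, n)) < p              # True = heads
changeovers = (flips[:, 1:] != flips[:, :-1]).sum(axis=1)

print(changeovers.mean())                        # simulated average
print(2 * (n - 1) * p * (1 - p))                 # theoretical value: 3.78
```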
Critique on Solution 2:
It might be better to explicitly model each changeover as a Bernoulli (indicator) random variable so that the logic is easier to follow.