5 Exams
Example. Addition of two independent Poisson random variables
Let $ \mathbf{Z}=\mathbf{X}+\mathbf{Y} $ where $ \mathbf{X} $ and $ \mathbf{Y} $ are independent Poisson random variables with means $ \lambda $ and $ \mu $ , respectively.
(a)
Find the pmf of $ \mathbf{Z} $ .
Recall the characteristic function of a Poisson random variable:
$ \Phi_{\mathbf{X}}(\omega)=e^{-\lambda\left(1-e^{i\omega}\right)},\Phi_{\mathbf{Y}}(\omega)=e^{-\mu\left(1-e^{i\omega}\right)} $.
$ \mathbf{X} $ and $ \mathbf{Y} $ are independent $ \Longrightarrow e^{i\omega\mathbf{X}} $ and $ e^{i\omega\mathbf{Y}} $ are independent $ \Longrightarrow e^{i\omega\mathbf{X}} $ and $ e^{i\omega\mathbf{Y}} $ are uncorrelated, so the expectation of their product factors.
$ \Phi_{\mathbf{Z}}(\omega)=E\left[e^{i\omega\mathbf{Z}}\right]=E\left[e^{i\omega\left(\mathbf{X}+\mathbf{Y}\right)}\right]=E\left[e^{i\omega\mathbf{X}}e^{i\omega\mathbf{Y}}\right]=E\left[e^{i\omega\mathbf{X}}\right]\cdot E\left[e^{i\omega\mathbf{Y}}\right] $$ =e^{-\lambda\left(1-e^{i\omega}\right)}\cdot e^{-\mu\left(1-e^{i\omega}\right)}=e^{-\left(\lambda+\mu\right)\left(1-e^{i\omega}\right)}. $
Now, we know that $ \mathbf{Z} $ is a Poisson random variable with mean $ \lambda+\mu $ .
$ \therefore p_{\mathbf{Z}}(k)=\frac{e^{-\left(\lambda+\mu\right)}\left(\lambda+\mu\right)^{k}}{k!}. $
(b)
Show that the conditional pmf of $ \mathbf{X} $ conditioned on the event $ \left\{ \mathbf{Z}=n\right\} $ is a binomial pmf, and determine the parameters of the binomial distribution ($ n $ and $ p $ ).
$ P_{\mathbf{X}}\left(k|\left\{ \mathbf{Z}=n\right\} \right)=P\left(\left\{ \mathbf{X}=k\right\} |\left\{ \mathbf{Z}=n\right\} \right)=\frac{P\left(\left\{ \mathbf{X}=k\right\} \cap\left\{ \mathbf{Z}=n\right\} \right)}{P\left(\left\{ \mathbf{Z}=n\right\} \right)}=\frac{P\left(\left\{ \mathbf{X}=k\right\} \cap\left\{ \mathbf{Y}=n-k\right\} \right)}{P\left(\left\{ \mathbf{Z}=n\right\} \right)} $$ =\frac{\frac{e^{-\lambda}\lambda^{k}}{k!}\cdot\frac{e^{-\mu}\mu^{n-k}}{\left(n-k\right)!}}{\frac{e^{-\left(\lambda+\mu\right)}\left(\lambda+\mu\right)^{n}}{n!}}=\left(\frac{n!}{k!\left(n-k\right)!}\right)\left(\frac{\lambda}{\lambda+\mu}\right)^{k}\left(\frac{\mu}{\lambda+\mu}\right)^{n-k} $$ =\left(\begin{array}{c} n\\ k \end{array}\right)\left(\frac{\lambda}{\lambda+\mu}\right)^{k}\left(\frac{\mu}{\lambda+\mu}\right)^{n-k}\;,\; k=0,\,1,\,\cdots,\, n $
This is a binomial pmf $ b(n,p) $ with parameters $ n $ and $ p=\frac{\lambda}{\lambda+\mu} $ .
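Both results can be spot-checked numerically. Below is a minimal Monte Carlo sketch (the values of the means, the conditioning value n, and the sample size are arbitrary illustrative choices, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu, trials = 3.0, 5.0, 200_000   # illustrative values only

X = rng.poisson(lam, trials)
Y = rng.poisson(mu, trials)
Z = X + Y

# Z should be Poisson(lam + mu): mean and variance both close to 8.
print(Z.mean(), Z.var())

# Conditioned on Z = n, X should be Binomial(n, lam / (lam + mu)).
n = 8
Xn = X[Z == n]
p = lam / (lam + mu)
print(Xn.mean(), n * p)            # empirical vs. n*p
print(Xn.var(), n * p * (1 - p))   # empirical vs. n*p*(1-p)
```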
Example. Addition of two independent Gaussian random variables
$ \mathbf{X}\sim\mathcal{N}\left(0,\sigma_{\mathbf{X}}^{2}\right),\;\mathbf{N}\sim\mathcal{N}\left(0,\sigma_{\mathbf{N}}^{2}\right),\;\mathbf{Y}=\mathbf{X}+\mathbf{N}. $
(a)
Find the correlation coefficient between $ \mathbf{X} $ and $ \mathbf{Y} $ .
$ \sigma_{\mathbf{Y}}=\sqrt{\sigma_{\mathbf{X}}^{2}+2r_{\mathbf{XN}}\sigma_{\mathbf{X}}\sigma_{\mathbf{N}}+\sigma_{\mathbf{N}}^{2}}=\sqrt{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}} $
because $ \mathbf{X} $ and $ \mathbf{N} $ are independent $ \Longrightarrow $ uncorrelated $ \Longrightarrow r_{\mathbf{XN}}=0 $ .
$ r_{\mathbf{XY}}=\frac{\text{cov}(\mathbf{X},\mathbf{Y})}{\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}}=\frac{E\left[\mathbf{XY}\right]-E\left[\mathbf{X}\right]E\left[\mathbf{Y}\right]}{\sigma_{\mathbf{X}}\sqrt{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}}=\frac{E\left[\mathbf{X}\left(\mathbf{X}+\mathbf{N}\right)\right]}{\sigma_{\mathbf{X}}\sqrt{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}}=\frac{E\left[\mathbf{X}^{2}\right]+E\left[\mathbf{XN}\right]}{\sigma_{\mathbf{X}}\sqrt{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}} $$ =\frac{\sigma_{\mathbf{X}}^{2}+E\left[\mathbf{X}\right]E\left[\mathbf{N}\right]}{\sigma_{\mathbf{X}}\sqrt{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}}=\frac{\sigma_{\mathbf{X}}}{\sqrt{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}}\qquad\because E\left[\mathbf{X}\right]=0. $
(b)
Find the conditional pdf of $ \mathbf{X} $ conditioned on the event $ \left\{ \mathbf{Y}=y\right\} $ .
$ f_{\mathbf{X}}\left(x|\left\{ \mathbf{Y}=y\right\} \right)=\frac{f_{\mathbf{XY}}\left(x,y\right)}{f_{\mathbf{Y}}(y)}=\frac{\frac{1}{2\pi\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}\sqrt{1-r^{2}}}\exp\left\{ \frac{-1}{2\left(1-r^{2}\right)}\left[\frac{x^{2}}{\sigma_{\mathbf{X}}^{2}}-\frac{2rxy}{\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}}+\frac{y^{2}}{\sigma_{\mathbf{Y}}^{2}}\right]\right\} }{\frac{1}{\sqrt{2\pi}\sigma_{Y}}\exp\left\{ \frac{-y^{2}}{2\sigma_{Y}^{2}}\right\} } $$ =\frac{1}{\sqrt{2\pi}\sigma_{\mathbf{X}}\sqrt{1-r^{2}}}\exp\left\{ \frac{-1}{2\left(1-r^{2}\right)}\left[\frac{x^{2}}{\sigma_{\mathbf{X}}^{2}}-\frac{2rxy}{\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}}+\frac{y^{2}}{\sigma_{\mathbf{Y}}^{2}}-\frac{\left(1-r^{2}\right)y^{2}}{\sigma_{\mathbf{Y}}^{2}}\right]\right\} $ $ =\frac{1}{\sqrt{2\pi}\sigma_{\mathbf{X}}\sqrt{1-r^{2}}}\exp\left\{ \frac{-1}{2\left(1-r^{2}\right)}\left[\frac{x^{2}}{\sigma_{\mathbf{X}}^{2}}-\frac{2rxy}{\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}}+\frac{r^{2}y^{2}}{\sigma_{\mathbf{Y}}^{2}}\right]\right\} $ $ =\frac{1}{\sqrt{2\pi}\sigma_{\mathbf{X}}\sqrt{1-r^{2}}}\exp\left\{ \frac{-1}{2\left(1-r^{2}\right)\sigma_{\mathbf{X}}^{2}}\left[x^{2}-\frac{2r\sigma_{\mathbf{X}}xy}{\sigma_{\mathbf{Y}}}+\frac{r^{2}\sigma_{\mathbf{X}}^{2}y^{2}}{\sigma_{\mathbf{Y}}^{2}}\right]\right\} $ $ =\frac{1}{\sqrt{2\pi}\sigma_{\mathbf{X}}\sqrt{1-r^{2}}}\exp\left\{ \frac{-1}{2\left(1-r^{2}\right)\sigma_{\mathbf{X}}^{2}}\left(x-\frac{r\sigma_{\mathbf{X}}y}{\sigma_{\mathbf{Y}}}\right)^{2}\right\} $
Noting that $ \sqrt{1-r^{2}}=\sqrt{1-\left(\frac{\sigma_{\mathbf{X}}}{\sqrt{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}}\right)^{2}}=\sqrt{1-\frac{\sigma_{\mathbf{X}}^{2}}{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}}=\sqrt{\frac{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}-\sigma_{\mathbf{X}}^{2}}{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}}=\frac{\sigma_{\mathbf{N}}}{\sqrt{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}} $ and
$ r\cdot\frac{\sigma_{\mathbf{X}}}{\sigma_{\mathbf{Y}}}=\frac{\sigma_{X}}{\sqrt{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}}\cdot\frac{\sigma_{\mathbf{X}}}{\sqrt{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}}=\frac{\sigma_{\mathbf{X}}^{2}}{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}. $ $ \therefore f_{\mathbf{X}}\left(x|\left\{ \mathbf{Y}=y\right\} \right)=\frac{1}{\sqrt{2\pi}\cdot\frac{\sigma_{\mathbf{X}}\sigma_{\mathbf{N}}}{\sqrt{\sigma_{\mathbf{X}}^{2}+\mathbf{\sigma}_{\mathbf{N}}^{\mathbf{2}}}}}\exp\left\{ \frac{-1}{2\frac{\sigma_{\mathbf{X}}^{2}\sigma_{\mathbf{N}}^{2}}{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}}\left(x-\frac{\sigma_{\mathbf{X}}^{2}}{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}\cdot y\right)^{2}\right\} $
(c)
What kind of pdf is the pdf you determined in part (b)? What is the mean and variance of a random variable with this pdf?
This is a Gaussian pdf with mean $ \frac{\sigma_{\mathbf{X}}^{2}}{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}\cdot y $ and variance $ \frac{\sigma_{\mathbf{X}}^{2}\sigma_{\mathbf{N}}^{2}}{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}} $ .
(d)
What is the minimum mean-square estimate of $ \mathbf{X} $ given that $ \left\{ \mathbf{Y}=y\right\} $ ?
The minimum mean-square error estimate of $ \mathbf{X} $ given $ \mathbf{Y}=y $ is
$ \hat{x}_{MMS}(y)=E\left[\mathbf{X}|\left\{ \mathbf{Y}=y\right\} \right]=\int_{-\infty}^{\infty}x\cdot f_{\mathbf{X}}\left(x|\left\{ \mathbf{Y}=y\right\} \right)dx=\frac{\sigma_{\mathbf{X}}^{2}}{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}\cdot y $ from part (b).
(e)
What is the maximum a posteriori estimate of $ \mathbf{X} $ given that $ \left\{ \mathbf{Y}=y\right\} $ ?
$ \hat{x}_{MAP}(y)=\arg\max_{x\in\mathbf{R}}\left\{ f_{\mathbf{X}}\left(x|\left\{ \mathbf{Y}=y\right\} \right)\right\} =\frac{\sigma_{\mathbf{X}}^{2}}{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}\cdot y $
as a Gaussian pdf takes on its maximum value at its mean.
(f)
Given that I observe $ \mathbf{Y}=y $ , what is $ E\left[\mathbf{X}|\left\{ \mathbf{Y}=y\right\} \right] $ ?
$ E\left[\mathbf{X}|\left\{ \mathbf{Y}=y\right\} \right]=\frac{\sigma_{\mathbf{X}}^{2}}{\sigma_{\mathbf{X}}^{2}+\sigma_{\mathbf{N}}^{2}}\cdot y $ from part (d).
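The conditional-mean and conditional-variance formulas above can be spot-checked by simulation, conditioning crudely on a thin slab of values around a fixed y. A minimal sketch (the values of the standard deviations, the test point, and the sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
sx, sn, trials = 2.0, 1.0, 500_000   # illustrative values only

X = rng.normal(0.0, sx, trials)
N = rng.normal(0.0, sn, trials)
Y = X + N

# Condition on a thin slab around y0 and compare with the formulas.
y0 = 1.5
sel = X[np.abs(Y - y0) < 0.05]
print(sel.mean(), sx**2 / (sx**2 + sn**2) * y0)     # conditional mean
print(sel.var(), sx**2 * sn**2 / (sx**2 + sn**2))   # conditional variance
```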
Example. Addition of two jointly distributed Gaussian random variables
Let $ \mathbf{X} $ and $ \mathbf{Y} $ be two jointly distributed Gaussian random variables. Assume $ \mathbf{X} $ has mean $ \mu_{\mathbf{X}} $ and variance $ \sigma_{\mathbf{X}}^{2} $ , $ \mathbf{Y} $ has mean $ \mu_{\mathbf{Y}} $ and variance $ \sigma_{\mathbf{Y}}^{2} $ , and that the correlation coefficient between $ \mathbf{X} $ and $ \mathbf{Y} $ is $ r $ . Define a new random variable $ \mathbf{Z}=\mathbf{X}+\mathbf{Y} $ .
(a)
Show that $ \mathbf{Z} $ is a Gaussian random variable.
If $ \mathbf{Z} $ is a Gaussian random variable, then it has a characteristic function of the form
$ \Phi_{\mathbf{Z}}\left(\omega\right)=e^{i\mu_{\mathbf{Z}}\omega}e^{-\frac{1}{2}\sigma_{\mathbf{Z}}^{2}\omega^{2}}. $
$ \Phi_{\mathbf{Z}}\left(\omega\right)=E\left[e^{i\omega\mathbf{Z}}\right]=E\left[e^{i\omega\left(\mathbf{X}+\mathbf{Y}\right)}\right]=\Phi_{\mathbf{XY}}\left(\omega,\omega\right), $
where $ \Phi_{\mathbf{XY}}\left(\omega_{1},\omega_{2}\right) $ is the joint characteristic function of $ \mathbf{X} $ and $ \mathbf{Y} $ , defined as
$ \Phi_{\mathbf{XY}}\left(\omega_{1},\omega_{2}\right)=E\left[e^{i\left(\mathbf{\omega_{1}X}+\omega_{2}\mathbf{Y}\right)}\right]. $
Now because $ \mathbf{X} $ and $ \mathbf{Y} $ are jointly Gaussian with the given parameters, we know that
$ \Phi_{\mathbf{XY}}\left(\omega_{1},\omega_{2}\right)=e^{i\left(\mu_{X}\omega_{1}+\mu_{Y}\omega_{2}\right)}e^{-\frac{1}{2}\left(\sigma_{X}^{2}\omega_{1}^{2}+2r\sigma_{X}\sigma_{Y}\omega_{1}\omega_{2}+\sigma_{Y}^{2}\omega_{2}^{2}\right)}. $
Thus,
$ \Phi_{\mathbf{Z}}\left(\omega\right)=\Phi_{\mathbf{XY}}\left(\omega,\omega\right)=e^{i\left(\mu_{X}\omega+\mu_{Y}\omega\right)}e^{-\frac{1}{2}\left(\sigma_{X}^{2}\omega^{2}+2r\sigma_{X}\sigma_{Y}\omega^{2}+\sigma_{Y}^{2}\omega^{2}\right)} $$ =e^{i\left(\mu_{X}+\mu_{Y}\right)\omega}e^{-\frac{1}{2}\left(\sigma_{X}^{2}+2r\sigma_{X}\sigma_{Y}+\sigma_{Y}^{2}\right)\omega^{2}}=e^{i\mu_{Z}\omega}e^{-\frac{1}{2}\sigma_{Z}^{2}\omega^{2}} $
where $ \mu_{Z}=\mu_{X}+\mu_{Y} $ and $ \sigma_{Z}^{2}=\sigma_{X}^{2}+2r\sigma_{X}\sigma_{Y}+\sigma_{Y}^{2} $ .
$ \mathbf{Z} $ is a Gaussian random variable with $ E\left[\mathbf{Z}\right]=\mu_{X}+\mu_{Y} $ and $ Var\left[\mathbf{Z}\right]=\sigma_{X}^{2}+2r\sigma_{X}\sigma_{Y}+\sigma_{Y}^{2} $ .
(b)
Find the variance of $ \mathbf{Z} $ .
As shown in part (a), $ Var\left[\mathbf{Z}\right]=\sigma_{\mathbf{X}}^{2}+2r\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}+\sigma_{\mathbf{Y}}^{2} $ .
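A quick Monte Carlo check of the mean and variance of $ \mathbf{Z} $ (the particular parameter values and sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
mu_x, mu_y, sx, sy, r = 1.0, -2.0, 2.0, 3.0, 0.6   # illustrative values only

cov = [[sx**2, r * sx * sy],
       [r * sx * sy, sy**2]]
xy = rng.multivariate_normal([mu_x, mu_y], cov, size=500_000)
Z = xy[:, 0] + xy[:, 1]

print(Z.mean(), mu_x + mu_y)                      # E[Z]
print(Z.var(), sx**2 + 2 * r * sx * sy + sy**2)   # Var[Z]
```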
Example. Two jointly distributed random variables
Two jointly distributed random variables $ \mathbf{X} $ and $ \mathbf{Y} $ have joint pdf
$ f_{\mathbf{XY}}\left(x,y\right)=\begin{cases} \begin{array}{ll} c ,\text{ for }x\geq0,y\geq0,\textrm{ and }x+y\leq1\\ 0 ,\text{ elsewhere.} \end{array}\end{cases} $
(a)
Find the constant $ c $ such that $ f_{\mathbf{XY}}(x,y) $ is a valid pdf.
$ \iint_{\mathbf{R}^{2}}f_{\mathbf{XY}}\left(x,y\right)dx\, dy=c\cdot\textrm{Area}=1 $ where $ \textrm{Area}=\frac{1}{2} $ is the area of the triangular support.
$ \therefore c=2 $
(b)
Find the conditional density of $ \mathbf{Y} $ conditioned on $ \mathbf{X}=x $ .
$ f_{\mathbf{Y}}\left(y|\left\{ \mathbf{X}=x\right\} \right)=\frac{f_{\mathbf{XY}}\left(x,y\right)}{f_{\mathbf{X}}(x)}. $
$ f_{\mathbf{X}}(x)=\int_{-\infty}^{\infty}f_{\mathbf{XY}}\left(x,y\right)dy=\int_{0}^{1-x}2dy=2\left(1-x\right)\cdot\mathbf{1}_{\left[0,1\right]}(x). $
$ f_{\mathbf{Y}}\left(y|\left\{ \mathbf{X}=x\right\} \right)=\frac{f_{\mathbf{XY}}\left(x,y\right)}{f_{\mathbf{X}}(x)}=\frac{2}{2\left(1-x\right)}=\frac{1}{1-x}\textrm{ where }0\leq y\leq1-x\Longrightarrow\frac{1}{1-x}\cdot\mathbf{1}_{\left[0,1-x\right]}\left(y\right). $
(c)
Find the minimum mean-square error estimator $ \hat{y}_{MMS}\left(x\right) $ of $ \mathbf{Y} $ given that $ \mathbf{X}=x $ .
$ \hat{y}_{MMS}\left(x\right)=E\left[\mathbf{Y}|\left\{ \mathbf{X}=x\right\} \right]=\int_{\mathbf{R}}yf_{\mathbf{Y}}\left(y|\left\{ \mathbf{X}=x\right\} \right)dy=\int_{0}^{1-x}\frac{y}{1-x}dy=\frac{y^{2}}{2\left(1-x\right)}\biggl|_{0}^{1-x}=\frac{1-x}{2}. $
(d)
Find a maximum a posteriori probability estimator.
$ \hat{y}_{MAP}\left(x\right)=\arg\max_{y}\left\{ f_{Y}\left(y|\left\{ \mathbf{X}=x\right\} \right)\right\} $ but $ f_{Y}\left(y|\left\{ \mathbf{X}=x\right\} \right)=\frac{1}{1-x}\cdot\mathbf{1}_{\left[0,1-x\right]}\left(y\right) $ . Any $ \hat{y}\in\left[0,1-x\right] $ is a MAP estimator. The MAP estimator is NOT unique.
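The conditional-density and MMSE results can be spot-checked by sampling uniformly over the triangular support. A minimal sketch (the test point, the window width, and the sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 2_000_000

# Sample uniformly on the triangle x >= 0, y >= 0, x + y <= 1 by rejection.
x = rng.random(trials)
y = rng.random(trials)
keep = x + y <= 1
x, y = x[keep], y[keep]

# E[Y | X = x0] should be (1 - x0) / 2.
x0 = 0.3
ys = y[np.abs(x - x0) < 0.01]
print(ys.mean(), (1 - x0) / 2)
# The conditional density is flat on [0, 1 - x0]; the largest sample is near 1 - x0.
print(ys.max(), 1 - x0)
```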
Example. Two jointly distributed independent random variables
Let $ \mathbf{X} $ and $ \mathbf{Y} $ be two jointly distributed, independent random variables. The pdf of $ \mathbf{X} $ is
$ f_{\mathbf{X}}\left(x\right)=xe^{-x^{2}/2}\cdot\mathbf{1}_{\left[0,\infty\right)}\left(x\right) $, and $ \mathbf{Y} $ is a Gaussian random variable with mean 0 and variance 1 . Let $ \mathbf{U} $ and $ \mathbf{V} $ be two new random variables defined as $ \mathbf{U}=\sqrt{\mathbf{X}^{2}+\mathbf{Y}^{2}} $ and $ \mathbf{V}=\lambda\mathbf{Y}/\mathbf{X} $ where $ \lambda $ is a positive real number.
(a)
Find the joint pdf of $ \mathbf{U} $ and $ \mathbf{V} $ . (Direct pdf method)
$ f_{\mathbf{UV}}\left(u,v\right)=f_{\mathbf{XY}}\left(x\left(u,v\right),y\left(u,v\right)\right)\left|\frac{\partial\left(x,y\right)}{\partial\left(u,v\right)}\right| $
Solving for x and y in terms of u and v , we have $ u^{2}=x^{2}+y^{2} $ and $ v^{2}=\frac{\lambda^{2}y^{2}}{x^{2}}\Longrightarrow y^{2}=\frac{v^{2}x^{2}}{\lambda^{2}} $ .
Now, $ u^{2}=x^{2}+y^{2}=x^{2}+\frac{v^{2}x^{2}}{\lambda^{2}}=x^{2}\left(1+v^{2}/\lambda^{2}\right)\Longrightarrow x=\frac{u}{\sqrt{1+v^{2}/\lambda^{2}}}\Longrightarrow x\left(u,v\right)=\frac{u}{\sqrt{1+v^{2}/\lambda^{2}}} $ .
Thus, $ y=\frac{vx}{\lambda}=\frac{vu}{\lambda\sqrt{1+v^{2}/\lambda^{2}}}\Longrightarrow y\left(u,v\right)=\frac{vu}{\lambda\sqrt{1+v^{2}/\lambda^{2}}} $ .
Computing the Jacobian.
$ \left|\frac{\partial\left(x,y\right)}{\partial\left(u,v\right)}\right|=\left|\frac{\partial x}{\partial u}\frac{\partial y}{\partial v}-\frac{\partial x}{\partial v}\frac{\partial y}{\partial u}\right|=\frac{u}{\lambda\left(1+v^{2}/\lambda^{2}\right)}=\frac{u\lambda}{\lambda^{2}+v^{2}}. $
Because $ \mathbf{X} $ and $ \mathbf{Y} $ are statistically independent
$ f_{\mathbf{XY}}\left(x,y\right)=f_{\mathbf{X}}\left(x\right)f_{\mathbf{Y}}\left(y\right)=xe^{-x^{2}/2}\cdot\mathbf{1}_{\left[0,\infty\right)}\left(x\right)\cdot\frac{1}{\sqrt{2\pi}}e^{-y^{2}/2}=\frac{x}{\sqrt{2\pi}}e^{-\left(x^{2}+y^{2}\right)/2}\cdot\mathbf{1}_{\left[0,\infty\right)}\left(x\right). $
Substituting these quantities, we get
$ f_{\mathbf{UV}}\left(u,v\right)=\frac{x\left(u,v\right)}{\sqrt{2\pi}}e^{-u^{2}/2}\cdot\mathbf{1}_{\left[0,\infty\right)}\left(u\right)\cdot\frac{u\lambda}{\lambda^{2}+v^{2}}=\frac{\lambda^{2}}{\sqrt{2\pi}}\cdot\frac{u^{2}e^{-u^{2}/2}}{\left(\lambda^{2}+v^{2}\right)^{3/2}}\cdot\mathbf{1}_{\left[0,\infty\right)}\left(u\right). $
(b)
Are $ \mathbf{U} $ and $ \mathbf{V} $ statistically independent? Justify your answer.
$ \mathbf{U} $ and $ \mathbf{V} $ are statistically independent iff $ f_{\mathbf{UV}}\left(u,v\right)=f_{\mathbf{U}}\left(u\right)f_{\mathbf{V}}\left(v\right) $ .
Now from part (a), we see that $ f_{\mathbf{UV}}\left(u,v\right)=c_{1}g_{1}\left(u\right)\cdot c_{2}g_{2}\left(v\right) $ where $ g_{1}\left(u\right)=u^{2}e^{-u^{2}/2}\cdot\mathbf{1}_{\left[0,\infty\right)}\left(u\right) $ and $ g_{2}\left(v\right)=\frac{1}{\left(\lambda^{2}+v^{2}\right)^{\frac{3}{2}}} $ with $ c_{1} $ and $ c_{2} $ selected such that $ f_{\mathbf{U}}\left(u\right)=c_{1}g_{1}\left(u\right) $ and $ f_{\mathbf{V}}\left(v\right)=c_{2}g_{2}\left(v\right) $ are both valid pdfs.
$ \therefore \mathbf{U} $ and $ \mathbf{V} $ are statistically independent.
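The independence conclusion can be probed numerically: $ \mathbf{X} $ has exactly the Rayleigh pdf $ xe^{-x^{2}/2} $ on $ \left[0,\infty\right) $, and for independent $ \mathbf{U} $ and $ \mathbf{V} $ joint probabilities should factor into products of marginals. A minimal sketch (the value of λ, the test events, and the sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, trials = 2.0, 1_000_000   # illustrative values only

X = rng.rayleigh(1.0, trials)      # pdf x * exp(-x^2 / 2) on [0, inf)
Y = rng.normal(0.0, 1.0, trials)
U = np.sqrt(X**2 + Y**2)
V = lam * Y / X

# If U and V are independent, P(A and B) should equal P(A) * P(B).
a = U < 1.5
b = np.abs(V) < lam
print((a & b).mean(), a.mean() * b.mean())
```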
Example. Two jointly distributed random variables (Joint characteristic function)
Let $ \mathbf{X} $ and $ \mathbf{Y} $ be two jointly distributed random variables having joint characteristic function
$ \Phi_{\mathbf{XY}}\left(\omega_{1},\omega_{2}\right)=\frac{1}{\left(1-i\omega_{1}\right)\left(1-i\omega_{2}\right)}. $
(a)
Calculate $ E\left[\mathbf{X}\right] $ .
$ \Phi_{\mathbf{X}}\left(\omega\right)=\Phi_{\mathbf{XY}}\left(\omega,0\right)=\frac{1}{1-i\omega}=\left(1-i\omega\right)^{-1} $
$ E\left[\mathbf{X}\right]=\frac{d}{d\left(i\omega\right)}\Phi_{\mathbf{X}}\left(\omega\right)|_{i\omega=0}=(-1)(1-i\omega)^{-2}(-1)|_{i\omega=0}=1 $
(b)
Calculate $ E\left[\mathbf{Y}\right] $
$ E\left[\mathbf{Y}\right]=1 $
(c)
Calculate $ E\left[\mathbf{XY}\right] $ .
$ E\left[\mathbf{XY}\right]=\frac{\partial^{2}}{\partial\left(i\omega_{1}\right)\partial\left(i\omega_{2}\right)}\Phi_{\mathbf{XY}}\left(\omega_{1},\omega_{2}\right)|_{i\omega_{1}=i\omega_{2}=0}=\left(1-i\omega_{1}\right)^{-2}\left(1-i\omega_{2}\right)^{-2}|_{i\omega_{1}=i\omega_{2}=0}=1 $
(d)
Calculate $ E\left[\mathbf{X}^{j}\mathbf{Y}^{k}\right] $ .
$ E\left[\mathbf{X}^{j}\mathbf{Y}^{k}\right]=\frac{\partial^{j+k}}{\partial\left(i\omega_{1}\right)^{j}\partial\left(i\omega_{2}\right)^{k}}\Phi_{\mathbf{XY}}\left(\omega_{1},\omega_{2}\right)\biggl|_{i\omega_{1}=i\omega_{2}=0}=j!\left(1-i\omega_{1}\right)^{-\left(j+1\right)}k!\left(1-i\omega_{2}\right)^{-\left(k+1\right)}\biggl|_{i\omega_{1}=i\omega_{2}=0}=j!\, k! $
(e)
Calculate the correlation coefficient $ r_{\mathbf{XY}} $ between $ \mathbf{X} $ and $ \mathbf{Y} $ .
$ r_{\mathbf{XY}}=\frac{Cov\left(\mathbf{X},\mathbf{Y}\right)}{\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}}=\frac{E\left[\mathbf{XY}\right]-E\left[\mathbf{X}\right]E\left[\mathbf{Y}\right]}{\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}}=\frac{1-1\cdot1}{\sigma_{\mathbf{X}}\sigma_{\mathbf{Y}}}=0. $
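Since $ \Phi_{\mathbf{XY}} $ factors into $ \left(1-i\omega_{1}\right)^{-1}\left(1-i\omega_{2}\right)^{-1} $ , $ \mathbf{X} $ and $ \mathbf{Y} $ are independent exponential random variables with mean 1, so the moment results above can be sanity-checked by simulation. A minimal sketch (the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(5)
trials = 2_000_000

# (1 - i*w)^(-1) is the characteristic function of an exponential(1) variable.
X = rng.exponential(1.0, trials)
Y = rng.exponential(1.0, trials)

print(X.mean(), Y.mean(), (X * Y).mean())   # all close to 1
print(np.corrcoef(X, Y)[0, 1])              # close to 0
print((X**2 * Y**3).mean(), 2 * 6)          # E[X^2 Y^3] = 2! * 3! = 12
```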
Example. Geometric random variable
Let $ \mathbf{X} $ be a random variable with probability mass function
$ p_{\mathbf{X}}\left(k\right)=\alpha\left(1-\alpha\right)^{k-1},k=1,2,3,\cdots $
where $ 0<\alpha<1 $ .
Note
This is a geometric random variable with success probability $ \alpha $ .
(a) Find the characteristic function of $ \mathbf{X} $ .
$ \Phi_{\mathbf{X}}\left(\omega\right)=E\left[e^{i\omega\mathbf{X}}\right]=\sum_{k=1}^{\infty}e^{i\omega k}\alpha\left(1-\alpha\right)^{k-1}=\alpha e^{i\omega}\sum_{k=1}^{\infty}\left(e^{i\omega}\left(1-\alpha\right)\right)^{k-1}=\frac{\alpha e^{i\omega}}{1-e^{i\omega}\left(1-\alpha\right)} $
since $ \left|e^{i\omega}\left(1-\alpha\right)\right|<1 $ .
$ \because\left|e^{i\omega}\right|=1 $ and $ 0<1-\alpha<1 $ , so $ \left|e^{i\omega}\left(1-\alpha\right)\right|=1-\alpha<1 $ and the geometric series converges.
(b) Find the mean of $ \mathbf{X} $ .
$ E\left[\mathbf{X}\right]=\frac{d}{d\left(i\omega\right)}\Phi_{\mathbf{X}}\left(\omega\right)\biggl|_{i\omega=0}=\frac{\alpha e^{i\omega}}{\left(1-e^{i\omega}\left(1-\alpha\right)\right)^{2}}\biggl|_{i\omega=0}=\frac{\alpha}{\alpha^{2}}=\frac{1}{\alpha} $
Note
See [CS1GeometricDistribution] for another approach to finding $ E\left[\mathbf{X}\right] $ and $ Var\left[\mathbf{X}\right] $ .
(c) Find the variance of $ \mathbf{X} $ .
$ E\left[\mathbf{X}^{2}\right]=\frac{d^{2}}{d\left(i\omega\right)^{2}}\Phi_{\mathbf{X}}\left(\omega\right)\biggl|_{i\omega=0}=\frac{d}{d\left(i\omega\right)}\left[\frac{\alpha e^{i\omega}}{1-e^{i\omega}\left(1-\alpha\right)}+\frac{\alpha\left(1-\alpha\right)e^{i\omega2}}{\left(1-e^{i\omega}\left(1-\alpha\right)\right)^{2}}\right]\Biggl|_{i\omega=0}=\frac{1}{\alpha}+\alpha\left(1-\alpha\right)\cdot\frac{2}{\alpha^{3}}=\frac{2-\alpha}{\alpha^{2}} $
because
$ \frac{d}{d\left(i\omega\right)}e^{i\omega2}\left(1-e^{i\omega}\left(1-\alpha\right)\right)^{-2}\biggl|_{i\omega=0}=\left[2e^{i\omega2}\left(1-e^{i\omega}\left(1-\alpha\right)\right)^{-2}+2\left(1-\alpha\right)e^{i\omega3}\left(1-e^{i\omega}\left(1-\alpha\right)\right)^{-3}\right]\biggl|_{i\omega=0}=\frac{2}{\alpha^{2}}+\frac{2\left(1-\alpha\right)}{\alpha^{3}}=\frac{2}{\alpha^{3}}. $
$ Var\left[\mathbf{X}\right]=E\left[\mathbf{X}^{2}\right]-\left(E\left[\mathbf{X}\right]\right)^{2}=\frac{2-\alpha}{\alpha^{2}}-\frac{1}{\alpha^{2}}=\frac{1-\alpha}{\alpha^{2}}. $
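The mean and variance can be cross-checked against a simulated geometric random variable. A minimal sketch (α and the sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, trials = 0.3, 1_000_000   # illustrative values only

# numpy's geometric counts the trial of the first success, i.e. k = 1, 2, 3, ...
X = rng.geometric(alpha, trials)

print(X.mean(), 1 / alpha)               # E[X] = 1/alpha
print(X.var(), (1 - alpha) / alpha**2)   # Var[X] = (1-alpha)/alpha^2
```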
Example. Sequence of binomially distributed random variables
Let $ \left\{ \mathbf{X}_{n}\right\} _{n\geq1} $ be a sequence of binomially distributed random variables, with the $ n^{\textrm{th}} $ random variable $ \mathbf{X}_{n} $ having pmf
$ P_{\mathbf{X}_{n}}(k)=P\left(\left\{ \mathbf{X}_{n}=k\right\} \right)=\left(\begin{array}{c} n\\ k \end{array}\right)p_{n}^{k}\left(1-p_{n}\right)^{n-k}\;,\; k=0,1,\cdots,n,\; p_{n}\in\left(0,1\right). $ Show that, if the $ p_{n} $ have the property that $ np_{n}\rightarrow\lambda $ as $ n\rightarrow\infty $ , where $ \lambda $ is a positive constant, then the sequence $ \left\{ \mathbf{X}_{n}\right\} _{n\geq1} $ converges in distribution to a Poisson random variable $ \mathbf{X} $ with mean $ \lambda $ .
Hint:
You may find the following fact useful:
$ \lim_{n\rightarrow\infty}\left(1+\frac{x}{n}\right)^{n}=e^{x}. $
Solution
$ \mathbf{X}_{n} $ converges to $ \mathbf{X} $ in distribution iff $ F_{\mathbf{X}_{n}}(x)\rightarrow F_{\mathbf{X}}(x) $ for every $ x\in\mathbf{R} $ at which $ F_{\mathbf{X}}(x) $ is continuous. By the continuity theorem, this occurs iff $ \Phi_{\mathbf{X}_{n}}(\omega)\rightarrow\Phi_{\mathbf{X}}(\omega) $ for all $ \omega\in\mathbf{R} $ . We will show that $ \Phi_{\mathbf{X}_{n}}(\omega) $ converges to $ e^{-\lambda\left(1-e^{i\omega}\right)} $ as $ n\rightarrow\infty $ , which is the characteristic function of a Poisson random variable with mean $ \lambda $ .
$ \Phi_{\mathbf{X}_{n}}(\omega)=E\left[e^{i\omega\mathbf{X}_{n}}\right]=\sum_{k=0}^{n}e^{i\omega k}\left(\begin{array}{c} n\\ k \end{array}\right)p_{n}^{k}\left(1-p_{n}\right)^{n-k}=\left(1-p_{n}+p_{n}e^{i\omega}\right)^{n}=\left(1+p_{n}\left(e^{i\omega}-1\right)\right)^{n}. $
Now as $ n\rightarrow\infty $ , $ np_{n}\rightarrow\lambda\Rightarrow p_{n}\rightarrow\frac{\lambda}{n} $ .
$ \lim_{n\rightarrow\infty}\Phi_{\mathbf{X}_{n}}(\omega)=\lim_{n\rightarrow\infty}\left(1+p_{n}\left(e^{i\omega}-1\right)\right)^{n}=\lim_{n\rightarrow\infty}\left(1+\frac{\lambda}{n}\left(e^{i\omega}-1\right)\right)^{n}=e^{\lambda\left(e^{i\omega}-1\right)}=e^{-\lambda\left(1-e^{i\omega}\right)}, $
which is the characteristic function of a Poisson random variable with mean $ \lambda $ .
c.f.
Problem 2 of the August 2007 QE [CS1QE2007August] is identical to this example.
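The convergence can also be seen numerically by comparing the Binomial$ \left(n,\lambda/n\right) $ pmf with the Poisson$ \left(\lambda\right) $ pmf as $ n $ grows. A small sketch (λ and the values of n are arbitrary illustrative choices):

```python
import math

lam = 4.0   # illustrative value only

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

# The maximum pmf difference over small k shrinks as n grows with p_n = lam/n.
for n in (10, 100, 1000):
    diff = max(abs(binom_pmf(k, n, lam / n) - poisson_pmf(k, lam)) for k in range(11))
    print(n, diff)
```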
Example. Sequence of exponentially distributed random variables
Let $ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n} $ be a collection of i.i.d. exponentially distributed random variables, each having mean $ \mu $ . Define
$ \mathbf{Y}=\max\left\{ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\right\} $
and
$ \mathbf{Z}=\min\left\{ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\right\} $ .
(a) Find the pdf of $ \mathbf{Y} $ .
$ F_{\mathbf{Y}}\left(y\right)=P\left(\left\{ \mathbf{Y}\leq y\right\} \right)=P\left(\left\{ \mathbf{X}_{1}\leq y\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}\leq y\right\} \right)=\prod_{k=1}^{n}F_{\mathbf{X}_{k}}\left(y\right)=\left(1-e^{-y/\mu}\right)^{n}\cdot\mathbf{1}_{\left[0,\infty\right)}\left(y\right). $
$ f_{\mathbf{Y}}(y)=\frac{dF_{\mathbf{Y}}(y)}{dy}=\frac{n}{\mu}e^{-y/\mu}\left(1-e^{-y/\mu}\right)^{n-1}\cdot\mathbf{1}_{\left[0,\infty\right)}\left(y\right). $
(b) Find the pdf of $ \mathbf{Z} $
$ F_{\mathbf{Z}}(z)=1-P\left(\left\{ \mathbf{Z}>z\right\} \right)=1-P\left(\left\{ \mathbf{X}_{1}>z\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}>z\right\} \right)=1-\left(e^{-z/\mu}\right)^{n}=\left(1-e^{-nz/\mu}\right)\cdot\mathbf{1}_{\left[0,\infty\right)}\left(z\right). $
$ f_{\mathbf{Z}}(z)=\frac{dF_{\mathbf{Z}}(z)}{dz}=\frac{n}{\mu}e^{-nz/\mu}\cdot\mathbf{1}_{\left[0,\infty\right)}\left(z\right). $
(c) In words, give as complete a description of the random variable $ \mathbf{Z} $ as you can.
$ \mathbf{Z} $ is an exponentially distributed random variable with mean $ \frac{\mu}{n} $ .
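A quick Monte Carlo check of the max/min results (μ, n and the sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
mu, n, trials = 2.0, 5, 500_000   # illustrative values only

X = rng.exponential(mu, (trials, n))
Y = X.max(axis=1)
Z = X.min(axis=1)

print(Z.mean(), mu / n)                              # Z ~ exponential with mean mu/n
y0 = 3.0
print((Y <= y0).mean(), (1 - np.exp(-y0 / mu))**n)   # F_Y(y0) = (1 - e^{-y0/mu})^n
```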
Example. Sequence of uniformly distributed random variables
Let $ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n} $ be $ n $ i.i.d. jointly distributed random variables, each uniformly distributed on the interval $ \left[0,1\right] $ . Define the new random variables $ \mathbf{W}=\max\left\{ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\right\} $
and
$ \mathbf{Z}=\min\left\{ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\right\} $ .
(a) Find the pdf of $ \mathbf{W} $ .
$ F_{\mathbf{W}}(w)=P\left(\left\{ \mathbf{W}\leq w\right\} \right)=P\left(\left\{ \mathbf{X}_{1}\leq w\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}\leq w\right\} \right)=\left(F_{\mathbf{X}}\left(w\right)\right)^{n}=w^{n}\textrm{ for }0\leq w\leq1, $
where $ f_{\mathbf{X}}(x)=\mathbf{1}_{\left[0,1\right]}(x) $ and $ F_{\mathbf{X}}\left(x\right)=\left\{ \begin{array}{ll} 0, & x<0\\ x, & 0\leq x<1\\ 1, & x\geq1 \end{array}\right. $ .
$ f_{\mathbf{W}}\left(w\right)=\frac{dF_{\mathbf{W}}(w)}{dw}=n\left(F_{\mathbf{X}}(w)\right)^{n-1}f_{\mathbf{X}}(w)=nw^{n-1}\cdot\mathbf{1}_{\left[0,1\right]}(w). $
(b) Find the pdf of $ \mathbf{Z} $ .
$ F_{\mathbf{Z}}(z)=1-P\left(\left\{ \mathbf{Z}>z\right\} \right)=1-\left(1-F_{\mathbf{X}}(z)\right)^{n}=1-\left(1-z\right)^{n}\textrm{ for }0\leq z\leq1. $
$ f_{\mathbf{Z}}(z)=\frac{dF_{\mathbf{Z}}(z)}{dz}=n\left(1-z\right)^{n-1}\cdot\mathbf{1}_{\left[0,1\right]}(z). $
(c) Find the mean of $ \mathbf{W} $ .
$ E\left[\mathbf{W}\right]=\int_{0}^{1}w\, f_{\mathbf{W}}\left(w\right)dw=\int_{0}^{1}nw^{n}\, dw=\frac{n}{n+1}. $
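These order-statistic formulas are easy to verify by simulation. A minimal sketch (n, the test points, and the sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(8)
n, trials = 4, 500_000   # illustrative values only

X = rng.random((trials, n))
W = X.max(axis=1)
Z = X.min(axis=1)

print(W.mean(), n / (n + 1))               # E[W] = n/(n+1)
w0 = 0.7
print((W <= w0).mean(), w0**n)             # F_W(w) = w^n on [0,1]
z0 = 0.2
print((Z <= z0).mean(), 1 - (1 - z0)**n)   # F_Z(z) = 1 - (1-z)^n on [0,1]
```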
Example. Mean of i.i.d. random variables
Let $ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{M} $ be $ M $ jointly distributed i.i.d. random variables with mean $ \mu $ and variance $ \sigma^{2} $ . Let $ \mathbf{Y}_{M}=\frac{1}{M}\sum_{n=1}^{M}\mathbf{X}_{n} $ .
(a) Find the variance of $ \mathbf{Y}_{M} $ .
$ Var\left[\mathbf{Y}_{M}\right]=E\left[\mathbf{Y}_{M}^{2}\right]-\left(E\left[\mathbf{Y}_{M}\right]\right)^{2}. $
$ E\left[\mathbf{Y}_{M}\right]=E\left[\frac{1}{M}\sum_{n=1}^{M}\mathbf{X}_{n}\right]=\frac{1}{M}\sum_{n=1}^{M}E\left[\mathbf{X}_{n}\right]=\frac{1}{M}\cdot M\cdot\mu=\mu. $
$ E\left[\mathbf{Y}_{M}^{2}\right]=E\left[\frac{1}{M^{2}}\sum_{m=1}^{M}\sum_{n=1}^{M}\mathbf{X}_{m}\mathbf{X}_{n}\right]=\frac{1}{M^{2}}\sum_{m=1}^{M}\sum_{n=1}^{M}E\left[\mathbf{X}_{m}\mathbf{X}_{n}\right]. $
Now $ E\left[\mathbf{X}_{m}\mathbf{X}_{n}\right]=\begin{cases} \begin{array}{ll} E\left[\mathbf{X}_{m}^{2}\right] ,m=n\\ E\left[\mathbf{X}_{m}\right]E\left[\mathbf{X}_{n}\right] ,m\neq n \end{array}\end{cases} $ because when $ m\neq n $ , $ \mathbf{X}_{m} $ and $ \mathbf{X}_{n} $ are independent $ \Rightarrow \mathbf{X}_{m} $ and $ \mathbf{X}_{n} $ are uncorrelated.
$ E\left[\mathbf{Y}_{M}^{2}\right]=\frac{1}{M^{2}}\left[M\left(\mu^{2}+\sigma^{2}\right)+M\left(M-1\right)\mu^{2}\right]=\frac{\left(\mu^{2}+\sigma^{2}\right)+\left(M-1\right)\mu^{2}}{M}=\frac{M\mu^{2}+\sigma^{2}}{M}. $
$ Var\left[\mathbf{Y}_{M}\right]=\frac{M\mu^{2}+\sigma^{2}-M\mu^{2}}{M}=\frac{\sigma^{2}}{M}. $
(b) Now assume that the $ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{M} $ are identically distributed with mean $ \mu $ and variance $ \sigma^{2} $ , but they are only uncorrelated rather than independent. Find the variance of $ \mathbf{Y}_{M} $ .
Again, $ Var\left[\mathbf{Y}_{M}\right]=\frac{\sigma^{2}}{M} $ , because only uncorrelatedness was used in part (a).
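The result $ Var\left[\mathbf{Y}_{M}\right]=\frac{\sigma^{2}}{M} $ is easy to verify by simulation. A minimal sketch (Gaussian samples and the particular values of M, μ, σ are arbitrary illustrative choices; any i.i.d. distribution with this mean and variance would do):

```python
import numpy as np

rng = np.random.default_rng(9)
M, trials = 20, 200_000
mu, sigma = 1.0, 3.0   # illustrative values only

X = rng.normal(mu, sigma, (trials, M))
Y = X.mean(axis=1)     # Y_M = (1/M) * sum of the X_n

print(Y.mean(), mu)              # E[Y_M] = mu
print(Y.var(), sigma**2 / M)     # Var[Y_M] = sigma^2 / M
```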
Example. A sum of a random number of i.i.d. Gaussians
Let $ \left\{ \mathbf{X}_{n}\right\} $ be a sequence of i.i.d. Gaussian random variables, each having characteristic function
$ \Phi_{\mathbf{X}}\left(\omega\right)=e^{i\mu\omega}e^{-\frac{1}{2}\sigma^{2}\omega^{2}} $. Let $ \mathbf{N} $ be a Poisson random variable with pmf
$ p(n)=\frac{e^{-\lambda}\lambda^{n}}{n!},\; n=0,1,2,\cdots,\;\lambda>0, $ and assume $ \mathbf{N} $ is statistically independent of $ \left\{ \mathbf{X}_{n}\right\} $ . Define a new random variable
$ \mathbf{Y}=\mathbf{X}_{1}+\mathbf{X}_{2}+\cdots+\mathbf{X}_{N}. $
Note
If $ \mathbf{N}=0 $ , then $ \mathbf{Y}=0 $ .
(a) Find the mean of $ \mathbf{Y} $ .
• Probability generating function of $ \mathbf{N} $ is $ P_{\mathbf{N}}\left(z\right)=E\left[z^{\mathbf{N}}\right]=\sum_{n=0}^{\infty}z^{n}\frac{e^{-\lambda}\lambda^{n}}{n!}=e^{-\lambda}\sum_{n=0}^{\infty}\frac{\left(z\lambda\right)^{n}}{n!}=e^{-\lambda}e^{z\lambda}=e^{-\lambda\left(1-z\right)}. $
• The characteristic function of $ \mathbf{Y} $ is $ \Phi_{\mathbf{Y}}\left(\omega\right)=P_{\mathbf{N}}\left(z\right)\Bigl|_{z=\Phi_{\mathbf{X}}\left(\omega\right)}=e^{-\lambda\left(1-z\right)}\Bigl|_{z=e^{i\mu\omega}e^{-\frac{1}{2}\sigma^{2}\omega^{2}}}=e^{-\lambda\left(1-e^{i\mu\omega}e^{-\frac{1}{2}\sigma^{2}\omega^{2}}\right)}. $
• Now, we can get the mean of $ \mathbf{Y} $ using the characteristic function: $ E\left[\mathbf{Y}\right]=\frac{d}{d\left(i\omega\right)}\Phi_{\mathbf{Y}}\left(\omega\right)\biggl|_{i\omega=0}=\left[\lambda\left(\mu+i\sigma^{2}\omega\right)e^{i\mu\omega}e^{-\frac{1}{2}\sigma^{2}\omega^{2}}\Phi_{\mathbf{Y}}\left(\omega\right)\right]\biggl|_{i\omega=0}=\lambda\mu. $
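The value $ E\left[\mathbf{Y}\right]=\lambda\mu $ can be spot-checked by simulating the random sum directly. A minimal sketch (the parameter values and sample size are arbitrary illustrative choices; it uses the fact that a sum of n i.i.d. $ \mathcal{N}\left(\mu,\sigma^{2}\right) $ terms is $ \mathcal{N}\left(n\mu,n\sigma^{2}\right) $):

```python
import numpy as np

rng = np.random.default_rng(10)
lam, mu, sigma, trials = 3.0, 1.5, 2.0, 300_000   # illustrative values only

N = rng.poisson(lam, trials)
# Given N = n, the sum of n i.i.d. N(mu, sigma^2) terms is N(n*mu, n*sigma^2);
# when N = 0 the scale is 0 and Y is exactly 0, matching the note above.
Y = rng.normal(N * mu, sigma * np.sqrt(N))

print(Y.mean(), lam * mu)   # E[Y] = lambda * mu
```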