=7.3 [[ECE_PhD_Qualifying_Exams|QE]] 2001 August=
  
 
'''1. (10 Points)'''

Consider the following random experiment: A fair coin is repeatedly tossed until the same outcome (H or T) appears twice in a row.

'''(a)'''

What is the probability that this experiment terminates on or before the seventh coin toss?

Let <math class="inline">N</math> be the number of tosses until the same outcome appears twice in a row.

{| border = "1"
!<math class="inline">N</math>th
!<math class="inline">\left(N - 1\right)</math>th
!<math class="inline">\left(N - 2\right)</math>th
!<math class="inline">\left(N - 3\right)</math>th
!<math class="inline">\cdots</math>
|-
|H
|H
|T
|H
|<math class="inline">\cdots</math>
|-
|T
|T
|H
|T
|<math class="inline">\cdots</math>
|}
  
  
<math class="inline">P\left(\left\{ N=n\right\} \right)=\frac{2}{2^{n}}=\frac{1}{2^{n-1}}\text{ for }n\geq2.</math>  
  
<math class="inline">P\left(\left\{ N\leq7\right\} \right)=\sum_{k=2}^{7}\frac{1}{2^{k-1}}=\sum_{k=1}^{6}\left(\frac{1}{2}\right)^{k}=\frac{\frac{1}{2}\left(1-\left(\frac{1}{2}\right)^{6}\right)}{1-\frac{1}{2}}=1-\frac{1}{64}=\frac{63}{64}.</math>  
  
 
'''(b)'''
 
What is the probability that this experiment terminates with an even number of coin tosses?
  
<math class="inline">P\left(\left\{ N\text{ is even}\right\} \right)=\sum_{k=1}^{\infty}\frac{1}{2^{2k-1}}=2\sum_{k=1}^{\infty}\left(\frac{1}{4}\right)^{k}=2\cdot\frac{\frac{1}{4}}{1-\frac{1}{4}}=2\cdot\frac{1}{3}=\frac{2}{3}.</math>  
  
 
'''2. (25 Points)'''
  
Let <math class="inline">\mathbf{X}</math> and <math class="inline">\mathbf{Y}</math> be independent Poisson random variables with means <math class="inline">\lambda</math> and <math class="inline">\mu</math>, respectively. Let <math class="inline">\mathbf{Z}</math> be a new random variable defined as <math class="inline">\mathbf{Z}=\mathbf{X}+\mathbf{Y}.</math>
  
 
Note
  
This problem is identical to the example: [[ECE 600 Exams Addition of two independent Poisson random variables|Addition of two independent Poisson random variables]].
  
 
'''(a)'''  
  
Find the probability mass function (pmf) of <math class="inline">\mathbf{Z}</math> .
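
A sketch of the standard convolution argument (the full derivation is in the linked example): for <math class="inline">n=0,1,2,\cdots</math>,

<math class="inline">P\left(\left\{ \mathbf{Z}=n\right\} \right)=\sum_{k=0}^{n}P\left(\left\{ \mathbf{X}=k\right\} \right)P\left(\left\{ \mathbf{Y}=n-k\right\} \right)=\sum_{k=0}^{n}\frac{\lambda^{k}e^{-\lambda}}{k!}\cdot\frac{\mu^{n-k}e^{-\mu}}{\left(n-k\right)!}=\frac{e^{-\left(\lambda+\mu\right)}}{n!}\sum_{k=0}^{n}{n \choose k}\lambda^{k}\mu^{n-k}=\frac{\left(\lambda+\mu\right)^{n}e^{-\left(\lambda+\mu\right)}}{n!}.</math>

Thus <math class="inline">\mathbf{Z}</math> is a Poisson random variable with mean <math class="inline">\lambda+\mu</math>.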
  
 
'''(b)'''
  
Find the conditional probability mass function (pmf) of <math class="inline">\mathbf{X}</math>  conditional on the event <math class="inline">\left\{ \mathbf{Z}=n\right\}</math>  . Identify the type of pmf that this is, and fully specify its parameters.
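
Again a sketch, using the definition of conditional probability, independence, and the pmf of <math class="inline">\mathbf{Z}</math> from part (a): for <math class="inline">k=0,1,\cdots,n</math>,

<math class="inline">P\left(\left\{ \mathbf{X}=k\right\} |\left\{ \mathbf{Z}=n\right\} \right)=\frac{P\left(\left\{ \mathbf{X}=k\right\} \right)P\left(\left\{ \mathbf{Y}=n-k\right\} \right)}{P\left(\left\{ \mathbf{Z}=n\right\} \right)}=\frac{\frac{\lambda^{k}e^{-\lambda}}{k!}\cdot\frac{\mu^{n-k}e^{-\mu}}{\left(n-k\right)!}}{\frac{\left(\lambda+\mu\right)^{n}e^{-\left(\lambda+\mu\right)}}{n!}}={n \choose k}\left(\frac{\lambda}{\lambda+\mu}\right)^{k}\left(\frac{\mu}{\lambda+\mu}\right)^{n-k}.</math>

This is a binomial pmf with parameters <math class="inline">n</math> and <math class="inline">p=\frac{\lambda}{\lambda+\mu}</math>.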
  
 
'''3. (30 Points)'''
  
Let <math class="inline">\mathbf{X}_{1},\cdots,\mathbf{X}_{n},\cdots</math> be a sequence of random variables that are not necessarily statistically independent, but that each have identical mean <math class="inline">\mu</math> and variance <math class="inline">\sigma^{2}</math>. Let <math class="inline">\mathbf{Y}_{1},\cdots,\mathbf{Y}_{n},\cdots</math> be a sequence of random variables with <math class="inline">\mathbf{Y}_{n}=\frac{1}{n}\sum_{k=1}^{n}\mathbf{X}_{k}.</math>
  
 
'''(a)'''
  
Given that <math class="inline">\mathbf{X}_{1},\cdots,\mathbf{X}_{n},\cdots</math>  are uncorrelated, determine whether or not <math class="inline">\left\{ \mathbf{Y}_{n}\right\}</math>  converges to <math class="inline">\mu</math>  in the mean square sense.
  
<math class="inline">E\left[\left|\mathbf{Y}_{n}-\mu\right|^{2}\right]=E\left[\mathbf{Y}_{n}^{2}\right]-2E\left[\mathbf{Y}_{n}\right]\mu+\mu^{2}.</math>  
  
<math class="inline">E\left[\mathbf{Y}_{n}\right]=\frac{1}{n}\sum_{k=1}^{n}E\left[\mathbf{X}_{k}\right]=\mu.</math>  
  
<math class="inline">E\left[\mathbf{Y}_{n}^{2}\right]=E\left[\frac{1}{n^{2}}\sum_{k=1}^{n}\sum_{l=1}^{n}\mathbf{X}_{k}\mathbf{X}_{l}\right]=\frac{1}{n^{2}}\sum_{k=1}^{n}\sum_{l=1}^{n}E\left[\mathbf{X}_{k}\mathbf{X}_{l}\right]</math><math class="inline">=\frac{1}{n^{2}}\sum_{k=1}^{n}E\left[\mathbf{X}_{k}^{2}\right]+\frac{1}{n^{2}}\underset{k\neq l}{\sum_{k=1}^{n}\sum_{l=1}^{n}}E\left[\mathbf{X}_{k}\right]E\left[\mathbf{X}_{l}\right]</math><math class="inline">=\frac{1}{n}\left(\mu^{2}+\sigma^{2}\right)+\frac{n\left(n-1\right)}{n^{2}}\mu^{2}=\frac{1}{n}\mu^{2}+\frac{1}{n}\sigma^{2}+\mu^{2}-\frac{1}{n}\mu^{2}</math><math class="inline">=\frac{\sigma^{2}}{n}+\mu^{2}.</math>  
  
<math class="inline">E\left[\left|\mathbf{Y}_{n}-\mu\right|^{2}\right]=E\left[\mathbf{Y}_{n}^{2}\right]-2E\left[\mathbf{Y}_{n}\right]\mu+\mu^{2}=\frac{\sigma^{2}}{n}+\mu^{2}-2\mu\cdot\mu+\mu^{2}=\frac{\sigma^{2}}{n}.</math>
<math class="inline">\lim_{n\rightarrow\infty}E\left[\left|\mathbf{Y}_{n}-\mu\right|^{2}\right]=\lim_{n\rightarrow\infty}\left(\frac{\sigma^{2}}{n}\right)=0.</math>  
  
 
Another approach
  
<math class="inline">E\left[\left|\mathbf{Y}_{n}-\mu\right|^{2}\right]=E\left[\left|\frac{1}{n}\sum_{k=1}^{n}\left(\mathbf{X}_{k}-\mu\right)\right|^{2}\right]=\frac{1}{n^{2}}\sum_{k=1}^{n}\sum_{l=1}^{n}E\left[\left(\mathbf{X}_{k}-\mu\right)\left(\mathbf{X}_{l}-\mu\right)\right]</math><math class="inline">=\frac{1}{n^{2}}\sum_{k=1}^{n}E\left[\left(\mathbf{X}_{k}-\mu\right)^{2}\right]+\frac{1}{n^{2}}\underset{k\neq l}{\sum_{k=1}^{n}\sum_{l=1}^{n}}E\left[\mathbf{X}_{k}-\mu\right]E\left[\mathbf{X}_{l}-\mu\right]</math><math class="inline">=\frac{1}{n^{2}}\cdot n\cdot\sigma^{2}+\frac{1}{n^{2}}\cdot n\left(n-1\right)\cdot0^{2}=\frac{\sigma^{2}}{n}.</math>  
  
<math class="inline">\lim_{n\rightarrow\infty}E\left[\left|\mathbf{Y}_{n}-\mu\right|^{2}\right]=\lim_{n\rightarrow\infty}\left(\frac{\sigma^{2}}{n}\right)=0.</math>  
  
 
'''(b)'''
  
Given that the covariance between <math class="inline">\mathbf{X}_{j}</math>  and <math class="inline">\mathbf{X}_{k}</math>  is given by  
 
<br>
<math class="inline">cov\left(\mathbf{X}_{j},\mathbf{X}_{k}\right)=\begin{cases}
\sigma^{2}, & \text{for }j=k\\
r\sigma^{2}, & \text{for }\left|j-k\right|=1\\
0, & \text{elsewhere,}
\end{cases}</math>
<br>
where <math class="inline">-1\leq r\leq1</math> , determine whether or not <math class="inline">\left\{ \mathbf{Y}_{n}\right\}</math>  converges to <math class="inline">\mu</math>  in the mean square sense.
  
<math class="inline">E\left[\left|\mathbf{Y}_{n}-\mu\right|^{2}\right]=E\left[\left|\frac{1}{n}\sum_{k=1}^{n}\left(\mathbf{X}_{k}-\mu\right)\right|^{2}\right]=\frac{1}{n^{2}}\sum_{k=1}^{n}\sum_{l=1}^{n}E\left[\left(\mathbf{X}_{k}-\mu\right)\left(\mathbf{X}_{l}-\mu\right)\right]</math><math class="inline">=\frac{1}{n^{2}}\sum_{k=1}^{n}E\left[\left(\mathbf{X}_{k}-\mu\right)^{2}\right]+\frac{1}{n^{2}}\underset{k\neq l}{\sum_{k=1}^{n}\sum_{l=1}^{n}}E\left[\left(\mathbf{X}_{k}-\mu\right)\left(\mathbf{X}_{l}-\mu\right)\right]</math><math class="inline">=\frac{1}{n}\sigma^{2}+\frac{2\left(n-1\right)}{n^{2}}r\sigma^{2}.</math>  
  
<math class="inline">\lim_{n\rightarrow\infty}E\left[\left|\mathbf{Y}_{n}-\mu\right|^{2}\right]=\lim_{n\rightarrow\infty}\left(\frac{1}{n}\sigma^{2}+\frac{2\left(n-1\right)}{n^{2}}r\sigma^{2}\right)=0.</math>  
  
Thus, <math class="inline">\mathbf{Y}_{n}</math>  converges in the mean square sense to <math class="inline">\mu</math> .
  
 
'''4. (35 Points)'''
  
Let <math class="inline">\left\{ t_{k}\right\}</math> be the set of Poisson points corresponding to a homogeneous Poisson process with parameter <math class="inline">\lambda</math> on the real line such that if <math class="inline">\mathbf{N}\left(t_{1},t_{2}\right)</math> is defined as the number of points in the interval <math class="inline">\left[t_{1},t_{2}\right)</math>, then <math class="inline">P\left(\left\{ \mathbf{N}\left(t_{1},t_{2}\right)=k\right\} \right)=\frac{\left[\lambda\left(t_{2}-t_{1}\right)\right]^{k}e^{-\lambda\left(t_{2}-t_{1}\right)}}{k!}\;,\qquad k=0,1,2,\cdots,\; t_{2}>t_{1}\geq0.</math> Let <math class="inline">\mathbf{X}\left(t\right)=\mathbf{N}\left(0,t\right)</math> be the Poisson counting process for <math class="inline">t>0</math> (note that <math class="inline">\mathbf{X}\left(0\right)=0</math>).
  
 
'''(a)'''
  
Find the (first order) characteristic function of <math class="inline">\mathbf{X}\left(t\right)</math> .
  
<math class="inline">\Phi_{\mathbf{X}}\left(\omega\right)=E\left[e^{i\omega\mathbf{X}}\right]=\sum_{k=0}^{\infty}e^{i\omega k}\frac{\left(\lambda t\right)^{k}e^{-\lambda t}}{k!}=e^{-\lambda t}\sum_{k=0}^{\infty}\frac{\left(\lambda te^{i\omega}\right)^{k}}{k!}=e^{-\lambda t}e^{\lambda te^{i\omega}}=e^{-\lambda t\left(1-e^{i\omega}\right)}.</math>  
  
 
'''(b)'''
  
Find the mean and variance of <math class="inline">\mathbf{X}\left(t\right)</math> .
  
<math class="inline">E\left[\mathbf{X}\left(t\right)\right]=\frac{d}{di\omega}\Phi_{\mathbf{X}}\left(\omega\right)\biggl|_{i\omega=0}=\frac{d}{di\omega}e^{-\lambda t}e^{\lambda te^{i\omega}}\biggl|_{i\omega=0}=e^{-\lambda t}\cdot\frac{d}{di\omega}e^{\lambda te^{i\omega}}\biggl|_{i\omega=0}</math><math class="inline">=e^{-\lambda t}\cdot e^{\lambda te^{i\omega}}\cdot\lambda te^{i\omega}\biggl|_{i\omega=0}=e^{-\lambda t}\cdot e^{\lambda t}\cdot\lambda t=\lambda t.</math>  
  
<math class="inline">E\left[\mathbf{X}^{2}\left(t\right)\right]=\frac{d^{2}}{d\left(i\omega\right)^{2}}\Phi_{\mathbf{X}}\left(\omega\right)\biggl|_{i\omega=0}=\frac{d}{di\omega}\lambda te^{-\lambda t}e^{\lambda te^{i\omega}}e^{i\omega}\biggl|_{i\omega=0}</math><math class="inline">=\lambda te^{-\lambda t}\cdot\frac{d}{di\omega}e^{\lambda te^{i\omega}}e^{i\omega}\biggl|_{i\omega=0}</math><math class="inline">=\lambda te^{-\lambda t}\left(e^{\lambda te^{i\omega}}\lambda te^{i\omega}e^{i\omega}+e^{\lambda te^{i\omega}}e^{i\omega}\right)\biggl|_{i\omega=0}</math><math class="inline">=\lambda te^{-\lambda t}\left(\lambda te^{\lambda te^{i\omega}}e^{2i\omega}+e^{\lambda te^{i\omega}}e^{i\omega}\right)\biggl|_{i\omega=0}=\lambda te^{-\lambda t}\left(\lambda te^{\lambda t}+e^{\lambda t}\right)</math><math class="inline">=\lambda t\left(\lambda t+1\right)=\left(\lambda t\right)^{2}+\lambda t.</math>  
  
<math class="inline">Var\left[\mathbf{X}\left(t\right)\right]=E\left[\mathbf{X}^{2}\left(t\right)\right]-\left(E\left[\mathbf{X}\left(t\right)\right]\right)^{2}=\left(\lambda t\right)^{2}+\lambda t-\left(\lambda t\right)^{2}=\lambda t.</math>  
  
 
'''(c)'''
  
Derive an expression for the autocorrelation function of <math class="inline">\mathbf{X}\left(t\right)</math>.
  
<math class="inline">R_{\mathbf{XX}}\left(t_{1},t_{2}\right)</math>  
  
 
'''(d)'''
  
Assuming that <math class="inline">t_{2}>t_{1}</math> , find an expression for <math class="inline">P\left(\left\{ \mathbf{X}\left(t_{1}\right)=m\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=n\right\} \right)</math> , for all <math class="inline">m=0,1,2,\cdots</math>  and <math class="inline">n=0,1,2,\cdots</math> .
  
<math class="inline">P\left(\left\{ \mathbf{X}\left(t_{1}\right)=m\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=n\right\} \right)</math>   
  
 
----
 
[[ECE600|Back to ECE600]]
  
[[ECE 600 QE|Back to my ECE 600 QE page]]
[[ECE_PhD_Qualifying_Exams|Back to the general ECE PHD QE page]] (for problem discussion)
