Latest revision as of 06:20, 1 December 2010

6.3 MRB 2004 Spring Final

2. II

Identical twins come from the same egg and hence are of the same sex. Fraternal twins have probability $ \frac{1}{2} $ of being of the same sex. Among twins, the probability of a fraternal set is p and of an identical set is $ q=1-p $ . Given that a set of twins selected at random is of the same sex, what is the probability that they are fraternal?

A. $ \frac{q}{p} $ B. $ \frac{p}{1+q} $ C. $ 1 $ D. $ p $ E. $ \frac{2q}{1+p} $

Solution

• Note that a set of twins is either fraternal or identical; these two events are complementary

• We can define events

– F : Fraternal twins

– I : Identical twins

– S : Twins are same sex

• We know that

– $ P\left(F\right)=p $

– $ P\left(I\right)=q=1-p $

– $ P\left(S|F\right)=\frac{1}{2} $

– $ P\left(S|I\right)=1 $

• Now, by using Bayes' theorem, $ P\left(F|S\right)=\frac{P\left(F\cap S\right)}{P\left(S\right)}=\frac{P\left(S|F\right)P\left(F\right)}{P\left(S|F\right)P\left(F\right)+P\left(S|I\right)P\left(I\right)}=\frac{\frac{1}{2}\cdot p}{\frac{1}{2}\cdot p+1\cdot q}=\frac{p}{p+2q}=\frac{p}{\left(1-q\right)+2q}=\frac{p}{1+q}. $
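As a sanity check, the Bayes computation above can be verified with exact rational arithmetic; this is a minimal sketch, and the value p = 3/10 is an arbitrary assumption for illustration:

```python
from fractions import Fraction

def prob_fraternal_given_same_sex(p):
    """P(F|S) via Bayes: P(S|F)P(F) / (P(S|F)P(F) + P(S|I)P(I))."""
    q = 1 - p                      # probability of an identical set
    p_s_f = Fraction(1, 2)         # P(S|F): fraternal twins same sex
    p_s_i = 1                      # P(S|I): identical twins always same sex
    return (p_s_f * p) / (p_s_f * p + p_s_i * q)

p = Fraction(3, 10)                # assumed value, for illustration only
assert prob_fraternal_given_same_sex(p) == p / (1 + (1 - p))  # p/(1+q)
```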

2. III

Andy writes a letter to Betty but does not receive a reply. Assume that one letter in n is lost in the mail. What is the probability that Betty did not receive Andy's letter? (Assume that Betty would have answered Andy's letter had she received it.)

A. $ \frac{n}{n-1} $ B. $ \frac{n-1}{n^{2}} $ C. $ \frac{n-1}{2n-1} $ D. $ \frac{n}{2n-1} $ E. $ \frac{n-1}{n^{2}} $

Solution

(Diagram: $ A\rightleftarrows B $ ; each letter sent in either direction between Andy and Betty arrives with probability $ \frac{n-1}{n} $ .)


• We can define events

– A : Andy receives a letter

– $ \bar{A} $ : Andy does not receive a letter

– B : Betty receives a letter

– $ \bar{B} $ : Betty does not receive a letter.

• We know that

– $ P\left(\textrm{lost}\right)=\frac{1}{n} $

– $ P\left(B\right)=1-\frac{1}{n} $

– $ P\left(\bar{B}\right)=\frac{1}{n} $

– $ P\left(\bar{A}|\bar{B}\right)=1 $

– $ P\left(\bar{A}|B\right)=\frac{1}{n} $

• Now, by using Bayes' theorem,

$ P\left(\bar{B}|\bar{A}\right)=\frac{P\left(\bar{A}\cap\bar{B}\right)}{P\left(\bar{A}\right)}=\frac{P\left(\bar{A}|\bar{B}\right)P\left(\bar{B}\right)}{P\left(\bar{A}|\bar{B}\right)P\left(\bar{B}\right)+P\left(\bar{A}|B\right)P\left(B\right)}=\frac{1\cdot\frac{1}{n}}{1\cdot\frac{1}{n}+\frac{1}{n}\cdot\left(1-\frac{1}{n}\right)}=\frac{1}{1+1-\frac{1}{n}}=\frac{n}{2n-1}. $
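The answer $ \frac{n}{2n-1} $ can also be checked by simulation; a rough Monte Carlo sketch (the choices n = 10, the trial count, and the seed are arbitrary assumptions for illustration):

```python
import random

def simulate(n, trials=200_000, seed=0):
    """Estimate P(Betty missed the letter | Andy got no reply).
    Each letter is lost independently with probability 1/n."""
    rng = random.Random(seed)
    no_reply = betty_missed_and_no_reply = 0
    for _ in range(trials):
        betty_got_it = rng.random() >= 1 / n               # Andy -> Betty leg
        reply_arrives = betty_got_it and rng.random() >= 1 / n
        if not reply_arrives:
            no_reply += 1
            if not betty_got_it:
                betty_missed_and_no_reply += 1
    return betty_missed_and_no_reply / no_reply

n = 10
estimate = simulate(n)
exact = n / (2 * n - 1)        # closed form derived above
assert abs(estimate - exact) < 0.02
```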

2. V

The number of hits $ \mathbf{X} $ in a baseball game is a Poisson random variable. If the probability of a no-hit game is 1/3 , what is the probability of having two or more hits in a game?

Solution

$ P\left(\mathbf{X}=k\right)=\frac{e^{-\lambda}\lambda^{k}}{k!}. $

$ P\left(\mathbf{X}=0\right)=e^{-\lambda}=\frac{1}{3}\Longrightarrow-\lambda=-\ln3\Longrightarrow\therefore\lambda=\ln3. $

$ P\left(\mathbf{X}\geq2\right)=1-P\left(\mathbf{X}=1\right)-P\left(\mathbf{X}=0\right)=1-\frac{1}{3}\ln3-\frac{1}{3}=\frac{2}{3}-\frac{\ln3}{3}=\frac{2-\ln3}{3}. $
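A quick numerical check of this Poisson computation, using only the standard library:

```python
import math

lam = math.log(3)             # from P(X=0) = e^{-lam} = 1/3
p0 = math.exp(-lam)           # P(X=0)
p1 = math.exp(-lam) * lam     # P(X=1) = e^{-lam} * lam^1 / 1!
p_two_or_more = 1 - p0 - p1   # P(X >= 2)

assert abs(p0 - 1 / 3) < 1e-12
assert abs(p_two_or_more - (2 - math.log(3)) / 3) < 1e-12
```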

2. VI

Given the characteristic function $ \Phi_{\mathbf{X}}\left(\omega\right)=\frac{1}{\left(1-ia\omega\right)^{k}} $ , find $ E\left[\mathbf{X}\right] $ .

Solution

Writing $ s=i\omega $ , so that $ \phi\left(s\right)=\left(1-as\right)^{-k} $ ,

$ E\left[\mathbf{X}\right]=\frac{d}{ds}\phi\left(s\right)|_{s=0}=\frac{d}{ds}\left(1-as\right)^{-k}|_{s=0}=-k\left(1-as\right)^{-\left(k+1\right)}\left(-a\right)|_{s=0}=ak. $
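This can be confirmed numerically by differentiating the characteristic function at $ \omega=0 $ , using $ E\left[\mathbf{X}\right]=\Phi'_{\mathbf{X}}\left(0\right)/i $ ; the values a = 2 and k = 3 are arbitrary assumptions for illustration:

```python
# Numerical check that E[X] = a*k for Phi(w) = (1 - i*a*w)^(-k).
a, k = 2.0, 3                  # assumed values, for illustration only

def phi(w):
    return (1 - 1j * a * w) ** (-k)

h = 1e-6
# Central difference for Phi'(0); then E[X] = Phi'(0) / i
mean = (phi(h) - phi(-h)) / (2 * h) / 1j
assert abs(mean - a * k) < 1e-4
```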

4

A biased quarter having a probability p of coming up “heads” is tossed n times. Each time the quarter comes up heads, a biased nickel having probability r of coming up “heads” is tossed. Let $ \mathbf{M} $ be the random variable giving the number of times the biased nickel comes up “heads” in this experiment.

(a) Find the probability mass function (pmf) of $ \mathbf{M} $ .

$ \mathbf{Q} $ : the random variable giving the number of times the biased quarter comes up heads.

$ P\left(\left\{ \mathbf{Q}=k\right\} \right)=\left(\begin{array}{c} n\\ k \end{array}\right)p^{k}\left(1-p\right)^{n-k}. $

$ P\left(\left\{ \mathbf{M}=m\right\} |\left\{ \mathbf{Q}=k\right\} \right)=\left(\begin{array}{c} k\\ m \end{array}\right)r^{m}\left(1-r\right)^{k-m}. $

$ E\left[e^{i\omega\mathbf{M}}|\left\{ \mathbf{Q}=k\right\} \right]=\sum_{m=0}^{\infty}\left(\begin{array}{c} k\\ m \end{array}\right)r^{m}\left(1-r\right)^{k-m}\cdot e^{i\omega m}=\sum_{m=0}^{\infty}\left(\begin{array}{c} k\\ m \end{array}\right)\left(r\cdot e^{i\omega}\right)^{m}\left(1-r\right)^{k-m}=\left(r\cdot e^{i\omega}+1-r\right)^{k}. $

$ \Phi_{\mathbf{M}}\left(\omega\right)=\sum_{k=0}^{\infty}E\left[e^{i\omega\mathbf{M}}|\left\{ \mathbf{Q}=k\right\} \right]\cdot P\left(\left\{ \mathbf{Q}=k\right\} \right)=\sum_{k=0}^{\infty}\left(r\cdot e^{i\omega}+1-r\right)^{k}\cdot\left(\begin{array}{c} n\\ k \end{array}\right)p^{k}\left(1-p\right)^{n-k} $$ =\sum_{k=0}^{\infty}\left(\begin{array}{c} n\\ k \end{array}\right)\left(p\left(r\cdot e^{i\omega}+1-r\right)\right)^{k}\left(1-p\right)^{n-k}=\left(p\cdot r\cdot e^{i\omega}+p-p\cdot r+1-p\right)^{n}=\left(p\cdot r\cdot e^{i\omega}+1-p\cdot r\right)^{n}. $

This is the characteristic function of a Binomial random variable with parameters n and pr .

$ \therefore P\left(\left\{ \mathbf{M}=m\right\} \right)=\left(\begin{array}{c} n\\ m \end{array}\right)\left(pr\right)^{m}\left(1-pr\right)^{n-m}. $
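The derived pmf, $ \mathbf{M}\sim $ Binomial( n , pr ), can be checked by simulating the two-coin experiment directly; the parameter values, trial count, and seed below are arbitrary assumptions for illustration:

```python
import random
from math import comb

def pmf_M(m, n, p, r):
    """Closed-form pmf derived above: M ~ Binomial(n, p*r)."""
    return comb(n, m) * (p * r) ** m * (1 - p * r) ** (n - m)

def run_experiment(n, p, r, rng):
    """One run: toss the quarter n times; on each head, toss the nickel."""
    heads = 0
    for _ in range(n):
        if rng.random() < p:          # quarter comes up heads
            if rng.random() < r:      # nickel tossed, comes up heads
                heads += 1
    return heads

n, p, r = 5, 0.6, 0.5                 # assumed parameters, for illustration
rng = random.Random(1)
trials = 100_000
counts = [0] * (n + 1)
for _ in range(trials):
    counts[run_experiment(n, p, r, rng)] += 1
for m in range(n + 1):
    assert abs(counts[m] / trials - pmf_M(m, n, p, r)) < 0.01
```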

(b) Find the mean of $ \mathbf{M} $ .

$ E\left[\mathbf{M}\right]=\frac{\partial}{\partial\left(i\omega\right)}\Phi_{\mathbf{M}}\left(\omega\right)|_{i\omega=0}=n\left(pr\cdot e^{i\omega}+1-pr\right)^{n-1}\cdot pr\cdot e^{i\omega}|_{i\omega=0}=npr. $

In fact, we can directly conclude $ E\left[\mathbf{M}\right]=npr $ because we already know that $ \mathbf{M} $ is a Binomial random variable with parameters n and pr .

(c) Find the variance of $ \mathbf{M} $ .

$ E\left[\mathbf{M}^{2}\right]=npr\left(n-1\right)pr+npr. $

$ Var\left[\mathbf{M}\right]=E\left[\mathbf{M}^{2}\right]-\left(E\left[\mathbf{M}\right]\right)^{2}=\left(npr\left(n-1\right)pr+npr\right)-\left(npr\right)^{2}=\left(npr\right)^{2}-n\left(pr\right)^{2}+npr-\left(npr\right)^{2}=npr\left(1-pr\right). $

In fact, we can directly conclude $ Var\left[\mathbf{M}\right]=npr\left(1-pr\right) $ because we already know that $ \mathbf{M} $ is a Binomial random variable with parameters n and pr .
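The mean and variance formulas can be verified exactly by summing over the derived pmf; the parameter values here are arbitrary assumptions for illustration:

```python
from math import comb

n, p, r = 7, 0.4, 0.25                # assumed parameters, for illustration
pr = p * r
pmf = [comb(n, m) * pr**m * (1 - pr)**(n - m) for m in range(n + 1)]

mean = sum(m * pm for m, pm in enumerate(pmf))
second_moment = sum(m * m * pm for m, pm in enumerate(pmf))
var = second_moment - mean**2

assert abs(mean - n * pr) < 1e-12                                  # E[M] = npr
assert abs(second_moment - (n * pr * (n - 1) * pr + n * pr)) < 1e-12
assert abs(var - n * pr * (1 - pr)) < 1e-12                        # Var[M]
```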

Example (MRB 2004 Spring Final)

Assume that $ \mathbf{X}\left(t\right) $ is a zero-mean, continuous-time, Gaussian white noise process with autocorrelation function $ R_{\mathbf{XX}}\left(t_{1},t_{2}\right)=N_{0}\delta\left(t_{1}-t_{2}\right) $. Let $ \mathbf{Y}\left(t\right) $ be a new random process defined as the output of a linear time-invariant system with impulse response $ h\left(t\right)=1_{\left[0,T\right]}\left(t\right), $ where $ T>0 $ .

(a) What is the mean of $ \mathbf{Y}\left(t\right) $ ?

$ E\left[\mathbf{Y}\left(t\right)\right]=E\left[\int_{-\infty}^{\infty}h\left(\tau\right)\mathbf{X}\left(t-\tau\right)d\tau\right]=\int_{-\infty}^{\infty}h\left(\tau\right)E\left[\mathbf{X}\left(t-\tau\right)\right]d\tau=\int_{-\infty}^{\infty}h\left(\tau\right)\cdot0d\tau=0. $

(b) What is the power spectral density of $ \mathbf{Y}\left(t\right) $ ?

$ S_{\mathbf{XX}}\left(\omega\right)=\int_{-\infty}^{\infty}N_{0}\delta\left(\tau\right)e^{-i\omega\tau}d\tau=N_{0}. $

$ H\left(\omega\right)=\int_{-\infty}^{\infty}h\left(t\right)e^{-i\omega t}dt=\int_{0}^{T}e^{-i\omega t}dt=\frac{e^{-i\omega t}}{-i\omega}\biggl|_{0}^{T}=\frac{e^{-i\omega T}-1}{-i\omega}=\frac{1-e^{-i\omega T}}{i\omega}. $

$ \left|H\left(\omega\right)\right|^{2}=H\left(\omega\right)H^{*}\left(\omega\right)=\frac{1-e^{-i\omega T}}{i\omega}\cdot\frac{1-e^{i\omega T}}{-i\omega}=\frac{2-e^{-i\omega T}-e^{i\omega T}}{\omega^{2}}=\frac{2\left(1-\cos\omega T\right)}{\omega^{2}} $$ =\frac{2\left(1-\left(1-2\sin^{2}\frac{\omega T}{2}\right)\right)}{\omega^{2}}=\frac{4}{\omega^{2}}\cdot\sin^{2}\frac{\omega T}{2}. $

$ \because e^{-i\omega T}+e^{i\omega T}=\left(\cos\omega T-i\sin\omega T\right)+\left(\cos\omega T+i\sin\omega T\right)=2\cos\omega T. $

$ \because\cos\left(2x\right)=\cos^{2}\left(x\right)-\sin^{2}\left(x\right)=2\cos^{2}\left(x\right)-1=1-2\sin^{2}\left(x\right). $

$ S_{\mathbf{YY}}\left(\omega\right)=S_{\mathbf{XX}}\left(\omega\right)\left|H\left(\omega\right)\right|^{2}=N_{0}\cdot\frac{4}{\omega^{2}}\cdot\sin^{2}\frac{\omega T}{2}. $
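The closed form $ \left|H\left(\omega\right)\right|^{2}=\frac{4}{\omega^{2}}\sin^{2}\frac{\omega T}{2} $ is easy to spot-check numerically; T = 2 and the sample frequencies are arbitrary assumptions for illustration:

```python
import cmath
import math

T = 2.0                                  # assumed value, for illustration

def H(w):
    # Frequency response of h(t) = 1 on [0, T]: (1 - e^{-iwT}) / (iw)
    return (1 - cmath.exp(-1j * w * T)) / (1j * w)

for w in (0.3, 1.0, 2.5):
    closed_form = 4.0 / w ** 2 * math.sin(w * T / 2.0) ** 2
    assert abs(abs(H(w)) ** 2 - closed_form) < 1e-12
```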

(c) What is the autocorrelation function of $ \mathbf{Y}\left(t\right) $ ?

$ S_{\mathbf{YY}}\left(\omega\right)=N_{0}\cdot\frac{4}{\omega^{2}}\cdot\sin^{2}\frac{\omega T}{2}=N_{0}T\cdot\frac{4\sin^{2}\left(\omega T/2\right)}{T\omega^{2}}\leftrightarrow N_{0}T\cdot\left(1-\frac{\left|\tau\right|}{T}\right)=R_{\mathbf{YY}}\left(\tau\right)\textrm{ for }\left|\tau\right|<T. $

$ \because\frac{4\sin^{2}\left(\omega T/2\right)}{T\omega^{2}}\leftrightarrow\left\{ \begin{array}{ll} 1-\frac{\left|\tau\right|}{T}, & \left|\tau\right|<T\\ 0, & \left|\tau\right|>T \end{array}\right. $ (from the table given).
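This transform pair can be spot-checked numerically: integrating $ S_{\mathbf{YY}}\left(\omega\right) $ over all frequencies and dividing by $ 2\pi $ should recover $ R_{\mathbf{YY}}\left(0\right)=N_{0}T $ . A crude Riemann-sum sketch, where the values of $ N_{0} $ , T , and the integration grid are arbitrary assumptions:

```python
import math

N0, T = 1.0, 1.5                        # assumed values, for illustration

def S_YY(w):
    # PSD N0 * 4*sin^2(wT/2)/w^2, with the w -> 0 limit equal to N0*T^2
    if w == 0.0:
        return N0 * T ** 2
    return N0 * 4.0 / w ** 2 * math.sin(w * T / 2.0) ** 2

# R_YY(0) = (1/2pi) * integral of S_YY over all w; truncate to [-W, W]
W, steps = 2000.0, 1_000_000
dw = 2 * W / steps
r0 = sum(S_YY(-W + i * dw) for i in range(steps + 1)) * dw / (2 * math.pi)
assert abs(r0 - N0 * T) < 0.01
```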

(d) Write an expression for the second-order density $ f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right) $ of $ \mathbf{Y}\left(t\right) $ .

Because $ \mathbf{X}\left(t\right) $ is a WSS Gaussian random process and the system is LTI, $ \mathbf{Y}\left(t\right) $ is also a WSS Gaussian random process, with $ E\left[\mathbf{Y}\left(t\right)\right]=0 $ and $ \sigma_{\mathbf{Y}\left(t\right)}^{2}=R_{\mathbf{YY}}\left(0\right)=N_{0}T $ .

$ r_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}=r\left(t_{1}-t_{2}\right)=\frac{C_{\mathbf{YY}}\left(t_{1}-t_{2}\right)}{\sqrt{\sigma_{\mathbf{Y}\left(t_{1}\right)}^{2}\sigma_{\mathbf{Y}\left(t_{2}\right)}^{2}}}=\frac{R_{\mathbf{YY}}\left(t_{1}-t_{2}\right)}{R_{\mathbf{YY}}\left(0\right)}=1-\frac{\left|\tau\right|}{T}, $ where $ \tau=t_{1}-t_{2} $ .

$ f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right)=\frac{1}{2\pi\sigma_{\mathbf{Y}\left(t_{1}\right)}\sigma_{\mathbf{Y}\left(t_{2}\right)}\sqrt{1-r^{2}\left(t_{1}-t_{2}\right)}}\exp\left\{ \frac{-1}{2\left(1-r^{2}\left(t_{1}-t_{2}\right)\right)}\left[\frac{y_{1}^{2}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}^{2}}-\frac{2r\left(t_{1}-t_{2}\right)y_{1}y_{2}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}\sigma_{\mathbf{Y}\left(t_{2}\right)}}+\frac{y_{2}^{2}}{\sigma_{\mathbf{Y}\left(t_{2}\right)}^{2}}\right]\right\} $$ =\frac{1}{2\pi R_{\mathbf{YY}}\left(0\right)\sqrt{1-r^{2}\left(t_{1}-t_{2}\right)}}\exp\left\{ \frac{-1}{2R_{\mathbf{YY}}\left(0\right)\left(1-r^{2}\left(t_{1}-t_{2}\right)\right)}\left[y_{1}^{2}-2r\left(t_{1}-t_{2}\right)y_{1}y_{2}+y_{2}^{2}\right]\right\} . $

$ \therefore \left(\mathbf{Y}\left(t_{1}\right),\mathbf{Y}\left(t_{2}\right)\right) $ is jointly Gaussian with distribution $ N\left[0,0,N_{0}T,N_{0}T,1-\frac{\left|\tau\right|}{T}\right] $ (means, variances, and correlation coefficient).

