
Back to all ECE 600 notes


Random Variables and Signals

Topic 11: Two Random Variables: Joint Distribution




Two Random Variables

We have been considering a single random variable X, and have introduced the pdf f$ _X $, the pmf p$ _X $, the conditional pdf f$ _X $(x|M), the conditional pmf p$ _X $(x|M), the pdf f$ _Y $ or pmf p$ _Y $ when Y = g(X), the expectation E[g(X)], the conditional expectation E[g(X)|M], and the characteristic function $ \Phi_X $. We will now define similar tools for the case of two random variables X and Y.
How do we define two random variables X,Y on a probability space (S,F,P)?


Fig 1: Mapping from S to X($ \omega $) and Y($ \omega $).


So two random variables can be viewed as a mapping from S to R$ ^2 $, and (X,Y) is an ordered pair in R$ ^2 $. Note that we could draw the picture this way:

Fig 2: Mapping from S to X($ \omega $) and Y($ \omega $). Note that this model does not capture the joint behavior of X and Y and is hence incomplete.


but this would not capture the joint behavior of X and Y. Note also that if X and Y are defined on two different probability spaces, those two spaces can be combined to create (S,F,P).

In order for X and Y to be a valid random variable pair, we will need to consider regions D ⊂ R$ ^2 $.

$ B(\mathbb{R}^2) = \sigma (\{\mbox{all open rectangles in }\mathbb{R}^2\}) $

If {(X,Y) ∈ O} ∈ F for every open rectangle O ⊂ R$ ^2 $, then {(X,Y) ∈ D} ∈ F ∀D ∈ B(R$ ^2 $).
But an open rectangle O has the form A × B for some A, B ∈ B(R), and (X($ \omega $),Y($ \omega $)) ∈ O iff X($ \omega $) ∈ A and Y($ \omega $) ∈ B, so {(X,Y) ∈ O} = X$ ^{-1} $(A) ∩ Y$ ^{-1} $(B).
If X and Y are valid random variables then

$ \begin{align} &X^{-1}(A) \in \mathcal F \\ &Y^{-1}(B) \in \mathcal F \\ &\forall A,B\in B(\mathbb R) \end{align} $

So,

$ \begin{align} &X^{-1}(A)\cap Y^{-1}(B) \in \mathcal F \\ \Rightarrow &\{(X,Y)\in O\}\in\mathcal F \end{align} $

So how do we find P((X,Y) ∈ D) for D ∈ B(R$ ^2 $)?

We will use joint cdfs, pdfs, and pmfs.



Joint Cumulative Distribution Function

In general, knowledge of F$ _X $(x) and F$ _Y $(y) alone is not sufficient to compute P((X,Y) ∈ D) ∀D ∈ B(R$ ^2 $).

Definition $ \qquad $ The joint cumulative distribution function of random variables X,Y defined on (S,F,P) is F$ _{XY} $(x,y) ≡ P({X ≤ x} ∩ {Y ≤ y}) for x,y ∈ R.
Note that in this case, D ≡ D$ _{XY} $ = {(x',y') ∈ R$ ^2 $: x' ≤ x, y' ≤ y}

Fig 3: The shaded region represents D


Properties of F$ _{XY} $:


$ \bullet\lim_{x\rightarrow -\infty}F_{XY}(x,y) = \lim_{y\rightarrow -\infty}F_{XY}(x,y) = 0 $
$ \begin{align} \bullet &\lim_{x\rightarrow \infty}F_{XY}(x,y) = F_Y(y)\qquad \forall y\in\mathbb R \\ &\lim_{y\rightarrow \infty}F_{XY}(x,y) = F_X(x)\qquad \forall x\in\mathbb R \end{align} $
F$ _X $ and F$ _Y $ are called the marginal cdfs of X and Y.
$ \bullet P(\{x_1 < X\leq x_2\}\cap\{y_1<Y\leq y_2\}) = F_{XY}(x_2,y_2)-F_{XY}(x_1,y_2)-F_{XY}(x_2,y_1)+F_{XY}(x_1,y_1) $
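The last of these properties follows from the additivity of P. Subtracting the region {X ≤ x$ _1 $} from {X ≤ x$ _2 $} gives, for any y,

$ P(\{x_1<X\leq x_2\}\cap\{Y\leq y\}) = F_{XY}(x_2,y)-F_{XY}(x_1,y) $

and subtracting this expression evaluated at y$ _1 $ from the same expression evaluated at y$ _2 $ yields the rectangle formula above.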



The Joint Probability Density Function

Definition $ \qquad $ The joint probability density function of random variables X and Y is

$ f_{XY}(x,y) \equiv \frac{\partial^2}{\partial x\partial y}F_{XY}(x,y) $

∀(x,y) ∈ R$ ^2 $ where the derivative exists.

It can be shown that if D ∈ B(R$ ^2 $), then,

$ P((X,Y)\in D)=\int\int_Df_{XY}(x,y)dxdy $

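As an illustrative numerical check of this formula (a sketch; the exponential density and the triangular region D are arbitrary choices, not from these notes):

import numpy as np
from scipy import integrate

# Illustrative joint pdf: X, Y independent Exp(1), so
# f_XY(x,y) = e^(-x-y) on the first quadrant, 0 elsewhere.
def f_xy(y, x):
    return np.exp(-x - y)

# P((X,Y) in D) for the triangle D = {x,y >= 0 : x + y <= 1}:
# integrate y from 0 to 1-x, then x from 0 to 1.
p, _ = integrate.dblquad(f_xy, 0, 1, 0, lambda x: 1 - x)

# For this pdf, X + Y is Gamma(2,1), so the exact answer is 1 - 2/e.
print(f"numerical: {p:.6f}   exact: {1 - 2/np.e:.6f}")

Both values come out to about 0.2642.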


Properties of f$ _{XY} $:

$ \bullet f_{XY}(x,y)\geq 0\qquad\forall x,y\in\mathbb R $
$ \bullet \int\int_{\mathbb R^2}f_{XY}(x,y)dxdy = 1 $
$ \bullet F_{XY}(x,y) = \int_{-\infty}^{y}\int_{-\infty}^xf_{XY}(x',y')dx'dy'\qquad\forall(x,y)\in\mathbb R^2 $
$ \begin{align} \bullet &f_X(x) = \int_{-\infty}^{\infty}f_{XY}(x,y)dy \\ &f_Y(y) = \int_{-\infty}^{\infty}f_{XY}(x,y)dx \end{align} $ are the marginal pdfs of X and Y.
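As a simple illustration (not from these notes), if f$ _{XY} $(x,y) = 1 on the unit square [0,1]$ ^2 $ and 0 elsewhere, then

$ f_X(x) = \int_{-\infty}^{\infty}f_{XY}(x,y)dy = \int_0^1dy = 1\qquad\mbox{for }0\leq x\leq 1 $

so X is uniform on [0,1], and by symmetry so is Y.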



The Joint Probability Mass Function

If X and Y are discrete random variables, we will use the joint pmf, given by

$ p_{XY}(x,y) = P(X=x,Y=y)\qquad \forall(x,y)\in\mathcal R_X \times\mathcal R_Y $
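For example (a simple illustration, not from these notes), if X and Y record two independent fair coin flips, each taking values in {0,1}, then

$ p_{XY}(x,y) = \frac{1}{4}\qquad\forall(x,y)\in\{0,1\}^2 $

and the marginal pmfs are p$ _X $(x) = p$ _Y $(y) = 1/2.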

Note that if X is continuous and Y discrete (or vice versa), we will be interested in

$ P(\{X\in A\}\cap\{Y=y\}),\;\;A\in B(\mathbb R),\;y\in\mathcal R_Y $

We often use a form of Bayes' Theorem, which we will discuss later, to get this probability.



Joint Gaussian Random Variables

An important special case is that of jointly Gaussian random variables: X and Y are jointly Gaussian if their joint pdf is given by

$ f_{XY}(x,y)=\frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-r^2}}\exp\left\{-\frac{1}{2(1-r^2)}\left[\frac{(x-\mu_X)^2}{\sigma_X^2}-\frac{2r(x-\mu_X)(y-\mu_Y)}{\sigma_X\sigma_Y}+\frac{(y-\mu_Y)^2}{\sigma_Y^2}\right]\right\} $

where μ$ _X $, μ$ _Y $, σ$ _X $, σ$ _Y $, r ∈ R; σ$ _X $, σ$ _Y $ > 0; -1 < r < 1.

It can be shown that if X and Y are jointly Gaussian, then X is N(μ$ _X $, σ$ _X $$ ^2 $) and Y is N(μ$ _Y $, σ$ _Y $$ ^2 $) (proof).
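For the special case r = 0, this is easy to verify: the joint pdf factors as

$ f_{XY}(x,y) = \frac{1}{\sqrt{2\pi}\sigma_X}e^{-\frac{(x-\mu_X)^2}{2\sigma_X^2}}\cdot\frac{1}{\sqrt{2\pi}\sigma_Y}e^{-\frac{(y-\mu_Y)^2}{2\sigma_Y^2}} $

so integrating out y leaves the N(μ$ _X $, σ$ _X $$ ^2 $) density for X. The general case r ≠ 0 follows by completing the square in the exponent.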


Special Case

We often model X and Y as jointly Gaussian with μ$ _X $ = μ$ _Y $ = 0, σ$ _X $ = σ$ _Y $ = σ, r = 0, so that

$ f_{XY}(x,y) = \frac{1}{2\pi\sigma^2}e^{-\frac{x^2+y^2}{2\sigma^2}} $


Example $ \qquad $ Let X and Y be jointly Gaussian with μ$ _X $ = μ$ _Y $ = 0, σ$ _X $ = σ$ _Y $ = σ, r = 0. Find the probability that (X,Y) lies within a distance d from the origin.

Let

$ D_d = \{(x,y)\in\mathbb R^2:\;x^2+y^2\leq d^2\} $


Fig 4: The shaded region shows D$ _d $ = {(x,y)∈R$ ^2 $: x$ ^2 $+y$ ^2 $ ≤ d$ ^2 $}


Then

$ P((X,Y)\in D_d) = \int\int_{D_d}\frac{1}{2\pi\sigma^2}e^{-\frac{x^2+y^2}{2\sigma^2}}dxdy $

Use polar coordinates to make integration easier: let

$ \begin{align} r&=\sqrt{x^2+y^2} \\ \theta &= \tan^{-1}\left(\frac{y}{x}\right) \end{align} $

so that x = r cos$ \theta $, y = r sin$ \theta $, and dxdy = rdrd$ \theta $.

Then

$ \begin{align} P((X,Y)\in D_d) &= \int_{-\pi}^{\pi}\int_{0}^{d}f_{XY}(r\cos\theta,r\sin\theta)rdrd\theta \\ &= \int_{-\pi}^{\pi}\int_{0}^{d} \frac{r}{2\pi\sigma^2}e^{-\frac{r^2}{2\sigma^2}}drd\theta \\ &= \int_{0}^{d} \frac{r}{\sigma^2}e^{-\frac{r^2}{2\sigma^2}}dr \\ &= 1-e^{-\frac{d^2}{2\sigma^2}} \end{align} $


As a function of d, the probability that (X,Y) lies within distance d of the origin looks like the graph in Figure 5.

Fig 5: P({X$ ^2 $+Y$ ^2 $ ≤ d$ ^2 $}) plotted as a function of d
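A minimal Monte Carlo sanity check of this result (a sketch; the values of σ and d are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
sigma, d = 1.0, 1.5   # arbitrary illustrative values
n = 1_000_000

# Draw X and Y i.i.d. N(0, sigma^2), i.e. jointly Gaussian with r = 0.
x = rng.normal(0.0, sigma, n)
y = rng.normal(0.0, sigma, n)

# Empirical probability that (X,Y) lands in the disk of radius d.
empirical = np.mean(x**2 + y**2 <= d**2)

# Closed-form result derived above.
exact = 1.0 - np.exp(-d**2 / (2.0 * sigma**2))

print(f"Monte Carlo: {empirical:.4f}   exact: {exact:.4f}")

With these values both numbers come out near 0.675.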






Questions and comments

If you have any questions, comments, etc. please post them on this page



Back to all ECE 600 notes
