
Revision as of 07:22, 1 October 2013


Random Variables and Signals

Topic 6: Random Variables: Distributions



How do we find, compute, and model P(X ∈ A) for a random variable X for all A ∈ B(R)? We use three different functions:

  1. the cumulative distribution function (cdf)
  2. the probability density function (pdf)
  3. the probability mass function (pmf)

We will discuss these in this order, although we could come at this discussion in a different way and a different order and arrive at the same place.

Definition $ \quad $ The cumulative distribution function (cdf) of X is defined as

$ \begin{align} F_X(x) &= P_X((-\infty,x])\;\forall x\in \mathbb R \\ &= P(X^{-1}((-\infty, x])) \\ &=P(\{ \omega\in\mathcal S:\;X(\omega)\leq x \}) \end{align} $

Notation $ \quad $ Normally, we write this as

$ F_X(x) = P(X\leq x) $

So $ F_X(x) $ tells us $ P_X(A) $ if A = (-∞,x] for some real x.
What about other A ∈ B(R)? It can be shown that any A ∈ B(R) can be written as a countable sequence of set operations (unions, intersections, complements) on intervals of the form (-∞,x$ _n $], so we can use the probability axioms to find $ P_X(A) $ from $ F_X $ for any A ∈ B(R). In practice, however, this is not normally how we compute probabilities; this will be discussed more later.
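The idea above can be sketched numerically. In this hypothetical example (not from the notes), X is taken to be Exponential(1), so that $ F_X(x) = 1-e^{-x} $ for x ≥ 0; the probability of a disjoint union of intervals is then built from cdf differences using countable additivity:

```python
import math

def F(x):
    # cdf of an assumed Exponential(1) random variable: F_X(x) = 1 - e^{-x} for x >= 0
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

def prob_interval(a, b):
    # P(a < X <= b) = F_X(b) - F_X(a)
    return F(b) - F(a)

# A disjoint union of intervals (0,1] u (2,3]: additivity gives P_X(A) as a sum
A = [(0.0, 1.0), (2.0, 3.0)]
p = sum(prob_interval(a, b) for a, b in A)
print(p)
```

Any Borel set built from countably many such intervals is handled the same way, which is the sense in which $ F_X $ determines $ P_X $.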

Can an arbitrary function $ F_X $ be a valid cdf? No, it cannot.
Properties of a valid cdf:
1.

$ \lim_{x\rightarrow\infty}F_X(x) = 1\;\;\mbox{and}\;\;\lim_{x\rightarrow -\infty}F_X(x) = 0 $

This is because

$ \lim_{x\rightarrow\infty}F_X(x) = P(\{ \omega\in\mathcal S:\;X(\omega)\leq\infty \})=1 $

and

$ \lim_{x\rightarrow -\infty}F_X(x)= P(\varnothing)= 0 $

2. For any $ x_1,x_2 $ ∈ R such that $ x_1<x_2 $,

$ F_X(x_1)\leq F_X(x_2) $

i.e. $ F_X(x) $ is a nondecreasing function.
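Properties 1 and 2 can be checked numerically for a concrete cdf. A minimal sketch, again assuming X ~ Exponential(1) purely for illustration:

```python
import math

def F(x):
    # cdf of an assumed Exponential(1) random variable
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

# Property 1: F_X(x) -> 0 as x -> -inf and F_X(x) -> 1 as x -> +inf
assert F(-1e6) == 0.0
assert abs(F(50.0) - 1.0) < 1e-12

# Property 2: F_X is nondecreasing, checked here on a grid of points
xs = [k / 10.0 for k in range(-50, 51)]
assert all(F(a) <= F(b) for a, b in zip(xs, xs[1:]))
print("properties 1 and 2 hold on the grid")
```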

3. $ F_X $ is continuous from the right, i.e.

$ F_X(x^+) \equiv \lim_{\epsilon\rightarrow0,\epsilon>0}F_X(x+\epsilon)=F_X(x)\;\;\forall x\in\mathbb R $

Proof: First, we need some results from analysis and measure theory:
(i) For a sequence of sets $ A_1, A_2,... $, if $ A_1 $ ⊃ $ A_2 $ ⊃ ..., then

$ \lim_{n\rightarrow\infty}A_n = \bigcap_{n=1}^{\infty}A_n $

(ii) If $ A_1 $ ⊃ $ A_2 $ ⊃ ..., then

$ P(\lim_{n\rightarrow\infty}A_n) = \lim_{n\rightarrow\infty}P(A_n) $

(iii) We can write $ F_X(x^+) $ as

$ F_X(x^+) = \lim_{n\rightarrow\infty}F_X(x+\frac{1}{n}) $

Now let

$ A_n = \{X\leq x+\frac{1}{n}\} $

Then

$ \begin{align} F_X(x^+) &= \lim_{n\rightarrow\infty}P(X\leq x+\frac{1}{n}) \\ &=\lim_{n\rightarrow\infty}P(A_n) \\ &=P(\lim_{n\rightarrow\infty}A_n) \\ &=P(\bigcap_{n=1}^{\infty}A_n) \\ &=P(\bigcap_{n=1}^{\infty}\{X\leq x+\frac{1}{n}\})\\ &=P(\{X\leq x\}) \\ &=F_X(x) \end{align} $

4. $ P(X>x) = 1-F_X(x) $ for all x ∈ R

5. If $ x_1 < x_2 $, then

$ P(x_1<X\leq x_2) = F_X(x_2) - F_X(x_1)\;\forall x_1,x_2\in\mathbb R $

6. $ P(\{X=x\})= F_X(x) - F_X(x^-) $, where

$ F_X(x^-) = \lim_{\epsilon\rightarrow 0,\epsilon>0} F_X(x-\epsilon) $
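Property 6 says that a jump in the cdf carries a point mass. A small sketch, assuming a fair-coin random variable (X = 0 or 1, each with probability 1/2 — an illustrative choice, not from the notes), approximates $ F_X(x^-) $ with a small ε:

```python
def F(x):
    # cdf of an assumed fair-coin random variable: P(X=0) = P(X=1) = 1/2
    if x < 0:
        return 0.0
    if x < 1:
        return 0.5
    return 1.0

def point_mass(x, eps=1e-9):
    # P(X = x) = F_X(x) - F_X(x^-), with the left limit approximated by F(x - eps)
    return F(x) - F(x - eps)

print(point_mass(0.0))   # jump at 0: mass 1/2
print(point_mass(0.5))   # no jump at 0.5: mass 0
```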



The Probability Density Function

Definition $ \quad $ The probability density function (pdf) of a random variable X is the derivative of the cdf of X,

$ f_X(x) = \frac{dF_X(x)}{dx} $

at points where $ F_X $ is differentiable.
From the Fundamental Theorem of Calculus, we then have that

$ F_X(x)=\int_{-\infty}^xf_X(r)dr\;\;\forall x\in\mathbb R $
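This integral relationship can be checked numerically. The sketch below assumes an Exponential(1) pdf $ f_X(x) = e^{-x} $ for x ≥ 0 (an illustrative choice) and recovers the cdf with a simple trapezoid rule:

```python
import math

def f(x):
    # pdf of an assumed Exponential(1) random variable
    return math.exp(-x) if x >= 0 else 0.0

def F_numeric(x, n=10000):
    # approximate F_X(x) = integral of f from -inf to x (f vanishes below 0)
    # using the trapezoid rule on [0, x]
    if x <= 0:
        return 0.0
    h = x / n
    s = 0.5 * (f(0.0) + f(x)) + sum(f(k * h) for k in range(1, n))
    return s * h

# F_numeric(1.0) should be close to the exact cdf value 1 - e^{-1}
print(F_numeric(1.0))
```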

Important note: the cdf $ F_X $ might not be differentiable everywhere. At points where $ F_X $ is not differentiable, we can use the Dirac delta function to define $ f_X $.

Definition $ \quad $ The Dirac Delta Function $ \delta(x) $ is the function satisfying the properties:
1.

$ \delta(x) = 0 \;\forall x\neq0 $

2.

$ \int_{-\infty}^{\infty} \delta(x)dx = \int_{-\epsilon}^{\epsilon}\delta(x)dx = 1\;\forall\epsilon>0 $

If $ F_X $ is not differentiable at a point, use $ \delta(x) $ at that point to represent $ f_X $.

Why do we do this? Consider the step function $ u(x) $, which is discontinuous and thus not differentiable at $ x=0 $. This is a common type of discontinuity we see in cdfs. The derivative of $ u(x) $ is defined as

$ \frac{du(x)}{dx}=\lim_{h\rightarrow 0}\frac{u(x+h)-u(x)}{h} $

This limit does not exist at $ x=0 $

Let's look at the function

$ g(x) = \frac{u(x+h)-u(x)}{h} $

It looks like this:

Fig 1: g(x) for h>0

For any x ≠ 0, we have that

$ \frac{u(x+h)-u(x)}{h}=0 $

for small enough h.
Also, ∀ $ \epsilon $>0,

$ \int_{-\epsilon}^{\epsilon}g(x)dx= 1\; \forall h<\epsilon $

So, in the limit, the function g(x) has the properties of the $ \delta $-function as h tends to 0. A similar argument can be made for h<0.
So this is why it is sometimes written that

$ \frac{du(x)}{dx} = \delta(x) $

Since we will only work with non-differentiable functions that have step discontinuities as cdfs, we write

$ f_X(x) = \frac{dF_X(x)}{dx} $

with the understanding that $ d/dx $ is not necessarily the traditional definition of the derivative.

Properties of the pdf:
1. (proof)

$ f_X(x)\geq 0\;\forall x\in \mathbb R $

2. (proof)

$ \int_{-\infty}^{\infty}f_X(x)dx = 1 $

3. (proof) If $ x_1<x_2 $, then

$ P(x_1<X\leq x_2) = \int_{x_1}^{x_2}f_X(x)dx $

Some notes:

  • We introduced the concept of a pdf in our discussion of probability spaces. We could have defined the pdf of a random variable X as a function $ f_X $ satisfying properties 1 and 2 above, and then define $ F_X $ in terms of $ f_X $.
  • $ f_X(x) $ is not a probability for a fixed x; instead, it gives us the "probability density", so it must be integrated to give us a probability.
  • In practice, to compute probabilities of random variable X, we normally use
$ P(X\in A) = \int_{A}f_X(x)dx $
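The last note above can be made concrete: for a set A built from disjoint intervals, integrate the pdf over each piece and add. A sketch, again assuming the illustrative Exponential(1) pdf:

```python
import math

def f(x):
    # assumed Exponential(1) pdf
    return math.exp(-x) if x >= 0 else 0.0

def integrate(g, a, b, n=20000):
    # trapezoid-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + k * h) for k in range(1, n)))

# P(X in A) for A = (1,2] u (3,4]: sum the integral of the pdf over each interval
A = [(1.0, 2.0), (3.0, 4.0)]
p = sum(integrate(f, a, b) for a, b in A)
print(p)
```

The result agrees (up to the quadrature error) with the cdf differences $ F_X(2)-F_X(1) $ plus $ F_X(4)-F_X(3) $, tying the pdf computation back to property 5 of the cdf.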





Back to all ECE 600 notes
