Random Variables and Signals
Topic 4: Statistical Independence
Definition
Most people have an intuitive notion of independence; here we formalize that notion for probability theory.
Definition $ \qquad $ Given (S,F,P) and A, B ∈ F, events A and B are statistically independent iff

$ P(A\cap B) = P(A)P(B) $
This definition may seem more intuitive if we note that, since $ P(A\cap B) = P(A|B)P(B) $, A and B are statistically independent iff P(A|B)P(B) = P(A)P(B), which means that P(A|B) = P(A) whenever P(B) > 0: knowing that B occurred does not change the probability of A.
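Example $ \qquad $ As a concrete illustration (a standard example, not from the lecture): roll a fair six-sided die, and let A = {outcome is even} and B = {outcome is at most 4}. Then

$ P(A) = \frac{1}{2}, \quad P(B) = \frac{2}{3}, \quad P(A\cap B) = P(\{2,4\}) = \frac{1}{3} = P(A)P(B) $

so A and B are independent. Equivalently, $ P(A|B) = \frac{1/3}{2/3} = \frac{1}{2} = P(A) $.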
Notation $ \qquad $ A, B are independent ⇔ $ A \perp B $
Note $ \qquad $ We will usually refer to statistical independence as simply independence in this course.
Note $ \qquad $ If A and B are independent, then A' and B are also independent.
Proof:
Since A and B are independent, P(A∩B) = P(A)P(B), and B is the disjoint union of A∩B and A'∩B, so
$ \begin{align} P(A'\cap B) &= P(B) - P(A\cap B) \\ &= P(B) - P(A)P(B) \\ &= P(B)[1-P(A)] \\ &= P(B)P(A') \end{align} $
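As a quick numerical sanity check (a minimal sketch, not part of the original notes), the following Python snippet estimates the probabilities in the die example above by Monte Carlo simulation, confirming both the independence of A and B and the complement result:

```python
import random

# Monte Carlo sketch: estimate probabilities for the fair-die example,
# where A = {outcome is even} and B = {outcome <= 4}, and check that
# P(A n B) = P(A)P(B) and P(A' n B) = P(A')P(B) approximately hold.
N = 100_000
count_A = count_B = count_AB = count_AcB = 0

for _ in range(N):
    roll = random.randint(1, 6)        # one fair die roll
    in_A = (roll % 2 == 0)             # event A: even outcome
    in_B = (roll <= 4)                 # event B: outcome at most 4
    count_A += in_A
    count_B += in_B
    count_AB += in_A and in_B
    count_AcB += (not in_A) and in_B

pA, pB = count_A / N, count_B / N
print(f"P(A)P(B)  = {pA * pB:.4f}  vs  estimated P(A n B)  = {count_AB / N:.4f}")
print(f"P(A')P(B) = {(1 - pA) * pB:.4f}  vs  estimated P(A' n B) = {count_AcB / N:.4f}")
```

Both pairs of printed values should agree to within the sampling error of the simulation.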
Definition $ \qquad $ Events $ A_1,A_2,...,A_n $ are statistically independent iff for every $ k=2,...,n $ and every choice of indices $ 1\le j_1<j_2<\cdots<j_k\le n $, we have that

$ P(A_{j_1}\cap A_{j_2}\cap\cdots\cap A_{j_k}) = P(A_{j_1})P(A_{j_2})\cdots P(A_{j_k}) $
Note that to show that a collection of sets is disjoint, we only need to consider pairwise intersections of the sets. But to show that a collection of events is independent, we must consider every possible combination of two or more events from the collection; pairwise independence alone is not enough, as the following example shows.
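Example $ \qquad $ (a standard counterexample, not from the lecture) Toss a fair coin twice, and let A = {first toss is heads}, B = {second toss is heads}, and C = {the two tosses agree}. Then P(A) = P(B) = P(C) = 1/2, and

$ P(A\cap B) = P(A\cap C) = P(B\cap C) = \frac{1}{4} $

so the events are pairwise independent. But $ A\cap B\cap C = \{HH\} $, so

$ P(A\cap B\cap C) = \frac{1}{4} \neq \frac{1}{8} = P(A)P(B)P(C) $

and A, B, C are not statistically independent.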
Note that for a given n, there are $ 2^n-(n+1) $ such conditions, one for each subset of $ \{A_1,...,A_n\} $ containing at least two events. To see this, use the Binomial Theorem: $ \sum_{k=0}^{n}\binom{n}{k} = (1+1)^n = 2^n $, so

$ \sum_{k=2}^{n}\binom{n}{k} = 2^n - \binom{n}{0} - \binom{n}{1} = 2^n - (n+1) $

For example, when n = 3 there are $ 2^3-4 = 4 $ conditions: the three pairwise products and one triple product.
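A short enumeration (illustrative only; the function name is my own) confirms this count by listing all subsets of size at least two:

```python
from itertools import combinations

# Count the independence conditions for n events: one condition per
# subset of {A_1, ..., A_n} containing at least two events (k >= 2).
def num_conditions(n: int) -> int:
    return sum(1 for k in range(2, n + 1)
                 for _ in combinations(range(n), k))

for n in range(2, 8):
    assert num_conditions(n) == 2**n - (n + 1)   # matches the formula above
    print(n, num_conditions(n))
```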
References
- M. Comer. ECE 600. Class Lecture. Random Variables and Signals. School of Electrical and Computer Engineering, Purdue University. Fall 2013.