Random Variables and Signals

Topic 4: Statistical Independence



Definition

People generally have an intuitive idea of independence, but we will formalize it for probability theory here.

Definition $ \qquad $ Given a probability space (S,F,P) and events A, B ∈ F, A and B are statistically independent iff

$ P(A\cap B) = P(A)P(B) $

It may seem more intuitive if we note that, from the definition above, A and B are statistically independent iff P(A|B)P(B) = P(A)P(B), which means P(A|B) = P(A) when P(B) > 0: knowing that B has occurred does not change the probability of A.
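For example (a standard illustration added here, not part of the original notes), toss a fair coin twice, so that S = {HH, HT, TH, TT} with all four outcomes equally likely. Let A = "first toss is heads" and B = "second toss is heads". Then

$ P(A\cap B) = P(\{HH\}) = \frac{1}{4} = \frac{1}{2}\cdot\frac{1}{2} = P(A)P(B) $

so A and B are independent, matching the intuition that the outcome of the first toss tells us nothing about the second.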

Notation $ \qquad $ A, B are independent ⇔

$ A \perp\!\!\!\perp B $

Note $ \qquad $ We will usually refer to statistical independence simply as independence in this course.

Note $ \qquad $ If A and B are independent, then A' and B are also independent.

Proof:

$ \begin{align} B&=(A\cap B)\cup(A'\cap B), \text{ a disjoint union} \\ \Rightarrow P(B)&=P(A\cap B)+P(A'\cap B) \\ &=P(A)P(B) + P(A'\cap B) \end{align} $
$ \begin{align} \Rightarrow P(A'\cap B) &= P(B) - P(A)P(B) \\ &= P(B)[1-P(A)] \\ &= P(B)P(A') \end{align} $
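Continuing the coin-toss example above (again an added illustration), A' = "first toss is tails", and

$ P(A'\cap B) = P(\{TH\}) = \frac{1}{4} = \frac{1}{2}\cdot\frac{1}{2} = P(A')P(B) $

consistent with the result just proved.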

Definition $ \qquad $ Events $ A_1,A_2,...,A_n $, for any finite n, are statistically independent iff for any $ k=1,2,...,n $ and any distinct indices 1≤$ j_1<j_2<...<j_k $≤n, we have that

$ P(A_{j_1}\cap...\cap A_{j_k}) = \prod_{i=1}^k P(A_{j_i}) $

Note that to show that a collection of sets is disjoint, we only need to consider pairwise intersections of the sets. But to show that a collection of sets is independent, we must check every possible sub-collection of two or more sets; pairwise independence is not enough, as the counterexample below shows.
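A standard counterexample (added here for illustration; it is not part of the original notes): toss a fair coin twice and let A = "first toss is heads", B = "second toss is heads", and C = "both tosses agree". Then P(A) = P(B) = P(C) = 1/2, and

$ P(A\cap B) = P(A\cap C) = P(B\cap C) = \frac{1}{4} $

so the events are pairwise independent, but

$ P(A\cap B\cap C) = P(\{HH\}) = \frac{1}{4} \neq \frac{1}{8} = P(A)P(B)P(C) $

so A, B, and C are not statistically independent.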
Note that for a given n, there are $ 2^n-(n+1) $ conditions to check, one for each sub-collection containing at least two of the sets. To see this, use the Binomial Theorem with a = b = 1:

$ \sum_{k=0}^n {n\choose k}a^kb^{n-k} = (a+b)^n \;\;\Rightarrow\;\; \sum_{k=0}^n {n\choose k} = 2^n $

Subtracting the empty sub-collection (k = 0) and the n single sets (k = 1), which impose no conditions, leaves $ 2^n-(n+1) $.
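For instance (a worked count added here), with n = 3 there are $ 2^3-(3+1) = 4 $ conditions: the three pairwise conditions $ P(A_i\cap A_j) = P(A_i)P(A_j) $ plus the one triple condition $ P(A_1\cap A_2\cap A_3) = P(A_1)P(A_2)P(A_3) $. The counterexample above shows that the triple condition can fail even when all three pairwise conditions hold.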





