m (Protected "ECE600 F13 Random Vectors mhossain" [edit=sysop:move=sysop]) |
|||
(8 intermediate revisions by 2 users not shown) | |||
Line 3: | Line 3: | ||
[[ECE600_F13_notes_mhossain|Back to all ECE 600 notes]] | [[ECE600_F13_notes_mhossain|Back to all ECE 600 notes]] | ||
+ | |||
+ | [[Category:ECE600]] | ||
+ | [[Category:probability]] | ||
+ | [[Category:lecture notes]] | ||
+ | [[Category:slecture]] | ||
<center><font size= 4> | <center><font size= 4> | ||
− | '''Random Variables and Signals''' | + | [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] |
</font size> | </font size> | ||
+ | |||
+ | [https://www.projectrhea.org/learning/slectures.php Slectures] by [[user:Mhossain | Maliha Hossain]] | ||
+ | |||
<font size= 3> Topic 17: Random Vectors</font size> | <font size= 3> Topic 17: Random Vectors</font size> | ||
</center> | </center> | ||
− | |||
---- | ---- | ||
− | + | ---- | |
==Random Vectors== | ==Random Vectors== | ||
Line 20: | Line 27: | ||
is a '''random vector''' ('''RV''') on (''S,F,''P). | is a '''random vector''' ('''RV''') on (''S,F,''P). | ||
− | <center>[[Image:fig1_random_vectors.png|350px|thumb|left|Fig 1: The mapping from the sample space to the event space under X<math> | + | <center>[[Image:fig1_random_vectors.png|350px|thumb|left|Fig 1: The mapping from the sample space to the event space under X<math>_j</math>.]]</center> |
Line 36: | Line 43: | ||
: Note that B('''R'''<math>^n</math>) is the <math>\sigma</math>-field generated by the collection of all open n-dimensional hypercubes (more formally, k-cells) in '''R'''<math>^n</math>. | : Note that B('''R'''<math>^n</math>) is the <math>\sigma</math>-field generated by the collection of all open n-dimensional hypercubes (more formally, k-cells) in '''R'''<math>^n</math>. | ||
− | * The formula for the joint pdf | + | * The formula for the joint pdf of two functions of two random variables can be extended to find the pdf of n functions of n random variables (see Papoulis). |
* The random variables X<math>_1</math>,..., X<math>_n</math> are statistically independent if the events {X<math>_1</math> ∈ A<math>_1</math>},..., {X<math>_n</math> ∈ A<math>_n</math>} are independent ∀A<math>_1</math>, ..., A<math>_n</math> ∈ B('''R'''). An equivalent definition is that X<math>_1</math>,..., X<math>_n</math> are independent if <br/> | * The random variables X<math>_1</math>,..., X<math>_n</math> are statistically independent if the events {X<math>_1</math> ∈ A<math>_1</math>},..., {X<math>_n</math> ∈ A<math>_n</math>} are independent ∀A<math>_1</math>, ..., A<math>_n</math> ∈ B('''R'''). An equivalent definition is that X<math>_1</math>,..., X<math>_n</math> are independent if <br/> | ||
− | <center><math>f_{\underline X}(\underline x)=\prod_{i=1}^nf_{X_i}( | + | <center><math>f_{\underline X}(\underline x)=\prod_{i=1}^nf_{X_i}(x_i)\;\;\forall\underline x\in\mathbb R^n</math></center> |
Line 119: | Line 126: | ||
<center><math>\Phi_Z(\omega)=E\left[e^{i\omega\sum_{j=1}^nX_j}\right] = \Phi_{\underline X}(\omega,...,\omega)</math></center> | <center><math>\Phi_Z(\omega)=E\left[e^{i\omega\sum_{j=1}^nX_j}\right] = \Phi_{\underline X}(\omega,...,\omega)</math></center> | ||
− | If X<math>_1</math>,..., X<math>_n</math> are independent, | + | If X<math>_1</math>,..., X<math>_n</math> are independent, then <br/> |
<center><math>\Phi_Z(\omega)=\prod_{j=1}^n\Phi_{X_j}(\omega)</math></center> | <center><math>\Phi_Z(\omega)=\prod_{j=1}^n\Phi_{X_j}(\omega)</math></center> | ||
Line 132: | Line 139: | ||
'''Definition''' <math>\qquad</math> Let <span style="text-decoration:underline;">X</span> be a random vector on (''S,F'',P). Then <span style="text-decoration:underline;">X</span> is Gaussian and X<math>_1</math>,..., X<math>_n</math> are said to be jointly Gaussian iff <br/> | '''Definition''' <math>\qquad</math> Let <span style="text-decoration:underline;">X</span> be a random vector on (''S,F'',P). Then <span style="text-decoration:underline;">X</span> is Gaussian and X<math>_1</math>,..., X<math>_n</math> are said to be jointly Gaussian iff <br/> | ||
<center><math>Z=a_0+\sum_{j=1}^na_jX_j</math></center> | <center><math>Z=a_0+\sum_{j=1}^na_jX_j</math></center> | ||
− | is a Gaussian random variable ∀[a<math> | + | is a Gaussian random variable ∀[a<math>_0</math>,..., a<math>_n</math>] ∈ '''R'''<math>^{n+1}</math>. |
Now we will show that the characteristic function of a Gaussian random vector <span style="text-decoration:underline;">X</span> is <br/> | Now we will show that the characteristic function of a Gaussian random vector <span style="text-decoration:underline;">X</span> is <br/> | ||
Line 145: | Line 152: | ||
Then Z is a Gaussian random variable since <span style="text-decoration:underline;">X</span> is Gaussian. So <br/> | Then Z is a Gaussian random variable since <span style="text-decoration:underline;">X</span> is Gaussian. So <br/> | ||
− | <center><math>\Phi_Z(\omega)=e^{i\omega\mu_Z}e^{-\frac{1}{2}\omega^2\ | + | <center><math>\Phi_Z(\omega)=e^{i\omega\mu_Z}e^{-\frac{1}{2}\omega^2\sigma_Z^2}</math></center> |
where <br/> | where <br/> | ||
Line 173: | Line 180: | ||
Plugging the expressions for <math>\mu_Z</math> and <math>\sigma_Z</math><math>^2</math> into Φ<math>_Z</math> gives <br/> | Plugging the expressions for <math>\mu_Z</math> and <math>\sigma_Z</math><math>^2</math> into Φ<math>_Z</math> gives <br/> | ||
− | <center><math>\Phi_{\underline X}(\omega) = | + | <center><math>\Phi_{\underline X}(\omega) =e^{i\underline\Omega^T\mu_{\underline X}}e^{-\frac{1}{2}\underline\Omega^TC_{\underline X}\underline\Omega}</math></center> |
+ | |||
+ | Note that we can use the equation <br/> | ||
+ | <center><math>f_{\underline X}(\underline x) = \frac{1}{(2\pi)^n}\int_{\mathbb R^n}\Phi_{\underline X}(\underline\Omega)e^{-i\underline\Omega^T\underline x}d\underline\Omega</math></center> | ||
+ | to show that if <span style="text-decoration:underline;">X</span> is Gaussian, then <br/> | ||
+ | <center><math>f_{\underline X}(\underline x) = \frac{1}{\sqrt{(2\pi)^n|C_{\underline X}|}}e^{-\frac{1}{2}(\underline X-\mu_{\underline X})^TC_{\underline X}^{-1}(\underline X-\mu_{\underline X})}</math></center><br/> | ||
+ | <center><math>\forall \underline x\in\mathbb R^n</math></center> | ||
+ | |||
+ | Note that if X<math>_1</math>,..., X<math>_n</math> are pairwise uncorrelated, then C<span style="text-decoration: underline;"><sub>X</sub></span> is the covariance matrix of <span style="text-decoration: underline;">X</span> is diagonal and <br/> | ||
+ | <center><math>f_{\underline X}(\underline x) = \frac{e^{-\frac{1}{2}\sum_{j=1}^n\frac{x_j^2}{\sigma_j^2}}}{\sqrt{(2\pi)^n\prod_{j=1}^n\sigma_j^2}}</math></center> | ||
+ | |||
+ | Then, we can find the joint characteristic function and the joint pdf of the jointly Gaussian random variables X and Y using the forms for a Gaussian random vector with n = 2, X<math>_!</math> = X and X<math>_2</math> = Y. | ||
+ | |||
+ | |||
+ | ---- | ||
+ | |||
+ | == References == | ||
+ | |||
+ | * [https://engineering.purdue.edu/~comerm/ M. Comer]. ECE 600. Class Lecture. [https://engineering.purdue.edu/~comerm/600 Random Variables and Signals]. Faculty of Electrical Engineering, Purdue University. Fall 2013. | ||
+ | |||
+ | |||
+ | ---- | ||
+ | |||
+ | ==[[Talk:ECE600_F13_Random_Vectors_mhossain|Questions and comments]]== | ||
+ | |||
+ | If you have any questions, comments, etc. please post them on [[Talk:ECE600_F13_Random_Vectors_mhossain|this page]] | ||
+ | |||
+ | |||
+ | ---- | ||
+ | |||
+ | [[ECE600_F13_notes_mhossain|Back to all ECE 600 notes]] |
Latest revision as of 11:13, 21 May 2014
The Comer Lectures on Random Variables and Signals
Topic 17: Random Vectors
Random Vectors
Definition $ \qquad $ Let X$ _1 $,..., X$ _n $ be n random variables on (S,F,P). Then the column vector

$ \underline X = [X_1, X_2, \ldots, X_n]^T $
is a random vector (RV) on (S,F,P).
We can view X($ \omega $) as a point in R$ ^n $ ∀$ \omega $ ∈ S.
Much of what we need to work with random vectors we can get by a simple extension of what we have developed for n = 2.
For example:
- The cumulative distribution function of X is

$ F_{\underline X}(\underline x) = P(X_1 \leq x_1, \ldots, X_n \leq x_n) $

- and the probability density function of X is

$ f_{\underline X}(\underline x) = \frac{\partial^n F_{\underline X}(\underline x)}{\partial x_1 \cdots \partial x_n} $

- For any D ⊂ R$ ^n $ such that D ∈ B(R$ ^n $),

$ P(\underline X \in D) = \int_D f_{\underline X}(\underline x)\,d\underline x $
- Note that B(R$ ^n $) is the $ \sigma $-field generated by the collection of all open n-dimensional hypercubes (more formally, k-cells) in R$ ^n $.
- The formula for the joint pdf of two functions of two random variables can be extended to find the pdf of n functions of n random variables (see Papoulis).
- The random variables X$ _1 $,..., X$ _n $ are statistically independent if the events {X$ _1 $ ∈ A$ _1 $},..., {X$ _n $ ∈ A$ _n $} are independent ∀A$ _1 $, ..., A$ _n $ ∈ B(R). An equivalent definition is that X$ _1 $,..., X$ _n $ are independent if

$ f_{\underline X}(\underline x) = \prod_{i=1}^n f_{X_i}(x_i) \;\;\forall \underline x \in \mathbb R^n $
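The factorization criterion above can be checked numerically. The following sketch (not part of the original notes; the distributions are arbitrary illustrative choices) estimates both sides of P(X$ _1 $ ≤ a, X$ _2 $ ≤ b) = P(X$ _1 $ ≤ a)P(X$ _2 $ ≤ b) by Monte Carlo for independently drawn samples:

```python
import numpy as np

# Monte Carlo check of the independence criterion: for independent X1 and X2,
# P(X1 <= a, X2 <= b) = P(X1 <= a) * P(X2 <= b).
# The distributions below are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n_samples = 200_000
x1 = rng.exponential(scale=1.0, size=n_samples)      # X1 ~ Exp(1)
x2 = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # X2 ~ N(0,1), independent of X1

a, b = 1.0, 0.5
joint = np.mean((x1 <= a) & (x2 <= b))         # estimates P(X1 <= a, X2 <= b)
product = np.mean(x1 <= a) * np.mean(x2 <= b)  # estimates P(X1 <= a) P(X2 <= b)

print(abs(joint - product))  # close to 0 for independent samples
```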
Random Vectors: Moments
We will spend some time on moments of random vectors. We will be especially interested in pairwise covariances/correlations.
The correlation between X$ _j $ and X$ _k $ is denoted R$ _{jk} $, so

$ R_{jk} = E[X_jX_k] $

and the covariance is C$ _{jk} $:

$ C_{jk} = E[(X_j - \mu_j)(X_k - \mu_k)] = R_{jk} - \mu_j\mu_k $
For a random vector X, we define the correlation matrix R$ _{\underline X} $ as

$ R_{\underline X} = \begin{bmatrix} R_{11} & R_{12} & \cdots & R_{1n} \\ R_{21} & R_{22} & \cdots & R_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ R_{n1} & R_{n2} & \cdots & R_{nn} \end{bmatrix} $

and the covariance matrix C$ _{\underline X} $ as

$ C_{\underline X} = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix} $

The mean vector $ \mu_{\underline X} $ is

$ \mu_{\underline X} = E[\underline X] = \left[E[X_1], \ldots, E[X_n]\right]^T $

Note that the correlation matrix and the covariance matrix can be written as

$ R_{\underline X} = E[\underline X\,\underline X^T] $

$ C_{\underline X} = E\left[(\underline X - \mu_{\underline X})(\underline X - \mu_{\underline X})^T\right] = R_{\underline X} - \mu_{\underline X}\mu_{\underline X}^T $

Note that $ \mu_{\underline X} $, R$ _{\underline X} $ and C$ _{\underline X} $ are the moments we most commonly use for random vectors.
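These moments are easy to estimate from data. A minimal NumPy sketch (an addition to the notes; the mixing matrix and mean are arbitrary choices) estimates $ \mu_{\underline X} $, R$ _{\underline X} $ and C$ _{\underline X} $ from samples and confirms the identity C$ _{\underline X} $ = R$ _{\underline X} $ − $ \mu_{\underline X}\mu_{\underline X}^T $:

```python
import numpy as np

# Sample estimates of the mean vector, correlation matrix R_X = E[X X^T], and
# covariance matrix C_X = E[(X - mu)(X - mu)^T], with columns of X holding
# independent draws. The distribution is an arbitrary illustrative choice.
rng = np.random.default_rng(1)
n, n_samples = 3, 100_000
A = rng.normal(size=(n, n))                       # arbitrary mixing matrix
X = A @ rng.normal(size=(n, n_samples)) + np.array([[1.0], [2.0], [3.0]])

mu = X.mean(axis=1, keepdims=True)                # mean vector estimate (n x 1)
R = (X @ X.T) / n_samples                         # correlation matrix estimate
C = ((X - mu) @ (X - mu).T) / n_samples           # covariance matrix estimate

# The identity C_X = R_X - mu mu^T holds exactly for these sample estimates.
print(np.allclose(C, R - mu @ mu.T))
```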
We need to discuss an important property of RX, but first, a definition from Linear Algebra.
Definition $ \qquad $ An n × n matrix B with b$ _{ij} $ as its i,j$ ^{th} $ entry is non-negative definite (NND) (or positive semidefinite) if

$ \sum_{i=1}^n\sum_{j=1}^n x_i b_{ij} x_j \geq 0 $

for all real vectors [x$ _1 $,...,x$ _n $]$ ^T $ ∈ R$ ^n $. That is to say, for any real vector x, the quadratic form x$ ^T $Bx is non-negative.
Theorem $ \qquad $ For any random vector X, R$ _{\underline X} $ is NND.

Proof: $ \qquad $ Let $ \underline a = [a_1, \ldots, a_n]^T $ be an arbitrary real vector in R$ ^n $, and let

$ Z = \underline a^T\underline X = \sum_{j=1}^n a_jX_j $

be a scalar random variable. Then

$ 0 \leq E[Z^2] = E[\underline a^T\underline X\,\underline X^T\underline a] = \underline a^T E[\underline X\,\underline X^T]\,\underline a $

So

$ \underline a^T R_{\underline X}\,\underline a \geq 0 \;\;\forall \underline a \in \mathbb R^n $

and thus, R$ _{\underline X} $ is NND.

Note: C$ _{\underline X} $ is also NND, since $ \underline a^T C_{\underline X}\,\underline a = E[(Z - E[Z])^2] \geq 0 $.
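The theorem carries over to sample estimates: a sample correlation matrix is a (scaled) sum of outer products $ \underline x\,\underline x^T $, so it is NND exactly. The sketch below (an addition; the t distribution is an arbitrary choice) verifies this via eigenvalues and random quadratic forms:

```python
import numpy as np

# Numerical illustration of the theorem: a sample correlation matrix is a sum
# of outer products x x^T, so a^T R a >= 0 for every real a and all eigenvalues
# of the symmetric matrix R are non-negative (up to floating-point round-off).
# The heavy-tailed t distribution is an arbitrary illustrative choice.
rng = np.random.default_rng(2)
X = rng.standard_t(df=5, size=(4, 50_000))
R = (X @ X.T) / X.shape[1]

eigvals = np.linalg.eigvalsh(R)                                  # eigenvalues of symmetric R
quad = np.array([a @ R @ a for a in rng.normal(size=(100, 4))])  # random quadratic forms

print(eigvals.min(), quad.min())  # both non-negative up to round-off
```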
Characteristic Functions of Random Vectors
Definition $ \qquad $ Let X be a random vector on (S,F,P). Then the characteristic function of X is

$ \Phi_{\underline X}(\underline\Omega) = E\left[e^{i\underline\Omega^T\underline X}\right] = E\left[e^{i\sum_{j=1}^n\omega_jX_j}\right] $

where

$ \underline\Omega = [\omega_1, \ldots, \omega_n]^T \in \mathbb R^n $

The characteristic function Φ$ _{\underline X} $ is extremely useful for finding pdfs of sums of random variables.

Let

$ Z = \sum_{j=1}^n X_j $

Then

$ \Phi_Z(\omega) = E\left[e^{i\omega\sum_{j=1}^nX_j}\right] = \Phi_{\underline X}(\omega, \ldots, \omega) $
If X$ _1 $,..., X$ _n $ are independent, then

$ \Phi_Z(\omega) = \prod_{j=1}^n\Phi_{X_j}(\omega) $

If, in addition, X$ _1 $,..., X$ _n $ are identically distributed with common characteristic function Φ$ _X $, then

$ \Phi_Z(\omega) = \left[\Phi_X(\omega)\right]^n $
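The i.i.d. product formula can be checked by Monte Carlo. In the sketch below (an addition to the notes), X ~ Uniform(−1, 1) is chosen because its characteristic function has the closed form $ \Phi_X(\omega) = \sin\omega/\omega $, so the empirical characteristic function of the sum can be compared against $ [\Phi_X(\omega)]^n $:

```python
import numpy as np

# Monte Carlo check that Phi_Z(omega) = [Phi_X(omega)]^n for a sum of n i.i.d.
# random variables. For X ~ Uniform(-1, 1), Phi_X(omega) = sin(omega)/omega,
# an illustrative choice with a closed-form characteristic function.
rng = np.random.default_rng(3)
n, n_samples = 4, 400_000
X = rng.uniform(-1.0, 1.0, size=(n, n_samples))  # rows are i.i.d. copies of X
Z = X.sum(axis=0)

omega = 1.3
phi_Z = np.mean(np.exp(1j * omega * Z))          # empirical CF of the sum
phi_exact = (np.sin(omega) / omega) ** n         # [Phi_X(omega)]^n in closed form

print(abs(phi_Z - phi_exact))  # small: the estimate matches the product form
```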
Gaussian Random Vectors
Definition $ \qquad $ Let X be a random vector on (S,F,P). Then X is Gaussian and X$ _1 $,..., X$ _n $ are said to be jointly Gaussian iff

$ Z = a_0 + \sum_{j=1}^n a_jX_j $

is a Gaussian random variable ∀[a$ _0 $,..., a$ _n $] ∈ R$ ^{n+1} $.
Now we will show that the characteristic function of a Gaussian random vector X is

$ \Phi_{\underline X}(\underline\Omega) = e^{i\underline\Omega^T\mu_{\underline X}}e^{-\frac{1}{2}\underline\Omega^TC_{\underline X}\underline\Omega} $

where $ \mu_{\underline X} $ is the mean vector of X and C$ _{\underline X} $ is the covariance matrix.
Proof $ \qquad $ Let

$ Z = \underline\Omega^T\underline X = \sum_{j=1}^n\omega_jX_j $

for

$ \underline\Omega = [\omega_1, \ldots, \omega_n]^T \in \mathbb R^n $

Then Z is a Gaussian random variable since X is Gaussian. So

$ \Phi_Z(\omega) = e^{i\omega\mu_Z}e^{-\frac{1}{2}\omega^2\sigma_Z^2} $

where

$ \mu_Z = E[Z] = \underline\Omega^T\mu_{\underline X} $

and

$ \sigma_Z^2 = E[(Z - \mu_Z)^2] = \underline\Omega^TC_{\underline X}\underline\Omega $

where $ \mu_{\underline X} $ is the mean vector of X and C$ _{\underline X} $ is the covariance matrix of X.
Now

$ \Phi_{\underline X}(\underline\Omega) = E\left[e^{i\underline\Omega^T\underline X}\right] = E\left[e^{iZ}\right] = \Phi_Z(1) $

Plugging the expressions for $ \mu_Z $ and $ \sigma_Z^2 $ into Φ$ _Z $ and setting $ \omega = 1 $ gives

$ \Phi_{\underline X}(\underline\Omega) = e^{i\underline\Omega^T\mu_{\underline X}}e^{-\frac{1}{2}\underline\Omega^TC_{\underline X}\underline\Omega} $
Note that we can use the inversion formula

$ f_{\underline X}(\underline x) = \frac{1}{(2\pi)^n}\int_{\mathbb R^n}\Phi_{\underline X}(\underline\Omega)e^{-i\underline\Omega^T\underline x}\,d\underline\Omega $

to show that if X is Gaussian, then

$ f_{\underline X}(\underline x) = \frac{1}{\sqrt{(2\pi)^n|C_{\underline X}|}}e^{-\frac{1}{2}(\underline x - \mu_{\underline X})^TC_{\underline X}^{-1}(\underline x - \mu_{\underline X})} \;\;\forall \underline x \in \mathbb R^n $
Note that if X$ _1 $,..., X$ _n $ are pairwise uncorrelated, then the covariance matrix C$ _{\underline X} $ of X is diagonal, with entries $ \sigma_1^2, \ldots, \sigma_n^2 $, and

$ f_{\underline X}(\underline x) = \frac{e^{-\frac{1}{2}\sum_{j=1}^n\frac{(x_j - \mu_j)^2}{\sigma_j^2}}}{\sqrt{(2\pi)^n\prod_{j=1}^n\sigma_j^2}} = \prod_{j=1}^n f_{X_j}(x_j) $

so pairwise uncorrelated jointly Gaussian random variables are in fact independent.
Then, we can find the joint characteristic function and the joint pdf of the jointly Gaussian random variables X and Y using the forms for a Gaussian random vector with n = 2, X$ _1 $ = X and X$ _2 $ = Y.
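The factorization in the uncorrelated case can be verified directly by evaluating both forms of the pdf. The sketch below (an addition to the notes; the means and variances are arbitrary choices) compares the general vector pdf with a diagonal covariance matrix against the product of one-dimensional Gaussian pdfs:

```python
import numpy as np

# Check that for a diagonal covariance matrix the general Gaussian vector pdf
# factors into a product of one-dimensional Gaussian pdfs.
# The means and variances are arbitrary illustrative choices.
mu = np.array([1.0, -2.0, 0.5])
sigma2 = np.array([0.5, 2.0, 1.5])     # variances sigma_j^2 on the diagonal
C = np.diag(sigma2)
x = np.array([0.7, -1.0, 0.0])

n = len(mu)
d = x - mu
f_vector = np.exp(-0.5 * d @ np.linalg.inv(C) @ d) / np.sqrt((2 * np.pi) ** n * np.linalg.det(C))
f_product = np.prod(np.exp(-0.5 * d**2 / sigma2) / np.sqrt(2 * np.pi * sigma2))

print(np.isclose(f_vector, f_product))
```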
References
- M. Comer. ECE 600. Class Lecture. Random Variables and Signals. Faculty of Electrical Engineering, Purdue University. Fall 2013.
Questions and comments
If you have any questions, comments, etc. please post them on this page