Revision as of 18:09, 4 April 2013

Discriminant Functions For The Normal Density


       Let's begin with the continuous univariate normal, or Gaussian, density.

$ f_x = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left [- \frac{1}{2} \left ( \frac{x - \mu}{\sigma} \right)^2 \right ] $


for which the expected value of x is

$ \mu = \mathcal{E}[x] =\int\limits_{-\infty}^{\infty} xp(x)\, dx $

and where the expected squared deviation or variance is

$ \sigma^2 = \mathcal{E}[(x- \mu)^2] =\int\limits_{-\infty}^{\infty} (x- \mu)^2 p(x)\, dx $
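These two expectations can be checked numerically by replacing the integrals with sample averages. The sketch below is a Monte Carlo check using only Python's standard library, with hypothetical values μ = 2 and σ = 0.5 (not from the original text):

```python
import random

random.seed(0)
mu, sigma = 2.0, 0.5  # hypothetical parameters for the check

# Draw samples from N(mu, sigma^2); the sample mean and the mean squared
# deviation approximate the integrals defining mu and sigma^2.
xs = [random.gauss(mu, sigma) for _ in range(100_000)]

mu_hat = sum(xs) / len(xs)
var_hat = sum((x - mu_hat) ** 2 for x in xs) / len(xs)

print(mu_hat, var_hat)  # close to 2.0 and 0.25
```

Both estimates converge to μ and σ² at the usual Monte Carlo rate of roughly 1/√n.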

       The univariate normal density is completely specified by two parameters: its mean μ and variance σ². The function f_x can be written as N(μ,σ²), which says that x is distributed normally with mean μ and variance σ². Samples from a normal distribution tend to cluster about the mean, with a spread related to the standard deviation σ.
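As a quick sanity check of the density formula, it can be coded directly and compared against Python's standard-library statistics.NormalDist.pdf (a sketch, not part of the original text):

```python
import math
from statistics import NormalDist

def univariate_normal_pdf(x, mu, sigma):
    """f(x) = exp(-0.5 * ((x - mu) / sigma)**2) / (sqrt(2*pi) * sigma)."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (math.sqrt(2.0 * math.pi) * sigma)

# Agrees with the standard-library implementation of the same density.
print(univariate_normal_pdf(1.0, 0.0, 1.0))
print(NormalDist(0.0, 1.0).pdf(1.0))
```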

For the multivariate normal density in d dimensions, f_x is written as

$ f_x = \frac{1}{(2 \pi)^ \frac{d}{2} |\boldsymbol{\Sigma}|^\frac{1}{2}} \exp \left [- \frac{1}{2} (\mathbf{x} -\boldsymbol{\mu})^t\boldsymbol{\Sigma}^{-1} (\mathbf{x} -\boldsymbol{\mu}) \right] $

where x is a d-component column vector, μ is the d-component mean vector, Σ is the d-by-d covariance matrix, and |Σ| and Σ⁻¹ are its determinant and inverse, respectively. Also, (x − μ)^t denotes the transpose of (x − μ). Formally,

$ \boldsymbol{\mu} = \mathcal{E}[\mathbf{x}] = \int \mathbf{x}\, p(\mathbf{x})\, d\mathbf{x} $

and

$ \boldsymbol{\Sigma} = \mathcal{E} \left [(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^t \right] = \int(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^t p(\mathbf{x})\, d\mathbf{x} $

where the expected value of a vector or a matrix is found by taking the expected value of its individual components, i.e., if x_i is the i-th component of x, μ_i the i-th component of μ, and σ_ij the ij-th component of Σ, then

$ \mu_i = \mathcal{E}[x_i] $

and

$ \sigma_{ij} = \mathcal{E}[(x_i - \mu_i)(x_j - \mu_j)] $
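The componentwise definitions above can be illustrated with a small Monte Carlo sketch (a hypothetical 2-d example with independent components, so the true covariance matrix is diagonal; none of these values come from the original text):

```python
import random

random.seed(1)

# Hypothetical 2-d example: independent components, so the true
# covariance matrix is diag(sigma1^2, sigma2^2) = diag(1.0, 0.25).
mu_true = (1.0, -1.0)
sig_true = (1.0, 0.5)
n = 50_000
data = [(random.gauss(mu_true[0], sig_true[0]),
         random.gauss(mu_true[1], sig_true[1])) for _ in range(n)]

# mu_i = E[x_i]: componentwise sample means.
mu = [sum(x[i] for x in data) / n for i in range(2)]

# sigma_ij = E[(x_i - mu_i)(x_j - mu_j)]: componentwise sample covariances.
cov = [[sum((x[i] - mu[i]) * (x[j] - mu[j]) for x in data) / n
        for j in range(2)] for i in range(2)]

print(mu)   # near [1.0, -1.0]
print(cov)  # near [[1.0, 0.0], [0.0, 0.25]]
```

Note that the estimated matrix is symmetric by construction (σ_ij = σ_ji), just as Σ itself is.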
