Bayes rule in practice
A slecture by Lu Wang

(partially based on Prof. Mireille Boutin's ECE 662 lecture)



1. Bayes rule for Gaussian data

Given data $ x \in \mathbb{R}^{n} $ and N categories $ \{w_{i}\}, i = 1, 2, \ldots, N $, we decide which category the data belongs to by computing the posterior probability of each of the N categories and picking the one with the largest probability. Mathematically, this can be written as:


$ \max_{w_{i}} \rho \left(w_{i}|x\right) $

According to Bayes rule,

$ \rho \left(w_{i}|x\right) = \frac{\rho \left(x|w_{i}\right)Prob(w_{i})}{\rho \left(x\right)} $

Since the evidence $ \rho \left(x\right) $ does not depend on the category, maximizing the posterior is equivalent to

$ \max_{w_{i}} \rho \left(x|w_{i}\right)Prob(w_{i}) $

In our case, the data in each category is Gaussian distributed, so the class-conditional density is

$ \rho \left(x|w_{i}\right) = \frac{1}{(2\pi)^{\frac{n}{2}}|\mathbf{\Sigma}_{i}|^{\frac{1}{2}}}\mbox{exp}\left[{-\frac{1}{2}(x - \mu_{i})^T\mathbf{\Sigma}_{i}^{-1}(x - \mu_{i})}\right] $

where $ \mu_{i} $ and $ \mathbf{\Sigma}_{i} $ are the mean and covariance matrix of category $ w_{i} $.
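As a concrete illustration (not part of the slecture; the function and variable names below are my own), a minimal NumPy sketch for evaluating this density at a single sample might look like:

```python
import numpy as np

def gaussian_density(x, mu, sigma):
    """Evaluate the Gaussian class-conditional density rho(x | w_i) at a point x.

    x, mu : length-n vectors (sample and class mean)
    sigma : n-by-n class covariance matrix
    """
    n = x.shape[0]
    diff = x - mu
    # normalization constant 1 / ((2*pi)^(n/2) * |Sigma|^(1/2))
    norm_const = 1.0 / ((2.0 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(sigma)))
    # quadratic form (x - mu)^T Sigma^{-1} (x - mu), computed via a linear solve
    quad = diff @ np.linalg.solve(sigma, diff)
    return norm_const * np.exp(-0.5 * quad)
```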

Let

$ \begin{align}g_{i}(x) &= ln(\rho \left(x|w_{i}\right)Prob(w_{i})) \\ &= ln(\rho \left(x|w_{i}\right)) + ln(Prob(w_{i})) \\ &= -\frac{n}{2}ln(2\pi)-\frac{1}{2}ln(|\mathbf{\Sigma}_{i}|)-{\frac{1}{2}(x - \mu_{i})^T\mathbf{\Sigma}_{i}^{-1}(x - \mu_{i})} + ln(Prob(w_{i})) \end{align} $
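Continuing the hypothetical sketch above (NumPy already imported there), $ g_{i}(x) $ can be evaluated directly in the log domain, which avoids the numerical underflow that the raw density can suffer in high dimensions:

```python
def g_i(x, mu_i, sigma_i, prior_i):
    """Log-discriminant g_i(x) = ln rho(x | w_i) + ln Prob(w_i) for category w_i."""
    n = x.shape[0]
    diff = x - mu_i
    # ln|Sigma_i| computed stably via slogdet instead of log(det(...))
    _, logdet = np.linalg.slogdet(sigma_i)
    quad = diff @ np.linalg.solve(sigma_i, diff)
    return -0.5 * n * np.log(2.0 * np.pi) - 0.5 * logdet - 0.5 * quad + np.log(prior_i)
```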

Now, since the logarithm is monotonically increasing, maximizing $ \rho \left(x|w_{i}\right)Prob(w_{i}) $ is equivalent to maximizing its logarithm $ g_{i}(x) $, so the decision rule becomes

$ \max_{w_{i}} g_{i}(x) $
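As an illustration of this decision rule, a sketch built on the hypothetical g_i above simply evaluates $ g_{i}(x) $ for every category and returns the index of the largest value:

```python
def classify(x, class_params):
    """class_params: list of (mu_i, sigma_i, prior_i) tuples, one per category w_i.
    Returns the (0-based) index of the category that maximizes g_i(x)."""
    scores = [g_i(x, mu, sigma, prior) for mu, sigma, prior in class_params]
    return int(np.argmax(scores))

# Example with two hypothetical 2-D categories and equal priors
params = [(np.array([0.0, 0.0]), np.eye(2), 0.5),
          (np.array([2.0, 2.0]), np.eye(2), 0.5)]
print(classify(np.array([0.3, -0.1]), params))   # 0, since x is closer to the first mean
```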


For the two-category case, define the discriminant function as

$ g\left(x\right) = g_{1}\left(x\right) - g_{2}\left(x\right) $

and decide $ w_{1} $ if $ g\left(x\right) > 0 $; otherwise decide $ w_{2} $.
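In the same hypothetical sketch, the two-category rule reduces to checking the sign of $ g\left(x\right) $:

```python
def decide_two_class(x, params_1, params_2):
    """Decide w_1 when g(x) = g_1(x) - g_2(x) > 0, otherwise decide w_2.

    params_1, params_2 : (mu_i, sigma_i, prior_i) tuples for the two categories.
    """
    g = g_i(x, *params_1) - g_i(x, *params_2)
    return "w_1" if g > 0 else "w_2"
```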
