<br>
<center>
<font size="4">'''Bayes rule in practice''' <br> </font> <font size="2">A [https://www.projectrhea.org/learning/slectures.php slecture] by Lu Wang </font>

<font size="2">(partially based on Prof. [https://engineering.purdue.edu/~mboutin/ Mireille Boutin's] ECE [[ECE662|662]] lecture) </font>
</center>
----

----

== 1. Bayes rule for Gaussian data ==

Given data x ∈ R<span class="texhtml"><sup>''d''</sup></span> and N categories {w<span class="texhtml"><sub>''i''</sub></span>}, i = 1, 2, …, N, we decide which category the data belongs to by computing the probability of each of the N categories given the data, and picking the category with the largest probability. Mathematically, the decision is

<br>
<center><math>\max_{w_{i}} \rho \left(w_{i}|x\right)</math><br></center>
<br>
According to Bayes rule, and because the evidence <math>\rho(x)</math> does not depend on <math>w_{i}</math>, this maximization is equivalent to

<center><math>\max_{w_{i}} \rho \left(w_{i}|x\right) = \max_{w_{i}} \rho \left(x|w_{i}\right)Prob(w_{i})</math><br></center>
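As a concrete illustration (not part of the original slecture), the sketch below applies this decision rule to two hypothetical categories with Gaussian class-conditional densities; the means, covariances, and priors are made-up values chosen purely for the example.

<pre>
# Minimal sketch of the Bayes decision rule: pick the category w_i that
# maximizes rho(x | w_i) * Prob(w_i). All numbers below are hypothetical.
import numpy as np
from scipy.stats import multivariate_normal

# Two assumed categories in R^2 with made-up means, covariances, and priors
means  = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
covs   = [np.eye(2),            1.5 * np.eye(2)]
priors = [0.6,                  0.4]

def classify(x):
    """Return the index of the category maximizing rho(x|w_i) * Prob(w_i)."""
    scores = [multivariate_normal.pdf(x, mean=m, cov=c) * p
              for m, c, p in zip(means, covs, priors)]
    return int(np.argmax(scores))

x = np.array([1.2, 0.8])
print("x is assigned to category w_%d" % (classify(x) + 1))
</pre>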
In our case, the data within each category is Gaussian distributed. So we have,