Revision as of 20:07, 5 May 2014


Expected Value of the MLE Estimate of the Standard Deviation and Expected Deviation

A slecture by ECE student Zhenpeng Zhao

Partly based on the ECE662 Spring 2014 lecture material of Prof. Mireille Boutin.




1. Motivation

  • The MLE estimate most likely converges to the true parameter as the number of training samples increases.
  • Simpler than alternative methods such as Bayesian techniques.
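The convergence claim in the first bullet can be illustrated with a minimal sketch (not part of the original slecture; the Gaussian parameters below are hypothetical choices for illustration). The MLE of a Gaussian's mean is the sample average, and the MLE of its standard deviation uses the divisor N; both approach the true values as N grows:

```python
import math
import random

# Hypothetical true parameters, chosen only for this demonstration.
random.seed(0)
true_mu, true_sigma = 2.0, 3.0

estimates = {}
for n in (10, 100, 10000):
    xs = [random.gauss(true_mu, true_sigma) for _ in range(n)]
    mu_hat = sum(xs) / n                                            # MLE of the mean
    sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in xs) / n)   # MLE of sigma (divisor N)
    estimates[n] = (mu_hat, sigma_hat)
    print(n, round(mu_hat, 3), round(sigma_hat, 3))
```

For small n the estimates wander noticeably; by n = 10000 they sit close to the true values, which is the consistency property the bullet refers to.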



2. Statistical Density Theory Context

  • Given c classes + some knowledge about features $ x \in \mathbb{R}^n $ (or some other space)
  • Given training data, $ x_j\sim\rho(x)=\sum\limits_{i=1}^c\rho(x|w_i) Prob(w_i) $, where the true class $ w_{ij} $ of each $ x_j $ is known, $ \forall j=1,...,N $ (N hopefully large enough)
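Given such training data, the topic of this slecture — the expected value of the MLE estimate of the variance/standard deviation — can be checked numerically. The sketch below (an illustration added here, not from the original page, with hypothetical parameter choices) uses Monte Carlo to show the well-known result that for N i.i.d. Gaussian samples the MLE variance estimate satisfies E[σ̂²] = ((N−1)/N)·σ², i.e. it is biased low:

```python
import random

random.seed(1)
true_sigma2 = 4.0    # hypothetical true variance (sigma = 2)
N = 5                # small sample size makes the bias visible
trials = 100000      # Monte Carlo repetitions

acc = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 2.0) for _ in range(N)]
    mu_hat = sum(xs) / N
    acc += sum((x - mu_hat) ** 2 for x in xs) / N   # MLE of the variance (divisor N)

mc_mean = acc / trials
expected = (N - 1) / N * true_sigma2                # theoretical E[sigma_hat^2] = 3.2
print(mc_mean, expected)
```

The Monte Carlo average lands near 3.2 rather than the true variance 4.0, matching the (N−1)/N shrinkage; this bias vanishes as N grows, consistent with the convergence point in Section 1.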


(Figures: Zhenpeng Selecture 1.png through Zhenpeng Selecture 5.png)




Questions and comments

If you have any questions, comments, etc. please post them on https://kiwi.ecn.purdue.edu/rhea/index.php/ECE662Selecture_ZHenpengMLE_Ques.


Back to ECE662, Spring 2014
