'''Upper Bounds for Bayes Error'''

A slecture by Jeehyun Choe

(partially based on Prof. Mireille Boutin's ECE 662 lecture)
 
'''1. Introduction'''

This slecture describes theoretical upper bounds for the Bayes error. First, in chapter 2, the error bound is expressed in terms of the Bayes classifier. This expression contains a ''min'' function; by means of a lemma, the ''min'' function can be replaced with an expression that yields the theoretical error bound. In chapter 3, we derive the Chernoff bound for normally distributed data, along with the corresponding Chernoff distance. In section 3.2, examples of the Chernoff bound are provided.
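The lemma in question is presumably the standard inequality <math>\min(a,b) \le a^{\beta} b^{1-\beta}</math> for <math>a, b > 0</math> and <math>0 \le \beta \le 1</math>. A quick numerical sanity check (illustrative values only, not from the course notes):

```python
# Numerical sanity check (illustrative, not from the course notes) of the
# lemma  min(a, b) <= a**beta * b**(1 - beta)  for a, b > 0 and 0 <= beta <= 1.

def min_bound(a, b, beta):
    """Right-hand side of the lemma: an upper bound on min(a, b)."""
    return a ** beta * b ** (1 - beta)

# Check the inequality on a small grid of positive values and exponents.
for a in (0.1, 0.5, 2.0):
    for b in (0.3, 1.0, 4.0):
        for beta in (0.0, 0.25, 0.5, 0.75, 1.0):
            assert min(a, b) <= min_bound(a, b, beta) + 1e-12
print("lemma verified on the grid")
```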
  
 
The materials given in this lecture are based on the lecture notes and the discussions that were shared in Prof. Boutin's ECE662 Spring 2014 course at Purdue University. Examples were designed to reflect the theories taught in the class.
 
'''2. Upper Bounds for Bayes Error'''
  
 
'''2.1 Classifier using Bayes rule'''

To classify a dataset of feature vectors with ''C'' labels, we choose the class <math>\omega_{i}</math> that maximizes the posterior probability:
  
<math>\omega_{i} = \arg \max_{\omega_{j} \in \big\{\omega_{1}, \cdots ,\omega_{c}\big\}} \text{Prob} \big( \omega_{j} \mid x \big). </math>
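This decision rule can be sketched in code. By Bayes' rule, maximizing <math>\text{Prob}(\omega_i \mid x)</math> is equivalent to maximizing <math>p(x \mid \omega_i)\,\text{Prob}(\omega_i)</math>, since the evidence <math>p(x)</math> is common to all classes. The 1-D Gaussian class densities below are an illustrative assumption, not part of the slecture:

```python
import math

# Minimal sketch of the Bayes (MAP) decision rule: pick the class omega_i
# maximizing Prob(omega_i | x), i.e. arg max_i p(x | omega_i) * Prob(omega_i).
# The 1-D Gaussian class-conditional densities are an illustrative assumption.

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def map_classify(x, priors, params):
    """Return the index i maximizing p(x | omega_i) * Prob(omega_i)."""
    scores = [p * gaussian_pdf(x, mu, s) for p, (mu, s) in zip(priors, params)]
    return max(range(len(scores)), key=scores.__getitem__)

# Two classes: omega_0 ~ N(0, 1), omega_1 ~ N(3, 1), equal priors.
priors = [0.5, 0.5]
params = [(0.0, 1.0), (3.0, 1.0)]
print(map_classify(0.2, priors, params))  # point near mu_0 -> class 0
print(map_classify(2.9, priors, params))  # point near mu_1 -> class 1
```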
  
 
'''2.2 Upper Bounds for Bayes Error'''
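For two classes, the Bayes error is the integral of the ''min'' of the weighted class densities, and applying the lemma <math>\min(a,b) \le a^{\beta} b^{1-\beta}</math> replaces that ''min'' with an integrable product. This is the standard two-class statement of the bound, consistent with the outline in the introduction (not quoted verbatim from the course notes):

<math>\epsilon = \int \min \big( \text{Prob}(\omega_1)\, \rho(x \mid \omega_1),\; \text{Prob}(\omega_2)\, \rho(x \mid \omega_2) \big)\, dx \;\le\; \text{Prob}(\omega_1)^{\beta}\, \text{Prob}(\omega_2)^{1-\beta} \int \rho(x \mid \omega_1)^{\beta}\, \rho(x \mid \omega_2)^{1-\beta}\, dx, \quad 0 \le \beta \le 1.</math>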

'''3. Chernoff Bound for Normally Distributed Data'''
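The Chernoff bound for two Gaussian class densities <math>N(\mu_1, \Sigma_1)</math> and <math>N(\mu_2, \Sigma_2)</math> has a standard closed form, which a short script can evaluate numerically. The means, covariances, and priors below are illustrative assumptions, not values from the course:

```python
import numpy as np

# Sketch of the standard closed-form Chernoff bound for two Gaussian classes:
#   P(error) <= P1**beta * P2**(1-beta) * exp(-k(beta)),  0 <= beta <= 1,
# where, with Sigma_b = beta*Sigma1 + (1-beta)*Sigma2,
#   k(beta) = beta*(1-beta)/2 * (mu2-mu1)^T Sigma_b^{-1} (mu2-mu1)
#           + 1/2 * ln( det(Sigma_b) / (det(Sigma1)**beta * det(Sigma2)**(1-beta)) ).
# All example parameters below are illustrative assumptions.

def chernoff_k(beta, mu1, mu2, S1, S2):
    """Chernoff distance k(beta) between N(mu1, S1) and N(mu2, S2)."""
    d = mu2 - mu1
    Sb = beta * S1 + (1 - beta) * S2
    quad = beta * (1 - beta) / 2 * (d @ np.linalg.solve(Sb, d))
    logdet = 0.5 * (np.log(np.linalg.det(Sb))
                    - beta * np.log(np.linalg.det(S1))
                    - (1 - beta) * np.log(np.linalg.det(S2)))
    return quad + logdet

def chernoff_bound(beta, P1, P2, mu1, mu2, S1, S2):
    """Upper bound on the Bayes error for the given beta."""
    return P1 ** beta * P2 ** (1 - beta) * np.exp(-chernoff_k(beta, mu1, mu2, S1, S2))

mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
S1 = S2 = np.eye(2)
# Minimize the bound over a grid of beta (beta = 1/2 gives the Bhattacharyya bound).
betas = np.linspace(0.01, 0.99, 99)
bounds = [chernoff_bound(b, 0.5, 0.5, mu1, mu2, S1, S2) for b in betas]
print("tightest bound:", min(bounds))
```

With equal covariances the log-determinant term vanishes and the bound is tightest at <math>\beta = 1/2</math>, where it reduces to the Bhattacharyya bound.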

'''3.1'''

'''3.2'''

'''Summary'''


'''Reference'''
