=Upper Bounds for Bayes Error: Questions and Comments=
<center><font size="4">
 
Comments for [[test|Upper Bounds for Bayes Error]]
</font>
  
A [https://www.projectrhea.org/learning/slectures.php slecture] by Jeehyun Choe
</center>
 
----
 
----
 
Please leave me a comment below if you have any questions, if you notice any errors, or if you would like to discuss a topic further.
 
----
Review by Dennis Lee.
  
=Questions and Comments=
This slecture is very well done, and it can be a good reference for the derivation of the Chernoff bound for normally distributed data. Jeehyun starts by introducing the upper bound for Bayes error. The upper bound relies on a lemma that bounds the min function. From the lemma we get a general expression for <math>\epsilon_\beta</math>. The tightest such bound, obtained by minimizing <math>\epsilon_\beta</math> over <math>\beta</math>, is the Chernoff bound; the Bhattacharyya bound is the special case <math>\beta = 1/2</math>.
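For readers who want the key step written out, the lemma and the bound it yields can be stated compactly (this follows the standard textbook presentation, e.g. Duda, Hart and Stork; the notation in the slecture may differ slightly). For any <math>a, b \ge 0</math> and <math>0 \le \beta \le 1</math>,

:<math>\min(a,b) \;\le\; a^{\beta}\, b^{1-\beta},</math>

so the Bayes error satisfies

:<math>\epsilon = \int \min\left[\text{Prob}(\omega_1)\rho(x|\omega_1),\; \text{Prob}(\omega_2)\rho(x|\omega_2)\right] dx \;\le\; \text{Prob}(\omega_1)^{\beta}\,\text{Prob}(\omega_2)^{1-\beta} \int \rho(x|\omega_1)^{\beta}\,\rho(x|\omega_2)^{1-\beta}\, dx = \epsilon_\beta.</math>

Minimizing <math>\epsilon_\beta</math> over <math>\beta \in [0,1]</math> gives the Chernoff bound, while fixing <math>\beta = 1/2</math> gives the (generally looser) Bhattacharyya bound.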
  
Next Jeehyun describes the Chernoff bound for normally distributed data. This section is very well done and provides detailed steps on how the bound is derived in this case. The first step is to write the Gaussian distribution <math>\rho(x | \omega_i)</math> as a function of the sufficient statistic <math>t(x)</math> and the parameters <math>\theta_i</math>. This expression for <math>\rho(x|\omega_i)</math> is then plugged into <math>\int_R \rho(x|\omega_p)^\beta \rho(x|\omega_q)^{1 - \beta} dx</math>. The end result is the Chernoff bound <math>\epsilon_\beta = \text{Prob}(\omega_1)^\beta \text{Prob}(\omega_2)^{1 - \beta} \exp[-k(\beta)]</math>, where <math>k(\beta)</math> is given in the slecture. Finally, Jeehyun plots the Chernoff bound for different distributions in which the gap between the means and the variances are varied. From the plots we see that the Chernoff bound is generally low when the gap between the two means is large.
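For reference, the closed form this calculation arrives at for Gaussian class conditionals <math>\rho(x|\omega_i) = N(\mu_i, \Sigma_i)</math> can be written as follows. This is the standard textbook result and should agree with the slecture's <math>k(\beta)</math> up to notation; note that texts differ on which class carries the exponent <math>\beta</math>, and the two conventions coincide at <math>\beta = 1/2</math>. With <math>e^{-k(\beta)} = \int \rho(x|\omega_1)^{\beta}\,\rho(x|\omega_2)^{1-\beta}\, dx</math>,

:<math>k(\beta) = \frac{\beta(1-\beta)}{2}\,(\mu_2-\mu_1)^{T}\left[(1-\beta)\Sigma_1 + \beta\Sigma_2\right]^{-1}(\mu_2-\mu_1) \;+\; \frac{1}{2}\ln\frac{\left|(1-\beta)\Sigma_1 + \beta\Sigma_2\right|}{|\Sigma_1|^{1-\beta}\,|\Sigma_2|^{\beta}}.</math>

When <math>\Sigma_1 = \Sigma_2</math> the log term vanishes and only the Mahalanobis-distance term remains, which is why the bound falls quickly as the gap between the means grows.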
 
I do have a few suggestions to help make this slecture even better:
* At the end of Section 2.2, it might help to explain more about the remark that the Chernoff bound can be used in the multi-category case.  Is it really valid to add up the error bounds from multiple two-category cases?
* In Eq. (3.10), it might help to remark that the integral equals 1 because the probability distribution integrates to 1.
* In Figs. 3.1 and 3.2, the Chernoff bound should be equal to the Bhattacharyya bound, rather than just being close as mentioned in Section 3.2 (see the numerical sketch below).
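To make the last point concrete, here is a minimal Python sketch (not taken from the slecture; the means, variances, and equal priors are illustrative choices) that evaluates <math>\epsilon_\beta</math> over a grid of <math>\beta</math> for two 1-D Gaussians and reports both the Chernoff bound (the minimum over <math>\beta</math>) and the Bhattacharyya bound (<math>\beta = 1/2</math>). With equal variances and equal priors the curve is symmetric about <math>\beta = 1/2</math>, so the two bounds coincide.

<pre>
import numpy as np

def chernoff_curve(m1, s1, m2, s2, p1=0.5, betas=None):
    """Evaluate eps_beta = p1^b * p2^(1-b) * exp(-k(b)) for N(m1, s1^2) vs N(m2, s2^2)."""
    if betas is None:
        betas = np.linspace(0.01, 0.99, 99)
    p2 = 1.0 - p1
    eps = []
    for b in betas:
        var = (1 - b) * s1**2 + b * s2**2              # (1-b)*Sigma_1 + b*Sigma_2 (scalar case)
        quad = 0.5 * b * (1 - b) * (m2 - m1)**2 / var  # Mahalanobis-type term of k(b)
        logdet = 0.5 * np.log(var / (s1**(2 * (1 - b)) * s2**(2 * b)))  # log-determinant term
        eps.append(p1**b * p2**(1 - b) * np.exp(-(quad + logdet)))
    return np.asarray(betas), np.asarray(eps)

# Illustrative parameters (NOT the slecture's figure settings): equal variances, equal priors.
betas, eps = chernoff_curve(m1=0.0, s1=1.0, m2=2.0, s2=1.0, p1=0.5)
i_min = int(np.argmin(eps))                  # Chernoff bound: minimum over beta
i_half = int(np.abs(betas - 0.5).argmin())   # Bhattacharyya bound: beta = 1/2
print("Chernoff bound      : %.4f at beta = %.2f" % (eps[i_min], betas[i_min]))
print("Bhattacharyya bound : %.4f at beta = %.2f" % (eps[i_half], betas[i_half]))
</pre>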
----
*Write comment/question here
**answer here
----
[[ test |Back to Upper Bounds for Bayes Error]]
