Comments for Neyman-Pearson: How Bayes Decision Rule Controls Error

A slecture by ECE student Robert Ness



Please leave comments or questions below.


Shaobo Fang's Review

Possibly a minor typo in the first line: 魔钟 --> 某种? Corrected.

What's Impressive:

This slecture is simply impressive. The author focuses on hypothesis testing and introduces the likelihood ratio test (LRT), the Bayesian test, and the Neyman-Pearson approach (UMP: uniformly most powerful test).

Many concepts are accurately described in Chinese, which is impressive. Even though I have taken the STAT course, I did not know the corresponding Chinese names for concepts such as the LRT, so this will help me a great deal when I read Chinese texts on this topic in the future.

At the very end of the text, the author also introduces a very interesting example, which helps the reader understand the concepts better.

What's Covered:

The author covered all of the important concepts in hypothesis testing very well. For the LRT, I think it would be more accurate to write the statistic as:

$ \lambda(x) = \frac{\sup_{w \in w_0} L(x \mid w)}{\sup_{w} L(x \mid w)} $
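For reference, in the standard Casella-style notation (the cutoff c below is a generic constant, used here only for illustration), the associated rejection rule is

$ R = \{ x : \lambda(x) \le c \}, \quad 0 \le c \le 1 $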

Although it is already great, it would be even better if the author could also briefly explain the power function and the size of the LRT.
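As a rough sketch of what that addition might cover (standard definitions, stated with the rejection region R above; the symbols β and α are used here only for illustration): the power function gives the probability of rejecting H0 as a function of the parameter, and the size is its supremum over the null parameter set,

$ \beta(w) = P_w(X \in R), \quad \alpha = \sup_{w \in w_0} \beta(w) $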

Tian's reply

Interesting topic. It is very impressive that the author could compose such a good tutorial in Chinese. The tutorial contains both theory and an application, which really leads the reader to understand and gain intuition for how it works.

It might be even more interesting to discuss a varying threshold k, essentially ROC curves, and how they could be developed further from this topic; a brief sketch of that idea follows.
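As a rough illustration of that suggestion (purely a sketch, not part of the slecture: the two Gaussian hypotheses, sample sizes, and threshold grid below are all assumptions), one could sweep the likelihood-ratio threshold k and trace the resulting ROC curve in Python:

import numpy as np
import matplotlib.pyplot as plt

# Illustrative setup (assumed): H0: X ~ N(0, 1) vs H1: X ~ N(1, 1)
rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, 10000)  # samples drawn under H0
x1 = rng.normal(1.0, 1.0, 10000)  # samples drawn under H1

def likelihood_ratio(x, mu0=0.0, mu1=1.0, sigma=1.0):
    # f(x | H1) / f(x | H0) for two equal-variance Gaussians
    return np.exp((x * (mu1 - mu0) - (mu1**2 - mu0**2) / 2.0) / sigma**2)

lr0 = likelihood_ratio(x0)
lr1 = likelihood_ratio(x1)

# Each threshold k gives one (size, power) point on the ROC curve.
ks = np.logspace(-3, 3, 200)
fpr = [(lr0 > k).mean() for k in ks]  # size: P(reject H0 | H0 true)
tpr = [(lr1 > k).mean() for k in ks]  # power: P(reject H0 | H1 true)

plt.plot(fpr, tpr)
plt.xlabel("false positive rate (size)")
plt.ylabel("true positive rate (power)")
plt.title("ROC curve traced by varying the threshold k")
plt.show()

Each point on the curve corresponds to one choice of k, so the curve summarizes the whole family of Neyman-Pearson tests between the two hypotheses.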

Author's reply

Typo corrected, thanks.




Back to Ness slecture 2014
