
Revision as of 09:15, 6 May 2014

Questions and Comments for: Maximum Likelihood Estimators and Examples

A slecture by Lu Zhang


Let me know if you have any questions or comments


Questions and Comments

[Xing Liu: Comments] This lecture is very well written. It starts with a detailed introduction to frequentist estimation and the MLE. It explains the basic idea of frequentist estimation, how to calculate the bias and variance of an estimator, and some properties used to evaluate estimators. It then focuses on the ML estimator, introducing its formula and properties. In the next section, examples of ML estimation for Gaussian distributions are given, covering several different cases. The lecture ends with pertinent comments and suggestions on the selection of estimation methods. 

The lecture is especially good in its detailed discussion of the ML estimator, which provides a supplement for those interested in learning more about MLE. It also provides a precise and explicit derivation of the ML estimators in the Gaussian case in the second section. The structure is very clear. It might be more concise if Examples 1 and 2 were combined, since knowledge of the variance doesn't affect the estimation of the mean. Following this, the bias of both estimators could be shown (or be combined into the general p-dimensional case). The last section would be even more complete if the connection to classification were mentioned. Both the choice between a biased and an unbiased estimator and the decision of which model (distribution) is a good assumption depend on the same criterion: an estimate is most desirable if it leads to the best classification performance. 
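The bias of the two variance estimators mentioned above can be checked numerically. As a small illustrative sketch (not part of the lecture itself), the following Python/NumPy simulation compares the ML variance estimator (dividing by n) with the unbiased sample variance (dividing by n - 1) for Gaussian data; the sample size n = 5 and true variance 1 are arbitrary choices for illustration.

```python
import numpy as np

# Monte Carlo check of the bias of the ML variance estimator.
# Illustrative parameters (not from the lecture):
rng = np.random.default_rng(0)
n = 5             # samples per trial
trials = 200_000  # number of repeated experiments
sigma2 = 1.0      # true variance

x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
centered = x - x.mean(axis=1, keepdims=True)
ss = (centered ** 2).sum(axis=1)   # sum of squared deviations per trial

var_mle = ss / n        # ML estimator: E[var_mle] = (n-1)/n * sigma2, biased low
var_unb = ss / (n - 1)  # unbiased sample variance: E[var_unb] = sigma2

print(var_mle.mean())   # close to (n-1)/n = 0.8 for n = 5
print(var_unb.mean())   # close to 1.0
```

For large n the factor (n-1)/n approaches 1, so the bias matters mainly for small samples, which is exactly where combining the examples as suggested would make the comparison easiest to see.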





Back to Maximum Likelihood Estimators and Examples
