Revision as of 09:20, 6 May 2014
Questions and Comments for: Maximum Likelihood Estimators and Examples
A slecture by Lu Zhang
Let me know if you have any questions or comments
Questions and Comments
[Xing Liu: Comments] This lecture is very well written. It starts with a detailed introduction to frequentist estimation and the MLE. It explains the basic idea of frequentist estimation, how to calculate the bias and variance of an estimator, and some properties of estimators used for evaluation purposes. Then it focuses on the ML estimator by introducing its formula and properties. In the next section, examples are given of MLE estimation for Gaussian distributions, and different cases are considered. The lecture ends with pertinent comments and suggestions on the selection of estimation methods.
The lecture is especially good in its detailed discussion of the ML estimator, which provides a supplement for those who are interested in learning more about MLE. It also provides a precise and explicit derivation in the second section for the ML estimators in the Gaussian case. The structure is very clear, and the author is knowledgeable in frequentist estimation. It might be more concise if Examples 1 and 2 were combined, since knowledge of the variance doesn't affect the estimation of the mean. Following this, the bias of both estimators could be shown (or both examples could be folded into the general p-dimensional case). In the discussion section, it would be even more complete if the connection to classification were mentioned: for our purpose of classification, the choice between a biased and an unbiased estimator, or between different model (distribution) assumptions, would depend on which estimate leads to the best classification performance.
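The point about showing the bias of both estimators can be illustrated numerically. The following sketch (my own illustration, not from the slecture itself) simulates repeated small Gaussian samples and averages the estimates: the ML estimate of the mean comes out unbiased, while the ML estimate of the variance (dividing by n) underestimates the true variance by roughly the factor (n-1)/n.

```python
import numpy as np

# Simulate many samples of size n from N(mu, sigma2) and compare the
# average of the ML mean/variance estimates to the true parameters.
rng = np.random.default_rng(0)
mu, sigma2, n, trials = 0.0, 4.0, 5, 200_000

samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
mean_mle = samples.mean(axis=1)             # ML estimate of the mean
var_mle = samples.var(axis=1, ddof=0)       # ML estimate of the variance (divides by n)
var_unbiased = samples.var(axis=1, ddof=1)  # sample variance (divides by n-1)

print(mean_mle.mean())       # close to mu = 0 (unbiased)
print(var_mle.mean())        # close to (n-1)/n * sigma2 = 3.2 (biased low)
print(var_unbiased.mean())   # close to sigma2 = 4.0 (unbiased)
```

With n as small as 5 the shrinkage factor (n-1)/n = 0.8 is clearly visible; as n grows the bias vanishes, which matches the usual observation that the ML variance estimator is only asymptotically unbiased.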