Revision as of 09:12, 6 May 2014
Questions and Comments for: Maximum Likelihood Estimators and Examples
A slecture by Lu Zhang
Let me know if you have any questions or comments.
Questions and Comments
[Xing Liu: Comments] This lecture is very well written. It starts with a detailed introduction to frequentist estimation and the MLE, explains the basic idea of frequentist estimation, how to calculate the bias and variance of an estimator, and some properties of estimators used for evaluation. It then focuses on the ML estimator, introducing its formula and properties. In the next section, examples are given of ML estimation for Gaussian distributions, and several different cases are considered. The lecture ends with pertinent comments and suggestions on the selection of estimation methods.
The lecture is especially good in its detailed discussion of the ML estimator, which provides a supplement for those who are interested in learning more about MLE. It also provides a precise and explicit derivation of the ML estimators in the Gaussian case in the second section. It might be more concise if examples 1 and 2 were combined, since knowledge of the variance doesn't affect the estimation of the mean. Following this, the bias of both estimators could be shown (or be combined into the last, general p-dimensional case). In the last section, it would be even more complete if the connection to classification were mentioned. Both the choice between a biased and an unbiased estimator and the decision of which model (distribution) is a good assumption depend on the same criterion: the estimate is most desirable if it leads to the best classification performance.
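To make the bias point above concrete, here is a minimal numerical sketch (my own addition, not from the slecture) of the Gaussian ML estimators: the sample mean is unbiased, while the ML variance estimator, which divides by N, underestimates the true variance by a factor of (N-1)/N. It assumes NumPy is available; the particular parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma2_true = 2.0, 4.0   # hypothetical Gaussian parameters
n, trials = 10, 200000            # small sample size N, many repetitions

# Each row is one i.i.d. sample of size N from the Gaussian
samples = rng.normal(mu_true, np.sqrt(sigma2_true), size=(trials, n))

# ML estimators for a Gaussian: the sample mean and the 1/N sample variance
mu_hat = samples.mean(axis=1)
var_ml = samples.var(axis=1, ddof=0)        # divides by N   -> biased
var_unbiased = samples.var(axis=1, ddof=1)  # divides by N-1 -> unbiased

# Averaging over many repetitions approximates the expected value:
# E[mu_hat] = mu, E[var_ml] = (N-1)/N * sigma^2, E[var_unbiased] = sigma^2
print(mu_hat.mean())        # ~ 2.0
print(var_ml.mean())        # ~ 0.9 * 4.0 = 3.6, showing the bias
print(var_unbiased.mean())  # ~ 4.0
```

With N = 10 the bias factor (N-1)/N = 0.9 is clearly visible; as N grows the ML variance estimator becomes asymptotically unbiased, consistent with the properties discussed in the slecture.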