Comments for: Generation of normally distributed random numbers from two categories with different priors
A slecture by Jonghoon Jin
Please leave me a comment below if you have any questions, if you notice any errors, or if you would like to discuss a topic further.
Hyun Dok Cho Review
- summary: The author explains how to generate data from two classes, each drawn from a normal distribution with a different prior probability, from the perspective of numerical experiments. The slecture first introduces the importance of prior selection and how to carry it out. It then explains several techniques for generating normally distributed samples: the Central Limit Theorem, Inverse Transform Sampling, the Box-Muller Transform, and the Ziggurat Algorithm. Finally, the author explains how to convert samples from the standard normal distribution into another desired distribution.
- strengths: This slecture is well organized and has enough detail, yet it is easy to understand which factors are important and what theoretical background is involved when random data are generated. In addition, the author compares the pros and cons of each method, which is very helpful for readers, not only to understand the methods but also to think about the specific applications or situations in which each can be used. For example, the Ziggurat Algorithm is compared to the Box-Muller Transform: the Ziggurat Algorithm is more efficient than the latter, even for large sample sizes, while its downside is a more complex algorithm; however, that complexity comes from the exponential evaluation, which is only needed in special cases.
- suggestions: Even though the author did a great job of explaining the techniques, they would be even clearer if each technique were illustrated with an example; a minimal sketch of one such example appears after this list. However, I think visualizing examples for every technique would be another big task, and the current material is enough to understand the theoretical background of generating random data.
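As one concrete illustration of the kind of example suggested above (a reviewer-added sketch, not part of the original slecture): the snippet below first draws class labels according to two priors and then uses the Box-Muller Transform to turn uniform samples into normal samples for each class. It assumes Python with NumPy; the function names and the chosen priors, means, and standard deviations are illustrative only.

<pre>
import numpy as np

def box_muller(n, rng):
    # Box-Muller transform: two independent uniforms -> standard-normal samples
    u1 = 1.0 - rng.random(n)          # shift to (0, 1] so log(u1) is finite
    u2 = rng.random(n)
    return np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)

def sample_two_classes(n, prior0=0.7, mu=(0.0, 3.0), sigma=(1.0, 1.0), seed=0):
    # Draw n points from two 1-D normal classes with priors (prior0, 1 - prior0)
    rng = np.random.default_rng(seed)
    labels = (rng.random(n) >= prior0).astype(int)    # label 0 with probability prior0
    z = box_muller(n, rng)                            # standard-normal draws
    x = np.where(labels == 0,
                 mu[0] + sigma[0] * z,                # shift/scale for class 0
                 mu[1] + sigma[1] * z)                # shift/scale for class 1
    return x, labels

x, labels = sample_two_classes(1000)
print(labels.mean())   # empirical prior of class 1, close to 0.3 here
</pre>

With 1000 samples and a class-0 prior of 0.7, the printed empirical fraction of class-1 labels should come out close to 0.3, which is how the slecture's point about prior selection can be checked numerically.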
Write Question/Comment Here
Write Question/Comment Here
Back to Generation of normally distributed random numbers from two categories with different priors