Hi! I'm Josiah Yoder, and I'm a big fan of Kiwis... and wikis.
My webpage (http://web.ics.purdue.edu/~yoder2) is a little out of date, but you can visit it anyway!
TODO
There are several articles I would like to write on the Kiwi when I get the time. If you would like to write them instead, please go for it, and let me know!
- Lower bound on performance of Bayes Classification_OldKiwi is $ \frac{1}{2} $ when the number of classes is 2
- Ideal performance of Bayes Classification_OldKiwi when the two classes are Gaussian with the same covariance matrix and equal prior probabilities can be computed exactly, even when there is correlation between the dimensions (a small sketch of this appears after the list)
- Amount of training data needed_OldKiwi as a function of dimensions, covariance, etc.
- Naive Bayes_OldKiwi -- What it is and why everyone should know about it (a minimal sketch appears after the list).
- Classification of data not in the Reals_OldKiwi ($ \mathbb{R}^n $), such as text documents and graphs
- Fischer's Linear Discriminant_OldKiwi -- Why it is ideal in the case of equal-variance Gaussians, with a derivation that is less heuristic than the traditional development (see the last sketch after this list).
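
A quick illustration of the two-Gaussian item above, not a substitute for the article: when the two classes share a covariance matrix and have equal priors, the Bayes error has the closed form $ \Phi(-\Delta/2) $, where $ \Delta $ is the Mahalanobis distance between the class means. The Python sketch below (the means, covariance, and sample counts are made-up values chosen purely for illustration) checks that formula against a Monte Carlo estimate of the optimal linear rule.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical example: two Gaussian classes with equal priors and a shared,
# correlated covariance matrix (all numbers are made up for illustration).
mu0 = np.array([0.0, 0.0])
mu1 = np.array([1.5, 1.0])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])   # off-diagonal entries = correlation between dimensions

# Closed-form Bayes error: Phi(-Delta/2), with Delta the Mahalanobis
# distance between the two class means.
diff = mu1 - mu0
delta = np.sqrt(diff @ np.linalg.solve(Sigma, diff))
bayes_error = norm.cdf(-delta / 2.0)

# Monte Carlo check using the optimal rule: decide class 1 when w.x > threshold,
# with w = Sigma^{-1} (mu1 - mu0) and the threshold halfway between the means.
rng = np.random.default_rng(0)
n = 200_000
x0 = rng.multivariate_normal(mu0, Sigma, n)
x1 = rng.multivariate_normal(mu1, Sigma, n)
w = np.linalg.solve(Sigma, diff)
thresh = w @ (mu0 + mu1) / 2.0
mc_error = 0.5 * (np.mean(x0 @ w > thresh) + np.mean(x1 @ w <= thresh))

print(f"closed form: {bayes_error:.4f}   Monte Carlo: {mc_error:.4f}")
```

Setting the two means equal gives $ \Delta = 0 $ and an error of exactly 1/2, the worst case for two equiprobable classes, which is where the 1/2 lower bound on performance in the first item comes from.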
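
For the Naive Bayes item, a minimal sketch of the idea rather than of any particular article: the "naive" assumption is that the features are conditionally independent given the class, so the class-conditional density factors into one-dimensional densities; for Gaussian features that means keeping only per-dimension means and variances. The function names below are placeholders, not an established API.

```python
import numpy as np

def fit_gaussian_naive_bayes(X, y):
    """Estimate per-class priors and per-dimension means/variances (the independence assumption)."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),        # prior P(c)
                     Xc.mean(axis=0),         # per-dimension means
                     Xc.var(axis=0) + 1e-9)   # per-dimension variances, lightly smoothed
    return params

def predict(params, x):
    """Pick the class with the largest log posterior under the factored Gaussian model."""
    def log_posterior(c):
        prior, mu, var = params[c]
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        return np.log(prior) + log_lik
    return max(params, key=log_posterior)
```

Even when the independence assumption is clearly violated, as with the correlated covariance above, this kind of model often classifies surprisingly well, which is presumably a big part of the "why everyone should know about it".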
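
For the Fischer's Linear Discriminant item, one way to make the "ideal for equal-variance Gaussians" claim concrete (again with made-up parameters, reusing the setup from the first sketch): the Fisher direction $ S_W^{-1}(m_1 - m_0) $ estimated from samples agrees, up to scale, with the Bayes-optimal direction $ \Sigma^{-1}(\mu_1 - \mu_0) $ for equal-covariance Gaussian classes.

```python
import numpy as np

rng = np.random.default_rng(1)
mu0, mu1 = np.array([0.0, 0.0]), np.array([1.5, 1.0])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])   # shared, correlated covariance (made-up values)

x0 = rng.multivariate_normal(mu0, Sigma, 5000)
x1 = rng.multivariate_normal(mu1, Sigma, 5000)

# Fisher's linear discriminant: w proportional to S_W^{-1} (m1 - m0), where the
# within-class scatter S_W is estimated here (up to a constant factor) by the
# sum of the two class sample covariances.
m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
Sw = np.cov(x0, rowvar=False) + np.cov(x1, rowvar=False)
w_fld = np.linalg.solve(Sw, m1 - m0)

# Bayes-optimal projection direction for equal-covariance Gaussians.
w_bayes = np.linalg.solve(Sigma, mu1 - mu0)

# Up to scale the two directions coincide: cosine similarity close to 1.
cos = w_fld @ w_bayes / (np.linalg.norm(w_fld) * np.linalg.norm(w_bayes))
print(f"cosine similarity between FLD and Bayes directions: {cos:.4f}")
```
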
And there's always copying stuff over from the old kiwi!