Questions and Comments for: Curse of Dimensionality
A slecture by Haonan Yu
Please leave me a comment below if you have any questions, notice any errors, or would like to discuss a topic further.
Questions and Comments
- [Soonam Lee Review 1:] This slecture explains the curse of dimensionality through an accessible metaphor. Suppose the goal is to find a needle in an N-dimensional haystack. If the haystack lies in 1-D, a simple search along a single axis (basis direction) is enough. However, as the number of dimensions increases, the number of basis directions increases with it, and the search becomes exponentially more expensive. The author illustrates this concept with various pictures. Moreover, the slecture describes the sparsity of samples that results from increasing dimension: by showing Gaussian reconstructions using different numbers of samples, the author conveys the idea effectively. Lastly, three ways to break the curse of dimensionality are introduced: feature extraction, dimensionality reduction, and kernel methods. Feature extraction selects and extracts meaningful features from the given redundant information. Dimensionality reduction, on the other hand, projects the data from a high-dimensional space onto a low-dimensional one; here, choosing the projection axes is the crucial task. PCA uses the axes of largest variance as the projection axes, whereas LDA chooses the axes that best separate the classes. Apart from these two methods, kernel methods map the data to a much higher-dimensional space, although the data must then be represented in that high dimension.
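
To make the review's PCA/LDA contrast and the exponential-search point concrete, here is a minimal sketch; it is not taken from the slecture itself, and the synthetic data, seed, and parameter choices are all illustrative assumptions. It uses scikit-learn to build two elongated Gaussian classes whose direction of largest overall variance is orthogonal to the direction that separates them, so PCA's 1-D projection mixes the classes while LDA's keeps them apart.

    # Minimal illustrative sketch (not from the slecture itself).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Exponential search cost: with k cells per axis, a d-dimensional
    # "haystack" has k**d cells to examine exhaustively.
    k = 10
    for d in (1, 2, 3, 10):
        print(f"d={d:>2}: {k}^{d} = {k**d} cells")

    # Synthetic two-class data: most variance lies along the x-axis,
    # but the classes are separated along the y-axis.
    rng = np.random.default_rng(0)
    n = 200
    cov = [[10.0, 0.0], [0.0, 0.5]]
    X0 = rng.multivariate_normal([0.0, -2.0], cov, n)  # class 0
    X1 = rng.multivariate_normal([0.0, 2.0], cov, n)   # class 1
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)

    # PCA picks the axis of largest total variance (roughly the x-axis),
    # ignoring the class labels entirely.
    pca = PCA(n_components=1).fit(X)
    print("PCA axis:", pca.components_[0])

    # LDA picks the axis that best separates the classes (roughly the y-axis).
    lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
    print("LDA axis:", lda.scalings_[:, 0] / np.linalg.norm(lda.scalings_[:, 0]))

    # Compare class-mean separation after each 1-D projection:
    # LDA keeps the classes apart; PCA collapses them together.
    for name, Z in (("PCA", pca.transform(X)), ("LDA", lda.transform(X))):
        gap = abs(Z[y == 0].mean() - Z[y == 1].mean()) / Z.std()
        print(f"{name}: normalized class-mean gap = {gap:.2f}")

Running the sketch should report a PCA axis close to (±1, 0) and an LDA axis close to (0, ±1), with a much larger normalized class gap for LDA, which is exactly the difference in projection criteria the review describes.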
Back to Curse of Dimensionality