What is your favorite decision method?

Student poll for ECE662, Spring 2010.

  • Coin flipping. ostava
    • Interesting. What is the expected rate of error for this method? -pm
    • I would think the expected error would be .5. Assume if heads decide class 1, if tails decide class 2. So P(error) = P(Heads)P(Class 2) + P(Tails)P(Class 1). I'll assume you have a fair coin so P(Heads) = P(Tails) = .5. Also, if there are only two classes, P(Class 2) + P(Class 1) = 1. Thus from the above formula, P(error) = .5(P(Class1) + P(Class2)) = .5 (the first sketch after this list checks this thread numerically). -ATH
    • Actually, a loaded coin might be better! Looking at the relative frequency of the training data points, one can estimate the priors and bias the coin accordingly. -Satyam.
    • Actually, not flipping a coin (or equivalently flipping a coin that is so biased that it lands on one side with certainty) will be best! Biasing the coin to match the priors is better than flipping a fair coin, but will still give an expected error rate greater than or equal to the expected error rate of always choosing the class with the higher prior. Bayes rule is optimal. - jvaught
    • Good point Jim! Somehow, I get a feeling that stating coin flipping was intended to be humorous. Of course, we (engineers) have steered it in a different direction! Ondrej, comment? -Satyam.
    • Well, it wasn't really meant seriously but it is true that I've used this method quite a lot of times in my life. But it is usually somehow difficult to measure error in these real life situations .. sometimes I think the error rate actually converges to 1 :) -ostava
  • Nearest neighbors. It reminds me of human behavior in that if we don't know what to do in certain situations (social ones in particular), we'll look at those around us to decide what to do. -ATH
  • Kernel methods in general (SVM, KDE, KPCA, etc.) since we can handle non-linearly separable data more easily. I also feel that clustering techniques are very useful in my research area. --ilaguna
  • Nearest neighbor. From a practical point of view, it is easy to implement and quite fast (and, surprisingly, not too bad in terms of errors). -Satyam.
  • Decision trees. Linear discriminants are not expressive enough for practical data, but nonlinear are unwieldy and more prone to misbehave computationally. Decision trees give the expressiveness of nonlinear discriminants with the efficiency of linear discriminants. Humans implicitly use this approach in the games "20 Questions" and "Guess Who?" and field identification guides are organized this way. -jvaught
  • Decision Trees with AdaBoost. Boosting raises the performance of decision trees significantly, and the method has proven consistently reliable across many applications. Besides, recent theoretical research on the consistency of AdaBoost shows promising applications of boosting methods. (A small boosting sketch appears after this list.) -yuanl
  • k-Nearest neighbors. It's easy to understand and implement, and the error rate is reasonable. In my opinion other methods are more complicated, but there is not much difference in the final result. So I vote for the easy method! (A minimal k-NN sketch appears after this list.) --Gmoayeri 20:26, 9 May 2010 (UTC)
  • Particle Filter. A simple idea of Monte Carlo estimation evolves into a great estimation method. The more I learn about particle filters for object tracking, the more impressed I am by the power of random sampling and Bayesian estimation. (A bootstrap particle-filter sketch appears after this list.) --kyuseo
  • Kohonen's Self-Organizing Feature Maps (SOMs). This method combines two excellent features: the ability to reduce the dimension of the feature space drastically, and to build a topological relationship between the training samples, which increases the classification rate for new samples. I tested this method on different problem domains and the results were excellent, provided there were enough samples for training. (A small SOM sketch appears after this list.) --Ralazrai
  • Decision Trees - simple to understand, easy (well, easy-ish) to implement and powerful. Pritchey 20:39, 2 May 2010 (UTC)
  • KNN for simple decisions or when development time is an issue (i.e. laziness!). ANN for when you need to make sense out of a complex scenario... or if you want to explain to people that it basically "runs on magic." --mreeder
  • I prefer the k-nearest neighbor (k-NN) classifier because it is intuitive and easy to explain. There is a tradeoff between the easy implementation and lack of training on one hand, and the computational cost of searching for the k nearest neighbors on the other, especially in high-dimensional feature spaces. I think a straightforward approach like k-NN is preferable because of its clarity, even if it is slow. --han84
  • My preference is KNN: easy to implement, with decent results (if you are patient enough to cross-validate for the optimal k). --sharmasr
  • The course has made me appreciate the benefits of probabilistic graphical models, as such techniques mix the Bayesian and frequentist approaches. Examples are Bayesian networks and Hidden Markov Models. In my research area (computer intrusion detection), available datasets are scarce and domain knowledge is very important, so techniques that allow mixing both approaches are valuable. Generally speaking, one can expect any ML method to be context-dependent; as mentioned in Duda's book, this is called the "no free lunch theorem". (A small HMM sketch appears after this list.) --Gmodeloh 12:12, 5 May 2010 (UTC)
  • write your opinion here. sign your name/nickname.
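
The coin-flipping thread above works out expected error rates for "classifiers" that ignore the feature vector. As a numerical check, here is a minimal Python sketch (not part of the original poll; the priors 0.7 and 0.3 are made-up numbers chosen only for illustration) comparing a fair coin, a coin biased to match the priors, and the Bayes rule of always choosing the more probable class:

  # Expected error of three decision rules that ignore the data.
  # The priors below are made-up numbers for illustration only.
  p1, p2 = 0.7, 0.3          # P(class 1), P(class 2); they must sum to 1

  def expected_error(q):
      """q = probability of deciding class 1; an error means deciding the wrong class."""
      return q * p2 + (1 - q) * p1

  print("fair coin:  ", expected_error(0.5))                     # 0.50, regardless of the priors
  print("biased coin:", expected_error(p1))                      # 2*p1*p2 = 0.42, better than 0.50
  print("Bayes rule: ", expected_error(1.0 if p1 >= p2 else 0.0))  # min(p1, p2) = 0.30, the optimum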
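
Several votes above go to k-NN. Below is a minimal sketch of the classifier itself, assuming Euclidean distance and a tiny made-up 2-D dataset; k is a free parameter that one would normally pick by cross-validation, as noted above.

  import numpy as np

  def knn_predict(X_train, y_train, x, k=3):
      """Classify a single point x by majority vote among its k nearest training points."""
      dists = np.linalg.norm(X_train - x, axis=1)      # Euclidean distance to every training point
      nearest = np.argsort(dists)[:k]                  # indices of the k closest points
      labels, counts = np.unique(y_train[nearest], return_counts=True)
      return labels[np.argmax(counts)]                 # majority label (ties broken arbitrarily)

  # Tiny made-up dataset: class 0 clustered near the origin, class 1 near (3, 3).
  X = np.array([[0.0, 0.2], [0.3, 0.1], [0.1, 0.4], [3.0, 3.1], [2.8, 3.3], [3.2, 2.9]])
  y = np.array([0, 0, 0, 1, 1, 1])

  print(knn_predict(X, y, np.array([0.2, 0.2])))  # -> 0
  print(knn_predict(X, y, np.array([2.9, 3.0])))  # -> 1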
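
For the AdaBoost vote, here is a rough sketch of discrete AdaBoost with decision stumps (depth-1 trees) on a made-up 1-D problem. It only illustrates the sample-reweighting idea behind boosting, not a production implementation; all data and the number of rounds are arbitrary.

  import numpy as np

  def adaboost_stumps(X, y, rounds=10):
      """Discrete AdaBoost with 1-D decision stumps; labels y must be in {-1, +1}."""
      n = len(X)
      w = np.full(n, 1.0 / n)                 # start with uniform sample weights
      ensemble = []                           # list of (threshold, polarity, alpha)
      for _ in range(rounds):
          best = None
          for thr in X:                       # candidate thresholds at the data points
              for pol in (+1, -1):
                  pred = pol * np.sign(X - thr + 1e-12)
                  err = np.sum(w[pred != y])  # weighted training error of this stump
                  if best is None or err < best[0]:
                      best = (err, thr, pol, pred)
          err, thr, pol, pred = best
          err = np.clip(err, 1e-10, 1 - 1e-10)
          alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
          w *= np.exp(-alpha * y * pred)          # boost the misclassified samples
          w /= w.sum()
          ensemble.append((thr, pol, alpha))
      return ensemble

  def adaboost_predict(ensemble, X):
      score = sum(alpha * pol * np.sign(X - thr + 1e-12) for thr, pol, alpha in ensemble)
      return np.sign(score)

  # Made-up 1-D data: negatives below zero, positives above.
  X = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])
  y = np.array([-1, -1, -1, -1, 1, 1, 1, 1])
  model = adaboost_stumps(X, y, rounds=5)
  print(adaboost_predict(model, X))   # should recover y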
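
For the particle filter vote, here is a minimal bootstrap particle filter for a made-up 1-D random-walk state observed in Gaussian noise; the noise levels and particle count are arbitrary illustration values, not from the original comment.

  import numpy as np

  rng = np.random.default_rng(0)

  # Made-up 1-D model: random-walk state, noisy direct observations.
  T, N = 50, 500                    # time steps, number of particles
  process_std, obs_std = 0.3, 0.5

  # Simulate a ground-truth trajectory and its observations.
  truth = np.cumsum(rng.normal(0, process_std, T))
  obs = truth + rng.normal(0, obs_std, T)

  particles = rng.normal(0, 1, N)   # initial particle cloud
  estimates = []
  for z in obs:
      # 1) Propagate each particle through the motion model.
      particles = particles + rng.normal(0, process_std, N)
      # 2) Weight particles by the likelihood of the observation.
      weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
      weights /= weights.sum()
      # 3) Estimate = weighted mean; then resample to avoid weight degeneracy.
      estimates.append(np.sum(weights * particles))
      particles = rng.choice(particles, size=N, replace=True, p=weights)

  print("mean absolute error:", np.mean(np.abs(np.array(estimates) - truth)))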
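
For the SOM vote, here is a small sketch of a one-dimensional self-organizing map trained on made-up 2-D data arranged on a ring; a full Kohonen map would typically use a 2-D grid of nodes and more careful learning-rate and neighborhood schedules than the simple linear decay assumed here.

  import numpy as np

  rng = np.random.default_rng(1)

  # Made-up 2-D samples around a unit circle; the SOM should order its nodes along it.
  angles = rng.uniform(0, 2 * np.pi, 500)
  data = np.column_stack([np.cos(angles), np.sin(angles)]) + rng.normal(0, 0.05, (500, 2))

  n_nodes, n_epochs = 10, 50
  weights = rng.normal(0, 0.1, (n_nodes, 2))              # codebook vector of each map node
  for epoch in range(n_epochs):
      lr = 0.5 * (1 - epoch / n_epochs)                   # decaying learning rate
      sigma = max(1.0, (n_nodes / 2) * (1 - epoch / n_epochs))  # decaying neighborhood width
      for x in data:
          bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
          dist = np.abs(np.arange(n_nodes) - bmu)               # distance on the 1-D map grid
          h = np.exp(-(dist ** 2) / (2 * sigma ** 2))           # neighborhood function
          weights += lr * h[:, None] * (x - weights)            # pull BMU and neighbors toward x

  print(np.round(weights, 2))   # neighboring nodes should end up near neighboring points on the ring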
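
For the graphical-models vote, here is a minimal forward-filtering sketch for a two-state Hidden Markov Model; the state names and every probability are made-up stand-ins for the kind of domain knowledge mentioned above, only to show how priors and observed data combine.

  import numpy as np

  # Made-up two-state HMM (states: "normal", "attack"); all numbers are illustrative only.
  pi = np.array([0.9, 0.1])                      # prior over the initial state
  A = np.array([[0.95, 0.05],                    # state transition probabilities
                [0.30, 0.70]])
  B = np.array([[0.8, 0.15, 0.05],               # emission probabilities for 3 observation symbols
                [0.1, 0.30, 0.60]])

  def forward(observations):
      """Return P(state_t | obs_1..t) for each t (filtered posteriors)."""
      alpha = pi * B[:, observations[0]]
      alpha /= alpha.sum()
      posteriors = [alpha]
      for o in observations[1:]:
          alpha = (alpha @ A) * B[:, o]          # predict with the transition model, then update
          alpha /= alpha.sum()
          posteriors.append(alpha)
      return np.array(posteriors)

  print(forward([0, 0, 2, 2, 2]))   # P(attack) should rise as symbol 2 keeps appearing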

Back to 2010 Spring ECE 662 mboutin
