Revision as of 09:25, 24 April 2008 by Slitkouh (Talk)


Regularization refers to a set of techniques used to avoid overfitting. It ensures that the learned function is no more complex (for example, no more curved) than necessary, typically by adding a penalty term to the error function. Regularization is also used to solve ill-conditioned parameter-estimation problems. Typical examples of regularization methods include Tikhonov regularization, the lasso, and the L2-norm penalty in SVMs. These techniques are also used for model selection, where they work by either implicitly or explicitly penalizing models based on the number of their parameters.
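The penalty idea can be sketched with Tikhonov (ridge) regularization for least squares. This is an illustrative example, not taken from the original page: the unregularized fit minimizes ||Xw - y||^2, while the ridge fit minimizes ||Xw - y||^2 + lam*||w||^2, which has the closed-form solution below and shrinks the weights toward zero (the variable names and data here are hypothetical).

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: w = (X^T X + lam*I)^(-1) X^T y.

    With lam = 0 this reduces to ordinary least squares; lam > 0 adds
    the Tikhonov penalty lam*||w||^2 to the squared error, which also
    conditions the matrix being inverted (useful for ill-conditioned
    problems).
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic example data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=50)

w_ols = ridge_fit(X, y, lam=0.0)     # ordinary least squares
w_ridge = ridge_fit(X, y, lam=10.0)  # penalized fit with smaller norm

# The penalty shrinks the solution: ||w_ridge|| < ||w_ols||.
assert np.linalg.norm(w_ridge) < np.linalg.norm(w_ols)
```

Increasing lam trades a worse fit on the training data for smaller, smoother parameters, which is exactly the overfitting control described above.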
