[[ECE662 _OldKiwi| ECE662 Main Page]]

[[Class_Lecture_Notes _OldKiwi| Class Lecture Notes]]
== Lecture Theme ==

Decision hypersurfaces

== Actual Lecture ==

Making decisions is the art of drawing hypersurfaces.

=== [[Hypersurfaces_OldKiwi]] discussed in this lecture ===

These [[Hypersurfaces_OldKiwi]] are discussed in this lecture:
* '''Linear Separations''' (see the sketch after this list)
** Straight lines in 2D: <math>ax+by+c=0</math>
** Planes in 3D: <math>ax+by+cz+d=0</math>
** Hyperplanes in nD: <math>\sum a_{i}x_{i}=const.</math>
* '''Union of linear separations'''
* '''Tree-based separation'''
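
The linear separations listed above are easy to state in code. The following is a minimal sketch (not from the lecture; the coefficients and function names are made up for illustration): a point is classified by the sign of the linear decision function.

<pre>
def line_2d(x, y, a=1.0, b=-1.0, c=0.0):
    """Value of the 2D decision function ax + by + c (made-up coefficients)."""
    return a * x + b * y + c

def hyperplane(x, a, const=0.0):
    """Value of the nD decision function sum_i a_i x_i - const."""
    return sum(ai * xi for ai, xi in zip(a, x)) - const

def classify(value):
    """Points with value > 0 lie on one side of the hypersurface, value < 0 on the other."""
    return "class 1" if value > 0 else "class 2"

print(classify(line_2d(2.0, 1.0)))                                  # 2 - 1 + 0 = 1 > 0   -> class 1
print(classify(hyperplane([1.0, 2.0, 3.0], a=[0.5, -1.0, 0.25])))   # 0.5 - 2 + 0.75 < 0  -> class 2
</pre>
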
=== Combining Hypersurfaces ===
Main article: [[Combining Hypersurfaces_OldKiwi]]

Hypersurfaces are [[Combining Hypersurfaces _OldKiwi| combined]] by multiplication of the functions which define them, not by intersection.
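
As a small illustration of this (the two lines below are hypothetical, not from the lecture), the product of two defining functions vanishes exactly on the union of the two hypersurfaces:

<pre>
def g1(x, y):
    return x + y - 1.0      # first line: x + y - 1 = 0

def g2(x, y):
    return x - y            # second line: x - y = 0

def combined(x, y):
    # The product is zero wherever g1 = 0 OR g2 = 0, i.e. on the union of the
    # two lines, which is what combining by multiplication produces.
    return g1(x, y) * g2(x, y)

print(combined(1.0, 0.0))   # 0.0: on the first line, hence on the combined surface
print(combined(2.0, 2.0))   # 0.0: on the second line, hence on the combined surface
print(combined(2.0, 0.0))   # 2.0: on neither line, off the combined surface
</pre>
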
=== Tree-Based Separation ===
By using unions of hypersurfaces, classes can be separated with a tree-based structure. Tree-based separation is harder, and finding an optimal decision tree is another difficult problem in pattern recognition.

Example:

[[Image:Treebased.jpg]]
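
A toy example of what such a tree looks like in code (the thresholds and class labels below are invented, not the ones in the figure):

<pre>
def tree_classify(x, y):
    # Each test compares against a hypersurface; each leaf is a class region.
    if x - 2.0 < 0:              # first hypersurface: x - 2 = 0
        if y - 1.0 < 0:          # second hypersurface: y - 1 = 0
            return "class A"
        return "class B"
    return "class C"

print(tree_classify(1.0, 0.5))   # left branch twice -> class A
print(tree_classify(1.0, 3.0))   # left, then right  -> class B
print(tree_classify(5.0, 0.0))   # right branch      -> class C
</pre>
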
=== Question: What do we mean by '''good''' separation? ===

'''Meaning 1:''' One that will make few mistakes when used to classify actual data. In statistical language: small probability of error.

'''Meaning 2:''' One whose mistakes will not be too costly. In statistical language: small expected cost of errors.

'''Note:''' If the cost function <math>c(x)</math> from '''Meaning 2''' is defined to be 1 for an error and 0 otherwise, then <math>E[c(x)] = P[error]</math>, so that '''Meaning 1''' is a special case of '''Meaning 2'''.
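
For completeness, the one-line computation behind this note, using the 0-1 cost just described:

<math>E[c(x)] = 1 \cdot P[error] + 0 \cdot P[correct] = P[error]</math>
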
==== Illustration ====
Take 50,000 people in an airport. We want to be able to distinguish the men from the women among these 50,000 people.

Use hair length as the single feature.

[[Image:hairlength.jpg]]

The histogram represents the actual counts.

Decision hypersurface: <math>l-6=0</math>, where <math>l</math> is the hair length.
This is the best decision rule in the sense that it minimizes the probability of error. We cannot do better than this when hair length is the only feature used.
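
A minimal numerical sketch of this rule (the histogram counts below are made up; they are not the lecture's data): the decision is a threshold on the single feature, and the best threshold is the one that misclassifies the fewest people.

<pre>
def classify_by_hair_length(l, threshold=6.0):
    """Decide using the hypersurface l - 6 = 0 (units assumed, e.g. inches)."""
    return "woman" if l - threshold > 0 else "man"

# Made-up histogram counts per hair-length bin (bin i = lengths in [i, i+1)).
men   = [9000, 8000, 6000, 4000, 2500, 1500,  800,  300,  100,   50]
women = [  50,  100,  300,  800, 1500, 1200,  900, 2500, 4000, 6000]

def errors(threshold):
    """Misclassifications if everyone with length >= threshold is called a woman."""
    return sum(women[:threshold]) + sum(men[threshold:])

best = min(range(len(men) + 1), key=errors)
print(best, errors(best))   # with these made-up counts the best threshold is 6
</pre>
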
==== Remember ====
* Good features produce distinct bumps. That is, try to choose features that have a natural grouping for a particular class. For example, hair length is a good feature for separating men from women, since most men have short hair.
* Bad features (e.g. background color) produce overlapping bumps.
* In order to get a good decision algorithm, we need to have good features (and good separation too).
=== Work in low dimensionality as far as possible ===

Why? This is typically explained as the [[curse of dimensionality_OldKiwi]]. For more, this term can be googled. There is a nice paper in VLDB 2000 that discusses the notion of nearest neighbors in high-dimensional spaces (see [http://citeseer.ist.psu.edu/cache/papers/cs/25919/http:zSzzSzwww.informatik.uni-halle.dezSz~keimzSzPSzSzvldb2000.pdf/hinneburg00what.pdf citeseer]).
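
A rough numerical sketch of the phenomenon that paper studies (a simulation with uniform random points, not the paper's experiment): as the dimension grows, the nearest and farthest neighbors of a query point become almost equally far away, so nearest-neighbor style decisions lose their meaning.

<pre>
import random

def contrast(dim, n_points=500):
    """(farthest - nearest) / nearest distance from a random query to random points in [0,1]^dim."""
    query = [random.random() for _ in range(dim)]
    points = [[random.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [sum((q - p) ** 2 for q, p in zip(query, pt)) ** 0.5 for pt in points]
    return (max(dists) - min(dists)) / min(dists)

for dim in (2, 10, 100, 1000):
    print(dim, round(contrast(dim), 3))   # the relative contrast shrinks as dim grows
</pre>
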
Let <math>x_{1},x_{2},\ldots,x_{N}</math> be features. Suppose we use a large number of features to come up with a decision hypersurface <math>g(x_{1},x_{2},\ldots,x_{N})=0</math>.

Then <math>g</math> is the single feature we need.
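
A minimal sketch of this point (the function g below is hypothetical): whatever the number N of original features, the scalar value g(x) is the only feature the final decision uses; the class is just its sign.

<pre>
def g(x1, x2, x3):
    # Made-up nonlinear decision function on N = 3 features (inside/outside a sphere).
    return x1 ** 2 + x2 ** 2 + x3 ** 2 - 1.0

def decide(x1, x2, x3):
    # Three features have been collapsed into the single feature g.
    return "class 1" if g(x1, x2, x3) > 0 else "class 2"

print(decide(0.2, 0.3, 0.1))   # g = -0.86 -> class 2
print(decide(1.0, 1.0, 1.0))   # g =  2.0  -> class 1
</pre>
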
 
Previous: [[Lecture 1-Introduction_OldKiwi]]
Next: [[Lecture 3_OldKiwi]]
