=Glossary for "Decision Theory" ([[ECE662]])=
If you want to add an entry to the glossary, create an article for the topic and enter the information there, then include the term on this page by transcluding that article, or instead write a brief description here. Follow the example of the other entries, with square brackets linking to the topic's article and double equal signs around the heading.
  
Please give your thoughts in the Forum discussion on this topic, "Separate Pages for Glossary Terms".
{| align="right"
| __TOC__
|}
----

== [[Active Learning_Old Kiwi|Active Learning]] ==
{{:Active Learning}}

== [[Backpropagation Learning Algorithm_Old Kiwi|Backpropagation Learning Algorithm]] ==
{{:Backpropagation Learning Algorithm}}
  
== [[Artificial Neural Networks_Old Kiwi|Artificial Neural Networks]] ==
References: Ripley Chapter 5; Duda, Hart, and Stork Chapter 6.

Neural networks are a family of function approximation techniques in which the function to be approximated,

<math>f:x \rightarrow z</math>

is modeled as a composition of simple functions <math>f_i</math>:

<math>f=f_n \circ f_{n-1} \circ \cdots \circ f_1</math>

== [[Bayes Decision Rule_Old Kiwi|Bayes Decision Rule]] ==
Bayes' decision rule creates an objective function which minimizes the probability of error (misclassification). This method assumes a known distribution for the feature vectors given each class.
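As a sketch of the rule, the toy classifier below picks the class maximizing prior times class-conditional likelihood; the one-dimensional Gaussian class-conditionals, priors, and numbers are hypothetical.

```python
import math

def gaussian_pdf(x, mean, std):
    # Class-conditional density p(x | w_i), assumed Gaussian here.
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def bayes_decide(x, priors, means, stds):
    # Decide the class w_i maximizing P(w_i) * p(x | w_i);
    # the evidence p(x) is a common factor and can be ignored.
    scores = [p * gaussian_pdf(x, m, s) for p, m, s in zip(priors, means, stds)]
    return scores.index(max(scores))

# Class 0: N(0, 1), class 1: N(3, 1), equal priors; boundary is at 1.5.
print(bayes_decide(0.5, [0.5, 0.5], [0.0, 3.0], [1.0, 1.0]))  # -> 0
print(bayes_decide(2.9, [0.5, 0.5], [0.0, 3.0], [1.0, 1.0]))  # -> 1
```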
  
== [[Bayesian Parameter Estimation_Old Kiwi|Bayesian Parameter Estimation]] ==
Bayesian Parameter Estimation is a technique for parameter estimation which uses probability densities as estimates of the parameters instead of exact deterministic ones, as [[MLE_Old Kiwi]] does.
  
== [[Central Limit Theorem_Old Kiwi|Central Limit Theorem]] ==
The CLT explains why many distributions tend to be close to the normal distribution. The key point is that the "random variable" being observed should be the sum or mean of many independent random variables; the variables need not be identically distributed.
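A quick illustration with hypothetical parameters and a fixed seed: averaging many independent uniform draws produces values that concentrate around the true mean with roughly normal spread, as the CLT predicts.

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

# Each observation is the mean of 50 independent Uniform(0,1) draws.
# By the CLT the observations cluster around 0.5 with an approximately
# normal shape, even though each summand is uniform.
sample_means = [statistics.mean(random.random() for _ in range(50))
                for _ in range(2000)]

print(round(statistics.mean(sample_means), 3))   # close to 0.5
print(round(statistics.stdev(sample_means), 3))  # close to sqrt((1/12)/50) ~ 0.041
```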
  
== [[Classes_Old Kiwi|Classes]] ==
The categories into which we are trying to classify the patterns.
  
== [[Classification_Old Kiwi|Classification]] ==
Assigning a class to a measurement, or equivalently, identifying the probabilistic source of a measurement. The only statistical model that is needed is the conditional model of the class variable given the measurement. This conditional model can be obtained from a joint model or it can be learned directly. The former approach is generative since it models the measurements in each class. It is more work, but it can exploit more prior knowledge, needs less data, is more modular, and can handle missing or corrupted data. Methods include mixture models and Hidden Markov Models. The latter approach is discriminative since it focuses only on discriminating one class from another. It can be more efficient once trained and requires fewer modeling assumptions. Methods include logistic regression, generalized linear classifiers, and nearest-neighbor. See "Discriminative and Learning".
  
== [[Clustering_Old Kiwi|Clustering]] ==
Grouping similar objects in a multidimensional space. It is useful for constructing new features which are abstractions of the existing features. Some algorithms, like k-means, simply partition the feature space. Other algorithms, like single-link agglomeration, create nested partitionings which form a taxonomy. Another possibility is to learn a graph structure between the clusters, as in the Growing Neural Gas. The quality of the clustering depends crucially on the distance metric in the space. Most techniques are very sensitive to irrelevant features, so they should be combined with feature selection.
  
::: '''References'''
*[http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/ A tutorial on clustering algorithms]
*[http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/kmeans.html K-means clustering]
*[http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/cmeans.html Fuzzy C-means clustering]
*[http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/hierarchical.html Hierarchical clustering]
*[http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/mixture.html Clustering as mixture of Gaussians]
*[http://citeseer.ist.psu.edu/smyth97clustering.html Clustering Sequences with Hidden Markov Models]

== [[Consistent Estimator_Old Kiwi|Consistent Estimator]] ==
An estimator that converges in probability to the true parameter value as the sample size increases is called a consistent estimator.
  
== [[Curse of Dimensionality_Old Kiwi|Curse of Dimensionality]] ==
Refers to the problem caused by exponential growth of hypervolume as a function of dimensionality. This term was coined by Richard Bellman in 1961.
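A one-line illustration of the growth: the fraction of the unit hypercube's volume captured by a slightly smaller hypercube shrinks exponentially with the dimension, so "most" of the space migrates toward the boundary.

```python
# A hypercube of side 0.9 nested inside the unit hypercube retains only
# 0.9**d of the volume in d dimensions.
def inner_volume_fraction(side, d):
    return side ** d

for d in (1, 10, 100):
    print(d, inner_volume_fraction(0.9, d))
```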
  
== [[Decision Tree_Old Kiwi|Decision Tree]] ==
A decision tree is a classifier that maps from observations about an item to a conclusion about its target value. It can be seen as the deterministic ancestor of the hierarchical mixture of experts. Decision trees trace back to problems that require probabilistic inference, a core technology in AI, largely due to developments in graph-theoretic methods for the representation and manipulation of complex probability distributions. Whether in their guise as directed graphs (Bayesian networks) or as undirected graphs (Markov random fields), probabilistic decision trees/models have a number of virtues as representations of uncertainty and as inference engines. The foundation formed by decision trees forms the basis for graphical models (see [[Graphical Models_Old Kiwi]]).
  
::: '''References'''
1.[http://www.autonlab.org/tutorials/dtree.html Decision Tree Tutorial]

2.[http://dms.irb.hr/tutorial/tut_dtrees.php DM Tutorial-Decision Trees]

== [[Discriminant Analysis_Old Kiwi|Discriminant Analysis]] ==
Constructing new features via linear combination so that classification is easier. The notion of "easier" is quantified by Fisher's criterion, which applies exactly to Gaussian classes with equal variance and approximately to other models. Variants like Flexible discriminant analysis consider nonlinear combinations as well. See Duda&Hart, "Eigenfaces vs. Fisherfaces", and "Flexible Discriminant and Mixture Models".

== [[Discriminant Function_Old Kiwi|Discriminant Function]] ==
One way to represent classifiers. Given classes <math>w_1, w_2, \ldots, w_k</math> and a feature vector <math>x</math> in n-dimensional space, the discriminant functions <math>g_1(x), g_2(x), \ldots, g_k(x)</math>, where each <math>g_i(x)</math> maps n-dimensional space to the real numbers, are used to make decisions. The decision is <math>w_i</math> if <math>g_i(x) \geq g_j(x)</math> for all <math>j</math>.

== [[Distance_Old Kiwi|Distance]] ==
See [[Metric_Old Kiwi]]

== [[Drawbacks of Euclidean distance (E.D)_Old Kiwi|Drawbacks of Euclidean distance]] ==
The drawback of the E.D approach is that it assumes that the sample points are distributed about the sample mean in a spherical manner. Were the distribution to be decidedly non-spherical, for instance ellipsoidal, then we would expect the probability of a 'test point' belonging to the set to depend not only on the distance from the sample mean but also on the direction.

== [[Dual Problem_Old Kiwi|Dual Problem]] ==
Every linear programming problem, referred to as a primal problem, can be converted into a dual problem, which provides an upper bound to the optimal value of the primal problem. In matrix form, we express the primal problem as:

Maximize <math>c^Tx</math>

Subject to <math>Ax \geq b, x\geq 0</math>
 
The corresponding dual problem is:

Minimize <math>b^Ty</math>

Subject to <math>A^Ty\geq c, y\geq 0</math>

Where y is used instead of x as the variable vector.

Article on Dual Problems
  
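Weak duality can be checked numerically. The sketch below uses a small hypothetical LP in the symmetric form "maximize <math>c^Tx</math> subject to <math>Ax \leq b, x \geq 0</math>", whose dual is "minimize <math>b^Ty</math> subject to <math>A^Ty \geq c, y \geq 0</math>"; any primal-feasible x and dual-feasible y satisfy <math>c^Tx \leq b^Ty</math>.

```python
# Hypothetical LP data for a weak-duality check.
c = [3, 2]
A = [[1, 1],
     [2, 1]]
b = [4, 6]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

x = [1, 2]   # primal feasible: x >= 0 and A x = [3, 4] <= [4, 6]
y = [2, 1]   # dual feasible: y >= 0 and A^T y = [4, 3] >= [3, 2]
assert all(dot(row, x) <= bi for row, bi in zip(A, b))  # primal feasibility
print(dot(c, x), dot(b, y))  # primal value 7 <= dual value 14
```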
== [[Euclidean Distance (ED)_Old Kiwi|Euclidean Distance (ED)]] ==
The distance between two points in a cartesian coordinate system that is "measured by a ruler". The Euclidean distance between the points X=[x1,x2,...xn] and Y=[y1,y2,...yn] is defined as <math>dist(X,Y)=\sqrt{\sum_{i=1}^n{(x_i-y_i)^2}}</math>.

== [[Expectation Maximization_Old Kiwi|Expectation Maximization]] ==
Expectation Maximization is a popular clustering method. In particular, if the data to be clustered consists of Gaussian clusters, it starts with randomly initialized means and covariances for the clusters and then, using a maximum likelihood approach, tries to find the actual means and covariances of the clusters.
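A simplified one-dimensional sketch of EM for a two-component Gaussian mixture, on hypothetical data with equal mixing weights held fixed: alternate computing responsibilities (E-step) and re-estimating means and variances (M-step). A full implementation would also update the mixing weights.

```python
import math
import random

random.seed(1)

# Toy 1-D data from two Gaussian clusters (true means -2 and 2, unit variance).
data = [random.gauss(-2, 1) for _ in range(200)] + [random.gauss(2, 1) for _ in range(200)]

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu = [-1.0, 1.0]   # crude initial means
var = [1.0, 1.0]
for _ in range(30):
    # E-step: posterior responsibility of component 0 for each point
    r0 = [normal_pdf(x, mu[0], var[0]) /
          (normal_pdf(x, mu[0], var[0]) + normal_pdf(x, mu[1], var[1]))
          for x in data]
    # M-step: re-estimate means and variances from the responsibilities
    n0 = sum(r0); n1 = len(data) - n0
    mu[0] = sum(r * x for r, x in zip(r0, data)) / n0
    mu[1] = sum((1 - r) * x for r, x in zip(r0, data)) / n1
    var[0] = sum(r * (x - mu[0]) ** 2 for r, x in zip(r0, data)) / n0
    var[1] = sum((1 - r) * (x - mu[1]) ** 2 for r, x in zip(r0, data)) / n1

print(sorted(mu))  # means recovered near -2 and 2
```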

== [[Evidence_Old Kiwi|Evidence]] ==
It is a scale factor that ensures that the posterior probabilities sum to one.

== [[Feature Vector_Old Kiwi|Feature Vector]] ==
Vector containing a numerical representation of features.

== [[Features_Old Kiwi|Features]] ==
The measurements which represent the data. The statistical model one uses is crucially dependent on the choice of features. Hence it is useful to consider alternative representations of the same measurements (i.e. different features). For example, different representations of the color values in an image. General techniques for finding new representations include discriminant analysis, principal component analysis, and clustering.

== [[Fisher Linear Discriminant_Old Kiwi|Fisher Linear Discriminant]] ==
Fisher's linear discriminant is a classification method that projects high-dimensional data onto a line and performs classification in this one-dimensional space. The projection maximizes the distance between the means of the two classes while minimizing the variance within each class. See Lecture 10 for a detailed explanation.

[http://www.csd.uwo.ca/~olga/Courses//CS434a_541a//Lecture8.pdf The Lecture and Example of Fisher Linear Discriminant]
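A minimal two-dimensional sketch with hypothetical data: compute the within-class scatter matrix <math>S_w</math> and project along <math>w = S_w^{-1}(m_1 - m_2)</math>; the projected classes then separate on a line.

```python
# Hypothetical 2-D training points for two classes.
class1 = [(1.0, 2.0), (2.0, 3.0), (3.0, 3.0), (2.0, 1.0)]
class2 = [(6.0, 5.0), (7.0, 8.0), (8.0, 6.0), (7.0, 7.0)]

def mean(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def scatter(pts, m):
    sxx = sum((p[0] - m[0]) ** 2 for p in pts)
    syy = sum((p[1] - m[1]) ** 2 for p in pts)
    sxy = sum((p[0] - m[0]) * (p[1] - m[1]) for p in pts)
    return [[sxx, sxy], [sxy, syy]]

m1, m2 = mean(class1), mean(class2)
S1, S2 = scatter(class1, m1), scatter(class2, m2)
Sw = [[S1[i][j] + S2[i][j] for j in range(2)] for i in range(2)]

# Invert the 2x2 within-class scatter by hand and apply it to (m1 - m2).
det = Sw[0][0] * Sw[1][1] - Sw[0][1] * Sw[1][0]
dm = (m1[0] - m2[0], m1[1] - m2[1])
w = ((Sw[1][1] * dm[0] - Sw[0][1] * dm[1]) / det,
     (-Sw[1][0] * dm[0] + Sw[0][0] * dm[1]) / det)

# Projections w . x place the two classes on disjoint intervals of the line.
proj1 = [w[0] * p[0] + w[1] * p[1] for p in class1]
proj2 = [w[0] * p[0] + w[1] * p[1] for p in class2]
print(max(proj1) < min(proj2) or min(proj1) > max(proj2))  # -> True
```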

== [[Generalized Rayleigh Quotient_Old Kiwi|Generalized Rayleigh Quotient]] ==
<math>J(\vec{w}) = \frac{\vec{w}^{\top}S_B \vec{w}}{\vec{w}^{\top}S_w \vec{w}}</math>

== [[Generic Property_Old Kiwi|Generic Property]] ==
In mathematics, properties that hold for typical examples are called generic properties. For instance, a generic property of a class of functions is one that is true of most of those functions, as in the statements "A generic polynomial does not have a root at zero" or "A generic matrix is invertible." As another example, a generic property of a space is a property that holds at most points of the space, as in the statement "if f: M->N is a smooth function between smooth manifolds, then a generic point of N is not a critical value of f" (this follows from Sard's theorem).

== [[Graphical Models_Old Kiwi|Graphical Models]] ==
Graphical models allow a separation between the qualitative, structural aspects of uncertain knowledge and the quantitative, parametric aspects of uncertainty: the former are represented via patterns of edges in the graph, the latter as numerical values associated with subsets of nodes in the graph. This separation is often found to be natural by domain experts, taming some of the problems associated with structuring, interpreting, and troubleshooting the model. Even more importantly, the graph-theoretic framework has allowed for the development of general inference algorithms, which in many cases provide orders of magnitude speedups over brute-force methods. (See [[Decision trees_Old Kiwi]].)

== [[Hilbert Space_Old Kiwi|Hilbert Space]] ==
A Hilbert space, named after the German mathematician David Hilbert, generalizes the notion of Euclidean space in a way that extends methods of vector algebra from the two-dimensional plane and three-dimensional space to infinite-dimensional spaces. A Hilbert space is an inner product space which is complete under the induced metric. A Hilbert space is a special class of Banach space under the norm induced by the inner product, since the norm and the inner product both induce the same metric. Any finite-dimensional inner product space is a Hilbert space, but it is worth mentioning that some authors require the space to be infinite-dimensional for it to be called a Hilbert space.

::: '''References'''
1.[http://www-math.mit.edu/~rbm/18.155-F02/Lecture-notes/node6.html Hilbert space definition and proposition]

2.[http://en.wikipedia.org/wiki/Hilbert_space Wikipedia-Hilbert Space]

3.[http://mathworld.wolfram.com/HilbertSpace.html Mathworld-Hilbert Space]

== [[Histogram Density Estimation_Old Kiwi|Histogram Density Estimation]] ==
Histogram Density Estimation is one of the most primitive and easiest non-parametric density estimation methods. The given feature space is divided into equally-spaced bins or cells. The number of training feature samples that fall into each cell is counted; i.e., if <math>n_i</math> is the number of the <math>n</math> feature samples that fall into bin 'i', and 'V' is the volume of each cell (the volumes are all equal because the cells are equi-spaced), the density is estimated by <math>p(x) = n_i/(nV)</math>.

== [[Impurity_Old Kiwi|Impurity]] ==
Impurity of a class is defined as the amount of data misclassified into that class. It is zero when all training data belongs to one class. See [[Lecture 21 - Decision Trees(Continued)_Old Kiwi]] for the definitions of different types of impurities.

== [[Informative Prior_Old Kiwi|Informative Prior]] ==
In the Bayesian framework, the prior distribution of a variable is called an informative prior if it provides some specific information about the variable. Informative priors allow one to incorporate prior, somewhat defined knowledge about the random variable under consideration. (See also: Uninformative Prior and Prior)

== [[K-means_Old Kiwi|K-means]] ==
A parametric algorithm for clustering data into exactly k clusters. First, define some initial cluster parameters. Second, assign data points to clusters. Third, recompute better cluster parameters, given the data assignment. Iterate back to step two. It is related to the Expectation-Maximization algorithm for fitting a mixture of Gaussians.
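The three steps above can be sketched in one dimension as follows. The deterministic initialization is a simplification for reproducibility; k-means is usually initialized randomly, and the data are hypothetical.

```python
def kmeans_1d(data, k, iters=20):
    # Step 1: crude deterministic initialization (assumes k <= 3 here).
    means = [data[0], data[len(data) // 2], data[-1]][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:                          # step 2: assign to nearest mean
            i = min(range(k), key=lambda j: (x - means[j]) ** 2)
            clusters[i].append(x)
        for i, c in enumerate(clusters):        # step 3: recompute means
            if c:
                means[i] = sum(c) / len(c)
    return sorted(means)

data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9, 9.8, 10.1, 10.0]
print([round(m, 2) for m in kmeans_1d(data, 3)])  # -> [0.15, 5.03, 9.97]
```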

== [[Kernel Functions_Old Kiwi|Kernel Functions]] ==
These functions operate in the feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the mappings of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. Kernel functions have been introduced for sequence data, text, images, as well as vectors. (Ref: Wikipedia)

== [[KNN-K Nearest Neighbor_Old Kiwi|KNN-K Nearest Neighbor]] ==
These classifiers do not fit a model to the data; they are purely memory-based. KNN uses the neighborhood of a new query to compute its predicted value. Advantages: a nonparametric architecture, simple and powerful, requires no training time. Disadvantages: memory intensive, and classification and estimation are slow. Please refer to the KNN tutorial websites below.

1. [http://people.revoledu.com/kardi/tutorial/KNN KNN Tutorial] : Contents are below
* How K-Nearest Neighbor (KNN) Algorithm works?
* Numerical Example (hand computation)
* KNN for Smoothing and Prediction
* How do we use the spreadsheet for KNN?
* Strength and Weakness of K-Nearest Neighbor Algorithm
* Resources for K Nearest Neighbors Algorithm

2. [http://www.nlp.org.cn/docs/20020903/36/kNN.pdf KNN]

3. [http://www.chem.agilent.com/cag/bsp/products/gsgx/Downloads/pdf/class_prediction.pdf Class Prediction using KNN]

4. [http://en.wikipedia.org/wiki/Nearest_neighbor_(pattern_recognition) WIKIPEDIA]
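A minimal memory-based KNN sketch with hypothetical training pairs: store the training set, sort by squared Euclidean distance to the query, and take a majority vote among the k nearest labels.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    # train is a list of ((features...), label) pairs; no training phase,
    # the whole set is kept in memory.
    neighbors = sorted(train,
                       key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], query)))
    labels = [label for _, label in neighbors[:k]]
    return Counter(labels).most_common(1)[0][0]

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
         ((5.0, 5.0), "B"), ((5.1, 4.9), "B"), ((4.8, 5.2), "B")]
print(knn_predict(train, (1.1, 1.0)))  # -> A
print(knn_predict(train, (5.0, 5.1)))  # -> B
```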

== [[Kruskal's Algorithm_Old Kiwi|Kruskal's Algorithm]] ==
Kruskal's algorithm is a graph-theory algorithm that constructs the minimum spanning tree of a graph by adding edges to the spanning tree one by one. At all points during its execution, the set of edges selected by Kruskal's algorithm forms a forest of trees. The steps in this algorithm can be seen here: http://students.ceid.upatras.gr/~papagel/project/pseukrus.htm

Kruskal's algorithm is conceptually quite simple. The edges are selected and added to the spanning tree in increasing order of their weights. An edge is added to the tree only if it does not create a cycle.
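The steps above can be sketched with a union-find structure for cycle detection; the edge list below is hypothetical.

```python
def kruskal(num_vertices, edges):
    # Union-find parents for cycle detection.
    parent = list(range(num_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    mst = []
    for w, u, v in sorted(edges):          # edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding (u, v) creates no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]
mst = kruskal(4, edges)
print(sum(w for w, _, _ in mst))  # total MST weight: 6
```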

== [[Lagrange Multipliers_Old Kiwi|Lagrange Multipliers]] ==
In mathematical optimization problems, the method of Lagrange multipliers, named after Joseph Louis Lagrange, is a method for finding the extrema of a function of several variables subject to one or more constraints; it is the basic tool in nonlinear constrained optimization. Simply put, the technique is able to determine where on a particular set of points (such as a circle, sphere, or plane) a particular function is smallest (or largest).

== [[Likelihood Principle_Old Kiwi|Likelihood Principle]] ==
The likelihood principle is a principle of statistical inference which asserts that all of the information in a sample is contained in the likelihood function. A likelihood function arises from a conditional probability distribution considered as a function of its second argument, holding the first fixed. E.g., consider a model which gives the probability density function of an observable random variable X as a function of a parameter O. Then for a specific value x of X, the function L(O|x) = P(X=x|O) is a likelihood function of O. It gives a measure of how likely any particular value of O is, if we know that X has the value x. Two likelihood functions are equivalent if one is a scalar multiple of the other.

== [[Linear Discriminant Functions (LDF)_Old Kiwi|Linear Discriminant Functions (LDF)]] ==
Functions that are linear combinations of x:

<math>g(x) = w^t x + w_0</math>

where <math>w</math> is the weight vector and <math>w_0</math> is the bias or threshold. In the two-category case <math>g(x)</math> defines the decision surface: decide <math>w_1</math> when <math>g(x) > 0</math> and <math>w_2</math> when <math>g(x) < 0</math>.
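A minimal sketch of the two-category rule, with a hypothetical weight vector and bias:

```python
def g(x, w, w0):
    # Linear discriminant g(x) = w^T x + w0.
    return sum(wi * xi for wi, xi in zip(w, x)) + w0

w, w0 = [1.0, -1.0], -0.5   # hypothetical weights and bias

def decide(x):
    # Decide w1 when g(x) > 0, w2 when g(x) < 0.
    return "w1" if g(x, w, w0) > 0 else "w2"

print(decide([3.0, 1.0]))  # g = 1.5 > 0 -> w1
print(decide([1.0, 2.0]))  # g = -1.5 < 0 -> w2
```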
  
== [[LUT - Look-Up Table_Old Kiwi|LUT - Look-Up Table]] ==
In classification applications, LUTs can be used for comparing decisions, as long as memory is available. This method requires a discrete feature space that is reasonably small. (This method is typically not mentioned in textbooks.)

== [[Mahalanobis Distance_Old Kiwi|Mahalanobis Distance]] ==
Differs from Euclidean Distance in that it takes into account the correlations in the data (by using the covariance matrix). Mahalanobis distance is widely used in cluster analysis and other classification techniques. (Ref: Wikipedia)
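A two-dimensional sketch with a hypothetical diagonal covariance: two points at the same Euclidean distance from the mean get different Mahalanobis distances depending on direction.

```python
import math

def mahalanobis_2d(x, mean, cov):
    # Invert the 2x2 covariance by hand, then compute sqrt(d^T C^-1 d).
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    d = (x[0] - mean[0], x[1] - mean[1])
    q = (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1]) +
         d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))
    return math.sqrt(q)

mean = (0.0, 0.0)
cov = [[4.0, 0.0],   # large variance along the first axis
       [0.0, 1.0]]
# Equal Euclidean distance from the mean, unequal Mahalanobis distance:
print(mahalanobis_2d((2.0, 0.0), mean, cov))  # -> 1.0 (along the "wide" axis)
print(mahalanobis_2d((0.0, 2.0), mean, cov))  # -> 2.0 (along the "narrow" axis)
```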
  
== [[Manhattan Distance_Old Kiwi|Manhattan Distance]] ==
Also known as the taxicab metric. The Manhattan distance between two points (X,Y) in a cartesian system is defined as <math>dist(X,Y)=\sum_{i=1}^n{|x_i-y_i|}</math>. This is equal to the length of the shortest path connecting X and Y when movement is restricted to axis-parallel steps.

== [[Maximum A Posteriori_Old Kiwi|MAP]] or [[Maximum A Posteriori_Old Kiwi|Maximum A Posteriori]] ==
A parameter estimation heuristic that seeks parameter values that maximize the posterior density of the parameter. This implies that you have a prior distribution over the parameter, i.e. that you are Bayesian. MAP estimation has the highest chance of getting the parameter exactly right. But for predicting future data, it can be worse than Maximum Likelihood; predictive estimation is a better Bayesian method for that purpose. MAP is also not invariant to reparameterization.

== [[Mercer's Condition_Old Kiwi|Mercer's Condition]] ==
Valid kernel functions are sometimes more precisely referred to as Mercer kernels, because they must satisfy Mercer's condition: for any given <math>g(\vec{x})</math> such that <math>\int{g(\vec{x})^2d \vec{x}}</math> is finite,

<math>\int{K(\vec{x},\vec{z})g(\vec{x})g(\vec{z})d\vec{x}d\vec{z}} \geq 0</math>

We say that <math>K(\vec{x},\vec{z})</math> satisfies Mercer's condition if the above inequality holds for every such <math>g(\vec{x})</math>. A kernel function <math>K(\vec{x},\vec{z})</math> must be continuous, symmetric, and have a positive semi-definite Gram matrix. Such a <math>K(\vec{x},\vec{z})</math> means that there exists a mapping to a reproducing kernel Hilbert space (a Hilbert space is a vector space closed under dot products) such that the dot product there gives the same value as the function <math>K(\vec{x},\vec{z})</math>.

Reference:

* Wikipedia's Mercer's Condition[http://en.wikipedia.org/wiki/Mercer's_condition]

== [[Metric_Old Kiwi|Metric]] ==
Also known as distance. A function that defines the distance between two elements of a set. Examples of metrics are the Manhattan, Euclidean, and Mahalanobis distances.

== [[Minimum Spanning Trees_Old Kiwi|Minimum Spanning Trees]] ==
Given a connected, undirected graph, a spanning tree is a subgraph that is a tree and connects all the vertices together. There may be many different spanning trees for a graph. One of them can be selected based on the weights of the edges. A minimum spanning tree is a spanning tree with weight less than or equal to the weight of every other spanning tree. A maximum spanning tree, in a similar way, is the spanning tree with the sum of the weights higher than or equal to that of the other possible trees. A minimum spanning tree can be determined using Prim's or Kruskal's algorithms. Also see [[Prim's Algorithm_Old Kiwi]] and [[Kruskal's Algorithm_Old Kiwi]].

== [[Minkowski Metric_Old Kiwi|Minkowski Metric]] ==
The k-Minkowski metric between two points <math>P_1 = (x_1,x_2,...,x_n)</math> and <math>P_2 = (y_1,y_2,...,y_n)</math> is defined as <math> d_k = (\sum_{i=1}^n |x_i-y_i|^k)^{\frac{1}{k}}</math>. The Manhattan metric (k=1) and Euclidean metric (k=2) are special cases of the Minkowski metric. Here <math>P_1</math> and <math>P_2</math> are the points corresponding to feature vectors.
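A small sketch of the definition; k=1 reproduces the Manhattan distance and k=2 the Euclidean distance. The points are hypothetical.

```python
def minkowski(p1, p2, k):
    # d_k = (sum_i |x_i - y_i|^k)^(1/k)
    return sum(abs(a - b) ** k for a, b in zip(p1, p2)) ** (1.0 / k)

p1, p2 = (0.0, 0.0), (3.0, 4.0)
print(minkowski(p1, p2, 1))  # Manhattan: 7.0
print(minkowski(p1, p2, 2))  # Euclidean: 5.0
```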
  
Kernel Functions: These functions operate in the feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the mappings of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. Kernel functions have been introduced for sequence data, text, images, as well as vectors. (REF: wiki)
+
== [[Mixture Models_Old Kiwi|Mixture Models]] ==
  
Lagrange Multipliers: In mathematical optimization problems, the method of Lagrange multipliers, named after Joseph Louis Lagrange, is a method for finding the extrema of a function of several variables subject to one or more constraints; it is the basic tool in nonlinear constrained optimization. Simply put, the technique is able to determine where on a particular set of points (such as a circle, sphere, or plane) a particular function is the smallest (or largest).
+
These models use joint distributions over latent variables. By using observed variables, they try to create complicated joint distributions. Mixture Models can be used to cluster datasets.
  
== [[Likelihood Principle_Old Kiwi|Likelihood Principle]] ==

The likelihood principle is a principle of statistical inference which asserts that all of the information in a sample is contained in the likelihood function. A likelihood function arises from a conditional probability distribution considered as a function of its second argument, holding the first fixed. For example, consider a model which gives the probability density function of an observable random variable <math>X</math> as a function of a parameter <math>\theta</math>. Then for a specific value <math>x</math> of <math>X</math>, the function <math>L(\theta|x) = P(X=x|\theta)</math> is the likelihood function of <math>\theta</math>: it measures how likely any particular value of <math>\theta</math> is, given that <math>X</math> was observed to be <math>x</math>. Two likelihood functions are equivalent if one is a scalar multiple of the other.
== [[MLE_Old Kiwi|MLE]] ==
  
See Maximum Likelihood Estimation: a parameter estimation heuristic that seeks the parameter values that maximize the likelihood function, i.e., the best fit of the model to the data. This ignores the prior distribution and so is inconsistent with Bayesian probability theory, but it works reasonably well in practice. (See also: Lecture 7 and Bayesian Parameter Estimation)

== [[Linear Discriminant Functions_Old Kiwi|Linear Discriminant Functions (LDF)]] ==

Functions that are linear combinations of x:

<math>g(x) = w^T x + w_0</math>

where <math>w</math> is the weight vector and <math>w_0</math> is the bias, or threshold weight. In the two-category case, <math>g(x)=0</math> defines the decision surface: decide class w1 when g(x) > 0 and class w2 when g(x) < 0.

== [[Look-Up Table_Old Kiwi|LUT - Look-Up Table]] ==

In classification applications, LUTs can be used for computing decisions quickly, as long as memory is available. This method requires a discrete feature space that is reasonably small. (This method is typically not mentioned in textbooks.)
==[[Nonparametric regression/density estimation_Old Kiwi|Nonparametric regression/density estimation]]==
An approach to regression/density estimation that doesn't require much prior knowledge but only a large amount of data. This includes histograms, kernel smoothing, and nearest-neighbor.
  
== [[Mahalanobis Distance_Old Kiwi|Mahalanobis Distance]] ==

Differs from the Euclidean distance in that it takes into account the correlations in the data (by using the inverse of the covariance matrix). The Mahalanobis distance is widely used in cluster analysis and other classification techniques. (Ref - Wikipedia)
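A minimal two-dimensional sketch (the helper name and the toy covariance matrices are invented for illustration) that inverts the 2x2 covariance matrix explicitly; with the identity covariance it reduces to the Euclidean distance:

```python
def mahalanobis2d(x, mu, cov):
    """Mahalanobis distance for 2-D points; cov is a 2x2 covariance matrix."""
    a, b = cov[0]
    c, d = cov[1]
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))   # 2x2 matrix inverse
    dx = (x[0] - mu[0], x[1] - mu[1])
    # squared distance: dx^T * cov^{-1} * dx
    d2 = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
          + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return d2 ** 0.5

# Identity covariance: reduces to the Euclidean distance.
print(mahalanobis2d((3, 4), (0, 0), ((1, 0), (0, 1))))  # 5.0
# Larger variance along the first axis shrinks distances in that direction.
print(mahalanobis2d((2, 0), (0, 0), ((4, 0), (0, 1))))  # 1.0
```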
== [[Non-parametric Model_Old Kiwi|Non-parametric Model]] ==
Non-parametric models differ from parametric models in that the model structure is not specified a priori but is instead determined from data. The term nonparametric is not meant to imply that such models completely lack parameters but that the number and nature of the parameters are flexible and not fixed in advance.
== [[Manhattan Distance_Old Kiwi|Manhattan Distance]] ==

Also known as the taxicab metric. The Manhattan distance between two points <math>X</math> and <math>Y</math> in a Cartesian coordinate system is defined as <math>\sum_i |x_i - y_i|</math>. This is equal to the length of any path connecting X and Y that consists only of horizontal and vertical segments.

== [[MAP_Old Kiwi|MAP or Maximum A Posteriori]] ==

A parameter estimation heuristic that seeks parameter values that maximize the posterior density of the parameter. This implies that you have a prior distribution over the parameter, i.e. that you are Bayesian. MAP estimation has the highest chance of getting the parameter exactly right. But for predicting future data, it can be worse than maximum likelihood; predictive estimation is a better Bayesian method for that purpose. MAP is also not invariant to reparameterization.

== [[Overfitting_Old Kiwi]] ==

In statistics, overfitting means that some of the relationships that appear statistically significant are actually just noise. An overfit model has many more degrees of freedom than the data can support. Consequently, such a model does not replicate well and does a poor job of predicting future responses.

In machine learning, overfitting occurs when training runs too long or the training set is too small. As a result, the learner may adjust to very specific random features that have no causal relation to the target function. This may lead to the case where the training error decreases while the testing error increases. The process of applying a technique to avoid overfitting is referred to as [[Regularization_Old Kiwi]].
  
== [[Mercer's Condition_Old Kiwi|Mercer's Condition]] ==

Valid kernel functions are sometimes more precisely referred to as Mercer kernels because they must satisfy Mercer's condition: for any <math>g(\vec{x})</math> such that <math>\int g(\vec{x})^{2}d\vec{x}</math> is finite,

<math>\int\int K(\vec{x},\vec{z})g(\vec{x})g(\vec{z})\,d\vec{x}\,d\vec{z} \geq 0</math>

must hold for every such <math>g(\vec{x})</math>. A kernel function <math>K(\vec{x},\vec{z})</math> must be continuous, symmetric, and have a positive semi-definite Gram matrix. Such a <math>K(\vec{x},\vec{z})</math> means that there exists a mapping to a reproducing kernel Hilbert space (a Hilbert space is a vector space closed under dot products) such that the dot product there gives the same value as the function <math>K(\vec{x},\vec{z})</math>.

== [[Metric_Old Kiwi|Metric]] ==

Also known as distance: a function that defines the distance between two elements of a set. Examples of metrics are the Manhattan, Euclidean, and Mahalanobis distances.

== [[Parameter Estimation_Old Kiwi|Parameter Estimation]] ==

Density estimation when the density is assumed to be in a specific parametric family. Special cases include maximum likelihood, maximum a posteriori, unbiased estimation, and predictive estimation.

== [[Parametric Classifiers_Old Kiwi|Parametric Classifiers]] ==

We find parametric decision boundaries to approximate the true decision boundaries between classes. (Discussed in [[Lecture 9 - Linear Discriminant Functions_Old Kiwi]])

== [[Parametric Model_Old Kiwi|Parametric Model]] ==

A parametric model is a set of related mathematical equations in which alternative scenarios are defined by changing the assumed values of a set of fixed coefficients (parameters). In statistics, a parametric model is a parametrized family of probability distributions, one of which is presumed to describe the way a population is distributed.

== [[Parzen Window_Old Kiwi|Parzen Window]] ==

Parzen windows are very similar to k-nearest neighbors (KNN). Both methods can generate very complex decision boundaries. The main difference is that instead of looking at the k closest points to a piece of training data, all points within a fixed distance are considered. In practice, one consequence is that datasets with large gaps get treated very differently: KNN will pick far-away points, but a relatively small Parzen window may enclose zero points, in which case only the priors can be used to classify. The Parzen-window density estimate using n training samples and the window function <math>\pi</math> is defined by
<math>p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{V_n}\, \pi\!\left(\frac{x-x_i}{h_n}\right)</math>
  
The estimate <math>p_n(x)</math> is an average of (window) functions. Usually the window function has its maximum at the origin and its values become smaller when we move further away from the origin. Then each training sample is contributing to the estimate in accordance with its distance from x.
  
 
  
::: '''References'''
::::1. Parzen-window
::::2. Parzen-window density estimation
::::3. Parzen-window density estimation
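The estimate above can be sketched in a few lines for one-dimensional data with a hypercube (box) window, where the window "volume" is just the width h; the data values below are invented for illustration:

```python
def parzen_estimate(x, samples, h):
    """1-D Parzen-window density estimate with a hypercube (box) window."""
    def window(u):            # pi(u): 1 inside the unit box, else 0
        return 1.0 if abs(u) <= 0.5 else 0.0
    n = len(samples)
    v = h                     # window volume in one dimension
    return sum(window((x - xi) / h) for xi in samples) / (n * v)

data = [0.1, 0.2, 0.25, 0.8]
# Three of the four samples fall within the window of width 0.3 around 0.2.
print(parzen_estimate(0.2, data, 0.3))  # approximately 2.5
```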
  
 
  
==[[Partial Differential Equations (PDE)_Old Kiwi|Partial Differential Equations (PDE)]]==
  
PDEs are differential equations that relate a function of several independent variables to its partial derivatives with respect to those variables. In this course PDEs are used to find cluster boundaries.
  
::Links:
:::[https://balthier.ecn.purdue.edu/index.php/Lecture_27_-_Clustering_by_finding_valleys_of_densities Lecture 27]
:::[http://en.wikipedia.org/wiki/Partial_differential_equation Wikipedia]
:::[http://mathworld.wolfram.com/PartialDifferentialEquation.html Mathworld]
  
== [[Patterns_Old Kiwi|Patterns]] ==
The items that we are trying to classify, represented as vectors of features.
  
== [[Penalty Methods_Old Kiwi|Penalty Methods]] ==
In optimization, penalty methods are used to reformulate a constrained optimization problem as a series of unconstrained optimization problems. It can be shown that the solutions of these derived unconstrained problems converge to the solution of the original constrained problem. For example, a minimization problem with inequality constraints can be recast as an unconstrained problem by adding a penalty term whose value is a function of the original constraints. Except for minor variations, penalty methods are very similar to the so-called barrier methods.
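A small sketch of the idea (the objective, constraint, and step size are invented for illustration): minimize x^2 subject to x >= 1 by adding the quadratic penalty mu*max(0, 1-x)^2 and solving the unconstrained problem with plain gradient descent. As mu grows, the unconstrained minimizer mu/(1+mu) approaches the constrained solution x = 1:

```python
def penalized_min(mu, lr=0.005, steps=20000):
    """Minimize x^2 subject to x >= 1 via the quadratic penalty
    Q(x) = x^2 + mu * max(0, 1 - x)^2, using plain gradient descent."""
    x = 0.0
    for _ in range(steps):
        grad = 2 * x - 2 * mu * max(0.0, 1.0 - x)
        x -= lr * grad
    return x

# The unconstrained minimizer mu/(1+mu) creeps toward the feasible boundary.
for mu in (1.0, 10.0, 100.0):
    print(mu, penalized_min(mu))
```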
  
== [[Perceptron_Old Kiwi|Perceptron]] ==
Maps an input to a single binary output value. (First introduced in Lecture 9)
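A minimal sketch of the classic perceptron learning rule (the training set, the logical AND function, is an invented example; the rule only converges when the classes are linearly separable):

```python
def train_perceptron(samples, labels, epochs=100):
    """Train a simple perceptron: output 1 if w.x + b > 0, else 0."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # -1, 0, or +1
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err
    return w, b

# Learn the logical AND function (linearly separable):
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 0, 0, 1]
w, b = train_perceptron(X, Y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X]
print(preds)  # [0, 0, 0, 1]
```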
  
== [[Posterior_Old Kiwi|Posterior]] ==
The probability, computed from the observed data together with prior knowledge, that a case belongs to a group.
  
== [[Prim's Algorithm_Old Kiwi|Prim's Algorithm]] ==
Prim's algorithm is a graph algorithm to determine a minimum spanning tree for a connected weighted graph. This means it finds a subset of the edges that forms a tree including every vertex, in which the sum of weights of all the edges in the tree is minimized. The steps in this algorithm can be seen here http://students.ceid.upatras.gr/~papagel/project/pseuprim.htm
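The greedy growth of the tree can be sketched with a binary heap of frontier edges (the graph encoding and the example weights are invented for illustration):

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm. graph: {vertex: [(weight, neighbor), ...]}.
    Returns the total weight of a minimum spanning tree."""
    visited = {start}
    edges = list(graph[start])
    heapq.heapify(edges)
    total = 0
    while edges:
        w, v = heapq.heappop(edges)      # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        total += w
        for e in graph[v]:
            if e[1] not in visited:
                heapq.heappush(edges, e)
    return total

g = {'a': [(1, 'b'), (4, 'c')],
     'b': [(1, 'a'), (2, 'c'), (6, 'd')],
     'c': [(4, 'a'), (2, 'b'), (3, 'd')],
     'd': [(6, 'b'), (3, 'c')]}
print(prim_mst(g, 'a'))  # 6  (edges a-b, b-c, c-d)
```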
  
== [[Principal Component Analysis_Old Kiwi|Principal Component Analysis]] ==
Constructing new features which are the principal components of a data set. The principal components are random variables of maximal variance constructed from linear combinations of the input features. Equivalently, they are the projections onto the principal component axes, which are lines that minimize the average squared distance to each point in the data set. To ensure uniqueness, all of the principal component axes must be orthogonal. PCA is a maximum-likelihood technique for linear regression in the presence of Gaussian noise on both x and y. In some cases, PCA corresponds to a Fourier transform, such as the DCT used in JPEG image compression.
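For two-dimensional data the first principal axis can be computed in closed form from the 2x2 covariance matrix; the following sketch (function name and data are invented for illustration) returns the unit vector of maximal variance:

```python
import math

def principal_axis(points):
    """First principal component direction of 2-D data (unit vector),
    from the closed-form eigenvector of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = 0.5 * (sxx + syy + math.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2))
    vx, vy = (sxy, lam - sxx) if sxy else (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Data spread along the line y = x: the principal axis is (1, 1)/sqrt(2).
print(principal_axis([(0, 0), (1, 1), (2, 2), (3, 3)]))
```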
  
== [[Prior_Old Kiwi|Prior]] ==
The probability, based on known information alone and before any data are observed, that a case belongs to a group; hence the name prior. (See Also: Informative Prior and Uninformative Prior)
  
== [[Pruning_Old Kiwi|Pruning]] ==
Pruning (also called inverse splitting) is a term in mathematics and informatics for cutting off parts of a decision tree. Pruning is done by taking two leaves that have a common parent and merging them if doing so helps the classification accuracy of the decision tree. Pruned parts of the tree are no longer considered because the algorithm knows, based on already collected data (e.g. through sampling), that these subtrees do not contain the searched-for object. The decision tree is simplified by removing some decisions. Pruning improves generalization.
  
== [[Quadratic Programming Problem_Old Kiwi|Quadratic Programming Problem]] ==
A linearly constrained optimization problem with a quadratic objective function is called a quadratic program (QP). Quadratic programming is often considered as a discipline in and of itself owing to its vast number of applications.
  
== [[Regularization_Old Kiwi|Regularization]] ==
[[Regularization_Old Kiwi]] refers to a set of techniques used to avoid overfitting.
  
== [[Reinforcement learning_Old Kiwi|Reinforcement learning]] ==
A reinforcement learning machine learns to perform actions which will maximize rewards (or minimize punishments).
  
== [[ROC curves_Old Kiwi| ROC Curves]] ==
The [[ROC curves_Old Kiwi|Receiver Operating Characteristic]] (ROC) Curve is a commonly-used technique for showing the trade-off between false-positives and false-negatives as a threshold is varied.
  
== [[Slack Variable_Old Kiwi|Slack Variable]] ==
In linear programming, a slack variable is an additional variable that is introduced into the optimization problem to turn an inequality constraint into an equality constraint. This is often done to put a linear optimization problem into a form that can be solved efficiently by a fast algorithm: the simplex algorithm. A slack variable is always nonnegative, since this is a requirement for variables in the simplex method.
  
== [[Supervised learning_Old Kiwi|Supervised learning]] ==
Imagine an organism or machine which experiences a series of inputs from different sensors: x1, x2, x3, x4, . . . The machine is also given desired outputs y1, y2, . . ., and its goal is to learn to produce the correct output given a new input x. (See also: [[Unsupervised learning_Old Kiwi]])
  
== [[Support Vector Machines_Old Kiwi|Support Vector Machines]] or SVM ==
Algorithms which define optimal hyperplanes as the hyperplanes whose distance from the nearest training patterns (the support vectors) is maximum. http://en.wikipedia.org/wiki/Support_vector_machine
  
== [[Taxicab Distance_Old Kiwi|Taxicab Distance]] ==
see [[Manhattan Distance_Old Kiwi]]
  
== [[Testing_Old Kiwi|Testing]] ==
When we present a trained recognition system with an unknown pattern and ask it to classify it, we call that testing.
  
== [[Training_Old Kiwi|Training]] ==
When we present a pattern recognition system with a set of classified patterns so that it can learn the characteristics of the set, we call it training.
  
== [[Unbiased Estimator_Old Kiwi|Unbiased Estimator]] ==
An estimator is said to be unbiased if and only if the expected value (average) of its estimate is the true value of the thing estimated.
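A fully deterministic illustration (the population and sample size are invented for the example): averaging the divide-by-(n-1) variance estimator over all equally likely samples of size 2, drawn with replacement, recovers the true population variance exactly:

```python
from itertools import product

population = [0, 3, 6]
mu = sum(population) / len(population)                              # 3.0
sigma2 = sum((x - mu) ** 2 for x in population) / len(population)   # 6.0

def var_unbiased(sample):
    """Sample variance with the n-1 (Bessel) correction."""
    m = sum(sample) / len(sample)
    return sum((x - m) ** 2 for x in sample) / (len(sample) - 1)

# Average the estimate over ALL equally likely samples of size 2:
samples = list(product(population, repeat=2))
expected = sum(var_unbiased(s) for s in samples) / len(samples)
print(sigma2, expected)  # 6.0 6.0
```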
  
== [[Uninformative Prior_Old Kiwi|Uninformative Prior]] ==
An uninformative prior gives only very little, vague, or general information about the variable. It is important to realize that the term uninformative does not mean no information. (See also: Informative Prior and Prior)
  
[[Category:ECE 662]]
== [[Unsupervised learning_Old Kiwi|Unsupervised learning]] ==
The goal of the machine is to build a model of x that can be used for reasoning, decision making, predicting things, communicating, etc. (See also: [[Supervised learning_Old Kiwi]])
----
[[ECE662:BoutinSpring08_Old_Kiwi|Back to ECE662 Spring 2008 Prof. Boutin]]

Latest revision as of 17:13, 22 October 2010


== Bayesian Parameter Estimation ==

Bayesian Parameter Estimation is a technique for parameter estimation which uses probability densities as estimates of the parameters instead of exact deterministic ones, as [[MLE_Old Kiwi]] does.

== Central Limit Theorem ==

The CLT explains why many distributions tend to be close to the normal distribution. The key point is that the "random variable" being observed should be the sum or mean of many independent random variables (the variables need not be i.i.d.).
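The effect can be seen exactly, without any sampling: the sketch below (an invented illustration) computes the exact distribution of a sum of independent dice by repeated convolution, and even for four dice the result is already bell-shaped and symmetric:

```python
def dice_sum_distribution(n):
    """Exact probability distribution of the sum of n fair six-sided dice,
    computed by repeated convolution."""
    dist = {0: 1.0}
    for _ in range(n):
        new = {}
        for s, p in dist.items():
            for face in range(1, 7):
                new[s + face] = new.get(s + face, 0.0) + p / 6.0
        dist = new
    return dist

d = dice_sum_distribution(4)
mode = max(d, key=d.get)
print(mode)  # 14: the peak sits at the mean, and the shape is bell-like
```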

== Classes ==

The categories into which we are trying to classify the patterns.

== Classification ==

Assigning a class to a measurement, or equivalently, identifying the probabilistic source of a measurement. The only statistical model that is needed is the conditional model of the class variable given the measurement. This conditional model can be obtained from a joint model, or it can be learned directly. The former approach is generative, since it models the measurements in each class. It is more work, but it can exploit more prior knowledge, needs less data, is more modular, and can handle missing or corrupted data. Methods include mixture models and Hidden Markov Models. The latter approach is discriminative, since it focuses only on discriminating one class from another. It can be more efficient once trained and requires fewer modeling assumptions. Methods include logistic regression, generalized linear classifiers, and nearest-neighbor. See "Discriminative and Learning".

== Clustering ==

Grouping similar objects in a multidimensional space. It is useful for constructing new features which are abstractions of the existing features. Some algorithms, like k-means, simply partition the feature space. Other algorithms, like single-link agglomeration, create nested partitionings which form a taxonomy. Another possibility is to learn a graph structure between the clusters, as in the Growing Neural Gas. The quality of the clustering depends crucially on the distance metric in the space. Most techniques are very sensitive to irrelevant features, so they should be combined with feature selection.

== Consistent Estimator ==

An unbiased estimator that converges more closely to the true value as the sample size increases is called a consistent estimator.

== Curse of Dimensionality ==

Refers to the problem caused by the exponential growth of hypervolume as a function of dimensionality. The term was coined by Richard Bellman in 1961.
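A small numeric illustration (the shell width 0.05 is an arbitrary choice for the example): the fraction of a unit hypercube's volume that lies near its boundary approaches 1 exponentially fast in the dimension d, so high-dimensional data is almost entirely "at the edges":

```python
# Fraction of a unit hypercube's volume lying within 0.05 of its boundary:
# the interior cube has side 0.9, so the boundary-shell fraction is 1 - 0.9**d.
for d in (1, 2, 10, 100):
    print(d, 1 - 0.9 ** d)
```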

== Decision Tree ==

A decision tree is a classifier that maps from observations about an item to a conclusion about its target value. It can also be seen as a deterministic ancestor of the hierarchical mixture of experts. Decision tree use can be traced back to problems that require probabilistic inference, a core technology in AI, largely due to developments in graph-theoretic methods for the representation and manipulation of complex probability distributions. Whether in their guise as directed graphs (Bayesian networks) or as undirected graphs (Markov random fields), probabilistic decision trees/models have a number of virtues as representations of uncertainty and as inference engines. The foundation formed by decision trees forms the basis for graphical models (see [[Graphical Models_Old Kiwi]]).

::: '''References'''
::::1. Decision Tree Tutorial
::::2. DM Tutorial-Decision Trees

== Discriminant Analysis ==

Constructing new features via linear combination so that classification is easier. The notion of "easier" is quantified by Fisher's criterion, which applies exactly to Gaussian classes with equal variance and approximately to other models. Variants like flexible discriminant analysis consider nonlinear combinations as well. See Duda & Hart, "Eigenfaces vs. Fisherfaces", and "Flexible Discriminant and Mixture Models".

== Discriminant Function ==

One way to represent classifiers. Given classes w1, w2, ..., wk and a feature vector x in n-dimensional space, the discriminant functions g1(x), g2(x), ..., gk(x), where each gi(x) maps n-dimensional space to the real numbers, are used to make decisions. The decision is wi if gi(x) >= gj(x) for all j.

== Distance ==

See [[Metric_Old Kiwi]]

== Drawbacks of Euclidean Distance ==

The drawback of the Euclidean distance approach is that it assumes that the sample points are distributed about the sample mean in a spherical manner. Were the distribution decidedly non-spherical, for instance ellipsoidal, then we would expect the probability of a test point belonging to the set to depend not only on the distance from the sample mean but also on the direction.

== Dual Problem ==

Every linear programming problem, referred to as a primal problem, can be converted into a dual problem, which provides an upper bound on the optimal value of the primal problem. In matrix form, we express the primal problem as:

Maximize <math>c^Tx</math>

Subject to <math>Ax \leq b, x\geq 0</math>

The corresponding dual problem is:

Minimize <math>b^Ty</math>

Subject to <math>A^Ty\geq c, y\geq 0</math>

where y is the variable vector of the dual.

There are two ideas fundamental to duality theory. One is the fact that the dual of a dual linear program is the original primal linear program. Additionally, every feasible solution for a linear program gives a bound on the optimal value of the objective function of its dual.

Article on Dual Problems

== Euclidean Distance (ED) ==

The distance between two points in a Cartesian coordinate system that is "measured by a ruler". The Euclidean distance between the points <math>X=[x_1,x_2,...,x_n]</math> and <math>Y=[y_1,y_2,...,y_n]</math> is defined as <math>dist(X,Y)=\sqrt{\sum_{i=1}^n{(x_i-y_i)^2}}</math>.

== Expectation Maximization ==

Expectation Maximization is a popular clustering method. In particular, if the data to be clustered consist of Gaussian clusters, it starts with randomly initialized means and covariances for the clusters, and then, using a maximum likelihood approach, it tries to find the actual means and covariances of the clusters.

== Evidence ==

A scale factor that ensures that the posterior probabilities sum to one.

== Feature Vector ==

A vector containing a numerical representation of features.

== Features ==

The measurements which represent the data. The statistical model one uses is crucially dependent on the choice of features. Hence it is useful to consider alternative representations of the same measurements (i.e. different features), for example, different representations of the color values in an image. General techniques for finding new representations include discriminant analysis, principal component analysis, and clustering.

== Fisher Linear Discriminant ==

Fisher's linear discriminant is a classification method that projects high-dimensional data onto a line and performs classification in this one-dimensional space. The projection maximizes the distance between the means of the two classes while minimizing the variance within each class. See Lecture 10 for a detailed explanation.

The Lecture and Example of Fisher Linear Discriminant

== Generalized Rayleigh Quotient ==

<math>J(\vec{w}) = \frac{\vec{w}^{\top}S_B \vec{w}}{\vec{w}^{\top}S_w \vec{w}}</math>
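The quotient is easy to evaluate directly. In the sketch below the 2x2 scatter matrices are invented diagonal examples, so J(w) for an axis-aligned w is simply the ratio of the corresponding diagonal entries:

```python
def rayleigh_quotient(w, sb, sw):
    """J(w) = (w^T S_B w) / (w^T S_w w) for 2x2 scatter matrices."""
    def quad(m, v):   # v^T m v
        return (v[0] * (m[0][0] * v[0] + m[0][1] * v[1])
                + v[1] * (m[1][0] * v[0] + m[1][1] * v[1]))
    return quad(sb, w) / quad(sw, w)

SB = ((4.0, 0.0), (0.0, 1.0))   # between-class scatter (invented)
SW = ((1.0, 0.0), (0.0, 1.0))   # within-class scatter (invented)
print(rayleigh_quotient((1.0, 0.0), SB, SW))  # 4.0: best of the two axes
print(rayleigh_quotient((0.0, 1.0), SB, SW))  # 1.0
```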

== Generic Property ==

In mathematics, properties that hold for typical examples are called generic properties. For instance, a generic property of a class of functions is one that is true of most of those functions, as in the statements "a generic polynomial does not have a root at zero" or "a generic matrix is invertible." As another example, a generic property of a space is a property that holds at most points of the space, as in the statement "if f: M -> N is a smooth function between smooth manifolds, then a generic point of N is not a critical value of f" (this is Sard's theorem).

== Graphical Models ==

Graphical models allow a separation between the qualitative, structural aspects of uncertain knowledge and the quantitative, parametric aspects of uncertainty: the former are represented via patterns of edges in the graph, and the latter as numerical values associated with subsets of nodes in the graph. This separation is often found to be natural by domain experts, taming some of the problems associated with structuring, interpreting, and troubleshooting the model. Even more importantly, the graph-theoretic framework has allowed for the development of general inference algorithms, which in many cases provide orders of magnitude speedups over brute-force methods. (See [[Decision trees_Old Kiwi]].)

== Hilbert Space ==

A Hilbert space, named after the German mathematician David Hilbert, generalizes the notion of Euclidean space in a way that extends the methods of vector algebra from the two-dimensional plane and three-dimensional space to infinite-dimensional spaces. A Hilbert space is an inner product space which is complete under the induced metric. A Hilbert space is a special class of Banach space under the norm induced by the inner product, since the norm and the inner product both induce the same metric. Any finite-dimensional inner product space is a Hilbert space, but it is worth mentioning that some authors require the space to be infinite-dimensional for it to be called a Hilbert space.

::: '''References'''
::::1. Hilbert space definition and proposition
::::2. Wikipedia-Hilbert Space
::::3. Mathworld-Hilbert Space

== Histogram Density Estimation ==

Histogram density estimation is one of the most primitive and easiest non-parametric density estimation methods. The given feature space is divided into equally-spaced bins or cells. The number of training feature samples that fall into each cell is counted: if <math>n_i</math> is the number of samples that fall into bin i, n is the total number of samples, and V is the volume of each cell (the volumes are equal because the cells are equally spaced), the density estimate is <math>p(x) = \frac{n_i}{nV}</math> for x in bin i.
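A minimal one-dimensional sketch using p(x) = n_i/(nV), where the volume V of a bin is just its width (the data and bin width are invented for illustration):

```python
def histogram_density(samples, bin_width):
    """Histogram density estimate: p(x) = n_i / (n * V) on each bin."""
    n = len(samples)
    counts = {}
    for x in samples:
        b = int(x // bin_width)          # index of the bin containing x
        counts[b] = counts.get(b, 0) + 1
    return {b: c / (n * bin_width) for b, c in counts.items()}

data = [0.1, 0.4, 0.6, 1.2, 1.3, 1.4, 1.8, 2.5]
p = histogram_density(data, 1.0)
print(p)  # {0: 0.375, 1: 0.5, 2: 0.125}
```

Note that the estimate integrates to one: the bin probabilities p(x)*V sum to 1.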

== Impurity ==

The impurity of a class is defined as the amount of data misclassified into that class. It is zero when all training data belong to one class. See [[Lecture 21 - Decision Trees(Continued)_Old Kiwi]] for the definitions of different types of impurity.

== Informative Prior ==

In the Bayesian framework, the prior distribution of a variable is called an informative prior if it provides specific information about the variable. Informative priors allow one to incorporate prior knowledge about the random variable under consideration. (See also: Uninformative Prior and Prior)

== K-means ==

A parametric algorithm for clustering data into exactly k clusters. First, define some initial cluster parameters. Second, assign data points to clusters. Third, recompute better cluster parameters, given the data assignment. Iterate back to step two. It is related to the Expectation-Maximization algorithm for fitting a mixture of Gaussians.
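The two alternating steps above can be sketched for scalar data with fixed initial centers (the data and initial centers are invented, and empty clusters are handled crudely by keeping the old center):

```python
def kmeans(points, centers, iters=20):
    """Plain k-means on scalar data with given initial centers."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:                       # assignment step
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]   # update step
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.5, 0.5, 10.0, 10.5, 9.5]
print(kmeans(data, [0.0, 5.0]))  # [1.0, 10.0]
```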

Kernel Functions

These functions operate in the feature space without ever computing the coordinates of the data in that space; instead they compute the inner products between the mappings of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. Kernel functions have been introduced for sequence data, text, images, as well as vectors. (Ref: Wikipedia)

KNN-K Nearest Neighbor

These classifiers do not fit a model to the data; they are purely memory-based. KNN uses the labels of a query's k nearest neighbors in the training set to predict the value of the new query. Its advantages: it is nonparametric, simple and powerful, and requires no training time. Its disadvantages: it is memory intensive, and classification and estimation are slow. Please refer to the KNN tutorial websites below.

1. KNN Tutorial: How the K-Nearest Neighbor (KNN) Algorithm works; Numerical Example (hand computation); KNN for Smoothing and Prediction; How do we use the spreadsheet for KNN?; Strengths and Weaknesses of the K-Nearest Neighbor Algorithm; Resources for the K Nearest Neighbors Algorithm

2. KNN

3. Class Prediction using KNN

4. WIKIPEDIA
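A minimal KNN classifier can be sketched as follows; the function name and toy data are illustrative only.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)       # Euclidean distances to all samples
    nearest = np.argsort(d)[:k]                   # indices of the k closest samples
    votes = Counter(y_train[i] for i in nearest)  # vote by class label
    return votes.most_common(1)[0][0]

# Two small clusters with labels 'a' and 'b'.
X_train = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
y_train = ['a', 'a', 'a', 'b', 'b', 'b']
print(knn_predict(X_train, y_train, np.array([0.5, 0.5])))  # 'a'
```

Note that there is no training step at all: the entire training set is kept in memory and scanned at query time, which is exactly the memory/speed trade-off described above.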

Kruskal's Algorithm

Kruskal's algorithm is an algorithm of graph theory that constructs the minimum spanning tree of a graph by adding edges to the spanning tree one-by-one. At all points during its execution the set of edges selected by Kruskal's algorithm forms a forest of trees. The steps in this algorithm can be seen here http://students.ceid.upatras.gr/~papagel/project/pseukrus.htm

Kruskal's algorithm is conceptually quite simple. The edges are selected and added to the spanning tree in increasing order of their weights. An edge is added to the tree only if it does not create a cycle.
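The procedure can be sketched in Python using a union-find structure to detect cycles; the graph used here is a hypothetical example.

```python
def kruskal(n, edges):
    """Minimum spanning tree via Kruskal's algorithm.

    n: number of vertices (0..n-1); edges: list of (weight, u, v).
    Edges are added in increasing weight order, skipping any edge
    that would create a cycle (detected with union-find).
    """
    parent = list(range(n))

    def find(x):  # find the root of x's tree, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:              # u and v are in different trees: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst, total = kruskal(4, edges)
print(total)  # 6
```

At every point during execution the selected edges form a forest, exactly as described above; the forest becomes a single spanning tree when n-1 edges have been accepted.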

Lagrange Multipliers

In mathematical optimization problems, the method of Lagrange multipliers, named after Joseph Louis Lagrange, is a method for finding the extrema of a function of several variables subject to one or more constraints; it is the basic tool in nonlinear constrained optimization. Simply put, the technique is able to determine where on a particular set of points (such as a circle, sphere, or plane) a particular function is the smallest (or largest).

Likelihood Principle

The likelihood principle is a principle of statistical inference which asserts that all of the information in a sample is contained in the likelihood function. A likelihood function arises from a conditional probability distribution considered as a function of its second argument, holding the first fixed. E.g.: consider a model which gives the probability density function of an observable random variable X as a function of a parameter $ \theta $. Then for a specific value x of X, the function $ L(\theta|x) = P(X=x|\theta) $ is a likelihood function of $ \theta $. It gives a measure of how likely any particular value of $ \theta $ is, if we know that X has the value x. Two likelihood functions are equivalent if one is a scalar multiple of the other.

Linear Discriminant Functions (LDF)

Functions that are linear combinations of the components of x:

$ g(x) = w^t x + w_0 $, where $ w $ is the weight vector and $ w_0 $ is the bias or threshold. In the two-category case, $ g(x) = 0 $ defines the decision surface: decide $ w_1 $ when $ g(x) > 0 $ and $ w_2 $ when $ g(x) < 0 $.
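The two-category decision rule can be sketched directly; the weight vector and bias below are hypothetical values, not fitted from data.

```python
import numpy as np

def ldf_decide(x, w, w0):
    """Two-category linear discriminant: decide class 1 if g(x) > 0, else class 2."""
    g = np.dot(w, x) + w0  # g(x) = w^t x + w_0
    return 1 if g > 0 else 2

w = np.array([1.0, -1.0])  # hypothetical weight vector
w0 = -0.5                  # hypothetical bias/threshold
print(ldf_decide(np.array([2.0, 0.0]), w, w0))  # g = 1.5 > 0, so class 1
print(ldf_decide(np.array([0.0, 2.0]), w, w0))  # g = -2.5 < 0, so class 2
```

Geometrically, w is normal to the decision hyperplane and w0 shifts the hyperplane away from the origin.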

LUT - Look-Up Table

In classification applications, LUT's can be used for comparing decisions, as long as memory is available. This method requires a discrete feature space that is reasonably small. (This method is typically not mentioned in textbooks.)

Mahalanobis Distance

Differs from the Euclidean distance in that it takes into account the correlations in the data (by using the covariance matrix $ \Sigma $): $ d(x,y) = \sqrt{(x-y)^T \Sigma^{-1} (x-y)} $. The Mahalanobis distance is widely used in cluster analysis and other classification techniques. (Ref - Wikipedia)
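A short sketch shows how the covariance matrix rescales distances; the covariance values chosen here are illustrative.

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Mahalanobis distance: sqrt((x-y)^T inv(cov) (x-y))."""
    d = x - y
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

x, y = np.array([1.0, 0.0]), np.array([0.0, 0.0])
# With an identity covariance, Mahalanobis reduces to Euclidean distance.
print(mahalanobis(x, y, np.eye(2)))            # 1.0
# Large variance along the first axis shrinks distances in that direction.
print(mahalanobis(x, y, np.diag([4.0, 1.0])))  # 0.5
```

Directions in which the data varies a lot thus count for less, which is what makes the metric scale- and correlation-aware.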

Manhattan Distance

Also known as the taxicab metric. The Manhattan distance between two points X and Y in a Cartesian coordinate system is defined as $ dist(X,Y)=\sum_{i=1}^n{|x_i-y_i|} $. This equals the length of any shortest axis-aligned path connecting X and Y.

MAP or Maximum A Posteriori

A parameter estimation heuristic that seeks parameter values that maximize the posterior density of the parameter. This implies that you have a prior distribution over the parameter, i.e. that you are Bayesian. MAP estimation has the highest chance of getting the parameter exactly right. But for predicting future data, it can be worse than Maximum Likelihood; predictive estimation is a better Bayesian method for that purpose. MAP is also not invariant to reparameterization.

Mercer's Condition

Valid kernel functions are sometimes more precisely referred to as Mercer kernels, because they must satisfy Mercer's condition: for any given $ g(\vec{x}) $ such that $ \int{g(\vec{x})^2d \vec{x}} $ is finite,

$ \int{K(\vec{x},\vec{z})g(\vec{x})g(\vec{z})d\vec{x}d\vec{z}} \geq 0 $

We say that $ K(\vec{x},\vec{z}) $ satisfies Mercer's condition if the above inequality holds for every such $ g(\vec{x}) $. A kernel function $ K(\vec{x},\vec{z}) $ must be continuous, symmetric, and have a positive semi-definite Gram matrix. Such a $ K(\vec{x},\vec{z}) $ means that there exists a mapping to a reproducing kernel Hilbert space (a Hilbert space is a complete inner product space) such that the dot product there gives the same value as the function $ K(\vec{x},\vec{z}) $.

Reference:

1. Wikipedia - Mercer's Condition

Metric

Also known as a distance function: a function that defines the distance between two elements of a set. Examples of metrics are the Manhattan, Euclidean, and Mahalanobis distances.

Minimum Spanning Trees

Given a connected, undirected graph, a spanning tree is a graph that is a tree and connects all the vertices together. There may be many different spanning trees for a graph. One of them can be selected based on the weights of the edges. A minimum spanning tree is a spanning tree with weight less than or equal to the weight of every other spanning tree. A maximum spanning tree, in a similar way, is the spanning tree with the sum of the weights higher than or equal to the other possible trees. A minimum spanning tree can be determined using Prim's or Kruskal's algorithms. Also see Prim's Algorithm_Old Kiwi and Kruskal's Algorithm_Old Kiwi

Minkowski Metric

The k-Minkowski metric between two points $ P_1 = (x_1,x_2,...,x_n) $ and $ P_2 = (y_1,y_2,...,y_n) $ is defined as $ d_k = (\sum_{i=1}^n |x_i-y_i|^k)^{\frac{1}{k}} $. The Manhattan metric (k=1) and the Euclidean metric (k=2) are special cases of the Minkowski metric. Here $ P_1 $ and $ P_2 $ are the points corresponding to feature vectors.
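The definition translates directly into code, and the k=1 and k=2 special cases are easy to verify; the function name is illustrative.

```python
import numpy as np

def minkowski(p1, p2, k):
    """k-Minkowski metric: (sum |x_i - y_i|^k)^(1/k)."""
    diff = np.abs(np.asarray(p1) - np.asarray(p2))
    return float(np.sum(diff ** k) ** (1.0 / k))

p1, p2 = (0.0, 0.0), (3.0, 4.0)
print(minkowski(p1, p2, 1))  # 7.0  (Manhattan: 3 + 4)
print(minkowski(p1, p2, 2))  # 5.0  (Euclidean: sqrt(9 + 16))
```

As k grows, the metric approaches the Chebyshev distance, max |x_i - y_i|.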

Mixture Models

A mixture model represents a complicated joint distribution over the observed variables as a weighted combination of simpler component distributions, with a latent variable indicating which component generated each observation. Mixture models can be used to cluster datasets.

MLE

See Maximum Likelihood Estimation


Nonparametric regression/density estimation

An approach to regression/density estimation that doesn't require much prior knowledge but only a large amount of data. This includes histograms, kernel smoothing, and nearest-neighbor.

Non-parametric Model

Non-parametric models differ from parametric models in that the model structure is not specified a priori but is instead determined from data. The term nonparametric is not meant to imply that such models completely lack parameters but that the number and nature of the parameters are flexible and not fixed in advance.

Overfitting_Old Kiwi

In statistics, overfitting means that some of the relationships that appear statistically significant are actually just noise. An overfitted model has many more degrees of freedom than the data can support. Consequently, the model does not replicate well and does a lousy job of predicting future responses.

In machine learning, overfitting occurs when training runs too long or the training set is too small. As a result, the learner may adjust to very specific random features that have no causal relation to the target function. This can lead to a situation in which the training error decreases while the test error increases. The process of applying a technique to avoid overfitting is referred to as Regularization_Old Kiwi.

Parameter Estimation

Density estimation when the density is assumed to be in a specific parametric family. Special cases include maximum likelihood, maximum a posteriori, unbiased estimation, and predictive estimation.


Parametric Classifiers

We find parametric decision boundaries to approximate true decision boundaries between classes. (Discussed in Lecture 9 - Linear Discriminant Functions_Old Kiwi)

Parametric Model

A parametric model is a set of related mathematical equations in which alternative scenarios are defined by changing the assumed values of a set of fixed coefficients (parameters). In statistics, a parametric model is a parametrized family of probability distributions, one of which is presumed to describe the way a population is distributed.

Parzen Window

Parzen windows are very similar to k-nearest neighbors (KNN); both methods can generate very complex decision boundaries. The main difference is that instead of looking at the k closest points to a piece of training data, all points within a fixed distance are considered. In practice, one consequence is that datasets with large gaps are treated very differently: KNN will pick far-away points, but a relatively small Parzen window may enclose zero points, in which case only the priors can be used to classify. The Parzen-window density estimate using n training samples and the window function $ \pi $ is defined by

$ p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{V_n} \pi\left(\frac{x-x_i}{h_n}\right) $

The estimate $ p_n(x) $ is an average of (window) functions. Usually the window function has its maximum at the origin and its values become smaller when we move further away from the origin. Then each training sample is contributing to the estimate in accordance with its distance from x.


References
1. Parzen window
2. Parzen-window density estimation
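The estimate above can be sketched in one dimension with a Gaussian window; the function name and window width are illustrative choices.

```python
import numpy as np

def parzen_density(x, samples, h):
    """1-D Parzen-window estimate with a Gaussian window of width h."""
    u = (x - samples) / h
    window = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)  # Gaussian window function
    # Average of window functions; 1/h plays the role of 1/V_n in one dimension.
    return float(np.mean(window) / h)

samples = np.random.default_rng(0).normal(0.0, 1.0, 2000)
# At the mode of N(0,1) the true density is 1/sqrt(2*pi), about 0.399;
# the estimate should land near that value.
print(parzen_density(0.0, samples, h=0.3))
```

Each training sample contributes to the estimate according to its distance from x, as described above; the width h controls how quickly that contribution falls off.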


Partial Differential Equations (PDE)

PDEs are equations that involve functions of several independent variables and their partial derivatives with respect to those variables. In this course, PDEs are used to find cluster boundaries.

Links:
Lecture 27
Wikipedia
Mathworld

Patterns

The items that we are trying to classify; each pattern is represented as a vector of feature values.

Penalty Methods

In optimization, penalty methods are used to reformulate a constrained optimization problem as a sequence of unconstrained optimization problems. It can be shown that the solutions of these derived unconstrained problems converge to the solution of the original constrained problem. For example, a minimization problem with inequality constraints can be recast as an unconstrained problem by adding a penalty term whose value is a function of the original constraints. Except for minor variations, penalty methods are very similar to the so-called barrier methods.

Perceptron

Maps an input to a single binary output value. (First introduced in Lecture 9)
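A minimal sketch of the classic perceptron learning rule on a hypothetical, linearly separable toy problem:

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Train a perceptron: w is nudged toward each misclassified sample.

    y holds labels in {-1, +1}; the binary output is sign(w.x + b).
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:  # misclassified (or on the boundary)
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Linearly separable toy data: class +1 lies above the line x1 + x2 = 1.5.
X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [2., 2.]])
y = np.array([-1, -1, -1, 1, 1])
w, b = perceptron_train(X, y)
print(all(np.sign(X @ w + b) == y))  # True
```

For linearly separable data the perceptron convergence theorem guarantees this loop stops making updates after finitely many mistakes.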

Posterior

The probability that a case belongs to a group after the observed data has been taken into account; it is obtained by combining the prior with the likelihood of the data via Bayes' rule.

Prim's Algorithm

Prim's algorithm is a graph algorithm to determine a minimum spanning tree for a connected weighted graph. This means it finds a subset of the edges that forms a tree including every vertex, in which the sum of weights of all the edges in the tree is minimized. The steps in this algorithm can be seen here http://students.ceid.upatras.gr/~papagel/project/pseuprim.htm
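A heap-based sketch of the algorithm on a small hypothetical graph:

```python
import heapq

def prim(n, adj):
    """Minimum spanning tree weight via Prim's algorithm.

    adj[u] is a list of (weight, v) pairs; vertices are 0..n-1.
    Grow the tree from vertex 0, always adding the cheapest edge
    that connects a new vertex to the tree.
    """
    visited = [False] * n
    heap = [(0, 0)]  # (edge weight, vertex); vertex 0 joins for free
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if visited[u]:
            continue          # u was already reached by a cheaper edge
        visited[u] = True
        total += w
        for wv, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (wv, v))
    return total

# Edges: 0-1 (1), 0-2 (4), 1-2 (3), 1-3 (2), 2-3 (5).
adj = {0: [(1, 1), (4, 2)], 1: [(1, 0), (3, 2), (2, 3)],
       2: [(4, 0), (3, 1), (5, 3)], 3: [(2, 1), (5, 2)]}
print(prim(4, adj))  # 6
```

Unlike Kruskal's algorithm, which maintains a forest, Prim's algorithm keeps a single tree and grows it one vertex at a time.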

Principal Component Analysis

Constructing new features which are the principal components of a data set. The principal components are random variables of maximal variance constructed from linear combinations of the input features. Equivalently, they are the projections onto the principal component axes, which are lines that minimize the average squared distance to each point in the data set. To ensure uniqueness, all of the principal component axes must be orthogonal. PCA is a maximum-likelihood technique for linear regression in the presence of Gaussian noise on both x and y. In some cases, PCA corresponds to a Fourier transform, such as the DCT used in JPEG image compression.
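The variance-maximizing projection described above can be sketched via an eigendecomposition of the sample covariance matrix; the synthetic data below is illustrative.

```python
import numpy as np

def pca(X, k):
    """Project data onto its top-k principal component axes.

    The axes are the eigenvectors of the sample covariance matrix
    with the largest eigenvalues (directions of maximal variance).
    """
    Xc = X - X.mean(axis=0)           # center the data
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)  # eigh: covariance is symmetric
    order = np.argsort(vals)[::-1]    # sort axes by decreasing variance
    components = vecs[:, order[:k]]
    return Xc @ components            # projections: the new features

# Data varying mostly along the direction (1, 1): the first principal
# component should capture nearly all of the variance.
rng = np.random.default_rng(0)
t = rng.normal(0, 1, 200)
X = np.column_stack([t, t]) + rng.normal(0, 0.1, (200, 2))
Z = pca(X, 1)
print(Z.shape)  # (200, 1)
```

Orthogonality of the axes comes for free here, since eigenvectors of a symmetric matrix are orthogonal.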

Prior

The probability assigned to each group before any data is observed, reflecting previously known information; hence the name prior. (See Also: Informative Prior and Uninformative Prior)

Pruning

Pruning (also called inverse splitting) is a term in mathematics and informatics for cutting parts off a decision tree. Pruning is done by taking two leaves that have a common parent and merging them if doing so helps the classification accuracy of the decision tree. Pruned parts of the tree are no longer considered because the algorithm knows, based on already collected data (e.g. through sampling), that these subtrees do not contain the searched-for object. The decision tree is simplified by removing some decisions. Pruning improves generalization.

Quadratic Programming Problem

A linearly constrained optimization problem with a quadratic objective function is called a quadratic program (QP). Quadratic programming is often considered as a discipline in and of itself owing to its vast number of applications.

Regularization

Regularization_Old Kiwi refers to a set of techniques used to avoid overfitting.

Reinforcement learning

A reinforcement learning machine learns to perform actions which will maximize rewards (or minimize punishments).

ROC Curves

The Receiver Operating Characteristic (ROC) curve is a commonly used technique for showing the trade-off between false positives and false negatives as a decision threshold is varied.
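The points of a ROC curve can be computed by sweeping the threshold and recording the true-positive and false-positive rates; the scores and labels below are illustrative.

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    """(false-positive rate, true-positive rate) pairs as the threshold varies.

    labels are 1 (positive) and 0 (negative); a sample is called
    positive when its score exceeds the threshold.
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    points = []
    for t in thresholds:
        pred = scores > t
        tpr = np.mean(pred[labels == 1])  # fraction of positives detected
        fpr = np.mean(pred[labels == 0])  # fraction of negatives flagged
        points.append((float(fpr), float(tpr)))
    return points

scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]
labels = [0, 0, 1, 1, 1, 0]
print(roc_points(scores, labels, [0.0, 0.5, 1.0]))
```

Lowering the threshold catches more positives (fewer false negatives) at the cost of more false positives, which is exactly the trade-off the curve displays.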

Slack Variable

In linear programming, a slack variable is an additional variable introduced into the optimization problem to turn an inequality constraint into an equality constraint. This is often done to put linear optimization problems into a form that can be solved efficiently by a fast algorithm: the simplex algorithm. A slack variable is required to be nonnegative, since this is a requirement for variables in the simplex method.

Supervised learning

Imagine an organism or machine which experiences a series of inputs from different sensors: x1, x2, x3, x4, . . . The machine is also given desired outputs y1, y2, . . ., and its goal is to learn to produce the correct output given a new input x. (See also: Unsupervised learning_Old Kiwi)

Support Vector Machines or SVM

Algorithms which define the optimal hyperplane as the hyperplane whose distance from the nearest training patterns (the support vectors) is maximum. http://en.wikipedia.org/wiki/Support_vector_machine

Taxicab Distance

see Manhattan Distance_Old Kiwi

Testing

When we present a trained recognition system with an unknown pattern and ask it to classify it, we call that testing.

Training

When we present a pattern recognition system with a set of classified patterns so that it can learn the characteristics of the set, we call that training.

Unbiased Estimator

An estimator is said to be unbiased if and only if the expected value (average) of its estimate is the true value of the thing estimated.

Uninformative Prior

An uninformative prior conveys only vague or general information about the variable. It is important to realize that the term uninformative does not mean no information. (See also: Informative Prior and Prior)

Unsupervised learning

The goal of the machine is to build a model of x that can be used for reasoning, decision making, predicting things, communicating etc. (See also: Supervised Learning)


Back to ECE662 Spring 2008 Prof. Boutin
