**Supervised learning** is a machine learning technique for deducing a function from training data. The training data consist of pairs of input objects (typically vectors) and desired outputs. The output of the learned function can be a continuous value (in which case the task is called regression) or a class label of the input object (in which case it is called classification). The task of the supervised learner is to predict the value of the function for any valid input object after having seen a number of training examples (i.e. pairs of inputs and target outputs). To achieve this, the learner has to generalize from the presented data to unseen situations in a "reasonable" way (see inductive bias).
(Compare with unsupervised learning, where no desired outputs are given.) The parallel task in human and animal psychology is often referred to as concept learning.
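As a minimal sketch of this fit-then-predict workflow (assuming scikit-learn is available; the data is invented for illustration):

```python
# Supervised learning in miniature: fit a classifier on labeled
# (input, output) pairs, then predict the label of an unseen input.
from sklearn.tree import DecisionTreeClassifier

X_train = [[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]]  # input vectors
y_train = ["a", "a", "b", "b"]                              # desired outputs

clf = DecisionTreeClassifier().fit(X_train, y_train)  # generalize from examples
print(clf.predict([[1.2, 1.9]]))                      # -> ['a'] on this toy data
```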

## Overview

Supervised learning can generate models of two types. Most commonly, supervised learning generates a global model that maps input objects to desired outputs. In some cases, however, the map is implemented as a set of local models (such as in case-based reasoning or the nearest neighbor algorithm).
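The distinction can be illustrated with two standard scikit-learn regressors (a sketch with invented data): linear regression fits a single global map, while k-nearest neighbors keeps the training points and forms a local model around each query.

```python
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

X = [[0.0], [1.0], [2.0], [3.0]]  # inputs
y = [0.1, 0.9, 2.1, 2.9]          # noisy targets near y = x

global_model = LinearRegression().fit(X, y)                 # one global linear map
local_model = KNeighborsRegressor(n_neighbors=2).fit(X, y)  # local averaging

print(global_model.predict([[1.5]]))  # interpolates along the fitted line
print(local_model.predict([[1.5]]))   # averages the two nearest targets
```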

To solve a given problem of supervised learning (e.g. learning to recognize handwriting), one has to work through several steps:

- Determine the type of training examples. Before doing anything else, the engineer should decide what kind of data is to be used as an example. For instance, this might be a single handwritten character, an entire handwritten word, or an entire line of handwriting.
- Gather a training set. The training set needs to be characteristic of the real-world use of the function. Thus, a set of input objects is gathered, and corresponding outputs are also gathered, either from human experts or from measurements.
- Determine the input feature representation of the learned function. The accuracy of the learned function depends strongly on how the input object is represented. Typically, the input object is transformed into a feature vector, which contains a number of features that are descriptive of the object. The number of features should not be too large, because of the curse of dimensionality, but it should be large enough to predict the output accurately.
- Determine the structure of the learned function and corresponding learning algorithm. For example, the engineer may choose to use artificial neural networks or decision trees.
- Complete the design. The engineer then runs the learning algorithm on the gathered training set. Parameters of the learning algorithm may be adjusted by optimizing performance on a subset of the training set (called a *validation* set) or via cross-validation. After parameter adjustment and learning, the performance of the algorithm may be measured on a test set that is separate from the training set, as sketched after this list.
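A sketch of the last two steps (assuming scikit-learn; the dataset and the choice of k-nearest neighbors are illustrative): a test set is held out, a validation set is carved from the remaining data for tuning, and the final model is scored on the untouched test set.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Hold out a test set that is never touched during training or tuning.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# Carve a validation set out of the remaining training data.
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

# Adjust a parameter (here: the number of neighbors k) on the validation set.
best_k, best_acc = 1, 0.0
for k in (1, 3, 5, 7):
    acc = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train).score(X_val, y_val)
    if acc > best_acc:
        best_k, best_acc = k, acc

# Measure final performance on the separate test set.
final = KNeighborsClassifier(n_neighbors=best_k).fit(X_train, y_train)
print("k =", best_k, "test accuracy =", final.score(X_test, y_test))
```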

When the outputs are discrete class labels, supervised learning is also called classification. A wide range of classifiers is available, each with its strengths and weaknesses. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all problems (a result often summarized by the no free lunch theorem). Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine it. Determining a suitable classifier for a given problem is, however, still more an art than a science.

The most widely used classifiers are the neural network (multilayer perceptron), support vector machine, k-nearest neighbor, Gaussian mixture model, Gaussian, naive Bayes, decision tree, and radial basis function classifiers.
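Because no classifier dominates, candidates are typically compared empirically, for instance by cross-validation. A sketch with scikit-learn (the selection of models is illustrative, not a recommendation):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    "k-nearest neighbor": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "support vector machine": SVC(),
}
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f}")
```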

## Empirical risk minimization

*Main article: Empirical risk minimization*

The goal of supervised learning of a global model is to find a function *g*, given a set of points of the form (*x*, *g*(*x*)).

It is assumed that the set of points for which the behavior of *g* is known is an independent and identically distributed (i.i.d.) sample drawn according to an unknown probability distribution *p* from a larger, possibly infinite, population. Furthermore, one assumes the existence of a task-specific loss function *L* of type

$$L : Y \times Y \to \mathbb{R}^{+},$$

where *Y* is the codomain of *g* and $\mathbb{R}^{+}$ denotes the nonnegative real numbers (further restrictions may be placed on *L*). The quantity *L*(*z*, *y*) is the loss incurred by predicting *z* as the value of *g* at a given point when the true value is *y*; for example, the squared loss $L(z, y) = (z - y)^2$ is common in regression, and the 0–1 loss in classification.

The *risk* associated with a function *f* is then defined as the expectation of the loss function, as follows:

$$R(f) = \operatorname{E}\bigl[L(f(x), g(x))\bigr] = \sum_{x} L(f(x), g(x)) \, p(x)$$

if the probability distribution *p* is discrete (the analogous continuous case employs a definite integral and a probability density function).

The goal is now to find a function $f^*$ among a fixed subclass of functions for which the risk $R(f^*)$ is minimal.

However, since the behavior of *g* is generally only known for a finite set of points $(x_1, y_1), \ldots, (x_n, y_n)$, one can only *approximate* the true risk, for example with the *empirical risk*:

$$\tilde{R}(f) = \frac{1}{n} \sum_{i=1}^{n} L(f(x_i), y_i).$$

Selecting the function $f^*$ that minimizes the empirical risk is known as the principle of *empirical risk minimization*. Statistical learning theory investigates under what conditions empirical risk minimization is admissible and how good the approximations can be expected to be.
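A worked toy example of the principle (pure Python; the sample and the hypothesis class are invented, and the loss is squared error): among a small fixed class of functions $f(x) = ax$, pick the one with the lowest empirical risk.

```python
# Empirical risk minimization over a tiny hypothesis class.
sample = [(0.0, 0.1), (1.0, 1.9), (2.0, 4.2), (3.0, 5.8)]  # (x_i, y_i) pairs

# Fixed subclass of candidate functions f(x) = a * x.
candidates = {a: (lambda x, a=a: a * x) for a in (0.5, 1.0, 2.0, 3.0)}

def empirical_risk(f, data):
    """Average squared loss L(z, y) = (z - y)**2 over the sample."""
    return sum((f(x) - y) ** 2 for x, y in data) / len(data)

best_a = min(candidates, key=lambda a: empirical_risk(candidates[a], sample))
print(best_a, empirical_risk(candidates[best_a], sample))  # a = 2.0 wins here
```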

## Active learning

There are situations in which unlabeled data is abundant but labeling data is expensive. In such a scenario, the learning algorithm can actively query the user/teacher for labels. This type of iterative supervised learning is called active learning. Since the learner chooses the examples, the number of examples needed to learn a concept can often be much lower than the number required in ordinary supervised learning. With this approach there is a risk that the algorithm might focus on unimportant or even invalid examples.

Active learning can be especially useful in biological research problems such as protein engineering, where a few proteins have been discovered with a certain interesting function and one wishes to determine which of many possible mutants to make next in order to obtain a similar function.^{[1]}

### Definitions

Let $T$ be the total set of all data under consideration. For example, in a protein engineering problem, $T$ would include all proteins that are known to have a certain interesting activity and all additional proteins that one might want to test for that activity.

During each iteration $i$, $T$ is broken up into three subsets:

- $T_{K,i}$: Data points where the label is **known**.
- $T_{U,i}$: Data points where the label is **unknown**.
- $T_{C,i}$: A subset of $T_{U,i}$ that is **chosen** to be labeled.

Most of the current research in active learning involves the best method to choose the data points for $T_{C,i}$.

### Minimum Marginal Hyperplane

Some active learning algorithms are built upon support vector machines (SVMs) and exploit the structure of the SVM to determine which data points to label. Such methods usually calculate the margin $M$ of each unlabeled datum in $T_{U,i}$ and treat $M$ as an $n$-dimensional distance from that datum to the separating hyperplane.

Minimum Marginal Hyperplane methods assume that the data with the smallest $M$ are those that the SVM is most uncertain about and therefore should be placed in $T_{C,i}$ to be labeled. Other similar methods, such as Maximum Marginal Hyperplane, choose data with the largest $M$. Tradeoff methods choose a mix of the smallest and largest $M$s.
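A sketch of one selection step (assuming scikit-learn; the data is invented, and the set names follow the definitions above): the current SVM scores each unlabeled point, and the one with the smallest |margin| is chosen for labeling.

```python
import numpy as np
from sklearn.svm import SVC

# T_K: labeled points; T_U: pool whose labels are unknown to the learner.
X_known = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.2, 2.9]])
y_known = np.array([0, 0, 1, 1])
X_unknown = np.array([[1.6, 1.5], [0.1, 0.2], [3.1, 3.1]])

svm = SVC(kernel="linear").fit(X_known, y_known)

# Signed distance (up to scaling) of each unlabeled point to the hyperplane.
margins = np.abs(svm.decision_function(X_unknown))

# Minimum Marginal Hyperplane: query the most uncertain point; it becomes
# T_C, gets labeled by the teacher, and is then moved into T_K.
query_index = int(np.argmin(margins))
print(query_index, X_unknown[query_index])  # the point nearest the boundary
```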

### Maximum Curiosity

Another active learning method is Maximum Curiosity, which typically learns a data set with fewer examples than Minimum Marginal Hyperplane but is more computationally intensive and only works for discrete classifiers.^{[2]}

Maximum Curiosity takes each unlabeled datum in $T_{U,i}$ and assumes, in turn, each possible label that datum might have. The datum with each assumed class is added to $T_{K,i}$, and the augmented set is then cross-validated. It is assumed that when the datum is paired with its correct label, the cross-validated accuracy (or correlation coefficient) of $T_{K,i}$ will improve the most. The datum with the most improved accuracy is placed in $T_{C,i}$ to be labeled.
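A sketch of one Maximum Curiosity step (assuming scikit-learn; the helper name `most_curious` and the data are invented for illustration): each unlabeled datum is tried with every possible label, and the datum whose best assumed label most improves the cross-validated accuracy is selected.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def most_curious(X_known, y_known, X_unknown, labels=(0, 1), cv=3):
    """Index of the unlabeled datum whose best assumed label yields the
    highest cross-validated accuracy when added to the labeled set."""
    best_index, best_score = 0, -np.inf
    for i, x in enumerate(X_unknown):
        for label in labels:                     # assume each possible label
            X_aug = np.vstack([X_known, x])
            y_aug = np.append(y_known, label)
            score = cross_val_score(DecisionTreeClassifier(random_state=0),
                                    X_aug, y_aug, cv=cv).mean()
            if score > best_score:
                best_index, best_score = i, score
    return best_index

X_known = np.array([[0.0, 0.0], [0.1, 0.2], [3.0, 3.0],
                    [2.9, 3.1], [0.2, 0.1], [3.1, 2.9]])
y_known = np.array([0, 0, 1, 1, 0, 1])
X_unknown = np.array([[1.5, 1.5], [0.05, 0.1]])
print(most_curious(X_known, y_known, X_unknown))  # index of the next query
```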

## Approaches and algorithms

- Analytical learning
- Artificial neural network
- Backpropagation
- Boosting
- Bayesian statistics
- Case-based reasoning
- Decision tree learning
- Inductive logic programming
- Gaussian process regression
- Learning Automata
- Minimum message length (decision trees, decision graphs, etc.)
- Naive Bayes classifier
- Nearest Neighbor Algorithm
- Probably approximately correct (PAC) learning
- Ripple down rules, a knowledge acquisition methodology
- Symbolic machine learning algorithms
- Subsymbolic machine learning algorithms
- Support vector machines
- Random Forests
- Ensembles of Classifiers
- Ordinal Classification
- Data Pre-processing
- Handling imbalanced datasets

## Applications

- Bioinformatics
- Cheminformatics
- Handwriting recognition
- Information retrieval
- Object recognition in computer vision
- Optical character recognition
- Spam detection
- Pattern recognition
- Speech recognition
- Forecasting Fraudulent Financial Statements

## General issues

- Computational learning theory
- Inductive bias
- Overfitting (machine learning)
- (Uncalibrated) Class membership probabilities
- Version spaces

## Notes

- ↑ Danziger, S.A., Swamidass, S.J., Zeng, J., Dearth, L.R., Lu, Q., Chen, J.H., Cheng, J., Hoang, V.P., Saigo, H., Luo, R., Baldi, P., Brachmann, R.K. and Lathrop, R.H. (2006). **Functional census of mutation sequence spaces: the example of p53 cancer rescue mutants**. *IEEE/ACM Transactions on Computational Biology and Bioinformatics*, **3**, 114-125.
- ↑ Danziger, S.A., Zeng, J., Wang, Y., Brachmann, R.K. and Lathrop, R.H. (2007). **Choosing where to look next in a mutation sequence space: Active Learning of informative p53 cancer rescue mutants**. *Bioinformatics*, **23(13)**, 104-114.

This page uses Creative Commons licensed content from Wikipedia.