Vapnik–Chervonenkis theory (also known as VC theory) was developed during 1960–1990 by Vladimir Vapnik and Alexey Chervonenkis. The theory is a form of computational learning theory, which attempts to explain the learning process from a statistical point of view.
VC theory covers at least four parts (as explained in The Nature of Statistical Learning Theory):
- Theory of consistency of learning processes
  - What are the (necessary and sufficient) conditions for consistency of a learning process based on the empirical risk minimization principle?
- Nonasymptotic theory of the rate of convergence of learning processes
  - How fast is the rate of convergence of the learning process?
- Theory of controlling the generalization ability of learning processes
  - How can one control the rate of convergence (the generalization ability) of the learning process?
- Theory of constructing learning machines
  - How can one construct algorithms that can control the generalization ability?
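The empirical risk minimization principle named in the first part can be made concrete with a toy example. The following sketch (an illustration, not taken from the book) uses a hypothesis class of one-dimensional threshold classifiers, a class whose VC dimension is 1: ERM simply selects the hypothesis with the lowest error rate on the observed sample.

```python
import random

# Hypothesis class: threshold classifiers h_t(x) = 1 if x >= t, else 0.
# (A deliberately simple class, chosen only to illustrate ERM.)
thresholds = [i / 10 for i in range(11)]

def empirical_risk(t, sample):
    """Fraction of sample points that the threshold classifier h_t misclassifies."""
    return sum(((x >= t) != y) for x, y in sample) / len(sample)

def erm(sample):
    """Empirical risk minimization: return the hypothesis minimizing empirical risk."""
    return min(thresholds, key=lambda t: empirical_risk(t, sample))

# Labels generated by a true threshold of 0.5, with no label noise.
random.seed(0)
sample = [(x, x >= 0.5) for x in (random.random() for _ in range(200))]
t_hat = erm(sample)
print(t_hat, empirical_risk(t_hat, sample))
```

With enough data, the empirical risk of the ERM hypothesis approaches its true risk; the consistency question above asks exactly when this convergence is guaranteed for every distribution.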
The last part of VC theory introduced a well-known learning algorithm: the support vector machine.
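A flavor of the support vector machine can be given in a few lines. The sketch below is a minimal linear soft-margin SVM trained by subgradient descent on the regularized hinge loss (a Pegasos-style simplification, not the full kernel machine developed by Vapnik); the data, step-size schedule, and regularization constant are illustrative assumptions.

```python
import random

def train_linear_svm(data, lam=0.01, epochs=200):
    """Train a linear soft-margin SVM by stochastic subgradient descent on
    lam/2 * ||w||^2 + hinge loss. data: list of (features, label), label in {-1, +1}."""
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            # Subgradient step: shrink w (regularizer), then correct if margin < 1.
            w = [wi - eta * lam * wi for wi in w]
            if margin < 1:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Linearly separable toy data: label is the sign of x0 + x1 - 1.
random.seed(1)
points = [[random.random(), random.random()] for _ in range(100)]
data = [(x, 1 if x[0] + x[1] > 1 else -1) for x in points]
w, b = train_linear_svm(list(data))
errors = sum(predict(w, b, x) != y for x, y in data)
print(errors)
```

Maximizing the margin is how the SVM controls generalization ability in the sense of the third part above: among all separating hyperplanes, it selects one whose capacity is effectively restricted.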
References
- Vapnik, Vladimir N. (2000). The Nature of Statistical Learning Theory. Springer-Verlag.
- Vapnik, Vladimir N. (1998). Statistical Learning Theory. Wiley-Interscience.
- See also the references in the articles on Richard M. Dudley, empirical processes, and shattering.
This page uses Creative Commons licensed content from Wikipedia.