Vapnik-Chervonenkis theory (also known as VC theory) was developed between 1960 and 1990 by Vladimir Vapnik and Alexey Chervonenkis. The theory is a form of computational learning theory that attempts to explain the learning process from a statistical point of view.
VC theory covers at least four parts (as explained in *The Nature of Statistical Learning Theory*):
- Theory of consistency of learning processes
  - What are the (necessary and sufficient) conditions for consistency of a learning process based on the empirical risk minimization principle? (See the sketch after this list.)
- Nonasymptotic theory of the rate of convergence of learning processes
  - How fast is the rate of convergence of the learning process?
- Theory of controlling the generalization ability of learning processes
  - How can one control the rate of convergence (the generalization ability) of the learning process?
- Theory of constructing learning machines
  - How can one construct algorithms that can control the generalization ability?
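To make the first three questions concrete, here is a brief formal sketch. It is not part of the original article, but it is standard material from the cited Vapnik texts: $Q(z, \alpha)$ denotes a loss function indexed by parameters $\alpha$, $F(z)$ the unknown data distribution, $\ell$ the sample size, and $h$ the VC dimension of the function class; the bound below is the standard form for indicator losses.

```latex
% Expected (true) risk and empirical risk for a loss Q(z, alpha):
R(\alpha) = \int Q(z, \alpha)\, dF(z),
\qquad
R_{\mathrm{emp}}(\alpha) = \frac{1}{\ell} \sum_{i=1}^{\ell} Q(z_i, \alpha)

% Empirical risk minimization picks the minimizer of the empirical risk:
\alpha_\ell = \operatorname*{arg\,min}_{\alpha} R_{\mathrm{emp}}(\alpha)

% Consistency (part 1): ERM is consistent if and only if the empirical
% risks converge to the expected risks uniformly (one-sidedly) over the class:
\lim_{\ell \to \infty}
P\Bigl\{ \sup_{\alpha} \bigl( R(\alpha) - R_{\mathrm{emp}}(\alpha) \bigr) > \varepsilon \Bigr\} = 0
\quad \text{for all } \varepsilon > 0

% Rate and control (parts 2 and 3): with probability at least 1 - eta,
% simultaneously for all alpha in a class of VC dimension h,
R(\alpha) \le R_{\mathrm{emp}}(\alpha)
  + \sqrt{\frac{h\bigl(\ln\frac{2\ell}{h} + 1\bigr) - \ln\frac{\eta}{4}}{\ell}}
```

The last inequality makes the control question concrete: for a fixed sample size $\ell$, choosing a function class with smaller VC dimension $h$ tightens the second term, at the cost of a possibly larger empirical risk.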
The last part of VC theory introduced a well-known learning algorithm: the support vector machine.
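As a minimal usage sketch (the choice of the scikit-learn library and all parameter values here are illustrative assumptions; the article itself only names the algorithm), a soft-margin SVM can be trained and evaluated as follows:

```python
# Minimal support vector machine sketch using scikit-learn
# (library choice is an assumption; the article only names the algorithm).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class data standing in for a real learning problem.
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Soft-margin SVM with an RBF kernel; the regularization parameter C
# trades empirical risk on the training set against model capacity.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)

print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The regularization parameter `C` plays exactly the trade-off role described above: it balances the fit to the training sample against the capacity of the learned decision function.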
References
- Vapnik, Vladimir (1999). *The Nature of Statistical Learning Theory*. Springer-Verlag. ISBN 0-387-98780-0
- Vapnik, Vladimir (1998). *Statistical Learning Theory*. Wiley-Interscience. ISBN 0-471-03003-1
- See also the references in the articles on Richard M. Dudley, empirical processes, R. S. Wenocur, and shattering.
This page uses Creative Commons Licensed content from Wikipedia.