
In statistics, the **Kolmogorov-Smirnov** test (often referred to as the **K-S test**) is used to determine whether two underlying probability distributions differ from each other or whether an underlying probability distribution differs from a hypothesized distribution, in either case based on finite samples.

In the one-sample case the KS test compares the empirical distribution function with the cumulative distribution function specified by the null hypothesis. The main applications are for testing goodness of fit with the normal and uniform distributions. For normality testing, minor improvements made by Lilliefors lead to the Lilliefors test. In general the Shapiro-Wilk test or Anderson-Darling test are more powerful alternatives to the Lilliefors test for testing normality.
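As a concrete illustration of the one-sample case, SciPy provides the test as `scipy.stats.kstest`; a minimal sketch comparing a sample against the standard normal CDF (the sample here is simulated purely for demonstration):

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=200)

# Compare the sample's empirical CDF with the hypothesized
# standard normal CDF; a large p-value means no evidence
# against the null hypothesis of normality.
statistic, p_value = kstest(sample, "norm")
```

Note that this uses fully specified parameters for the hypothesized distribution; estimating the mean and variance from the same sample is exactly the situation the Lilliefors correction addresses.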

The two sample KS-test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.
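The two-sample test is available in SciPy as `scipy.stats.ks_2samp`; a sketch with two simulated samples that differ in location:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=300)  # standard normal
b = rng.normal(0.5, 1.0, size=300)  # same shape, shifted location

# The statistic is the largest vertical distance between the
# two empirical CDFs; the location shift makes it large here.
statistic, p_value = ks_2samp(a, b)
```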

## Mathematical statistics

The empirical distribution function *F*_{n} for *n* observations *y _{i}* is defined as

- $ F_n(x)=\frac{1}{n}\sum_{i=1}^n \begin{cases}1 & \text{if } y_i\leq x, \\ 0 & \text{otherwise}.\end{cases} $
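In code, *F*_{n}(*x*) is simply the fraction of observations at or below *x*. A minimal sketch (the helper name `ecdf` is illustrative, not standard):

```python
import numpy as np

def ecdf(sample, x):
    """Empirical distribution function F_n(x):
    the fraction of observations y_i with y_i <= x."""
    sample = np.asarray(sample)
    return np.mean(sample <= x)

# Example: 2 of the 4 observations are <= 2.0, so F_n(2.0) = 0.5.
data = [1.0, 2.0, 3.0, 4.0]
```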

The two one-sided Kolmogorov-Smirnov test statistics are given by

- $ D_n^{+}=\max_x(F_n(x)-F(x))\, $

- $ D_n^{-}=\max_x(F(x)-F_n(x))\, $

where *F*(*x*) is the hypothesized distribution or another empirical distribution. The probability distributions of these two statistics, given that the null hypothesis of equality of distributions is true, do not depend on what the hypothesized distribution is, as long as it is continuous. Knuth gives a detailed description of how to analyze the significance of this pair of statistics. Many people use max(*D*_{n}^{+}, *D*_{n}^{−}) instead, but the distribution of this statistic is more difficult to deal with.
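For a sorted sample, the one-sided statistics can be computed directly from the step heights of the empirical CDF: *D*_{n}^{+} is largest just after a step, *D*_{n}^{−} just before one. A sketch against a hypothesized normal distribution (the function name `ks_one_sided` is illustrative):

```python
import numpy as np
from scipy.stats import norm

def ks_one_sided(sample, cdf):
    """One-sided KS statistics D+ and D- of a sample against a
    hypothesized continuous CDF."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    f = cdf(x)
    i = np.arange(1, n + 1)
    # F_n jumps to i/n at x_(i), so F_n - F peaks just after a step...
    d_plus = np.max(i / n - f)
    # ...and F - F_n peaks just before one, where F_n is still (i-1)/n.
    d_minus = np.max(f - (i - 1) / n)
    return d_plus, d_minus

sample = [0.1, -0.4, 1.2, 0.3, -0.9]
dp, dm = ks_one_sided(sample, norm.cdf)
```

The two-sided statistic mentioned above is then `max(dp, dm)`.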

## Miscellaneous

Note that when the underlying independent variable is cyclic, as with day of the year or day of the week, Kuiper's test is more appropriate; *Numerical Recipes* is a good source of information on this.
Note furthermore that the Kolmogorov-Smirnov test is more sensitive at points near the median of the distribution than at its tails. The Anderson-Darling test provides equal sensitivity at the tails.

## See also

## External links

- One-sided KS test explanation
- JavaScript implementation of one- and two-sided tests
- Numerical Recipes (ISBN 0521431085) is a prime resource for this sort of thing (see http://www.nr.com/nronline_switcher.html for a discussion).
- The Legacy of Andrei Nikolaevich Kolmogorov
- Short introduction


This page uses Creative Commons Licensed content from Wikipedia (view authors).