

In statistics, the Kolmogorov–Smirnov test (K–S test) is a form of minimum distance estimation used as a nonparametric test of equality of one-dimensional probability distributions. It is used either to compare a sample with a reference probability distribution (one-sample K–S test) or to compare two samples (two-sample K–S test).

The Kolmogorov-Smirnov statistic quantifies a distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, or between the empirical distribution functions of two samples. The null distribution of this statistic is calculated under the null hypothesis that the samples are drawn from the same distribution (in the two-sample case) or that the sample is drawn from the reference distribution (in the one-sample case). In each case, the distributions considered under the null hypothesis are continuous distributions but are otherwise unrestricted.

The two-sample KS test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.

The Kolmogorov-Smirnov test can be modified to serve as a goodness of fit test. In the special case of testing for normality of the distribution, samples are standardised and compared with a standard normal distribution. This is equivalent to setting the mean and variance of the reference distribution equal to the sample estimates, and it is known that using the sample to modify the null hypothesis reduces the power of a test. Correcting for this bias leads to the Lilliefors test. However, even Lilliefors' modification is less powerful than the Shapiro-Wilk test or Anderson-Darling test for testing normality.[1]
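
As a rough illustration of this point, the sketch below (Python, assuming NumPy, SciPy, and statsmodels are available; the Student-t sample is purely hypothetical) compares a naive one-sample K–S test against a normal distribution whose parameters were estimated from the same data with the Lilliefors correction.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.diagnostic import lilliefors  # assumed available

    rng = np.random.default_rng(0)
    x = rng.standard_t(df=5, size=200)   # hypothetical non-normal sample

    # Naive K-S test: mean and standard deviation estimated from the sample
    # itself, which tends to make the p-value too large (the test loses power).
    d_naive, p_naive = stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1)))

    # Lilliefors test: same statistic, but critical values corrected for the
    # fact that the parameters were estimated from the data.
    d_lf, p_lf = lilliefors(x, dist='norm')

    print(f"naive K-S:  D = {d_naive:.3f}, p = {p_naive:.3f}")
    print(f"Lilliefors: D = {d_lf:.3f}, p = {p_lf:.3f}")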

Kolmogorov–Smirnov statistic

The empirical distribution function F_n for n iid observations X_i is defined as

    F_n(x) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{\{X_i \le x\}},

where \mathbf{1}_{\{X_i \le x\}} is the indicator function, equal to 1 if X_i ≤ x and equal to 0 otherwise.
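
A minimal sketch of this definition in Python (NumPy assumed), evaluating the empirical distribution function at arbitrary points:

    import numpy as np

    def ecdf(sample, x):
        """Empirical distribution function F_n evaluated at the points x:
        the fraction of observations X_i with X_i <= x."""
        sample = np.sort(np.asarray(sample))
        # searchsorted with side='right' counts observations <= x
        return np.searchsorted(sample, x, side='right') / sample.size

    sample = np.array([0.2, 1.5, -0.3, 0.9, 0.2])
    print(ecdf(sample, np.array([-1.0, 0.2, 2.0])))  # -> [0.  0.6 1. ]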

The Kolmogorov-Smirnov statistic for a given cumulative distribution function F(x) is

    D_n = \sup_x |F_n(x) - F(x)|,

where \sup_x denotes the supremum over x. By the Glivenko-Cantelli theorem, if the sample comes from distribution F(x), then D_n converges to 0 almost surely. Kolmogorov strengthened this result by effectively providing the rate of this convergence (see below). Donsker's theorem provides a yet stronger result.
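
A sketch (NumPy/SciPy assumed) of computing D_n against a continuous reference distribution; since F_n jumps at each order statistic, both sides of every jump must be checked:

    import numpy as np
    from scipy import stats

    def ks_statistic(sample, cdf):
        """D_n = sup_x |F_n(x) - F(x)| for a continuous reference CDF."""
        x = np.sort(np.asarray(sample))
        n = x.size
        f = cdf(x)
        # F_n jumps at each order statistic, so check both sides of the jump.
        d_plus = np.max(np.arange(1, n + 1) / n - f)
        d_minus = np.max(f - np.arange(0, n) / n)
        return max(d_plus, d_minus)

    rng = np.random.default_rng(1)
    sample = rng.normal(size=100)
    print(ks_statistic(sample, stats.norm.cdf),
          stats.kstest(sample, 'norm').statistic)  # the two should agree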

Kolmogorov distribution

The Kolmogorov distribution is the distribution of the random variable

    K = \sup_{t \in [0,1]} |B(t)|,

where B(t) is the Brownian bridge. The cumulative distribution function of K is given by

    \Pr(K \le x) = 1 - 2\sum_{k=1}^{\infty} (-1)^{k-1} e^{-2k^2 x^2} = \frac{\sqrt{2\pi}}{x} \sum_{k=1}^{\infty} e^{-(2k-1)^2 \pi^2 / (8x^2)}.
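
The first series converges quickly for moderate x. A sketch (NumPy/SciPy assumed) evaluating a truncation of it and comparing with SciPy's implementation of the same limiting distribution (kstwobign):

    import numpy as np
    from scipy import stats

    def kolmogorov_cdf(x, terms=100):
        """Pr(K <= x) = 1 - 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 x^2)."""
        k = np.arange(1, terms + 1)
        return 1.0 - 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * k**2 * x**2))

    for x in (0.5, 1.0, 1.36, 2.0):
        print(x, kolmogorov_cdf(x), stats.kstwobign.cdf(x))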

Kolmogorov–Smirnov test

Under the null hypothesis that the sample comes from the hypothesized distribution F(x),

    \sqrt{n}\, D_n \;\xrightarrow[n\to\infty]{}\; \sup_t |B(F(t))|

in distribution, where B(t) is the Brownian bridge.

If F is continuous, then under the null hypothesis \sqrt{n}\, D_n converges to the Kolmogorov distribution, which does not depend on F. This result may also be known as the Kolmogorov theorem; see Kolmogorov's theorem for disambiguation.

The goodness-of-fit test or the Kolmogorov–Smirnov test is constructed by using the critical values of the Kolmogorov distribution.

The null hypothesis is rejected at level α if

    \sqrt{n}\, D_n > K_\alpha,

where K_\alpha is found from

    \Pr(K \le K_\alpha) = 1 - \alpha.

The asymptotic power of this test is 1. If the form or the parameters of F(x) are estimated from the X_i, the critical values obtained in this way are no longer valid. In that case, Monte Carlo or other methods may be required to determine the appropriate critical values.
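
A sketch (NumPy/SciPy assumed) of carrying out the one-sample test at level α = 0.05 against a fully specified reference distribution, using the asymptotic critical value K_α from the Kolmogorov distribution:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    sample = rng.exponential(scale=1.0, size=300)
    alpha = 0.05
    n = sample.size

    # D_n against the fully specified reference CDF Exp(1)
    d_n = stats.kstest(sample, stats.expon.cdf).statistic

    # K_alpha from the limiting distribution: Pr(K <= K_alpha) = 1 - alpha
    k_alpha = stats.kstwobign.ppf(1 - alpha)

    reject = np.sqrt(n) * d_n > k_alpha
    print(f"sqrt(n) D_n = {np.sqrt(n) * d_n:.3f}, "
          f"K_alpha = {k_alpha:.3f}, reject = {reject}")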

Two-sample Kolmogorov–Smirnov test

The Kolmogorov-Smirnov test may also be used to test whether two underlying one-dimensional probability distributions differ. In this case, the Kolmogorov-Smirnov statistic is

    D_{n,n'} = \sup_x |F_{1,n}(x) - F_{2,n'}(x)|,

where F_{1,n} and F_{2,n'} are the empirical distribution functions of the first and the second sample respectively, and the null hypothesis is rejected at level α if

    \sqrt{\frac{n\, n'}{n + n'}}\; D_{n,n'} > K_\alpha.
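
A sketch (SciPy assumed) of the two-sample test, applying the asymptotic rejection rule above alongside SciPy's ks_2samp:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = rng.normal(loc=0.0, size=200)   # first sample, size n
    y = rng.normal(loc=0.3, size=150)   # second sample, size n'

    alpha = 0.05
    res = stats.ks_2samp(x, y)          # D_{n,n'} and a p-value

    n, m = x.size, y.size
    k_alpha = stats.kstwobign.ppf(1 - alpha)
    reject_asymptotic = np.sqrt(n * m / (n + m)) * res.statistic > k_alpha

    print(f"D = {res.statistic:.3f}, p = {res.pvalue:.4f}, "
          f"reject at 5% (asymptotic rule): {reject_asymptotic}")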

Setting confidence limits for the shape of a distribution function

While the Kolmogorov-Smirnov test is usually used to test whether a given F(x) is the underlying probability distribution of Fn(x), the procedure may be inverted to give confidence limits on F(x) itself. If one chooses a critical value of the test statistic Dα such that P(Dn > Dα) = α, then a band of width ±Dα around Fn(x) will entirely contain F(x) with probability 1 − α.
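
A sketch (NumPy/SciPy assumed) of such a band: D_α is taken from the asymptotic Kolmogorov distribution so that P(D_n > D_α) = α, and F_n(x) ± D_α is clipped to [0, 1]:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    sample = np.sort(rng.normal(size=250))
    n = sample.size
    alpha = 0.05

    # Asymptotic critical value D_alpha with P(D_n > D_alpha) = alpha
    d_alpha = stats.kstwobign.ppf(1 - alpha) / np.sqrt(n)

    f_n = np.arange(1, n + 1) / n        # F_n at the sorted sample points
    lower = np.clip(f_n - d_alpha, 0.0, 1.0)
    upper = np.clip(f_n + d_alpha, 0.0, 1.0)

    # With probability about 1 - alpha the true CDF lies inside the band
    # everywhere; here we only check it at the sample points.
    true_f = stats.norm.cdf(sample)
    print(f"D_alpha = {d_alpha:.4f}")
    print("band covers F at the sample points:",
          bool(np.all((true_f >= lower) & (true_f <= upper))))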

References

  1. Stephens, M. A. (1974). EDF Statistics for Goodness of Fit and Some Comparisons. Journal of the American Statistical Association 69: 730–737.
  • Eadie, W. T.; Drijard, D.; James, F. E.; Roos, M.; Sadoulet, B. (1971). Statistical Methods in Experimental Physics. Amsterdam: North-Holland. pp. 269–271.
  • Stuart, Alan; Ord, Keith; Arnold, Steven (1999). Kendall's Advanced Theory of Statistics. London: Arnold. Sections 25.37–25.43.

This page uses Creative Commons Licensed content from Wikipedia.