Central limit theorem

Central limit theorems are a set of weak-convergence results in probability theory. Intuitively, they all express the fact that a sum of many independent identically distributed random variables tends to be distributed according to a particular "attractor distribution". The most important and famous result is simply called the central limit theorem; it states that if the sum of the variables has finite variance, then it will be approximately normally distributed. Since many real processes yield distributions with finite variance, this explains the ubiquity of the normal distribution. Several generalizations for finite variance exist which do not require identical distributions but instead impose some condition guaranteeing that none of the variables exerts a much larger influence than the others. Two such conditions are the Lindeberg condition and the Lyapunov condition. Other generalizations even allow some "weak" dependence among the random variables. Also, a generalization due to Gnedenko and Kolmogorov states that the sum of a number of random variables with power-law tail distributions decreasing as 1/|x|^(α+1) with 0 < α < 2 (and therefore having infinite variance) will tend to a symmetric stable Lévy distribution as the number of variables grows. This article is concerned only with the central limit theorem as it applies to distributions with finite variance.

The reader may find it helpful to consider this illustration of the central limit theorem.

Classical central limit theorem
The theorem most often called the central limit theorem is the following. Let X1, X2, X3, ... be a sequence of random variables which are defined on the same probability space, share the same probability distribution D and are independent. Assume that both the expected value μ and the standard deviation σ of D exist and are finite.

Consider the sum Sn = X1 + ... + Xn. Then the expected value of Sn is nμ and its standard deviation is σ√n. Furthermore, informally speaking, the distribution of Sn approaches the normal distribution N(nμ, σ²n) as n approaches ∞.

In order to clarify the word "approaches" in the last sentence, we standardize Sn by setting


 * $$Z_n = \frac{S_n - n \mu}{\sigma \sqrt{n}}.$$

Then the distribution of Zn converges towards the standard normal distribution N(0,1) as n approaches ∞ (this is convergence in distribution). This means: if Φ(z) is the cumulative distribution function of N(0,1), then for every real number z, we have


 * $$\lim_{n \to \infty} \mbox{Pr}(Z_n \le z) = \Phi(z),$$

or, equivalently,


 * $$\lim_{n\rightarrow\infty}\mbox{Pr}\left(\frac{\overline{X}_n-\mu}{\sigma/\sqrt{n}}\leq z\right)=\Phi(z)$$

where
 * $$\overline{X}_n=S_n/n=(X_1+\cdots+X_n)/n$$

is the sample mean.
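
As a rough illustration, the following sketch (assuming NumPy and SciPy are available; the Exponential(1) summands and the sample sizes are arbitrary illustrative choices) estimates Pr(Zn ≤ z) by simulation and compares it with Φ(z):

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo check of the classical CLT: draw n i.i.d. Exponential(1)
# summands (mu = 1, sigma = 1), form Z_n = (S_n - n*mu) / (sigma * sqrt(n)),
# and compare Pr(Z_n <= z) with the standard normal CDF Phi(z).
rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0            # mean and standard deviation of Exponential(1)
n, trials = 300, 30_000         # summands per sum, number of simulated sums

samples = rng.exponential(scale=1.0, size=(trials, n))
z = (samples.sum(axis=1) - n * mu) / (sigma * np.sqrt(n))

for point in (-1.0, 0.0, 1.0, 2.0):
    print(f"z = {point:+.1f}   Pr(Z_n <= z) ~ {np.mean(z <= point):.3f}"
          f"   Phi(z) = {norm.cdf(point):.3f}")
```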

Proof of the central limit theorem
For a theorem of such fundamental importance to statistics and applied probability, the central limit theorem has a remarkably simple proof using characteristic functions. It is similar to the proof of the (weak) law of large numbers. For any random variable Y with zero mean and unit variance (var(Y) = 1), the characteristic function of Y is, by Taylor's theorem,


 * $$\varphi_Y(t) = 1 - {t^2 \over 2} + o(t^2), \quad t \rightarrow 0$$

where o(t²) is "little o notation" for some function of t that goes to zero more rapidly than t². Letting Yi be (Xi − μ)/σ, the standardized value of Xi, it is easy to see that the standardized mean of the observations X1, X2, ..., Xn is just


 * $$Z_n = \frac{\overline{X}_n-\mu}{\sigma/\sqrt{n}} = \sum_{i=1}^n {Y_i \over \sqrt{n}}.$$

By simple properties of characteristic functions, the characteristic function of Zn is


 * $$\left[\varphi_Y\left({t \over \sqrt{n}}\right)\right]^n = \left[ 1 - {t^2 \over 2n} + o\left({t^2 \over n}\right) \right]^n \, \rightarrow \, e^{-t^2/2}, \quad n \rightarrow \infty.$$

But this limit is just the characteristic function of a standard normal distribution, N(0,1), and the central limit theorem follows from the Lévy continuity theorem, which confirms that convergence of characteristic functions implies convergence in distribution.
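
The key limit in this proof can be checked numerically. The sketch below (an illustration only; the Uniform(−√3, √3) choice for Y, which has zero mean and unit variance, is an arbitrary example) evaluates [φY(t/√n)]^n for increasing n and compares it with e^(−t²/2):

```python
import numpy as np

# Numerical illustration of the key step in the proof: for Y uniform on
# [-sqrt(3), sqrt(3)] (zero mean, unit variance), the characteristic function
# of Z_n is phi_Y(t / sqrt(n)) ** n, which should approach exp(-t**2 / 2).
def phi_uniform(t):
    # Characteristic function of Uniform(-a, a) with a = sqrt(3): sin(a*t) / (a*t).
    a = np.sqrt(3.0)
    return np.where(t == 0, 1.0, np.sin(a * t) / (a * t))

t = 1.5
for n in (1, 5, 50, 500):
    print(f"n = {n:3d}   [phi_Y(t/sqrt(n))]^n = {float(phi_uniform(t / np.sqrt(n)) ** n):.4f}")
print(f"limit exp(-t^2/2)          = {np.exp(-t**2 / 2):.4f}")
```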

Convergence to the limit
If the third absolute moment E(|X1 − μ|³) exists and is finite, then the above convergence is uniform and the speed of convergence is at least of order 1/√n (see Berry–Esseen theorem).
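
A rough simulation sketch of this rate (illustrative only; the Exponential(1) summands, the number of trials and the grid of evaluation points are arbitrary choices, and the estimated gap includes Monte Carlo noise):

```python
import numpy as np
from scipy.stats import norm

# Rough look at the rate of convergence: estimate sup_z |Pr(Z_n <= z) - Phi(z)|
# by simulation for Exponential(1) summands and watch it shrink roughly like
# 1/sqrt(n), in line with the Berry-Esseen bound.
rng = np.random.default_rng(1)
trials = 50_000
grid = np.linspace(-3.0, 3.0, 121)

for n in (4, 16, 64, 256):
    s = rng.exponential(size=(trials, n)).sum(axis=1)
    z = (s - n) / np.sqrt(n)                       # mu = sigma = 1 for Exponential(1)
    gap = max(abs(np.mean(z <= x) - norm.cdf(x)) for x in grid)
    print(f"n = {n:3d}   estimated sup-gap ~ {gap:.3f}   1/sqrt(n) = {1/np.sqrt(n):.3f}")
```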

Pictures of a distribution being "smoothed out" by summation (showing original distribution and three subsequent convolutions):



(See Illustration of the central limit theorem for further details on these images.)

An equivalent formulation of this limit theorem starts with An = (X1 + ... + Xn) / n, which can be interpreted as the mean of a random sample of size n. The expected value of An is μ and its standard deviation is σ/√n. If we standardize An by setting Zn = (An − μ) / (σ/√n), we obtain the same variable Zn as above, and it approaches a standard normal distribution.

Note the following apparent "paradox": by adding many independent identically distributed positive variables, one gets approximately a normal distribution, yet every normally distributed variable has a non-zero probability of being negative. How can adding only positive numbers yield a distribution that extends to negative values? The reason is simple: the theorem applies to the sum after it has been centered about its mean and rescaled. Without that standardization, the distribution would, as intuition suggests, drift off to infinity.
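
A small simulation makes the point concrete (illustrative only; the Exponential(1) summands and sample sizes are arbitrary choices):

```python
import numpy as np

# All summands here are strictly positive (Exponential(1)), yet the centred and
# scaled sum Z_n takes negative values about half the time; the raw sum S_n
# itself simply drifts off towards infinity as n grows.
rng = np.random.default_rng(2)
n, trials = 400, 20_000
s = rng.exponential(size=(trials, n)).sum(axis=1)
z = (s - n) / np.sqrt(n)                     # mu = sigma = 1 for Exponential(1)

print("average of S_n   ~", round(s.mean(), 1), "(grows like n =", n, ")")
print("share of Z_n < 0 ~", round(np.mean(z < 0), 3), "(close to 1/2)")
```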

As an approximation for a finite number of observations, the central limit theorem is reasonably accurate only near the peak of the normal distribution; a very large number of observations is required before the approximation reaches into the tails.

The central limit theorem also applies to sums of independent identically distributed discrete random variables, although in this case the convergence toward a normal distribution has a peculiar feature: a sum of discrete random variables is still a discrete random variable, so we are confronted with a sequence of discrete random variables whose probability distribution converges towards a probability density function corresponding to a continuous variable (namely the normal distribution). This means that if we build a histogram of the realisations of the sum of n independent identical discrete variables, the curve that joins the centers of the upper faces of the rectangles forming the histogram converges toward a Gaussian curve as n approaches ∞. The binomial distribution article details such an application of the central limit theorem in the simple case of a discrete variable taking only two possible values.
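
A minimal numerical check of the two-valued case (assuming SciPy is available; the parameters n = 60 and p = 0.3 are arbitrary illustrative choices) compares the binomial probability mass function with the approximating normal density:

```python
import numpy as np
from scipy.stats import binom, norm

# De Moivre-Laplace special case: a Binomial(n, p) count is a sum of n
# independent 0/1 variables.  Its probability mass function, read at the
# integer points k, approaches the normal density N(np, np(1-p)).
n, p = 60, 0.3                         # arbitrary illustrative choices
k = np.arange(n + 1)
pmf = binom.pmf(k, n, p)
approx = norm.pdf(k, loc=n * p, scale=np.sqrt(n * p * (1 - p)))
print("largest gap between pmf and normal density:", np.abs(pmf - approx).max())
```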

Density functions
The density of the sum of two or more independent variables is the convolution of their densities (if these densities exist). Thus the central limit theorem can be interpreted as a statement about the properties of density functions under convolution: the convolution of a number of density functions tends to the normal density as the number of density functions increases without bound, under the conditions stated above.
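
The following sketch illustrates this numerically (an illustration only; the Uniform(0, 1) starting density, the grid step and the number of convolutions are arbitrary choices) by repeatedly convolving a density with itself on a grid and comparing the standardized result with the standard normal density:

```python
import numpy as np
from scipy.stats import norm

# Repeated self-convolution of a density on a grid: start from the Uniform(0, 1)
# density and convolve five more times, giving the density of a sum of six
# uniforms.  After centring and scaling, it hugs the standard normal density.
dx = 0.001
x = np.arange(0.0, 1.0, dx)
f = np.ones_like(x)                      # density of Uniform(0, 1) on the grid

g, n = f.copy(), 1
for _ in range(5):
    g = np.convolve(g, f) * dx           # density of the sum of one more uniform
    n += 1

xs = np.arange(len(g)) * dx              # grid for the sum of n uniforms
z = (xs - n / 2) / np.sqrt(n / 12)       # standardize: mean n/2, variance n/12
density_of_z = g * np.sqrt(n / 12)       # change of variables for the density
print("largest gap from the normal density:", np.abs(density_of_z - norm.pdf(z)).max())
```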

Since the characteristic function of a convolution is the product of the characteristic functions of the densities involved, the central limit theorem has yet another restatement: the product of the characteristic functions of a number of density functions tends to the characteristic function of the normal density as the number of density functions increases without bound, under the conditions stated above.

An equivalent statement can be made about Fourier transforms, since the characteristic function is essentially a Fourier transform.

Products of random variables
The central limit theorem tells us what to expect about the sum of independent random variables, but what about their product? The logarithm of a product is simply the sum of the logarithms of the factors, so the logarithm of a product of random variables tends to have a normal distribution, which makes the product itself have a log-normal distribution. Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the product of different random factors, so they follow a log-normal distribution.
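
A brief simulation sketch (illustrative only; the Uniform(0.5, 1.5) factors and the sample sizes are arbitrary choices) checks that the standardized logarithm of such a product is close to standard normal:

```python
import numpy as np
from scipy.stats import norm

# The logarithm of a product of independent positive factors is a sum of
# independent logarithms, so the standardized log-product should be close
# to a standard normal variable.
rng = np.random.default_rng(3)
n, trials = 200, 50_000
factors = rng.uniform(0.5, 1.5, size=(trials, n))   # arbitrary positive factors
log_prod = np.log(factors).sum(axis=1)

z = (log_prod - log_prod.mean()) / log_prod.std()
for point in (-1.0, 0.0, 1.0):
    print(f"Pr(Z <= {point:+.0f}) ~ {np.mean(z <= point):.3f}   Phi({point:+.0f}) = {norm.cdf(point):.3f}")
```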

Lyapunov condition
See also Lyapunov's central limit theorem.

Let Xn be a sequence of independent random variables defined on the same probability space. Assume that Xn has finite expected value μn and finite standard deviation σn. We define


 * $$s_n^2 = \sum_{i = 1}^n \sigma_i^2.$$

Assume that the third absolute central moments


 * $$r_n^3 = \sum_{i = 1}^n \mbox{E}\left({\left| X_i - \mu_i \right|}^3 \right)$$

are finite for every n, and that


 * $$\lim_{n \to \infty} \frac{r_n}{s_n} = 0.$$

(This is the Lyapunov condition.) We again consider the sum Sn = X1 + ... + Xn. The expected value of Sn is mn = μ1 + ... + μn and its standard deviation is sn. If we standardize Sn by setting


 * $$Z_n = \frac{S_n - m_n}{s_n}$$

then the distribution of Zn converges towards the standard normal distribution N(0,1) as above.
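
A numerical check of the Lyapunov condition for one concrete sequence (illustrative only; the choice Xi ~ Uniform(−i, i) is arbitrary) shows the ratio rn/sn tending to zero:

```python
import numpy as np

# Numerical check of the Lyapunov condition for independent X_i ~ Uniform(-i, i)
# (an arbitrary illustrative choice): here mu_i = 0, sigma_i^2 = i^2 / 3 and
# E|X_i - mu_i|^3 = i^3 / 4, and the ratio r_n / s_n decays like n^(-1/6).
i = np.arange(1, 100_001, dtype=float)
s_n = np.sqrt(np.cumsum(i**2 / 3.0))            # s_n^2 = sum of the variances
r_n = np.cumsum(i**3 / 4.0) ** (1.0 / 3.0)      # r_n^3 = sum of third absolute moments
for n in (10, 1_000, 100_000):
    print(f"n = {n:6d}   r_n / s_n = {r_n[n - 1] / s_n[n - 1]:.3f}")
```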

Lindeberg condition
In the same setting and with the same notation as above, we can replace the Lyapunov condition with the following weaker one (from Lindeberg in 1920). For every ε > 0



 * $$\lim_{n \to \infty} \sum_{i = 1}^{n} \mbox{E}\left( \frac{(X_i - \mu_i)^2}{s_n^2} : \left| X_i - \mu_i \right| > \varepsilon s_n \right) = 0$$

where E(U : V > c) denotes E(U 1{V > c}), i.e., the expectation of the random variable U 1{V > c}, which equals U when V > c and zero otherwise. Then the distribution of the standardized sum Zn converges towards the standard normal distribution N(0,1).
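
The same example sequence as in the Lyapunov sketch above can be used to evaluate the Lindeberg sum directly (illustrative only; Xi ~ Uniform(−i, i) and ε = 0.1 are arbitrary choices):

```python
import numpy as np

# Numerical check of the Lindeberg condition for the same independent
# X_i ~ Uniform(-i, i) as in the Lyapunov sketch above.  Since |X_i| <= i and
# s_n grows like n^(3/2), the truncation level eps * s_n eventually exceeds
# every |X_i - mu_i|, so the sum vanishes for any fixed eps.
def lindeberg_sum(n, eps):
    i = np.arange(1, n + 1, dtype=float)
    s_n = np.sqrt(np.sum(i**2 / 3.0))
    cut = eps * s_n
    # For Uniform(-i, i): E[X_i^2 ; |X_i| > cut] = (i^3 - cut^3) / (3 i) if cut < i, else 0.
    contrib = np.where(i > cut, (i**3 - cut**3) / (3.0 * i), 0.0)
    return contrib.sum() / s_n**2

for n in (10, 100, 1_000, 10_000):
    print(f"n = {n:5d}   Lindeberg sum (eps = 0.1) = {lindeberg_sum(n, 0.1):.3f}")
```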

Non-independent case
There are some theorems which treat the case of sums of non-independent variables, for instance the m-dependent central limit theorem, the martingale central limit theorem and the central limit theorem for mixing processes.

History
Tijms (2004, p.169) writes:
 * The central limit theorem has an interesting history. The first version of this theorem was postulated by the French-born English mathematician Abraham de Moivre, who, in a remarkable article published in 1733, used the normal distribution to approximate the distribution of the number of heads resulting from many tosses of a fair coin. This finding was far ahead of its time, and was nearly forgotten until the famous French mathematician Pierre-Simon Laplace rescued it from obscurity in his monumental work Théorie Analytique des Probabilités, which was published in 1812. Laplace expanded De Moivre's finding by approximating the binomial distribution with the normal distribution. But as with De Moivre, Laplace's finding received little attention in his own time. It was not until the nineteenth century was at an end that the importance of the central limit theorem was discerned, when, in 1901, Russian mathematician Aleksandr Lyapunov defined it in general terms and proved precisely how it worked mathematically. Nowadays, the central limit theorem is considered to be the unofficial sovereign of probability theory.

Reference
Henk Tijms, Understanding Probability: Chance Rules in Everyday Life, Cambridge: Cambridge University Press, 2004.