Variance

In probability theory and statistics, the variance of a random variable is a measure of its statistical dispersion, indicating how far from the expected value its values typically are. The variance of a real-valued random variable is its second central moment, and it also happens to be its second cumulant. The variance of a random variable is the square of its standard deviation.

Definition
If μ = E(X) is the expected value (mean) of the random variable X, then the variance is


 * $$\operatorname{var}(X) = \operatorname{E}( ( X - \mu ) ^ 2 ).$$

That is, it is the expected value of the square of the deviation of X from its own mean. In plain language, it is the average squared distance of the values from the mean, i.e. the mean squared deviation. The variance of a random variable X is typically designated $$\operatorname{var}(X)$$, $$\sigma_X^2$$, or simply $$\sigma^2$$.

Note that the above definition can be used for both discrete and continuous random variables.
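For a discrete random variable the expectation is a probability-weighted sum, so the definition can be evaluated directly. A minimal sketch in plain Python (the fair six-sided die is an arbitrary illustrative choice):

```python
# Variance of a discrete random variable straight from the definition:
# var(X) = E((X - mu)^2), illustrated with a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6            # uniform probabilities summing to 1

mu = sum(x * p for x, p in zip(values, probs))                 # E(X) = 3.5
var_x = sum((x - mu) ** 2 * p for x, p in zip(values, probs))  # E((X - mu)^2)

print(mu, var_x)  # 3.5 and 35/12 ≈ 2.9167
```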

Many distributions, such as the Cauchy distribution, do not have a variance because the relevant integral diverges. In particular, if a distribution does not have expected value, it does not have variance either. The opposite is not true: there are distributions for which expected value exists, but variance does not.

Properties
If the variance is defined, it is never negative, because squared deviations are nonnegative. The unit of variance is the square of the unit of observation: for example, the variance of a set of heights measured in centimeters will be given in square centimeters. This inconvenience has motivated many statisticians to use instead the square root of the variance, known as the standard deviation, as a summary of dispersion.

It can be proven easily from the definition that the variance does not depend on the mean value $$\mu$$. That is, if the variable is "displaced" by an amount b by taking X+b, the variance of the resulting random variable is left untouched. By contrast, if the variable is multiplied by a scaling factor a, the variance is multiplied by $$a^2$$. More formally, if a and b are real constants and X is a random variable whose variance is defined,


 * $$\operatorname{var}(aX+b)=a^2\operatorname{var}(X)$$
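As a quick numerical check of this identity (a sketch in plain Python; the sample values and the constants a and b are arbitrary choices), the variance of a transformed data set scales exactly by $$a^2$$ while the shift b drops out:

```python
# Numerical check of var(aX + b) = a^2 var(X): shifting by b has no
# effect, scaling by a multiplies the variance by a squared.
def variance(data):
    m = sum(data) / len(data)
    return sum((x - m) ** 2 for x in data) / len(data)

xs = [2.0, 3.0, 5.0, 7.0, 11.0]   # arbitrary sample
a, b = 3.0, 100.0

v_x = variance(xs)
v_axb = variance([a * x + b for x in xs])
print(v_axb / v_x)  # a**2 (here 9.0, up to rounding)
```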

Another formula for the variance that follows in a straightforward manner from the linearity of expected values and the above definition is:


 * $$\operatorname{var}(X) = \operatorname{E}(X^2 - 2\,X\,\operatorname{E}(X) + (\operatorname{E}(X))^2) = \operatorname{E}(X^2) - 2(\operatorname{E}(X))^2 + (\operatorname{E}(X))^2 = \operatorname{E}(X^2) - (\operatorname{E}(X))^2.$$

This is often used to calculate the variance in practice.
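The shortcut can be checked against the definition on any discrete distribution. A sketch in plain Python (the fair die is again an arbitrary example), computing the variance both ways:

```python
# var(X) = E(X^2) - (E(X))^2 versus the defining formula E((X - mu)^2).
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

mean = sum(x * p for x, p in zip(values, probs))            # E(X)
mean_of_square = sum(x * x * p for x, p in zip(values, probs))  # E(X^2)

var_shortcut = mean_of_square - mean ** 2
var_definition = sum((x - mean) ** 2 * p for x, p in zip(values, probs))
print(var_shortcut, var_definition)  # both 35/12 ≈ 2.9167
```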

One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of independent random variables is the sum of their variances. A weaker condition than independence, called uncorrelatedness, also suffices. In general,


 * $$\operatorname{var}(aX+bY) =a^2 \operatorname{var}(X) + b^2 \operatorname{var}(Y) + 2ab \operatorname{cov}(X, Y).$$

Here $$\operatorname{cov}$$ is the covariance, which is zero for independent random variables (if it exists).
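Because the identity above is purely algebraic, it holds exactly for sample variances and covariances computed with a common denominator. A small check in plain Python (the paired data and the coefficients are arbitrary):

```python
# Check var(aX + bY) = a^2 var(X) + b^2 var(Y) + 2ab cov(X, Y)
# on paired data, using the same 1/N denominator throughout.
xs = [1.0, 2.0, 4.0, 8.0]
ys = [3.0, 1.0, 5.0, 7.0]
a, b = 2.0, -1.0

def mean(d):
    return sum(d) / len(d)

def var(d):
    m = mean(d)
    return sum((v - m) ** 2 for v in d) / len(d)

def cov(d, e):
    md, me = mean(d), mean(e)
    return sum((u - md) * (v - me) for u, v in zip(d, e)) / len(d)

lhs = var([a * x + b * y for x, y in zip(xs, ys)])
rhs = a ** 2 * var(xs) + b ** 2 * var(ys) + 2 * a * b * cov(xs, ys)
print(lhs, rhs)  # equal (up to rounding)
```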

Approximating the variance of a function
The delta method uses Taylor expansions to approximate the variance of a function of one or more random variables. For example, expanding f to first order about the mean gives the approximate variance of a function of one variable as


 * $$\operatorname{var}\left[f(X)\right]\approx \left(f'(\operatorname{E}\left[X\right])\right)^2\operatorname{var}\left[X\right]$$

provided that $$f(\cdot)$$ is twice differentiable and that the mean and variance of $$X$$ are finite.
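The approximation can be checked by simulation. In the sketch below (plain Python; the choice $$f = \exp$$ and the parameters of $$X$$ are arbitrary), X is normal with a small standard deviation, so the Monte Carlo variance of f(X) lands close to $$(f'(\operatorname{E}[X]))^2\operatorname{var}[X]$$:

```python
import math
import random

random.seed(42)  # reproducibility

mu, sigma = 2.0, 0.05            # X ~ Normal(mu, sigma^2), sigma small
f, f_prime = math.exp, math.exp  # f(x) = e^x, so f'(x) = e^x

# Delta-method approximation: var(f(X)) ≈ f'(E X)^2 * var(X)
approx = f_prime(mu) ** 2 * sigma ** 2

# Monte Carlo estimate of var(f(X)) for comparison
samples = [f(random.gauss(mu, sigma)) for _ in range(200_000)]
m = sum(samples) / len(samples)
mc_var = sum((s - m) ** 2 for s in samples) / len(samples)

print(approx, mc_var)  # the two agree to within a few percent
```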

Population variance and sample variance
In general, the population variance of a finite population is given by
 * $$\sigma^2 = \sum_{i=1}^N \left(x_i - \overline{x}\right)^2 \, \Pr(x_i),$$

where $$\overline{x}$$ is the population mean. This is merely a special case of the general definition of variance introduced above, but restricted to finite populations.

In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with large finite populations, it is almost never possible to find the exact value of the population variance, due to time, cost, and other resource constraints. When dealing with infinite populations, this is generally impossible.

A common method of estimating the variance of large (finite or infinite) populations is sampling. We start with a finite sample of values taken from the overall population. Suppose that our sample is the sequence $$(y_1,\dots,y_N)$$. There are two distinct things we can do with this sample: first, we can treat it as a finite population and describe its variance; second, we can estimate the underlying population variance from this sample.

The variance of the sample $$(y_1,\dots,y_N)$$, viewed as a finite population, is


 * $$\sigma^2 = \frac{1}{N} \sum_{i=1}^N \left(y_i - \overline{y}\right)^2,$$

where $$\overline{y}$$ is the sample mean. This is sometimes known as the sample variance; however, that term is ambiguous. Some electronic calculators can calculate $$\sigma^2$$ at the press of a button, in which case that button is usually labelled "$$\sigma^2$$".

When using the sample $$(y_1,\dots,y_N)$$ to estimate the variance of the underlying larger population the sample was drawn from, it may be tempting to equate the population variance with $$\sigma^2$$. However, $$\sigma^2$$ is a biased estimator of the population variance. The following is an unbiased estimator:


 * $$s^2 = \frac{1}{N-1} \sum_{i=1}^N \left(y_i - \overline{y}\right)^2,$$

where $$\overline{y}$$ is the sample mean. The term $$N-1$$ in the denominator contrasts with the equation for $$\sigma^2$$, which has $$N$$ in the denominator. Note that $$s^2$$ is generally not identical to the true population variance; it is merely an estimate, though perhaps a very good one if $$N$$ is large. Because $$s^2$$ is a variance estimate based on a finite sample, it too is sometimes referred to as the sample variance.
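The two quantities differ only in the denominator, as the sketch below illustrates (plain Python; the data values are an arbitrary example):

```python
# sigma^2 divides by N (the sample viewed as a finite population);
# s^2 divides by N - 1 (unbiased estimate of the population variance).
ys = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(ys)
ybar = sum(ys) / n                     # sample mean = 5.0
ss = sum((y - ybar) ** 2 for y in ys)  # sum of squared deviations = 32.0

sigma2 = ss / n        # population-style variance: 4.0
s2 = ss / (n - 1)      # unbiased estimator: 32/7 ≈ 4.571
print(sigma2, s2)
```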

One common source of confusion is that the term sample variance may refer to either the unbiased estimator $$s^2$$ of the population variance, or to the variance $$\sigma^2$$ of the sample viewed as a finite population. Both can be used to estimate the true population variance, but $$s^2$$ is unbiased. Intuitively, computing the variance by dividing by $$N$$ instead of $$N-1$$ underestimates the population variance. This is because we are using the sample mean $$\overline{y}$$ as an estimate of the unknown population mean $$\mu$$, and the raw counts of repeated elements in the sample instead of the unknown true probabilities.

In practice, for large $$N$$, the distinction is often a minor one. In the course of statistical measurements, sample sizes so small as to warrant the use of the unbiased variance virtually never occur. In this context Press et al. commented that if the difference between n and n − 1 ever matters to you, then you are probably up to no good anyway, e.g., trying to substantiate a questionable hypothesis with marginal data.

An unbiased estimator
We will demonstrate why $$s^2$$ is an unbiased estimator of the population variance. An estimator $$\hat{\theta}$$ for a parameter $$\theta$$ is unbiased if $$\operatorname{E}\{ \hat{\theta}\} = \theta$$. Therefore, to prove that $$s^2$$ is unbiased, we will show that $$\operatorname{E}\{ s^2\} = \sigma^2$$. We assume that the $$x_i$$ are drawn independently from a population with mean $$\mu$$ and variance $$\sigma^2$$; independence implies $$\operatorname{E}\{ (x_i - \mu)(x_j - \mu)\} = 0$$ for $$i \neq j$$, which is what makes the cross terms collapse in the derivation.


 * $$\begin{aligned}
\operatorname{E}\{ s^2 \}
&= \operatorname{E}\left\{ \frac{1}{n-1} \sum_{i=1}^n \left( x_i - \overline{x} \right)^2 \right\} \\
&= \frac{1}{n-1} \sum_{i=1}^n \operatorname{E}\left\{ \left( x_i - \overline{x} \right)^2 \right\} \\
&= \frac{1}{n-1} \sum_{i=1}^n \operatorname{E}\left\{ \left( (x_i - \mu) - (\overline{x} - \mu) \right)^2 \right\} \\
&= \frac{1}{n-1} \sum_{i=1}^n \left[ \operatorname{E}\left\{ (x_i - \mu)^2 \right\} - 2\,\operatorname{E}\left\{ (x_i - \mu)(\overline{x} - \mu) \right\} + \operatorname{E}\left\{ (\overline{x} - \mu)^2 \right\} \right] \\
&= \frac{1}{n-1} \sum_{i=1}^n \left[ \sigma^2 - \frac{2}{n} \sum_{j=1}^n \operatorname{E}\left\{ (x_i - \mu)(x_j - \mu) \right\} + \frac{1}{n^2} \sum_{j=1}^n \sum_{k=1}^n \operatorname{E}\left\{ (x_j - \mu)(x_k - \mu) \right\} \right] \\
&= \frac{1}{n-1} \sum_{i=1}^n \left[ \sigma^2 - \frac{2\sigma^2}{n} + \frac{\sigma^2}{n} \right] \\
&= \frac{1}{n-1} \sum_{i=1}^n \frac{(n-1)\sigma^2}{n} \\
&= \frac{(n-1)\sigma^2}{n-1} = \sigma^2.
\end{aligned}$$

See also algorithms for calculating variance.
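Unbiasedness can also be seen empirically: averaging $$s^2$$ over many independent samples approaches $$\sigma^2$$, while the $$1/n$$ version approaches $$(n-1)\sigma^2/n$$. A Monte Carlo sketch in plain Python (the normal population and the sample size are arbitrary choices):

```python
import random

random.seed(7)  # reproducibility

true_var = 4.0          # population variance sigma^2
n, trials = 5, 50_000   # small samples make the bias visible

sum_s2 = 0.0            # running total of the unbiased estimator
sum_biased = 0.0        # running total of the 1/n estimator
for _ in range(trials):
    xs = [random.gauss(0.0, true_var ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    sum_s2 += ss / (n - 1)
    sum_biased += ss / n

mean_s2 = sum_s2 / trials            # ≈ sigma^2 = 4.0
mean_biased = sum_biased / trials    # ≈ (n-1)/n * sigma^2 = 3.2
print(mean_s2, mean_biased)
```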

Alternate proof

 * $$\begin{aligned}
\operatorname{E}\left[ \sum_{i=1}^n (X_i - \overline{X})^2 \right]
&= \operatorname{E}\left[ \sum_{i=1}^n X_i^2 \right] - n\,\operatorname{E}\left[ \overline{X}^2 \right] \\
&= n\,\operatorname{E}[X_i^2] - \frac{1}{n}\,\operatorname{E}\left[ \left( \sum_{i=1}^n X_i \right)^2 \right] \\
&= n\left( \operatorname{var}[X_i] + (\operatorname{E}[X_i])^2 \right) - \frac{1}{n}\,\operatorname{E}\left[ \left( \sum_{i=1}^n X_i \right)^2 \right] \\
&= n\sigma^2 + \frac{1}{n}\left( n\,\operatorname{E}[X_i] \right)^2 - \frac{1}{n}\,\operatorname{E}\left[ \left( \sum_{i=1}^n X_i \right)^2 \right] \\
&= n\sigma^2 - \frac{1}{n}\left( \operatorname{E}\left[ \left( \sum_{i=1}^n X_i \right)^2 \right] - \left( \operatorname{E}\left[ \sum_{i=1}^n X_i \right] \right)^2 \right) \\
&= n\sigma^2 - \frac{1}{n}\operatorname{var}\left[ \sum_{i=1}^n X_i \right] = n\sigma^2 - \frac{1}{n}(n\sigma^2) = (n-1)\sigma^2.
\end{aligned}$$

Dividing both sides by $$n-1$$ then gives $$\operatorname{E}\{ s^2 \} = \sigma^2$$.

Generalizations
If X is a vector-valued random variable, with values in $$\mathbf{R}^n$$, and thought of as a column vector, then the natural generalization of variance is $$\operatorname{E}[(X - \mu)(X - \mu)^{\mathrm{T}}]$$, where $$\mu = \operatorname{E}(X)$$ and $$X^{\mathrm{T}}$$ is the transpose of $$X$$, and so is a row vector. This variance is a nonnegative-definite square matrix, commonly referred to as the covariance matrix.

If X is a complex-valued random variable, then its variance is $$\operatorname{E}[(X - \mu)(X - \mu)^*]$$, where $$X^*$$ is the complex conjugate of $$X$$. This variance is a nonnegative real number.
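For the vector case, the covariance matrix can be estimated from data by averaging outer products of centered observations. A minimal sketch in plain Python (the 2-dimensional data set is an arbitrary example):

```python
# Estimate the covariance matrix E[(X - mu)(X - mu)^T] of a
# 2-dimensional random vector by averaging outer products of
# centered observations.
data = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (4.0, 3.0)]
n, d = len(data), 2

mu = [sum(row[j] for row in data) / n for j in range(d)]  # componentwise mean
cov = [[sum((row[j] - mu[j]) * (row[k] - mu[k]) for row in data) / n
        for k in range(d)]
       for j in range(d)]

print(cov)  # symmetric; diagonal entries are the componentwise variances
```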

History
The term variance was first introduced by Ronald Fisher in his 1918 paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance.

Moment of inertia
The variance of a probability distribution is analogous to the moment of inertia, in classical mechanics, of a corresponding linear mass distribution with respect to rotation about its center of mass. It is because of this analogy that such things as the variance are called moments of probability distributions.