Standard error (statistics)

In statistics, the standard error of a measured value or estimated quantity is the standard deviation of the process by which it was generated, after adjusting for sample size. In the most common case, the standard error of the mean is the standard deviation of the sample mean.

Standard errors provide simple measures of uncertainty in a value and are often used because:
 * If the standard errors of several individual quantities are known, then the standard error of some function of those quantities can, in many cases, be easily calculated;
 * Where the probability distribution of the value is known, they can be used to calculate an exact confidence interval; and
 * Where the probability distribution is unknown, inequalities such as Chebyshev's inequality or the Vysochanskij–Petunin inequality can be used to calculate a conservative confidence interval.
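The distribution-free case can be sketched in Python. Chebyshev's inequality gives $P(|X-\mu| \ge k\sigma) \le 1/k^2$, so choosing $k = 1/\sqrt{1-c}$ guarantees at least coverage $c$ for any distribution with finite variance; the function names below are illustrative, not from any particular library:

```python
import math

def chebyshev_ci(mean, se, confidence=0.95):
    """Conservative confidence interval via Chebyshev's inequality.

    Valid for any distribution with finite variance: setting
    k = 1 / sqrt(1 - confidence) makes P(|X - mu| >= k*se) <= 1 - confidence.
    """
    k = 1.0 / math.sqrt(1.0 - confidence)
    return mean - k * se, mean + k * se

def normal_ci(mean, se, z=1.96):
    """Exact 95% interval when the sampling distribution is known to be normal."""
    return mean - z * se, mean + z * se
```

For 95% coverage, Chebyshev's multiplier is $\sqrt{20} \approx 4.47$ standard errors, considerably wider than the 1.96 used when normality is known, which is the price of making no distributional assumption.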

The standard error of the mean of a sample from a population is the standard deviation of the sampling distribution of the mean and may be estimated by the formula:


 * $$\frac{\sigma}{\sqrt{n}}$$

where $$\sigma$$ is the standard deviation of the population distribution and $$n$$ is the size (number of items) in the sample.

Single Sample

 * $$\sigma_\overline{x} = \frac{\sigma}{\sqrt{n}}$$
 * $$\sigma_\hat p= \sqrt{\frac{p(1-p)}{n}}$$
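The single-sample formulas above translate directly into code. A minimal sketch in Python (the function names are illustrative):

```python
import math

def se_mean(sigma, n):
    """Standard error of the sample mean: sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

def se_proportion(p, n):
    """Standard error of a sample proportion: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)
```

For example, with a population standard deviation of 15 and a sample of 100, `se_mean(15, 100)` gives 1.5; a proportion of 0.5 in a sample of 100 has standard error `se_proportion(0.5, 100)` = 0.05.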

Two Samples

 * $$\sigma_{\overline{x}_1-\overline{x}_2}=\sqrt{{\sigma_1^2\over n_1}+{\sigma_2^2\over n_2}}$$
 * $$\sigma_{\hat p_1-\hat p_2}=\sqrt{\frac{\hat p_1(1-\hat p_1)}{n_1}+\frac{\hat p_2(1-\hat p_2)}{n_2}}$$
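The two-sample formulas combine the individual variances of independent samples under the square root. A sketch continuing in the same style (illustrative names):

```python
import math

def se_diff_means(sigma1, n1, sigma2, n2):
    """Standard error of the difference of two independent sample means:
    sqrt(sigma1^2 / n1 + sigma2^2 / n2)."""
    return math.sqrt(sigma1**2 / n1 + sigma2**2 / n2)

def se_diff_proportions(p1, n1, p2, n2):
    """Standard error of the difference of two independent sample proportions:
    sqrt(p1*(1-p1)/n1 + p2*(1-p2)/n2)."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
```

Note that both functions assume the two samples are independent; otherwise a covariance term would be needed.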

A very important implication of this formula is that the sample size must be quadrupled to halve the standard error. When designing statistical studies where cost is a factor, this can be an important consideration in cost-benefit tradeoffs.
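The quadrupling relationship follows from the $\sqrt{n}$ in the denominator and can be checked numerically (the values here are arbitrary for illustration):

```python
import math

sigma = 10.0  # assumed population standard deviation (illustrative)
for n in (25, 100, 400):
    # Each quadrupling of n doubles sqrt(n), halving the standard error.
    print(n, sigma / math.sqrt(n))
# → 25 2.0
# → 100 1.0
# → 400 0.5
```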

In summary, the standard error of the mean is an estimate of the standard deviation of the sample mean, and it quantifies how far the sample mean is likely to be from the population mean.