
The standard error of a method of measurement or estimation is the estimated standard deviation of the error in that method. That is, it is the standard deviation of the difference between the measured or estimated values and the true values. Since the true value is by definition unknown, the standard error of an estimate is itself an estimated value.

In particular, the standard error of a sample statistic (such as sample mean) is the estimated standard deviation of the error in the process by which it was generated. In other words, it is the standard deviation of the sampling distribution of the sample statistic. The notation for standard error can be any one of $SE$, $SEM$ (for standard error of measurement or mean), or $S_E$.
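The claim that the standard error of a sample statistic equals the standard deviation of its sampling distribution can be checked by simulation. The sketch below (a minimal illustration, with an arbitrarily chosen population mean, standard deviation, and sample size) repeatedly draws samples from a known normal population and compares the spread of the resulting sample means against $\sigma/\sqrt{n}$:

```python
import math
import random
import statistics

random.seed(0)
n = 25          # sample size (chosen for illustration)
sigma = 2.0     # population standard deviation, known here by construction
trials = 20000  # number of repeated samples

# Draw many independent samples of size n and record each sample mean.
means = [statistics.fmean(random.gauss(10.0, sigma) for _ in range(n))
         for _ in range(trials)]

# Standard deviation of the simulated sampling distribution of the mean...
empirical = statistics.stdev(means)
# ...should be close to the theoretical value sigma / sqrt(n) = 0.4.
theoretical = sigma / math.sqrt(n)
```

With enough trials, `empirical` converges to `theoretical`, which is what licenses using $\sigma/\sqrt{n}$ (or its estimate) as the standard error of the mean.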

Standard errors are widely used because they provide a simple measure of the uncertainty in a value.

The standard error of the mean of a sample from a population is the standard deviation of the sampling distribution of the mean, and may be estimated by the formula:

$S_E = \frac{\widehat\sigma}{\sqrt{n}}$

where

$\widehat\sigma$ is an estimate of the standard deviation $\sigma$ of the population, and
$n$ is the size (number of items) of the sample.
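The formula above translates directly into code. The sketch below (a minimal example with hypothetical measurement data) estimates $\widehat\sigma$ with the usual sample standard deviation and divides by $\sqrt{n}$:

```python
import math
import statistics

def standard_error(sample):
    """Standard error of the mean: estimated population std dev / sqrt(n)."""
    n = len(sample)
    # statistics.stdev uses the (n - 1) denominator, the usual
    # unbiased-variance estimate of the population sigma.
    sigma_hat = statistics.stdev(sample)
    return sigma_hat / math.sqrt(n)

measurements = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.0]  # hypothetical sample
se = standard_error(measurements)
```

Because $\widehat\sigma$ is divided by $\sqrt{n}$, quadrupling the sample size halves the standard error of the mean.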

Note: Standard error may also be defined as the standard deviation of the residual error term. (Kenney and Keeping, p. 187; Zwillinger 1995, p. 626)