
Uncertainty is a term used in subtly different ways in a number of fields, including philosophy, physics, statistics, economics, finance, insurance, psychology, sociology, engineering, and information science. It applies to predictions of future events, to physical measurements already made, or to the unknown.

Concepts

Main article: Knightian uncertainty

In his seminal work Risk, Uncertainty, and Profit,[1] University of Chicago economist Frank Knight (1921) established the important distinction between risk and uncertainty:

"Uncertainty must be taken in a sense radically distinct from the familiar notion of risk, from which it has never been properly separated.... The essential fact is that 'risk' means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating.... It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all."

Although the terms are used in various ways among the general public, many specialists in decision theory, statistics and other quantitative fields have defined uncertainty and risk more specifically. Doug Hubbard defines uncertainty and risk as:[2]

1. Uncertainty: The lack of certainty; a state of limited knowledge in which it is impossible to exactly describe the existing state or a future outcome, or in which more than one outcome is possible.
2. Measurement of Uncertainty: A set of possible states or outcomes with probabilities assigned to each possible state or outcome; for continuous variables, this includes the application of a probability density function.
3. Risk: A state of uncertainty in which some possible outcomes have an undesired effect or significant loss.
4. Measurement of Risk: A set of measured uncertainties in which some possible outcomes are losses, together with the magnitudes of those losses; for continuous variables, this includes loss functions.

Other taxonomies of uncertainty and decision-making take a broader view of uncertainty and consider how it should be approached from an ethics perspective.[3]

For example, if you do not know whether it will rain tomorrow, then you have a state of uncertainty. If you apply probabilities to the possible outcomes using weather forecasts or even just a calibrated probability assessment, you have quantified the uncertainty. Suppose you quantify your uncertainty as a 90% chance of sunshine. If you are planning a major, costly, outdoor event for tomorrow then you have risk since there is a 10% chance of rain and rain would be undesirable. Furthermore, if this is a business event and you would lose \$100,000 if it rains, then you have quantified the risk (a 10% chance of losing \$100,000). These situations can be made even more realistic by quantifying light rain vs. heavy rain, the cost of delays vs. outright cancellation, etc.
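The distinction between a quantified uncertainty and a risk can be sketched in code. The outcome labels, probabilities, and loss figures below are taken from the rain example; the variable names are hypothetical illustrations, not part of any standard API.

```python
# Quantified uncertainty: each possible outcome is assigned a probability.
forecast = {"sunshine": 0.9, "rain": 0.1}

# A valid probability assignment must sum to 1 over all outcomes.
assert abs(sum(forecast.values()) - 1.0) < 1e-9

# Risk exists when some possible outcome carries an undesired effect (a loss).
losses = {"sunshine": 0.0, "rain": 100_000.0}  # hypothetical losses in dollars
has_risk = any(losses[o] > 0 and p > 0 for o, p in forecast.items())
print(has_risk)  # True: a 10% chance of a $100,000 loss
```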

Some may represent the risk in this example as the "expected opportunity loss" (EOL), the chance of the loss multiplied by the amount of the loss (10% x \$100,000 = \$10,000). That is useful if the organizer of the event is "risk neutral", which most people are not. Most would be willing to pay a premium to avoid the loss. An insurance company, for example, would compute an EOL as a minimum for any insurance coverage, then add to that its other operating costs and profit. Since many people are willing to buy insurance for many reasons, the EOL alone is clearly not the perceived value of avoiding the risk.
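The EOL arithmetic above can be reproduced directly; the function name is a hypothetical illustration of the definition given in the text.

```python
def expected_opportunity_loss(p_loss: float, loss_amount: float) -> float:
    """Chance of the loss multiplied by the amount of the loss."""
    return p_loss * loss_amount

# The event example: a 10% chance of losing $100,000 gives an EOL of $10,000.
eol = expected_opportunity_loss(0.10, 100_000)
```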

Quantitative uses of the terms uncertainty and risk are fairly consistent across fields such as probability theory, actuarial science, and information theory. Some fields also coin new terms without substantially changing the definitions of uncertainty or risk. For example, surprisal is a variation on uncertainty sometimes used in information theory. But outside of the more mathematical uses of the term, usage may vary widely. In cognitive psychology, uncertainty can be real, or just a matter of perception, such as expectations, threats, etc.
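As a brief sketch of the information-theoretic notion mentioned above: the surprisal (self-information) of an outcome with probability p is -log2(p) bits, so rarer outcomes carry more surprisal. This is standard information theory, not a definition specific to this article.

```python
import math

def surprisal(p: float) -> float:
    """Self-information of an outcome with probability p, in bits."""
    if not 0.0 < p <= 1.0:
        raise ValueError("probability must be in (0, 1]")
    return -math.log2(p)

print(surprisal(0.5))   # 1.0 bit: a fair coin flip
print(surprisal(0.25))  # 2.0 bits: a rarer outcome is more surprising
```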

Vagueness or ambiguity are sometimes described as "second order uncertainty", where there is uncertainty even about the definitions of uncertain states or outcomes. The difference here is that this uncertainty is about human definitions and concepts, not an objective fact of nature. It has been argued that ambiguity, however, is always avoidable while uncertainty (of the "first order" kind) is not necessarily avoidable.[4]

Uncertainty may be purely a consequence of a lack of knowledge of obtainable facts. That is, you may be uncertain about whether a new rocket design will work, but this uncertainty can be removed with further analysis and experimentation. At the subatomic level, however, uncertainty may be a fundamental and unavoidable property of the universe. In quantum mechanics, the Heisenberg Uncertainty Principle puts limits on how much an observer can ever know about the position and velocity of a particle. This may not just be ignorance of potentially obtainable facts but that there is no fact to be found. There is some controversy in physics as to whether such uncertainty is an irreducible property of nature or if there are "hidden variables" that would describe the state of a particle even more exactly than Heisenberg's uncertainty principle allows.

Measurements

Main article: Measurement uncertainty

In metrology, physics, and engineering, the uncertainty or margin of error of a measurement is stated by giving a range of values which are likely to enclose the true value. This may be denoted by error bars on a graph, or by the following notations:

• measured value ± uncertainty
• measured value(uncertainty)

The latter "concise notation" is used, for example, by IUPAC in stating the atomic mass of elements. There, the uncertainty applies only to the least significant figures of the value. For instance, 1.00794(7) stands for 1.00794 ± 0.00007.
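The concise notation can be expanded mechanically. The parser below is a hypothetical sketch that handles only the simple decimal case shown above (a single-digit or multi-digit uncertainty on a plain decimal value, no exponents).

```python
def expand_concise(notation: str) -> tuple[float, float]:
    """Expand e.g. '1.00794(7)' into (1.00794, 0.00007).

    Assumes the parenthesized digits apply to the least significant
    figures of a plain decimal value.
    """
    value_part, unc_part = notation.rstrip(")").split("(")
    decimals = len(value_part.split(".")[1]) if "." in value_part else 0
    uncertainty = int(unc_part) * 10.0 ** (-decimals)
    return float(value_part), uncertainty

value, unc = expand_concise("1.00794(7)")
# value is 1.00794 and unc is approximately 0.00007
```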

Often, the uncertainty of a measurement is found by repeating the measurement enough times to get a good estimate of the standard deviation of the values. Then, any single value has an uncertainty equal to the standard deviation. However, if the values are averaged, then the mean measurement value has a much smaller uncertainty, equal to the standard error of the mean, which is the standard deviation divided by the square root of the number of measurements.
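The procedure above can be sketched with the Python standard library; the sample values are hypothetical repeated measurements, not data from the text.

```python
import math
import statistics

# Hypothetical repeated measurements of the same quantity.
measurements = [9.8, 10.2, 10.0, 9.9, 10.1]

# Any single value has an uncertainty equal to the sample standard deviation.
single_value_uncertainty = statistics.stdev(measurements)

# The mean has a smaller uncertainty: the standard error of the mean,
# i.e. the standard deviation divided by the square root of the count.
mean_value = statistics.mean(measurements)
mean_uncertainty = single_value_uncertainty / math.sqrt(len(measurements))

assert mean_uncertainty < single_value_uncertainty
```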

When the uncertainty represents the standard error of the measurement, then about 68.3% of the time, the true value of the measured quantity falls within the stated uncertainty range. For example, it is likely that for about 31.7% of the atomic mass values given on the list of elements by atomic mass, the true value lies outside of the stated range. If the width of the interval is doubled, then probably only 4.6% of the true values lie outside the doubled interval, and if the width is tripled, probably only 0.3% lie outside. These values follow from the properties of the normal distribution, and they apply only if the measurement process produces normally distributed errors. In that case, the quoted standard errors are easily converted to 68.3% ("one sigma"), 95.4% ("two sigma"), or 99.7% ("three sigma") confidence intervals.
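The coverage probabilities quoted above follow from the normal distribution and can be checked with the error function (the standard relation for a normal distribution; the function name below is a hypothetical illustration):

```python
import math

def coverage(k_sigma: float) -> float:
    """Probability that a normally distributed error falls within
    ±k_sigma standard deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k_sigma / math.sqrt(2))

for k in (1, 2, 3):
    inside = coverage(k)
    print(f"{k} sigma: {inside:.1%} inside, {1 - inside:.1%} outside")
```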

In this context, uncertainty depends on both the accuracy and precision of the measurement instrument. The lower the accuracy and precision of an instrument, the larger the measurement uncertainty. Note that precision is often determined as the standard deviation of repeated measurements of a given value, namely using the same method described above to assess measurement uncertainty. However, this method is correct only when the instrument is accurate. When it is inaccurate, the uncertainty is larger than the standard deviation of the repeated measurements, which makes clear that the uncertainty does not depend on instrumental precision alone.