Extrapolation

In mathematics, extrapolation is the process of constructing new data points outside a discrete set of known data points. It is similar to the process of interpolation, which constructs new points between known points, but its results are often less meaningful, and are subject to greater uncertainty.

Linear extrapolation
This means creating a tangent line at the end of the known data and extending it beyond that limit. A linear extrapolation will only provide good results when used to extend the graph of an approximately linear function. A linear extrapolation can be done easily with a ruler on a written graph or with a computer. An example is a trend line.
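As a sketch of the idea, the following Python function (the name and signature are illustrative, not from any particular library) extends the line through the last two known data points:

```python
def linear_extrapolate(x1, y1, x2, y2, x):
    """Extend the line through the last two known points (x1, y1), (x2, y2) to x."""
    slope = (y2 - y1) / (x2 - x1)
    return y2 + slope * (x - x2)

# Points taken from the line y = 2x + 1: extending past the last point follows the trend.
print(linear_extrapolate(3.0, 7.0, 4.0, 9.0, 6.0))  # 13.0
```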

Conic extrapolation
A conic section can be created using five points near the end of the known data. If the conic section created is an ellipse or circle, it will curve back on itself. A parabolic or hyperbolic curve will not, but may curve back relative to the X-axis. This type of extrapolation could be done with a conic sections template on a written graph or with a computer.
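To illustrate the five-point construction, the general conic A x² + B x y + C y² + D x + E y + F = 0 passing through five points can be found as the null vector of a 5×6 homogeneous linear system. The sketch below assumes NumPy is available and recovers the unit circle from five of its points:

```python
import numpy as np

def conic_through(points):
    """Coefficients (A, B, C, D, E, F) of the conic through five given points,
    computed as the right singular vector for the smallest singular value."""
    rows = [[x * x, x * y, y * y, x, y, 1.0] for x, y in points]
    _, _, vt = np.linalg.svd(np.array(rows))
    return vt[-1]  # null-space direction of the 5x6 system

# Five points on the unit circle x^2 + y^2 - 1 = 0.
pts = [(1, 0), (0, 1), (-1, 0), (0, -1), (2 ** -0.5, 2 ** -0.5)]
A, B, C, D, E, F = conic_through(pts)
# Up to an overall scale, (A, B, C, D, E, F) is proportional to (1, 0, 1, 0, 0, -1).
```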

Polynomial extrapolation
A polynomial curve can be created through the entire known data or just near the end. The resulting curve can then be extended beyond the end of the known data. Polynomial extrapolation is typically done by means of Lagrange interpolation or using Newton's method of finite differences to create a Newton series that fits the data. The resulting polynomial may be used to extrapolate the data.
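A minimal sketch of Lagrange interpolation used for extrapolation (the function name is illustrative): the polynomial through the known points is evaluated beyond the last data point.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Data sampled from y = x^2; the interpolating polynomial is x^2 itself,
# so evaluating outside the data range still reproduces the underlying function.
xs, ys = [0, 1, 2, 3], [0, 1, 4, 9]
print(lagrange_eval(xs, ys, 5.0))  # close to 25.0 (up to rounding)
```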

Quality of extrapolation
Typically, the quality of a particular method of extrapolation is limited by the assumptions about the function made by the method. If the method assumes the data is smooth, then a non-smooth function will be poorly extrapolated.

Even when the assumptions about the function are appropriate, the extrapolation can diverge severely from the function. The classic example is truncated power series representations of sin(x) and related trigonometric functions. For instance, taking only data near x = 0, we may estimate that the function behaves as sin(x) ~ x. In the neighborhood of x = 0, this is an excellent estimate. Away from x = 0, however, the extrapolation moves arbitrarily far from the x-axis while sin(x) remains in the interval [−1, 1]; that is, the error increases without bound.

Taking more terms in the power series of sin(x) around x = 0 will produce better agreement over a larger interval near x = 0, but will still produce extrapolations that diverge away from the x-axis.
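This behavior can be checked numerically. The sketch below compares a three-term truncation of the sin series with the true function: near x = 0 the agreement is excellent, but far from 0 the truncated series is enormous while sin stays bounded.

```python
import math

def sin_taylor(x, n_terms):
    """Truncated Taylor series of sin(x) about 0 with n_terms nonzero terms."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

# Near 0 the truncation is excellent; far from 0 it diverges while sin stays in [-1, 1].
for x in (0.1, 3.0, 10.0):
    print(x, sin_taylor(x, 3), math.sin(x))
```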

This divergence is a specific property of extrapolation methods and is only circumvented when the functional forms assumed by the extrapolation method (inadvertently or intentionally due to additional information) accurately represent the nature of the function being extrapolated. For particular problems, this additional information may be available, but in the general case, it is impossible to satisfy all possible function behaviors with a workably small set of potential behaviors.

The extent to which an extrapolation is accurate is known as the "prediction confidence interval," and is usually expressed as an upper and lower boundary within which the prediction is expected to be accurate 19 times out of 20 (a 95% confidence interval).

Examples of extrapolation error
An extrapolation's reliability is indicated by its prediction confidence interval, which widens the further the prediction lies from the known data and often extends to physically impossible values. Extrapolating beyond the range in which the interval remains plausible can lead to misleading results.

For example, the death rate from a new disease may increase dramatically early on. If the graph of the death rate is then extrapolated linearly, it might appear that the entire human population will be dead from the disease within a few years. In reality, the death rate from a newly discovered disease may fall as the susceptible die off and the remainder alter their behavior to avoid contracting the disease. Those who remain may also have a natural immunity to the disease or an acquired immunity due to exposure. Medical treatments affecting the spread and death rate of the disease may be developed as well. A simple linear extrapolation effectively assumes an unlimited population: if the extrapolated trend grows faster than the population itself, it will eventually predict that more people have died than have ever been alive.

Similarly, if the amount of water in a lake is decreasing over time, a linear extrapolation will predict that there will be a negative amount of water shortly after the water is gone. This is an absurd result which indicates that the extrapolation is being performed in the wrong domain.
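The lake example reduces to simple arithmetic; the numbers below are hypothetical, chosen only to show the linear model leaving its valid domain.

```python
def lake_volume(year):
    """Hypothetical linear model: 20 units of water at year 0, losing 5 per year."""
    return 20.0 - 5.0 * year

print(lake_volume(3))   # 5.0 -- still plausible
print(lake_volume(10))  # -30.0 -- impossible: the model has left its valid domain
```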

Selection of an improper domain, such as an infinite domain when all possible values are finite, or a negative domain for nonnegative values, is the second most common extrapolation error after failure to include a prediction confidence interval. See also: logistic curve.

Extrapolation in the complex plane
In complex analysis, a problem of extrapolation may be converted into an interpolation problem by the change of variable z ↦ 1/z. This transform exchanges the part of the complex plane inside the unit circle with the part of the complex plane outside of the unit circle. In particular, the compactification point at infinity is mapped to the origin and vice versa. Care must be taken with this transform however, since the original function may have had "features", for example poles and other singularities, at infinity that were not evident from the sampled data.
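A quick numerical check of the exchange property, using Python's built-in complex numbers: the moduli of z and 1/z are reciprocals, so points outside the unit circle map inside it and vice versa.

```python
# w = 1/z exchanges the inside and outside of the unit circle:
# |w| = 1/|z|, so |z| > 1 exactly when |w| < 1.
for z in (2 + 0j, 0.25 + 0.25j, 3 - 4j):
    w = 1 / z
    print(abs(z), abs(w))
```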

Another problem of extrapolation is loosely related to the problem of analytic continuation, where (typically) a power series representation of a function is expanded at one of its points of convergence to produce a power series with a larger radius of convergence. In effect, a set of data from a small region is used to extrapolate a function onto a larger region.

Again, analytic continuation can be thwarted by function features that were not evident from the initial data.

Also, one may use sequence transformations like Padé approximants and Levin-type sequence transformations as extrapolation methods that lead to a summation of power series that are divergent outside the original radius of convergence. In this case, one often obtains rational approximants.
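As a minimal illustration, the [1/1] Padé approximant of log(1 + x), derived by hand here from the first two Maclaurin coefficients (it is x / (1 + x/2)), gives sensible values at x = 3, well outside the series' radius of convergence 1, where the truncated series itself is useless:

```python
import math

def log1p_taylor(x, n):
    """Truncated Maclaurin series of log(1 + x); diverges for |x| > 1."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n + 1))

def log1p_pade_1_1(x):
    """[1/1] Pade approximant of log(1 + x), built from the same first
    two series coefficients: x / (1 + x/2)."""
    return x / (1 + x / 2)

# Outside the radius of convergence the partial sums oscillate wildly,
# while the rational approximant stays in the right ballpark.
x = 3.0
print(log1p_taylor(x, 10))   # large, far from the true value
print(log1p_pade_1_1(x))     # 1.2
print(math.log(1 + x))       # ~1.386
```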