Fourier transform


 * This article specifically discusses Fourier transformation of functions on the real line; for other kinds of Fourier transformation, see Fourier analysis and list of Fourier-related transforms.

In mathematics, the Fourier transform is a certain linear operator that maps functions to other functions. Loosely speaking, the Fourier transform decomposes a function into a continuous spectrum of its frequency components, and the inverse transform synthesizes a function from its spectrum of frequency components. A useful analogy is the relationship between a series of pure notes (the frequency components) and a musical chord (the function itself). In mathematical physics, the Fourier transform of a signal $$x(t)\,$$ can be thought of as that signal in the "frequency domain." This is similar to the basic idea of the various other Fourier transforms including the Fourier series of a periodic function. (See also fractional Fourier transform and linear canonical transform for generalizations.)

Definition
Suppose $$x\,$$ is a complex-valued Lebesgue integrable function. The Fourier transform to the frequency domain, $$\omega\,$$, is given by the function:


 * $$ X(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty x(t) e^{- i\omega t}\,dt $$,  for every real number $$\omega. \,$$

When the independent variable t represents time (with SI unit of seconds), the transform variable ω represents angular frequency (in radians per second).

Other notations for this same function are: $$\hat{x}(\omega)\,$$  and  $$\mathcal{F}\{x\}(\omega)\,$$. The function is complex-valued in general. ($$i\,$$ represents the imaginary unit.)

If $$X(\omega)\,$$ is defined as above, and $$x(t)\,$$ is sufficiently smooth, then it can be reconstructed by the inverse transform:


 * $$ x(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} X(\omega) e^{ i\omega t}\,d\omega $$,  for every real number $$t. \,$$
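
As a concrete numerical illustration (a sketch using NumPy and a plain Riemann sum; the grid size and tolerance are arbitrary choices), the unitary transform of $$e^{-|t|}$$ can be checked against its closed form $$\sqrt{2/\pi}\,/(1+\omega^2)$$:

```python
import numpy as np

# Numerical sanity check of the unitary transform on f(t) = exp(-|t|),
# whose transform is known in closed form: X(w) = sqrt(2/pi) / (1 + w^2).
# A plain Riemann sum suffices because the integrand decays rapidly.
t = np.linspace(-40.0, 40.0, 200_001)
dt = t[1] - t[0]
f = np.exp(-np.abs(t))

def fourier(w):
    # (1/sqrt(2*pi)) * integral of f(t) exp(-i w t) dt, approximated on the grid
    return np.sum(f * np.exp(-1j * w * t)) * dt / np.sqrt(2.0 * np.pi)

for w in (0.0, 1.0, 2.5):
    exact = np.sqrt(2.0 / np.pi) / (1.0 + w * w)
    assert abs(fourier(w) - exact) < 1e-6
```

The truncation of the integral to a finite interval is harmless here because $$e^{-|t|}$$ is negligible beyond the grid.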

The interpretation of $$X(\omega)\,$$ is aided by expressing it in polar coordinate form, $$X(\omega) = A(\omega )\cdot e^{i \phi (\omega )} \,$$, where:


 * $$A(\omega ) = |X(\omega)| \, $$ the amplitude
 * $$\phi (\omega ) = \angle X(\omega) \, $$ the phase

Then the inverse transform can be written:


 * $$ x(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} A(\omega) e^{ i(\omega t +\phi (\omega ))}\,d\omega $$

which is a recombination of all the frequency components of $$x(t)\,$$. Each component is a complex sinusoid of the form $$e^{ i\omega t}\,$$ whose amplitude is proportional to $$A(\omega)\,$$ and whose initial phase angle (at t = 0) is $$\phi (\omega )\,$$.

Normalization factors and alternative forms
The factors $$1\over\sqrt{2\pi}$$ before each integral ensure that there is no net change in amplitude when one transforms from one domain to the other and back. The actual requirement is that their product be $$1 \over 2 \pi$$. When they are chosen to be equal, the transform is referred to as unitary. A common non-unitary convention is shown here:


 * $$ X(\omega) = \int_{-\infty}^\infty x(t) e^{- i\omega t}\,dt $$


 * $$ x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega) e^{ i\omega t}\,d\omega $$

As a rule of thumb, mathematicians generally prefer the unitary transform (for symmetry reasons), and physicists use either convention depending on the application.

The non-unitary form is preferred by some engineers as a special case of the bilateral Laplace transform. The substitution $$\omega = 2\pi f\,$$, where $$f\,$$ is ordinary frequency (hertz), results in another unitary transform that is popular in the field of signal processing and communications systems:


 * $$ X(f) = \int_{-\infty}^\infty x(t) e^{-i 2\pi f t}\,dt $$
 * $$ x(t) = \int_{-\infty}^\infty X(f) e^{i 2\pi f t}\,df $$

We note that $$X(f)\,$$ and $$X(\omega)\,$$ represent different, but related, functions, as shown in the table below.

Variations of all three forms can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites. Other than that, the choice is (again) a matter of convention.

Generalization
There are several ways to define the Fourier transform pair. The "forward" and "inverse" transforms are always defined so that the operation of both transforms in either order on a function will return the original function. In other words, the composition of the transform pair is defined to be the identity transformation. Using two arbitrary real constants $$a$$ and $$b$$, the most general definition of the forward 1-dimensional Fourier transform is given by


 * $$X(\omega) = \sqrt{\frac{|b|}{(2 \pi)^{1-a}}} \int_{-\infty}^{+\infty} x(t) e^{-i b \omega t} \, dt $$

and the inverse is given by


 * $$x(t) = \sqrt{\frac{|b|}{(2 \pi)^{1+a}}} \int_{-\infty}^{+\infty} X(\omega) e^{i b \omega t} \, d\omega. $$

Note that the transform definitions are symmetric; they can be reversed by simply changing the signs of a and b.

The convention adopted in this article is $$(a,b) = (0,1)$$. The values of a and b are usually chosen to suit the context in which the transform pair is used. The non-unitary convention above corresponds to $$(a,b)=(1,1)$$. Another very common choice, $$(a,b)=(0,2\pi)$$, is often used in signal processing applications; in this case the angular frequency $$\omega$$ becomes the ordinary frequency f. If f (or ω) and t carry units, their product must be dimensionless: for example, t may be in units of time, specifically seconds, and f (or ω) would then be in hertz (or radians per second).
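
The general $$(a,b)$$ pair can be verified numerically. The sketch below (NumPy; integrals approximated by Riemann sums on truncated grids, an assumption that is harmless because the Gaussian decays rapidly) checks that the inverse undoes the forward transform for several common conventions:

```python
import numpy as np

# Round-trip check of the general (a, b) Fourier pair on a Gaussian:
# applying the inverse to the forward transform should return the input
# for any real a and nonzero b.
t = np.linspace(-15.0, 15.0, 2001)
w = np.linspace(-15.0, 15.0, 2001)
dt, dw = t[1] - t[0], w[1] - w[0]
x = np.exp(-t**2 / 2)

def roundtrip(a, b):
    cf = np.sqrt(abs(b) / (2 * np.pi) ** (1 - a))           # forward constant
    X = cf * np.exp(-1j * b * np.outer(w, t)) @ x * dt      # forward transform
    ci = np.sqrt(abs(b) / (2 * np.pi) ** (1 + a))           # inverse constant
    return ci * np.exp(1j * b * np.outer(t, w)) @ X * dw    # inverse transform

# (0,1): this article; (1,1): non-unitary; (0,2*pi): ordinary frequency
for a, b in [(0, 1), (1, 1), (0, 2 * np.pi)]:
    assert np.max(np.abs(roundtrip(a, b) - x)) < 1e-6
```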

Properties
In this section, all the results are derived for the unitary normalization of the Fourier transform adopted in this article:


 * $$ F(\omega) = \mathcal{F}\{f\}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty f(t) e^{- i\omega t}\,dt $$

See also the "Table of important Fourier transforms" section below for other properties of the continuous Fourier transform.

Completeness
We define the Fourier transform on the set of compactly supported complex-valued functions on $$\mathbb{R}$$ and then extend it by continuity to the Hilbert space of square-integrable functions with the usual inner product. Then $$ \mathcal{F}:L^2(\mathbb{R})\rightarrow L^2(\mathbb{R})$$ is a unitary operator. That is, $$ \mathcal{F}^*=\mathcal{F}^{-1}$$ and the transform preserves inner products (see Parseval's theorem, also described below). Here $$\mathcal{F}^*$$ denotes the adjoint of the Fourier transform operator. Moreover, one can check that
 * $$ \mathcal{F}^2 = \mathcal{J},\quad \mathcal{F}^3 = \mathcal{F}^*=\mathcal{F}^{-1}, \quad \mbox{and} \quad \mathcal{F}^4 = \mathcal{I}\quad $$

where $$\mathcal{J}$$ is the time-reversal operator, defined by
 * $$ ||\mathcal{J}\{f\}(t) - f(-t)||_2 =0 $$

and $$\mathcal{I}$$ is the identity operator, defined by
 * $$ ||\mathcal{I}\{f\}(t) - f(t)||_2 =0$$
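
A discrete analogue of these identities holds for the unitary DFT and is easy to verify numerically; for the DFT, "time reversal" means the index map $$n \mapsto (-n) \bmod N$$:

```python
import numpy as np

# Discrete analogue of F^2 = J (time reversal) and F^4 = I, using the
# unitary DFT (norm="ortho" makes the DFT a unitary operator).
rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)

F = lambda v: np.fft.fft(v, norm="ortho")
x2 = F(F(x))            # should equal x with indices reversed mod N
x4 = F(F(F(F(x))))      # should equal x itself

assert np.allclose(x2, x[(-np.arange(64)) % 64])
assert np.allclose(x4, x)
```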

Extensions
The Fourier transform can also be extended to the space of integrable functions defined on $$ \mathbb{R}^n $$:


 * $$ \mathcal{F}:L^1(\mathbb{R}^n)\rightarrow C(\mathbb{R}^n).$$

where


 * $$ L^1(\mathbb{R}^n) = \{f: \, \mathbb{R}^n \to \mathbb{C} \;\big|\; \int_{\mathbb{R}^n} |f(x)|\, dx < \infty\}.$$

and $$ C(\mathbb{R}^n) $$ is the space of continuous functions on $$ \mathbb{R}^n $$.

In this case the definition usually appears as


 * $$ \mathcal{F}\{f\}(\omega) \ \stackrel{\mathrm{def}}{=}\ \int_{\mathbb{R}^n} f(x)e^{-i\omega\cdot x}\,dx.$$

where $$\omega\in \mathbb{R}^n$$ and $$ \omega \cdot x$$ is the inner product of the two vectors $$\omega$$ and $$x$$.

One may now use this to define the continuous Fourier transform for compactly supported smooth functions, which are dense in $$L^2(\mathbb{R}^n).$$ The Plancherel theorem then allows us to extend the definition of the Fourier transform to functions on $$L^2(\mathbb{R}^n)$$ (even those not compactly supported) by continuity arguments. All the properties and formulas listed on this page apply to the Fourier transform so defined.

Unfortunately, further extensions become more technical. One may use the Hausdorff-Young inequality to define the Fourier transform for $$f\in L^p(\mathbb{R}^n)$$ for $$ 1\leq p\leq 2$$. The Fourier transform of functions in $$ L^p $$ for the range $$ 2<p<\infty $$ requires the study of distributions, since the Fourier transform of some functions in these spaces is no longer a function, but rather a distribution.

The Plancherel theorem and Parseval's theorem
Depending on the author, either of these results may be referred to as the Plancherel theorem or as Parseval's theorem.

If $$ f(t) $$ and $$ g(t) $$ are square-integrable functions on $$\mathbb{R}^n$$ and $$ F(\omega) $$ and $$ G(\omega)$$ are their Fourier transforms, then we have Parseval's theorem:


 * $$\int_{\mathbb{R}^n} f(t) \bar{g}(t) \, dt = \int_{\mathbb{R}^n} F(\omega) \bar{G}(\omega) \, d\omega,$$

where the bar denotes complex conjugation. Therefore, the Fourier transformation yields an isometric automorphism of the Hilbert space $$L^2(\mathbb{R}^n)$$.

The Plancherel theorem, a special case of Parseval's theorem, states that
 * $$\int_{\mathbb{R}^n} \left| f(t) \right|^2\, dt = \int_{\mathbb{R}^n} \left| F(\omega) \right|^2\, d\omega. $$

This theorem is usually interpreted as asserting the unitary property of the Fourier transform. See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups.
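
The discrete counterparts of both theorems, for the unitary DFT, can be checked directly (a NumPy sketch; `norm="ortho"` selects the unitary normalization):

```python
import numpy as np

# Discrete Plancherel and Parseval checks: the unitary DFT preserves
# the squared norm and, more generally, inner products.
rng = np.random.default_rng(1)
f = rng.standard_normal(128) + 1j * rng.standard_normal(128)
g = rng.standard_normal(128) + 1j * rng.standard_normal(128)

Fh = np.fft.fft(f, norm="ortho")
Gh = np.fft.fft(g, norm="ortho")

# Plancherel: sum |f|^2 = sum |F|^2
assert np.isclose(np.sum(np.abs(f)**2), np.sum(np.abs(Fh)**2))
# Parseval: <f, g> = <F, G>   (vdot conjugates its first argument)
assert np.isclose(np.vdot(g, f), np.vdot(Gh, Fh))
```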

Localization property
As a rule of thumb: the more concentrated $$f(t)$$ is, the more spread out is $$F(\omega)$$. In particular, if we "squeeze" a function in $$t$$, it spreads out in $$\omega$$ and vice-versa; and we cannot arbitrarily concentrate both the function and its Fourier transform.

Therefore a function which equals its Fourier transform strikes a precise balance between being concentrated and being spread out. It is easy in principle to construct examples of such functions (called self-dual functions), because the Fourier transform has order 4: iterating it four times on a function returns the original function. Consequently, the sum $$f + \mathcal{F}f + \mathcal{F}^2 f + \mathcal{F}^3 f$$ of the four iterated Fourier transforms of any function $$f$$ is self-dual. There are also explicit examples of self-dual functions, the most important being constant multiples of the Gaussian function


 * $$f(t) = \exp \left( \frac{-t^2}{2} \right).$$

This function is related to Gaussian distributions and is in fact an eigenfunction of the Fourier transform operator. It is worth stressing that self-duality alone does not make the Gaussian special: many self-dual functions exist.
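
The eigenfunction claim can be checked numerically in the unitary convention of this article (a NumPy sketch; the grid is an arbitrary truncation, harmless because the Gaussian decays so fast):

```python
import numpy as np

# Check that exp(-t^2/2) is fixed by the unitary transform:
# (2*pi)^(-1/2) * integral of f(t) exp(-i w t) dt = exp(-w^2/2).
t = np.linspace(-15.0, 15.0, 4001)
dt = t[1] - t[0]
f = np.exp(-t**2 / 2)

for w in (0.0, 0.5, 1.0, 3.0):
    X = np.sum(f * np.exp(-1j * w * t)) * dt / np.sqrt(2 * np.pi)
    assert abs(X - np.exp(-w**2 / 2)) < 1e-10
```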

The trade-off between the compaction of a function and its Fourier transform can be formalized. Suppose $$f(t)$$ and $$F(\omega)$$ are a Fourier transform pair. Without loss of generality, we assume that $$f(t)$$ is normalized:


 * $$\int_{-\infty}^\infty f(t)\bar{f}(t)\,dt=1.$$

It follows from Parseval's theorem that F(ω) is also normalized. Define the expectation value of a function A(t) as:


 * $$\langle A\rangle \ \stackrel{\mathrm{def}}{=}\ \int_{-\infty}^\infty A(t)f(t)\bar{f}(t)\,dt$$

and the expectation value of a function $$B(\omega)$$ as:


 * $$\langle B\rangle \ \stackrel{\mathrm{def}}{=}\ \int_{-\infty}^\infty B(\omega)F(\omega)\bar{F}(\omega)\,d\omega$$

Also define the variance of $$A(t)$$ as:


 * $$\Delta^2 A\ \stackrel{\mathrm{def}}{=}\ \langle (A-\langle A\rangle) ^2\rangle $$

and similarly define the variance of $$B(\omega)$$. Then it can be shown that


 * $$\Delta t\, \Delta \omega \ge \frac{1}{2}.$$

Equality is achieved for the Gaussian function listed above, which shows that the Gaussian function is maximally concentrated in "time-frequency". The most famous practical application of this property is found in quantum mechanics: the momentum and position wave functions are Fourier transform pairs to within a factor of $$h \over 2 \pi$$ and are normalized to unity, and the above expression then becomes a statement of the Heisenberg uncertainty principle.
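
Saturation of the bound by the Gaussian can be confirmed numerically (a sketch; $$\Delta\omega$$ is taken equal to $$\Delta t$$ because the normalized Gaussian is its own transform, as noted above):

```python
import numpy as np

# The normalized Gaussian f(t) = pi^(-1/4) exp(-t^2/2) saturates the
# uncertainty bound: Delta t * Delta omega = 1/2 exactly.
t = np.linspace(-15.0, 15.0, 4001)
dt = t[1] - t[0]
f = np.pi**-0.25 * np.exp(-t**2 / 2)

norm = np.sum(f * f) * dt              # should equal 1
var_t = np.sum(t**2 * f * f) * dt      # <t> = 0 by symmetry, so this is the variance
var_w = var_t                          # the transform is the same Gaussian

assert abs(norm - 1.0) < 1e-10
assert abs(np.sqrt(var_t * var_w) - 0.5) < 1e-10
```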

The Fourier transform also translates between smoothness and decay: if $$f(t)$$ is several times differentiable, then $$F(\omega)$$ decays rapidly towards zero for $$\omega \to \pm\infty$$.

Analysis of differential equations
Fourier transforms, and the closely related Laplace transforms, are widely used in solving differential equations. The Fourier transform is compatible with differentiation in the following sense: if f(t) is a differentiable function with Fourier transform F(ω), then the Fourier transform of its derivative is given by iω F(ω). This can be used to transform differential equations into algebraic equations. Note that this technique only applies to problems whose domain is the whole set of real numbers. By extending the Fourier transform to functions of several variables (as outlined above), partial differential equations with domain $$\mathbb{R}^n$$ can also be translated into algebraic equations.
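
The differentiation rule underlies spectral differentiation, sketched below with the FFT on a sampled Gaussian (grid parameters are arbitrary choices; the Gaussian is effectively periodic on this interval, which is what the FFT assumes):

```python
import numpy as np

# Spectral differentiation: multiply the transform by i*omega and invert.
# Demonstrated on a sampled Gaussian, whose derivative is known exactly.
N, L = 1024, 40.0                          # samples and domain length
t = -L / 2 + (L / N) * np.arange(N)
w = 2 * np.pi * np.fft.fftfreq(N, d=L / N) # angular frequencies of the FFT bins
f = np.exp(-t**2 / 2)

df = np.fft.ifft(1j * w * np.fft.fft(f)).real
exact = -t * np.exp(-t**2 / 2)
assert np.max(np.abs(df - exact)) < 1e-10
```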

Convolution theorem

 * Main article: Convolution theorem

The Fourier transform translates between convolution and multiplication of functions. If $$f(t)$$ and $$h(t)$$ are integrable functions with Fourier transforms $$F(\omega)$$ and $$H(\omega)$$ respectively, and if the convolution of $$f$$ and $$h$$ exists and is absolutely integrable, then the Fourier transform of the convolution is given by the product of the Fourier transforms $$F(\omega) H(\omega)$$ (possibly multiplied by a constant factor depending on the Fourier normalization convention).

In the current normalization convention, this means that if
 * $$g(t) = \{f*h\}(t) = \int_{-\infty}^\infty f(s)h(t - s)\,ds$$

where * denotes the convolution operation; then
 * $$G(\omega) = \sqrt{2\pi}\cdot F(\omega)H(\omega).\,$$

The above formulas hold true for functions defined on both one- and multi-dimension real space. In linear time invariant (LTI) system theory, it is common to interpret $$h(t)$$ as the impulse response of an LTI system with input $$f(t)$$ and output $$g(t)$$, since substituting the unit impulse for $$f(t)$$ yields $$g(t)=h(t)$$. In this case, $$H(\omega)$$ represents the frequency response of the system.

Conversely, if $$f(t)$$ can be written as the product of two functions $$p(t)$$ and $$q(t)$$ whose product is integrable, then the Fourier transform of this product is given by the convolution of the respective Fourier transforms $$P(\omega)$$ and $$Q(\omega)$$, again with a constant scaling factor.

In the current normalization convention, this means that if
 * $$f(t) = p(t) q(t)\,$$

then
 * $$F(\omega) = \frac{1}{\sqrt{2\pi}}  \bigg( P(\omega) * Q(\omega)  \bigg) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty P(\alpha)Q(\omega - \alpha)\,d\alpha.$$
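
Both the convolution identity and the factor $$\sqrt{2\pi}$$ can be verified numerically under the unitary convention (a NumPy sketch using Gaussians and Riemann-sum quadrature; the grid truncation is harmless because Gaussians decay rapidly):

```python
import numpy as np

# Check g = f*h  =>  G = sqrt(2*pi) * F * H (unitary convention) with
# two Gaussians.  The convolution is computed by discrete quadrature.
t = np.linspace(-20.0, 20.0, 2001)
dt = t[1] - t[0]
f = np.exp(-t**2 / 2)
h = np.exp(-t**2 / 2)
g = np.convolve(f, h, mode="same") * dt   # approximates the convolution integral

def ft(x, w):                             # unitary transform at frequency w
    return np.sum(x * np.exp(-1j * w * t)) * dt / np.sqrt(2 * np.pi)

for w in (0.0, 0.7, 1.5):
    assert abs(ft(g, w) - np.sqrt(2 * np.pi) * ft(f, w) * ft(h, w)) < 1e-6
```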

Cross-correlation theorem
In an analogous manner, it can be shown that if $$g(t)$$ is the cross-correlation of $$f(t)$$ and $$h(t)$$:


 * $$g(t)=(f\star h)(t) = \int_{-\infty}^\infty \bar{f}(s)\,h(t+s)\,ds$$

then the Fourier transform of $$g(t)$$ is:


 * $$G(\omega) = \sqrt{2\pi}\,\bar{F}(\omega)\,H(\omega)$$

where capital letters are again used to denote the Fourier transform.
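
The cross-correlation theorem admits the same kind of numerical check (a sketch; the correlation integral is evaluated as a convolution with the conjugated, time-reversed first factor, and shifted Gaussians are used so that the transforms are genuinely complex):

```python
import numpy as np

# Check G = sqrt(2*pi) * conj(F) * H (unitary convention) where g is
# the cross-correlation of f and h.
t = np.linspace(-20.0, 20.0, 2001)
dt = t[1] - t[0]
f = np.exp(-(t - 1)**2 / 2)
h = np.exp(-(t + 2)**2 / 2)
# (f star h)(t) = integral conj(f(s)) h(t+s) ds, as a convolution:
g = np.convolve(np.conj(f)[::-1], h, mode="same") * dt

def ft(x, w):                             # unitary transform at frequency w
    return np.sum(x * np.exp(-1j * w * t)) * dt / np.sqrt(2 * np.pi)

for w in (0.0, 0.7, 1.5):
    lhs = ft(g, w)
    rhs = np.sqrt(2 * np.pi) * np.conj(ft(f, w)) * ft(h, w)
    assert abs(lhs - rhs) < 1e-6
```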

Tempered distributions
The most general and useful context for studying the continuous Fourier transform is given by the tempered distributions; these include all the integrable functions mentioned above and have the added advantage that the Fourier transform of any tempered distribution is again a tempered distribution and the rule for the inverse of the Fourier transform is universally valid. Furthermore, the useful Dirac delta is a tempered distribution but not a function; its Fourier transform is the constant function $$1/\sqrt{2\pi}$$. Distributions can be differentiated, and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions.
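
That the delta's transform is the constant $$1/\sqrt{2\pi}$$ can be illustrated by transforming normalized Gaussians of shrinking width, which approximate the delta (a NumPy sketch; the sample frequency $$\omega = 3$$ is an arbitrary choice):

```python
import numpy as np

# Unit-area Gaussians of width eps approximate the Dirac delta; their
# unitary transforms, exp(-eps^2 w^2 / 2) / sqrt(2*pi), flatten toward
# the constant 1/sqrt(2*pi) as eps -> 0.
t = np.linspace(-10.0, 10.0, 40001)
dt = t[1] - t[0]

def ft(x, w):                             # unitary transform at frequency w
    return np.sum(x * np.exp(-1j * w * t)) * dt / np.sqrt(2 * np.pi)

for eps in (0.5, 0.1, 0.02):
    d = np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
    exact = np.exp(-eps**2 * 9 / 2) / np.sqrt(2 * np.pi)
    assert abs(ft(d, 3.0) - exact) < 1e-8
```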

Table of important Fourier transforms
The following table records some important Fourier transform properties, stated in the unitary angular-frequency convention used in this article. $$F$$ and $$G$$ denote the Fourier transforms of $$f(t)$$ and $$g(t)$$, respectively. $$f$$ and $$g$$ may be integrable functions or tempered distributions.

Fourier transform properties
Notation: $$f(t) \iff F(\omega)$$ denotes that $$f(t)\,$$ and $$F(\omega)\,$$ are a Fourier transform pair.


 * Conjugation
 * $$\overline{f(t)} \iff \overline{F(-\omega)}$$


 * Scaling
 * $$ f(at) \iff \frac{1}{|a|}F\biggl(\frac{\omega}{a}\biggr), \qquad a \in \mathbb{R}, a \ne 0$$


 * Time reversal
 * $$f(-t) \iff F(-\omega)$$


 * Time shift
 * $$f(t-t_0) \iff e^{-i\omega t_0}F(\omega)$$


 * Modulation (multiplication by complex exponential)
 * $$f(t)\cdot e^{i\omega_{0}t} \iff F(\omega-\omega_{0})\qquad \omega_{0} \in \mathbb{R},$$


 * Multiplication by $$\sin \omega_{0}t$$
 * $$f(t)\sin \omega_{0}t \iff \frac{i}{2}[F(\omega+\omega_{0})-F(\omega-\omega_{0})]\,$$


 * Multiplication by $$\cos \omega_{0}t$$
 * $$f(t)\cos \omega_{0}t \iff \frac{1}{2}[F(\omega+\omega_{0})+F(\omega-\omega_{0})]\,$$


 * Integration
 * $$\int_{-\infty}^{t} f(u)\, du \iff \frac{1}{i\omega}F(\omega)+\pi F(0)\delta(\omega)\,$$


 * Parseval's theorem
 * $$\int_{\mathbb{R}} f(t)\cdot \overline{g(t)}\, dt = \int_{\mathbb{R}} F(\omega)\cdot \overline{G(\omega)}\, d\omega \,$$