Quantum logic

In mathematical physics and quantum mechanics, quantum logic is an operator algebraic system for constructing and manipulating logical combinations of quantum mechanical events. It can be regarded as a kind of propositional logic suitable for understanding the apparent anomalies regarding quantum measurement, most notably those concerning composition of measurement operations of complementary variables. This research area and its name originated in the 1936 paper by Garrett Birkhoff and John von Neumann, who attempted to reconcile some of the apparent inconsistencies of classical boolean logic with the facts related to measurement and observation in quantum mechanics.

Quantum logic has been proposed as the correct logic for propositional inference generally, most notably by the philosopher Hilary Putnam, at least at one point in his career. This thesis was an important ingredient in Putnam's paper Is Logic Empirical?, in which he analysed the epistemological status of the rules of propositional logic. Putnam attributes to the physicist David Finkelstein the idea that the anomalies associated with quantum measurement originate in anomalies in the logic of physics itself. This idea, however, had been around for some time and had been revived several years earlier by George Mackey's work on group representations and symmetry.

This logic has some unusual properties; for instance, the distributive law of propositional logic,


 * p and (q or r) = (p and q) or (p and r),

fails in this logic.
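The failure of distributivity can be checked concretely in the lattice of subspaces of a Hilbert space, where meet is subspace intersection and join is the span of the union. The following sketch is a hypothetical illustration (the subspaces chosen are not from any source above): it represents one-dimensional subspaces of R2 by their projection matrices and computes both sides of the distributive law.

```python
import numpy as np

def join(P, Q):
    # Projection onto the span of range(P) + range(Q):
    # orthonormalize the combined columns via SVD and rebuild the projector.
    M = np.hstack([P, Q])
    U, s, _ = np.linalg.svd(M)
    r = int(np.sum(s > 1e-10))
    B = U[:, :r]
    return B @ B.T

def meet(P, Q):
    # By De Morgan duality in the projection lattice:
    # P ^ Q = I - ((I - P) v (I - Q)).
    I = np.eye(P.shape[0])
    return I - join(I - P, I - Q)

def proj(v):
    # Rank-1 projection onto the line spanned by v.
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

# Three one-dimensional subspaces of R^2 (a hypothetical choice):
p = proj([1, 0])
q = proj([0, 1])
r = proj([1, 1])

lhs = meet(p, join(q, r))           # p and (q or r) -- equals p, since q or r is everything
rhs = join(meet(p, q), meet(p, r))  # (p and q) or (p and r) -- equals 0, both meets are trivial

print(np.allclose(lhs, rhs))  # False: distributivity fails
```

Here q &or; r is the whole plane, so the left side is p itself, while p meets both q and r only at the origin, so the right side is the zero subspace.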

The more common view regarding quantum logic, however, is that it provides a formalism for relating observables, system preparation filters and states. In this view, the quantum logic approach resembles more closely the C*-algebraic approach to quantum mechanics; in fact with some minor technical assumptions it can be subsumed by it. The similarities of the quantum logic formalism to a system of deductive logic are regarded more as a curiosity than as a fact of fundamental philosophical importance.

Introduction
In his classic treatise Mathematical Foundations of Quantum Mechanics, von Neumann noted that projections on a Hilbert space can be viewed as propositions about physical observables. The set of principles for manipulating these quantum propositions was called quantum logic by von Neumann and Birkhoff. In his book (also called Mathematical Foundations of Quantum Mechanics) Mackey attempted to provide a set of axioms for this propositional system as an orthocomplemented lattice. Mackey viewed elements of this set as potential yes or no questions an observer might ask about the state of a physical system, questions that would be settled by some measurement. Moreover Mackey defined a physical observable in terms of these basic questions. Mackey's axiom system is somewhat unsatisfactory though, since it assumes that the partially ordered set is actually given as the orthocomplemented closed subspace lattice of a separable Hilbert space. Piron, Ludwig and others have attempted to give axiomatizations which do not require such explicit relations to the lattice of subspaces.

The remainder of this article assumes the reader is familiar with the spectral theory of self-adjoint operators on a Hilbert space. However, the main ideas can be understood using the finite-dimensional spectral theorem.

Projections as propositions
The so-called Hamiltonian formulations of classical mechanics have three ingredients: states, observables and dynamics. In the simplest case of a single particle moving in R3, the state space is the position-momentum space R6. We will merely note here that an observable is some real-valued function f on the state space. Examples of observables are position, momentum or energy of a particle. For classical systems, the value f(x), that is the value of f for some particular system state x, is obtained by a process of measurement of f. The propositions concerning a classical system are generated from basic statements of the form
 * Measurement of f yields a value in the interval [a, b] for some real numbers a, b.

It follows easily from this characterization of propositions in classical systems that the corresponding logic is identical to that of some boolean algebra of subsets of the state space. By logic in this context we mean the rules that relate set operations and ordering relations, such as de Morgan's laws. These are analogous to the rules relating boolean conjunctives and material implication in classical propositional logic. For technical reasons, we will also assume that the algebra of subsets of the state space is that of all Borel sets. The set of propositions is ordered by the natural ordering of sets and has a complementation operation. In terms of observables, the complement of the proposition {f &ge; a} is {f < a}.

We summarize these remarks as follows:
 * The proposition system of a classical system is a lattice with a distinguished orthocomplementation operation: The lattice operations of meet and join are respectively set intersection and set union.  The orthocomplementation  operation is set complement.  Moreover this lattice is sequentially complete, in the sense that any sequence {Ei}i of elements of the lattice has a least upper bound, specifically the set-theoretic union:
 * $$ \operatorname{LUB}(\{E_i\}) = \bigcup_{i=1}^\infty E_i $$

In the Hilbert space formulation of quantum mechanics as presented by von Neumann, a physical observable is represented by some (possibly unbounded) densely-defined self-adjoint operator A on a Hilbert space H. A has a spectral decomposition, which is a projection-valued measure E defined on the Borel subsets of R. In particular, for any bounded Borel function f, the following equation holds:
 * $$ f(A) = \int_{\mathbb{R}} f(\lambda) d \operatorname{E}(\lambda)$$

In case f is the indicator function of an interval [a, b], the operator f(A) is a self-adjoint projection, and can be interpreted as the quantum analogue of the classical proposition
 * Measurement of A yields a value in the interval [a, b].
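In finite dimensions this spectral projection can be computed directly from an eigendecomposition: diagonalize A and keep the eigenspaces whose eigenvalues fall in [a, b]. The sketch below uses a hypothetical observable A (not from the text) to build the projection 1[a,b](A).

```python
import numpy as np

def spectral_projection(A, a, b):
    # Diagonalize the self-adjoint matrix A and keep the eigenspaces
    # whose eigenvalues lie in [a, b]; this is f(A) for f = 1_{[a,b]}.
    eigvals, eigvecs = np.linalg.eigh(A)
    mask = (eigvals >= a) & (eigvals <= b)
    V = eigvecs[:, mask]
    return V @ V.conj().T

# A hypothetical observable with eigenvalues -1, 0, 2:
A = np.diag([-1.0, 0.0, 2.0])

# The proposition "measurement of A yields a value in [-0.5, 1]":
E = spectral_projection(A, -0.5, 1.0)

print(np.allclose(E @ E, E))        # E is idempotent ...
print(np.allclose(E, E.conj().T))   # ... and self-adjoint: a projection
```

For this A, the projection E projects onto the eigenspace of the eigenvalue 0, the only eigenvalue lying in the interval.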

The propositional lattice of a quantum mechanical system
This suggests the following quantum mechanical replacement for the orthocomplemented lattice of propositions in classical mechanics. This is essentially Mackey's Axiom VII:


 * The orthocomplemented lattice Q of propositions of a quantum mechanical system is the lattice of closed subspaces of a complex Hilbert space H where orthocomplementation of V is the orthogonal complement V&perp;.

Q is also sequentially complete: any pairwise disjoint sequence {Vi}i of elements of Q has a least upper bound. Here disjointness of W1 and W2 means W2 is a subspace of W1&perp;. The least upper bound of {Vi}i is the closed internal direct sum.

Henceforth we identify elements of Q with self-adjoint projections on the Hilbert space H.

The structure of Q immediately points to a difference with the partial order structure of a classical proposition system. In the classical case, given a proposition p, the equations
 * $$ I = p \vee q$$
 * $$ 0 = p\wedge q $$

have exactly one solution, namely the set-theoretic complement of p. In these equations I refers to the proposition that is identically true and 0 to the proposition that is identically false. In the case of the lattice of projections there are infinitely many solutions to the above equations.
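This non-uniqueness is easy to exhibit numerically. In the sketch below (a hypothetical illustration on R2), the line p has many lattice complements: any distinct line q satisfies p &or; q = I and p &and; q = 0, not just the orthogonal complement.

```python
import numpy as np

def proj(v):
    # Rank-1 projection onto the line spanned by v.
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

p = proj([1, 0])

# Two distinct complements of p in the projection lattice of R^2:
q1 = proj([0, 1])   # the orthogonal complement
q2 = proj([1, 1])   # a different, non-orthogonal complement

for q in (q1, q2):
    # The two lines together span R^2, so p v q = I;
    # two distinct lines meet only at the origin, so p ^ q = 0.
    assert np.linalg.matrix_rank(np.hstack([p, q])) == 2

print(np.allclose(q1, q2))  # False: the complement is far from unique
```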

Having made these preliminary remarks, we turn everything around and attempt to define observables within the projection lattice framework and, using this definition, establish the correspondence between self-adjoint operators and observables: a Mackey observable is a countably additive homomorphism from the orthocomplemented lattice of the Borel subsets of R to Q. To say the mapping &phi; is a countably additive homomorphism means that for any sequence {Si}i of pairwise disjoint Borel subsets of R, {&phi;(Si)}i are pairwise orthogonal projections and
 * $$ \phi(\bigcup_{i=1}^\infty S_i) = \sum_{i=1}^\infty \phi(S_i) $$

Theorem. There is a bijective correspondence between Mackey observables and densely-defined self-adjoint operators on H.

This is the content of the spectral theorem as stated in terms of spectral measures.

Statistical structure
Imagine a forensics lab which has some apparatus to measure the speed of a bullet fired from a gun. Under carefully controlled conditions of temperature, humidity, pressure and so on the same gun is fired repeatedly and speed measurements taken. This produces some distribution of speeds. Though we will not get exactly the same value for each individual measurement, for each cluster of measurements, we would expect the experiment to lead to the same distribution of speeds. In particular, we can expect to assign probability distributions to propositions such as {a &le; speed &le; b}. This leads naturally to propose that under controlled conditions of preparation, the measurement of a classical system can be described by a probability measure on the state space. This same statistical structure is also present in quantum mechanics.

A quantum probability measure is a function P defined on Q with values in [0,1] such that P(0)=0, P(I)=1 and if {Ei}i is a sequence of pairwise orthogonal elements of Q then
 * $$ \operatorname{P}\!\left(\sum_{i=1}^\infty E_i\right) = \sum_{i=1}^\infty \operatorname{P}(E_i). $$

The following highly non-trivial theorem is due to A. Gleason:

Theorem. Suppose H is a separable Hilbert space of complex dimension at least 3. Then for any quantum probability measure on Q there exists a unique trace class operator S such that
 * $$ \operatorname{P}(E) = \operatorname{Tr}(S E) $$

for any self-adjoint projection E.

The operator S is necessarily non-negative (that is all eigenvalues are non-negative) and of trace 1. Such an operator is often called a density operator.

Physicists commonly regard a density operator as being represented by a (possibly infinite) density matrix relative to some orthonormal basis.
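A minimal numerical illustration of such a probability assignment, using a hypothetical density matrix S on C2 (a mixture, not from the text), checks the defining properties P(I) = 1 and additivity on orthogonal projections:

```python
import numpy as np

# A hypothetical mixed state on C^2: 70% |0>, 30% |1>.
S = np.diag([0.7, 0.3])

E0 = np.diag([1.0, 0.0])   # projection onto |0>
E1 = np.diag([0.0, 1.0])   # the orthogonal projection onto |1>

def P(E):
    # Gleason-type probability assignment P(E) = Tr(S E).
    return float(np.trace(S @ E).real)

print(P(E0))        # 0.7
print(P(E1))        # 0.3
print(P(E0 + E1))   # 1.0: additivity on orthogonal projections, and P(I) = 1
```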

For more information on statistics of quantum systems, see quantum statistical mechanics.

Automorphisms
An automorphism of Q is a bijective mapping &alpha;:Q &rarr; Q which preserves the orthocomplemented structure of Q, that is:
 * $$ \alpha\!\left(\sum_{i=1}^\infty E_i\right) = \sum_{i=1}^\infty \alpha(E_i) $$

for any sequence {Ei}i of pairwise orthogonal self-adjoint projections. Note that this property implies monotonicity of &alpha;. If P is a quantum probability measure on Q, then E &rarr; &alpha;(E) is also a quantum probability measure on Q. By the Gleason theorem characterizing quantum probability measures quoted above, any automorphism &alpha; induces a mapping &alpha;* on the density operators by the following formula:
 * $$ \operatorname{Tr}(\alpha^*(S) E) = \operatorname{Tr}(S \alpha(E))$$

The mapping &alpha;* is bijective and preserves convex combinations of density operators. This means
 * $$ \alpha^*(r_1 S_1 + r_2 S_2) = r_1\alpha^*(S_1) + r_2 \alpha^*(S_2) \quad $$

whenever r1 + r2 = 1 and r1, r2 are non-negative real numbers. Now we use a theorem of Richard Kadison:

Theorem. Suppose &beta; is a bijective map from density operators to density operators which is convexity preserving. Then there is an operator U on the Hilbert space which is either linear or conjugate-linear, preserves the inner product and is such that
 * $$ \beta(S) = U S U^* $$

for every density operator S. In the first case we say U is unitary, in the second case U is anti-unitary.

 Remark. This note is included for technical accuracy only, and should not concern most readers. The result quoted above is not directly stated in Kadison's paper, but can be reduced to it by noting first that &beta; extends to a positive trace preserving map on the trace class operators, then applying duality and finally applying a result of Kadison's paper.

The operator U is not quite unique; if r is a complex scalar of modulus 1, then r U will be unitary or anti-unitary if U is and will implement the same automorphism. In fact, this is the only ambiguity possible.

It follows that automorphisms of Q are in bijective correspondence to unitary or anti-unitary operators modulo multiplication by scalars of modulus 1. Moreover, we can regard automorphisms in two equivalent ways: as operating on states (represented as density operators) or as operating on Q.

Non-relativistic dynamics
In non-relativistic physical systems, there is no ambiguity in referring to time evolution since there is a global time parameter. Moreover, an isolated quantum system evolves in a deterministic way: if the system is in a state S at time t then at time s > t, the system is in a state Fs,t(S). In addition, we assume


 * The dependence is reversible: The operators Fs,t are bijective.


 * The dependence is homogeneous: Fs,t = Fs-t,0.


 * The dependence is convexity preserving: That is, each Fs,t is convexity preserving.


 * The dependence is weakly continuous: The mapping R &rarr; R given by t &rarr; Tr(Fs,t(S) E) is continuous for every E in Q.

By Kadison's theorem, there is a 1-parameter family of unitary or anti-unitary operators {Ut}t such that
 * $$ \operatorname{F}_{s,t}(S) = U_{s-t} S U_{s-t}^* $$

In fact,

Theorem. Under the above assumptions, there is a strongly continuous 1-parameter group of unitary operators {Ut}t such that the above equation holds.

Note that it follows easily from the uniqueness in Kadison's theorem that
 * $$ U_{t+s} = \sigma(t,s) U_t U_s $$

where &sigma;(t,s) has modulus 1. Now the square of an anti-unitary operator is unitary, so all the Ut are unitary. The remainder of the argument shows that &sigma;(t,s) can be chosen to be 1 (by modifying each Ut by a scalar of modulus 1).
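Concretely, such a strongly continuous one-parameter unitary group has the form Ut = exp(&minus;itH) for a self-adjoint generator H (this is the content of Stone's theorem). The sketch below, with a hypothetical 2&times;2 Hamiltonian, verifies the group law with &sigma;(t,s) = 1 and the unitarity of each Ut:

```python
import numpy as np

def U(t, H):
    # U_t = exp(-i t H), computed via the spectral theorem
    # for the Hermitian generator H.
    eigvals, eigvecs = np.linalg.eigh(H)
    return eigvecs @ np.diag(np.exp(-1j * t * eigvals)) @ eigvecs.conj().T

# A hypothetical self-adjoint generator:
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

t, s = 0.3, 1.1
print(np.allclose(U(t + s, H), U(t, H) @ U(s, H)))           # group law: U_{t+s} = U_t U_s
print(np.allclose(U(t, H) @ U(t, H).conj().T, np.eye(2)))    # each U_t is unitary
```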

Pure states
A convex combination of statistical states S1 and S2 is a state of the form S = p1 S1 + p2 S2 where p1, p2 are non-negative and p1 + p2 = 1. Considering the statistical state of a system as specified by the lab conditions used for its preparation, the convex combination S can be regarded as the state formed in the following way: toss a biased coin with outcome probabilities p1, p2 and, depending on the outcome, choose a system prepared according to S1 or S2.

Density operators form a convex set. The convex set of density operators has extreme points; these are the density operators given by a projection onto a one-dimensional space. To see that any extreme point is such a projection, note that by the spectral theorem S can be represented by a diagonal matrix; since S is non-negative all the entries are non-negative and since S has trace 1, the diagonal entries must add up to 1. Now if it happens that the diagonal matrix has more than one non-zero entry it is clear that we can express it as a convex combination of other density operators.

The extreme points of the set of density operators are called pure states. If S is the projection on the 1-dimensional space generated by a vector &psi; of norm 1 then
 * $$ \operatorname{Tr}(S E) = \langle E \psi | \psi \rangle $$

for any E in Q. In physics jargon, if
 * $$S = | \psi \rangle \langle \psi  |, $$

where &psi; has norm 1, then
 * $$ \operatorname{Tr}(S E) = \langle \psi | E | \psi \rangle .$$

Thus pure states can be identified with rays in the Hilbert space H.
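The identity Tr(S E) = &lang;&psi;| E |&psi;&rang; for a pure state is easy to check numerically. The sketch below uses a hypothetical unit vector &psi; in C2:

```python
import numpy as np

# A hypothetical unit vector in C^2:
psi = np.array([1.0, 1j]) / np.sqrt(2)

# The pure state S = |psi><psi|, a rank-1 projection:
S = np.outer(psi, psi.conj())

# Some proposition E in Q:
E = np.diag([1.0, 0.0])

lhs = np.trace(S @ E)
rhs = psi.conj() @ (E @ psi)
print(np.isclose(lhs, rhs))  # Tr(S E) = <psi| E |psi>
```

Note that multiplying &psi; by a phase leaves S, and hence every expectation Tr(S E), unchanged, which is why pure states correspond to rays rather than vectors.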

The measurement process
Consider a quantum mechanical system with lattice Q which is in some statistical state given by a density operator S. This essentially means an ensemble of systems specified by a repeatable lab preparation process. The result of a cluster of measurements intended to determine the truth value of a proposition E is, just as in the classical case, a probability distribution of truth values T and F. Say the probabilities are p for T and q = 1 - p for F. By the previous section p = Tr(S E) and q = Tr(S (I-E)).

Perhaps the most fundamental difference between classical and quantum systems is the following: regardless of what process is used to determine E, immediately after the measurement the system will be in one of two statistical states:
 * If the result of the measurement is T
 * $$ \frac{1}{\operatorname{Tr}(E S)} E S E. $$


 * If the result of the measurement is F
 * $$ \frac{1}{\operatorname{Tr}((I-E) S)}(I- E) S (I- E). $$

(We leave to the reader the handling of the degenerate cases in which the denominators may be 0.) We now form the convex combination of these two ensembles using the relative frequencies p and q. We thus obtain the result that the measurement process applied to a statistical ensemble in state S yields another ensemble in statistical state:
 * $$ \operatorname{M}_E(S) = E S E + (I - E) S (I - E) $$

We see that a pure ensemble becomes a mixed ensemble after measurement. Measurement, as described above, is a special case of quantum operations.
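This can be seen in a small numerical sketch (a hypothetical illustration on C2): starting from a pure state not aligned with the measured projection, the non-selective measurement map ME produces a density operator that is no longer a projection, i.e. a mixed state.

```python
import numpy as np

def measure(S, E):
    # Non-selective measurement of the proposition E:
    # M_E(S) = E S E + (I - E) S (I - E).
    I = np.eye(S.shape[0])
    return E @ S @ E + (I - E) @ S @ (I - E)

# A pure state not aligned with the measured projection:
psi = np.array([1.0, 1.0]) / np.sqrt(2)
S = np.outer(psi, psi)          # pure: S is a rank-1 projection
E = np.diag([1.0, 0.0])

S_after = measure(S, E)

print(np.allclose(S_after @ S_after, S_after))  # False: no longer a projection, hence mixed
print(np.isclose(np.trace(S_after), 1.0))       # still trace 1: a valid density operator
```

Here the post-measurement state is the maximally mixed state diag(1/2, 1/2), the convex combination of the two conditional states with weights p = q = 1/2.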

Limitations of quantum logic
Quantum logic provides a satisfactory foundation for a theory of reversible quantum processes. Examples of such processes are the covariance transformations relating two frames of reference, such as change of time parameter or the transformations of special relativity. Quantum logic also provides a satisfactory understanding of density matrices. Quantum logic can be stretched to account for some kinds of measurement processes corresponding to answering yes-no questions about the state of a quantum system. However, for more general kinds of measurement operations (that is quantum operations), a more complete theory of filtering processes is necessary. Such an approach is provided by the consistent histories formalism.

In any case, these quantum logic formalisms must be generalized in order to deal with supergeometry (which is needed to handle Fermi-fields) and non-commutative geometry (which is needed in string theory and quantum gravity theory). Both of these theories use a partial algebra with an "integral" or "trace". The elements of the partial algebra are not observables; instead the "trace" yields "Green's functions" which generate scattering amplitudes. One thus obtains a local S-matrix theory (see D. Edwards).

Since around 1978 the Flato school (see F. Bayen) has been developing an alternative to the quantum logic approach called deformation quantization (see Weyl quantization).