Church–Turing thesis

In computability theory the Church–Turing thesis (also known as Church's thesis, Church's conjecture, and Turing's thesis) is a combined hypothesis about the nature of effectively calculable (computable) functions, whether characterized by recursion (Church's thesis), by a mechanical device equivalent to a Turing machine (Turing's thesis), or by Church's λ-calculus:


 * Church's thesis: "Every effectively calculable function (effectively decidable predicate) is general recursive" (Kleene 1952:300)


 * Turing's thesis: "Turing's thesis that every function which would naturally be regarded as computable is computable under his definition, i.e. by one of his machines, is equivalent to Church's thesis by Theorem XXX." (Kleene 1952:376)

The three computational processes (recursion, λ-calculus, and Turing machine) were shown to be equivalent by Alonzo Church, Stephen Kleene and J. B. Rosser (1934–6) and by Alan Turing (1936–7). Although Stephen Kleene proved the two theses equivalent, the fundamental premise behind the theses — the notion of "effectively computable" or "effectively calculable" — is "a vague intuitive one" based as it is on (i) "heuristic [observational, experiential] evidence", (ii) the "equivalence of 'diverse formulations'" (e.g. the three computational processes), and (iii) on an observational analysis of a human with a pencil and paper following a set of rules (Turing's analysis) and of a "worker" in boxes (Emil Post's analysis, 1936). Thus, as they stand, neither thesis can be proven. The various notions of "effective calculability/computability" — the fly in the ointment — have been proposed as a "formal definition" by Church, Turing and Rosser, as a "working hypothesis" leading by inductive reasoning to a "natural law" by Post (an idea "sharply" criticized by Church), or as an axiom or set of axioms (suggested by Kurt Gödel to Church in 1934–5 but discounted in Post 1936).

Today the thesis has near-universal acceptance. Informally, the Church–Turing thesis states that if an algorithm (a procedure that terminates) exists, then there is an equivalent Turing machine or applicable λ-function for that algorithm.

Formal statement

 * See also: Effectively calculable

In the following, the words "effectively calculable" will mean "produced by any intuitively 'effective' means whatsoever" and "computable" will mean "produced by a Turing-machine or equivalent mechanical device". Turing's 1939 "definitions" are virtually the same:
 * "†We shall use the expression "computable function" to mean a function calculable by a machine, and we let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions." (cf. the footnote † in Turing 1939 (his Ordinals paper) in Davis 1965:160).

The thesis can be stated as follows:


 * Every effectively calculable function is a computable function.

Turing stated it this way:
 * "It was stated ... that 'a function is effectively calculable if its values can be found by some purely mechanical process.' We may take this literally, understanding by a purely mechanical process one which could be carried out by a machine. The development ... leads to ... an identification of computability† with effective calculability" († is the footnote above, ibid).

1930–1970
One of the important problems for logicians in the 1930s was David Hilbert's Entscheidungsproblem, which asked if there was a mechanical procedure for separating mathematical truths from mathematical falsehoods.

In the course of studying the problem, Alonzo Church and his student Stephen Kleene introduced the notion of λ-definable functions, and they were able to prove that several large classes of functions frequently encountered in number theory were λ-definable. Church proposed to Kurt Gödel that one should define the "effectively computable" functions as the λ-definable functions. Gödel, however, was not convinced and called the proposal "thoroughly unsatisfactory". As a counter-proposal, Gödel offered his notion of general recursion, modified by a suggestion of Herbrand, which he had detailed in his 1934 lectures in Princeton, NJ (Kleene and another student, J. B. Rosser, transcribed the notes). Kleene, with the help of Church and Rosser, then produced proofs to show that the two calculi are equivalent. Church modified his methods to include use of Herbrand–Gödel recursion and then proved that the Entscheidungsproblem is unsolvable: there is no generalized "effective calculation" (method, algorithm) that can determine whether or not a formula in either the recursive calculus or the λ-calculus is "valid" (more precisely: that a well-formed formula is in "normal form").
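The idea of λ-definability can be illustrated in a modern language: a natural number n is encoded as the higher-order function that applies its argument n times, and arithmetic becomes function composition. A minimal sketch in Python (the encoding follows Church's numerals; the helper names are illustrative, not from any of the papers discussed here):

```python
# Church numerals: the number n is encoded as the function
# that applies f to x exactly n times (λf.λx. f^n(x)).
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))               # n + 1
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m + n
mul  = lambda m: lambda n: lambda f: m(n(f))                  # m * n

def to_int(n):
    """Decode a Church numeral by counting how many times f is applied."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
assert to_int(add(two)(three)) == 5
assert to_int(mul(two)(three)) == 6
```

Proving that familiar number-theoretic functions admit such encodings was exactly the kind of evidence Church and Kleene accumulated for the adequacy of λ-definability.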

Within a year, in his 1936–37 paper "On Computable Numbers, with an Application to the Entscheidungsproblem", Alan Turing asserted his notion of "effective computability" with the introduction of his a-machines (now known as the Turing machine abstract computational model). He proposed, like Church and Kleene before him, that his formal definition of a mechanical computing agent was the correct one.

In a proof-sketch added as an "Appendix" to his 1936–37 paper, Turing showed that the classes of functions defined by the λ-calculus and Turing machines coincided. However, although Church's claim predated Turing's, it was Turing who, in the opinion of Gödel and others, finally gave a convincing argument for why these functions really contained all functions that one would be inclined to call "effectively computable". Indeed, by the 1960s Gödel had become thoroughly convinced, writing about Alan Turing's machines:


 * "In consequence of later advances, in particular of the fact that due to A. M. Turing's work69 a precise and unquestionably adequate definition of the general notion of formal system70 can now be given ..."
 * "69 See Turing 1937, p. 249 [page number in original journal]
 * "70 In my opinion the term "formal system" or "formalism" should never be used for anything but this notion ... [of] formal systems in the proper sense of the term, whose characteristic property is that reasoning in them, in principle, can be completely replaced by mechanical devices."

In a 1964 "Postscriptum" that he added to Davis's anthology The Undecidable, Gödel further expressed his opinion that recursion and the λ-calculus "are much less suitable for our purpose". Notice that in this "less suitable" category he is including his own Herbrand-Gödel recursion.

Origin of the phrase "Church–Turing thesis"

A definition describes "the meaning" of a word, word-group, or symbol; it has about it the notion of a dogmatic statement: unquestionable, decided. A thesis, on the other hand, is a proposition or proposal that one asserts and then defends by argument, or "an hypothesis" that is to be proved (or perhaps merely asserted without proof).

Both Church and Turing individually proposed that their "formal systems" should be definitions of "effective calculability"; neither framed their assertions as theses. Indeed, Post (1936) disagreed with Church's assertion of his "definition" and insisted it should be a "working hypothesis". Rosser (1939) identified (that is: asserted the equivalence of) the three notions-as-definitions: "All three definitions are equivalent, so it does not matter which one is used."

The overt expression of a "thesis" — an "hypothesis" — had to be left to Kleene. In his 1943 paper Recursive Predicates and Quantifiers Kleene proposed his "THESIS I":
 * "This heuristic fact [general recursive functions are effectively calculable]...led Church to state the following thesis(22). The same thesis is implicit in Turing's description of computing machines(23).
 * "THESIS I. Every effectively calculable function (effectively decidable predicate) is general recursive [Kleene's italics]
 * "Since a precise mathematical definition of the term effectively calculable (effectively decidable) has been wanting, we can take this thesis ... as a definition of it..."
 * "(22) references Church 1936
 * "(23) references Turing 1936-7

Kleene goes on to note that:
 * "...the thesis has the character of an hypothesis — a point emphasized by Post and by Church(24). If we consider the thesis and its converse as definition, then the hypothesis is an hypothesis about the application of the mathematical theory developed from the definition. For the acceptance of the hypothesis, there are, as we have suggested, quite compelling grounds."
 * "(24) references Post 1936 of Post and Church's Formal definitions in the theory of ordinal numbers, Fund. Math. vol 28 (1936) pp.11-21 (see ref. #2, Davis 1965:286).

A few years later (1952) Kleene would overtly name, defend, and express the two "theses" as quoted in the lead-in paragraph, and then "identify" them (show their equivalence) by use of his Theorem XXX.

Later developments: Axiomatizing the notion of "effective calculation/computation"
In correspondence with Church (ca. 1934–5), Gödel proposed axiomatizing the notion of "effective calculability"; eventually he suggested to Church the use of Herbrand–Gödel recursion as its definition, and after Turing's 1936–37 paper he supported Turing's definition:
 * "...it might be possible, in terms of effective calculability as an undefined notion, to state a set of axioms which would embody the generally accepted properties of this notion, and to do something on that basis" (Sieg 1997:160).

An attempt to understand the notion better led Robin Gandy (Turing's student and friend) in 1980 to analyze machine computation (as opposed to human computation acted out by a Turing machine). Gandy's curiosity about, and analysis of, "cellular automata", "Conway's game of life", "parallelism" and "crystalline automata" led him to propose four "principles (or constraints) ... which it is argued, any machine must satisfy." His most important fourth principle, "the principle of causality", is based on the "finite velocity of propagation of effects and signals; contemporary physics rejects the possibility of instantaneous action at a distance." From these principles and some additional constraints — (1a) a lower bound on the linear dimensions of any of the parts, (1b) an upper bound on speed of propagation (the velocity of light), (2) discrete progress of the machine, and (3) deterministic behavior — he produces a theorem that "What can be calculated by a device satisfying principles I–IV is computable."

In the late 1990s Wilfried Sieg analyzed Turing's and Gandy's notions of "effective calculability" with the intent of "sharpening the informal notion, formulating its general features axiomatically, and investigating the axiomatic framework". In his 2002 paper he presents a series of constraints reduced to, roughly: "(B) Boundedness ... (L) Locality ... (D) Determinacy".

Success of the thesis
Other formalisms (besides recursion, the lambda calculus, and the Turing machine) have been proposed for describing effective calculability/computability. Stephen Kleene (1952) adds to the list the functions "reckonable in the system S1" of Kurt Gödel 1936, and Emil Post's (1943, 1946) "canonical [also called normal] systems". In the 1950s Hao Wang and Martin Davis greatly simplified the one-tape Turing-machine model (see Post–Turing machine). Marvin Minsky expanded the model to two or more tapes and greatly simplified the tapes into "up-down counters", which Melzak and Lambek further evolved into what is now known as the counter machine model. In the late 1960s and early 1970s researchers expanded the counter machine model into the register machine, a close cousin of the modern notion of the computer. Other models include combinatory logic and Markov algorithms. Gurevich adds the pointer machine model of Kolmogorov and Uspensky (1953, 1958): "...they just wanted to ... convince themselves that there is no way to extend the notion of computable function."
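The counter machine mentioned above is strikingly spare, yet still Turing-equivalent. A minimal interpreter sketch in Python may make this concrete; the two-instruction repertoire (increment, and decrement-or-jump-if-zero) and the example program are illustrative, not a specific historical formulation:

```python
def run(program, counters):
    """Interpret a counter-machine program over a list of counters.

    Instructions:
      ("inc", r)     increment counter r
      ("djz", r, t)  if counter r is zero, jump to instruction t;
                     otherwise decrement counter r
    The machine halts when the program counter runs off the end.
    """
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "inc":
            counters[op[1]] += 1
            pc += 1
        else:  # "djz"
            _, r, t = op
            if counters[r] == 0:
                pc = t
            else:
                counters[r] -= 1
                pc += 1
    return counters

# Add counter 1 into counter 0; counter 2 is a scratch counter kept
# at zero, so ("djz", 2, 0) acts as an unconditional jump to start.
add_prog = [("djz", 1, 3), ("inc", 0), ("djz", 2, 0)]
assert run(add_prog, [3, 4, 0]) == [7, 0, 0]
```

That such an austere instruction set suffices to simulate any Turing machine (given enough counters, or suitable encodings into two counters) is exactly the kind of equivalence result the models above share.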

All these contributions involve proofs that the models are computationally equivalent to the Turing machine; such models are said to be Turing complete. Because all these different attempts at formalizing the concept of "effective calculability/computability" have yielded equivalent results, it is now generally assumed that the Church–Turing thesis is correct. In fact, Gödel (1936) proposed something stronger than this; he observed that there was something "absolute" about the concept of "reckonable in S1":
 * "It may also be shown that a function which is computable ['reckonable'] in one of the systems Si, or even in a system of transfinite type, is already computable [reckonable] in S1. Thus the concept 'computable' ['reckonable'] is in a certain definite sense 'absolute', while practically all other familiar metamathematical concepts (e.g. provable, definable, etc.) depend quite essentially on the system in which they are defined."

The success of the Church–Turing thesis prompted variations of the thesis to be proposed. For example, the Physical Church–Turing thesis (PCTT) states:


 * "Every function that can be physically computed can be computed by a Turing machine."

Another variation is the Strong Church–Turing Thesis (SCTT), which is not due to Church or Turing, but rather was realized gradually in the development of complexity theory. It states (cf. Bernstein, Vazirani 1997):


 * "Any 'reasonable' model of computation can be efficiently simulated on a probabilistic Turing machine."

The word 'efficiently' here means up to polynomial-time reductions. The Strong Church–Turing Thesis, then, posits that all 'reasonable' models of computation yield the same class of problems that can be computed in polynomial time. Assuming the conjecture that probabilistic polynomial time (BPP) equals deterministic polynomial time (P), the word 'probabilistic' is optional in the Strong Church–Turing Thesis.

If quantum computers are physically possible, they could invalidate the Strong Church–Turing Thesis, since it is also conjectured that quantum polynomial time (BQP) is larger than BPP. In other words, there are efficient quantum algorithms that perform tasks that are not known to have efficient probabilistic algorithms; for example, factoring integers. They would not however invalidate the original or Physical Church-Turing thesis, since a quantum computer can always be simulated by a Turing machine.

Philosophical implications
The Church–Turing thesis has been alleged to have some profound implications for the philosophy of mind. There are also some important open questions which cover the relationship between the Church–Turing thesis and physics, and the possibility of hypercomputation. When applied to physics, the thesis has several possible meanings:


 * 1) The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has also been termed the strong Church–Turing thesis (not to be confused with the previously mentioned SCTT) and is a foundation of digital physics.
 * 2) The universe is not equivalent to a Turing machine (i.e., the laws of physics are not Turing-computable), but incomputable physical events are not "harnessable" for the construction of a hypercomputer. For example, a universe in which physics involves real numbers, as opposed to computable reals, might fall into this category.
 * 3) The universe is a hypercomputer, and it is possible to build physical devices to harness this property and calculate non-recursive functions. For example, it is an open question whether all quantum mechanical events are Turing-computable, although it is known that rigorous models such as quantum Turing machines are equivalent to deterministic Turing machines. (They are not necessarily efficiently equivalent; see above.)  John Lucas and, more famously, Roger Penrose have suggested that the human mind might be the result of some kind of quantum-mechanically enhanced, "non-algorithmic" computation, although there is no scientific evidence for this proposal.

There are many other technical possibilities which fall outside or between these three categories, but these serve to illustrate the range of the concept.

Non computable functions
One can formally define functions that are not computable. A well-known example of such a function is the busy beaver function. This function takes an input n and returns the largest number of steps that a Turing machine with n states can execute before halting, when run with no input. Using particular models of Turing machines, researchers have computed the value of this function for small values of n: 0 through 4. Simulations of Turing machines with 5 and 6 states have been performed, but without conclusive results. For higher values, only lower bounds have been given. Finding an upper bound on the busy beaver function is equivalent to solving the halting problem, a problem known to be unsolvable by Turing machines. Since the busy beaver function cannot be computed by Turing machines, the Church–Turing thesis asserts that this function cannot be effectively computed by any method.
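The flavor of the problem can be seen by simulating one small machine. The Python sketch below runs the known 2-state, 2-symbol busy beaver champion, which halts after 6 steps leaving 4 ones on the tape; the transition-table encoding is illustrative, and note the step cutoff in the simulator, a reminder that no computable bound on running time exists in general:

```python
def simulate(delta, max_steps=10_000):
    """Run a Turing machine from a blank tape (all zeros), starting in
    state "A". Return (steps, ones_on_tape) if it reaches the halt state
    "H", or None if it exceeds max_steps. The arbitrary cutoff is the
    crux: no computable bound on max_steps works for every machine."""
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H" and steps < max_steps:
        symbol = tape.get(pos, 0)
        write, move, state = delta[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    if state != "H":
        return None
    return steps, sum(tape.values())

# The 2-state, 2-symbol busy beaver champion:
# (state, read) -> (write, move, next_state)
bb2 = {
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "H"),
}
assert simulate(bb2) == (6, 4)
```

Simulating any single halting machine is easy; what cannot be done effectively is deciding, across all n-state machines, which ones halt at all, and that is why the function's values beyond small n resist computation.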

Mark Burgin, Eugene Eberbach, Peter Kugel, and other researchers argue that super-recursive algorithms such as inductive Turing machines disprove the Church–Turing thesis. Their argument relies on a definition of algorithm broader than the ordinary one, so that non-computable functions obtained from some inductive Turing machines are called computable. This interpretation of the Church–Turing thesis differs from the interpretation commonly accepted in computability theory, discussed above. The argument that super-recursive algorithms are indeed algorithms in the sense of the Church–Turing thesis has not found broad acceptance within the computability research community.