Optimality Theory

Optimality Theory (OT) is a linguistic model proposed by the linguists Alan Prince and Paul Smolensky in 1993, and expanded by John J. McCarthy and Alan Prince later that year. Although much of the interest in OT has been associated with its use in phonology (the area to which it was first applied), the theory is also applicable to other subfields of linguistics (e.g. syntax and semantics). OT is usually considered a development of generative grammar, sharing its focus on the investigation of universal principles, linguistic typology and language acquisition.

OT is often called a connectionist theory of language, because it has its roots in neural network research, though the relationship is now largely of historical interest. It arose in part as a successor to the theory of harmonic grammar, developed in 1990 by Géraldine Legendre, Yoshiro Miyata and Paul Smolensky.

The main idea of OT is that the observed, "surface" forms of a language arise from the resolution of conflicts between grammatical constraints. These constraints are minimally violated, in that the form that surfaces is the one incurring the least serious violations, compared to a set of possible candidates. The seriousness of a violation is defined in terms of a hierarchy of constraints: violations of higher-ranked constraints are more serious than violations of lower-ranked ones. This domination is strict, in that a higher-ranked constraint takes absolute priority over lower-ranked ones. That is, given a constraint C1, ranked above C2 and C3, the expression of the language that surfaces (the winning candidate) may perform worse than its competitors on both C2 and C3, as long as it performs better on C1. Constraints are also violable; the winning candidate need not satisfy all constraints, as long as for any rival candidate that does better than the winner on some constraint, there is a higher-ranked constraint on which the winner does better than that rival. Constraints are generally regarded as universal (though not by all OT researchers), but their ranking differs from language to language, accommodating language variation. Acquisition of a language can be roughly described as the process of adjusting the ranking of these constraints to match the language one is learning (and, of course, learning a lexicon).
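The strict-domination evaluation described above can be sketched as a lexicographic comparison of violation profiles. The constraints below are invented placeholders, not constraints from the OT literature; this is only a sketch of the comparison logic:

```python
# A minimal sketch of OT evaluation under strict domination:
# each constraint maps a candidate to a violation count, and the
# winner has the lexicographically smallest violation profile
# under the given ranking.

def evaluate(candidates, ranked_constraints):
    """Return the optimal candidate under strict domination."""
    def profile(cand):
        # Violations listed highest-ranked constraint first; Python's
        # tuple comparison is lexicographic, which models strict
        # domination: a single violation of C1 outweighs any number
        # of violations of lower-ranked constraints.
        return tuple(c(cand) for c in ranked_constraints)
    return min(candidates, key=profile)

# Invented placeholder constraints, ranked C1 >> C2 >> C3:
C1 = lambda cand: cand.count("!")   # penalize '!' (highest ranked)
C2 = lambda cand: len(cand)         # prefer shorter candidates
C3 = lambda cand: cand.count("x")   # penalize 'x' (lowest ranked)

# "abx" does better than "a!" on C1, so it wins even though it does
# worse on both C2 and C3.
winner = evaluate(["abx", "a!"], [C1, C2, C3])
```

Note that ties on a higher-ranked constraint are broken by the next constraint down, which falls out of the tuple comparison for free.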

Constraints can be grouped into two main types: faithfulness constraints and markedness constraints. Faithfulness constraints require that the observed surface form (the output) match the underlying or lexical form (the input) in some particular way; that is, these constraints require identity between input and output forms. Markedness constraints impose requirements on the structural well-formedness of the output.

Example
As a simplified example, consider the manifestation of the English plural morpheme, whose underlying form is /z/, in the words dogs and buses. In the dogs case, the faithful form [dɒɡz] passes all the markedness constraints: it is a well-formed word that is pronounceable. Thus, the faithfulness constraint wins, and the output is [dɒɡz]. In the case of buses, a markedness constraint prohibits the faithful form *[bʌsz] (one cannot have a sequence of two /s/ sounds, or an /sz/ sequence, within an English phonological word). The markedness constraint is ranked higher, so the faithfulness constraint is overridden, and [bʌsɪz], with an inserted vowel, is preferred to *[bʌsz].
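A toy tableau for an /s/-final stem can be sketched as follows. The stem spelling and both constraint definitions are simplified stand-ins for illustration, not an attested OT analysis:

```python
# Toy tableau for the English plural after an /s/-final stem.
# Candidate outputs for a hypothetical underlying /bus/ + /z/:
candidates = ["busz", "buss", "busiz"]

def no_sz(cand):
    # Markedness (hypothetical): penalize /ss/ and /sz/ sequences.
    return cand.count("ss") + cand.count("sz")

def dep(cand):
    # Faithfulness (DEP-style): penalize inserted segments, measured
    # crudely as length added relative to the input /busz/.
    return max(0, len(cand) - len("busz"))

# Markedness dominates faithfulness, as in the example above.
ranking = [no_sz, dep]
winner = min(candidates, key=lambda c: tuple(k(c) for k in ranking))
# The epenthesized candidate wins: it incurs one faithfulness
# violation but satisfies the higher-ranked markedness constraint.
```

Reversing the ranking would instead select a fully faithful candidate, which is how the same constraint set accommodates cross-linguistic variation.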

As mentioned above, in the OT of Prince & Smolensky 1993, all constraints are assumed to be present in all languages. Within a language, a constraint may be ranked high enough that it is always obeyed; it may be ranked low enough that it has no observable effects; or it may have some intermediate ranking. The term 'the emergence of the unmarked' (or TETU) describes situations in which a markedness constraint has an intermediate ranking, so that it is violated in some forms but nonetheless has observable effects when higher-ranked constraints are irrelevant. An early example proposed by McCarthy & Prince (1994) is the constraint NoCoda, which prohibits syllables from ending in consonants. In Balangao, NoCoda is not ranked high enough to be always obeyed, as witnessed by roots like taynan (faithfulness to the input prevents deletion of the final /n/). But in the reduplicated form ma-tayna-taynan 'repeatedly be left behind', the final /n/ is not copied. Under McCarthy & Prince's analysis, this is because faithfulness to the input does not apply to reduplicated material, and NoCoda is thus free to prefer ma-tayna-taynan over hypothetical *ma-taynan-taynan (which has an additional violation of NoCoda).

Optimality Theory makes the claim that all phonological interactions can be analyzed as the interaction of faithfulness and markedness: no phonological process should be found in which an optimal candidate incurs worse faithfulness violations than a competing candidate without also incurring better markedness violations. Many linguists believe that this is a falsifiable prediction in the sense of Karl Popper, and that Optimality Theory is thus a scientific theory. For instance, Idsardi (2000) has argued that OT has been disproved by violations of the above claim relating to phonological opacity. Others, like Sanders (2003) and Green (2005), have countered that all cases of opacity brought forward to date are influenced by the morphology of the language in question, and that only purely phonological opacity would disprove OT. A related, falsifiable prediction about possible input-output mappings is made by Moreton (2004). A current limitation of OT is that different workers in the field use different sets of constraints and assumptions; OT is thus best thought of as a means of representing language, a paradigm in the sense of Thomas Samuel Kuhn, rather than a theory. The same is true of other theories of phonology and syntax as well.