Philosophy of language

Philosophy of language is the branch of philosophy that studies language. Its primary concerns include the nature of linguistic meaning, reference, language use, language learning and creation, language understanding, truth, thought and experience (to the extent that both are linguistic), communication, interpretation, and translation.

At heart, the discipline is concerned with five fundamental issues.
 * How are sentences composed into a meaningful whole, and what are the meanings of the parts of sentences?
 * What is the nature of meaning? (What exactly is a meaning?)
 * What do we do with language? (How do we use it socially? What is the purpose of language?)
 * How does language relate to the mind, both of the speaker and the interpreter?
 * How does language relate to the world?

Overview


Philosophers of language are not much concerned with what individual words or sentences mean. The nearest dictionary or encyclopedia may settle the meaning of a particular word, and to speak a language competently is generally to know what most of its sentences mean. What is more interesting for philosophers is the question of what it means for an expression to mean something. Why do expressions have the meanings they have? Which expressions have the same meaning as other expressions, and why? How can these meanings be known? And perhaps the simplest, and deepest, question is: what does the word "meaning" mean?

In a similar vein, philosophers wonder about the relationship between meaning and truth. Philosophers tend to be less concerned with which sentences are actually true, and more with what kinds of meanings can be true or false. Some examples of questions a truth-oriented philosopher of language might ask include: Can meaningless sentences be true or false? What about sentences about things that don't exist? Is it sentences that are true or false, or is it the usage of sentences?

Language, how things 'mean' something, and truth are important not just because they are used in everyday life; language shapes human development, from earliest childhood and continuing to death. Knowledge itself may be intertwined with language. Notions of self, experience, and existence may depend entirely on how language is used and what is learned through it.

The topic of learning language leads to all kinds of interesting questions. Is it possible to have any thoughts without having a language? What kinds of thoughts need a language to happen? How much does language influence knowledge of the world and how one acts in it? Can anyone reason at all without using language?

The philosophy of language is important because, for all of the above reasons, language is important, and language is important because it is inseparable from how one thinks and lives. People in general operate with a set of vital concepts that are connected with signs and symbols, including all words: "object," "love," "good," "God," "masculine," "feminine," "art," "government," and so on. By incorporating "meaning," everyone has shaped (or has had shaped for them) a view of the universe and of how they have "meaning" within it.

Set to the task, many philosophical discussions of language begin by clarifying terminology. Some theorists -- those working in semiotics, for instance, and the linguist Noam Chomsky in some of his work -- worry that the term "language" is too vague. Entire systems have been developed to clarify the field.

History
The inquiry into language stretches back to the beginnings of western philosophy with Plato, Aristotle, and the Stoics.

Plato argued in the dialogue Cratylus that there was a natural correctness to names. To make his case, he pointed out that compound words and phrases have a range of correctness. For example, it would obviously be wrong to apply the term "houseboat" to, say, a cat, because cats have nothing to do with houses or boats. He also argued that primitive names (or morphemes) had a natural correctness, because each phoneme represented a basic idea or sentiment. For example, the letter and sound of "l" for Plato represented the idea of softness. By the end of the Cratylus, however, he had admitted that some social convention was also involved, and that there were faults in the idea that phonemes had individual meanings.

Aristotle concerned himself with the issues of logic, categories, and meaning creation. He separated all things into notions of species and genus. He thought that the meaning of a predicate was established through an abstraction of the similarities between various individual things. This is called a theory of nominalism (see the section below for more details).

Medieval philosophers also had some interest in the subject -- for many of them, the interest was provoked by the necessity of translating Greek texts. Of particular interest is the work of Peter Abelard, noteworthy for his remarkable anticipation of modern ideas of language.

Many modern western philosophers such as Umberto Eco, Ferdinand de Saussure, J.L. Austin, J. R. Searle, Leibniz, John Locke, Vico, Johann Georg Hamann, Johann Gottfried Herder, Immanuel Kant, Georg Wilhelm Friedrich Hegel, Wilhelm von Humboldt, Charles Peirce and Friedrich Nietzsche also saw the field as important.

Though philosophers had always discussed language, it took on a central role in philosophy beginning in the late nineteenth century, especially in the English-speaking world and parts of Europe. The philosophy of language was so pervasive that for a time, in analytic philosophy circles, philosophy as a whole was understood to be purely a matter of philosophy of language. In the 20th century, "language" became an even more central theme within the most diverse traditions of philosophy. The phrase "the linguistic turn" was used to describe the noteworthy emphasis that modern-day philosophers put upon language.

Composition and parts
A major question in the field - perhaps the single most important question for formalist and structuralist thinkers - is, "how does the meaning of a sentence emerge out of its parts?"

Principle of compositionality
Much about the composition of sentences is addressed in the linguistic study of syntax.

More logic-oriented semantics tend to look towards the principle of compositionality in order to explain the relationship between meaningful parts and whole sentences. The principle of compositionality asserts that a sentence can be understood on the basis of the meaning of the parts of the sentence (words) along with an understanding of its structure.
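The principle can be illustrated with a toy fragment of English. The two-word grammar and the miniature lexicon below are invented purely for illustration; nothing hangs on the details, only on the fact that the sentence's meaning is computed from word meanings plus structure:

```python
# Toy compositional semantics: the meaning (here, a truth value) of a
# "Name Verb" sentence is computed from the meanings of its parts.

# Lexicon: names denote individuals; intransitive verbs denote the
# set of individuals they are true of.
lexicon = {
    "Socrates": "socrates",
    "Plato": "plato",
    "runs": {"socrates"},
    "sleeps": {"socrates", "plato"},
}

def meaning(sentence):
    """Compose a truth value from word meanings plus syntactic structure."""
    name, verb = sentence.split()          # the (trivial) syntax
    return lexicon[name] in lexicon[verb]  # predication as set membership

print(meaning("Socrates runs"))  # True
print(meaning("Plato runs"))     # False
```

Finitely many word meanings and one combination rule thereby determine the meanings of every sentence the little grammar generates.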

Problem of universals and composition
One debate that has captured the interest of many philosophers is the debate over the meaning of universals. One might ask, for example, what people mean when they say the word "rocks". Two general answers have emerged to this question. Some have said that the expression stands for some real entity out in the world called "rocks". Others have said that it stands for some collection of particular rocks that we put into a common category. The former position has been called philosophical realism, and the latter nominalism.

Consider the sentence "Socrates is a man", which connects a subject, Socrates (S), with a predicate, man (M). From the radical realist's perspective, the connection between S and M is a connection between two abstract entities: there is an entity, "man", and an entity, "Socrates", and the two connect together in some way or overlap one another. Plato's theory of forms was an instance of this.

From a nominalist's perspective, the connection between S and M is the connection between a particular entity (Socrates) and a vast collection of particular things (men). To say that Socrates is a man is to say that Socrates is a part of the class of "men".

Another perspective is to consider "man" to be a property of the entity, "Socrates". A property is a characteristic of the thing.

Still another perspective considers "man" to be the product of a propositional function. A propositional function is an operation of language that takes an entity (Socrates) and outputs a proposition. In other words, a propositional function is like an algorithm. The meaning of man is whatever takes the entity, "Socrates", and turns it into the statement, "Socrates is a man".
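The idea can be made concrete with a small sketch (modeling propositions as sentences, i.e. strings, is a crude but convenient choice made only for illustration):

```python
# A propositional function takes an entity and outputs a proposition.
# Propositions are modeled here, crudely, as sentences (strings).
def man(entity):
    """The propositional function associated with 'man'."""
    return f"{entity} is a man"

print(man("Socrates"))  # Socrates is a man
```

On this view, the meaning of "man" just is this operation: whatever takes an entity and yields the corresponding statement.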

The nature of meaning
The answer to the question, "What is the meaning of meaning?", is not immediately obvious. One section of philosophy of language tries to answer this very question.

Types of meaning
Geoffrey Leech posited that there are two essentially different types of linguistic meaning: conceptual and associative.

The conceptual meaning of an expression has to do with the definitions of words themselves, and the features of those definitions. This kind of meaning is treated using a technique called semantic feature analysis. The conceptual meaning of an expression inevitably involves both definition (also called "connotation" or "intension" in the literature) and extension (also called "denotation").

The associative meaning of an expression has to do with the individual mental understandings of the speaker. It can, in turn, be broken up into six sub-types: connotative, collocative, social, affective, reflected and thematic (Mwihaki 2004).

Vagueness
One issue that has bothered philosophers and ordinary people for as long as there have been words is the problem of the vagueness of words. Often, meanings expressed by the speaker are not as explicit as the listener would like them to be. The consequences of vagueness can be disastrous to classical logic because they give rise to the Sorites paradox: if removing one grain from a heap of sand always leaves a heap, then by repeated removals a single grain must still be a heap.

Ideas and meaning
To the question, "what is meaning?", some have answered "meanings are ideas". By such accounts, "ideas" are used to refer to images as held in the mind, or to mental activity in general.

Each idea is understood to be necessarily about something external and/or internal, real or imaginary. For example, in contrast to the abstract meaning of the universal "dog", the referent "this dog" may mean a particular real life chihuahua. In both cases, the word is about something, but in the former it is about the class of dogs as generally understood, while in the latter it is about a very real and particular dog in the real world.

Empiricism and words
The classical empiricists are usually taken to be the most strident defenders of idea theories of meaning.

David Hume is well-known for his belief that thoughts were kinds of imaginable entities. (See his Enquiry Concerning Human Understanding, section 2). It might be inferred that this perspective also applied to his theory of meaning.

His forebear, Locke, seemed a bit more skeptical, considering all ideas to be both imaginable objects of sensation and the very unimaginable objects of reflection. He stressed, in the Essay Concerning Human Understanding, that words are used as signs for ideas, but also to signify the lack of certain ideas.

Mental images, sounds, and recollections have been called "mental representations" in current literature. Those who defend this view are called representationalists.

Critique of idea theories
Over the past century, idea theories of meaning have been criticized by many philosophers for several reasons.

One criticism, made as early as George Berkeley and as late as Ludwig Wittgenstein, is that ideas alone are unable to account for the different variations within a general meaning. For example, any hypothetical image of the meaning of "dog" would have to include such varied images as a chihuahua, a pug, and a Black Lab, and this seems impossible to imagine, since all of those particular breeds look very different from one another. Another way to see this point is to ask why, if we have an image of a specific type of dog (say, a chihuahua), that image should be entitled to represent the entire concept.

Another criticism is that some meaningful words, known as non-lexical items, don't have any meaningfully associated image. For example, the word "the" has a meaning, but one would be hard-pressed to find a mental representation that fits it.

Another is the problem of composition - it is difficult to explain how words and phrases combine into sentences if only ideas are involved in meaning.

Still another objection lies in the observation that certain linguistic items name something in the real world, and are meaningful, yet which we have no mental representations to deal with. For instance, it is not known what Bismarck's mother looked like, yet the phrase "Bismarck's mother" still has meaning.

A cognitive idea theory


But the idea theory of meaning has lately been defended in new form. Called the theory of prototypes, it suggests that classes are understood on the basis of the ideas we might have about particular, ideal member(s) of the class.

For example, the category of "birds" may have the idea of a robin as the prototype -- the ideal kind of bird. With experience, we come to grade the members of the class as being more or less bird-like by comparing them to the prototype. So, for example, a penguin or an ostrich would sit at the edge of the meaning of "bird", because penguins and ostriches are unlike robins.
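One crude way to picture the proposal is to grade candidate members by their overlap with the prototype's features. The feature lists below are invented for illustration and carry no theoretical weight:

```python
# Grade class membership by similarity to a prototype (a toy model).
prototype_bird = {"flies", "sings", "small", "has feathers", "lays eggs"}

def birdlikeness(features):
    """Fraction of the prototype's features the candidate shares."""
    return len(features & prototype_bird) / len(prototype_bird)

robin   = {"flies", "sings", "small", "has feathers", "lays eggs"}
penguin = {"swims", "has feathers", "lays eggs"}

print(birdlikeness(robin))    # 1.0 -- matches the prototype exactly
print(birdlikeness(penguin))  # 0.4 -- at the edge of the category
```

Membership thus comes in degrees rather than being all-or-nothing, which is the feature that distinguishes prototype theories from classical definitions.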

If true, then this theory would account for the concern expressed by Wittgenstein (above). In which case, one of the more decisive criticisms against the idea theory of meaning would be overcome.

This theory of prototypes has been defended by contemporary cognitive scientists Eleanor Rosch and George Lakoff.

Truth and meaning
Some have asserted that the meaning of an expression is nothing substantially more or less than the truth conditions it involves. For such theories, an emphasis is placed upon reference to actual things in the world to account for meaning, with the caveat that reference more or less explains the greater part (or all) of meaning itself.

Logic and language
One group of philosophers who advocated a truth-theory of meaning was the logical positivists, who put stock in the notion that the meaning of a statement arises from how it is verified.

In their analysis, logic was at the core of understanding truth and meaning. To understand this insight, some explanation of the history of logic is necessary.

Classical logicians had known since Aristotle how to codify certain common patterns of reasoning. But the turn toward language philosophy is tied closely to the development of modern logic, which began with the work of the German logician Gottlob Frege in the late nineteenth century. Frege, building on the algebraic logic of George Boole and working in parallel with Charles Sanders Peirce, advanced logic significantly by showing how to codify inferences using sentential connectives, like and, or and if-then, and quantifiers, like all and some. Much of this work was made possible by the development of set theory.

Logical analysis was further advanced by Bertrand Russell and Alfred North Whitehead in their groundbreaking Principia Mathematica, which attempted to produce a formal language with which the truth of all mathematical statements could be demonstrated from first principles. Russell differed from Frege greatly on many points, however. He rejected (or perhaps misunderstood) Frege's sense-reference distinction. He also disagreed that language was of fundamental significance to philosophy, and saw the project of developing formal logic as a way of eliminating all of the confusions caused by ordinary language, thereby creating a perfectly transparent medium in which to conduct traditional philosophical argument. He hoped, ultimately, to extend the proofs of the Principia to all possible true statements, a scheme he called logical atomism. For a while it appeared that his pupil Wittgenstein had succeeded in this plan with his "Tractatus Logico-Philosophicus".

Russell's work, and that of his colleague G. E. Moore, developed in response to what they perceived as the nonsense dominating British philosophy departments at the turn of the century, a kind of British Idealism most of which was derived (albeit very distantly) from the work of Hegel. In response Moore developed an approach ("Common Sense Philosophy") which sought to examine philosophical difficulties by a close analysis of the language used in order to determine its meaning. In this way Moore sought to expunge philosophical absurdities such as "time is unreal". Moore's work would have significant, if oblique, influence (largely mediated by Wittgenstein) on Ordinary language philosophy.

Davidson, Tarski, and truth theories
The Vienna Circle, a famous group of logical positivists from the early 20th century (closely allied with Russell and Frege), adopted the verificationist theory of meaning. The verificationist theory of meaning (in at least one of its forms) states that to say that an expression is meaningful is to say that there are some conditions of experience that could exist to show that the expression is true. As noted, Frege and Russell were two proponents of this way of thinking.

A semantic theory of truth was produced by Alfred Tarski for the semantics of logic. According to Tarski's account, meaning consists of a recursive set of rules that end up yielding an infinite set of sentences, "'p' is true if and only if p", covering the whole language. His innovation produced the notion of propositional functions discussed in the section on universals (which he called "sentential functions"), and a model-theoretic approach to semantics (as opposed to a proof-theoretic one). Finally, some links were forged to the correspondence theory of truth (Tarski, 1944).
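The recursive character of such a definition can be sketched for a toy propositional language. The tuple representation below is an invented convenience, and Tarski's own definition covers quantified languages and proceeds through satisfaction rather than truth directly; still, the shape of the recursion is the same:

```python
# Sentences are atomic strings or nested tuples: ("not", p),
# ("and", p, q), ("or", p, q). A valuation fixes the atomic cases;
# the recursive clauses then determine truth for every sentence.
valuation = {"snow is white": True, "grass is red": False}

def true_in(s, v):
    if isinstance(s, str):      # base case: atomic sentence
        return v[s]
    op = s[0]
    if op == "not":
        return not true_in(s[1], v)
    if op == "and":
        return true_in(s[1], v) and true_in(s[2], v)
    if op == "or":
        return true_in(s[1], v) or true_in(s[2], v)
    raise ValueError(f"unknown connective: {op}")

print(true_in(("and", "snow is white", ("not", "grass is red")), valuation))  # True
```

A finite stock of clauses thereby fixes truth conditions for infinitely many sentences, which is precisely the property that a finitely statable theory of meaning requires.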

Perhaps the most influential current approach in the contemporary theory of meaning is that sketched by Donald Davidson in his introduction to the collection of essays Truth and Meaning in 1967. There he argued for the following two theses:
 * Any learnable language must be statable in a finite form, even if it is capable of a theoretically infinite number of expressions--as we may assume that natural human languages are, at least in principle. If it could not be stated in a finite way then it could not be learned through a finite, empirical method such as the way humans learn their languages. It follows that it must be possible to give a theoretical semantics for any natural language which could give the meanings of an infinite number of sentences on the basis of a finite system of axioms.
 * Giving the meaning of a sentence, he further argued, was equivalent to stating its truth conditions. He proposed that it must be possible to account for language as a set of distinct grammatical features together with a lexicon, and for each of them explain its workings in such a way as to generate trivial (obviously correct) statements of the truth conditions of all the (infinitely many) sentences built up from these.

The result is a theory of meaning that rather resembles, by no accident, Tarski's account.

Davidson's account, though brief, constitutes the first systematic presentation of truth-conditional semantics. He proposed simply translating natural languages into first-order predicate calculus in order to reduce meaning to a function of truth.

Critiques of truth-theories of meaning
Quine attacked both verificationism and the very notion of meaning in his famous essay, "Two Dogmas of Empiricism". In it, he suggested that meaning was nothing more than a vague and dispensable notion; what was more worth studying, he asserted, was the synonymy between signs. He also pointed out that verificationism was tied to the distinction between analytic and synthetic statements, and argued that no such divide could be drawn in a principled way. He also suggested that the unit of analysis for any potential investigation into the world (and, perhaps, meaning) would be the entire body of statements taken as a collective, not just individual statements on their own.

Other criticisms can be raised on the basis of the limitations that truth-conditional theorists themselves admit to. Tarski, for instance, recognized that truth-conditional theories of meaning only make sense of statements, but fail to explain the meanings of the lexical parts that make up statements. Rather, the meaning of the parts of statements is presupposed by an understanding of the truth-conditions of a whole statement, and explained in terms of what he called "satisfaction conditions".

Still another objection (noted by Frege and others) was that some kinds of statements don't seem to have any truth-conditions at all. For instance, "Hello!" has no truth-conditions, because it doesn't even attempt to tell the listener anything about the state of affairs in the world. In other words, sentences come in different grammatical moods, and not all of them are in the business of describing the world.

Deflationist accounts of truth, sometimes called 'irrealist' accounts, are the staunchest source of criticism of truth-conditional theories of meaning. According to them, "truth" is a word with no serious meaning or function in discourse except to affirm an expression. For instance, for the deflationist, the sentences "It's true that Tiny Tim is trouble" and "Tiny Tim is trouble" are equivalent. In consequence, for the deflationist, any appeal to truth as an account of meaning has little explanatory power.

The sort of truth-theories presented here can also be attacked for their formalism both in practice and principle. The principle of formalism is challenged by the informalists, who suggest that language is largely a construction of the speaker, and so, not compatible with formalization. The practice of formalism is challenged by those who observe that formal languages (such as present-day quantificational logic) fail to capture the expressive power of natural languages (as is arguably demonstrated in the awkward character of the quantificational explanation of definite description statements, as laid out by Bertrand Russell).

Finally, over the past century, forms of logic have been developed that are not dependent exclusively on the notions of truth and falsity. Some of these types of logic have been called modal logics. They explain how certain logical connectives such as "if-then" work in terms of necessity and possibility. Indeed, modal logic was the basis of one of the most popular and rigorous formulations in modern semantics called the Montague grammar. The successes of such systems naturally give rise to the argument that these systems have captured the natural meaning of connectives like if-then far better than an ordinary, truth-functional logic ever could.
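The possible-worlds ("Kripke") semantics underlying such modal systems can be sketched in miniature. The worlds, accessibility relation, and valuation below are invented for illustration:

```python
# A toy Kripke model: worlds, an accessibility relation between
# worlds, and a record of which atomic sentences hold at which worlds.
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}
holds = {("p", "w2"), ("p", "w3")}

def necessarily(p, w):
    """'Necessarily p' is true at w iff p holds at every accessible world."""
    return all((p, v) in holds for v in access[w])

def possibly(p, w):
    """'Possibly p' is true at w iff p holds at some accessible world."""
    return any((p, v) in holds for v in access[w])

print(necessarily("p", "w1"))  # True: p holds at w2 and w3
print(possibly("p", "w3"))     # False: no world is accessible from w3
```

Connectives are thus explained by quantifying over possible worlds rather than by a two-valued truth table, which is what lets such systems treat "if-then" and related notions more finely than ordinary truth-functional logic.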

Wittgenstein's turn
The philosopher Ludwig Wittgenstein was originally an artificial language philosopher, following the influence of Russell, Frege, and the Vienna Circle. However, as he matured, he came to appreciate more and more the phenomenon of natural language. Philosophical Investigations, published after his death, signalled a sharp departure from his earlier work with its focus upon ordinary language use.

His work would come to inspire future generations and spur forward a whole new discipline, which explained meaning in a new way. Meaning in natural languages was seen as primarily a question of how the speaker uses language to express intentions.

This close examination of natural language proved to be a powerful philosophical technique. Practitioners who were influenced by Wittgenstein's approach have included an entire tradition of thinkers, featuring J. L. Austin, P. F. Strawson, John Searle, Paul Grice, R. M. Hare, R. S. Peters, and Jürgen Habermas.

Peter Strawson, Keith Donnellan, and usage
Past philosophers had understood reference to be tied to words themselves. However, Sir Peter Strawson disagreed in his seminal essay, "On Referring", where he argued that statements are not true or false on their own; rather, only the uses of statements can be considered true or false.

Indeed, one of the hallmarks of the ordinary use perspective is its insistence upon the distinction between meaning and use. "Meanings", for ordinary language philosophers, are the instructions for the usage of words - the common and conventional definitions of words. Usage, on the other hand, consists of the actual meanings that individual speakers have - the things that an individual speaker in a particular context wants to refer to. The word "dog" is an example of a meaning, but pointing at a nearby dog and shouting "This dog smells foul!" is an example of usage. From this distinction between usage and meaning arose the divide between the fields of Pragmatics and Semantics.

Yet another distinction is of some utility in discussing language: "mentioning". Mention occurs when an expression refers to itself as a linguistic item, usually surrounded by quotation marks. For instance, in the expression "'Opopanax' is hard to spell", what is referred to is the word itself ("opopanax") and not what it means (an obscure gum resin). Contexts like this, in which an expression fails to have its customary reference, are often called "opaque contexts".

In his essay, "Reference and Definite Descriptions", Keith Donnellan sought to improve upon Strawson's distinction. He pointed out that there are two uses of definite descriptions: attributive and referential. Attributive uses provide a description of whoever is being referred to, while referential uses point out the actual referent. On Donnellan's own example, someone who says "Smith's murderer is insane" without knowing who the murderer is uses the description attributively, while someone who uses it to pick out a particular man in the dock uses it referentially. Attributive uses are like mediated references, while referential uses are more directly referential.

Paul Grice
The philosopher Paul Grice, working within the ordinary language tradition, understood "meaning" to have two kinds: natural and non-natural. Natural meaning has to do with cause and effect, as in the expression "these spots mean measles". Non-natural meaning, on the other hand, has to do with the intentions of the speaker in communicating something to the listener.

In his essay, Logic and Conversation, Grice went on to explain and defend an explanation of how conversations work. His guiding maxim was called the cooperative principle, which claimed that the speaker and the listener will have mutual expectations of the kind of information that will be shared. The principle is broken down into four maxims: Quality (which demands truthfulness and honesty), Quantity (demand for just enough information as is required), Relation (relevance of things brought up), and Manner (lucidity). This principle, if and when followed, lets the speaker and listener figure out the meaning of certain implications by way of inference. For example, a reply of "some of the students passed" ordinarily implicates, via the maxim of Quantity, that not all of them did.

The works of Grice led to an avalanche of research and interest in the field, both supportive and critical. One spinoff was called Relevance theory, developed by Dan Sperber and Deirdre Wilson during the mid-1980s, whose goal was to make the notion of relevance more clear.

Jürgen Habermas
In his work, "Universal pragmatics", Habermas began a program that sought to improve upon the work of the ordinary language tradition. In it, he laid out the goal of a valid conversation as a pursuit of mutual understanding.

Conceptual and inferential role semantics
Main article: Inferential role semantics

Michael Dummett argued against the kind of truth-conditional semantics presented by Davidson. He argued instead that basing semantics on assertion conditions avoids a number of difficulties with truth-conditional semantics, such as the transcendental nature of certain kinds of truth condition. (A semantics based upon assertion conditions is called a verificationist semantics: cf. the verificationism of the Vienna Circle.) Dummett leverages work done in proof-theoretic semantics to provide a kind of inferential role semantics, in which:
 * The meaning of sentences and grammatical constructs is given by their assertion conditions; and
 * Such a semantics is only guaranteed to be coherent if the inferences associated with the parts of language are in logical harmony.

Other work has been done by Gilbert Harman on the closely related subject of conceptual role semantics.

Critiques of use theories of meaning
Cognitive scientist Jerry Fodor has noted that use theories (of the Wittgensteinian kind) seem to be committed to the notion that language is a public phenomenon -- that there is no such thing as a "private language". Fodor criticizes such claims because he thinks it is necessary to create or describe the language of thought, which would seemingly require the existence of a "private language".

Some philosophers of language, such as Christopher Gauker, have attacked use theories of meaning by denying that the discovery of a speaker's intentions is a necessary part of the listener's strategies in decoding and inferring.

Consequences and meaning
Still another perspective comes courtesy of the Pragmatists, who insist that the meaning of an expression lies in its consequences. Philosopher and polymath Charles Sanders Peirce wrote the following:

"The whole function of thought is to produce habits of action... To develop its meaning, we have, therefore, simply to determine what habits it produces, for what a thing means is simply what habits it involves. Now, the identity of a habit depends on how it might lead us to act, not merely under such circumstances as are likely to arise, but under such as might possibly occur, no matter how improbable they may be."

"...I only desire to point out how impossible it is that we should have an idea in our minds which relates to anything but conceived sensible effects of things. Our idea of anything is our idea of its sensible effects; and if we fancy that we have any other we deceive ourselves." (from the essay "How to Make Our Ideas Clear"). In a sense, Peirce advocates in these statements a theory of meaning that is somewhat like verificationism, but he is unique in how he arrives at that point.

Outside of the Pragmatic tradition was Canadian 20th century philosopher of media Marshall McLuhan. His famous dictum, "the medium is the message", can be understood to be a consequentialist theory of meaning. His idea was that the medium which is used to communicate carries with it information: namely, the consequences that arise from the fact that the medium has become popular. For example, one "meaning" of the lightbulb might be the idea of being able to read during the night.

The controversial social psychologist and ethicist Thomas Szasz also seemed to hold this view, stating that "a word means its consequences" in debate.

Language and the world
Investigations into how language interacts with the world are called "theories of reference".


 * Gottlob Frege was an advocate of a mediated reference theory, which appealed to the sense of a referent (the sense being the way the referent is presented).
 * By contrast, in response to British idealism, Bertrand Russell sought to scrap all "unreal" things from language. To do this, he created a direct reference theory.

Frege's mediated reference theory differs from Russell's direct reference theory in that the former leaves room for senses, while the latter does not. The direct theory is problematic because it seemingly fails to recognize the difference in meaning between two expressions that have the same referent but different senses. For example, "the President of the United States in 2004" and "George W. Bush" refer to the same person, but in one case the person is presented in a certain light - as the President - while in the other he is presented just by name. Something in between must account for this meaningful difference.

Innateness and learning
Some of the major issues in the philosophy of language that deal with the mind are paralleled by modern psycholinguistics. Some important questions: how much of language is innate? Is language acquisition a special faculty in the mind? What's the connection between thought and language?

There are three general perspectives on the issue of language learning:
 * The behaviorist perspective, which holds that not only is the bulk of language learned, but that it is learned via conditioning;
 * The hypothesis-testing perspective, which states that the child acquires syntactic rules and meanings by conjecturing and testing hypotheses, in much the same way that other learning occurs;
 * The innatist perspective, which states that at least some of the syntactic settings are innate and hardwired.

There are varying notions of the structure of the brain when it comes to language, as well:
 * Connectionist models, which emphasize the idea that a person's lexicon and thoughts operate in a kind of network;
 * Nativist models, which assert that there are specialized devices in the brain dedicated to language acquisition;
 * Computational models, which emphasize logic-like, rule-governed processing in the mind;
 * Emergentist models, which hold that the natural faculties are complex systems that emerge from simpler biological parts;
 * Reductionist models, which attempt to explain higher-level mental processes in terms of basic low-level neurophysiological activity.

Language and thought
Another important question relating to language and the mind is: to what extent does language influence thought (and vice versa)? There have been a number of different perspectives on this issue.

For example, the linguists Edward Sapir and Benjamin Whorf suggested that language limits the extent to which members of a linguistic community can think about certain subjects (a hypothesis paralleled in George Orwell's novel "1984"). To a lesser extent, issues in the philosophy of rhetoric (including the notion of the framing of debate) suggest the influence of language upon thought.

There is also some controversy about the very meaning of a "thought". Gottlob Frege believed that thoughts occupied a "third realm" that was neither psychological nor a part of the physical world, and held that his Begriffsschrift calculus was a theory of thought. By contrast, Wittgenstein - in the Tractatus Logico-Philosophicus - considered a thought to be a "significant proposition".

Social interaction and language
Metasemantics is a term of art used to describe all those fields that examine the social conditions that give rise to meanings and languages. Etymology (the study of the origins of words) and Stylistics (philosophical argumentation over what makes "good grammar", relative to a particular language) are two examples of metasemantic fields.

Meaning and social structures
One of the major fields of sociology, symbolic interactionism, is based on the insight that human social organization is based almost entirely on the use of meanings.

Common ground
Common ground is a key notion in Pragmatics, given popular formulation by Herbert Clark. He investigates how all communication depends on a store of common knowledge between speaker and listener.

Rhetoric and discourse analysis
Rhetoric is the study of the particular words that people use in order to achieve the proper emotional and rational effect in the listener, be it to persuade, provoke, endear, teach, etc. Some offshoots include:
 * The examination of propaganda and didacticism;
 * The examination of the purposes of swearing and pejoratives (especially how they influence the behavior of others and define relationships);
 * The effects of gendered language;
 * Linguistic transparency, or speaking in an accessible manner, inspired by George Orwell's essay, Politics and the English Language;
 * Performative utterances and the various tasks that language can perform (called "speech acts"), pioneered by J.L. Austin's book, How to Do Things With Words.
 * The logical concept of the domain of discourse.

Literary theory
Literary theory is a discipline that overlaps with the philosophy of language. It emphasizes the methods that readers and critics use in understanding a text. This field, being an outgrowth of the study of how to properly interpret messages, is closely tied to the ancient discipline of hermeneutics.

Miscellaneous
In the 1950s, the artificial language Loglan was created, based on first-order predicate logic.

Important theorists
Among the most important theorists in the philosophy of language are:
 * Plato and Aristotle - classical philosophers
 * Ferdinand de Saussure - founder of linguistic Structuralism
 * John Stuart Mill - influential in theories of reference
 * Ludwig Wittgenstein - creator of the "meaning is use" dictum
 * Ernst Cassirer - theory of language as part of a general theory of symbolic forms
 * Walter Benjamin, Martin Heidegger - philosophers tied to the Humboldtian tradition
 * Valentin Voloshinov, Ferruccio Rossi-Landi - Marxist theoreticians of language
 * Michel Foucault, Jacques Derrida - Post-structuralist figures
 * Hélène Cixous, Julia Kristeva, Judith Butler - feminist theoreticians of language
 * Mikhail Bakhtin, Maurice Blanchot, Paul de Man - Theoreticians of literature whose work is of philosophical relevance
 * Charles Peirce, Umberto Eco - advocates of philosophically oriented forms of semiotics
 * Gottlob Frege, Bertrand Russell, Saul Kripke, Richard Montague - analytical philosophers of language rooted in logic-like analysis of language
 * Noam Chomsky and Jerry Fodor - syntactic, computational, and knowledge-oriented perspectives
 * Keith Donnellan, Jürgen Habermas, J.L. Austin, H. P. Grice, and John Searle - use-oriented theorists

Important topics and terms

 * Fields of interest
   * Pragmatics, Rhetoric, Semantics, Semiotics, Syntax
   * Semantics of logic
   * General semantics
   * Symbolic interactionism


 * Parts of speech
   * Speaker (or "encoder")
   * Interpreter (or "decoder")
   * Intentionality
   * Signs and Phonemes
   * Tone
   * Truth conditions (and/or satisfaction conditions)
   * Meaning
   * Ideas
   * Sense and reference
   * Speech acts
   * Linguistic context (see also deixis)
   * Linguistic community


 * Essential aspects of meaning
   * Concepts
   * Categories, sets, classes, and Natural kinds
   * Types and tokens
   * Genus and Species
   * Connotation and denotation (intension and extension)
   * Statements and propositions
   * Subject and predicate
   * Synonyms, antonyms, and all other -onyms


 * Essential aspects of reference
   * Entities
   * Properties
   * Relations
   * Deixis
   * Referential use
   * Attributive use


 * Linguistic phenomena
   * Demonstratives and Indexicals
   * Descriptions, esp. Definite descriptions
   * Proper names
   * Metaphor
   * "Is" (of identity, predication, existence)
   * Sentences (Imperative, Indicative, and Performative)