Instrumentalism

In the philosophy of science, instrumentalism is the view that concepts and theories are merely useful instruments whose worth is measured not by whether they are true or false (or correctly depict reality), but by how effective they are in explaining and predicting phenomena.

Instrumentalism relates closely to pragmatism. This methodological viewpoint often contrasts with scientific realism, which holds that theories can be more or less true descriptions of reality. Instrumentalism, however, is more a pragmatic approach to science, information, and theories than an ontological statement. Instrumentalists (like pragmatists) have often been accused of relativism, even though many instrumentalists (such as Karl Popper) are also committed to a robust objective realism.

Instrumentalism denies that theories are truth-evaluable, holding instead that they should be treated like a black box into which observed data are fed and from which observable predictions are produced. This requires a distinction between theory and observation, and within each type a distinction between terms and statements. Observation statements (O-statements) have their meaning fixed by observable truth conditions, e.g. "the litmus paper is red", whilst observation terms (O-terms) have their meaning fixed by their reference to observable things or properties, e.g. "red". Theoretical statements (T-statements) have their meaning fixed by their function within a theory and are not truth-evaluable, e.g. "the solution is acidic", whilst theoretical terms (T-terms) have their meaning fixed by their systematic function within a theory and do not refer to any observable thing or property, e.g. "acidic". Though one may think that "acidic" refers to a real property in an object, the meaning of the term can only be explained by reference to a theory about acidity, in contrast to "red", which names a property one can observe directly. Statements that mix T-terms and O-terms are therefore T-statements, since their totality cannot be directly observed.

This distinction has drawn criticism, however, on the grounds that it conflates "non-theoretical" with "observable", and likewise "theoretical" with "non-observable". For example, the term "gene" is theoretical (so a T-term) but can also be observed (so an O-term). Whether a term is theoretical is a semantic matter, because it concerns how the term gets its meaning (from a theory or from observation). Whether a term is observable is an epistemic matter, because it concerns how we can come to know about it. Instrumentalists respond that the two distinctions coincide: we can only come to know about something if we can understand its meaning through truth-evaluable observations. So in the above example, "gene" is a T-term because, although it is observable, its meaning cannot be understood from observation alone.

Instrumentalist morality resembles utilitarianism in treating moral rules only as tools for moral good. On this view, the moral code arising in a given population is simply a collection of rules useful to that population. David Hume was perhaps the first to suggest that rules might have no intrinsic or metaphysical value, being instead secular, natural, and human-made.

Political instrumentalism is a view first suggested by John Dewey and later by the Chicago school of economists, which sees politics as simply a means to an end. Milton Friedman paraphrased the viewpoint by explaining that he had no ideological love for free markets, and that he might just as well be a socialist if socialism fulfilled the ends most people seem to want. The fallibilist epistemology of Karl Popper adds to this the belief that we should empirically test all policies, verify whether they fulfill their goals, attempt to falsify them, critique them, and devise better ways to reach the desired ends.

In the philosophy of mind, instrumentalism is the view (sometimes, somewhat controversially, attributed to Daniel Dennett) that propositional attitudes such as belief are not concepts on which we can base scientific investigations of the mind and brain, but that acting as if other beings have beliefs is often a successful strategy. For example, acting as if a chess-playing computer believes that taking the queen will give it a significant advantage is a successful strategy, despite the fact that few people would argue that simple electronic devices have beliefs as we normally conceive of them.