Inductive reasoning is the complement of deductive reasoning.

Induction or inductive reasoning, sometimes called inductive logic, is the process of reasoning in which the premises of an argument support the conclusion but do not ensure it. It is used to ascribe properties or relations to types based on limited observations of particular tokens, or to formulate laws based on limited observations of recurring phenomenal patterns. Induction moves, for example, from specific propositions such as:

• This ice is cold.
• A billiard ball moves when struck with a cue.

to infer general propositions such as:

• All ice is cold. Or: There is no ice in the Sun.
• For every action, there is an equal and opposite reaction.

## Examples

Strong:

All observed crows are black.
Therefore all crows are black.

This exemplifies the nature of induction: inferring the universal from the particular. Clearly the conclusion is not certain: unless we have observed every crow - and how could we know that? - there may be some rare blue ones.

Weak:

I always hang pictures on nails.
Therefore all pictures hang from nails.

In this example, the premise is built upon a certainty: "I always hang pictures on nails." But not all people hang pictures on nails, and those who do use nails may do so only some of the time. There are a number of objects that may be used to hang pictures, including, but not limited to, screws, bolts, and clips. The conclusion is an overgeneralization and is, in some instances, false.

Teenagers get lots of speeding tickets.
Therefore all teenagers speed.

In this example, the foundational premise is not built upon a certainty: not every teenager observed speeding has received a ticket. It may be in the general nature of teenagers to speed - as it is in the nature of crows to be black - but the premise rests more on wishful thinking than on direct observation.

## Validity

Formal logic as most people learn it is deductive rather than inductive. Some philosophers claim to have created systems of inductive logic, but it is controversial whether a logic of induction is even possible. In contrast to deductive reasoning, conclusions arrived at by inductive reasoning do not necessarily have the same degree of certainty as the initial premises. For example, a conclusion that all swans are white is obviously wrong, but may have been thought correct in Europe until the settlement of Australia. Inductive arguments are never binding but they may be cogent. Inductive reasoning is deductively invalid. (An argument in formal logic is valid if and only if it is not possible for the premises of the argument to be true whilst the conclusion is false.)

In induction there are always many conclusions that can reasonably be related to certain premises. Inductions are open; deductions are closed.

The classic philosophical treatment of the problem of induction, meaning the search for a justification for inductive reasoning, was by the Scotsman David Hume. Hume highlighted the fact that our everyday reasoning depends on patterns of repeated experience rather than deductively valid arguments. For example we believe that bread will nourish us because it has in the past, but it is at least conceivable that bread in the future will poison us.

Someone who insisted on sound deductive justifications for everything would starve to death, said Hume. Instead of unproductive radical skepticism about everything, he advocated a practical skepticism based on common sense, in which the inevitability of induction is accepted.

Induction is sometimes framed as reasoning about the future from the past, but in its broadest sense it involves reaching conclusions about unobserved things on the basis of what is observed. Inferences about the past from present evidence (e.g. archaeology) count as induction. Induction could also be across space rather than time, e.g. conclusions about the whole universe from what we observe in our galaxy or national economic policy based on local economic performance.

Twentieth-century developments have framed the problem of induction very differently. Rather than a choice about what predictions to make about the future, it can be seen as a choice of what concepts to fit to observations (see the entry for grue) or of what curves to fit to a set of observed data points. Nelson Goodman posed a "new riddle of induction" by defining a property, grue, to which induction does not apply.

## Types of inductive reasoning

Generalization
A generalization, or inductive generalization, proceeds from a premise about a sample to a conclusion about the population.
1. A proportion Q of the sample has attribute A.
2. Conclusion: Q of the population has attribute A.

The support which the premises provide for the conclusion is dependent on the number of individuals in the sample group compared to the number in the population, and the randomness of the sample. The hasty generalization and biased sample are fallacies related to generalization.
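The dependence of inductive support on sample size can be sketched with a small simulation. The crow population and its 97% black proportion below are invented purely for illustration:

```python
import random

# Hypothetical population: 10,000 crows, 97% of them black (made-up numbers).
random.seed(42)
population = ["black"] * 9700 + ["blue"] * 300

def generalize(sample):
    """Estimate the population proportion of an attribute from a sample."""
    return sum(1 for bird in sample if bird == "black") / len(sample)

small_sample = random.sample(population, 10)    # prone to hasty generalization
large_sample = random.sample(population, 2000)  # stronger inductive support

print(generalize(small_sample))  # may land far from the true 0.97
print(generalize(large_sample))  # very likely close to 0.97
```

A non-random sample (say, crows from a single forest) would illustrate the biased-sample fallacy: no increase in sample size repairs it.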

Statistical syllogism
A statistical syllogism proceeds from a generalization to a conclusion about an individual.
1. A proportion Q of population P has attribute A.
2. An individual I is a member of P.
3. Conclusion: There is a probability which corresponds to Q that I has A.

The proportion in premise 1 can be a word like '3/5 of', 'all' or 'few'. Two dicto simpliciter fallacies can occur in statistical syllogisms. They are "accident" and "converse accident".
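The schema can be written as a trivial function; the arithmetic is empty, which makes the point that all the inductive work is done by premise 1:

```python
def statistical_syllogism(q: float) -> float:
    """Premise 1: a proportion q of population P has attribute A.
    Premise 2: individual I is a member of P.
    Conclusion: I has A with a probability corresponding to q."""
    if not 0.0 <= q <= 1.0:
        raise ValueError("q must be a proportion in [0, 1]")
    return q

# Verbal proportions map onto the numeric scale: '3/5 of' -> 0.6, 'all' -> 1.0.
print(statistical_syllogism(3 / 5))  # 0.6
print(statistical_syllogism(1.0))    # 1.0
```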

Simple Induction
Simple induction proceeds from a premise about a sample group to a conclusion about another individual.
1. Proportion Q of known instances of population P has attribute A.
2. Individual I is another member of P.
3. Conclusion: There is a probability which corresponds to Q that I has A.

This is actually a combination of a generalization and a statistical syllogism, where the conclusion of the generalization is also the first premise of the statistical syllogism.
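This composition can be made literal in code: simple induction is a generalization whose output feeds a statistical syllogism. The crow sample below is a made-up illustration:

```python
def generalization(sample, attribute):
    """Step 1: from a sample to an estimated population proportion q."""
    return sum(1 for x in sample if x == attribute) / len(sample)

def statistical_syllogism(q):
    """Step 2: from a population proportion q to a probability for a new individual."""
    return q

def simple_induction(sample, attribute):
    """Simple induction = generalization composed with a statistical syllogism."""
    return statistical_syllogism(generalization(sample, attribute))

observed_crows = ["black"] * 9 + ["grey"]
print(simple_induction(observed_crows, "black"))  # 0.9
```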

Argument from analogy
An (inductive) analogy proceeds from known similarities between two things to a conclusion about an additional attribute that is common to both things:
1. Thing P is similar to thing Q.
2. P has attribute A.
3. Conclusion: Q has attribute A.

An analogy relies on the inference that the known shared properties (similarities) imply that A is also a shared property. The support which the premises provide for the conclusion is dependent upon the relevance and number of the similarities between P and Q.
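One crude way to quantify "relevance and number of the similarities" is a set-overlap measure. The Jaccard similarity and the planetary attributes below are an illustrative heuristic, not a standard part of the argument form:

```python
def analogical_support(p_attrs: set, q_attrs: set) -> float:
    """Jaccard similarity: shared attributes divided by all attributes.
    Used here as a rough proxy for the strength of an analogy."""
    return len(p_attrs & q_attrs) / len(p_attrs | q_attrs)

# Hypothetical attribute sets (illustration only).
earth = {"orbits sun", "has atmosphere", "rocky", "has life"}
mars = {"orbits sun", "has atmosphere", "rocky"}

print(round(analogical_support(mars, earth), 2))  # 0.75
```

Note what the measure omits: it counts similarities but cannot judge their relevance to the inferred attribute, which is exactly where real analogical arguments succeed or fail.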

Causal inference
A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect.

Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship.

Prediction
A prediction draws a conclusion about a future individual from a past sample.
1. Proportion Q of observed members of group G have had attribute A.
2. Conclusion: There is a probability which corresponds to Q that the next observed member of G will have A.

Argument from authority
An argument from authority draws a conclusion about the truth of a statement based on the proportion of true propositions a source has made. It has the same form as a prediction.
1. Proportion Q of the claims of authority A have been true.
2. There is a probability which corresponds to Q that this claim of A is true.

Example:

All observed claims from websites about logic are true.
This information came from websites about logic.
Therefore, this information is (probably) true.

## Bayesian inference

Of the candidate systems of inductive logic, the most influential is Bayesianism, which uses probability theory as a framework for induction. Bayes' theorem is used to calculate how much the strength of one's belief in a hypothesis should change, given some evidence.
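A minimal sketch of such an update for a single hypothesis H against its negation; the prior and likelihood values are made up for illustration:

```python
def bayes_update(prior: float, likelihood: float, likelihood_alt: float) -> float:
    """Posterior via Bayes' theorem:
    P(H|E) = P(E|H) P(H) / (P(E|H) P(H) + P(E|~H) P(~H))"""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical: H = "all swans are white", prior degree of belief 0.9;
# E = observing another white swan, with P(E|H) = 1.0 and P(E|~H) = 0.8.
posterior = bayes_update(prior=0.9, likelihood=1.0, likelihood_alt=0.8)
print(round(posterior, 3))  # 0.918
```

Each confirming observation nudges the belief upward, but a single black swan (likelihood 0 under H) drives the posterior to zero, matching the swan example above.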

There is debate around what it is that informs the original degree of belief. Objective Bayesians seek an objective value for the degree of probability of a hypothesis being correct, and so do not avoid the philosophical criticisms of objectivism. Subjective Bayesians hold that the prior probabilities represent subjective degrees of belief, but that repeated application of Bayes' theorem leads to a high degree of agreement on the posterior probability. Because the priors remain subjective, however, they fail to provide an objective standard for choosing between conflicting hypotheses. The theorem can be used to rationally justify belief in some hypothesis, but at the expense of rejecting objectivism. Such a scheme cannot be used, for instance, to objectively decide between conflicting scientific paradigms.

Edwin Jaynes, an outspoken physicist and Bayesian, argued that 'subjective' elements are present in all of inference (e.g. in choosing axioms for deductive inference, in choosing initial degrees of belief or prior probabilities, and in choosing likelihoods), and sought a series of principles for assigning probabilities from qualitative knowledge. Maximum entropy (a generalization of the principle of indifference) and transformation groups are the two resulting tools he produced; both attempt to alleviate the subjectivity of probability assignment in specific situations by converting knowledge of e.g. symmetries of a situation into unambiguous choices for probability distributions.
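The link between maximum entropy and the principle of indifference can be illustrated with a six-sided die: absent any constraint beyond normalization, the uniform assignment is the one with maximum entropy, i.e. the one that encodes no information we do not have. The skewed distribution below is an arbitrary example:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

uniform = [1 / 6] * 6                      # the principle-of-indifference assignment
skewed = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]    # encodes extra, unjustified information

print(entropy(uniform))  # log2(6) ≈ 2.585, the maximum for six outcomes
print(entropy(skewed))   # strictly lower
```

With genuine constraints (e.g. a known mean), the maximum-entropy distribution is generally non-uniform, which is what makes the principle a proper generalization of indifference.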

Bayesians feel entitled to call their system an inductive logic because of Cox's theorem, which derives probability from a set of logical constraints on a system of inductive reasoning.