Bayesian inference

Bayesian inference is statistical inference in which evidence or observations are used to update or to newly infer the probability that a hypothesis may be true. The name "Bayesian" comes from the frequent use of Bayes' theorem in the inference process. Bayes' theorem was first derived by the Reverend Thomas Bayes.

Evidence and changing beliefs
Bayesian inference uses aspects of the scientific method, which involves collecting evidence that is meant to be consistent or inconsistent with a given hypothesis. As evidence accumulates, the degree of belief in a hypothesis changes. With enough evidence, it will often become very high or very low. Thus, it could in theory be considered a suitable logical basis for discriminating between conflicting hypotheses: hypotheses with a very high degree of belief should be accepted as true, and those with a very low degree of belief should be rejected as false. In practice, while the general mathematical framework of Bayesian inference does hold, it requires assigning prior probabilities to hypotheses, and those assignments may be subject to arbitrary bias.


An example of Bayesian inference is:
 * For billions of years, the sun has risen after it has set. The sun has set tonight. With very high probability (or "I strongly believe that", or "it is true that") the sun will rise tomorrow. With very low probability (or "I do not at all believe that", or "it is false that") the sun will not rise tomorrow.

Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before evidence has been observed and calculates a numerical estimate of the degree of belief in the hypothesis after evidence has been observed. Bayesian inference usually relies on degrees of belief, or subjective probabilities, in the induction process and does not necessarily claim to provide an objective method of induction. Nonetheless, some Bayesian statisticians believe probabilities can have an objective value and therefore Bayesian inference can provide an objective method of induction. See scientific method.

Bayes' theorem adjusts probabilities given new evidence in the following way:
 * $$P(H_0|E) = \frac{P(E|H_0)\;P(H_0)}{P(E)}$$

where
 * $$H_0$$ represents a hypothesis, called a null hypothesis, that was inferred before new evidence, $$E$$, became available.
 * $$P(H_0)$$ is called the prior probability of $$H_0$$.
 * $$P(E|H_0)$$ is called the conditional probability of seeing the evidence $$E$$ given that the hypothesis $$H_0$$ is true. It is also called the likelihood function when it is expressed as a function of $$H_0$$ given $$E$$.
 * $$P(E)$$ is called the marginal probability of $$E$$: the probability of witnessing the new evidence $$E$$ under all mutually exclusive hypotheses. It can be calculated as the sum, over all mutually exclusive hypotheses, of the products of their prior probabilities and the corresponding conditional probabilities: $$\sum_i P(E|H_i)P(H_i)$$.
 * $$P(H_0|E)$$ is called the posterior probability of $$H_0$$ given $$E$$.

The factor $$P(E|H_0) / P(E)$$ represents the impact that the evidence has on the belief in the hypothesis. If it is likely that the evidence will be observed when the hypothesis under consideration is true, then this factor will be large. Multiplying the prior probability of the hypothesis by this factor would result in a large posterior probability of the hypothesis given the evidence. Under Bayesian inference, Bayes' theorem therefore measures how much new evidence should alter a belief in a hypothesis.
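As a minimal sketch of this update, the theorem can be written as a small function (the numbers below are illustrative, not from any example in this article):

```python
def posterior(prior, likelihood, marginal):
    """Bayes' theorem: P(H0|E) = P(E|H0) * P(H0) / P(E)."""
    return likelihood * prior / marginal

# Illustrative numbers: a hypothesis with prior 0.5, evidence seen
# 80% of the time when the hypothesis holds, marginal probability 0.65.
p = posterior(prior=0.5, likelihood=0.8, marginal=0.65)
```

Because the likelihood (0.8) exceeds the marginal (0.65), the factor is greater than 1 and the evidence raises the belief in the hypothesis above its prior of 0.5.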

Bayesian statisticians argue that even when people have very different prior subjective probabilities, new evidence from repeated observations will tend to bring their posterior probabilities closer together. While this is true for perfectly rational people with similar tendencies of attributing degrees of belief, differences radical enough in these tendencies can (and often do) vastly impede this process.

Multiplying the prior probability $$P(H_0)$$ by the factor $$P(E|H_0) / P(E)$$ can never yield a probability greater than 1. Since $$P(E)$$ is at least as great as $$P(E \cap H_0)$$, which equals $$P(E|H_0) \cdot P(H_0)$$, replacing $$P(E)$$ with $$P(E \cap H_0)$$ in the factor $$P(E|H_0) / P(E)$$ would yield a posterior probability of exactly 1. The posterior probability could exceed 1 only if $$P(E)$$ were less than $$P(E \cap H_0)$$, which never happens.

The probability of $$E$$ given $$H_0$$, $$P(E|H_0)$$, can be represented as a function of its second argument with its first argument held at a given value. Such a function is called a likelihood function; it is a function of $$H_0$$ given $$E$$. A ratio of two likelihood functions is called a likelihood ratio, $$\Lambda $$. For example,


 * $$\Lambda = \frac{L(H_0|E)}{L(\mbox{not } H_0|E)} = \frac{P(E|H_0)}{P(E|\mbox{not } H_0)} $$

The marginal probability, $$P(E)$$, can also be represented as the sum of the product of all probabilities of mutually exclusive hypotheses and corresponding conditional probabilities: $$P(E|H_0)P(H_0)+ P(E|\mbox{not }H_0)P(\mbox{not }H_0) $$.

As a result, we can rewrite Bayes' theorem as


 * $$P(H_0|E) = \frac{P(E|H_0)P(H_0)}{P(E|H_0)P(H_0)+ P(E|\mbox{not }H_0)P(\mbox{not }H_0)} = \frac{\Lambda P(H_0)}{\Lambda P(H_0) + P(\mbox{not } H_0)}$$

With two independent pieces of evidence $$E_1$$ and $$E_2$$, Bayesian inference can be applied iteratively. We could use the first piece of evidence to calculate an initial posterior probability, and then use that posterior probability as a new prior probability to calculate a second posterior probability given the second piece of evidence.

Independence of evidence implies that
 * $$P(E_1, E_2 | H_0) = P(E_1 | H_0) \times P(E_2 | H_0)$$
 * $$P(E_1, E_2) = P(E_1) \times P(E_2)$$
 * $$P(E_1,E_2|\mbox{not }H_0) = P(E_1|\mbox{not }H_0) \times P(E_2|\mbox{not }H_0)$$

Bayes' theorem applied iteratively implies
 * $$P(H_0|E_1, E_2) = \frac{P(E_1|H_0)\times P(E_2|H_0)\;P(H_0)}{P(E_1)\times P(E_2)}$$

Using likelihood ratios, we find that
 * $$P(H_0|E_1, E_2) = \frac{\Lambda_1 \Lambda_2 P(H_0)}{\Lambda_1 \Lambda_2 P(H_0) + P(\mbox{not } H_0)} $$.

This iteration of Bayesian inference could be extended with more independent pieces of evidence.
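The equivalence between one update with the product $$\Lambda_1 \Lambda_2$$ and two sequential updates can be sketched as follows (the prior and likelihood ratios are illustrative):

```python
def update(prior, lam):
    """Posterior from prior P(H0) and likelihood ratio lam:
    P(H0|E) = lam * P(H0) / (lam * P(H0) + P(not H0))."""
    return lam * prior / (lam * prior + (1 - prior))

# Illustrative likelihood ratios for two independent pieces of evidence
lam1, lam2 = 3.0, 5.0
prior = 0.2

step1 = update(prior, lam1)          # posterior after E1
step2 = update(step1, lam2)         # step1 reused as the prior for E2
joint = update(prior, lam1 * lam2)  # one update with the product ratio
```

Sequential updating and the single joint update give the same posterior, which is why the iteration extends naturally to any number of independent pieces of evidence.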

Bayesian inference is used to calculate probabilities for decision making under uncertainty. In addition to probabilities, a loss function should be specified in order to reflect the consequences of making an error: the probabilities represent the chance of being wrong, while the loss function represents the cost of being wrong.

From which bowl is the cookie?
To illustrate, suppose there are two bowls full of cookies. Bowl #1 has 10 chocolate chip and 30 plain cookies, while bowl #2 has 20 of each. Our friend Fred picks a bowl at random, and then picks a cookie at random. We may assume there is no reason to believe Fred treats one bowl differently from another, likewise for the cookies. The cookie turns out to be a plain one. How probable is it that Fred picked it out of bowl #1?

Intuitively, it seems clear that the answer should be more than a half, since there are more plain cookies in bowl #1. The precise answer is given by Bayes' theorem. Let H1 correspond to bowl #1, and H2 to bowl #2. It is given that the bowls are identical from Fred's point of view, thus P(H1) = P(H2), and the two must add up to 1, so both are equal to 0.5. The datum D is the observation of a plain cookie. From the contents of the bowls, we know that P(D | H1) = 30/40 = 0.75 and P(D | H2) = 20/40 = 0.5. Bayes' formula then yields

 * $$\begin{matrix} P(H_1 | D) &=& \frac{P(H_1) \cdot P(D | H_1)}{P(H_1) \cdot P(D | H_1) + P(H_2) \cdot P(D | H_2)} \\ \\ \ &=& \frac{0.5 \times 0.75}{0.5 \times 0.75 + 0.5 \times 0.5} \\ \\ \ &=& 0.6 \end{matrix}$$

Before observing the cookie, the probability that Fred chose bowl #1 is the prior probability, P(H1), which is 0.5. After observing the cookie, we revise the probability to P(H1|D), which is 0.6.

Observing the plain cookie thus updates the prior probability $$P(H_1)$$ = 0.5 to the posterior probability $$P(H_1|D)$$ = 0.6. This reflects our intuition that the cookie is more likely to have come from bowl #1, since bowl #1 has a higher proportion of plain cookies than bowl #2. Unlike in classical statistics, the conclusion is expressed as a probability.
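The cookie calculation can be checked directly; this sketch simply restates the numbers from the example:

```python
# Which bowl did the plain cookie come from?
p_h1 = p_h2 = 0.5    # Fred picks a bowl at random
p_d_h1 = 30 / 40     # probability of a plain cookie from bowl #1
p_d_h2 = 20 / 40     # probability of a plain cookie from bowl #2

posterior_h1 = (p_h1 * p_d_h1) / (p_h1 * p_d_h1 + p_h2 * p_d_h2)
# posterior_h1 == 0.6
```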

False positives in a medical test
False positives result when a test falsely or incorrectly reports a positive result. For example, a medical test for a disease may return a positive result indicating that the patient has the disease even when the patient does not. We can use Bayes' theorem to determine the probability that a positive result is in fact a false positive. As it turns out, if a disease is rare, then the majority of positive results may be false positives, even if the test is accurate.

Suppose that a test for a disease generates the following results:
 * if a tested patient has the disease, the test returns a positive result 99% of the time, or with probability 0.99
 * if a tested patient does not have the disease, the test returns a negative result 95% of the time, or with probability 0.95.

Suppose also that only 0.1% of the population has that disease, so that a randomly selected patient has a 0.001 prior probability of having the disease.

We can use Bayes' theorem to calculate the probability that positive test result is a false positive.

Let A represent the condition in which the patient has the disease, and B represent the evidence of a positive test result. Then, probability that the patient actually has the disease given the positive test result is
 * $$\begin{matrix} P(A | B) &=& \frac{P(B | A) P(A)}{P(B | A)P(A) + P(B |\mbox{not } A)P(\mbox{not }A)} \\ \\ P(A|B) &=& \frac{0.99\times 0.001}{0.99 \times 0.001 + 0.05\times 0.999}\, ,\\ ~\\ &\approx& 0.019\, .\end{matrix}$$

and hence the probability that a positive result is a false positive is about 1 − 0.019 = 0.981.

Despite the apparent high accuracy of the test, the incidence of the disease is so low that the vast majority of patients who test positive do not have the disease. Nonetheless, the fraction of patients who test positive who actually have the disease (0.019) is 19 times the fraction of the general population who have it (0.001). Thus the test is not useless, and re-testing may improve the reliability of the result.

In order to reduce the problem of false positives, a test should be very accurate in reporting a negative result when the patient does not have the disease. If the test reported a negative result in patients without the disease with probability 0.999, then
 * $$P(A|B) = \frac{0.99\times 0.001}{0.99 \times 0.001 + 0.001\times 0.999} \approx 0.5 $$,

so that the probability of a false positive is now 1 − 0.5 = 0.5.

On the other hand, false negatives result when a test falsely or incorrectly reports a negative result. For example, a medical test for a disease may return a negative result indicating that a patient does not have the disease even though the patient actually has it. We can also use Bayes' theorem to calculate the probability of a false negative. In the first example above,
 * $$\begin{matrix} P(A |\mbox{not } B) &=& \frac{P(\mbox{not }B | A) P(A)}{P(\mbox{not }B | A)P(A) + P(\mbox{not }B |\mbox{not } A)P(\mbox{not }A)} \\ \\ P(A|\mbox{not }B) &=& \frac{0.01\times 0.001}{0.01 \times 0.001 + 0.95\times 0.999}\, ,\\ ~\\ &\approx& 0.0000105\, .\end{matrix}$$

The probability that a negative result is a false negative is about 0.0000105 or 0.00105%. When a disease is rare, false negatives will not be a major problem with the test.

But if 60% of the population had the disease, then the probability of a false negative would be greater. With the above test, the probability of a false negative would be


 * $$\begin{matrix} P(A |\mbox{not } B) &=& \frac{P(\mbox{not }B | A) P(A)}{P(\mbox{not }B | A)P(A) + P(\mbox{not }B |\mbox{not } A)P(\mbox{not }A)} \\ \\ P(A|\mbox{not }B) &=& \frac{0.01\times 0.6}{0.01 \times 0.6 + 0.95\times 0.4}\, ,\\ ~\\ &\approx& 0.0155\, .\end{matrix}$$

The probability that a negative result is a false negative rises to 0.0155 or 1.55%.
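All four figures in this section follow from the same formula; a sketch using the stated sensitivity, specificity, and prevalences:

```python
def p_disease_given_result(sens, spec, prev, positive):
    """P(disease | test result) by Bayes' theorem.

    sens: P(positive | disease); spec: P(negative | no disease);
    prev: prior probability of disease."""
    if positive:
        num = sens * prev
        den = sens * prev + (1 - spec) * (1 - prev)
    else:
        num = (1 - sens) * prev
        den = (1 - sens) * prev + spec * (1 - prev)
    return num / den

# Rare disease: sensitivity 0.99, specificity 0.95, prevalence 0.001
p_pos = p_disease_given_result(0.99, 0.95, 0.001, positive=True)   # ~0.019
p_neg = p_disease_given_result(0.99, 0.95, 0.001, positive=False)  # ~0.0000105

# Common disease (prevalence 0.6): false negatives become more likely
p_neg_common = p_disease_given_result(0.99, 0.95, 0.6, positive=False)  # ~0.0155
```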

In the courtroom
Bayesian inference can be used in a court setting by an individual juror to coherently accumulate the evidence for and against the guilt of the defendant, and to see whether, in totality, it meets their personal threshold for 'beyond a reasonable doubt'.


 * Let G be the event that the defendant is guilty.
 * Let E be the event that the defendant's DNA matches DNA found at the crime scene.
 * Let p(E | G) be the probability of seeing event E assuming that the defendant is guilty. (Usually this would be taken to be unity.)
 * Let p(G | E) be the probability that the defendant is guilty, assuming the DNA match event E.
 * Let p(G) be the juror's personal estimate of the probability that the defendant is guilty, based on the evidence other than the DNA match. This could be based on his responses under questioning, or on previously presented evidence.

Bayesian inference tells us that if we can assign a probability p(G) to the defendant's guilt before we take the DNA evidence into account, then we can revise this probability to the conditional probability p(G | E), since


 * p(G | E) = p(G) p(E | G) / p(E)

Suppose, on the basis of other evidence, a juror decides that there is a 30% chance that the defendant is guilty. Suppose also that the forensic evidence is that the probability that a person chosen at random would have DNA that matched that at the crime scene was 1 in a million, or $$10^{-6}$$.

The event E can occur in two ways. Either the defendant is guilty (with prior probability 0.3) and thus his DNA is present with probability 1, or he is innocent (with prior probability 0.7) and he is unlucky enough to be one of the 1 in a million matching people.

Thus the juror could coherently revise his opinion to take into account the DNA evidence as follows:


 * $$p(G | E) = (0.3 \times 1.0) / (0.3 \times 1.0 + 0.7 \times 10^{-6}) \approx 0.99999767$$.

The benefit of adopting a Bayesian approach is that it gives the juror a formal mechanism for combining the evidence presented. The approach can be applied successively to all the pieces of evidence presented in court, with the posterior from one stage becoming the prior for the next.

The juror would still have to have a prior for the guilt probability before the first piece of evidence is considered. It has been suggested that this could be the guilt probability of a random person of the appropriate sex taken from the town where the crime occurred. Thus, for a crime committed by an adult male in a town containing 50,000 adult males the appropriate initial prior probability might be 1/50,000.

For the purpose of explaining Bayes' theorem to jurors, it will usually be appropriate to give it in the form of betting odds rather than probabilities, as these are more widely understood. In this form Bayes' theorem states that


 * Posterior odds = prior odds × Bayes factor

In the example above, the juror who has a prior probability of 0.3 for the defendant being guilty would now express that in the form of odds of 3:7 in favour of the defendant being guilty, the Bayes factor is one million, and the resulting posterior odds are 3 million to 7 or about 429,000 to one in favour of guilt.
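The odds-form calculation in this example can be sketched directly:

```python
prior_odds = 0.3 / 0.7        # 3:7 in favour of guilt
bayes_factor = 1.0 / 1e-6     # P(E | G) / P(E | not G) = one million

posterior_odds = prior_odds * bayes_factor  # about 429,000 to one
posterior_prob = posterior_odds / (1 + posterior_odds)  # ~0.9999977
```

Converting the posterior odds back to a probability recovers the figure computed earlier with Bayes' theorem in probability form.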

In the United Kingdom, Bayes' theorem was explained to the jury in the odds form by a statistician expert witness in the rape case of Regina versus Denis John Adams. A conviction was secured but the case went to Appeal, as no means of accumulating evidence had been provided for those jurors who did not want to use Bayes' theorem. The Court of Appeal upheld the conviction and gave their opinion that "To introduce Bayes' Theorem, or any similar method, into a criminal trial plunges the Jury into inappropriate and unnecessary realms of theory and complexity, deflecting them from their proper task." No further appeal was allowed and the issue of Bayesian assessment of forensic DNA data remains controversial.

Gardner-Medwin argues that the criterion on which a verdict in a criminal trial should be based is not the probability of guilt, but rather the probability of the evidence, given that the defendant is innocent. He argues that if the posterior probability of guilt is to be computed by Bayes' theorem, the prior probability of guilt must be known. This will depend on the incidence of the crime and this is an odd piece of evidence to consider in a criminal trial. Consider the following three propositions:

A: The known facts and testimony could have arisen if the defendant is guilty,

B: The known facts and testimony could have arisen if the defendant is innocent,

C: The defendant is guilty.

Gardner-Medwin argues that the jury should believe both A and not-B in order to convict. A and not-B implies the truth of C, but the reverse is not true. It is possible that B and C are both true, but in this case he argues that a jury should acquit, even though they know that they are probably acquitting a guilty person.

Other court cases in which probabilistic arguments played some role were the Howland will forgery trial and the Sally Clark case.

Search theory
In May 1968 the US nuclear submarine USS Scorpion (SSN-589) failed to arrive as expected at her home port of Norfolk, Virginia. The US Navy was convinced that the vessel had been lost off the Eastern seaboard but an extensive search failed to discover the wreck. The US Navy's deep water expert, John Craven USN, believed that it was elsewhere and he organised a search south west of the Azores based on a controversial approximate triangulation by hydrophones. He was allocated only a single ship, the USNS Mizar, and he took advice from a firm of consultant mathematicians in order to maximise his resources. A Bayesian search methodology was adopted. Experienced submarine commanders were interviewed to construct hypotheses about what could have caused the loss of the Scorpion.

The sea area was divided up into grid squares and a probability assigned to each square, under each of the hypotheses, to give a number of probability grids, one for each hypothesis. These were then added together to produce an overall probability grid. The probability attached to each square was then the probability that the wreck was in that square. A second grid was constructed with probabilities that represented the probability of successfully finding the wreck if that square were to be searched and the wreck were to be actually there. This was a known function of water depth. The result of combining this grid with the previous grid is a grid which gives the probability of finding the wreck in each grid square of the sea if it were to be searched.

This sea grid was systematically searched in a manner which started with the high probability regions first and worked down to the low probability regions last. Each time a grid square was searched and found to be empty its probability was reassessed using Bayes' theorem. This then forced the probabilities of all the other grid squares to be reassessed (upwards), also by Bayes' theorem. The use of this approach was a major computational challenge for the time but it was eventually successful and the Scorpion was found in October of that year. Suppose a grid square has a probability p of containing the wreck and that the probability of successfully detecting the wreck if it is there is q. If the square is searched and no wreck is found, then, by Bayes' theorem, the revised probability of the wreck being in the square is given by


 * $$ p' = \frac{p(1-q)}{(1-p)+p(1-q)}.$$
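A sketch of this update rule (the prior p and detection probability q below are illustrative, not figures from the actual search):

```python
def revise_square(p, q):
    """Revised probability that the wreck is in a square after that
    square is searched without success, by Bayes' theorem:
    p' = p(1-q) / ((1-p) + p(1-q))."""
    return p * (1 - q) / ((1 - p) + p * (1 - q))

# A square thought 30% likely, searched with an 80% chance of
# detecting the wreck if present: the empty search lowers the estimate.
p_revised = revise_square(0.3, 0.8)
```

Note that the revised probability is lower than the prior but not zero, since the search could have missed a wreck that was actually there; probability mass shifts toward the unsearched squares.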

Naive Bayes classifier
See naive Bayes classifier.

Posterior distribution of the binomial parameter
In this example we consider the computation of the posterior distribution for the binomial parameter. This is the same problem considered by Bayes in Proposition 9 of his essay.

We are given m observed successes and n observed failures in a binomial experiment. The experiment may be tossing a coin, drawing a ball from an urn, or asking someone their opinion, among many other possibilities. What we know about the parameter (let's call it a) is stated as the prior distribution, p(a).

For a given value of a, the probability of m successes in m+n trials is


 * $$ p(m,n|a) = \begin{pmatrix} n+m \\ m \end{pmatrix} a^m (1-a)^n. $$

Since m and n are fixed, and a is unknown, this is a likelihood function for a. From the continuous form of the law of total probability we have


 * $$ p(a|m,n) = \frac{p(m,n|a)\,p(a)}{\int_0^1 p(m,n|a)\,p(a)\,da} = \frac{\begin{pmatrix} n+m \\ m \end{pmatrix} a^m (1-a)^n\,p(a)} {\int_0^1 \begin{pmatrix} n+m \\ m \end{pmatrix} a^m (1-a)^n\,p(a)\,da}. $$

For some special choices of the prior distribution p(a), the integral can be solved and the posterior takes a convenient form. In particular, if p(a) is a beta distribution with parameters m0 and n0, then the posterior is also a beta distribution with parameters m+m0 and n+n0.

A conjugate prior is a prior distribution, such as the beta distribution in the above example, which has the property that the posterior is the same type of distribution.
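A sketch of the conjugate update (the prior parameters and observed counts below are illustrative):

```python
# Beta(m0, n0) prior on the binomial parameter a; observing m successes
# and n failures gives a Beta(m + m0, n + n0) posterior, with no
# integration required.
m0, n0 = 2, 2    # prior pseudo-counts (illustrative)
m, n = 7, 3      # observed successes and failures (illustrative)

post_m, post_n = m + m0, n + n0

# Posterior mean of a under Beta(post_m, post_n)
posterior_mean = post_m / (post_m + post_n)  # (m + m0) / (m + m0 + n + n0)
```

The prior parameters act as pseudo-counts of successes and failures seen before the experiment, which is why the update is a simple addition.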

What is "Bayesian" about Proposition 9 is that Bayes presented it as a probability for the parameter a. That is, not only can one compute probabilities for experimental outcomes, but also for the parameter which governs them, and the same algebra is used to make inferences of either kind. Interestingly, Bayes actually states his question in a way that might make the idea of assigning a probability distribution to a parameter palatable to a frequentist: he supposes that a billiard ball is thrown at random onto a billiard table, and that the probabilities p and q are the probabilities that subsequent billiard balls will fall above or below the first ball. By making the binomial parameter a depend on a random event, he cleverly sidesteps a philosophical quagmire that he most likely did not even know existed.

Computer applications
Bayesian inference has applications in artificial intelligence and expert systems. Bayesian inference techniques have been a fundamental part of computerized pattern recognition techniques since the late 1950s. There is also an ever-growing connection between Bayesian methods and simulation-based Monte Carlo techniques: complex models cannot be processed in closed form by a Bayesian analysis, but the graphical model structure inherent to statistical models, even very complex ones, allows for efficient simulation algorithms such as Gibbs sampling and other Metropolis-Hastings schemes. Recently, Bayesian inference has gained popularity in the phylogenetics community for these reasons; applications such as BEAST and MrBayes allow many demographic and evolutionary parameters to be estimated simultaneously.

As a particular application of statistical classification, Bayesian inference has been used in recent years to develop algorithms for identifying unsolicited bulk e-mail spam. Applications which make use of Bayesian inference for spam filtering include Bogofilter, SpamAssassin, InBoxer, and Mozilla. Spam classification is treated in more detail in the article on the naive Bayes classifier.

In some applications, fuzzy logic is an alternative to Bayesian inference. Fuzzy logic and Bayesian inference, however, are mathematically and semantically incompatible: you cannot, in general, interpret degrees of truth in fuzzy logic as probabilities, or vice versa.