Type II errors

In statistics, a false negative, also called a Type II error or miss, occurs when a test incorrectly reports that a condition or signal is absent when it is actually present.

Detection algorithms of all kinds can produce misses. For example, if a radar fails to detect an enemy aircraft that is present within the scanned area, that is a false negative.

False negative rate
The false negative rate is the proportion of positive instances that were erroneously reported as negative. It is equal to 1 minus the sensitivity of the test.


 * $${\rm false\ negative\ rate} = \frac{\rm number\ of\ false\ negatives}{\rm number\ of\ positives}$$

In statistical hypothesis testing, this fraction is given the symbol β, and $$1 - \beta$$ is defined as the power of the test. Increasing the sensitivity of the test lowers the probability of a Type II error, but raises the probability of a Type I error (a false positive that rejects the null hypothesis when it is true).
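The definitions above can be illustrated with a short calculation. The counts below are made-up numbers chosen only to show how the false negative rate and the power relate:

```python
# Hypothetical counts from a diagnostic test (illustrative numbers only).
false_negatives = 8    # positive instances the test missed
true_positives = 92    # positive instances the test correctly detected
positives = false_negatives + true_positives

# False negative rate (beta): fraction of positives reported as negative.
false_negative_rate = false_negatives / positives

# Power (1 - beta), equal to the sensitivity of the test.
power = 1 - false_negative_rate

print(false_negative_rate)  # 0.08
print(power)                # 0.92
```

With 8 misses out of 100 positive instances, the test has β = 0.08 and power 0.92.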

When developing detection algorithms or tests, a balance must be chosen between the risks of false negatives and false positives. Usually there is a threshold for how close a match to a given sample must be before the algorithm reports a match. The higher this threshold, the more false negatives and the fewer false positives.
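This trade-off can be sketched with a toy matching algorithm. The match scores below are invented for illustration; the point is only that raising the threshold trades false positives for false negatives:

```python
# Made-up match scores for samples that truly match (positives)
# and samples that truly do not (negatives).
positive_scores = [0.9, 0.8, 0.75, 0.6, 0.4]
negative_scores = [0.7, 0.5, 0.3, 0.2, 0.1]

def error_counts(threshold):
    """Report a match when score >= threshold; count both error types."""
    false_negatives = sum(s < threshold for s in positive_scores)
    false_positives = sum(s >= threshold for s in negative_scores)
    return false_negatives, false_positives

for t in (0.3, 0.5, 0.7):
    fn, fp = error_counts(t)
    print(f"threshold={t}: false negatives={fn}, false positives={fp}")
```

As the threshold rises from 0.3 to 0.7, false negatives climb from 0 to 2 while false positives fall from 3 to 1, matching the trade-off described above.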

Medical testing
False negatives are a significant issue in medical testing. In some cases, two or more (often many) tests could be used, one of which is simpler and less expensive, but less accurate, than the others. For example, the simplest tests for HIV and hepatitis in blood have a significant rate of false positives. These tests are used to screen out possible blood donors, but more expensive and more precise tests are used in medical practice to determine whether a person is actually infected with these diseases.

False negatives in medical testing falsely reassure both patients and physicians that disease is absent when it is actually present. This in turn may lead patients to forgo further investigation, and to miss the advice and treatment that would better protect their interests. A common example is relying on cardiac stress tests to detect coronary atherosclerosis, even though cardiac stress tests are known to detect only limitations of coronary artery blood flow due to advanced stenosis.

False negatives produce serious and counterintuitive problems, especially when the condition being searched for is common. If a test with a false negative rate of only 10% is used to test a population with a true occurrence rate of 70%, many of the "negatives" reported by the test will in fact be false. (See Bayes' theorem below.)

Biometrics
False negatives are also a problem in biometric scans, such as retina scans or facial recognition, when the scanner fails to match a person to their own record stored in the system.

Bayes' theorem
The probability that an observed negative result is a false negative versus a true negative may be calculated (and the problem of false negatives demonstrated) using Bayes' theorem. The key insight of Bayes' theorem is that the probability that a given negative result is truly negative depends not only on the accuracy of the test, but also on the actual rate of the condition within the population being tested. Often, the latter is the dominant factor.
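The calculation can be worked through with the figures mentioned earlier: a false negative rate of 10% and a true occurrence rate of 70%. The specificity of 90% used here is an assumption made for illustration; it is not given in the text:

```python
# Bayes' theorem applied to a negative test result.
prevalence = 0.70           # P(condition present)
false_negative_rate = 0.10  # P(negative result | condition present)
specificity = 0.90          # P(negative result | condition absent) -- assumed

# Total probability of observing a negative result.
p_negative = (prevalence * false_negative_rate
              + (1 - prevalence) * specificity)

# Bayes' theorem: probability a negative result is actually a miss.
p_false_negative = prevalence * false_negative_rate / p_negative

print(round(p_false_negative, 3))  # 0.206
```

Even though the test misses only 10% of true cases, roughly one in five negative results is a false negative, because the condition is so common in the tested population.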

False negatives and anti-spam
The term false negative is also used when spam email is not detected as such but instead classified as legitimate email. A low number of false negatives is an indicator of the effectiveness of a spam filtering method.