Simpson's paradox (or the Yule-Simpson effect) is a statistical paradox described by E. H. Simpson in 1951 and earlier by G. U. Yule in 1903, in which a relationship that holds within each of several groups is reversed when the groups are combined. This seemingly impossible result is encountered surprisingly often in social science and medical statistics, and occurs when a weighting variable that is irrelevant to each individual group assessment must be used in the combined assessment.
Explanation by example
To illustrate the paradox, suppose two people, Lisa and Bart, are let loose on the Psychology Wiki. In the first week, Lisa improves 60 percent of the articles she edits while Bart improves 90 percent of the articles he edits. In the second week, Lisa improves just 10 percent of the articles she edits, while Bart improves 30 percent.
In both weeks, Bart improved a much higher percentage of articles than Lisa, yet when the two weeks are combined, Lisa has improved a much higher percentage than Bart!
This strange-looking result comes about because the combined total was calculated knowing the actual number of articles each edited, which had changed from week to week. This information did not become useful until the combination was made. In the first week, Lisa edits 100 articles, improving 60 of them, while Bart edits just 10 articles, improving all but one. In the second week, Lisa edits only 10 articles, improving one, while Bart edits 100 articles, improving 30. When two weeks' worth of work is combined, both edited the same number of articles, yet Lisa improved 55% of them (61 in total) while Bart improved only 35% of them (39 in total).
|      | Week 1 | Week 2 | Total  |
|------|--------|--------|--------|
| Lisa | 60/100 | 1/10   | 61/110 |
| Bart | 9/10   | 30/100 | 39/110 |
It appears that the two sets of data separately support a certain hypothesis, but, considered together, support the opposite hypothesis.
To recap, introducing some notation (S denotes a success rate) that will be useful later:

- In the first week
  - S(Lisa) = 60/100 = 60%: Lisa improved 60% of the many articles she edited.
  - S(Bart) = 9/10 = 90%: Bart had a 90% success rate during that time.
  - Success is associated with Bart.
- In the second week
  - S(Lisa) = 1/10 = 10%: Lisa managed 10% in her busy life.
  - S(Bart) = 30/100 = 30%: Bart achieved a 30% success rate.
  - Success is associated with Bart.

On both occasions Bart's edits were more successful than Lisa's. But if we combine the two sets, we see that Lisa and Bart both edited 110 articles, and:

- S(Lisa) = 61/110 ≈ 55%: Lisa improved 61 articles.
- S(Bart) = 39/110 ≈ 35%: Bart improved only 39.
- Success is now associated with Lisa.

Bart is better for each set but worse overall!
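The arithmetic above is easy to verify directly. A minimal sketch in Python, using the (improved, edited) counts from the tables:

```python
# Article counts from the example above: (improved, edited) per week.
lisa = [(60, 100), (1, 10)]
bart = [(9, 10), (30, 100)]

def rate(improved, edited):
    return improved / edited

# Per week, Bart's success rate is higher both times.
for week, (l, b) in enumerate(zip(lisa, bart), start=1):
    print(f"Week {week}: Lisa {rate(*l):.0%}  Bart {rate(*b):.0%}")

# Combined over both weeks, the comparison reverses.
lisa_total = rate(sum(i for i, _ in lisa), sum(e for _, e in lisa))
bart_total = rate(sum(i for i, _ in bart), sum(e for _, e in bart))
print(f"Total:  Lisa {lisa_total:.1%}  Bart {bart_total:.1%}")
```

Running this prints Bart ahead in each week but Lisa ahead (61/110 vs 39/110) in the combined total.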
The arithmetical basis of the paradox is uncontroversial. If a/b > A/B and c/d > C/D, we feel that (a + c)/(b + d) must be greater than (A + C)/(B + D). However, if different weights are used to form the overall score for each person, this expectation may be disappointed. Here the first test is weighted 100/110 for Lisa and 10/110 for Bart, while the weights are reversed on the second test.
By more extreme reweighting, Lisa's overall score can be pushed up towards 60% and Bart's down towards 30%.
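Each overall score is just a weighted average of the weekly rates, with weights proportional to the number of articles edited. A short sketch of that view, including a more extreme reweighting (the 1000/10 split is an illustrative choice, not from the example):

```python
# Overall rate as a weighted average of per-week rates.
def overall(rates, weights):
    return sum(r * w for r, w in zip(rates, weights)) / sum(weights)

# Lisa's weeks are weighted 100/110 and 10/110; Bart's weights are reversed.
lisa = overall([0.60, 0.10], [100, 10])   # 61/110, about 55%
bart = overall([0.90, 0.30], [10, 100])   # 39/110, about 35%

# More extreme reweighting pushes the totals toward 60% and 30%.
lisa_extreme = overall([0.60, 0.10], [1000, 10])  # about 59.5%
bart_extreme = overall([0.90, 0.30], [10, 1000])  # about 30.6%
```

The limit is clear from the formula: as one weight dominates, the overall score approaches that week's rate.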
Who is more accomplished? Lisa and Bart's mutual friends think Lisa is better—her overall success rate is higher. But it is possible to retell the story so that it appears obvious that Bart is more diligent. Suppose the case were as follows:
In the first week, Lisa and Bart muddle around fixing spelling errors or accidentally Americanising the pages. In the second week, both try their hands as wordsmiths, adding clarity in some cases but producing merely lateral changes in most. The numerical data is as before: Bart is better at either task, but his overall success rate is worse because almost all of his changes (100 out of 110) required a good deal of thought, while almost all of Lisa's (100 out of 110) were trivial. The association of success with Lisa in that case would be misleading, even spurious.
Real-world examples
The batting average paradox
The most common example of the paradox in America involves batting averages in baseball. It is possible, and on rare occasions it has actually happened, for one player to hit for a higher batting average than another player during the first half of the year, and to do so again during the second half, but to have a lower batting average for the entire year, as shown in this example:
|          | First Half    | Second Half   | Total season  |
|----------|---------------|---------------|---------------|
| Player A | 4/10 (.400)   | 25/100 (.250) | 29/110 (.264) |
| Player B | 35/100 (.350) | 2/10 (.200)   | 37/110 (.336) |
A kidney stone treatment example
This example compares the success rates of two treatments for kidney stones. The first table shows the overall success rates and numbers of cases for the two treatments.
| Treatment A   | Treatment B   |
|---------------|---------------|
| 78% (273/350) | 83% (289/350) |
This seems to show treatment B is more effective. If we include data about kidney stone size, however, the same set of treatments reveals a different answer.
|              | Treatment A   | Treatment B   |
|--------------|---------------|---------------|
| Small stones | 93% (81/87)   | 87% (234/270) |
| Large stones | 73% (192/263) | 69% (55/80)   |
The information about stone size has reversed our conclusion about the effectiveness of each treatment. Now treatment A is seen to be more effective in both cases. In this example the lurking variable (or confounding variable) of stone size was not known to be important until its effects were included.
Which treatment is considered better is determined by an inequality between two ratios (successes/total). The reversal of the inequality between these ratios, which creates Simpson's paradox, happens because two effects occur together:
- the lurking variable has a large effect on the ratios
- the sizes of the groups which are combined when the lurking variable is ignored are very different
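Both conditions are visible in the kidney stone numbers: treatment A was given mostly to the hard (large stone) cases and treatment B mostly to the easy ones. A small check of the reversal, using the figures from the tables above:

```python
# Kidney stone data from the tables above: (successes, total cases).
data = {
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def rate(successes, total):
    return successes / total

# Within each stone size, treatment A has the higher success rate.
for size in ("small", "large"):
    assert rate(*data[("A", size)]) > rate(*data[("B", size)])

# Aggregating over the lurking variable (stone size) reverses the ranking,
# because A's caseload is dominated by the difficult large-stone group.
def combined(treatment):
    s = sum(data[(treatment, size)][0] for size in ("small", "large"))
    t = sum(data[(treatment, size)][1] for size in ("small", "large"))
    return s / t

assert combined("A") < combined("B")   # 273/350 < 289/350
```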
The Berkeley sex bias case
One of the best-known real-life examples of Simpson's paradox occurred when the University of California, Berkeley was sued for bias against women applying to graduate school. The admission figures showed that men applying were more likely than women to be admitted, and the difference was so large that it was unlikely to be due to chance. However, when the individual departments were examined, it was found that no department was significantly biased against women; in fact, most departments had a small (and not very significant) bias against men.
The explanation turned out to be that women tended to apply to departments with low rates of admission, while men tended to apply to departments with high rates of admission.
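The mechanism can be sketched with two hypothetical departments; the numbers below are illustrative only, not the actual Berkeley figures:

```python
# Illustrative (made-up) admissions data: dept -> {group: (admitted, applied)}.
depts = {
    "high_admission": {"men": (80, 100), "women": (18, 20)},
    "low_admission":  {"men": (4, 20),   "women": (25, 100)},
}

def rate(admitted, applied):
    return admitted / applied

# In each department, women are admitted at a higher rate.
for d in depts.values():
    assert rate(*d["women"]) > rate(*d["men"])

# Yet overall, men are admitted more often, because women applied
# mainly to the department with the low admission rate.
def overall(group):
    adm = sum(d[group][0] for d in depts.values())
    app = sum(d[group][1] for d in depts.values())
    return adm / app

assert overall("men") > overall("women")   # 84/120 vs 43/120
```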
"Lurking variable"
Simpson's paradox shows us an extreme example of the importance of including data about possible confounding variables when attempting to calculate correlations.
The "lurking variable" principle also works with the Electoral College, which determines the winner of U.S. presidential elections. For example, if Candidate A wins 35 of the states and Candidate B wins 15 of the states, the color-coded map will appear to be a landslide for Candidate A; but if Candidate A's states are less populated and Candidate B's states are more populated, it is still possible for Candidate B to win. The lurking variable is the differing number of electoral votes each state carries.
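A toy sketch of this, with hypothetical states and electoral-vote counts (not real election data):

```python
from collections import Counter

# Hypothetical map: candidate A carries 35 small states (3 electoral votes
# each), candidate B carries 15 large states (27 electoral votes each).
states = [("A", 3)] * 35 + [("B", 27)] * 15

states_won = Counter(winner for winner, _ in states)
electoral = Counter()
for winner, ev in states:
    electoral[winner] += ev

assert states_won["A"] > states_won["B"]   # A's "landslide" on the map
assert electoral["B"] > electoral["A"]     # B wins the election, 405 to 105
```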
See also
- Low birth weight paradox, an example of Simpson's paradox in action.
References
- Simpson, E. H. (1951). The Interpretation of Interaction in Contingency Tables. Journal of the Royal Statistical Society, Series B 13: 238-241.
- Yule, G. U. (1903). Notes on the Theory of Association of Attributes in Statistics. Biometrika 2: 121-134.
- Bickel, P. J., Hammel, E. A., and O'Connell, J. W. (1975). Sex Bias in Graduate Admissions: Data From Berkeley. Science 187: 398-404.
External links
For a brief history of the origins of the paradox, see the entries on Simpson's Paradox and Spurious Correlation in:
- Simpson's Paradox: An Anatomy by Judea Pearl
- Mediant Fractions at cut-the-knot
- Simpson's Paradox at cut-the-knot
- Stanford Encyclopedia of Philosophy entry
|This page uses Creative Commons Licensed content from Wikipedia (view authors).|