Psychology Wiki
Latest revision as of 20:40, 19 August 2020


Experimenter's bias is the phenomenon in experimental science by which the outcome of an experiment tends to be biased towards a result expected by the human experimenter. The inability of a human being to remain completely objective is the ultimate source of this bias. It occurs more often in the sociological and medical sciences, for which reason double blind techniques are often employed to combat the bias. But experimenter's bias can also be found in some physical sciences, where the experimenter rounds off measurements. If the signal being measured is actually smaller than the rounding error and the data are over-averaged, a positive result can be found in the data where none exists (i.e. a more precise experimental apparatus would conclusively show no such signal).

In principle, if a measurement has a resolution of R, then if the experimenter averages N independent measurements, the average will have a resolution of R/√N (this is the central limit theorem of statistics). This is an important experimental technique used to reduce the impact of randomness on an experiment's outcome. But note that this requires that the measurements be statistically independent, and there are several reasons why that independence may fail. If it does, then the average may not actually be a better measurement but may merely reflect the correlations among the individual measurements and their non-independent nature.
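The R/√N scaling can be checked with a quick simulation (a minimal sketch, not from the article; the values R = 1, N = 100, and the true value are illustrative):

```python
# Sketch: averaging N independent measurements shrinks the spread
# from R to roughly R/sqrt(N), per the central limit theorem.
import numpy as np

rng = np.random.default_rng(0)
R = 1.0          # resolution (noise standard deviation) of one measurement
N = 100          # measurements per average
trials = 10_000  # number of repeated experiments

true_value = 5.0
measurements = true_value + R * rng.standard_normal((trials, N))
averages = measurements.mean(axis=1)

print(f"spread of one measurement: {measurements.std():.3f}")  # ~ R
print(f"spread of the average:     {averages.std():.3f}")      # ~ R/sqrt(N)
print(f"predicted R/sqrt(N):       {R / np.sqrt(N):.3f}")
```

Running this shows the spread of the average close to the predicted 0.1, a tenfold improvement over a single measurement.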

The most common cause of non-independence is systematic error (an error affecting all measurements equally, causing the different measurements to be highly correlated, so that the average is no better than any single measurement). Another cause is the inability of a human observer to round off measurements in a truly random manner. If an experiment is searching for a sidereal variation of some measurement, if the measurement is rounded off by a human who knows the sidereal time of the measurement, and if hundreds of measurements are averaged to extract a "signal" smaller than the apparatus' actual resolution, then this "signal" can come from the non-random round-off rather than from the apparatus itself. In such cases a single-blind experimental protocol is required: if the human observer does not know the sidereal time of the measurements, then even though the round-off is non-random it cannot introduce a spurious sidereal variation.

Note that modern electronic or computerized data acquisition techniques have greatly reduced the likelihood of such bias, but it can still be introduced by a poorly designed analysis technique. Experimenter's bias was not well recognized until the 1950s and 1960s, and then primarily in medical experiments and studies. Its effects on experiments in the physical sciences have not always been fully recognized.

In experimental science, experimenter's bias is bias towards a result expected by the human experimenter. David Sackett[1], in a useful review of biases in clinical studies, states that biases can occur in any one of seven stages of research:

  1. in reading-up on the field,
  2. in specifying and selecting the study sample,
  3. in executing the experimental manoeuvre (or exposure),
  4. in measuring exposures and outcomes,
  5. in analyzing the data,
  6. in interpreting the analysis, and
  7. in publishing the results.

The inability of a human being to remain completely objective is the ultimate source of this bias. It occurs more often in sociological and medical sciences, for which reason double blind techniques are often employed to combat the bias. But experimenter's bias can also be found in some physical sciences, where the experimenter rounds off measurements.

Classification of experimenter's biases

Modern electronic or computerized data acquisition techniques have greatly reduced the likelihood of such bias, but it can still be introduced by a poorly designed analysis technique. Experimenter's bias was not well recognized until the 1950s and 1960s, and then primarily in medical experiments and studies. Sackett (1979) catalogued 56 biases that can arise in sampling and measurement in clinical research, among the above-stated first six stages of research. These are as follows:

  1. In reading-up the field
    1. the biases of rhetoric
    2. the all's well literature bias
    3. one-sided reference bias
    4. positive results bias
    5. hot stuff bias
  2. In specifying and selecting the study sample
    1. popularity bias
    2. centripetal bias
    3. referral filter bias
    4. diagnostic access bias
    5. diagnostic suspicion bias
    6. unmasking (detection signal) bias
    7. mimicry bias
    8. previous opinion bias
    9. wrong sample size bias
    10. admission rate (Berkson) bias
    11. prevalence-incidence (Neyman) bias
    12. diagnostic vogue bias
    13. diagnostic purity bias
    14. procedure selection bias
    15. missing clinical data bias
    16. non-contemporaneous control bias
    17. starting time bias
    18. unacceptable disease bias
    19. migrator bias
    20. membership bias
    21. non-respondent bias
    22. volunteer bias
  3. In executing the experimental manoeuvre (or exposure)
    1. contamination bias
    2. withdrawal bias
    3. compliance bias
    4. therapeutic personality bias
    5. bogus control bias
  4. In measuring exposures and outcomes
    1. insensitive measure bias
    2. underlying cause bias (rumination bias)
    3. end-digit preference bias
    4. apprehension bias
    5. unacceptability bias
    6. obsequiousness bias
    7. expectation bias
    8. substitution game
    9. family information bias
    10. exposure suspicion bias
    11. recall bias
    12. attention bias
    13. instrument bias
  5. In analyzing the data
    1. post-hoc significance bias
    2. data dredging bias (looking for the pony)
    3. scale degradation bias
    4. tidying-up bias
    5. repeated peeks bias
  6. In interpreting the analysis
    1. mistaken identity bias
    2. cognitive dissonance bias
    3. magnitude bias
    4. significance bias
    5. correlation bias
    6. under-exhaustion bias

The effects of bias on experiments in the physical sciences have not always been fully recognized.

Statistical background

In principle, if a measurement has a resolution of R, then if the experimenter averages N independent measurements, the average will have a resolution of R/√N (this is the central limit theorem of statistics). This is an important experimental technique used to reduce the impact of randomness on an experiment's outcome. This requires that the measurements be statistically independent; there are several reasons why they may not be. If independence is not satisfied, then the average may not actually be a better statistic but may merely reflect the correlations among the individual measurements and their non-independent nature.

The most common cause of non-independence is systematic error (an error affecting all measurements equally, causing the different measurements to be highly correlated, so that the average is no better than any single measurement). Experimenter bias is another potential cause of non-independence.
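A short simulation contrasts the two cases (an illustrative sketch, not from the article; the 10% independent-noise level in the systematic case is an arbitrary choice):

```python
# Sketch: a shared systematic offset correlates all N measurements within an
# experiment, so the spread of their average stays near R instead of R/sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
R = 1.0          # single-measurement resolution
N = 100          # measurements averaged per experiment
trials = 10_000  # number of repeated experiments

# Independent errors: averaging helps.
independent = R * rng.standard_normal((trials, N))
# Systematic error: one offset per experiment, shared by every measurement,
# plus a small independent component.
offset = R * rng.standard_normal((trials, 1))
systematic = offset + 0.1 * R * rng.standard_normal((trials, N))

print(f"independent errors, spread of average: {independent.mean(axis=1).std():.3f}")  # ~ R/sqrt(N)
print(f"systematic error,   spread of average: {systematic.mean(axis=1).std():.3f}")   # ~ R
```

In the systematic case the average inherits the full spread of the shared offset: averaging a hundred correlated measurements buys essentially nothing.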

Biological and medical sciences

The complexity of living systems and the ethical impossibility of performing fully controlled experiments on certain species of animals and on humans provide a rich, and difficult to control, source of experimental bias. Scientific knowledge about the phenomenon under study, together with the systematic elimination of probable causes of bias by detecting confounding factors, is the only way to isolate true cause-effect relationships. It is also in epidemiology that experimenter bias has been better studied than in other sciences.

Physical sciences

If the signal being measured is actually smaller than the rounding error and the data are over-averaged, a positive result can be found in the data where none exists (i.e. a more precise experimental apparatus would conclusively show no such signal). If an experiment is searching for a sidereal variation of some measurement, if the measurement is rounded off by a human who knows the sidereal time of the measurement, and if hundreds of measurements are averaged to extract a "signal" smaller than the apparatus' actual resolution, then this "signal" can come from the non-random round-off rather than from the apparatus itself. In such cases a single-blind experimental protocol is required: if the human observer does not know the sidereal time of the measurements, then even though the round-off is non-random it cannot introduce a spurious sidereal variation.
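The sidereal example can be made concrete with a simulation (a hedged sketch with made-up numbers: a constant true signal, a rounding step of 1.0, and an observer who rounds up whenever the known sidereal phase suggests a peak):

```python
# Sketch: phase-dependent round-off injects a spurious "sidereal" signal;
# a blinded round-off (made without phase knowledge) does not.
import numpy as np

rng = np.random.default_rng(2)
step = 1.0                                  # rounding resolution of the apparatus
n = 50_000
phase = rng.uniform(0, 2 * np.pi, n)        # sidereal phase of each measurement
raw = 10.5 + 0.3 * rng.standard_normal(n)   # true signal is constant: no real variation

# Biased observer: knowing the phase, rounds up where a peak is expected.
expects_peak = np.cos(phase) > 0
biased = np.where(expects_peak, np.ceil(raw / step), np.floor(raw / step)) * step

# Blinded observer: ordinary nearest rounding, independent of phase.
blinded = np.round(raw / step) * step

def cos_amplitude(data):
    # Estimate the cos(phase) component of the recorded data.
    return 2 * np.mean((data - data.mean()) * np.cos(phase))

print(f"biased  spurious amplitude: {cos_amplitude(biased):+.3f}")   # clearly nonzero
print(f"blinded spurious amplitude: {cos_amplitude(blinded):+.3f}")  # ~ 0
```

Even though the true signal never varies, the biased record shows a cosine component amounting to a sizeable fraction of the rounding step, while the blinded record shows none.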

Social sciences

The experimenter may introduce cognitive bias into a study in several ways. First, in what is called the observer-expectancy effect, the experimenter may subtly communicate their expectations for the outcome of the study to the participants, causing them to alter their behavior to conform to those expectations. After the data are collected, bias may be introduced during data interpretation and analysis.

Forensic sciences

Observer effects are rooted in the universal human tendency to interpret data in a manner consistent with one's expectations[2]. This tendency is particularly likely to distort the results of a scientific test when the underlying data are ambiguous and the scientist is exposed to domain-irrelevant information that engages emotions or desires[3]. Despite impressions to the contrary, forensic DNA analysts often must resolve ambiguities, particularly when interpreting difficult evidence samples such as those that contain mixtures of DNA from two or more individuals, degraded or inhibited DNA, or limited quantities of DNA template. The full potential of forensic DNA testing can only be realized if observer effects are minimized.[4]


References

  1. Sackett, D. L. Bias in analytic research. Journal of Chronic Diseases, 1979; 32: 51-63.
  2. Rosenthal, R. Experimenter Effects in Behavioral Research. New York: Appleton-Century-Crofts, 1966.
  3. Risinger, D. M., Saks, M. J., Thompson, W. C., Rosenthal, R. California Law Review, January 2002.
  4. Krane, D., Ford, S., Gilder, J., Inman, K., Jamieson, A., Koppl, R., Kornfield, I., Risinger, D., Rudin, N., Taylor, M., Thompson, W. C. (2008). Sequential unmasking: A means of minimizing observer effects in forensic DNA interpretation. Journal of Forensic Sciences, 53(4): 1006–1007.


This page uses Creative Commons Licensed content from Wikipedia (view authors).