Risk perception

Risk perception is the subjective judgment that people make about the characteristics and severity of a risk. The phrase is most commonly used in reference to natural hazards and threats to the environment or health, such as nuclear power. Several theories have been proposed to explain why different people make different estimates of the dangerousness of risks. Two major families of theory have been developed by social scientists: the Psychometric Paradigm and Cultural Theory.

Early theories
The study of risk perception arose out of the observation that experts and lay people often disagreed about how risky various technologies and natural hazards were. For example, most experts concluded that nuclear power is relatively safe, but a substantial portion of the public sees it as dangerous. On the other hand, experts claim that radon is a major threat, but homeowners are usually unconcerned about it. The obvious explanation seemed to be that the experts, having considered the evidence carefully and objectively, had a more accurate picture of the risks than did irrational lay people. Many experts continue to believe this theory. However, social science research on risk perception has been largely dedicated to challenging it and proposing alternate explanations.

A key early paper was written in 1969 by Chauncey Starr. Starr used a revealed preference approach to find out what risks are considered acceptable by society. He assumed that society had reached equilibrium in its judgment of risks, so whatever risk levels actually existed in society were acceptable. His major finding was that people will accept risks roughly 1,000 times greater if they are voluntary (e.g. driving a car) than if they are involuntary (e.g. a nuclear disaster).

Heuristics and biases
The earliest psychometric research was done by psychologists Daniel Kahneman and Amos Tversky, who performed a series of gambling experiments to see how people evaluated probabilities. Their major finding was that people use a number of heuristics to evaluate information. These heuristics are usually useful shortcuts for thinking, but they may lead to inaccurate judgments in some situations -- in which case they become cognitive biases.
 * The Availability heuristic: events that can be more easily brought to mind or imagined are judged to be more likely than events that cannot easily be imagined.
 * The Anchoring heuristic: people will often start with one piece of known information and then adjust it to create an estimate of an unknown risk -- but the adjustment will usually not be big enough.
 * Asymmetry between gains and losses: People are risk averse with respect to gains, preferring a sure thing over a gamble with a higher expected utility but which presents the possibility of getting nothing. On the other hand, people will be risk-seeking about losses, preferring to hope for the chance of losing nothing rather than taking a sure, but smaller, loss (e.g. insurance).
 * Threshold effects: People prefer to move from uncertainty to certainty over making a similar gain in certainty that does not lead to full certainty. For example, most people would choose a vaccine that reduces the incidence of disease A from 10% to 0% over one that reduces the incidence of disease B from 20% to 10%.
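The gain/loss asymmetry above can be made concrete with a small numeric sketch. All dollar amounts and probabilities here are hypothetical, chosen only so the expected values are easy to compare:

```python
# Hypothetical payoffs illustrating the gain/loss asymmetry: people tend
# to take the sure gain even when the gamble has a higher expected value,
# and to take the gamble even when the sure loss is smaller in expectation.

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

sure_gain = 500                          # take $500 for certain
gamble_gain = [(0.5, 1100), (0.5, 0)]    # 50% chance of $1100, else nothing

sure_loss = -500                         # pay $500 for certain (like insurance)
gamble_loss = [(0.5, -1100), (0.5, 0)]   # 50% chance of losing $1100

print(expected_value(gamble_gain))   # 550.0 -- higher than the sure $500,
                                     # yet most people take the sure gain
print(expected_value(gamble_loss))   # -550.0 -- worse than the sure -$500,
                                     # yet most people take the gamble
```

The point of the sketch is that preferences flip between the two framings even though, in both, the gamble differs from the sure option by the same expected $50.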

Another key finding in early psychometric research was that the experts are not necessarily any better at estimating probabilities than lay people. Experts were often overconfident in the exactness of their estimates, and put too much stock in small samples of data.

Cognitive theories
In the 1980s, a group of researchers from Decision Research in Oregon, led by Paul Slovic, Sarah Lichtenstein, and Baruch Fischhoff, proposed a survey-based method for studying risk perception that remains influential today.

These researchers first challenged Starr's article by examining expressed preference -- how much risk people say they are willing to accept. They found that, contrary to Starr's basic assumption, people generally saw most risks in society as being unacceptably high. They also found that the gap between voluntary and involuntary risks was not nearly as great as Starr claimed.

Slovic and others then tested the theory that the experts have a more accurate and objective view of risks. They asked groups of experts and lay people to rank a list of risks according to their riskiness. Then they asked them to estimate the annual number of fatalities from each risk. They found that both experts and lay people had a basically accurate view of which risks kill more people. The experts' rankings of risk correlated closely with their estimates of fatalities, indicating that to experts, "riskier" means "kills more people." On the other hand, lay people's judgments of riskiness did not correlate with their estimates of fatalities, suggesting that there are other aspects of risk that laypeople take into account.

Slovic and others then asked groups of laypeople to rate a series of risks on a number of dimensions, such as new-old, known to science-not known to science, and catastrophic-chronic. By using factor analysis, they found that two main factors could explain why lay people saw some risks as more dangerous than others. These factors are referred to as "dread" and "unknown." A dread risk elicits a visceral feeling of dread and is uncontrollable, catastrophic, fatal, inequitable, and involuntary. An unknown risk is delayed, new, and unknown to science. Nuclear power is both dreaded and poorly understood, which explains why the public fears it so much. Meanwhile, the still-unresolved problem of nuclear waste disposal is well known, but often discounted by experts.
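The two-factor structure can be sketched as a simple mapping of hazards into the dread/unknown space. The hazard names and all scores below are hypothetical, invented only to illustrate the idea that hazards scoring high on both factors are the ones the public fears most:

```python
# Hypothetical positions of a few hazards in the dread/unknown factor
# space (scale 0-10 on each factor; all values invented for illustration).

hazards = {
    # name: (dread score, unknown score)
    "nuclear power":  (9.0, 8.0),   # high dread, high unknown
    "handguns":       (8.0, 2.0),   # high dread, well understood
    "food additives": (3.0, 7.0),   # low dread, poorly understood
    "bicycles":       (2.0, 1.0),   # low on both factors
}

def most_feared(scores):
    """Return the hazard highest on the combined dread+unknown position,
    the region the psychometric studies associate with greatest public fear."""
    return max(scores, key=lambda name: sum(scores[name]))

print(most_feared(hazards))
```

With these made-up scores, `most_feared` returns "nuclear power", mirroring the finding that it sits high on both factors.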

The same two factors of dread and unknown have emerged in a number of studies outside the United States as well. (Though in some Asian studies, people consider unknown risks less risky than known ones.) It was also found that in the United States, women saw all risks as higher than men did, and minorities saw risks as higher than whites did. Later studies showed that minority men and women, white women, and even most white men saw risks similarly, but that a small group of conservative, highly educated, authoritarian white men saw all risks as very low.

Affective theories
Research within the psychometric paradigm has more recently turned to focus on the roles of affect, emotion, and stigma in influencing risk perception. Melissa Finucane and Paul Slovic have been among the key researchers here.

The basic premise of the turn toward affective theories is that affect -- a positive or negative feeling toward an object -- causes evaluations of an object's riskiness, rather than the other way around; this is the so-called affect heuristic. A key finding in support of this theory is the strong negative correlation between people's judgments of the risk and benefit of an activity. Activities judged to have high risk are nearly always seen as having low benefit, and vice versa. This is explained by positing that a general positive or negative disposition toward a potentially hazardous activity is rationalized by assigning that activity positive or negative scores on various dimensions such as riskiness and benefit.
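The inverse risk-benefit relationship can be illustrated with a small correlation sketch. The activity judgments below are hypothetical numbers invented to mimic the pattern reported in affect-heuristic studies, not real survey data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean judgments (scale 1-7) for five unnamed activities:
# high perceived risk pairs with low perceived benefit, and vice versa.
risk    = [6.5, 5.8, 2.1, 1.8, 4.9]
benefit = [2.0, 2.5, 6.2, 6.8, 3.1]

print(round(pearson(risk, benefit), 2))   # -0.99 for these made-up numbers
```

A strongly negative coefficient like this is the signature the affect-heuristic account predicts: one underlying feeling drives both judgments in opposite directions.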

Stigma refers to a metaphorical mark of disgrace attached to certain risky activities. Stigmatized activities are seen as morally objectionable, completely unacceptable, and polluting to anyone or anything associated with them. Affective judgments are believed to be critical to explaining why certain risks are stigmatized.

Cultural theory
Cultural theory refers to theories of risk perception that focus on culture, rather than individual psychology, as the explanation for differences in risk judgments.

Douglas and Wildavsky's Cultural Theory of risk
The most influential cultural theory is called simply "The Cultural Theory of risk" (with capital C and T). Cultural Theory is based on the work of anthropologist Mary Douglas and political scientist Aaron Wildavsky.

Cultural Theory makes two basic claims. First, it argues that views of risk are produced by, and support, social structures. Fear of certain types of risks serves to uphold the social structure.

Second, Cultural Theory proposes that there are four basic "ways of life," each corresponding to a particular social structure and a particular outlook on risk. The four ways of life, also called "cultural biases," are defined by their levels of "grid" and "group." "Grid" refers to the degree to which people are constrained in their social role. "Group" refers to a feeling of belonging or solidarity.
 * High group and high grid produces a Hierarchist way of life, characterized by a reliance on authority and regulation. Hierarchists fear crime, delinquency, and other risks that would disrupt the careful ordering of society.
 * Low grid and high group produces an Egalitarian way of life, characterized by voluntary associations in which all members are equal. Egalitarians focus on low-probability but catastrophic risks such as nuclear power, because fear of disaster keeps members in line.
 * Low grid and low group produces an Individualist way of life, characterized by competition in the market. Individualists fear anything that would impair the functioning of the market, such as war.
 * High grid and low group produces a Fatalist way of life, characterized by a feeling of lack of control over the world. Fatalists do not bother fearing risks, as they don't think they can prevent them -- rather, they hope to simply be able to roll with the punches.
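The grid/group typology above is a 2x2 classification, which can be encoded compactly. The labels and feared risks come from the text; the boolean encoding itself is just an illustrative convention:

```python
# The four "ways of life" of Cultural Theory, keyed by (high_grid, high_group).
WAYS_OF_LIFE = {
    (True,  True):  ("Hierarchist",   "disruption of the social order"),
    (False, True):  ("Egalitarian",   "low-probability catastrophes"),
    (False, False): ("Individualist", "impairment of the market"),
    (True,  False): ("Fatalist",      "none in particular"),
}

def way_of_life(high_grid, high_group):
    """Map a grid/group position to its way of life and characteristic fear."""
    return WAYS_OF_LIFE[(high_grid, high_group)]

print(way_of_life(True, True)[0])    # Hierarchist
print(way_of_life(False, True)[0])   # Egalitarian
```

Encoding the typology this way makes the theory's central claim explicit: in Cultural Theory, knowing a person's grid/group position is supposed to determine which risks they fear.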

Attempts have been made to validate Cultural Theory with survey research, but controversy remains over whether the survey results support or refute the grid/group typology.

Other cultural theories
Other theorists have retained the idea that culture is critical to explaining differences in risk perception, but reject Douglas and Wildavsky's typology of ways of life.

Trust
It is widely agreed that trust is a key factor in influencing people's perceptions of risk. There are two main ways trust is said to shape risk perceptions:
 * An activity is perceived as more risky if the people or agencies managing it are perceived as untrustworthy. For example, if an American does not trust the US Department of Energy, he or she is likely to see the danger of nuclear power as greater.
 * Information presented by trusted sources is given more credence than information from untrusted sources.