Throughout recorded history people have engaged in hazardous activities, and governments have taken action to control some of those activities in the public interest. But in recent times the hazards of greatest concern, and knowledge about them, have changed in ways that make informed decisions harder to reach. Once the focus was simply on the presence or absence of danger. If a food was “adulterated,” if water was determined to be “impure,” if a bridge or dam was declared “unsafe,” or if a workplace was “dangerous,” action was called for. When people called on government to take action, they wanted simple, clear-cut measures: ban sale of the food, supply pure water, condemn the bridge, eliminate the workplace hazard. But with increased understanding of the nature of the choices, it has become harder to maintain a simple view. Responsible decision makers need to know more about the alternatives than that one of them is hazardous.

In this chapter we outline the many kinds of knowledge a well-informed decision requires and the ways in which this knowledge is often incomplete and uncertain. We show how, under such conditions, the judgments of both experts and nonexperts can be affected by preexisting biases and cognitive limitations and how human values and concerns inevitably enter into the analytic process. These factors often lead experts to disagree with each other and with nonexperts about the significance of risks, even when the facts are not in dispute.

TOWARD QUANTIFICATION OF HAZARDS

One reason decision makers need more knowledge is that it has become clear that eliminating one danger can create a new one. To rid the water supply of organisms that cause typhoid and other infectious diseases, water has been chlorinated since early in this century. This action resulted in chemical reactions in the water that produced chloroform and other carcinogenic chlorinated hydrocarbons. To choose between the dangers, one must answer difficult questions: Which danger is more worth avoiding? How much decreased danger from typhoid is enough to justify a certain amount of increased danger of cancer? Experts agree that there will be fewer deaths from chlorination-induced cancer than there once were from typhoid, but is that enough information to make a decision? It may be important to consider that typhoid and cancer are very different kinds of dangers. Typhoid is an acute disease and cancer is a chronic one; typhoid is much more treatable; and there are alternatives to chlorination for preventing it, although the alternatives also present hazards, as yet poorly understood.

Society is faced with many choices that trade one danger for another and that raise similar questions. For instance, regulated commercial canning of food reduced the danger of botulism compared with home canning, but the use of lead solder in “tin” cans introduced a toxin not present in home canning jars. Lighter automobiles use less fuel and generate less air pollution, but in a collision with an older, heavier vehicle they are more dangerous to their occupants.

Societal choices also involve the benefits associated with hazards and the costs of hazard reduction. Industries that pollute air and water also provide jobs and profits; before requiring pollution controls, public officials usually want to consider the probable effects of the available options on those benefits. Cities may install traffic lights to reduce fatalities and injuries, but officials usually want to consider whether this is the best way to spend scarce revenues. Thus decision makers want good estimates of how much each alternative will reduce hazards so that they can judge the potential benefits against the potential costs.

Decision makers need detailed knowledge because it has become clear that making the world safer for most people can make it more dangerous for some. Pesticides and herbicides have helped make wholesome food more available and have helped improve the diets of low-income consumers, but they expose agricultural workers to hazardous chemicals and can significantly pollute water supplies. The total danger to society may have decreased greatly, but that knowledge may be of no comfort to farm workers. Nuclear power offers some people the benefit of cleaner air but may expose different people to radioactivity in the event of an accident. How is society to weigh small benefits to many against what are sometimes larger dangers for a relative few?

Decision makers need detailed knowledge for another reason as well: the hazards of greatest concern today are more difficult to observe and evaluate than the major hazards of the past. Half a century ago most of the major health and safety hazards were of immediate onset: accidents, bacterial infections, poisonings, and the like. Most of the hazards that are now controversial are of delayed onset, sometimes not being evident for decades after exposure and sometimes affecting only the offspring of those who were exposed. It can be hard to know what the hazards of a substance or activity are before a generation of experience has accumulated.

To make informed choices, it helps to look carefully and analytically at the hazards each alternative entails. It is important to develop quantitative knowledge: How much cancer might be caused by chlorinating water? How much pesticide are farm workers exposed to? For this kind of analysis, some conceptual distinctions are useful. The most basic of these is between “hazard” and “risk.” An act or phenomenon is said to pose a hazard when it has the potential to produce harm or other undesirable consequences to some person or thing. The magnitude of the hazard is the amount of harm that may result, including the number of people or things exposed and the severity of consequence. The concept of risk further quantifies hazards by attaching the probability of being realized to each level of potential harm.1 Thus an area that experiences a severe hurricane once in 200 years faces the same hazard but only one-tenth the risk of a similar area that experiences an equally severe hurricane once in 20 years. The concept of risk makes clear that hazards of the same magnitude do not always pose equal risks.
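To make the distinction concrete, the formulation in note 1 treats risk as the probability of harm times its magnitude; applied to the hurricane comparison, a minimal worked version reads:

```latex
% Note 1's formulation: risk = probability times magnitude of harm.
\mathrm{risk} = p \times m
% Two areas face hurricanes of equal magnitude m, one with annual
% probability 1/200 and one with 1/20:
\mathrm{risk}_A = \tfrac{1}{200}\, m \qquad \mathrm{risk}_B = \tfrac{1}{20}\, m
% Their ratio, independent of the (equal) magnitude m:
\frac{\mathrm{risk}_A}{\mathrm{risk}_B} = \frac{1/200}{1/20} = \frac{1}{10}
```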

Risks of the same magnitude do not always pose equal concerns, either. Most quantitative measures of risk combine the undesirability of a hazard and its probability of occurrence into a single summary measure. Use of such summary measures can simplify large amounts of data but can be unsatisfying to people who want to consider different kinds of injuries or deaths separately because, for instance, they believe that certain types of individuals are worthy of special protection or that certain types of injuries or illnesses are especially to be avoided. Some ways of characterizing risk take such concerns into account. These involve calculating separate risk estimates for each hazardous effect, giving heavier weight to qualitative characteristics of risk (e.g., Fischhoff et al., 1984; Okrent, 1980), and using explicit measures of values and risk attitudes (Raiffa, 1968).
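As a minimal illustration of the value judgments buried in a summary measure, the sketch below computes one unweighted and one explicitly weighted summary for a hypothetical option. The harm categories, counts, and weights are invented for illustration; none come from this report.

```python
# Sketch: collapsing several kinds of harm into one summary risk number.
# All figures and weights below are illustrative assumptions, not data.

# Expected annual cases of each kind of harm for a hypothetical option.
expected_harm = {"acute injury": 12.0, "chronic illness": 4.0, "death": 0.5}

# An unweighted summary treats every case as equivalent.
unweighted = sum(expected_harm.values())

# Explicit weights expose the value judgment that, say, a death is far
# worse than an acute injury. Different stakeholders would choose
# different weights; that choice is not a scientific question.
weights = {"acute injury": 1.0, "chronic illness": 5.0, "death": 100.0}
weighted = sum(weights[k] * v for k, v in expected_harm.items())

print(f"unweighted summary: {unweighted:.1f} cases/year")
print(f"weighted summary:   {weighted:.1f} injury-equivalents/year")
```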

KNOWLEDGE NEEDED FOR RISK DECISIONS

What kinds of knowledge must be collected so that the process of communication will be an informed dialogue leading to reasonable choices? Understanding the risks is not enough, because organizations and individuals never choose between risks. Rather, they choose between options, each of which presents some risks. Each also presents benefits, which are as crucial to the choices as the risks are. Understanding risks can be difficult, but understanding the benefits of a set of decision alternatives can be as difficult. Both kinds of knowledge are needed for an informed choice.

This section outlines the many kinds of relevant knowledge. It summarizes four kinds of knowledge decision makers need: (1) about risks and benefits associated with a particular option, (2) about alternative options and their risks and benefits, (3) about the uncertainty of the relevant information, and (4) about the management situation.

Information About the Nature of Risks and Benefits

“Risk assessment” is the term generally used to refer to the characterization of the potential adverse effects of exposures to hazards. Risk assessment therefore addresses the questions listed below. “Benefit assessment,” a term not commonly used, addresses many similar questions. Some benefit questions are mentioned below, in parentheses.

1. What are the hazards of concern as a consequence of a substance or activity? What environments, species, individuals, or organ systems might be harmed? How serious is each potential consequence? Is it reversible? (What are the benefits associated with a substance or activity? Who benefits and in what ways?)

2. What is the probable exposure to each hazard in total number of people or valued things? How do the exposures cumulate over time? A single exposure over a short period of time can have effects different from those due to exposure to the same amount of a hazard in several episodes or chronically at low levels over a longer period of time. (How many people benefit? How long do the benefits last?)

3. What is the probability of each type of harm from a given exposure to each hazard? How potent is the hazardous substance or activity at the relevant exposures? What is the relation of exposure or “dose” to response? (What is the probability that the projected benefits will actually follow from the activity in question? What events might intervene to prevent those benefits from being received? What are the probabilities of these events?)

4. What is the distribution of exposure? In particular, which groups receive a disproportionate share of the exposure? (Which groups get a disproportionate share of the benefits?)

5. What are the sensitivities of different populations of individuals to each hazard? What is the appropriate estimate of harm for highly sensitive populations that bear a significant proportion of the overall risk? What are those populations, where are they located, and what proportion of the total risk do they bear?

6. How do exposures interact with exposures to other hazards? Sometimes one exposure can make people more sensitive to another hazard—a synergistic effect—and, occasionally, exposure to one hazard may decrease sensitivity to another—a blocking effect. What is known about such effects?

7. What are the qualities of the hazard? For instance, do those exposed have an option to reduce or eliminate their exposure (and at what cost)? Would harm come to exposed people one at a time or as a mass, in a potential catastrophe? Is the hazard deadly or not? Does the harm take the form of accident or illness, acute or chronic disease, damage to the young or the old, to the living or the unborn? If the hazard is an illness, is it treatable? Is it a dread illness, such as cancer, or one that creates less of an emotional reaction? Table 2.1 lists qualities of risk that make a difference in most people's judgments. (What are the qualities of the benefits? Do they appear as increased income, saved time, physical comfort, improved health, more stable ecosystems, more beautiful surroundings, improved welfare for low-income people or the elderly, or in other forms?)

8. What is the total population risk, taking into account all of the above? To arrive at such an estimate, one must somehow calculate a summation across different types of harm, people of different sensitivities, and exposures to the hazard in different amounts and in combination with various other hazards. (What is the total benefit?)
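The summation in item 8 can be made concrete with a small sketch. The subgroups, doses, sensitivity multipliers, and unit risk below are illustrative assumptions, not figures from any actual assessment:

```python
# Sketch of item 8: total population risk aggregated across subgroups
# with different exposures and sensitivities. All numbers are invented.

subgroups = [
    # (population size, annual dose in arbitrary units, sensitivity multiplier)
    (900_000, 0.1, 1.0),  # general population, low exposure
    (90_000, 1.0, 1.0),   # nearby residents, higher exposure
    (10_000, 5.0, 2.0),   # workers, high exposure, assumed more sensitive
]

BASE_RISK_PER_UNIT_DOSE = 1e-6  # assumed probability of harm per unit dose

total_expected_cases = sum(
    n * dose * sensitivity * BASE_RISK_PER_UNIT_DOSE
    for n, dose, sensitivity in subgroups
)
print(f"expected cases per year across the population: {total_expected_cases:.2f}")
```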

TABLE 2.1 Qualitative Factors Affecting Risk Perception and Evaluation.

Information on Alternatives

The term “risk control assessment” may be used to describe the activity of characterizing alternative interventions to reduce or eliminate a hazard. More generally, decision makers need responses to questions such as the following about all the alternatives to any option under consideration:

1. What are the alternatives that would prevent the hazard in question? Some involve the choice of alternative processes or substances, while others involve action that might prevent or reduce exposure, mitigate the consequences, or compensate for damage.

2. What are the risks of alternative actions and of a decision not to act? How are these risks distributed? Since there are an infinite number of alternatives, it is possible to assess only a few, but a complete analysis should examine those alternatives being prominently discussed and should work to identify others worthy of consideration. (What benefits does each alternative promise, other than risk reduction?)

3. What is the effectiveness of each alternative? That is, how much does it reduce the risks it is intended to reduce, and how is the risk reduction distributed across relevant populations? (What benefits does each provide, and how are they distributed?)

4. What are the costs of each alternative, and how are these distributed across relevant populations?

Uncertainties in Knowledge About Risks and Benefits

Assessments of the risks and benefits of all available options, to be complete, should address the following questions about their own reliability:

1. What are the weaknesses of the available data? Information needed to estimate the risks and benefits of an activity or substance and the effects and costs of alternatives often does not exist. Sometimes experts dispute the accuracy or reliability of the data that are available. And often not enough is known to extrapolate confidently from those data to estimates of risks (or benefits) for a whole population.

2. What are the assumptions and models on which the estimates are based when data are missing or uncertain or when methods of estimation are in dispute? How much dispute exists among experts about the choice of assumptions and models?

3. How sensitive are the estimates to changes in the assumptions or models? That is, how much would the estimate change if it used different plausible assumptions about exposures or incidences of harm (or benefits) or different methods for converting available data into estimates? What are the boundaries or confidence limits within which the correct risk (or benefit) estimate probably falls? What is the basis for concluding that the correct estimate is not likely to lie outside those bounds? (A numerical sketch of this question and the next follows this list.)

4. How sensitive is the decision to changes in the estimates? That is, if, because of uncertainty, an estimate of risk or benefit were wrong by a factor of 2, or 10, or 100, would the decision maker's choice be different?

5. What other risk and risk control assessments have been made, and why are they different from those now being offered?
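A minimal numerical sketch of questions 3 and 4, using two invented dose-response assumptions and a hypothetical decision rule, shows how an estimate can swing by orders of magnitude under different plausible assumptions and carry the decision with it:

```python
# Sketch: sensitivity of an estimate, and of the resulting decision, to
# one disputed modeling assumption. All numbers below are illustrative.

EXPOSED_PEOPLE = 100_000
DOSE = 0.5  # assumed average lifetime dose, arbitrary units

# Two plausible dose-response slopes that experts might dispute.
models = [("optimistic model", 1e-5), ("pessimistic model", 1e-3)]

for label, slope in models:
    expected_cases = EXPOSED_PEOPLE * DOSE * slope
    # A hypothetical decision rule: regulate if more than 5 expected cases.
    decision = "regulate" if expected_cases > 5 else "do not regulate"
    print(f"{label}: {expected_cases:.1f} expected cases -> {decision}")
```

Here the two assumptions differ by a factor of 100, and the choice flips; if both models had implied the same choice, the uncertainty, however large, would not matter to the decision.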

Information on Management

“Risk management” is a term used to describe processes surrounding choices about risky alternatives. In common usage, assessments of the risks and benefits of various options are seen as technical activities that yield information for decision makers, whose decisions are called risk management decisions (National Research Council, 1983a). [If one accepts the distinction between risk assessment and risk management (see the list of terms in Appendix E), communication about risks that involves nonexperts would generally be part of risk management.] In addition to information about risks and benefits, decision makers need answers to managerial questions such as these:

1. Who is responsible for the decision? Who is responsible for preventing, mitigating, or compensating for damage? Who is responsible for generating and evaluating data? Who has oversight?

2. What issues have legal importance? Do the applicable laws take benefits into consideration? Do they allow consideration of the risks of alternatives? Do they require the analysis of economic and social impacts of the activity in question or its alternatives?

3. What constrains the decision? What technical, physical, biological, or financial limits constrain some possible choices? What are the limits of authority of the person or organization making the decision? Are there time limits imposed on the decision process? What difference could public opinion or political intervention make?

4. What resources are available for implementing the decision? What personnel and financial resources are available to the decision maker? To others involved in debating the decision?

Other Relevant Knowledge

In addition to items on the above lists, other considerations are also important. Technological choices involve risks and benefits not only to the life, health, and safety of individual humans but also to nonhuman organisms, ecological balances, the structures of human communities, political and religious values, and other things that concern decision makers but that are not easily evaluated by the quantitative approaches implied by the above lists. The assessment of such risks and benefits is not standard practice in the field of risk assessment. Such factors are commonly discussed, however, in activities and documents described as “impact assessments” or “technology assessments.” These broadly conceived activities and documents often address a wide range of the questions just outlined.

Summary

In sum, a well-informed choice about activities that present hazards and risks requires a wide range of knowledge. It depends on understanding of the physical, chemical, and biological mechanisms by which hazardous substances and activities cause harm; on knowledge about exposures to hazards or, where knowledge is incomplete, on analysis and modeling of exposures; on statistical expertise; on knowledge of the economic, social, esthetic, ecological, and other costs and benefits of various options; on understanding of the social values reflected in differential reactions to the qualities of risks; on knowledge of the constraints on and responsibilities of risk managers; and on the ability to integrate these disparate kinds of knowledge, data, and analysis. Needless to say, it is often impossible in practice to gather all this knowledge. Nevertheless, the more complete the knowledge and the more quantitative answers are found, the better informed the ultimate decision will be.

GAPS AND UNCERTAINTIES IN KNOWLEDGE

The above summary of needed knowledge clearly suggests that decisions about risky activities and hazardous substances are frequently made with incomplete information. In this section we elaborate on some of the points just raised. We focus on risks, even though there are major gaps and uncertainties in knowledge about benefits as well, and we list several important ways that information about the nature and magnitude of risk is often incomplete and uncertain (see Figure 2.1).

FIGURE 2.1 SOURCE: Drawing by Richter; ©1987 The New Yorker Magazine, Inc.

Identification of Hazards

It is sometimes difficult even to determine whether a hazard exists. For activities or substances whose hazards are delayed in onset (such as possible causes of cancer or birth defects) and for substances to which people are exposed in very small quantities, it is difficult to connect effects to causes. Analysts often use experiments with animals or bacteria to determine whether such activities or substances are hazardous under controlled conditions, but not all potential hazards are studied, even in the laboratory. A National Research Council panel reviewed the testing that had been done on a random sample of 675 substances (National Research Council, 1984). Within this group, 75 percent of the drugs and inert chemicals in drug formulations had had some testing for acute toxicity and 62 percent had had some testing for subchronic effects. For pesticides and ingredients in pesticide formulations, these values were 59 percent and 51 percent, respectively. Testing for chronic, mutagenic, or reproductive and developmental effects was less frequently done than testing for acute and subchronic effects, and testing of all kinds was less frequently done for substances on the Toxic Substances Control Act's list of chemicals in commerce. The panel concluded that toxicity studies had not yet been done on the majority of the chemicals—amounting to tens of thousands—now in industrial use in the United States.

Even when studies have been done with lower organisms, it is uncertain whether there is a human hazard. Substances that cause cancer, mutations, or birth defects in some species of animals often have no demonstrable effect on other species, and the reasons for these differences are not yet understood. For instance, a review by the Food and Drug Administration indicated that of 38 compounds demonstrated or suspected to cause birth defects in humans, all except one tested positive in at least one animal species and more than 80 percent were positive in more than one species. Eighty-five percent of the 38 compounds caused birth defects in mice, 80 percent in rats, 60 percent in rabbits, 45 percent in hamsters, and 30 percent in primates (National Research Council, 1986b). Thus some substances that do not cause cancer or birth defects in test species appear to have these harmful effects on humans. And the reverse may also be true. Scientists may agree that positive results in an animal test on a particular substance are strong evidence of a human hazard, but there is always some uncertainty about that judgment.

Estimation of Exposure

Data are frequently inadequate on exposures to hazards. Many hazardous substances are diffused in the air or in surface or underground waterways and in the process undergo physical or chemical changes that transform them into other substances that may be less hazardous—or that may be more so, although more dilute. Many hazardous substances are transformed by biological processes before they reach humans. And even in the human body, metabolic processes can alter hazardous chemicals before they reach the organs to which they present hazards, sometimes making them less toxic, but sometimes making them more so (National Research Council, 1986b). Thus the hazardous substances released into the environment at the source may be very different in quantity and even in kind from those to which people are ultimately exposed. The measurement of exposure is therefore most accurate at the dispersed sites where people live and work. As a result, it can be very expensive to collect accurate exposure data. The problems and the expense multiply when researchers try to address questions about unequal distributions of exposure and about possibly sensitive populations. Many more measurements must be made to compare the exposures of a variety of populations. For these reasons exposures are usually estimated from data on releases of hazardous substances. Inferring exposures requires numerous assumptions about the transport, dispersion, and transformations of substances, many of which are based on incomplete theory and limited evidence (National Research Council, 1988a). The use of estimates rather than measurements of exposure adds a layer of uncertainty to risk estimates.
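A stylized sketch of that inference chain follows; every factor is an assumed value of the kind that transport, transformation, and intake models must supply, and none is taken from an actual model:

```python
# Sketch: estimating exposure from release data rather than measuring it
# where people live and work. Each multiplier below is an assumption.

release = 1_000.0      # kg/year of the substance released at the source
transport = 0.01       # assumed fraction reaching places where people live
transformation = 0.5   # assumed fraction surviving chemical/biological change
intake = 1e-6          # assumed fraction of the ambient amount a person takes in

per_person_dose = release * transport * transformation * intake
print(f"estimated dose: {per_person_dose:.1e} kg/person/year")
# If each assumed factor can be off by a factor of 10, their product can
# be off by far more; this is the added layer of uncertainty the text notes.
```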

Further uncertainty is introduced by the fact that many hazards produce their effects by exposure over time. It is known that exposure to radiation and some hazardous substances in a given amount will have different effects depending on whether it occurs at once, is spread over several smaller exposures, or is continuous at a low rate over a long period of time (National Research Council, 1988b). It is not known, however, how much difference this time dimension makes for particular hazards or which rate of exposure carries the greatest risk (National Research Council, 1984:60).

Estimation of the Probability of Harm

Knowledge about the probability of harm from a given hazard is also frequently inadequate or uncertain. The best way to estimate the probability of harm is to examine the accumulated experience of people exposed to the hazard. Only rarely, however, as with automobile travel and other familiar hazards whose effects are easy to observe, is there sufficient human experience to calculate accurate probabilities from observational data. Past experience does not exist for many controversial hazards because they involve new technologies. For many others, including carcinogens and most air pollutants, past experience is hard to interpret because it is difficult to tell which illnesses or deaths are attributable to the hazard rather than to other causes. For yet other hazards the meaning of past experience is in dispute because the greatest concern is about very low probability but potentially disastrous events, such as a nuclear reactor core meltdown or the escape of a virulent organism from a laboratory. The fact that a disaster has not happened may mean that there is no potential for harm, that the potential is high but luck has been good, or that the probability of harm is very low. But when considering major disasters, even a very low probability can mean the risk to the population, defined as the probability multiplied by the magnitude of the consequence, is large.
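One standard statistical device, not taken from this report, helps show why an unblemished record settles so little: with zero events observed in n independent trials, the "rule of three" gives roughly 3/n as a 95 percent upper confidence bound on the per-trial probability, and even that bound can imply a large population risk. A sketch with invented figures:

```python
# Sketch: what the absence of a disaster can (and cannot) tell us. The
# "rule of three" approximation: zero events in n independent trials
# gives ~3/n as a 95% upper bound on the per-trial probability.
# The operating experience and consequence figures are invented.

event_free_years = 1_000          # assumed accident-free operating experience
p_upper = 3 / event_free_years    # ~95% upper bound on annual probability

consequence = 10_000  # assumed magnitude of one disaster (e.g., deaths)
print(f"upper bound on annual probability: {p_upper:.1e}")
print(f"population risk at that bound: {p_upper * consequence:.0f} per year")
# A clean record narrows the plausible range but cannot distinguish
# "no potential for harm" from "high potential, good luck so far."
```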

When knowledge from experience is unavailable or unreliable, analysts develop methods of estimating the risk. To assess the risk from carcinogens, they commonly use data from laboratory experiments on nonhuman organisms. Adding assumptions about how humans differ from the experimental organisms and about how to extrapolate from the 2-year exposures to high doses usually given to laboratory rodents to the long-term low doses characteristic of natural human exposures, they estimate the human risk. An extensive literature debates the merits of different methods of making these extrapolations across species, dosages (National Research Council, 1980), and exposure times (Kaufman, 1988). Risk analysts also use epidemiological studies that correlate evidence of exposure and evidence of harm, but interpretations of these studies are often controversial because they are open to alternative explanations. For instance, illnesses in exposed groups may be due to some other hazard to which they were also exposed or to some synergistic interaction of hazards. Only very infrequently do analysts have access to data on humans whose exposures to the relevant hazards are well known.
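A minimal sketch of one such extrapolation, using a linear no-threshold assumption and invented experimental numbers (the choice among extrapolation models, as the text notes, is exactly what the literature debates):

```python
# Sketch of a linear, no-threshold extrapolation from a high-dose animal
# experiment to low-dose human exposure. All numbers are illustrative.

animal_dose = 100.0        # mg/kg/day given to rodents for ~2 years
animal_tumor_rate = 0.20   # fraction of dosed animals developing tumors

# Linear no-threshold assumption: risk is proportional to dose.
slope = animal_tumor_rate / animal_dose  # risk per mg/kg/day

interspecies_factor = 1.0  # assumes humans respond like rodents (disputed)
human_dose = 0.001         # mg/kg/day, a typical environmental exposure

lifetime_human_risk = slope * interspecies_factor * human_dose
print(f"extrapolated lifetime risk: {lifetime_human_risk:.1e}")
# A different model, e.g., one with a dose threshold, could put this
# risk at zero, which is why the choice of model is so contested.
```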

A different sort of uncertainty arises in assessing the risk of disasters that result from the breakdown of complex technological systems, particularly types of catastrophic accidents that have not previously occurred. Risk analysts sometimes address this problem with “fault-tree” analysis, a technique that uses experience to estimate the probabilities of various events that might contribute to a disaster and then combines the probabilities to estimate the likelihood that enough contributing factors will occur at once to trigger the disaster. The analysts then use available data and models to estimate potential exposures and their consequences. Needless to say, these methods of estimation are full of untested assumptions and uncertainties. In particular, an extensive literature debates the errors of omission and commission in fault-tree analyses of the probability of technological disasters, such as in the nuclear power industry (Campbell and Ott, 1979; Fischhoff et al., 1981a; McCormick, 1981). The uncertainties in these methods are legion, so several different and even conflicting conclusions can often be defended by competent scientists. It is difficult and sometimes proves impossible to reach a consensual judgment about what the probabilities are, let alone what to do about the attendant risks (see Figure 2.2).
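A toy fault tree, with made-up contributing events and probabilities, shows the combining arithmetic the text describes and where the contested assumptions enter:

```python
# Sketch of fault-tree arithmetic: estimate probabilities of contributing
# events, then combine them through AND/OR gates. The tree structure and
# all probabilities below are invented for illustration.

def and_gate(*probs):
    """All contributing events must occur; assumes they are independent."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def or_gate(*probs):
    """Failure if any contributing event occurs; assumes independence."""
    none_occur = 1.0
    for p in probs:
        none_occur *= (1.0 - p)
    return 1.0 - none_occur

pump_fails = 1e-3
backup_pump_fails = 1e-2
operator_misses_alarm = 1e-2
valve_sticks = 1e-4

# Hypothetical disaster: both pumps fail AND (operator misses the alarm
# OR a relief valve sticks).
p_disaster = and_gate(pump_fails, backup_pump_fails,
                      or_gate(operator_misses_alarm, valve_sticks))
print(f"estimated probability per year: {p_disaster:.2e}")
# The independence assumptions are exactly where "common-mode failures"
# (discussed later in this chapter) can make such estimates too low.
```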

FIGURE 2.2 SOURCE: Drawing by Richter; ©1988 The New Yorker Magazine, Inc.

Identification of Synergistic Effects

Additional uncertainty in risk estimates exists because exposure to one hazard can affect a person's sensitivity to another. For instance, asbestos is estimated to be about 10 times as dangerous to smokers as to nonsmokers (Breslow et al., 1986). This may occur because chemical reactions between the substances yield products of different toxicity or because one substance increases the availability to the body of another one that would not have been toxic by itself (National Research Council, 1988a). In such ways, exposure to one substance can potentiate the adverse effects of another or, less commonly, decrease another substance's toxic effect. There is very little knowledge, however, about how frequent or how strong such synergistic or blocking effects are or about which combinations of substances and activities are likely to exhibit the effects. The knowledge that such effects exist, however, gives reason to consider almost all estimates of health risk based on studies of single hazardous substances as somewhat uncertain, even when they are based on the most careful analysis possible.

Summary

In sum, any scientific risk estimate is likely to be based on incomplete knowledge combined with assumptions, each of which is a source of uncertainty that limits the accuracy that should be ascribed to the estimate. Does the existence of multiple sources of uncertainty mean that the final estimate is that much more uncertain, or can the different uncertainties be expected to cancel each other out? The problem of how best to interpret multiple uncertainties is one more source of uncertainty and disagreement about risk estimates.
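A small Monte Carlo sketch of that question, assuming purely for illustration that a risk estimate is the product of three independent factors, each uncertain to within a factor of about 3:

```python
# Sketch: do multiple multiplicative uncertainties compound or cancel?
# Assumes three independent lognormal factors, each uncertain to within
# a factor of ~3. The setup is illustrative, not from any assessment.
import math
import random

random.seed(0)

SIGMA = math.log(3.0)  # lognormal spread equivalent to a factor of 3

def uncertain_factor():
    """One multiplicative factor with lognormal error, centered on 1."""
    return math.exp(random.gauss(0.0, SIGMA))

products = sorted(
    uncertain_factor() * uncertain_factor() * uncertain_factor()
    for _ in range(100_000)
)
median = products[len(products) // 2]
p95 = products[int(0.95 * len(products))]
print(f"median of product: {median:.2f}   95th percentile: {p95:.1f}")
# Independent errors partly offset but do not cancel: the product is
# uncertain by roughly a factor of 3**sqrt(3), about 6.7, rather than 3.
```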

SCIENTIFIC JUDGMENT AND ERRORS IN JUDGMENT

What do analysts do when confronted with knowledge so full of uncertainties? Scientists' training, which teaches them to accurately represent certain types of uncertainties, comes into conflict with the pressure to give succinct, unambiguous answers that can inform the social and personal decisions nonexperts must make about risks. If the experts remain silent or equivocal, choices will be made without taking into account what they know. Once they begin to convey what they know, however, experts must inevitably make judgments about the meaning of available information and about the degree to which uncertainty makes it less reliable. But because experts rely on ordinary cognitive processes to make sense of the wealth of data they have available, their judgments about the meaning and conclusiveness of available information can suffer from some of the same frailties that affect human cognition in general.

Inappropriate Reliance on Limited Data

Even statistically sophisticated individuals often have poor intuitions about how many observations are necessary to support a reliable conclusion about a research hypothesis (Tversky and Kahneman, 1971). In particular, they tend to draw conclusions from small samples that are only justified with much larger samples. Thus they may be prone to conclude that a phenomenon such as a toxic effect does not exist when in fact the data are so sparse that the only appropriate conclusion is that the search for the phenomenon is in its early stages. They may also err in the opposite direction, sounding an alarm on the basis of extremely limited preliminary data. The tendency for scientists to draw conclusions from “low-power” research has been documented in fields from psychology (Cohen, 1962) to toxicology (Page, 1981). Low-power research uses measurements and methods that are unlikely to reveal small effects without very large numbers of measurements. Where the tendency to premature conclusion operates, expert judgment will err by underreporting or overreporting effects, both hazardous and beneficial.
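A simulation sketch of the low-power problem, with invented illness rates and a simple one-sided test, shows how a real but modest effect is usually missed at small sample sizes:

```python
# Sketch: a real effect (8% vs. 5% illness rates, invented numbers) is
# usually missed by small studies and found reliably only by large ones.
import math
import random

random.seed(1)

BASELINE = 0.05  # assumed background illness rate
EXPOSED = 0.08   # assumed true rate in the exposed group (a real effect)

def fraction_detecting(n, trials=1_000):
    """Fraction of simulated studies (n people per group) in which a
    one-sided two-proportion z-test is significant at the 5% level."""
    hits = 0
    for _ in range(trials):
        x_control = sum(random.random() < BASELINE for _ in range(n))
        x_exposed = sum(random.random() < EXPOSED for _ in range(n))
        pooled = (x_control + x_exposed) / (2 * n)
        se = math.sqrt(2 * pooled * (1 - pooled) / n) or 1e-9
        z = (x_exposed - x_control) / n / se
        hits += z > 1.645
    return hits / trials

for n in (50, 500, 2_000):
    print(f"n = {n:>5} per group: effect found in "
          f"{fraction_detecting(n):.0%} of studies")
```

At the small sample size, most simulated studies find nothing, and a researcher who concluded "no toxic effect" from one of them would be wrong.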

Tendency to Impose Order on Random Events

People who are seeking explanations for events, including experts working in their areas of expertise, have a tendency to see meaning even when the events are random (Kahneman and Tversky, 1972). For instance, stock market analysts develop elaborate theories of market fluctuations, but their predictions rarely do better than the market average (Dreman, 1979), and clinical psychologists see patterns they expect to find even in randomly generated test data (O'Leary et al., 1974). In interpreting statistics relating the incidence of cancer to occupational exposures to particular chemicals, there is a temptation to interpret a correlation between exposure to a particular chemical and the incidence of a particular cancer as evidence of an effect. But some such evidence is to be expected even in random data, if large numbers of chemicals and cancers are examined. Similarly, occasional “cancer clusters” are likely to be present in large epidemiological studies even by chance. Replication on a new sample is the best way to check the reliability of such relationships, but new samples are often hard to find. Sometimes, conclusions are reported and publicized as definite before they have been adequately checked.
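A small simulation, all noise by construction, illustrates the multiple-comparisons point: screen enough chemical-cancer pairings and some will look "significant" by chance alone:

```python
# Sketch: spurious "associations" from screening many pairings of pure
# noise. Under the null hypothesis p-values are uniform on [0, 1], so
# about ALPHA of them fall below the significance threshold by chance.
import random

random.seed(2)

N_CHEMICALS, N_CANCERS = 50, 20
ALPHA = 0.05  # conventional significance threshold

false_positives = sum(
    random.random() < ALPHA
    for _ in range(N_CHEMICALS * N_CANCERS)
)
print(f"{N_CHEMICALS * N_CANCERS} pairings examined, "
      f"{false_positives} 'significant' by chance alone "
      f"(about {ALPHA * N_CHEMICALS * N_CANCERS:.0f} expected)")
```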

Such instances, including the interpretation of “unusual” cases, are at heart issues of the proper conduct of scientific analysis. Although recent attention to scientific misconduct may attach greater significance to unusual cases than is actually warranted, it is nonetheless important to recognize the natural human tendency to find order even when the evidence is tenuous and to recognize that when analysts are strongly motivated to find particular results they may overinterpret the evidence.

Tendency to Fit Ambiguous Evidence into Predispositions

When faced with ambiguous or uncertain information, people have a tendency to interpret it as confirming their preexisting beliefs; with new data they tend to accept information that confirms their beliefs but to question new information that conflicts with them (Ross and Anderson, 1982). Because of the high degree of ambiguity in the data underlying risk assessments, this cognitive bias may act to perpetuate erroneous early impressions about risks even as new evidence makes them less tenable.

Tendency to Systematically Omit Components of Risk

In analyses of complex technological systems, certain features are commonly omitted, possibly because they are absent from operating theories of how the technological systems work. In particular, analysts are prone to overlook the ways human errors or deliberate human interventions can affect technological systems; the ways different parts of the system interact; the ways human vigilance may flag when automatic safety measures are introduced; and the possibility of “common-mode failures,” problems that simultaneously affect parts of the technological system that had been assumed to be independent [for elaboration and citations of the evidence, see Fischhoff et al. (1981a)]. Typically, people who were not involved in performing the analyses are unlikely to notice such omissions—in fact, in a complex technical analysis, observers are likely to overlook even major omissions in the analysis. Although most of these oversights tend to lead to underestimates of overall risk, this need not always be the case.

Overconfidence in the Reliability of Analyses

Weather forecasters are remarkably accurate in judging their own forecasts. When they predict a 70 percent chance of rain, there is measurable precipitation just about 70 percent of the time. They seem to be so successful because of the following characteristics of their situation: (1) they make numerous forecasts of the same kind, (2) extensive statistical data are available on the average probability of the events they are estimating, (3) they receive computer-generated predictions for specific periods prior to making their forecasts, (4) a readily verifiable criterion event allows for quick and unambiguous knowledge of results, and (5) their profession admits its imprecision and the need for training (Fischhoff, 1982; Murphy and Brown, 1983; Murphy and Winkler, 1984). Most of these conditions do not hold for professional risk assessors, however, and the predictable result is overconfidence among experts. For instance, civil engineers do not normally assess the likelihood that a completed dam will fail, even though about 1 in 300 does so when first filled with water (U.S. Committee on Government Operations, 1978).2
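The calibration check behind the weather-forecaster example is simple to state in code; the forecast records below are invented for illustration:

```python
# Sketch of a calibration check: group forecasts by stated probability
# and compare each group's stated probability with the observed
# frequency of the event. The records are invented.
from collections import defaultdict

# (stated probability of rain, did it rain?)
records = [(0.7, True), (0.7, True), (0.7, False), (0.7, True),
           (0.3, False), (0.3, True), (0.3, False), (0.3, False),
           (0.9, True), (0.9, True), (0.9, True), (0.9, False)]

by_stated = defaultdict(list)
for stated, rained in records:
    by_stated[stated].append(rained)

for stated in sorted(by_stated):
    outcomes = by_stated[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%}: rained {observed:.0%} "
          f"of {len(outcomes)} forecasts")
# A well-calibrated forecaster's observed frequencies track the stated
# probabilities; overconfidence appears as observed frequencies falling
# short of stated probabilities at the confident end of the scale.
```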

Summary

These normal cognitive tendencies can lead expert risk analysts to convey incorrect impressions of the nature and reliability of scientific knowledge. Some of the tendencies predispose to premature judgment that a risk is low or high. Several of them bias scientific judgment in the direction of overconfidence about the certainty of whatever currently seems to be known. Although the net effect of these cognitive tendencies has not been determined, their existence justifies a certain amount of skepticism on the part of decision makers, including individuals, about definitive claims made by risk analysts.

INFLUENCES OF HUMAN VALUES ON KNOWLEDGE ABOUT RISK

Although it is useful conceptually to separate risk assessment and risk control assessment from value judgment, there are many respects in which it is not possible to accomplish the separation in practice. Judgments made by scientists on which types of hazardous consequences to study and by analysts on which ones to measure are based in part on technical information—what knowledge already exists, what additional knowledge would be relevant to a decision at hand, what the relative costs are of collecting different kinds of data, and what kinds of information would be most useful for estimating particular risks. But they are also based on value judgments about which types of hazard are most serious and therefore most worthy of being reduced. This section discusses two of the ways that human values enter understanding of risks: through the choice of numbers to summarize knowledge about the magnitude of risks and through the weighting of different attributes of hazards.

Choices of Numerical Measures for Risk

The need to quantify risks as an aid to decision making creates special difficulties because the choice of which numerical measure to use depends on values and not only on science. This fact is evident even in a simple problem of risk measurement—the choice of a number to summarize information on fatalities. Different risk analysts have used different summary statistics to represent the risk of death from an activity or technology.3 Among the measures used are the annual number of fatalities, deaths per person exposed or per unit of time, reduction of life expectancy, and working days lost as a result of reduced life expectancy. The choice of one measure or another can make a technology look either more or less risky. For instance, in the period from 1950 to 1970, coal mines became much less risky in terms of deaths from accidents per ton of coal, but they became marginally riskier in terms of deaths from accidents per employee (Crouch and Wilson, 1982). This is because with increasing mechanization fewer workers were required to produce the same amount of coal. So although there were fewer deaths per year in the industry, the risk to an individual miner actually increased during this period. Which measure is more appropriate for decisions depends on one's point of view. As some observers have argued, “From a national point of view, given that a certain amount of coal has to be obtained, deaths per million tons of coal is the more appropriate measure of risk, whereas from a labor leader's point of view, deaths per thousand persons employed may be more relevant” (Crouch and Wilson, 1982:13).
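The divergence of the two measures is easy to reproduce with stylized figures, invented here to mirror the pattern Crouch and Wilson describe rather than their actual data:

```python
# Sketch: the same fatality record looks safer by one measure and
# riskier by another. All figures are invented for illustration.

# (period, annual accident deaths, miners employed, tons of coal mined)
periods = [
    ("earlier period (illustrative)", 500, 400_000, 500_000_000),
    ("later period (illustrative)", 200, 120_000, 600_000_000),
]

for label, deaths, employees, tons in periods:
    per_mton = deaths / (tons / 1_000_000)        # deaths per million tons
    per_worker = deaths / (employees / 1_000)     # deaths per 1,000 employed
    print(f"{label}: {per_mton:.2f} deaths/Mton, "
          f"{per_worker:.2f} deaths/1,000 workers")
# Mechanization cuts deaths per ton (1.00 -> 0.33) even as deaths per
# thousand workers rises (1.25 -> 1.67): safer from the national point
# of view, riskier from the individual miner's.
```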

Each way of summarizing deaths embodies its own set of values. For example, “reduction in life expectancy” treats deaths of young people as more important than deaths of older people, who have less life expectancy to lose. Simply counting fatalities treats deaths of the old and young as equivalent; it also treats as equivalent deaths that come immediately after mishaps and deaths that follow painful and debilitating disease or long periods during which many who will not suffer disease live in daily fear of that outcome. Using “number of deaths” as the summary indicator of risk implies that it is equally important to prevent deaths of people who engage in an activity by choice and deaths of those who bear its effects unwillingly. It also implies that it is equally important to protect people who have been benefiting from a risky activity or technology and those who get no benefit from it. One can easily imagine a range of arguments to justify different kinds of unequal weightings for different kinds of deaths, but to arrive at any selection requires a value judgment concerning which deaths one considers most undesirable. To treat the deaths as equal also involves a value judgment.

There are additional value choices involved in calculations based on fatalities. A particularly controversial choice concerns whether to “discount” lives, that is, whether to give deaths far into the future less weight than present deaths. This approach to valuation is sometimes advocated on the ground that people typically prefer a given amount of any particular good in the present to the same value in the future—if they invested the cost of the good, they could expect to have increased purchasing power and thus to be able to purchase more of it in the future than in the present. Although one cannot “invest” human life in the same way, society can invest the resources used to save or prolong lives. From an individual's point of view, one arguably loses less by dying at an old age than when younger, so people may be less willing to work to avoid probable deaths the farther they are in the future.

Discounting is controversial partly because it is used to put a monetary value on human life. Some measure, whether based on probable future earnings or consumption or on willingness to pay to reduce the probability of fatality, is selected to put a price on what for many has intrinsic moral or even religious value—and each of these measures embodies controversial assumptions about what is worthwhile about life. In addition, choosing a positive discount rate—one that treats future lives as worth less than present lives—suggests that society cares less about its children's generation than its own, a controversial assumption to say the least. But deciding not to discount lives also involves a judgment about the future, and so it is also a value-laden choice (Zeckhauser and Shephard, 1981).
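The arithmetic at issue is compact. With an assumed 5 percent annual rate (purely illustrative), a sketch of how discounting shrinks far-future lives:

```python
# Sketch of the discounting arithmetic: a positive rate makes a death
# prevented decades from now count for much less than one prevented
# today. The 5% rate is an illustrative assumption; choosing the rate
# is precisely the value judgment in dispute.

DISCOUNT_RATE = 0.05  # assumed annual rate

def present_value(lives_saved, years_in_future, rate=DISCOUNT_RATE):
    """Value today of lives saved at a given future time."""
    return lives_saved / (1.0 + rate) ** years_in_future

for t in (0, 10, 50, 100):
    print(f"100 lives saved in {t:>3} years counts as "
          f"{present_value(100, t):6.1f} today")
# At 5%, 100 lives a century away count as less than 1 today, which is
# why the discounting choice dominates comparisons of present costs
# against far-future harms.
```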

Values also enter into scientists' choices about how to characterize the uncertainty in their information. It is traditional among civil engineers, public health professionals, and others to take account of uncertainty by being “conservative” in stating risk estimates. This means that they leave a margin for error that will protect the public if the actual risk turns out to be greater than the best currently available estimate. But it has sometimes been argued that risk analysts should instead present their best available estimate to decision makers, along with an explicit characterization of its uncertainty, and allow the decision makers to decide explicitly how much margin of safety to allow. The dispute is highly controversial because many believe that in practice the latter approach will provide a narrower margin of safety. The central point here is that either way of representing uncertainty embodies a value choice about the best way to protect public health and safety.

These few examples show how human values can enter into even apparently technical decisions in risk analysis, such as about the choice of a number to summarize a body of data. It is easy therefore to see how choices that are justified by appeal to data from a risk analysis can sometimes be questioned by appealing to the very same data (see Figure 2.3).

FIGURE 2.3 SOURCE: National Wildlife Magazine, August-September 1984. Copyright © 1984 Mark Taylor. Reprinted with permission of Mark Taylor.

Values and the Attributes of Hazards

We have noted that decision makers do not choose among risks but among alternatives, each with many attributes, only some of which concern risk. Similarly, each hazard—and, for that matter, each benefit—that a decision alternative presents has many attributes. These attributes matter to nonexperts making decisions, and qualitative aspects of hazards are relevant to decisions in various ways. In different decision contexts it may be necessary to consider comparisons and trade-offs such as the following: Is a risk of cancer worse than a risk of heart disease? Is an accidental death of a person at age 30 more to be avoided than a death by emphysema at age 70? Is an industrial hazard more acceptable if it is borne by workers partly compensated by their pay than if it is borne by nonworking neighbors of the industrial plant? Are the deaths of 50 passengers in separate automobile accidents equivalent to the deaths of 50 passengers in one airplane crash? Is a hazard that faces the unborn worse than a similar hazard that we face ourselves? Is a large hazard with a low probability as undesirable as a small hazard with a high probability when the estimated risks are equal? The difficult questions multiply when hazards other than to human health and safety are considered. Technological choices sometimes involve weighing the value of a river vista, a small-town style of living, a holy place, or the survival of an endangered species, in addition to dangers to human health, against probable economic benefits. Such choices are ultimately matters of values and interests that cannot be resolved merely by determining what the risks and benefits are.

A growing body of knowledge on what is usually called “risk perception” helps illuminate the values involved in the evaluation of different qualities of hazards.4 In studies of risk perception individuals are given the names of technologies, activities, or substances and asked to consider the risks each one presents and to rate them, in comparison with either a standard reference or the other items on the list. The responses are then analyzed, taking into account attributes of the hazards and benefits each technology, activity, or substance presents (Table 2.1 lists several such attributes). Analysis consistently shows that people's ratings are a function not only of average annual fatalities according to the best available estimates, but also of the attributes of the hazards and benefits associated with a technology, activity, or substance (Fischhoff et al., 1978; Gould et al., 1988; Otway and von Winterfeldt, 1982; Slovic et al., 1979, 1980). In particular, the studies show that certain attributes of hazards, such as the potential to harm large numbers of people at once, personal uncontrollability, dreaded effects, and perceived involuntariness of exposure, among others (see Table 2.1), make those hazards more serious to the public than hazards that lack those attributes. Also, choices that provide different types of benefit, such as money, security, and pleasure, are valued differently from each other (Gould et al., 1988). The fact that hazards differ dramatically in their qualitative aspects helps explain why certain technologies or activities, such as nuclear power, evoke much more serious public opposition than others, such as motorcycle riding, that cause many more fatalities.

An important implication of such findings is that those quantitative risk analyses that convert all types of human health hazard to a single metric carry an implicit value-based assumption that all deaths or shortenings of life are equivalent in terms of the importance of avoiding them. The risk perception research shows not only that the equating of risks with different attributes is value laden, but also that the values adopted by this practice differ from those held by most people. For most people, deaths and injuries are not equal—some kinds or circumstances of harm are more to be avoided than others. One need not conclude that quantitative risk analysis should weight the risks to conform to majority values. But the research does suggest that it is presumptuous for technical experts to act as if they know, without careful thought and analysis, the proper weights to use to equate one type of hazard with another. When lay and expert values differ, reducing different kinds of hazard to a common metric (such as number of fatalities per year) and presenting comparisons only on that metric have great potential to produce misunderstanding and conflict and to engender mistrust of expertise.

IMPLICATIONS FOR RISK COMMUNICATION

We have shown in this chapter that different experts are likely to see technological choices in different, sometimes contradictory, ways even when the information is not at issue. Incomplete and uncertain knowledge leaves considerable room for scientific disagreement. Judgments about the same evidence can vary, and both judgments and the underlying analyses can be influenced by the values held by researchers. Since scientists and the people who convert scientific information into risk messages do not all share common values, it is reasonable to expect risk messages to conflict with each other. Even in the best of circumstances for communication, conflicting risk messages would create confusion in the minds of nonexperts who must rely on them to inform their choices. But as the next chapter shows, the circumstances are not the best. The social conflict that surrounds modern technological choices is characterized by anxiety and mistrust and by clashes of vested interests and values, conditions that create formidable tasks for those who would improve decision making through risk communication.

1. One technical definition of risk is that risk is the product of a measure of the size of the hazard and its probability of occurrence. Regardless of how numerical estimates are made, the essence of the distinction between hazard and risk is that “risk” takes probability explicitly into account.

2. This discussion is drawn from Fischhoff et al. (1981a). More extensive discussions of expert overconfidence with additional examples can be found there and in Lichtenstein et al. (1982).

3. This discussion is drawn from Fischhoff et al. (1984:125–126), where further citations can be found.

4. The term “risk perception” is put in quotation marks because, as the discussion shows, this body of research is more accurately described as the study of human values regarding attributes of hazards (and benefits).