Draft: 4 June 1994

To be submitted to Chemosphere


Alexander Shlyakhter1*, L. James Valverde A., Jr.2, and Richard Wilson1

1Department of Physics, Harvard Center for Risk Analysis and Northeast Regional Center for Global Environmental Change

Harvard University, Cambridge, Massachusetts 02138

2Sloan School of Management and the Technology, Management, and Policy Program

Massachusetts Institute of Technology, Cambridge, MA 02139 USA

*Corresponding author


This paper considers several factors that enter into an integrated risk analysis of global climate change. It begins by describing how the problem of global climate change can be subdivided into largely independent parts, which can then be linked together in an analytically tractable fashion. Uncertainty plays a crucial role in any such analysis, and the paper therefore examines various aspects of uncertainty as they relate to the problem of global climate change. It also addresses a number of issues relating to risk management and public decision-making, including sequential decision strategies, the value of information, and problems of interregional and intergenerational equity.


There is a long history of scientific study of global warming. Fourier (1835) may have been the first to notice that the earth is a greenhouse, kept warm by the atmosphere, which reduces the loss of infrared radiation. The overriding importance of water vapor as a greenhouse gas was recognized even then. Arrhenius (1896) was the first to relate quantitatively the concentration of carbon dioxide (CO2) in the atmosphere to global temperature. Scientific understanding has increased since then, stimulated in the latter half of this century particularly by the conclusion of Revelle and Suess (1957) that human emissions of CO2 would exceed the rate of uptake by natural sinks in the near term, and by the demonstration by Keeling (1989) that atmospheric CO2 is steadily increasing. These scientists' warnings had little effect on public opinion and policy until the summer of 1988, when it was noted that five of the previous six summers in the United States were among the hottest on record, and a long-term global temperature record was presented to the U.S. Congress suggesting that a global mean warming had emerged above the background natural variation (Hansen 1981). In fact, most of the warming in this century occurred before 1940, when emissions of CO2 were much lower (see Seitz 1994 and Figure 8 below).

Uncertainty is central to the concept of risk. In this paper we outline an approach to risk analysis of global warming. During the course of the past several decades, the field of risk analysis has emerged as a useful means by which to structure and evaluate complex public policy decisions concerning human health and safety. As commonly construed, the notion of risk conjoins two basic ideas, namely, that of adverse consequences and that of uncertainty. Typically, risk analysts distinguish between risk assessment, on the one hand, and risk management, on the other. According to this distinction, risk assessment attempts to evaluate undesirable outcomes and to assign probabilities to their chance of occurrence, whereas risk management involves political decisions as to what can, or what should, be done to control, or otherwise mitigate, societal risks.

A risk analyst is different from a pure scientist: he brings an integrated approach to a problem. Not necessarily expert in any one aspect, he must nonetheless understand enough to describe all the linkages between different aspects, and he therefore plays an integrating role. He addresses a particular issue and brings to bear whatever information he can on the question at hand. Since his duty is to obtain an answer in a timely fashion, he must necessarily fill in gaps in knowledge with many assumptions about which there is much uncertainty.

As we have said, uncertainty is central to the concept of risk: if an event occurred with certainty (like an eclipse of the sun) we would not use the word risk. An evaluation and discussion of uncertainty is an essential part of a good risk analysis. Because of the necessity of communicating the uncertainty as best one can, the result of a good risk assessment cannot be a single number. An assessor can only give a probability distribution for a given outcome, and unless he conveys the full distribution to the risk manager, his work is incomplete.

The adjective "integrated" should hardly be necessary in discussions of risk assessment. However, the phrase "integrated assessment" has become politically fashionable. In our view, an integrated assessment, and even more an integrated analysis, should be done in such a way as to enable all aspects of a problem to be considered simultaneously. That does not mean that every detail should be considered. The skill of the risk analyst lies in showing the linkages between different parts of a particular problem, and in showing how they may be decoupled into discrete, separable modules that can receive individual attention.

Nor should an integrated assessment mean that the whole process should be incorporated into one gigantic computer program. Indeed, as is well known, an overemphasis on coupled computer programs too early in an assessment process can prevent careful thinking about important linkages. To this end a simple diagram and simple analytic approach can help ensure that the computer program is addressing the appropriate issues.

An integrated analysis, which includes the management step, is even more important. The importance of separating risk assessment and risk management was stressed by two committees of the National Academy of Sciences (National Research Council 1983). But this separation can go too far (Valverde 1992; Wilson and Clark 1991). One way the two are intertwined is in the understanding of how cautious the decision maker wants to be. Technically speaking, at what point on the probability distribution of the final answer of an assessment should a decision maker take action? It is important that the assessment contain enough information for the manager to decide this appropriately.

Sometimes it is convenient to distinguish between a horizontal integration, where all outcomes are considered simultaneously, and a vertical integration, where the whole procedure from population to outcome is considered. While, ideally, a full integration considers all of these facets at once, we stress in this paper the vertical integration, which we believe to be the most important.

This paper lays out our views on several important factors to be considered in an integrated risk analysis of global climate change. In Section 2, we lay out a simplified progression of cause and effect, showing how the problem can be (and almost always has been) decoupled into a series of separate steps. This simplification includes an assumption that, at any one time, the global climate is quasi-static, with a constant concentration of greenhouse gases. Section 3 discusses uncertainty, a central feature of any analysis of the potential risks of global climate change. Section 4 discusses how these risks might be compared with other risks in society. Our motivation here is to gain perspective on the process of an assessment, the meaning of an assessment, and the level at which society might wish to take action. Section 5 goes into more detail on each of the steps in the risk layout of Section 2, and goes further into the difficulties faced in assigning probabilities to the uncertainties that characterize the global climate change problem. Section 6 discusses the evaluation and illumination of the various policy issues. Section 7 discusses the twin problems of overconfidence and surprise in scientific inference and prediction, as well as the truncation of probability distributions by other (usually historical) data. The remaining sections discuss various limitations of the simple approach presented in Section 2. Section 8 discusses the problem of formulating sequential strategies for making global climate change policy decisions; this is closely coupled with the issue of the value of information in global climate change research. Section 9 discusses the balancing of costs and benefits against risk. Lastly, Section 10 discusses how global climate change introduces problems of interregional and intergenerational equity, and how such problems can be formally addressed in risk management decisions.


In all risk analyses, the crucial issue is what question, or set of questions, is being addressed. For climate change-related issues, we consider a set of questions encompassed by the single one: What are the expected impacts of global warming upon the world, and how can these impacts be reduced or modified?

There are various recommended procedures for carrying out risk assessments. The most generally accepted is that of the National Research Council (1983), but it is specialized to the risks of chemical carcinogens. Moreover, the sequence it proposes allows for no feedback from decision-makers on which risks to assess.

A more general approach, which includes such feedbacks, was put forth by Crouch and Wilson (1982). For the assessment of the risks of global warming, however, we use a general layout of the progression of the physical processes involved, stimulated by ideas originally put forth by Kates et al. (1985). This is shown in the central vertical line of Figure 1. We divide the sequence into steps that are approximately independent of each other. Between each box in Figure 1 is a factor that multiplies the number in the box to give the quantity in the next box. The final impact is the product of all of the factors, and is written in Equation (1). This diagram is simplified (but we hope not oversimplified) by discussing only CO2 as a greenhouse gas. This simplification is made because CO2 is the most important greenhouse gas that people can alter. Nevertheless, a full discussion must show other entries, such as methane, nitrogen oxides, and chlorofluorocarbons. The difficult scientific question of the role of water vapor, the most important greenhouse gas, is discussed later.

In addition to the main sequence running from top to bottom in the center of Figure 1, we show a few steps which we believe to be the most important in discussing the calculation of the factors in the boxes shown. They are enumerated by the numbers 1-6 referred to in the text.

To begin, the number in the first box is the world population; the second factor is energy production per capita; the third factor is the CO2 emission per unit of energy production, leading to total CO2 emissions; the fourth factor is the increase in atmospheric concentration of CO2 per unit emission; the fifth factor is the temperature rise per unit of concentration; the sixth is the environmental outcome per unit temperature rise. Multiplying these factors together leads to an estimate of the final outcome.

The relationship of Eq. (1) to Figure 1 comes from the following: the product of the first two factors is the world energy use; the product of factors one through three is the total of world CO2 emissions; the product of factors one through four is the average CO2 concentration, and so on. In writing Eq. (1), we explicitly assume that each factor is independent of all the others, which is approximately true.
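As a minimal numerical sketch of this bookkeeping, the short program below multiplies the six factors of Eq. (1) and prints the partial products that correspond to the boxes of Figure 1. The numerical values are rough, purely illustrative placeholders (not estimates from this paper or from IPCC), and the units are only approximately consistent.

```python
# Sketch of Eq. (1): the final outcome is the product of the six factors, and each
# partial product corresponds to a box on the central line of Figure 1.
# All numerical values are rough, illustrative placeholders.

factors = [
    ("world population [persons]",                              6.0e9),
    ("energy production per capita [GJ/yr per person]",         60.0),
    ("CO2 emission per unit energy [GtC per GJ]",               1.7e-11),
    ("concentration increase per unit emission [ppm per GtC]",  0.24),
    ("temperature rise per unit concentration [deg C per ppm]", 0.01),
    ("outcome per unit temperature rise [arbitrary units]",     1.0),
]

boxes = ["world population", "world energy use", "world CO2 emissions",
         "CO2 concentration increase", "temperature rise", "final outcome"]

running_product = 1.0
for (name, value), box in zip(factors, boxes):
    running_product *= value
    print(f"{box:28s}: {running_product:.3g}   (after multiplying by {name})")
```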

All calculations of global warming that we have seen follow this layout and formula to some extent, although some ask a more limited question and therefore follow only a part of the procedure. Thus, the Intergovernmental Panel on Climate Change (IPCC 1990a, 1990b, 1992) discusses various energy scenarios for the world in its Volume I (CHECK). This encompasses factors 1, 2, and 3. Factor 4 is the scientific discussion of the fate of CO2 in the environment. The main output of General Circulation Models (GCMs) is Factor 5, and it is here that the main scientific controversy lies. Finally, Factor 6 is the discussion of the impacts of global change.

Figure 1. The proposed causal framework for global climate change consists of three parts: climate change assessment, impact assessment, and risk management. Population and energy policy studies serve as inputs to climate change assessments. World population and energy consumption appear as endpoints in risk management decisions about climatic change.

It should be evident that the diagram (and the equation) should branch just before factor 6, to allow for different possible outcomes. Alternatively, several diagrams may be discussed, and the overall outcomes related to each other (perhaps by a cost per unit outcome) and summed. Later in this paper we discuss the work of Oerlemans (1989) on possible sea level rise, which addresses just this last factor. Sea level rise is the outcome that has most captured people's imagination, although the effects upon agriculture are usually considered to be the most important outcome (Crosson 1993, Bowes 1993).

Each factor in Figure 1 and Equation (1) has both recognized and unsuspected uncertainties. An important issue is what these uncertainties are and how to combine them to give the overall uncertainty in the final outcome. This is discussed further in Section 3.

We agree with the National Research Council (1983) that this assessment should be independent of the management decisions that follow it. In general, the assessor should restrict his advice to asking questions and giving the risk manager alternatives from which to choose. Once presented by the risk assessor with the estimated outcome and its uncertainties, a risk manager or set of managers (for example, the conference at Rio de Janeiro in the summer of 1992) must decide what, if anything, to do. The options are limited and are illustrated on the left hand side of Figure 1.

At the top is a line suggesting that we can modify world population by man's decisions (upward by reducing war, famine, and pestilence, or downward by birth control). The next component suggests that man may modify the energy use per capita (either up by increasing the global standard of living or down by increased efficiency of energy use). The third suggests that we can modify CO2 emissions per unit of energy, either up by environmental controls or by abandoning nuclear energy, or down by replacing fossil fuels (especially coal) with alternatives: nuclear, hydro, solar, etc.

Although it is intuitively attractive to create a sink for CO2, we do not draw a line in Figure 1 to modify the ratio of concentrations to emissions, because the consensus seems to be that this is not possible on the necessary scale. Nor do we draw a line suggesting a modification of the ratio of temperature rise to CO2 concentrations, because we know of no suggestion that it can be done. But we do draw a line suggesting a possible mitigation of the outcome given a temperature rise. If the outcome is defined generally (as, for example, the effect on GNP), we can modify this factor by adaptation - such as moving from South Dakota to North Dakota as the temperature goes up.


As noted above, each of the factors in Equation (1) and Figure 1 is uncertain. Here we distinguish between different types of uncertainty. To illustrate this, we draw upon experience from two other fields, namely, the study of risks to human health from nuclear power stations or chemical manufacturing plants, and the calculation of the risks of chemical carcinogens.

In a discussion of the risks of chemical carcinogens, Wilson, Crouch, and Zeise (1985) distinguished between stochastic uncertainties and uncertainties of fact. A statement that a cancer risk is 10^-6 (one in a million) per lifetime means that an individual is unlikely to develop cancer from a given exposure to a particular chemical, but one person in a million will. Thus, there is a "stochastic uncertainty" for a given individual. A different type of uncertainty is the uncertainty in the slope of a dose-response curve, which gives the projected number of cancers per unit exposure. As typically construed, this slope, regardless of its value, is the same for all exposed individuals. This type of uncertainty is sometimes referred to as "uncertainty of fact," to distinguish it from stochastic uncertainty.

The uncertainties that characterize the global climate change problem are both factual and stochastic in character. For example, the scatter of predictions for the sensitivity of the climate system to a doubling of CO2 is, in some ways, analogous to the uncertainty in the slope of a dose-response curve. Stochastic uncertainty arises at the end of the causal chain; without detailed information concerning regional impacts, it appears almost random how changes in CO2 would affect a particular community.

Further, it is clear that the uncertainties in the first few factors are different again. They are largely uncertainties in what society will do. Although these can often be addressed by analyzing past experience, people have a habit of surprising analysts. Indeed, one of the purposes of analyzing risks of global warming is to encourage people to behave in productive ways not predictable from past behavior.

Combining uncertainties in several factors is straightforward when these factors are independent. The effects of correlations between variables can often be ignored, particularly if the correlation coefficients or the uncertainties in the correlated variables are small. Furthermore, if the risk model includes both correlated and uncorrelated uncertain inputs, the uncorrelated inputs will moderate the effect of neglecting correlation (Smith et al. 1992). For simplicity, we shall neglect possible correlations between different factors in Eq.(1).

Schneider (1983) suggested that uncertainties be combined by considering each component of the overall CO2 problem as part of a cascading pyramid of uncertainties. Using the elements of Figure 1, Figure 2 illustrates one possible framework for this. Specifically, it depicts a "synthetic" probability tree for the first six elements of the global climate change problem presented in Figure 1 and Equation (1). For each factor, there are, of course, many possibilities.

For each vertex in Figure 2 there are three choices. This means that there are 3^6 = 729 possible scenarios, each with its own outcome. The general approach to evaluating even this simplified diagram of the risk of the final event would be to evaluate each of the 729 scenarios. Naturally, this is a formidable task which no one has undertaken. It resembles a distributional approach developed by Evans et al. (1994) for chemical carcinogens, which they call a "decision analysis" approach.
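To make the bookkeeping concrete, the sketch below enumerates such a tree mechanically; the branch values and weights are arbitrary placeholders, chosen only to show how the 3^6 = 729 scenarios and their probabilities could be generated and summarized.

```python
# Sketch of the "synthetic" probability tree of Figure 2: six factors with three
# branches each give 3**6 = 729 scenarios.  Branch multipliers and weights are
# arbitrary placeholders, not values used anywhere in this paper.
from itertools import product

branch_values = [(0.8, 1.0, 1.2)] * 6        # (low, central, high) for each factor
branch_probs  = [(0.25, 0.50, 0.25)] * 6     # weights, summing to 1 for each factor

scenarios = []
for values, probs in zip(product(*branch_values), product(*branch_probs)):
    outcome = 1.0
    weight = 1.0
    for v, p in zip(values, probs):
        outcome *= v
        weight *= p
    scenarios.append((outcome, weight))

print("number of scenarios :", len(scenarios))                       # 729
print("total probability   :", round(sum(w for _, w in scenarios), 6))
print("expected outcome    :", round(sum(o * w for o, w in scenarios), 4))
```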

Most assessors of global warming, however, evaluate the uncertainties by considering a continuum of choices for each factor, governed by a probability distribution. Following this line of reasoning, we suggest that the three typical choices in Figure 2 be the centroid of the distribution and two values, one on either side, of approximately equal probability. These might, for example, lie halfway down the sides of the probability distribution.

Then, assuming independence of each factor, the probability distributions can be combined. This is particularly simple if each distribution can be approximated by a lognormal one, in which case the final distribution is lognormal, with a logarithmic standard deviation given by the square root of the sum of squares of the individual logarithmic standard deviations. If the distributions are far from lognormal, Monte Carlo methods can be used to combine them.
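The sketch below illustrates both routes for a product of independent lognormal factors: the analytic quadrature rule for the logarithmic standard deviations and a Monte Carlo check. The medians and geometric standard deviations are hypothetical placeholders.

```python
# Sketch: combining independent lognormal factors analytically and by Monte Carlo.
# Medians and geometric standard deviations (GSDs) are hypothetical placeholders.
import numpy as np

medians = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
gsds    = np.array([1.3, 1.5, 1.2, 1.1, 1.6, 2.0])

# Analytic rule: for a product of independent lognormals, the logarithmic standard
# deviations add in quadrature.
log_sd_total = np.sqrt(np.sum(np.log(gsds) ** 2))
print(f"analytic: median = {np.prod(medians):.3g}, combined GSD = {np.exp(log_sd_total):.3g}")

# Monte Carlo check (also usable when the factor distributions are far from lognormal).
rng = np.random.default_rng(0)
samples = np.prod(
    rng.lognormal(mean=np.log(medians), sigma=np.log(gsds), size=(100_000, 6)), axis=1
)
print(f"Monte Carlo: median = {np.median(samples):.3g}, "
      f"GSD = {np.exp(np.std(np.log(samples))):.3g}")
```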

Mathematically, this procedure is similar to the event tree procedure for calculating the probability of a nuclear reactor (or other industrial plant) accident (Rasmussen 1985). In assessing the risks to human health and safety posed by complex, technical systems, it is recognized that failures of components can be treated in a statistical fashion, and if it is assumed that these events are independent, then the probability of a major accident can be estimated. The event tree procedure is more general, in that each scenario must be assigned a weight. The problem becomes much simpler if we assume independence of the probabilities at each node, and simpler still if we can approximate the probability distribution at the node by a lognormal distribution.

This procedure is also analogous to the procedure for calculating the risk of exposure to chemical carcinogens. In such analyses, carcinogenic potencies are measured in animals, and an uncertain interspecies factor is then used to predict carcinogenic potency in humans from the animal data. A third factor is the dose to which the person is exposed. Crouch and Wilson (1981) and Wilson, Crouch, and Zeise (1983) pointed out that these three factors are approximately independent of each other, and approximated them by lognormal distributions, which were then combined analytically. Recently, analysts of chemical risk have tended to combine these distributions by Monte Carlo calculation, even though independence is typically assumed.

In the United States, the approach that most regulatory bodies take towards uncertainty is very conservative, and does not always take into account better analytical methods when they are available. The EPA's approach to uncertainty propagation, for example, takes a conservative upper limit for each risk factor. The upper limits are then multiplied to arrive at a total risk level for regulation. Regardless of whether this procedure is used for final regulation, it obscures the understanding of the problem, in that it gives too little information to the risk manager.

We now consider in detail each of the factors in Figure 1 and Equation (1).

Factor 1

We may endeavor to reduce world population. Discussion of this dates back at least to Malthus (REF), who suggested that unless society did something, war, famine, and pestilence would take their toll. Indeed, in the 1990s, civil wars in Bosnia and Rwanda are reducing the population, but not (yet) on a global scale. Pestilence may be important, as the unchecked ravages of AIDS in Africa suggest. But these all have a common feature: mankind is doing its best to stop many of the processes which would otherwise reduce the population. Positive steps to reduce population are being taken in many countries, of which China seems to be the most successful.

Population studies is a fairly mature field, and predictions of world population over the next few decades are more reliable than the projections underlying factors 2 to 6. Although Shlyakhter and Kammen (1992) have shown that forecasters consistently overestimate the reliability of their projections, this does not affect the final conclusion very much. Calls for reducing world population are likely to have little effect on global warming in the next century. This will be discussed further in Section 5.

Factor 2

Various world energy projections have been made in the past. The uncertainty in these projections is mainly due to the uncertainty in energy use per capita and in CO2 emissions per unit energy, two factors which are not usually disaggregated. Shlyakhter et al. (1994a) have shown that these projections are much more uncertain than originally claimed. This is discussed further in Section 5.

Energy use per capita has been discussed by Hafele et al. (1982) and Goldemberg et al. (1988). Hafele et al. pointed out that as countries develop, the energy use per capita increases sharply, and even the energy use per unit of GNP rises; later in the development, however, the ratio of energy use to GNP falls as sophisticated, energy-efficient technologies are introduced. This raises the question of how much societal intervention to encourage energy efficiency in developing countries can accelerate this historical process. End use efficiency has been discussed by Goldemberg et al. (1988).

Economists argue, with considerable historical justification, that the principal effective way of encouraging energy efficiency is to increase the price of energy. The use of taxes or charges to reduce energy use per capita has been discussed by Nordhaus and Yohe (1983), Nordhaus (1991), and Jorgenson and Wilcoxen (1991). However, in at least one case (the mandatory automobile efficiency standards in the USA), compulsion has improved efficiency without a preceding tax, charge, or fuel price increase, although the price of automobiles increased.

Factor 3

The amount of CO2 emitted per unit of energy use is not constant, and can be changed by societal action. Some sources of energy (e.g., hydro, solar, nuclear) produce none. The amount of CO2 emitted also differs among the fossil fuels. The energy from coal comes solely from the conversion of carbon to carbon dioxide, whereas when natural gas (CH4) burns, both carbon and hydrogen contribute; in the burning process, natural gas therefore produces about half the amount of CO2 produced by coal, and oil produces an intermediate amount. In addition, natural gas is easier to use, so that 52% thermodynamic efficiency has been obtained in a combined cycle turbine, versus 42% for the best coal burners. But this gain can all be lost if any natural gas leaks anywhere in the cycle - from well to burner - because CH4 is a greenhouse gas about 30 times as important as CO2, molecule for molecule. A 3% leak in the system roughly doubles the greenhouse effect of the gas, and negates its advantage over coal.
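A back-of-the-envelope sketch of this leak arithmetic is given below. The per-molecule weighting of 30 and the factor-of-two CO2 advantage of gas over coal are taken from the text above; everything else (the function, its name, the leak rates tried) is purely illustrative.

```python
# Sketch of the methane-leak argument: each leaked CH4 molecule is weighted 30 times
# a CO2 molecule, and burning gas is credited with half the CO2 of coal per unit heat
# (both figures from the text above; the rest is illustrative).

def gas_to_coal_greenhouse_ratio(leak_fraction, ch4_weight=30.0, gas_co2_vs_coal=0.5):
    """Greenhouse burden of gas-fired heat relative to coal-fired heat.

    leak_fraction: fraction of gas that escapes unburned, well to burner
    (for small leaks, leaked molecules per molecule burned ~ leak_fraction).
    """
    co2_equivalent_per_molecule_burned = 1.0 + ch4_weight * leak_fraction
    return gas_co2_vs_coal * co2_equivalent_per_molecule_burned

for leak in (0.00, 0.01, 0.03, 0.05):
    ratio = gas_to_coal_greenhouse_ratio(leak)
    print(f"leak = {leak:4.0%}: gas/coal greenhouse ratio = {ratio:.2f}")
# At a 3% leak the CO2-equivalent burden of the gas roughly doubles (factor ~1.9),
# which erases the nominal factor-of-two advantage over coal.
```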

Wilson (1989), Starr (1990), and Bodansky (1990) have all pointed out that for electric power, improvements in end use efficiency (leading to a reduction in factor 2) and a choice of generating source are almost independent societal decisions. Changing the electric power station can continue to reduce CO2 emissions until the last fossil fuel power station is closed.

Nuclear energy expanded rapidly in the late 1970s, but around 1980 the expansion slowed, and no new nuclear power plant has been ordered in the USA since 1977 (that was not subsequently canceled). Yet enough coal-fired electrical generating plants have been built since 1975 to increase CO2 emissions by 5%. This illustrates the fact that utility decision makers do not put the possibility of global warming high in their considerations of which power plants to build.

The USA and Europe have installed hydroelectric plants at most reasonable sites, but China and Africa may have more development opportunities. Although there has been considerable political support in the last 20 years for expanding various forms of solar energy, this has not taken off; it is possible that many of the constraints affecting nuclear power also affect solar energy. Again, it is of interest whether the natural progression of energy use in developing countries outlined by Hafele et al. (1982) can be altered by help from developed countries. Assistance in developing solar ovens in Kenya (Kammen 1994), or in developing nuclear power in Asia, can be a useful step in reducing CO2 emissions.

Factor 4

Keeling et al. (1989) have measured CO2 concentrations over many years. If one naively assumes that all the CO2 emitted from fossil fuel burning stays in the atmosphere, then CO2 concentrations would be increasing at twice the rate that has been observed. This leads to a discussion of the carbon cycle, or carbon budget (Revelle and Munk 1977, Bacastow and Keeling 1991).

A critical scientific uncertainty is thus the environmental sinks for CO2. Terrestrial plants and soils comprise one potential sink and the ocean another. Although the deep oceans are effectively unlimited in the amount of carbon they can absorb, the rate of absorption is limited by chemical partitioning rates and by the transfer rates between the surface and the deep ocean. Fifteen years ago, it was thought that this absorption time constant was on the order of 800 years, governed by the transfer from surface to deep oceans. Conventional wisdom now puts the estimate at 200 years or less. Lindzen (1991, private communication) and Heimann (1991) have argued that the time constant for carbon absorption by the ocean and biosphere may be on the order of that for the exponential increase of emissions, i.e., approximately 50 years. The crucial question is, what will future CO2 concentrations be if we succeed in limiting the increase in emissions? The answer depends critically upon this time constant. Optimists argue that any intervention will result in only 50 years of high values; pessimists, on the other hand, argue that the time constant is 800 years. Although the uncertainty in factor 4 seems small, it becomes large when the whole is considered as a dynamic problem, and it becomes important for public policy decisions.

Factor 5

The core of the scientific debate on global warming is the temperature rise resulting from an increase in the concentration of greenhouse gases. These greenhouse gases include - in addition to carbon dioxide (CO2) - methane (CH4), nitrous oxide (N2O), freons, and, most importantly, water vapor. Calculations of the concentrations of these greenhouse gases (except water vapor) from known emissions are, for the most part, well understood. If temperature rise did not affect concentrations, as is approximately the case for these gases, then the calculation of temperature rise would also be well understood. The factor governing the effect of CO2 is not, however, constant; there is a non-linearity, with a smaller factor at high concentrations. But, as anyone can see by looking upward at the world's clouds, the concentration of the most important greenhouse gas, namely water vapor, varies rapidly over space and time, and arises from feedback mechanisms that are less well understood.

A key scientific uncertainty in the global climate change problem therefore lies in evaluating the temperature increase ΔT per unit increase in atmospheric CO2. This is usually done on the basis of general circulation models (GCMs), which calculate the global mean temperature increase that follows from an increase in CO2 concentrations that is maintained at a constant level over a long period of time (Houghton et al., 1990; NAS, 1991). This is sometimes called an "equilibrium response" to a static, or quasi-static, doubling of CO2. In reality, the process is dynamic in character, with CO2 concentrations rising steadily. The temperature rise is not given immediately by a simple factor multiplying the CO2 concentrations, but rather lags, due to the coupling of various heat sinks (shown as boxes to the right of the temperature increase box in Figure 1). Nevertheless, it is conventional to consider first the simpler problem of estimating the temperature rise that would result from an equilibrium situation after CO2 has doubled, or after all emitted gases have produced an equivalent radiative absorption. The lag of temperature rise is large enough that, at the time that CO2 doubling is reached, only a 0.5°C - 1.0°C temperature rise is expected, not the equilibrium rise of 2.5°C (Cubasch 1992, 1993).

If there were no change in the concentration of water vapor (such as would be the case if the Earth were dry), the global-mean surface temperature would increase by Td = 1.2°C for a static doubling of CO2, and this estimate is quite reliable. But concentrations of water vapor are expected to increase with increasing temperature, and since water vapor is the most important infrared absorber (greenhouse gas), this could amplify the warming. Numerous interactive feedbacks from water (most importantly, water vapor, snow-ice albedo, and clouds) introduce considerable uncertainties into the estimates of the mean surface temperature rise, Ts. The value of Ts is roughly related to Td by the formula Ts = Td/(1 - f), where f denotes the sum of all feedbacks. The water vapor feedback is relatively simple: a warmer atmosphere contains more water vapor, which is itself a greenhouse gas. This results in a positive feedback, as an increase in one greenhouse gas, CO2, induces an increase in another greenhouse gas, water vapor. Cloud feedback, on the other hand, is the difference between the warming caused by reduced emission of infrared radiation from the Earth into outer space and the cooling caused by reduced absorption of solar radiation; the net effect is determined by cloud amount, altitude, and cloud water content. As a result, the values of Ts from different models vary from Ts = 1.9°C to Ts = 5.2°C (Cubasch and Cess 1990). Typical values for these parameters are Td = 1.2°C and f = 0.7, so that Ts = 4°C. It is important to note that some feedbacks involving water vapor may not yet have been identified.
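The strong sensitivity of Ts to the feedback sum f can be seen from the simple relation quoted above; the short sketch below evaluates Ts = Td/(1 - f) for several values of f (Td = 1.2°C and the endpoint values of f are from the text, the intermediate values are merely illustrative).

```python
# Sketch of the feedback relation Ts = Td / (1 - f) quoted in the text,
# with Td = 1.2 deg C for a dry CO2 doubling and f the sum of all feedbacks.

T_d = 1.2  # deg C, no-feedback warming for a CO2 doubling (from the text)

def equilibrium_warming(f, T_d=T_d):
    """Equilibrium surface warming Ts for a total feedback f < 1."""
    return T_d / (1.0 - f)

for f in (0.0, 0.37, 0.5, 0.7, 0.77, 0.9):
    print(f"f = {f:4.2f}  ->  Ts = {equilibrium_warming(f):5.2f} deg C")
# Ts grows very rapidly as f approaches 1, which is why modest uncertainties in f
# translate into large uncertainties in Ts.
```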

Note that two models with comparable Ts values can assign different strengths to the various feedback mechanisms. For example, two models (labelled GFDL and GISS) show unequal temperature increases as clouds are included (from 1.7°C and 2.0°C to 2.0°C and 3.2°C, respectively). The effects of ice albedo also differ between the models, but in the opposite sense, so that the results converge (4.0°C versus 4.2°C, respectively). Agreement between models may therefore be spurious, and both could be wrong. In addition, most experts believe that even small increases in the value of f could result in a runaway warming not estimated by any of the models, leading, ultimately, to a different stable (or quasi-stable) state of the Earth's climate (Stone, 1993).

The outputs of the models used by Houghton et al. (1990) and NAS (1991) were given as "bounds" on the global temperature rise Ts. The committee of the National Academy of Sciences explicitly declined to fit a distribution, on the grounds that it might be taken too seriously. Here we go further and justify our boldness in quantifying our conclusions. We consider these bounds as extreme values of a probability distribution of the global temperature rise ΔT. We plot both lognormal and normal fits to the IPCC and NAS limits in Figure 3. This type of simple calculation leads to statements such as that made by Dickinson that there is a one-percent chance of ΔT being above 5°C. We discuss the possible meaning of this below.
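The sketch below shows the kind of calculation involved: quoted lower and upper bounds on the warming are treated as percentiles of a normal or lognormal distribution, from which a tail probability can be read off. The bounds used here (1.5°C and 4.5°C) and their interpretation as 5th and 95th percentiles are illustrative assumptions for the sketch only, not values endorsed by IPCC or NAS.

```python
# Sketch, in the spirit of Figure 3: fit normal and lognormal distributions to an
# assumed range of warming values and compute the probability of exceeding 5 deg C.
# The bounds and the percentile interpretation are illustrative assumptions.
from math import log, sqrt, erf

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

low, high = 1.5, 4.5        # assumed bounds on the warming [deg C]
z95 = 1.6449                # standard normal 95th-percentile quantile
# Assumption: treat the bounds as the 5th and 95th percentiles of the fitted distribution.

# Normal fit
mu_n, sigma_n = 0.5 * (low + high), (high - low) / (2 * z95)
p_normal = 1.0 - normal_cdf((5.0 - mu_n) / sigma_n)

# Lognormal fit (same percentiles on a log scale)
mu_l, sigma_l = 0.5 * (log(low) + log(high)), (log(high) - log(low)) / (2 * z95)
p_lognormal = 1.0 - normal_cdf((log(5.0) - mu_l) / sigma_l)

print(f"P(warming > 5 C), normal fit   : {p_normal:.3f}")
print(f"P(warming > 5 C), lognormal fit: {p_lognormal:.3f}")
```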

In our proposed framework, the probability distributions of the different factors in Eq. (1) serve a very clear purpose: to produce the overall estimate of uncertainty. We hope that we have explained this well enough, and added enough qualifiers, that misuse is less likely.

In this connection it is important to realize that the probabilities given here include no contribution from the probability that the whole global climate model is wrong. We also note that the bounds given by IPCC and NAS are not rigorously derived from a mathematical model, but rather represent, in an ill-defined way, the expert judgement of the committee members involved.

Factor 6

The present best estimate for the sea level rise in the IPCC "Business-as-Usual" scenario is 66 cm by the year 2100. It is based upon the work of Oerlemans (1989), who calculated the sea-level rise per unit temperature rise, Δh/ΔT, using a simple fit to the temperature rise predicted by the specific "Business-as-Usual" scenario (Houghton 1990). This model assumes a simple extrapolation from past behavior for the emission of CO2: ΔT = α(t - 1850)^3, where t is time, α = 27 x 10^-8 K yr^-3, and the uncertainty is 35% of the mean for each variable. In terms of Figure 2, Oerlemans started at the fifth level of the tree (from the top). He used the right branch to describe the CO2 emissions (Business-as-Usual) and the middle (best estimate) values for the lower levels of the tree.

Further, he evaluated the uncertainty in the calculated sea-level rise by combining in quadrature the uncertainties in the individual contributions: σ^2 = σ^2_glac + σ^2_ant + σ^2_green + σ^2_wais + σ^2_exp + σ^2_int, where the subscripts refer, respectively, to the effects of glaciers, the Antarctic, Greenland, and West Antarctic ice sheets, thermal expansion of sea water, and internal variability. We are dubious about Oerlemans' assumption of independence of these factors, so the uncertainty in sea level rise might be greater than he calculates. Moreover, we address further in Section 5 the effect of overconfidence in predictions. This observation can only strengthen the fundamental conclusion that the uncertainty in Δh/ΔT is greater than all other uncertainties about sea level rise, and that Δh might even be negative. However, the level of rise is far less than the extremes suggested some 20 years ago. If one attempts to put a cost on the height increase, one finds it is less than the cost of typical dikes in the Netherlands. The cost can then be handled in a similar way.
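The structure of this error propagation is easy to make explicit, as in the sketch below; the component contributions and their one-sigma uncertainties are invented placeholders (they are not Oerlemans' numbers), and only the quadrature rule under the independence assumption follows the text.

```python
# Sketch of Oerlemans-style error propagation for sea-level rise: component
# contributions add, and their uncertainties combine in quadrature if independent.
# The numbers below are invented placeholders, not Oerlemans' values.
from math import sqrt

# (contribution, central estimate [cm], one-sigma uncertainty [cm])
components = [
    ("thermal expansion",          30.0, 10.0),
    ("glaciers",                   20.0,  8.0),
    ("Greenland ice sheet",        10.0,  6.0),
    ("Antarctic ice sheet",        -5.0, 10.0),
    ("West Antarctic ice sheet",    5.0, 10.0),
    ("internal variability",        0.0,  5.0),
]

total = sum(c for _, c, _ in components)
sigma = sqrt(sum(s ** 2 for _, _, s in components))

print(f"central sea-level rise : {total:.0f} cm")
print(f"combined 1-sigma error : {sigma:.0f} cm (independence assumed)")
print(f"rough 90% range        : {total - 1.64 * sigma:.0f} to {total + 1.64 * sigma:.0f} cm")
```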

Combining uncertainties is simplified when they are very different in size. For example, the distribution of predictions of the 21 GCMs shown in Figure 8 has a mean of 3.7°C and a standard error of 0.9°C, so that the relative uncertainty is about 0.24, and its square is just 0.06 when added in quadrature to the relative uncertainty in the sea-level rise, which is close to one. If the relative uncertainties in population projections and energy consumption per capita are on the order of thirty percent, their contribution in quadrature will increase the total relative uncertainty from the 100% of the sea-level rise alone to about 125%.

We discuss sea level rise because it is the most dramatic potential effect of global warming, and because it is the effect that has been most extensively studied. But the effect on agriculture is often considered to be the most serious and costly (REF). We must also include the possibility that the effects on agriculture are beneficial for some regions - such as Siberia. We will not go into all these effects here. At his lectures on global warming, Nordhaus asks the audience for their estimates of the effect on GNP of a doubling of CO2. The result of this highly informal expert survey is that the effect is on the order of a few percent. But this must be understood as the mean of a distribution, and the extremes of the distribution may be considerable.

Although we have assumed independence, the physical stresses that global climate change places on the environment have the potential to compound synergistically. For example, storms whose strength may be increased by regional warming (Emanuel 1987) may also have a reach and severity that are increased by a rising mean sea level. Ecosystems are also at risk. For example, Bazzaz (1990) and Bazzaz and Fajer (1992) have studied the combined effects of rising concentrations of CO2, rising temperatures, and increased ultraviolet radiation on plants and ecosystems. Their findings suggest that these factors can give some species distinct advantages over others. For instance, most weeds are more resilient to stresses than are cultivated plants. One possible remedy would be to increase the production of pesticides. This, however, would likely lead to increased energy use, as well as to additional health risks.

These synergistic effects of temperature rise and increased CO2 concentration do not invalidate the concept of calculations derived using independence, but they do form an exception that must be evaluated separately. The situation is analogous to the deviations from independence in reactor safety calculations due to common mode failures. Rasmussen (Atomic Energy Commission 1985) set up a procedure for analyzing nuclear reactor accidents by constructing an "event tree" which follows the progression of a nuclear power accident from initiating event to ultimate consequence. He then calculated the probability at each step and assumed that each step is independent of the previous one. But sometimes several events occur simultaneously, or several pieces of equipment fail simultaneously. The overall usefulness of the procedure (now called Probabilistic Risk Assessment, or PRA) is not invalidated by the existence of these "common mode failures"; on the contrary, the procedure has proven to be an excellent way of uncovering the common modes.


Before proceeding further in our discussion of the various issues raised in Sections 1, 2 and 3, we make a comparison with various other societal hazards. This comparison, if made well, can aid the risk manager in his decisions. However, the comparisons should, in our view, be used primarily to ask questions of society and its decision makers. In a comprehensive and well-publicized study, a team of 75 experts assembled by the U.S. Environmental Protection Agency compared the potential impacts of 31 environmental problems on economic welfare, human health, and ecosystems (EPA 1987, 1990). Four types of risk were considered in this study: cancer risks, non-cancer health risks, ecological risks, and welfare risks. No attempt was made to combine expert rankings across these risk types. Figure 5 illustrates the relative ranking of the six aggregated hazards in terms of their potential impact on welfare and on ecological systems. The experts agreed that climate change, together with ozone depletion, gives rise to the highest expected ecological impacts but only a moderate economic impact. On the other hand, the expected ecological impacts of air and water pollution are moderate, while the expected welfare effects are high. Although the EPA did not rank the potential human health risks of global climate change, it did rank the risk of ozone depletion as "medium."

Once an assessment has been made that gives the probability distribution of possible outcomes, it must be communicated to whoever makes decisions about options for mitigation. What form this information should take, however, is a matter of some debate. For example, it remains an open question whether policymakers are best served by "best guess" scenarios for population growth, energy production, and temperature increase, or by probability distributions of these estimates. In this paper we use the latter approach. It has frequently been pointed out that different decisions will require different degrees of caution; in the analytical terms of a risk assessor, different decisions will require addressing different percentiles of the final probability distribution of outcomes. Policy analysts and decision-makers can then draw distinctions between those scenarios that are probable and those that are possible but are, nonetheless, thought to be extremely unlikely. Those risks that fall below a particular threshold of probability - and are thereby ignored by a particular group or society - are called de minimis risks. How societies and governments decide what constitutes a de minimis risk in particular situations or contexts is largely a matter of political judgement. For our purposes here, this problem becomes a matter of answering the question, "How improbable is improbable enough?"

Again, an analogy can be made with nuclear reactor safety. The old Atomic Energy Commission defined a set of "maximum credible accidents" and demanded that reactor designers introduce safety systems to prevent these accidents from having untoward consequences. But there remains a probability (hopefully small) of the safety systems failing simultaneously with the accident. The estimation of these individual probabilities and their combinations became the core of probabilistic risk assessment (Atomic Energy Commission 1985).

It is therefore crucial to determine the specific questions that the risk analysis is intended to illuminate. In what follows we ask the question, "At what probability of a serious effect should society take action?" Is there, in other words, a de minimis risk? Although there is no clear definition of a de minimis risk, it is closely akin to a related concept, namely, the probability of "surprise." Although many conceptions and definitions are possible, our use of the word surprise is meant to denote those situations where the true value of a particular uncertain parameter, e.g., the slope of a dose-response curve, the climate sensitivity to CO2 doubling, etc., turns out to lie at least 2.6 standard deviations away from its current "best guess" value. As is well known, for a random variable that is assumed to be normally distributed, the probability that the "true" value is more than 2.6 standard deviations from the current "best guess" is just 1%.

For the purposes of reasoned policy making, it is important to give some consideration to how the issue of risk perception enters into this operational approach to defining de minimis risk. Clearly, how we perceive and respond to societal risk influences, to a large extent, what is ultimately deemed a "surprise." It is, nevertheless, difficult to say how policymakers should reconcile - cognitively or otherwise - systematic accounts of risk with their own mental models. Such concerns lie at the foundations of human rationality, and one does not have to probe the depths of such arcane considerations to recognize that reasoned policymaking requires that uncertainty (and all of its attendant issues and concerns) be considered in the broader context of socio-political goals and aspirations. A telling example of this dilemma can be gleaned from the comment by Dickinson (1986) noted earlier. In that study, Dickinson used a lognormal fit of various estimates of global warming to estimate that there is a 5% chance that an increase in greenhouse gas concentrations would, by the year 2100, lead to a temperature rise of 10°C. If this outcome were to materialize, it would almost surely come as a "surprise" to many people, because it would give rise to adverse consequences that they had largely failed to anticipate. It is interesting to note, however, that public opinion polls suggest that many people are unconcerned about a 5% chance of a climate-related catastrophe within their lifetime, although they are concerned about a 1% chance of a nuclear accident.

We also note that an airliner with a calculated chance of failure far lower than 5% in its 70-year life would not be allowed to fly in commercial service. Of course, there are, as yet, no simple answers for why people differ in their perceptions of and reactions to risk. Nevertheless, if the nature of the uncertainties that underlie problems such as global climate change is not clearly articulated and understood, then confusion may arise even among the best experts. For example, Clark (1989), referring to Dickinson's equation, notes that the chance that the world of 2100 will have witnessed a single nuclear power catastrophe is anywhere from 10 to 100 times less than the chance that everyone in the world will be living in the Mesozoic greenhouse. He concludes that "this assessment jars common sense, which is exactly why we need to reexamine the assessment methods and philosophies that produced it as an urgent task of understanding global environmental change."

For chemical carcinogens, it is common to discuss a risk to an individual of 10^-6 in a lifetime of 70 years. This is a far smaller number than the probabilities of a huge temperature rise and catastrophic effect in the next 70 years. Following Clark, we ask whether this means that EPA is too conservative in taking this small number for chemical carcinogens, too optimistic about global warming, or whether the comparison is simply invalid.


A clear message from the history of science is that unexpected uncertainties, or surprises, are quite common, and that new results are often far away from previous values. In interpreting the predictions of climate change models, scientists recognize this, and often recommend cautious action accordingly. We note that recent work shows how this can be addressed more formally. The long record of measurements of physical constants (such as the masses of elementary particles) prompted several early studies of the temporal evolution of uncertainty (Hampel et al., 1986; Henrion and Fischhoff 1986). Shlyakhter et al. (1992, 1993b, 1994a,b) expanded upon these original studies by examining trends in several data sets derived from nuclear physics, environmental measurements, and energy and population projections.

Over a period of 20 years, the measurements improve sufficiently that we may consider the present measurements as the "truth," and the earlier measurements as mere approximations thereto. Then, we can ask whether the old measurements obtained the correct result to within the stated error. Similarly, stated uncertainties in the old projections of such driving forces of climate change as population growth and energy consumption can be compared with the actual errors after the target date has passed.

More precisely, we can derive the ratio x of the subsequently determined error in the old measurement to the author's stated estimate of error: x = (a - A)/σ, where a is the new value, A is the old measured value, and σ is the old standard deviation. For projections, we use the range between the reference (central) projected value and the lower (or upper) projected value as a substitute for the standard deviation of the equivalent normal distribution. This corresponds to assigning 68% confidence to the reported uncertainty range. If the errors were random, one would expect the distribution of x to be normal (Gaussian), with a standard deviation of unity. In fact, there are large deviations from the Gaussian distribution in the tails. The cumulative distribution of x can be approximately described at large x by a compound distribution in which both the mean value and the standard deviation follow the normal distribution. At large values of x, this compound distribution is described by the exponential function exp(-x/u), where u is a new parameter describing the frequency of unsuspected errors; larger values of u correspond to more common underestimation of uncertainties. Using statistical analysis of past errors, one can develop safety factors for current models (Shlyakhter 1994b). Fits to physical measurements give u = 1; fits to national population projections (Shlyakhter and Kammen 1992, 1993a) give u = 3; fits to a set of U.S. energy projections give u = 3.4 (Shlyakhter et al. 1994a). These are illustrated in Figures 6 and 7.
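The sketch below shows the procedure on invented data: actual errors of old measurements or projections are expressed in units of the stated standard deviation, and the empirical tail is compared with the normal tail and with the heavy-tailed exp(-x/u) form. The data values and the choice u = 3 are placeholders for illustration only.

```python
# Sketch of the overconfidence measure x = |a - A| / sigma: the actual error of an old
# measurement or projection (old value A, stated sigma) relative to the new value a,
# here treated as the "truth".  All data values below are invented placeholders.
import numpy as np
from math import erfc, sqrt, exp

old_values = np.array([10.2,  5.1, 0.31, 47.0, 120.0])   # A, hypothetical
old_sigmas = np.array([ 0.4,  0.2, 0.05,  3.0,   5.0])   # stated sigma, hypothetical
new_values = np.array([10.9,  5.0, 0.45, 46.0, 131.0])   # a, hypothetical "truth"

x = np.abs(new_values - old_values) / old_sigmas
print("x values:", np.round(x, 2))

u = 3.0  # overconfidence parameter: u = 1 was found for physics data, ~3 for projections
for t in (1.0, 2.0, 3.0):
    p_normal    = erfc(t / sqrt(2.0))     # two-sided normal tail probability
    p_heavytail = exp(-t / u)             # exp(-x/u) model for unsuspected errors
    p_empirical = float(np.mean(x > t))
    print(f"P(x > {t:.0f}): normal {p_normal:.3f}, exp(u={u:.0f}) {p_heavytail:.3f}, "
          f"empirical {p_empirical:.2f}")
```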

Figure 5. Probability of unexpected results in physical measurements. The plots show the cumulative probability that new measurements (a) will be at least x standard deviations (σ) away from the old results (A): particle data (heavy solid line), magnetic moments of excited nuclear states (dotted line), and neutron scattering lengths (heavy dashed line). Also plotted are the cumulative normal distribution, erfc(x/√2) (thin solid line with markers), and the exponential distribution with parameter u = 1 (solid line).

As Figure 5 illustrates, the normal distribution underestimates the probability of large deviations: instead of the predicted 5% chance of x>2, there is a 20% chance. Empirical probability distributions suggest that there is a 5% chance of x>4, while the normal distribution yields a chance that is about 700 times less. A better fit to the data at large values of x is obtained with an exponential distribution, which is a straight line on the semi-logarithmic graph of the cumulative probability versus the number of standard deviations.

The results for population projections are shown in Figure 6. Because all of the estimates come from an authoritative source, namely, the United Nations, it might be expected that systematic errors would be small, representing a well-calibrated model. However, the unsuspected uncertainty is very large.

The issue of importance to this paper is the application of this concept to the predictions of global warming and its effects. Are the predictions as reliable as measurements of physical constants (u = 1), more reliable (u = 0), or less reliable (u = 3)? Only after this question is addressed can we properly address Dickinson's question of how seriously to consider an extreme temperature rise.

One can view the collection of all Ts predictions from the set of available general circulation models as a random sample derived from the population of all possible models. We do not know whether current models cover all possible values of Ts. We assume that, with probability β, the true value lies within the range of reported values. Let us assume that β = 99%; the standard deviation of the equivalent normal distribution is then 5.15 times smaller than the reported range. Note that in estimating u values for measurements and projections, we used β = 68% for the reported uncertainty range. We therefore assume that the collection of current climate models almost certainly covers the true value of Ts. Had we assumed β = 99% for the old forecasts, the derived standard deviations would be smaller, and all x values would be larger. The resulting u values and the corresponding inflation factors would also be larger than the ones actually used.

Figure 6. Population projections (Shlyakhter and Kammen, 1992, 1993a). The plots depict the cumulative probability, S(x) = ∫_x^∞ p(t) dt, that true values (T) will be at least x standard deviations (σ) away from the reference value of old projections (R).

Since Ts is determined by the value of the sum of all feedbacks, f, we convert the range of Ts values into a range of f values. For example, Ts = 1.9°C gives f = 0.37 and Ts = 5.2°C gives f = 0.77. This range of f values can be used to estimate the standard deviation of the equivalent normal distribution in the same way as for the population and energy projections above.
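The conversion is a one-line application of the feedback relation quoted earlier, as the sketch below shows; the 99% coverage assumption (and hence the factor 5.15) follows the text, while the use of the mid-point as the mean is an illustrative simplification.

```python
# Sketch: map the reported range of Ts onto f via f = 1 - Td/Ts (Td = 1.2 deg C),
# then convert the width of the f range into an equivalent normal standard deviation
# under the assumption that the range covers the true value with probability 99%.
T_d = 1.2
Ts_low, Ts_high = 1.9, 5.2            # model range from the text [deg C]

f_low  = 1.0 - T_d / Ts_low           # ~0.37
f_high = 1.0 - T_d / Ts_high          # ~0.77

# For 99% two-sided coverage, the full range spans about 2 * 2.576 = 5.15 sigma.
sigma_f = (f_high - f_low) / 5.15
f_mid = 0.5 * (f_low + f_high)        # illustrative choice of central value
print(f"f range {f_low:.2f}-{f_high:.2f}: equivalent normal mean {f_mid:.2f}, sigma {sigma_f:.3f}")
```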

The corresponding distribution of Ts values is shown in Figure 8, together with the exponential distribution for u = 1 and the distribution of Ts from 21 general circulation models (Houghton et al., 1990). By using the exponential distribution with u = 1, we assume that the fraction of unsuspected errors in climate models is similar to the fraction of unsuspected errors in physical measurements. With the normal distribution, there is a 1% chance that the true value of Ts exceeds 5°C, while with the exponential distribution, this same probability corresponds to a catastrophic increase of more than 10°C. In a simple feedback description, f approaching 1 would result in a catastrophic runaway warming. Although the true picture will be much more complex, and negative feedbacks will ultimately limit the warming, the possibility of a CO2 atmospheric buildup that could lead to runaway warming and, ultimately, to a switch to a different climate equilibrium must be avoided by all means. In our view, any policy decisions should be based on the exponential as the "default" distribution, rather than on the normal distribution. Policy decisions based on more optimistic views of the future temperature rise would require further justification. This view, however, is in contradiction to the view held, for example, by Lindzen (1991), that the effects of global warming should not be considered any more seriously than any other effect not yet explicitly demonstrated.
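The contrast between the two tail assumptions can be sketched as below; the mean and standard deviation are illustrative values only, so the numerical crossover points differ somewhat from the figures quoted above, but the qualitative point (the exponential tail assigns non-negligible probability to far larger warmings) is the same.

```python
# Sketch contrasting normal and exponential (u = 1) tail probabilities for the warming Ts.
# The mean and sigma below are illustrative, chosen so the normal tail at 5 deg C is ~1%.
from math import erfc, sqrt, exp

mean_Ts, sigma_Ts = 3.0, 0.86   # deg C, illustrative values only

def p_exceed_normal(T):
    return 0.5 * erfc((T - mean_Ts) / (sigma_Ts * sqrt(2.0)))

def p_exceed_exponential(T, u=1.0):
    # heavy-tailed model: P ~ exp(-x/u), with x the excess over the mean in sigma units
    x = (T - mean_Ts) / sigma_Ts
    return exp(-x / u)

for T in (5.0, 7.0, 10.0):
    print(f"P(Ts > {T:4.1f} C): normal {p_exceed_normal(T):.1e}, "
          f"exponential (u=1) {p_exceed_exponential(T):.1e}")
```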

Figure 8. Observed global-mean temperature changes (smoothed to show the decadal and longer time-scale trends more clearly) compared with predicted values for several values of Ts (shown on the curves). This figure was adapted from Figure 8.1 in Wigley and Barnett (1990).

Using the observed climate trends, the distribution may be truncated by bringing in information not considered by the general circulation models. This is illustrated in Figure 8, where the observed global-mean temperature changes during the last century are compared with predicted values (Wigley and Barnett 1990). The climate sensitivity estimated from such a comparison is unlikely to exceed Ts = 5°C.

The distributional uncertainty in each of the factors in Eq. (1) has been shown to have longer tails than the lognormal distribution. Does that mean that the simple formula for combining lognormal distributions is inapplicable? This is a tricky logical issue, which we only touch upon here. Since the initial uncertainty estimates in measurements and forecasts were, in fact, estimates of the combined uncertainties from several factors, we believe that it would be inappropriate to combine the individual exponential distributions. Instead, one should combine the uncertainties in the individual factors in Eq. (1). After the combined uncertainty for the impact of interest is evaluated, one can then hedge against unsuspected errors by assuming an exponential distribution (or a log-exponential distribution, if lognormal distributions were used for each factor) instead of a normal distribution.

Figure 9. Projections of sea-level rise for 2050 A.D. and 2100 A.D. The probability of a sea-level rise greater than a given threshold is plotted for the normal distribution and for the exponential distribution. Note that a fall in sea-level is also possible.

We can also ask how the procedure can be adapted to sea-level rise. If we assume an exponential fit to Oerlemans' stated limits, does this mean that there is a 1% chance of a 2.5 meter sea-level rise by the year 2100? Most observers would say that this is impossible. This and similar judgments about temperature rise are based not on model parameters, but rather on historical experience; the neglect of historical experience is the second deficiency of simply drawing probability distributions. The constraints imposed by historical experience must themselves be applied through the model. Imposing this constraint is not simple, however, because the model predictions being compared are for the artificial equilibrium world of constant CO2 concentrations, and there are delays in translating concentrations into temperature rises. Nonetheless, it is in applying this historical constraint that we may be able to show that the probability of the extreme event is smaller than typically calculated. For example, the historical experience described by Figure 9 does not seem to permit a rise of more than .1, and may even indicate that the climate system is close to equilibrium.

A similar issue arises in the models used to calculate cancers from chemical carcinogens. The uncertainty, particularly in the animal-to-human comparison, is great, so that if one is interested in the upper 95th percentile of a distribution, an unreasonably large number of cancers is sometimes predicted. This was emphasized by Ennever et al. (1987) and discussed further by Goodman and Wilson (1991). The probability distribution for the number of cancers must be truncated at a value corresponding to the maximum number that could exist without having been observed.

We have discussed the issues presented in this section with many scientists more expert on climate change than ourselves. As the above analysis suggests, the distribution of opinion about the predicted temperature rise is wider than that suggested by IPCC: some say that the temperature rise per doubling of CO2 is close to zero; others would put it at three times the IPCC value. This is being addressed more formally by Morgan and Keith (1993).


The simplified model of Section 2 is a static model, whereby the risk is estimated, with its uncertainty, at one period of time, and the decision on what, if anything, to do is made essentially simultaneously with the assessment. But real life is not that simple. In an important subject such as global change, decisions on measures to avert or mitigate the effects will be made frequently over time. The assessment must then be an iterative process.

Any reasonable analysis of the potential risks of global climate change must address the concern, shared by many scientists, that once the effects of global warming appear above some agreed upon "noise" level, the CO2 atmospheric concentration will be so far advanced that the effects of warming will take centuries to reverse, if they can be reversed at all. It is here that the time constant for coupling to the deep oceans is critical. If this time constant is indeed large, then it seems reasonable to suppose that action to prevent increased CO2 concentration should be taken before the effect of this increase is conclusively verified.

The crucial question that must be continuously addressed therefore becomes "What, if any, actions should be taken about global warming when the effects have not yet demonstrated themselves unequivocally?"

The first decisions might be to take those actions whose cost is not high. In this context, a much publicized article described the minimal agreement of three distinguished scientists with widely different views (Singer et al. 1990). This minimal set of actions seems very reasonable; yet, even these actions are not now being undertaken in the USA! In particular, all three of these scientists have, in the past, advocated expansion of nuclear power, whereas the present U.S. administration is ignoring it.

One cannot, of course, expect that the administration (the risk managers) will accept all the recommendations of scientists -- even unanimous ones. In particular, support of nuclear power might well be regarded by some members of the public as too draconian a step at this time.

Consensus on a more draconian set of actions might be achieved if the uncertainties in the assessment of outcomes were reduced. But, a decade-long time scale is anticipated for narrowing the uncertainties in predictions of the rate of climatic change through improved coupled atmosphere-ocean models (McBean and McCarthy, 1990).

Manne and Richels (1991) developed an "act then learn" strategy for decision making in the energy sector. In their approach, decisions are made at discrete points in time (every decade), and the value of new information depends on changes in the probabilities assigned to each scenario before and after a study. If the probabilities of the three scenarios remain equal, then the study's value is zero; if, on the other hand, the study allows a single scenario to be selected, it can be worth as much as $100 billion. Critics of such a strategy emphasize that it is sensible only if something is done in the time between the discrete assessments. Thus, the question always raised by expert decision analysts is "What will you do with the available time?" If the answer is nothing, there is no merit in postponing the decision.
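The logic can be made concrete with a small decision-analytic sketch (ours, with made-up payoff numbers, not those of Manne and Richels): the value of a perfectly informative study is the expected saving from being able to tailor the policy to whichever scenario the study identifies, and it vanishes if the study leaves the scenario probabilities unchanged.

    # "Act then learn" sketch: expected value of a perfectly informative study.
    # costs[policy][scenario] = total discounted cost (arbitrary units, made up)
    costs = {
        "aggressive abatement": [120, 110, 100],
        "moderate abatement":   [150, 100,  90],
        "wait and see":         [220, 120,  60],
    }
    priors = [1.0 / 3, 1.0 / 3, 1.0 / 3]      # equal scenario probabilities before the study

    def expected_cost(policy, probs):
        return sum(p * c for p, c in zip(probs, costs[policy]))

    best_now = min(costs, key=lambda pol: expected_cost(pol, priors))

    # With perfect information, the cheapest policy is chosen scenario by scenario.
    cost_with_study = sum(p * min(costs[pol][s] for pol in costs)
                          for s, p in enumerate(priors))
    value_of_study = expected_cost(best_now, priors) - cost_with_study
    print("best policy before the study:", best_now)
    print("expected value of the study: %.1f (same units as the costs)" % value_of_study)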

Formal analyses of the value of future information usually underestimate that value in two ways. First, as we have demonstrated in fields that are much easier to study, experts are often poorly calibrated and often underestimate the probability of "surprises"; the value of future research in reducing the probability of surprise is therefore often ignored. Second, formal analyses also ignore the value of the long and tedious process of explaining the details to decision-makers as well as to the public.

In general, there are two ways to approach the problem of valuing information (Hammit 1994). First, new information can refine the fundamentally correct, but imprecise, information characterized by a prior distribution. Second, new information can reveal that the prior distribution reflects overconfidence or fundamental misunderstanding; this is the phenomenon we parametrized in Section 5. The conventionally defined value of information (VOI) measures the first kind of value, but because it is fundamentally dependent on the prior distribution, it cannot capture the second. Stated another way, if the gain in information is measured by the expected decrease in the variance of a parameter, then an overly narrow prior distribution produces an underestimate of the gain, because the gain cannot be larger than the prior variance. For this reason, the probability of surprise should be explicitly taken into account in setting future research priorities.
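The variance-bound argument can be seen in a one-line calculation (a sketch of ours, using a standard normal-normal update rather than any formulation from Hammit 1994): however informative the planned measurement, the expected reduction in variance never exceeds the prior variance, so an overconfident prior caps the apparent value of research.

    # Expected variance reduction is bounded by the prior variance.
    def posterior_variance(prior_var, data_var):
        # standard normal-normal update: precisions add
        return 1.0 / (1.0 / prior_var + 1.0 / data_var)

    data_var = 1.0                            # variance of the planned measurement
    for prior_var in (0.25, 1.0, 4.0):        # overconfident, matched, and wide priors
        gain = prior_var - posterior_variance(prior_var, data_var)
        print("prior variance %.2f -> expected reduction %.2f (bound: %.2f)"
              % (prior_var, gain, prior_var))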


Another issue arising in decisions about climate change is whether there is an optimum level of reduction of CO2 emissions. In order to address this question, one must carry the risk assessment to its conclusion, i.e., assign a financial cost to each impact and add the costs together. As in all such situations, the marginal gain from reducing CO2 emissions decreases with additional expenditure on control. According to several studies, the first 20% of emissions can be averted cheaply (less than $1/ton CO2 equivalent); the next 20% reduction will require up to $10/ton; the next 20%, up to $100/ton. If we are willing to pay $1,000/ton, then another 20% can be averted (NAS 1991, Nordhaus 1991). There is, however, no feasible way to eliminate emissions completely, because the costs rise very steeply. Note that these numbers are not accepted by ardent proponents of a nuclear electric economy, who argue that 50% of energy use could be converted to nuclear electricity for a 20% cost increase, provided that societal fears do not increase the cost of nuclear electricity. This ardently pronuclear argument, although technically plausible, completely ignores the political, cultural, and economic realities presently faced by nuclear power throughout the world, and it is therefore ignored in the remaining discussion.
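The quoted figures imply a steeply rising, stepwise marginal-cost schedule. A short sketch (ours; the baseline emission figure is a placeholder, not a number from the studies cited) shows how an upper-bound annual abatement cost accumulates tranche by tranche:

    # Stepwise marginal-cost schedule from the text (upper bounds per tranche).
    tranches = [(0.20, 1.0), (0.20, 10.0), (0.20, 100.0), (0.20, 1000.0)]  # (share, $/ton CO2-eq)
    baseline_emissions = 30e9      # assumed tons CO2-equivalent per year (illustrative)

    def annual_cost(reduction_fraction):
        """Upper-bound annual cost of cutting emissions by the given fraction."""
        cost, remaining = 0.0, reduction_fraction
        for share, price in tranches:
            step = min(share, remaining)
            cost += step * baseline_emissions * price
            remaining -= step
            if remaining <= 0:
                break
        return cost

    for r in (0.2, 0.4, 0.6, 0.8):
        print("reduce %d%%: up to $%.0f billion per year" % (100 * r, annual_cost(r) / 1e9))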

There is, as yet, no similarly large-scale analysis of expected losses versus the efficiency of controls. One might expect such a "damage curve" to decrease rapidly, since when little is done to reduce emissions, much higher losses can be attributed to each additional ton of greenhouse gases than under a highly efficient reduction program, when global warming is small anyway. This is supported by a study of economic vulnerability to sea-level rise for Long Beach Island, New Jersey, USA (Yohe 1991). The sum of losses and mitigation costs, i.e., the total expenditure, has a minimum, indicating the optimal efficiency of emissions reduction. This minimum is shown schematically in Figure 10. The optimum holds only for a particular branch of the scenario tree, since the expected losses differ across scenarios. Of course, we have, in many respects, oversimplified the problem; for example, discount rates relating today's expenditures to future benefits must be explicitly incorporated. If the probability values were known for each scenario, one could estimate the appropriate mitigation costs for that particular outcome, and then derive a probability distribution of the optimal efficiencies. Eventually, such a distribution could be used to translate the uncertainty in predictions of global warming into uncertainty in the goals that policy makers must formulate.
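Figure 10 can be reproduced schematically with any decreasing damage curve and any steeply increasing mitigation-cost curve; the functional forms below are illustrative assumptions of ours, chosen only to show that the total expenditure has an interior minimum.

    # Schematic of Figure 10: losses fall, mitigation costs rise, their sum has a minimum.
    import math

    def losses(r):          # assumed: damages decline roughly exponentially with reduction r
        return 500.0 * math.exp(-3.0 * r)

    def mitigation(r):      # assumed: control costs rise steeply as r -> 1
        return 50.0 * r / (1.0 - min(r, 0.99))

    candidates = [i / 100.0 for i in range(0, 100)]
    r_opt = min(candidates, key=lambda r: losses(r) + mitigation(r))
    print("optimal reduction fraction ~ %.2f, total expenditure ~ %.0f (arbitrary units)"
          % (r_opt, losses(r_opt) + mitigation(r_opt)))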

Dowlatabadi and Morgan (1993) used their ICAM-1 model to study uncertainty about climate sensitivity to changes in radiative forcing. They first ran the model with the present, uncertain value of the climate sensitivity and obtained estimates of the expected net present value of each of several policy options. The model then assumes that a research program designed to improve our understanding of climate sensitivity is launched, and considers a set of possible values that the mean and variance of the estimate might take after the research is complete. If research reveals that the expected cost of the preferred policy has declined from D to D-d as a result of the new knowledge, then society has d more disposable income to support consumption or investment than it had originally expected. If research reveals that the expected cost of the optimal policy rises from D to D+d, then society must reallocate its other expenditures and investments to secure the additional resources d. The problem is to assign a monetary value to learning, ahead of time, whether the current policy is optimal and whether it will cost more or less than originally thought. This approach combines steps 5 and 6 of the progression presented in Section 2. While promising, the procedure at present involves too much uncertainty to be useful.


One of the most obvious public policy concerns about global climate change is interregional equity. Each person who emits, or allows someone else to emit, one of the greenhouse gases gains some benefit, or perceives that he gains some benefit. The resulting warming, however, is felt all over the world, and the risk caused by the emission may be borne by completely different people. Clearly, intergenerational equity is important as well: future global warming depends, in part, on past CO2 emissions.

CO2 has, in the past, been emitted primarily in industrialized countries. Already, any global warming that may have occurred is a problem for other countries as well. Any risk of adverse impact may fall on particular groups whose boundaries are not defined by their industrial development.

It is easier, and perhaps fairer, to make a risk-related decision when the risks are borne by the same person or group to whom the benefits accrue. If the risk of an action exceeds the benefit perceived by that person, then the action will not proceed. However, if the person who bears the risk is different from the person to whom the benefit accrues, and if the risk bearer is willing to value the risk lower than the beneficiary values the benefit, it may be possible to achieve a net excess of benefit over risk for each party. This might be achieved by some payment, whereby the party who benefits compensates the party who bears the risk. Although such a monetary transfer makes the risk/benefit decision favorable for each party, there is the complication of deciding upon the exact payment: one party may benefit (overall) more than the other, and negotiations may be time consuming. In fact, the time and effort needed to make the negotiated transfers themselves become an additional cost.

Society has developed a variety of tools for coping with interregional inequities. The most obvious is a transfer payment, by taxation or otherwise, from those gaining the benefit to those incurring the risk. By such means, the risk/benefit balancing may be made to be positive for each affected group of importance.

The Earth Summit at Rio de Janeiro in June 1992 illustrated the importance of interregional equity: third world countries with little industrial development argued that it was not for them to reduce emissions of greenhouse gases, or, for that matter, to encourage their absorption by retaining rainforests, unless appreciable transfer payments came from the countries producing the major emissions. Interesting studies are now being done on whether it is cheaper for the USA to achieve a given reduction of CO2 by paying China to reduce its emissions rather than by reducing emissions at home.

This line of reasoning suggests that economic adjustments can be considered for intergenerational equity, as well as for interregional equity. A person (or society) can, and perhaps should, make appropriate investments to pay for the cost of future consequences. By such considerations, Raiffa, Schwartz and Weinstein (1977), in a classic paper, argue that future "lives" in the risk/benefit equation should be discounted at the same rate as money. Their argument is that money can be invested now, at the monetary discount rate, so that by the time the hazard arrives, the money will have been appropriately increased by the investment. If it is proper now to set a maximum amount one is prepared to pay to reduce a hazard, then it is also appropriate to discount that amount at the usual monetary discount rate.

Included in this idea is that the money could be put aside for "balancing" the risk over future generations, as well as for finding a way to avoid the risk. Money might be invested in avoiding some other comparable risk, which in the future would otherwise add to the risk of global warming -- such as the risk of cancer. In this paper, we take it as self evident that some discounting is appropriate, and we suggest that the discount rate be left as a free variable for the moment.

It is useful to try to express this discounting approach in simple ways. With a 5% discount rate, the investment necessary to reach a capital sum of $1,000,000 after a couple of generations -- about 60 years -- is small (roughly $54,000). Most people will not worry about most decisions over a period longer than that; we think about our grandparents and our grandchildren, but do not often think much further. There are exceptions, of course, and these exceptions almost invariably involve major societal decisions. Some people expect that society as a whole would approach such decisions in a logical and consistent way; some simple examples suggest that this is not the case.

A simple risk/cost-benefit decision about nuclear waste might proceed along the following lines. If, for example, we take an amount to save a life of $1,000,000 (corresponding to the $1,000 per man-rem suggested by the Nuclear Regulatory Commission in 1975 in their discussion of the ALARA principle in RM-30-2), and we take a discount rate of 5%, then we should be prepared to put down $1,000,000/(1 + 0.05)^n per life, where n is the number of years over which we discount. For a hazard in 100 years' time, we should invest $7,604; for a hazard in 200 years' time, this becomes $58. For a maximum of 1,000 cancers caused by a possible leak of a high-level nuclear waste repository two centuries hence, this would be an "up front" charge of $58,000, far from the billions of dollars now being spent for this purpose. It is evident that our societal decision making does not work this way, whether by intent or by accident. Naturally, this remains a matter of considerable debate, and we raise the issue as a question for risk managers. If the difference is unintentional, it should be easy to correct.
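The arithmetic is the standard present-value formula; a minimal sketch (using only the $1,000,000 per life and the 5% rate quoted above) shows how quickly the required up-front investment shrinks with the horizon.

    # Present value of $1,000,000 per life at a 5% annual discount rate.
    value_per_life = 1000000.0
    rate = 0.05

    def present_value(years):
        return value_per_life / (1.0 + rate) ** years

    for n in (60, 100, 200):
        print("hazard %3d years ahead: invest $%.0f per life now" % (n, present_value(n)))
    # Note: at still longer horizons the result is extremely sensitive to the
    # assumed rate, which is one reason the discount rate is left open above.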

The application of this line of reasoning to toxic chemical waste is somewhat different. The differences arise primarily in the regulations and in the demand for what are called "secure landfills," which are intended not to leak for 50 years. The brevity of this time horizon is in sharp contrast to the nuclear case, where the repository reportedly should not leak at all for 500 years and, moreover, may have only a small leak rate beyond that. There are cases where a supposedly secure landfill has leaked within 50 years. In addition, the usual definition of a "secure landfill" seems to rest on the unproven presumption that toxic chemical waste will no longer be toxic after 50 years. This might be true if the waste were exposed to the environment -- to ultraviolet light and other natural means of breaking down complex chemicals. It seems far less likely to be true for a landfill in which the waste is essentially isolated from the environment.

One possible reason for the differences in the two instances may be that, for nuclear waste, the failure of a high level waste repository is perceived as an irreversible disaster, whereas the failure of a toxic chemical waste site is merely regarded as a large hazard. Of course, the geographical extent of the impact and the time needed for ecological recovery are also relevant considerations.

Given the difference between these two cases, it is hard to find a simple analogy for global climate change. Is the situation for intergenerational equity like that for nuclear waste, where attention must focus on what happens 5,000 years from now, or is it more like the case of chemical waste, where the implicit hope is that 50 years will tell us whether there is a hazard, as well as give us some idea of what to do better or improve upon? Climate change could have long-enduring impacts if, for example, ecosystems are totally disrupted and the impacts are felt globally. However, many of the worst impacts may be indistinguishable from local hazards, and they might cause stress only for a period of decades as adaptation occurs.

Naturally, societal views on such matters can change. Fifteen years ago, society was not concerned about global warming. When some scientists said that it might be enough, within a century, to melt the polar ice cap and flood New York City, they were met alternately with skepticism (which has turned out to be appropriate, since the predicted sea-level rise is now much less than originally thought) and with resignation. The analysis above provides an initial basis for thinking about such potential calamities.


By contemplating the process of an integrated risk analysis of global warming, we have shown how the calculation of the risk proceeds through a number of almost independent steps. But serious problems remain. The general circulation models (GCMs) that have been designed to describe the effect of increasing CO2 concentrations fail to describe important regional details and are almost completely wrong over a million-year time scale. Moreover, the models are usually run only in a quasi-static mode -- such as a doubling of CO2 continued for long enough to achieve equilibrium. Analyzing the risks of global climate change involves more complexities than almost any other risk analysis, including analyses of chemical carcinogens and of the failure of industrial systems. Although the scatter of predictions provided by different climate models for a doubling of CO2 concentrations is, in some ways, analogous to the uncertainty in the slope of a dose-response curve, the major uncertainty we are addressing is not stochastic in character. In addition, past approaches to risk analysis have not explicitly allowed for the possibility of complex decision processes, with information being gathered along the way.

An important feature of climate change-related risk management seems to be the feeling, among both scientists and the lay public, that we should take out an insurance policy, which simply amounts to considering the "upper limit" of a probability distribution of impacts (Schelling 1991). This upper limit is ill defined, but it is the inverse of a de minimis concept of risk; here, however, it must be defined for nonstochastic uncertainty. This concept of risk distinguishes between scenarios that are believed possible and those that are rejected as improbable. Empirical evidence suggests that overconfidence in predictions of future developments results in long tails of the distribution, and therefore in unexpectedly high probabilities of surprise. These tails can, in principle, be truncated by using additional information, such as model-independent restrictions on climate change derived from paleoclimatic data or from volcanic eruptions. To this end, efforts must now focus on learning how to translate the large body of contradictory information about past and present climate into defensible upper limits on the probability of surprise.

We believe that integrated risk analysis of global warming and its impacts should become a working tool for illuminating the important scientific issues, and evaluating uncertainty. If the assessments are properly communicated, the analyses should also become important tools for policy evaluation and public decision-making.


The research of the first and third author was partially funded by the US Department of Energy's (DOE) National Institute for Global Environmental Change (NIGEC) through the NIGEC Northeast Regional Center at Harvard University (DOE Cooperative Agreement No. DE-FC03-90ER61010). Financial support does not constitute any endorsement by DOE of the views expressed in this paper. The research of the second author was supported by the Massachusetts Institute of Technology Joint Program on the Science and Policy of Global Change. The authors thank W. Clark, H. Jacoby, D. Kammen, G. Kaufman, J. Lancaster, R. Lindzen, and various participants at the Workshop on Uncertainty and Global Climate Change (Knoxville, TN; March 1994) for valuable discussions and comments on earlier drafts of this paper.


Arrhenius, S. (1896) "On the influence of carbonic acid in the air upon the temperature of the ground," Philosophical Magazine, v. 41, p. 237.

Bacastow, R. B., and C. D. Keeling (1981), Atmospheric Carbon Dioxide Concentration and the Observed Airborne Fraction. In: Carbon Cycle Modelling, SCOPE 16 (Bolin, B., ed.), International Council of Scientific Unions, Wiley & Sons, New York, pp. 103-112.

Bardach, J. E. (1989) "Global Warming and the Coastal Zone: Some Effects on Sites and Activities," Climatic Change, v. 15, pp. 117-150;

Bowes, M. D. (1993) "Consequences of Climate Change for the MINK Economy: Impacts and Responses," Climatic Change, v. 24, pp. 131-158 (Special Issue: Towards an Integrated Impact Assessment of Climate Change: The MINK Study).

Bazzaz, F. A. (1990), "Response of natural ecosystems to the rising global CO2 levels," Annual Review of Ecology and Systematics, v.21, pp. 167-196.

Bazzaz, F. A. and Fajer, E. (1992) "Plant Life in a CO2-Rich World," Scientific American, January, pp. 66-72.

Bodansky D. (1991) "Global Warming and Clean Electricity," Talk presented at 18th European Conference on Controlled Fusion and Plasma Physics, Berlin, June 3-7.

Clark, W. (1989) "Towards useful assessments of global environmental risks," in Understanding Global Environmental Change: the contributions of risk analysis and management. A Report on the International Workshop, October 11-13, 1989. R. Kasperson, K. Dow, D. Golding, and J. Kasperson, eds. Earth Transformed (ET) Program, Clark University, Worcester, Massachusetts (USA).

Crosson, P. R. (1993) "An Overview of the MINK Study," Climatic Change, v. 24, pp. 159-163 (see Bowes 1993).

Crouch, E.A.C. and Wilson, R. (1981) "Regulation of Carcinogens," Risk Analysis, v.1, pp. 47-57.

Crouch, E.A.C., Wilson, R., and Zeise, L. (1983) "The Risks of Drinking Water," Water Resources Research, v.19, pp. 1359-1375.

Cubasch, U. and Cess, R. D. (1990) "Processes and Modelling," Ch. 3 in Climate Change: The IPCC Scientific Assessment, Report Prepared for IPCC by Working Group 1, edited by J. T. Houghton, G. J. Jenkins and J. J. Ephraums, Cambridge University Press, 1990, pp. 75-91 (Table 3.3).

Dickinson, R. E. (1986) "Impacts of human activities on climate - a framework" in Clark, W.C. and Munn, R.E., eds. (1986), Sustainable development of the biosphere, International Institute for Applied Systems Analysis, Laxenburg, Austria, Cambridge University Press, pp. 252-291.

Dowlatabadi, H. and Morgan, M. G. (1993) "A model framework for integrated studies of the climate problem," Energy Policy, March 1993, v. 21, pp. 209-221.

Emanuel, K. A. (1987) "The dependence of hurricane intensity on climate," Nature, v. 326, pp. 483-485.

Ennever, F. K., Noonan, T. J., and Rosenkranz, H. S. (1987) "The predictivity of animal bioassays and short-term genotoxicity tests for carcinogenicity and non-carcinogenicity to humans," Mutagenesis v. 2, pp. 73-78

Evans, J.S., Graham J.D., Gray G.M., and R.L. Sielken, Jr. (1994) "A Distributional Approach to Characterizing Low-Dose Cancer Risk," Risk Analysis, v. 14, pp. 25-34.

Folland, C. K., Karl, T. R., and K. Ya. Vinnikov (1990) "Observed Climate Variations and Change," Ch. 7 in Climate Change: The IPCC Scientific Assessment, Report Prepared for IPCC by Working Group 1, edited by J. T. Houghton, G. J. Jenkins and J. J. Ephraums, Cambridge University Press, 1990, pp. 199-238.

Fourier, J. B. (1839) Mem. Acad. Sci. Inst. Fr., v. 7, p. 569.

Funck, R. H., 1987, Some remarks on economic impacts of sea level rise and the evaluation of counter-strategy scenarios, In: Impact of Sea Level Rise on Society, (H.G. Wind, ed.), Report of a Project-Planning Session, Delft, 27-29 August, 1986, Balkema Press, Rotterdam-Brookfield (VT), pp. 177-188.

Goldemberg, J. et al. (1988) Energy for a Sustainable World, Wiley, New York.

Goodman, G. and Wilson, R. (1991) "Quantitative Prediction of Human Cancer Risk from Rodent Carcinogenic Potencies: A Closer Look at the Epidemiological Evidence for Some Chemicals Not Definitively Carcinogenic in Humans," Regulatory Toxicology and Pharmacology, v.14, pp. 118-146.

Hafele, W. et al. (1981) Energy in a Finite World: Paths to a Sustainable Future, International Institute for Applied Systems Analysis (IIASA), Ballinger Press, Cambridge.

Hammit, J. (1994) "Can More Information Increase Uncertainty?" Harvard School of Public Health, unpublished manuscript.

Hampel, F.R., Ronchetti, E.M., Rousseeuw, P.J., and Stahel, W.A. (1986) Robust Statistics: the approach based on influence functions, John Wiley & Sons, New York.

Hansen, J.D.

Heimann, M. (1991) "Modelling the Global Carbon Cycle," Paper presented at the First Demetra Meeting on Climate Variability and Global Change, Chianciano Therme, Italy, October 28-November 3, 1991.

Henrion, M. and B. Fischhoff (1986) "Assessing uncertainty in physical constants," American Journal of Physics, v. 54, pp. 791-797.

Hohenemser, C., R. Kates, and P. Slovic (1983) "The nature of technological hazard," Science, 220:378-384.

Houghton, J. T., Jenkins, G. J. & Ephraums, J. J., eds. (1990) Intergovernmental Panel on Climate Change, Climate Change: The IPCC Scientific Assessment, Cambridge University Press, Cambridge.

IPCC (1990a) Potential Impacts of Climate Change, Report prepared for Intergovernmental Panel on Climate Change by Working Group II, June 1990, WMO and UNEP, Geneva, Switzerland.

IPCC (1990b) see: Houghton, J. T., G. J. Jenkins, et al., Ed. (1990).

IPCC (1992) Working Group 1 Workshop on Feedbacks in the Carbon Cycle and Climate Change, Woods Hole, MA, October 29, 1992 (Summary document under preparation).

Jorgensen, D. W. and Wilcoxen, P. J. (1991) "Reducing US Carbon Dioxide Emissions: the Cost of Different Goals," John F. Kennedy School of Government, Center for Science and International Affairs, Discussion paper number 91-9.

Kates, R. W., Hohenemser, C., and Kasperson, J.X., eds. (1985), Perilous progress: managing the hazards of technology, Westview Press, Inc., Boulder, Colorado).

Leatherman, S. P. (1989) "Impact of Accelerated Sea Level Rise on Beaches and Coastal Wetlands," in: Global Climate Change Linkages, J.C. White, ed., Elsevier, New York, pp. 43-57.

Malthus, T. R. (1976) An Essay on the Principle of Population, edited by P. Appleman, Norton, New York.

Manne, A. S. and Richels, R.G. (1991) "Buying greenhouse insurance," Energy Policy, 19(6):543-552.

McBean, G. and J. McCarthy (1990), "Narrowing the uncertainties: a scientific action plan for improved predictions of global climate change," chapter 11 in IPCC (1990).

Morgan, M. G. and D. Keith (1993) "A Program of Substantively Detailed Expert Elicitations of Leading Climate Scientists," Talk at the Society for Risk Analysis Annual Meeting, December 5-8, 1993, Savannah, Georgia, USA.

National Academy of Sciences (1991), Policy implications of greenhouse warming. Synthesis Panel, Committee on Science, Engineering and Public Policy, National Academy Press, Washington, D.C. 1991.

National Research Council (1983), "Risk assessment in the Federal Government: managing the process", Washington, D.C., National Academy Press.

Nordhaus, W.D. (1991) "The cost of slowing climate change: a survey," Energy Journal, v.12(1) 1991, pp. 37-64.

Nordhaus, W. D. and Yohe, G. W. (1983) "Future Paths of Energy and Carbon Dioxide Emissions," in Changing Climate. Report of the Carbon Dioxide Assessment Committee, National Academy of Sciences, Washington, D.C. 1983.

Oerlemans, J. (1989) "A Projection of Future Sea Level", Climatic Change, v. 15, p. 151.

Raiffa, H., Schwartz, W. B., and Weinstein, M. C. (1977) "Evaluating Health Effects of Societal Decisions and Programs," in Decision Making in the Environmental Protection Agency, Selected Working Papers, vol. 2B, National Academy of Sciences, Washington, D.C., 1977.

Revelle, R. and W. Munk (1977) "The Carbon Dioxide Cycle and the Biosphere," Ch. 10 in Energy and Climate, Studies in Geophysics, National Academy of Sciences, Washington, D.C., pp. 140-158.

Singer, F., Revelle, R., and Starr, C. (1990) "What to do About Greenhouse Warming," Cosmos XXX

Note: The (much too) public controversy about whether the second author of that article held stronger views about global warming than expressed there is not relevant to the argument in the text.

Schelling, T. (1991) Talk at the Kennedy School of Government (also a better reference somewhere).

Schneider, S. H. (1983), "CO2, climate and society: a brief overview," in Social science research and climate change: an interdisciplinary appraisal, R.S. Chen, E. Boulding, and S.H. Schneider, eds., pp. 9-15.

Seitz, F. (1994) "Global Warming and Ozone Hole Controversies," George C. Marshall Institute, Washington, D.C.

Shlyakhter, A. I. and D. M. Kammen (1992) "Sea-level rise or fall?" Nature, 357, 25.

Shlyakhter, A. I. and D. M. Kammen (1993a) "Uncertainties in Modeling Low Probability/High Consequence Events: Application to Population Projections and Models of Sea-level Rise," Proceedings of ISUMA'93 the Second International Symposium on Uncertainty Modeling and Analysis, University of Maryland, College Park, Maryland, April 25-28, IEEE Computer Soc. Press, Los Alamitos, California, 246-253.

Shlyakhter, A. I., I. A. Shlyakhter, C. L. Broido, and R. Wilson (1993b) "Estimating uncertainty in physical measurements and observational studies: lessons from trends in nuclear data," pp. 310-317.

Shlyakhter, A. I., D. M. Kammen, C. L. Broido, and R. Wilson (1994a) "Quantifying the Credibility of Energy Projections from Trends in Past Data: the U.S. Energy Sector," Energy Policy, v. 22, pp. 119-130.

Shlyakhter A.I. (1994b) "Improved Framework for Uncertainty Analysis: Accounting for Unsuspected Errors," Risk Analysis, in press.

Shlyakhter, A. I. (1995) "Uncertainty estimates in scientific models: lessons from trends in physical measurements, population and energy projections," in Uncertainty Modeling and Analysis: Theory and Applications, B. M. Ayyub and M. M. Gupta, eds., North Holland, in press.

Smith, A.E., Ryan, P.B., and Evans, J.S. (1992) "The Effect of Neglecting Correlations When Propagating Uncertainty and Estimating Population Distribution of Risk," Risk Analysis, v.12, pp. 467-474.

Stone P. H. (1992) "Forecast cloudy: the limits of global change models," Technology Review, v. 95 pp. 32-41.

Stone P. H. (1993) Massachusetts Institute of Technology, lecture notes

Valverde A., Jr., L. J. (1992) "Risk and public decision-making: social constructivism and the postpositivist challenge," International Journal of Applied Philosophy, v. 7, no. 2, pp. 53-56

Warrick, R. A. and J. Oerlemans (1990) "Sea Level Rise", in Scientific Assessment of Climate Change, Report prepared for Intergovernmental Panel on Climate Change by Working Group I, June 1990, WMO and UNEP, Geneva, Switzerland, pp 261-285.

Wigley, T.M.L. and Barnett, T.P. (1990) "Detection of the Greenhouse Effect in the Observations," ibid., pp. 239-255.

Wilson R. (1988) "Measuring and Comparing Risks to Establish a de Minimis Risk Level," Regulatory Toxicology and Pharmacology, v.8, p.267-282.

Atomic Energy Commission (1975) "An Assessment of Accident Risks in US Commercial Nuclear Power Plants," US Atomic Energy Commission, WASH-1400.

Wilson, R. (1989) "Global energy use: a quantitative analysis," in Global Climate Change Linkages, J.C. White, ed., Elsevier.

Wilson, R. and Clark, W. (1991) "Risk Assessment and Risk Management: Their Separation Should Not Mean Divorce," in Risk Analysis, C. Zervos, ed., Plenum Press, New York, 1991.

Wilson, R., Crouch, E. A. C., and L. Zeise (1985) "Uncertainty in Risk Assessment," in Risk Quantitation and Regulatory Policy, Banbury Report 19, Cold Spring Harbor Laboratory.

Yohe, G. W. (1991) "The Cost of not holding back the sea-economic vulnerability", Ocean & Shoreline Management, v.15, pp. 233-255.

Add references:

Revelle and Suess (1957)

(Hansen, 1988)

Crouch and Wilson -- 1981 or 1982?

Wilson, Crouch and Zeise -- 1983 or 1985?

Bodansky -- 1990 or 1991?

Kammen 1994

Keeling et al.(1989)

Cubasch 1992/3

EPA 1987, 1990

Revelle replace xxxxxxxxxxx

Schelling -- find a 'better reference'