Integrating uncertainty to integrated assessment


Preface
The report “Integrating Uncertainty to Integrated Assessment” is the first report of the project Cross-Cutting Issues in Integrated Environmental Health Risk Assessment. The project is a work package (WP 1.5) of the project Integrated assessment of health risks of environmental stressors in Europe (INTARESE), a project funded under the auspices of the Sixth Framework Programme priority on Global Change and Ecosystems. More information on the INTARESE project can be found at http://www.intarese.org/. The scope of this report was decided at the WP 1.5 meeting hosted by INERIS in Paris on July 13th 2006, where it was agreed that the report should address the following topics:

  • Why assess uncertainty?
  • What is uncertainty?
  • Outline of a characterization scheme for uncertainty, including an example

It was further decided that the report should not exceed the length of a scientific paper.

Martin Krayer von Krauss & Marco Martuzzi, INTARESE Work Package 1.5

Copenhagen, December 2006

Executive Summary

Nearly all environmental and public health issues involve situations where the facts are uncertain and scientific conclusions cannot be expected to be definitive. This report examines the need for risk assessors to communicate the uncertainties characterising their assessments and the way in which the uncertainty that manifests itself in risk assessments can be conceived.

Although risk assessment can be a very powerful tool, it is crucial that its limitations be understood and made transparent to the various actors involved in the risk governance process. The basis of risk assessment is the systematic use of analytical – largely probability-based – methods. In short, risk assessments specify what the potential consequences could be, calculate the probability of these occurring using historical data, and aggregate the consequences and probabilities into a single metric. Assuming that future circumstances resemble the past circumstances upon which the probabilities are based, such approaches to risk assessment are extremely powerful conceptual tools. When dealing with well-understood closed systems, or highly repetitive events affecting a multitude of subjects in long term stable systems (as with life insurance in the absence of war, plague or famine), the assumption of comparability between past and future circumstances and outcomes is robust. However, the assumption breaks down very rapidly in the context of many policy issues, where conditions are far less tractable and circumscribed. In fields such as environmental health, novelty, uniqueness, complexity, irreversibility and incommensurability are often the norm.

An often encountered response to the predicament outlined above is to adopt a more openly subjective ‘Bayesian’ perspective and regard probabilities as an expression of the ‘relative likelihoods’ of different eventualities, given the best available information and the prevailing opinions of experts. Yet, even this more flexible approach requires knowledge of all the possible effects that could be caused by a risk agent, and an exhaustive analysis of each of the causal pathways leading to these effects. Such thoroughness is extremely difficult in the face of the myriad of health stressors to which we are exposed, and the corresponding myriad of different effects they may cause. Furthermore, restricting the data basis to include only the opinions of experts raises important questions about the legitimacy of the results achieved. Where narrowly divergent (but equally reasonable) inputs may yield radically different results, and expert knowledge is recognised as uncertain, there is no sound basis for not including the opinions of other informed parties (e.g. stakeholders and decision makers) in the analysis. The importance of the legitimacy issue becomes obvious when considering the role of scientists and risk assessment in the regulatory decision making process.

Owing to the liberal foundation of the regulatory system, evidence of harm is key to justifying regulatory interventions. Threats should be defined in as specific terms as possible and ideally in quantitative form. The basis for action should be a factual one, developed through the use of a rigorous and rational methodology (e.g. risk assessment) to ensure that the interpretation of the facts is as objective as possible. This poses a problem when the facts are uncertain, the stakes are high and values are in conflict, as is the case with many health issues. In response to this challenge, many scholars argue for a regulatory decision making process where deliberation amongst stakeholders plays a central role. Here, the legitimacy of regulatory decisions is restored through an increased democratisation of decision making, whereby a variety of actors, representing as wide a spectrum of perspectives as possible, are invited to participate in the risk governance process.

By clarifying the uncertainty that characterizes their assessments, experts can contribute valuable input to the process of collective reflection and deliberation leading up to a regulatory decision. Methods for assessing uncertainty can help experts diagnose the uncertainty characterising their assessments, and explicitly communicate it to the other actors in the policy community. Uncertainties can be prioritized with a view to designing monitoring programs that evaluate the impact of new (and uncertain) policies and allow these to be adapted as new information becomes available. The results of uncertainty analysis can contribute to a discussion of the quality of the information underpinning a policy decision. The quality of the information available can then be considered in determining the extent of the regulatory measures that are warranted in a given situation and the extent to which the risk assessment process should be broadened to include an assessment of other issues such as public perception of the risk in question and the socio-economic stakes in balance. Furthermore, the quality of the information available will also influence the extent to which stakeholders and/or members of the public should be involved in the risk governance process.

The familiarity and sophistication of quantitative approaches make it tempting for risk assessors to conceive uncertainty strictly in statistical terms. While this approach may suffice to describe the consequences of inaccuracy and imprecision (i.e. measurement error), it does not do justice to the uncertainties typically encountered in risk assessment. Thus, new methods of uncertainty assessment are needed to diagnose and communicate the deeper uncertainties characterising risk assessments, as these can have important policy implications. In this report we describe an adapted version of the Walker & Harremoës framework, a typology of uncertainty aimed at helping risk assessors understand and systematically diagnose a broad range of the uncertainties characterising their assessments. Here, uncertainty is conceived as a two-dimensional concept, distinguishing between the i) Location and ii) Level of uncertainty. The example of the assessment of the health risks posed by air pollution is used to illustrate how the typology should be interpreted.

All of the widely used approaches to risk assessment rely on methodologies that can be considered idealized models, that is, abstractions of the real world issues under consideration. The location dimension refers to where uncertainty manifests itself within the configuration of the system model. The level of uncertainty is essentially an expression of the degree of severity of the uncertainty, as seen from the decision-maker's perspective. In accordance with a significant part of the body of literature on uncertainty, a scale containing different categories of levels of uncertainty is proposed. These categories are referred to as Statistical Uncertainty (known outcomes, known probabilities), Scenario Uncertainty (known outcomes, unknown probabilities), and Identified Ignorance (unknown outcomes, unknown probabilities).

This report concludes that there are two main reasons why risk assessors should communicate the full spectrum of the uncertainties characterizing their assessments: i) because the level of uncertainty will influence the extent to which stakeholder involvement is required in the risk governance process, and ii) because uncertainty is an important consideration in the design of risk management measures. Because risk assessors operate on the front lines of the science-policy interface, it is incumbent upon them to draw attention to the need to take uncertainty into account when formulating policy and the need for stakeholder involvement in the decision making process on issues characterized by high levels of uncertainty.

Introduction

The methods underlying risk assessment have been steadily improved in recent years. With respect to human health, improved methods of modelling individual variation (Hattis 2004), dose-response relationships (Olin et al. 1995) and exposure assessments (US-EPA 1997) have been developed and successfully applied. However, notwithstanding the developments that have occurred over the past few years, important limitations remain. In particular, complex, cumulative, synergistic or indirect effects continue to be inadequately addressed by risk assessors, as are the impacts on specific vulnerable sub-groups such as children, the elderly and the poor. Although further methodological innovation in risk assessment is to be expected, for the time being it is paramount that the limitations of the tool be recognised, and that the risk governance process be conducted in a manner that accounts for these limitations.

This report is based on the premise that although risk assessment can be a very powerful tool, it is crucial that its limitations be understood and made transparent to the various actors involved in the risk governance process. The report will begin by examining the basis for this premise. First, it will be illustrated how some of the fundamental assumptions upon which the methods of risk assessment are based are not necessarily fulfilled in practice. Then, the relationship between risk assessment, uncertainty and legitimacy will be illustrated. Here, legitimacy is used in a broad sense to designate the extent to which the actors in a given policy process accept the validity of the policy decisions made, as well as in a more narrow sense, to designate the extent to which the actors accept the risk assessment as the shared frame of reference for policy making. The report will continue by presenting a conceptual framework designed to help risk assessors systematically diagnose the uncertainty characterising their assessments. Finally, the interpretation of the framework will be illustrated by an example.

Why assess uncertainty?

Aside from the obvious reasons of honesty, transparency and good practice, there are two main justifications for why risk assessors should assess, address and communicate the full spectrum of the uncertainties characterizing their assessments. Firstly, in some cases, the uncertainty will be of a level such that the involvement of stakeholders in the risk governance process is required in order to ensure the legitimacy of the process. In such situations, stakeholder participation is a means of managing uncertainty. Secondly, uncertainty will be an important consideration in the design of risk management measures. The relationship between uncertainty and stakeholder involvement is poorly understood by many risk assessors, who are the primary audience for this report. An important goal of this report is therefore to illustrate how the strong drive to involve stakeholders that can be witnessed today in the EU is a direct consequence of the acknowledgement of uncertainty in risk assessment. If risk assessors hope to produce assessments that will become the shared frame of reference amongst the actors in the policy process, they must be able to recognize situations where the level of uncertainty is such that the involvement of stakeholders in the assessment process is warranted.

The Probabilistic Foundation of Risk Assessment

The basis of risk assessment is the systematic use of analytical – largely probability-based – methods. In short, risk assessments specify the potential consequences of a particular event or technological innovation, calculate the probability of these occurring, and aggregate the consequences and probabilities into a single metric. The five methods most commonly used to determine probabilities are the following:

  • Collection of statistical data relating to the performance of a risk source in the past (actuarial extrapolation);
  • Collection of statistical data relating to components of a hazardous agent or technology. This method requires a synthesis of probability judgments from component failure to system performance (probabilistic risk assessments, PRA);
  • Epidemiological or experimental studies aimed at finding statistically significant correlations between exposure to a hazardous agent and an adverse effect in a defined population sample (probabilistic modelling);
  • Experts’ or decision makers’ best estimates of probabilities, in particular for events for which sufficient statistical data are not available (normally employing Bayesian statistical tools);
  • Scenario techniques, by which different plausible pathways from the release of a harmful agent to the final loss are modelled on the basis of worst and best cases, or of the estimated likelihood of each consequence at each node.
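
The aggregation step described above can be made concrete with a minimal sketch. The Python snippet below combines a small set of entirely hypothetical outcomes and probabilities into a single expected-consequence metric; it is only an illustration of the general idea, not a method prescribed by this report.

  # Illustrative sketch only: aggregating hypothetical outcomes (consequences)
  # and their probabilities into a single expected-consequence metric.
  # All labels and figures are invented for illustration.
  outcomes = [
      # (description, probability, consequence e.g. expected annual cases)
      ("no adverse effect", 0.90, 0),
      ("mild respiratory symptoms", 0.08, 1000),
      ("severe illness", 0.02, 50),
  ]

  # In this simplified example the outcome set is assumed to be exhaustive.
  assert abs(sum(p for _, p, _ in outcomes) - 1.0) < 1e-9

  expected_consequence = sum(p * c for _, p, c in outcomes)
  print(f"Expected consequence (single aggregate metric): {expected_consequence:.1f}")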

All these methods may be taken to reflect established frequencies of occurrence of similar past events under comparable circumstances (or in a hypothetical series of trials). Where outcomes can be fully characterized under a single metric (such as mortality frequency), then probabilities may be expressed as a continuous density function over the chosen scale. Such approaches to risk assessment are extremely powerful conceptual tools in dealing with well-understood self-contained formal rule-based systems (such as games of chance), or highly repetitive events affecting a multitude of subjects in long term stable systems (as with life insurance in the absence of war, plague or famine).

However, the epistemological basis for a more general ‘realist’ interpretation of the notion of probability has come under increasing doubt over recent years (Stirling, 2001). In particular, the validity of the underlying assumptions breaks down very rapidly in the context of the risk assessment of novel technologies, where conditions are far less tractable and circumscribed than those described above. The real-world systems impinging on the regulation of energy technologies, chemicals and genetically modified organisms, for instance, are imperfectly understood, open-ended, complex and dynamic.

Because of this, serious doubts emerge over the crucial assumption of comparability between past and future circumstances and outcomes. In fields such as environmental health risk assessment, issues of scale, novelty, uniqueness, complexity, change, irreversibility and incommensurability are often the norm, and they cannot simply be set aside for lack of a practical means of dealing with them.

These features undermine the concept of a hypothetical series of trials which is so central to classical ‘frequentist’ notions of probability. In a strict ‘frequentist’ sense, then, risk assessment methods based on probability theory are inapplicable to many of the most important decisions over the regulation of risks.

The claim here is not that it is impossible or not useful to apply probabilistic risk assessment methods to real world problems characterised by high levels of uncertainty. Increasingly, approaches such as imprecise probabilities or extreme value theory are being applied in risk assessment to address the challenges outlined above. It is worth recalling here the well known axiom that “all models are wrong, but some are useful”. The information derived from the application of quantitative approaches can be very useful in identifying priorities and designing policies in situations of high uncertainty. However, the usefulness and sophistication of these approaches must not mask their limitations and convey the pretence of good quality knowledge where this is not the case. As will be explained in more detail below, in situations of high uncertainty, formal quantitative approaches to uncertainty assessment must be accompanied by more qualitative assessment approaches, as well as by stakeholder involvement in the assessment process.

One approach that has been gaining in popularity recently is to complement the “objective” information available with the “subjective” estimates of experts, and to adopt a Bayesian perspective. In this approach, experts are asked to estimate the relative likelihood of the components in the risk assessment model for which information is lacking or limited. The risk estimates produced are then regarded as expressions of the ‘relative likelihoods’ of different eventualities, given the best available information and the prevailing opinions of experts. Here too it is important not to mistake the useful insight that can be achieved through this approach for a sufficient response to the implications of high levels of uncertainty. In theory, the Bayesian approach requires knowledge of all the possible effects that could be caused by a risk agent, and an exhaustive analysis of each of the causal pathways leading to these effects. Such thoroughness is extremely difficult in the face of the myriad of health stressors to which we are exposed, and the corresponding myriad of different effects they may cause. Even if expert understandings of the potential cause-effect chains relevant to health risk assessment were acknowledged to be complete and robust, there remain a host of more technical practical problems. The random variability assumed by standard error determinations is often overwhelmed by non-random influences and systematic errors. The form of a probability distribution is often as important as its mean value or its variance. Where differing irregular or asymmetric probability density functions overlap, this can have enormous implications for the results of a risk assessment. In short, where situations of high uncertainty require the use of subjectively derived data, the Bayesian approach exchanges the positivistic hubris and restrictive applicability of the frequentist approach for an enormous sensitivity to contingent and subjective framing assumptions (Stirling, 2001).
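
The Bayesian reasoning described above can be illustrated with a minimal sketch; the numbers and priors below are hypothetical and not part of any INTARESE methodology. An expert's subjective prior on the probability of an adverse effect is updated with a small set of observations. With sparse data, two equally "reasonable" priors yield noticeably different posteriors, which is precisely the sensitivity to subjective framing assumptions discussed in the text.

  # Beta-Binomial updating of a subjective expert prior with sparse data.
  # All numbers are hypothetical; the point is the sensitivity of the result
  # to the choice of prior when data are limited.
  def posterior_mean(prior_alpha, prior_beta, cases, trials):
      """Posterior mean of the effect probability after observing the data."""
      return (prior_alpha + cases) / (prior_alpha + prior_beta + trials)

  cases, trials = 3, 40  # sparse hypothetical observations

  for label, a, b in [("expert A prior (optimistic)", 1, 19),
                      ("expert B prior (pessimistic)", 2, 8)]:
      print(label, "-> posterior mean:", round(posterior_mean(a, b, cases, trials), 3))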

Because of the issues outlined above, restricting the subjectively derived portion of the data basis of a risk assessment to include only the opinions of experts raises important questions about the legitimacy of the results achieved. Where narrowly divergent (but equally reasonable) inputs may yield radically different results, and expert knowledge is recognised as uncertain, there is no sound basis for not including the opinions of other informed parties (e.g. stakeholders and decision makers) in the analysis. The importance of the legitimacy issue and the involvement of stakeholders in the assessment becomes obvious when considering the role of scientists and risk assessment in the regulatory decision making process.

Science as a Source of Legitimacy in Regulatory Decision Making

One of the founding principles of the modern regulatory process is the liberal principle of state neutrality. According to traditional liberalism, the state should be neutral with regards to particular attitudes and values, that is, conceptions of the good. Such conceptions are seen as private rather than public matters, and the law is supposed not to favour any particular conception. On the contrary, values are deemed to be illegitimate as justification for political action. Rather than being based on values, decisions should stem from a rational consideration of the facts. Thus, science is invested in the regulatory process in order to provide an impartial source of facts upon which policy decisions can be based. A second founding principle of the regulatory system, the harm principle, was formulated by John Stuart Mill. It basically states that persons should be free to do whatever they like, unless their activities are harmful to others. The principle was originally intended to protect individual freedom in matters of, for instance, religion and sexuality. Today, the principle is applied to many areas of regulation, including regulations on the application of new technologies.

The influence of the principle of state neutrality and the harm principle is to create a requirement for facts about harm. Harm is the trigger for regulatory intervention, and only facts can determine the existence of this harm. In practice, this means that in order to justify regulatory intervention, “threats” should be defined in as specific terms as possible and ideally in quantitative form. The basis for action should be a factual one, ideally developed through the use of a rigorous and rational methodology (e.g. risk assessment) to ensure that the interpretation of the facts is as objective as possible (Fisher, 2005).

As a result of this, when a party disagrees with a particular technological enterprise, the most legitimate ground for disagreement is to prove (or claim) the harmfulness of the enterprise in question. In other words, to be effective, opposition must be expressed in terms of risk, the existence of which is to be demonstrated using a “scientific” approach like risk assessment (Jensen et al., 2003; Meyer et al., 2005).

Transparency and Deliberative Decision Making

In a system where regulators are meant to be value-neutral administrators who base all of their decisions on facts, what can legitimise regulatory interventions, and what kinds of interventions are justifiable, in situations where the facts are uncertain and scientific conclusions are not expected to be definitive? The problem is that this is the case with nearly all environmental and public health issues. In response to this challenge, many scholars argue for a regulatory decision making process where deliberation amongst actors plays a central role (Funtowicz & Ravetz, 1990; NRC, 1996; RCEP, 1998; Fischer, 2000; Stirling, 2001; Wynne, 2001; Klinke & Renn, 2002; Harremoës et al., 2001; Fisher, 2005).

Deliberative decision making aims to achieve a synthesis of scientific expertise and public values on a specific issue. Here, the notion of the “threat” that justifies regulatory intervention is interpreted broadly, such that there is no pre-defined or precise definition of either the acceptability or the nature of the risk (Fisher, 2005). The legitimacy of regulatory decisions is restored through an increased democratisation of decision making, whereby a variety of actors, representing as wide a spectrum of perspectives as possible, are invited to participate in the decision making process.

While deliberative decision making processes begin with the consideration of scientific inputs (i.e., risk assessments), this is only one activity in a more complex evaluation procedure. The scientific inputs are subsequently brought into a deliberative arena for debate in a wide forum which includes stakeholders, scientists and decision makers. This has profound implications for the role of experts in the decision-making process. Not only can they no longer place messy factors such as the economic, social and political aspects of an issue beyond the boundaries of their narrowly defined technical field, they are now expected to reflect publicly on the quality of their knowledge, explicitly revealing their uncertainties and opening up to questioning and confrontation by other members of the policy community. By making the uncertainty that characterizes their assessments transparent, experts can contribute valuable input to the process of collective reflection and deliberation leading up to a regulatory decision.

The competences required for experts to function well in this new context will not be acquired simply as a result of deciding to do so. Institutional arrangements and new methodologies to help facilitate the transition will be required. Methods for assessing uncertainty can help experts diagnose the uncertainty characterising their assessments, and explicitly communicate it to the other actors in the policy community. The results of uncertainty analysis can contribute to a qualified discussion of the quality of the information underpinning a policy decision. The quality of the information available can then be considered in determining the extent of the regulatory measures that are warranted in a given situation, as well as how monitoring and research resources should be allocated.

What is uncertainty?

Students in the sciences are taught at an early stage how common problems such as sampling errors and imprecise measurements generate uncertainty in experimental results. This uncertainty is usually dealt with using statistical methods to express experimental results as confidence intervals. This approach to characterising uncertainty lends itself well to the probabilistic approach to risk assessment described earlier in this report. When the distribution function of probabilities of occurrence and corresponding extents of damage is known, the uncertainty characterizing an assessment can be quantified by means of statistical techniques (for instance a 95% confidence interval). In such situations we speak of statistical uncertainty. However, as explained above, the policy problems typically studied by risk assessors are often characterized by uncertainty at levels above and beyond statistical uncertainty, where it is acknowledged that there is no credible basis for claiming that we have considered all of the plausible outcomes, let alone assigning probabilities to them. In risk assessment, the practicality and elegance of probability calculus often lead to a focus on quantifiable uncertainties, while the level of uncertainty actually characterizing the real world is overlooked.
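
As a reminder of what this routine statistical treatment looks like in practice, the short sketch below expresses a set of repeated measurements as a mean with an approximate 95% confidence interval; the measurement values are invented for illustration.

  # Hypothetical repeated measurements expressed as a 95% confidence interval.
  from statistics import mean, stdev
  from math import sqrt

  measurements = [41.2, 39.8, 43.1, 40.5, 42.0, 38.9, 41.7]
  m, s, n = mean(measurements), stdev(measurements), len(measurements)

  # Normal approximation; with so few samples a t-multiplier (about 2.45 for
  # n = 7) would be more appropriate than 1.96.
  half_width = 1.96 * s / sqrt(n)
  print(f"mean = {m:.1f}, 95% CI = [{m - half_width:.1f}, {m + half_width:.1f}]")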

Uncertainties that cannot adequately be quantified, such as those generated by multi-causality, are difficult to integrate in quantitative risk-benefit analyses or in standard setting. Nonetheless, it is crucial to remember that the uncertainties that are quantified only represent a part of the “uncertainty picture”, and that the unquantifiable uncertainty may have a far more fundamental bearing on policy. Thus, while quantitative approaches to uncertainty assessment do provide useful insight, it is important to be cognisant of the fact that in some cases, they can only provide partial insight.

In the following sections we present an adapted version of a typology of uncertainty which is designed to help risk assessors conceive a broad spectrum of the uncertainties characterising their assessments. The typology, referred to as the Walker & Harremoës (W&H) framework, was first presented in Walker et al. (2003). Since being introduced, the W&H framework has been applied to the risk assessment of GM crops (Krayer von Krauss, 2005), and incorporated into the uncertainty management guidance system used at the Netherlands Environmental Assessment Agency (RIVM/MNP) (van der Sluijs et al., 2003; Janssen et al., 2005).

The goal of the typology is to provide risk assessors with a conceptual framework through which they can understand the different ways in which uncertainty can manifest itself in their assessments.

Uncertainty in health risk assessment: a two-dimensional concept

The W&H framework was born out of a desire to integrate the wide variety of terminology being used to describe uncertainty into a single coherent conceptual framework. Walker et al. (2003) adopt a broad definition of uncertainty, as any departure from the unachievable ideal of complete deterministic knowledge of the system. At the core of the conceptual framework is the notion that, from the risk assessor's point of view, uncertainty is best thought of as a two-dimensional concept, comprising the i) Location, and ii) Level of uncertainty (as illustrated in Figure 1). The location dimension refers to the aspect of the risk assessment model that is characterised by uncertainty. The level dimension refers to the severity of the uncertainty from the point of view of the decision maker. These concepts will be explained in more detail below and examples will be provided.

Figure 1 – The two dimensions of uncertainty (adapted from Walker et al., 2003).

The location of uncertainty

All of the widely used approaches to risk assessment rely on methodologies that can be considered models, that is, abstractions of the real world issues under consideration. For example, risk is often modeled as a function of a system that includes probability and consequence subsystems. The group of cause-effect relationships encompassed by a particular risk problem is referred to as the system model for the particular risk. The location dimension refers to where uncertainty manifests itself within the configuration of the system model.

The notion of location of uncertainty can be illustrated by the example of a map of the world drawn by a European cartographer in the 15th century. Such a map would probably contain a fairly accurate description of the geography of Europe. Because the trade of spices and other goods between Europe and Asia was well established at that time, one might expect that those portions of the map depicting China, India, Central Asia and the Middle East were also fairly accurate. However, as Columbus only ventured to America in 1492, the portions of the map depicting the American continent would likely be quite inaccurate (if they existed at all). Thus, it would be possible to point to the American continent as a “location” in the model that is subject to large uncertainty. In this case, the model in question is a map of the world, and all locations are geographic components of the map.

In a very similar manner, it is possible to situate uncertainty with respect to the locations (or model components) which comprise a risk assessment model. What are the health effects associated with exposure to a new kind of chemical? Until a wide variety of tests are performed, the answer to that question remains subject to much uncertainty. Thus, there is uncertainty at the “effects” or “endpoints” location of the risk assessment model.

The description of the model locations will vary according to the risk assessment method (model) that is being used. Nonetheless, it is possible to identify certain categories of locations that apply to most models. These are:

  • Context
  • Model structure
  • Inputs
  • Parameters
  • Model outcome (result)

These categories will be discussed in more detail in the sections to follow.

Context

The “Context” location refers to the choice of the boundaries of the system to be modeled. This location is of great importance, as the choice of the boundaries of the system determines what part of the real world is considered inside the system (and therefore the model), and what part of the real world is left out. The choice of the system boundaries is often referred to as the “problem framing”, “problem definition” or “issue framing”. Uncertainty in the problem framing is an important cause for controversy in the regulatory debate (Jensen et al., 2003; Meyer et al., 2005). Different stakeholders have different perceptions of what constitutes a risk, which risks should be assessed, and how much risk is acceptable. For example, while some stakeholders may demand that all health impacts associated with a project be assessed, including “soft” ones such as sleep disturbance, others may prefer to only examine the potential “hard”, measurable impacts such as counts of new cancer cases. Because different actors in a risk debate often have diverging interpretations of the problem, it is important that the problem framing take place early in the risk governance process, and that it be done in consultation with the important actors in the debate (IRGC, 2005).

An Intarese-relevant example of context uncertainty concerns the question of which health stressors and effects to consider in assessing the health impacts of agriculture and land use: should the scope of the assessment be confined to the effects of particulate matter, or should the assessment be broader than this?

Model structure

The term “model structure” refers to the variables, parameters and relationships that are used to describe (model) a given phenomenon. Model structure uncertainty is thus uncertainty about the form of the model that describes the phenomena included within the boundaries of the system. Here one could think of the shape of dose-response functions, or the additivity vs. the multiplicativity of risk factors. In situations where the system being studied involves the interaction of several complex phenomena, different groups of researchers may have different interpretations of what the dominant relationships in the system are, and which variables and parameters characterize these relationships. Uncertainty about the structure of the system implies that any one of many model formulations might be a plausible, although partial, representation of the system. Thus, researchers with competing interpretations of the system may be equally right, or equally wrong. Figure 2 illustrates the distinction between context uncertainty and model structure uncertainty.

Input

The “Input” location is associated with the data describing the system. Uncertainty about system data can be generated by a lack of sufficient data, by poor data quality, or by the extrapolation of data describing the past to future conditions. Measurements can never exactly represent the “true” value of that which is being measured. Uncertainty in data can be due to sampling error, inaccuracy or imprecision in the measurements, conflicting data, or simply missing measurements. These are sources of uncertainty with which most scientists are quite familiar. An example of uncertainty at the input location could be a situation where measurements of air pollution concentrations are taken at a limited number of points (e.g. specific measurement sites, studies performed in specific cities, etc.), and then generalized for and/or taken to represent the situation over a much larger area.

Figure 2 – The Location of Uncertainty. Figures 2a and 2b illustrate the concept of context uncertainty, where ambiguity in the problem framing leads to the wrong question being answered (also known as a Type III error). Figures 2c and 2d illustrate the concept of model structure uncertainty, where competing interpretations of the cause-effect relationships exist, and it is probable that neither of them is entirely correct. Input is illustrated as that which crosses the boundaries of the system (Source: Walker et al., 2003).


Parameters

The following types of parameters can be found:

  • Exact parameters (e.g. π and e);
  • Fixed parameters (e.g. the gravitational constant g);
  • A priori chosen or calibrated parameters.

The uncertainty in exact and fixed parameters can generally be considered negligible within the analysis. However, the extrapolation of parameter values from a priori experience does lead to parameter uncertainty, as past circumstances are rarely identical to current and future circumstances. Similarly, calibrated parameters must be determined by calibration against historical data series; because sufficient calibration data may not be available, and errors may be present in the data that are available, calibrated parameters are also subject to parameter uncertainty.

Model outcome

This is the uncertainty caused by the accumulation of uncertainties from all of the above locations (context, model structure, inputs, and parameters). These uncertainties are propagated through the model and are reflected in the resulting estimates of the outcomes of interest (the model result). It is sometimes called prediction error, since it is the discrepancy between the true value of an outcome and the model’s predicted value.
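
A Monte Carlo simulation is one common way of making this accumulation of uncertainty visible, although it can only capture the uncertainties that have been quantified. The sketch below uses a deliberately simplistic, entirely hypothetical exposure-response model to show how uncertainty in an input and a parameter propagates to the model outcome.

  # Toy Monte Carlo propagation: the model, distributions and numbers are all
  # hypothetical; only quantified (statistical) uncertainty is represented here.
  import random

  random.seed(1)

  def toy_health_impact(concentration, slope):
      """Hypothetical linear exposure-response model: cases = slope * concentration."""
      return slope * concentration

  results = []
  for _ in range(10000):
      concentration = random.gauss(25.0, 5.0)  # uncertain input (e.g. pollutant level)
      slope = random.gauss(0.8, 0.2)           # uncertain model parameter
      results.append(toy_health_impact(concentration, slope))

  results.sort()
  print("median outcome:", round(results[len(results) // 2], 1))
  print("2.5th-97.5th percentile range:", round(results[249], 1), "-", round(results[9749], 1))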

The level of uncertainty

Figure 3 – The levels of uncertainty (adapted from Walker et al., 2003).

The level of uncertainty is essentially an expression of the degree of severity of the uncertainty, as seen from the decision-maker's perspective. While in some cases experts can express the uncertainty in their results in statistical terms, in other cases it is only possible for them to identify that scientific knowledge is limited in a given area, and that the potential for surprise is therefore large.

The notion that uncertainty can manifest itself at different levels is illustrated by the example of climate change predictions. The uncertainty involved in predicting the change in mean global temperature that can be expected for a given increase in the concentration of atmospheric CO2 is small in comparison to the uncertainty involved in attempting to predict the myriad of changes that will occur as a result of this temperature increase. Will polar bears become extinct? Will coastal cities be submerged? Are scientists even able to imagine all of the possibilities?

In accordance with a significant part of the body of literature on uncertainty (Knight, 1921; Smithson, 1988; Funtowicz and Ravetz, 1990; Faber et al., 1992; Wynne, 1992; Stirling, 2001), a scale containing different categories of levels of uncertainty is proposed, as shown in Figure 3 below. The different levels of uncertainty will be discussed in more detail below. Although they are presented as discrete categories, it can be difficult to determine the level of uncertainty in such discrete terms, and it can therefore be helpful to consider the scale presented in Figure 3 as continuous.

Determinism and statistical uncertainty

Determinism is the situation in which everything is known exactly and with absolute certainty, an ideal that is never achieved in the policy relevant sciences due to the complexity of the problems dealt with. On the scale of levels of uncertainty, it lies at the end of the scale where there is no uncertainty whatsoever. Statistical Uncertainty describes the situation where there exist solid grounds for the assignment of a discrete probability to each of a well-defined set of outcomes, as illustrated in Figure 4. Potential outcomes can be identified as a finite set of discrete outcomes, or a single continuous range of outcomes (e.g. the range in Figure 4). In situations of statistical uncertainty, analysts possessing knowledge of the form of the distribution (normal, lognormal, exponential, etc.) and its properties (σ, µ, etc.) can describe the probability with which any of the potential outcomes will occur. As mentioned previously, the uncertainty characterising regulatory assessments is frequently reported in statistical terms. However, where this is the case, it cannot be interpreted as an expression of the fact that the assessment is characterised by statistical uncertainty only. Rather, it should be interpreted as a lack of attention to the deeper levels of uncertainty. As will be illustrated further on, many complex real-world policy problems involve deep uncertainties that cannot be adequately expressed in statistical terms. It is therefore misleading to express the uncertainty in policy relevant sciences only in statistical terms.

Figure 4 – Statistical uncertainty: known outcomes, known probabilities.

Scenario uncertainty

Scenario Uncertainty describes the state where all of the possible outcomes are known, but where it is acknowledged that there exists no credible basis for the assignment of probability distributions to these outcomes, as illustrated in Figure 5. This can be due to the fact that the mechanisms leading to the potential outcomes are not well understood and it is, therefore, not possible to formulate the probability of any one particular outcome occurring.

Assumptions are a manifestation of scenario uncertainty. Decision support exercises often involve the use of scenarios in which a number of assumptions are made in order to simplify the problem being studied. In many cases, analysts do not have the time and/or data required to verify the validity of these assumptions. In some cases, verification may be practically or theoretically impossible. Furthermore, outcomes identified by analysts as “improbable” are frequently left out of assessments in order to devote more resources to the analysis of outcomes deemed more likely (or about which more is known) (Patt, 1999).

An example that is useful for illustrating the notion of scenario uncertainty is that of the concerns raised over the use of antimicrobials or antibiotics in animal feedstuff (Edqvist and Pedersen, 2001). Antibiotics are probably the single most important discovery in the history of medicine. They have saved millions of lives by killing bacteria that cause diseases in humans and animals. In the 1940s, low levels of antibiotics began to be added to animal feedstuff, as it was observed that this practice could increase the growth rate of the animals and the efficiency of food conversion by the animals, as well as have other benefits such as improved egg production in laying hens, increased litter size in sows and increased milk yield in dairy cows. Over the years, concerns developed over the potential for bacteria to develop resistance to the antibiotics. It was feared that the widespread use of the antibiotics would lead to the development of resistant bacterial strains, and that these antibiotics would therefore no longer be effective in the treatment of disease in humans. The scientific evidence available indicated that the development of bacterial resistance could take place, but how quickly and to what extent this could occur remains unknown to this day. The question of whether the short-term benefits outweigh the potential long-term risks is still being debated. In this case, the scenario is clear but the probability of its occurrence is unknown. The uncertainty here is of a level greater than statistical uncertainty, and is referred to as scenario uncertainty.


Ignorance

Identified Ignorance describes the state where there exist neither grounds for the assignment of probabilities, nor even the basis for defining the complete set of potential outcomes. It is a state where fundamental uncertainty about the mechanisms and functional relationships being studied has been identified, and where the scientific basis for developing scenarios is weak. In some cases ignorance may be lessened by conducting further research, which implies that it might be possible to somehow achieve a better understanding. However, in cases where the functional relationships are very complicated and/or the number of parameters is very large, or where the relationships are inherently unidentifiable, due to e.g. chaotic properties in the system that make predictions impossible, neither research nor development can resolve the ignorance. This is referred to as indeterminacy. Total ignorance is the opposite extreme from determinism on the scale of uncertainty; it implies a level of uncertainty so deep that it is not even known that knowledge is lacking. In Figure 3, the continuing arrow at the end of the scale is used to indicate that there is no way of knowing the full extent of our ignorance.

An example of a policy problem in which, for a while, ignorance was the dominant level of uncertainty is that of the outbreak of mad cow disease (also known as BSE) in Britain (van Zwanenberg and Millstone, 2001). In an effort to reduce costs and maximise the re-use of resources, it was common practice for the remains of sheep, cattle and other animals to be recycled and used as a source of protein in animal feedstuffs. Following the diagnosis of the first cases of BSE in 1986, it was noticed that the pathological characteristics of the new disease closely resembled scrapie, a contagious disease common in the UK sheep population. Scrapie is a disease that attacks the brain of sheep, is untreatable and invariably fatal. Health authorities soon observed that contaminated feed was the principal cause of BSE in cattle. However, the question remained: contaminated by what? There was no scientific evidence that eating sheep meat from scrapie-infected animals could pose a health risk, and health authorities could not be sure that the agent that caused BSE had in fact derived from scrapie. Moreover, there was no scientific evidence indicating that BSE could subsequently be transmitted to humans in the form of Creutzfeldt-Jakob disease (CJD), and it was a big surprise when, in 1995, it was discovered that this could happen.

Figure 5 – Scenario uncertainty: known outcomes, unknown probabilities.

The notion of ignorance is illustrated by considering the uncertainty characterizing an assessment of the potential costs associated with BSE, performed at the time of the discovery of BSE in 1986. No historical data on BSE was available and scientific understanding of how the disease is contracted was limited. The extent of the public outcry that would eventually occur remained unknown, as did the extent of the loss of exports and the drop in domestic demand that ensued. Data on the relationship between BSE and CJD would not become available for another 10 years. In this context, any assessment would necessarily rely on a large number of assumptions and there would be no credible basis for the assignment of probabilities. Furthermore, at the time there was not even a credible basis to claim that all of the potential ramifications or costs (outcomes) of the BSE crisis had been thought of. The uncertainty characterizing this situation is a good example of ignorance.

An example: health risks of air pollution

In the following section we will illustrate how the two dimensional framework for uncertainty analysis presented above can be used to conceptualize the uncertainty surrounding a familiar environmental health policy issue: that of the adverse health effects of ambient air pollution. Research on air pollution has documented a broad range of adverse health effects, ranging from respiratory symptoms to premature mortality. These effects result from exposure to air pollutants at levels usually experienced throughout the world.

Figure 6 – The Intarese full-chain model adapted to the air pollution case study.

Locations of uncertainty

Context

The context of the assessment is determined by the policy community (i.e. stakeholders, decision makers, experts) associated with a particular policy issue. Ultimately, it is the context that dictates the policy question to be addressed in the risk assessment. In relation to our case study, the broadest possible formulation of the policy question to be assessed could be stated as follows: “What are the health impacts of exposure to ambient air pollution?”

An important issue of interpretation in the above question is that of “ambient air pollution”, as this can mean many things. In the US, the Clean Air Act of 1990 establishes air quality standards for six pollutants: carbon monoxide, lead, nitrogen dioxide, particulate matter (PM10 and PM2.5), ozone and sulfur oxides. In the EU, ambient air pollution is regulated by the daughter directives to the Air Quality Framework Directive of 1996. All of the pollutants listed for the Clean Air Act are regulated by the EU legislation, with the notable exception of PM2.5, for which no air quality standard has been established yet.

The PM2.5 standard is an interesting point of discrepancy between the US and EU regulatory regimes, indicating that there is uncertainty in the way in which the notion of “ambient air pollution” should be interpreted. For the purpose of this case study, the scope of the policy question will be narrowed down to the following:

“What are the health impacts associated with particulate matter?”

As there is a correlation between the occurrence of PM and a number of other air pollutants, there is a sound basis for using PM as an indicator of air pollution. However, even the more narrowly defined policy question stated above is subject to controversy. Accordingly, Maas (2006) identifies four distinct views of the problem within the policy debate on particulate matter: i) “PM2.5 is the problem”; ii) “PM10 is the problem”; iii) “Specific traffic related particles are the problem”; and iv) “The problem is primarily socio-economic” (i.e. PM is not the main cause).

The conclusions of the scientific assessments of the PM problem are critically sensitive to the assumptions underlying the choice of problem framing. For example, if specific fractions of PM are primarily responsible for health effects (e.g. particles emitted from cars), then reducing PM emissions from electric utilities is not an effective way to reduce the health risks, despite the fact that this will result in a decrease of PM2.5 emissions. The knowledge currently available is inconclusive with regard to the choice of the most appropriate problem framing. Even though the evidence from epidemiological studies accumulates and consistently shows statistically significant associations between health effects and PM10 or PM2.5 concentrations (Pope and Dockery, 2006), strong associations have also been observed between cardiopulmonary diseases (one of the main health effects of PM) and traffic noise (Kempen et al., 2002), the quality of housing, and the diet of low income families (Eschenroeder and Norris, 2003). Because these other associations can be observed, the extent to which cardiopulmonary diseases result from PM pollution remains open to interpretation.

Risk Assessment Model

For the purpose of this uncertainty analysis, we will use the Intarese “full-chain approach” model as the conceptual framework for our assessment. A version of the model adapted to this case study is presented in Figure 6.

Sources & Releases

Ideally, data on local air quality would be obtained through direct observation. However, because the resources available for monitoring are limited, empirical data must be used in combination with modelling techniques in order to estimate local air quality. Depending on the approach used, these models can require knowledge of the emission sources of air pollutants and of the prevailing geographical and meteorological conditions, in combination with empirical observations, in order to estimate pollution levels at other, similar locations. Both the inventory of the sources, and the data characterising individual sources, are subject to uncertainty. Sources omitted from the inventory will reduce the accuracy of the assessment of emissions. Furthermore, because particles from different sources (i.e. natural vs. anthropogenic) or of different sizes may cause different health effects, aggregating the sources makes it difficult to identify which sources are responsible for which health effects.

Media and Exposure

Once released into the atmosphere, particulate matter is subject to a number of physical and chemical processes that influence the way it is dispersed. For example, particle bound water may alter the size distribution of particles, thereby affecting particle deposition characteristics. The physical and chemical processes are influenced by meteorological and geographical conditions. Knowledge of these conditions, in combination with knowledge of emission sources and measured data from monitoring programs, can be used to predict local pollution levels. In addition to the level of local pollution, exposure levels will depend on the setting in which exposure takes place, be it indoors or out. The accuracy of exposure estimates will therefore be influenced by uncertainty in the source characterization (described above), sampling error in the empirical data obtained through monitoring, uncertainty in the modelling of the influence of meteorological and geographical conditions, and uncertainty in the modelling of the influence of the setting.

Dose and Effects

The assessment of the health effects of particulate matter pollution is complicated by the fact that differences in the composition of the mixtures of particles in different regions may result in different biologic effects, toxicity, and potency. This is because i) the physicochemical characteristics of the particles will influence their toxicity, and ii) a number of other air pollutants act together with the particulate matter to create health effects. Also, the potency of particulate matter can be influenced by meteorological factors.

The occurrence of most outdoor air pollutants (e.g. NO2, CO, total suspended particles, SO2) is highly correlated with that of PM10. For this reason, PM10 is routinely considered an indicator for this complex mixture of air pollutants in epidemiological studies. However, while the results of epidemiological studies indicate a strong correlation between the occurrence of “PM10 the indicator” and health effects, there is still uncertainty regarding the toxic potency of the individual components of the complex mixture. For example, there is suggestive evidence that finer particles (PM2.5) are more toxic than the coarser fractions of PM10.

The influence of meteorological conditions on the potency of the particles is a further source of uncertainty related to the estimation of the effective dose. For example, the ambient relative humidity in a region can affect how much particle-bound water is present and this can act as a conveyor of dissolved gases or reactive species in the lungs, thereby increasing the potency of the particles in question. In addition, the effective dose will also be influenced by the deposition pattern and fate of different particles in the respiratory tract. For example, particulates of less than 10 microns in diameter may penetrate more deeply in the lungs than larger particulates.

There is also some uncertainty concerning the mechanisms of injury and the nature of the associated health effects, particularly in the long term. In the short term, particles can cause lung irritation leading to immunological responses, lung constriction, shortness of breath and cough. Soot, fly ash, pollen, fungi, and yeast are amongst the particles known to cause lung irritation. Some particles are composed of compounds which form acids when mixed with moisture in the lung. For example, particles of zinc ammonium sulphate, commonly reported as a constituent of smog, form sulfuric acid in the lungs. In the long term, some kinds of particles, or their metabolites, can cause cell damage or cancer. However, with only a few long-term animal studies available, and with the uncertainty related to extrapolating from animals to humans, risk assessments of PM must rely on important assumptions regarding the shape of the concentration-response function, to the extent that US EPA (2006) describes these assumptions as the most important determinant of the outcome of risk assessments for PM2.5.

Vulnerability

The assessment of the health effects of particulate matter pollution is also complicated by the fact that there is variability in the way in which people respond to the same levels of exposure. For example, those with a known history of asthma or chronic lung disease will be especially sensitive to these effects. The way in which responses and exposure levels vary for different sub-groups of the population is an area where only little information is available.

Impact

In order to make different (environmental) health problems comparable, methods exist to aggregate health impacts into a single indicator. An example of such a method is the DALY (Disability Adjusted Life Year). In DALY calculations, the number of people with a certain disease is multiplied by the duration of the disease (or the loss of life expectancy in the case of mortality) and the severity of the disorder (varying from 0 for perfect health to 1 for death). In this way, morbidity as well as mortality can be expressed in a single common value, making environmental health problems more or less comparable and providing ways to prioritize, plan or evaluate environmental health policies. However, DALYs are a simplification of a very complex reality, and therefore only give a very crude indication of (environmental) health impacts. The question is whether or not DALYs appropriately capture the health impacts of particulate matter. Given the difficulty in determining the long-term health effects that are attributable to PM pollution, there are large uncertainties in determining the “duration of the disease(s)” needed to calculate the DALY. How much earlier does one die when one dies of air pollution (compared to other causes)? Further uncertainties arise in the assessment of the “severity of the disorder”. How ‘bad’ is, for example, asthma? This is a subjective judgment that may vary from one person to the next.
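
The DALY arithmetic described above can be sketched as follows; the health outcomes, case numbers, durations and severity weights used here are invented purely for illustration.

  # Illustrative DALY calculation: cases * duration (years) * severity weight,
  # summed over health outcomes. All figures are hypothetical.
  health_outcomes = [
      # (label, number of cases, duration in years, severity weight 0..1)
      ("asthma exacerbation", 5000, 0.5, 0.05),
      ("chronic bronchitis", 800, 10.0, 0.20),
      ("premature mortality", 100, 8.0, 1.00),  # duration = years of life lost
  ]

  total_dalys = sum(cases * duration * severity
                    for _, cases, duration, severity in health_outcomes)
  print(f"Total burden: {total_dalys:.0f} DALYs")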

Level of uncertainty

On the basis of a meeting with Dutch experts on particulate matter and health (Kloprogge and van der Sluijs, 2006) and a review of the Impact Assessment of the EU’s Thematic Strategy on Air Pollution (COM (2005) 446 and 447), Petersen et al. (2006) identified the following key sources of uncertainty in the integrated assessment of the PM and health problem:

  • i. Emission data;
  • ii. Measurement uncertainty;
  • iii. Inter-annual variability in meteorology;
  • iv. Poor understanding of the behaviour of secondary organic particles;
  • v. Attribution of effects to individual species of particles (causal fraction) or to other pollutants or stressors;
  • vi. Quantification of the mortality impact of exposure to fine particles;
  • vii. Assessment of the effects of long-term chronic exposure to particles;
  • viii. Distribution of risk over subgroups of the population (to what extent is the relative risk age-dependent?);
  • ix. Valuation of mortality impacts from particles and other pollutants;
  • x. Uncertainty in cost estimates of preventative measures.

These locations of uncertainty are listed in Table I below. The table, referred to as an “Uncertainty Matrix”, shows the location and level dimensions of uncertainty. In the Uncertainty Matrix, the locations of uncertainty characterising the PM problem are listed in the column on the far left. For each location, an assessment of the level of uncertainty is provided, based on the logic set out in the explanation of the categories of level of uncertainty. For example, because there are competing, equally legitimate interpretations of the problem, the context location is characterised by scenario uncertainty. The imprecision and inaccuracy encountered in sampling and measuring generate statistical uncertainty. Given the limited number of studies available on the health effects of long-term chronic exposure to PM, this location is characterised by identified ignorance. The table illustrates that many of the key uncertainties characterising the integrated assessment of the health effects of PM are of a level greater than statistical uncertainty. On this basis, we can conclude that the commonly employed statistical methods for treating uncertainty are not sufficient to capture the influence of the uncertainties characterising the assessment of the health impacts of PM. Together with a sensitivity analysis, the results presented in the table can also serve as the basis for a prioritisation of uncertainties.
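As a sketch of how such an uncertainty matrix might be recorded and queried, the snippet below maps locations of uncertainty to assessed levels and flags those that exceed statistical uncertainty; only the three example assignments mentioned in the text are shown, and the labels are simplified for illustration.

 # Minimal sketch of an uncertainty matrix: location of uncertainty -> level.
 # The levels are ordered; anything beyond "statistical" signals that standard
 # probabilistic treatment alone is not sufficient.

 LEVELS = ["statistical", "scenario", "identified ignorance"]

 uncertainty_matrix = {
     "context (problem framing)": "scenario",                       # competing, equally legitimate framings
     "sampling and measurement": "statistical",                     # imprecision and inaccuracy
     "long-term chronic exposure effects": "identified ignorance",  # very few studies available
 }

 # Locations whose uncertainty exceeds what statistical methods can capture.
 beyond_statistical = [location for location, level in uncertainty_matrix.items()
                       if LEVELS.index(level) > LEVELS.index("statistical")]
 print(beyond_statistical)  # ['context (problem framing)', 'long-term chronic exposure effects']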

To a certain extent, such uncertainty assessments will always remain subjective. However, when experts go through the process of assessing the level of uncertainty for each of the locations identified, insights emerge where experts disagree and are forced to justify their judgements. Together with the results of the uncertainty assessment, these insights provide a valuable contribution to the basis for decision making.

Figure 7: Ranking of air pollution uncertainties.

Conclusion

Nearly all environmental and public health issues involve situations where the facts are uncertain and scientific conclusions cannot be expected to be definitive. Under such circumstances, experts must be able to recognize situations where the level of uncertainty is such that the best possible response is to be open and transparent about it with the stakeholders. Where risk assessments must be based on important subjective assumptions, these may be best derived through deliberation amongst informed stakeholders. Experts must be able to systematically diagnose and communicate the uncertainties characterising their assessments, even when these can only be described in qualitative terms. Inviting stakeholders to partake in the assessment process and being open and transparent about uncertainties increases the chances that the resulting assessments will become the shared frame of reference amongst the actors in the policy process. The results of uncertainty analysis can contribute to a discussion of the quality of the information underpinning a policy decision. The quality of the available information can then be considered when determining the extent of the regulatory measures warranted in a given situation, and when designing research programs, or programs for monitoring the impact of policy decisions, so that policies can be adapted as new information becomes available.

Information about uncertainty

Due to the practicality and elegance of statistical approaches, it can be tempting for experts to conceive uncertainty strictly in quantitative terms. However, the real world problems studied by risk assessors are typically characterized by uncertainties that cannot adequately be captured in quantitative terms alone. Communicating uncertainty strictly in quantitative terms risks conveying an impression of high certainty when this is not actually the case. In this report we present an adapted version of the W&H framework, a two-dimensional typology of uncertainty aimed at helping risk assessors understand and systematically diagnose a broad range of the uncertainties characterising their assessments. Within the typology, uncertainty is conceived as a two-dimensional concept, distinguishing between i) the location and ii) the level of uncertainty. The example of the assessment of the health risks posed by air pollution is used to illustrate how the typology should be interpreted.

References

Edqvist, L-E, Pedersen, KB. (2001): Antimicrobials as growth promoters to farm animals, 1969-99: resistance to common sense. In: Harremoës, P, Gee, D, MacGarvin, M, Stirling, A, Keys, J, Wynne, B & Vaz, SG. (eds.), The precautionary principle in the 20th century. Late lessons from early warnings. Earthscan Publications Ltd., London, GB.

US EPA (2006): National Ambient Air Quality Standards for Particulate Matter; Proposed Rule. 40 CFR Part 50, Federal Register, Vol. 71, No. 10, pp. 2620-2708.

Eschenroeder, A, and Norris, G, (2003): Should socioeconomic health effects be included in risk assessments?, Environmental Sciences 1: 27-58.

Faber M, Manstetten R, Proops J. (1992): Humankind and the environment: an anatomy of surprise and ignorance. Environmental Values, 1, 217-242.

Fischer, F. (2000): Citizens, experts, and the environment: the politics of local knowledge. Duke University Press, Durham and London.

Fisher, E. (2005): Risk, regulation and administrative constitutionalism. Hart Publishing, Oxford, GB.

Funtowicz, SO, Ravetz, JR. (1990): Uncertainty and quality in science for policy, Kluwer Academic Publishers, Dordrecht, NL.

Harremoës, P, Gee, D, MacGarvin, M, Stirling, A, Keys, J, Wynne, B, Vaz, SG. (eds.) (2001), The precautionary principle in the 20th century. Late lessons from early warnings. Earthscan Publications Ltd., London, GB.

Hattis, D.(2004). “The Conception of Variability in Risk Analyses: Developments Since 1980,” in: T. McDaniels and M.J. Small (eds.): Risk Analysis and Society. An Interdisciplinary Characterization of the Field (Cambridge University Press: Cambridge 2004), 15-45.

Holling, CS (ed.). (1978): Adaptive Environmental Assessment and Management. John Wiley & Sons., New York, USA.

International Risk Governance Council (IRGC) (2005): White Paper on Risk Governance. IRGC, Geneva.

Janssen, PHM, Petersen, AC, van der Sluijs, JP, Risbey, JS, Ravetz, JR. (2005): A Guidance for Assessing and Communicating Uncertainties, Water Science & Technology, 52, (6), 145-152.

Jensen, KK, Gamborg, C, Madsen, KH, Jørgensen, RB, Krayer von Krauss, M, Folker, AP, Sandøe, P. (2003): Making the EU "risk window" transparent: The normative foundations of the environmental risk assessment of GMOs. Environmental Biosafety Research, 3, 161-171.

Kempen, EEMM, van Kruize, H, Boshuizen, HC, Ameling, CB, Staatsen, BA, de Hollander, AEM (2002): The association between noise exposure and blood pressure and ischemic heart disease: a meta-analysis. Environmental Health Perspectives.

Knight, FH. (1921): Risk, Uncertainty and Profit. Houghton Mifflin, Boston, USA.

Krayer von Krauss, M.P. (2005): Uncertainty in policy relevant sciences. PhD Thesis, Institute of Environment and Resources, Technical University of Denmark. ISBN 87-89220-97-8.

Klinke, A, Renn, O. (2002): A new approach to risk evaluation and management: risk-based, precaution-based, and discourse-based. Risk Analysis, 22, (6), 1071-1093.

Kloprogge, P. and van der Sluijs, JP (2006): Verslag Expert Meeting onzekerheidscommunicatie rond fijn stof en gezondheid [Report of the expert meeting on communicating uncertainty about particulate matter and health], Department of Science, Technology and Society Report NWS-E-2006-55, Utrecht: Copernicus Institute, Utrecht University.

Lee, KN. (1999): Appraising adaptive management. Conservation Ecology, 3, (2), 3. [online] URL: http://www.consecol.org/vol3/iss2/art3.

Maas, R. (2006): Fine particles: From scientific uncertainty to policy strategy. Journal of Toxicology and Environmental Health, in press.

Meyer, G, Folker, AP, Jørgensen, RB, Krayer von Krauss, MP, Sandøe, P, Tveit, G. (2005): The factualization of uncertainty: Risk, politics, and genetically modified crops – a case of rape. Agriculture and Human Values, 22, (2), 235 – 242.

Olin, S., Farland, W., Park, C., Rhomberg, L., Scheuplein, R., Starr, T. and Wilson, J. (1995): Low Dose Extrapolation of Cancer Risks: Issues and Perspectives. (ILSI Press: Washington, D.C.)

Patt, A. (1999): Extreme Outcomes: The Strategic Treatment of Low Probability Events in Scientific Assessments, Risk, Decision and Policy, 4, (1), 1-15.

Petersen, A, van der Sluijs, J, Tuinstra, W (2006): Adaptation and Anticipation in EU and Dutch Particulate Matter Policies, Background Papers for TAUC Workshop, Washington, DC, 10-11 October 2006.

Pope, CA, Dockery, DW (2006): Health effects of fine particulate air pollution: Lines that connect, Journal of the Air & Waste Management Association, 56: 709-742.

Royal Commission on Environmental Pollution (1998): Setting environmental standards. CM4053, HMSO, London, GB.

Stirling, A. (2001): On ‘Science’ and ‘Precaution’ in the Management of Technological Risk, Volume II: Case Studies, report to the EU Forward Studies Unit by European Science and Technology Observatory (ESTO), EUR 19056/EN/2. (IPTS: Sevilla 2001).

US-EPA Environmental Protection Agency (1997): Exposure Factors Handbook. NTIS PB98-124217. (EPA: Washington, D.C.)

van der Sluijs, JP, Risbey, J, Kloprogge, P, Ravetz, J, Funtowicz, S, Corral Quintana, S, Pereira, A, De Marchi, B, Petersen, A, Janssen, P, Hoppe, R, Huijs, S. (2003): RIVM/MNP Guidance for uncertainty assessment and communication: detailed guidance. Utrecht University.

Walker, WE, Cave, J, Rahman, SA. (2001): Adaptive policies, policy analysis, and policymaking, European Journal of Operational Research, 128, (2), 282-289.

Walker, W, Harremoës, P, Rotmans, J, van der Sluijs, J, van Asselt, MVA, Janssen, P, Krayer von Krauss, MP. (2003): Defining uncertainty: a conceptual basis for uncertainty management in modelbased decision support. Journal of Integrated Assessment, 4, (1), 5-17.

Walters, C. (1986): Adaptive management of renewable resources. Macmillan, New York, USA.

Wynne, B. (1992): Uncertainty and environmental learning - reconceiving science and policy in the preventive paradigm. Global Environmental Change, 2, (2), 111-117.

Wynne, B. (2001): Creating public alienation: expert cultures of risk and ethics on GMOs. Science as Culture, 10, (4), 445-481.

Zwanenberg, P., Millstone, E. (2001): ‘Mad cow disease’ 1980s–2000: how reassurances undermined precaution. In: Harremoës, P, Gee, D, MacGarvin, M, Stirling, A, Keys, J, Wynne, B & Vaz, SG. (eds.), The precautionary principle in the 20th century. Late lessons from early warnings. Earthscan Publications Ltd., London, GB.