Open access peer-reviewed chapter

The Long Shadow of Human‐Generated Geohazards: Risks and Crises

Written By

Franco Oboni and Cesar Oboni

Submitted: 12 July 2016 Reviewed: 28 September 2016 Published: 30 November 2016

DOI: 10.5772/66066

From the Edited Volume

Geohazards Caused by Human Activity

Edited by Arvin Farid


Abstract

The purpose of this chapter is to focus attention on the "damage and risk" side of geohazard (GHZ) phenomena rather than on their generating processes. Damage evaluations are indeed often neglected and oversimplified in predictive studies. As a result, risks are poorly understood and often considered as the mere expression of the probability or likelihood of an adverse event. In this chapter, we use numerous real-life examples and discuss, among other subjects, the technical glossary of risk, damages, and crises, multidimensional consequence analysis, and the definition of risk tolerance. The chapter also focuses on ethical (geo-ethical) issues linked to GHZs caused by human activities, their mitigation decisions, and possible unintended consequences. The discussion includes the sometimes excessive and sometimes lacking (blindness) perception of risks by the public, corporate, and public officers. The root cause of some odd human behaviors when facing risks (biases), such as the survivor bias, is discussed. GHZs cast a long and often misunderstood shadow on human activities, development, and survival. By understanding how to model consequences and by better evaluating risks and crises, we will be able to alleviate human and environmental suffering and foster sustainable development.

Keywords

  • Risk
  • Crises
  • Social
  • Interdependencies
  • Consequences
  • Glossary
  • Tolerance
  • Geoethics
  • Anthropocene

1. Introduction

Geohazards (GHZs) are defined as geological (geotechnical, hydrogeological) states that may lead to widespread damage or risk. They are geological and environmental conditions involving long-term or short-term geological processes. Humans have also been altering the planet through their actions, traceable back to Neolithic agriculture. We can call these interactions between the geosphere and humanity anthropogenic (ANPgenic) global changes.

As we can safely assume we are in the Anthropocene (ANPcene), it is also safe to assume that many of the processes we observe nowadays are man-made or man-altered. It is therefore obvious that risks and crises, hazard and risk perception, and related decision-making, loaded with their social aspects, have to be studied in a holistic way, integrating humanistic and social aspects with rational, technical, and scientific approaches.

In a first analysis, the lists of human-generated (i.e., man-made or man-altered) GHZs and naturally occurring GHZs can be assumed to be identical. That applies even to seismicity, as recently shown by research on fracking and other oil-extraction techniques. Exceptions are special hazards such as the spread of unexploded-landmine contamination via erosion and flooding processes, the spread of heavy metals or other contaminants via dump leaching or mining tailings, dam failures, etc.

This chapter focuses attention on the "damage and risk" side of the phenomena rather than on their generating processes or on the full risk assessment/management (RA, RM) process: GHZ processes may hit targets T with a probability p and generate damages (losses) or consequences C. Thus, the risk generated on T is R = p*C.
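
As a minimal numerical illustration of this governing equation (a sketch only; the probability and the monetized consequence below are hypothetical placeholders, not values from this chapter), the risk carried by a single target can be computed as follows:

```python
# Minimal sketch of R = p * C for a single target T.
# Both numbers are hypothetical placeholders used purely for illustration.
p_event = 1e-4            # annual probability of the GHZ process hitting target T
consequence_musd = 250.0  # consequences on T, expressed in a unified monetary metric (M$)

risk_musd_per_year = p_event * consequence_musd
print(f"Risk on T: {risk_musd_per_year:.3f} M$/year")  # -> 0.025 M$/year
```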

Damage (consequence) evaluations are often cursorily discussed by geoscientists, and obviously so, as those themes are not within their direct scope of knowledge. Scientists are of course more interested in studying the science behind processes rather than their potentially grim outcomes. Thus, damage evaluations are often oversimplified by predictive studies, and risks are poorly understood.

This chapter first reviews the technical glossary of risk, damages, and crises using real‐life examples. Various concepts including ways to evaluate probabilities, frequencies, and their relationship are then approached. Further sections bear on multidimensional consequences analysis and risk tolerance. The discussion includes accident consequences and risk perception; identical consequences generating different behavior; and third‐party hazards possibly requiring strategic shifts if common mitigations are impossible.

Ethical (geo‐ethical) issues linked to GHZs caused by human activities and their mitigation decisions are discussed with special focus on possible unintended consequences and the sometimes excessive, sometimes lacking (blindness) perception of risks by the public, corporate, and public officers. The root cause of some odd human behaviors when facing risks (biases), like the survivor bias, is then discussed.

We present below a navigation help for the reader, which (a) summarizes some very general rules to consider when building an RA model [1, 2] and (b) displays what is included in this chapter and where to find it. The items marked "Literature" are an invitation for the reader to search the vast body of specialized literature on specific RM themes that could not be included in this chapter for obvious space limitations.

Any RM model should satisfy all five roles of system science, namely:

  • describe the physical world [Glossary (Section 2.1); System Description (Section 2.2)];

  • portray the results of interactions among a few of its components [Consequence Analysis (Section 3.2)];

  • propose a generic design [Mitigations (Section 4.5.2 and Literature)];

  • be a constituent of “science of complexity” as it enlarges the domain of demonstrable results in the service of humanity [Interdependencies (Section 4.4)]; and

  • be actionable, as it has linguistic clarity and a model that suggests a clear direction of actions essential to resolve emergencies (Decision‐Making: Literature).

Any RA/RM model should also address some of the Complexity Laws [2], like:

  • not require humans to process more than three components at a time (triadic constraint) [e.g., Probability (Section 3.1), Consequence, Risk and Perception (Section 4)];

  • render a parsimonious description of any emergency [Unified Scale of Consequence (Section 3.2)];

  • address the challenge of vertical incoherence as it can show the right aggregated level to decision‐makers at different organizational levels [Interdependencies (Section 4.4), Communication, Information Dashboards (Section 2.3.2) and Literature];

  • consider all relevant factors of emergencies in a balanced fashion (Rational Risk Prioritization: Literature) [3].


2. Need for technical glossary, clear semantics of risk, damages and crises

Over two decades of day-to-day work in the space of GHZs, risk assessment (RA), and risk and crisis management (RM, CM) has shown that many problems, oftentimes leading to ill-conditioned decision-making, squandering of private and public resources (money), misaligned expectations, boycotts, protests, and even turmoil, arise from: (a) an unclear glossary, (b) poor semantics, (c) poor (or absent) definition of the system(s) to be assessed, and (d) use of misleading methods or poor understanding of extant methodologies.

2.1. Glossary and semantics of risk statements

In our capacity as third-party reviewers, we encounter many "classic" missteps in industrial or governmental agencies' RAs around the world [4]. In this chapter, we expose a number of these missteps following the logic described below.

  • Misstep
  • Erroneous statement
  • Rule (numbered)
  • Quick fix
  • Correct example

We will start with a misstep stemming from the use of confused technical glossary.


Misstep example: We have heard people talking about "risk" as a synonym for probability or hazard. This becomes extremely confusing when modeling a system and discussing which risks are manageable or unmanageable and how to address them. Hazards, in short, are anything that can go wrong. Hazards have a probability of occurring and potential consequences. A risk is the hazard's probability multiplied by its consequence (p*C).

Erroneous statement: "The risk of dam A breach is 10⁻⁴." This is wrong because 10⁻⁴ is a probability of occurrence, not a risk. Risk is p*C, as noted in the Introduction and above.

Rule 1: Always use a well-defined technical glossary throughout studies and presentations. Do not accept improper or unclear definitions from anyone. Do not try to guess what other team members mean.

Quick fix: Always base your assessment on a well-defined glossary, for example, see http://www.riskope.com/knowledge-centre/tool-box/glossary/. There are many others available in the literature.

Correct: The risk of dam A breach is characterized by a probability of 10⁻⁴ and consequences of … casualties, … environmental damages, and … lost infrastructure, business interruption, etc. As you can see, "talking risk" is not that simple: despite the simplicity of the governing equation, there are numerous nuances. Shortcuts are deadly as they create confusion.


If one needs proof of the above statement, it is enough to look at the questionnaires that the large consulting companies of this world send out to managers to "get the temperature" of RM, without specifying the glossary first. The replies they get, often bound up in nice "yearly reports" on the state of risks in the world, are misleading to say the least!

Due to this ubiquitous confusion, modern decision-makers (DMs) reportedly feel the need to improve the measurement and communication of risks in various areas, as stated in a recent international consulting company report. Better evaluation/definition of key risks and enhanced definition of organizational purpose and values are at the top of the listed concerns. DMs want better communication, and hopefully politicians will too, if they are not too concerned by electoral pressure.

DMs seem to realize that RAs based on failure modes and effects analysis (FMEA) and other probability impact graphs (PIGs, see Figure 1) only clutter their horizon and leave them struggling; they also realize that it is difficult to communicate organizational purpose and values if one does not know the risks to which the company exposes itself or exposes the public. No wonder "modern" DMs are looking to improve measurement and risk communication; let us hope it is not too late from an ANPcenic point of view.

Naming risks by their consequences (e.g., "environmental risks," frequently used to define mishaps that could lead, among other things, to environmental consequences) constitutes a practice that is misleading for the analyst and confusing for the user/DM.

Figure 1.

Example of Probability Impact Graph (PIG).

2.2. Defining the Anthropocenic system to be assessed

As stated in Section 1, the prehistoric and historical evolution of humankind has meant modifying environments to appropriate resources. The timescale of this modification of the environment is briefly reviewed by looking at "the state of the world" thousands of years ago, then 1000, 500, and 200 years ago, and by imagining the requirements of those times and their implications for the long term. We have selected a mining operation that would have been built then [5].

2.2.1. A brief review of time

If our mining company had existed:

Forty‐five thousand years ago, we would have been mining hematite at Bomvu Ridge in Swaziland. Our mine would have produced insignificant waste and no societal concerns.

Three thousand years ago, we would have been bidding for the rights to mine silver at Laurion (Laurium), just east of Athens. Slaves would have worked at our command to provide the money to build the Parthenon.

One thousand years ago, we would probably have been asked to sponsor the crusades. Total world population was around 300 M souls. Languages used then have since disappeared.

Five hundred years ago, we could have learned about Mr. Columbus’ recent discovery. Total world population had increased to 500 M souls.

Two hundred years ago, we would have been concerned by the Battle of Trafalgar, Napoleon's retreat from Moscow, and the Battle of Waterloo. The world's population had reached 950 M souls.

If we had closed our mining tailings facility 1000, 500, or 200 years ago, would we have expected that the tailings (mining wastes) should still be right there where we dumped them, unattended, not maintained, not monitored? Oh, we were forgetting one thing: had we left a Standard Operating Procedure and Maintenance Manual for "future generations," the manual would now be in a language difficult (or impossible) to understand. The documents might have turned to dust or have been heavily damaged. In addition, if we think digital transcriptions of our documents might have saved us, the solar flare of 1859 (Carrington event) would probably have erased them all, if fires, floods, and wars had not done so earlier.

Keep the information above in mind as you go through the rest of this chapter.

2.2.2. Physical system model

Defining the system to be assessed is the key to a meaningful RA and yet one of the most neglected phases, under the common excuse that ANPgenic systems are "too complex." We are not denying complexity, but people tend to cross their arms and "do nothing" because they encumber themselves with clouded, preconceived opinions. We all know that ISO and other international and national risk codes stress that the context of the study and the environment in which a system operates have to be described. However, many times we have seen project teams and facilitators embark on FMEAs or other risk-related endeavors without taking the time to rigorously describe the system's anatomy and physiology. Although it may seem inappropriate to use medical terms, you will see they are very useful for illustration purposes.

A brief history of medicine: In ancient times, human health (the system of interest in medical science) was in the hands of shamans and other medicine men (and women). Visibly, the understanding they had of the human body was not satisfactory; they needed to understand more. Hence, for example, Leonardo da Vinci started to perform anatomical studies (dissection was prohibited by the Church and the law in those times) and recorded his acute observations in the famous sketches still displayed in various museums around the world. Those studies delivered a first understanding of human anatomy. A few more centuries of research brought us to the point of detecting genetic mutations, hereditary diseases, and much more. Only in the early 1900s, thanks to S. Freud, did we start treating psychopathologies with psychoanalysis and then begin to understand the link between physical ailments and psychological troubles.

A brief history of RA methods: This story is faster and shorter than the prior one. Most common-practice tools have military origins, as they were used to increase weapons reliability. Two decades ago, industry was still using the so-called insurance gals to transfer risk, without any serious evaluation, to insurance companies willing to take a bet on them. Then, a series of GHZ and man-made mishaps, public outcry, and political pressure transformed "risk" into a fashionable buzzword. RA and RM were nice words to say, and common practice percolated down to the lowest common denominator, using FMEA and other methods and models to administer a social "placebo." Accidents were still occurring, failures were still considered "unforeseeable," and potential consequences were still looked at cursorily and in a compartmentalized way. No one was carefully describing the system's anatomy and physiology in industrial and GHZ RAs. It was the time of open-risk workshops where participants were able to voice concerns and fears without having dissected the system under consideration, pretty much like we used to do in medicine before understanding anatomy and physiology. Then, large-scale terror acts (September 11, 2001) occurred on U.S. soil and, in 2008, there was a global recession. All of a sudden, new words were coined to describe what we all knew very well already: poorly made RAs do not bring any value to projects and society. The talk was all about systemic risk, nonfunctioning models, Black Swans (BS, see Section 3.1), fragility, complexity, etc. All of those efforts just to hide one simple fact: unless we take the time and effort to properly define our systems, we cannot perform any serious analysis on them! The parallel is striking: if we do not know the human body's anatomy and physiology, any surgery or drug will have a very poor rate of success, or be detrimental. Consider GHZs the diseases of our ANPcenic systems and you will be ahead of the game.


Misstep example: Putting together a risk register without performing the system's functional analysis and defining success/failure criteria; considering engineering systems as self-sufficient, forgetting their interactions with other systems/subsystems, the environment, and the world.

The largest and costliest mistakes are generally made when (poorly) defining the system. You have to understand the context of the study and what constitutes the system you have to assess.

Erroneous statement: Let's perform an RA of the Lake B environment, but let's not look at what could happen upstream in the contributing rivers and creeks.

Rule 2: Always perform a functional analysis (as is required, but very seldom performed, when starting an FMEA study). Be sure to take into account cascading failures and inter-system interdependencies. The definition of the success/failure criteria is fundamental to understanding both the hazard and the system.

Quick fix: Determine the limits of the system and the logical connections between its components. Then state why inclusion/exclusion decisions were made, as part of increasing the study's transparency.

Correct: Let's include in the RA of the Lake B environment all the contributing rivers and creeks, the contamination that comes from aerosols and air pollution, and what could be mobilized (e.g., bottom purges of reservoirs that could have accumulated mercury or dioxins in sediments).
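
As an illustration of this quick fix, the sketch below (with hypothetical component names and links, not data from an actual Lake B study) shows how the system boundary and the logical connections can be made explicit before any hazard identification:

```python
# Hypothetical sketch: represent the system as a directed graph of components
# and trace everything that can influence the assessed element (here "lake_B").
upstream_links = {
    "lake_B": ["river_1", "creek_2", "air_deposition"],
    "river_1": ["reservoir_A"],          # bottom purges could mobilize sediments
    "creek_2": ["mine_waste_dump"],      # leaching path
    "air_deposition": [],
    "reservoir_A": [],
    "mine_waste_dump": [],
}

def contributors(component, links):
    """Return every component whose failure or emission can reach `component`."""
    found, stack = set(), [component]
    while stack:
        for parent in links.get(stack.pop(), []):
            if parent not in found:
                found.add(parent)
                stack.append(parent)
    return found

print(contributors("lake_B", upstream_links))
# -> the RA boundary must include reservoir_A and mine_waste_dump, not just the lake itself
```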

Misstep example: Starting the RM process by brainstorming all possible risks with the crews, without proper preparation, most of the time leads to mislabeling hazards or concerns as “risks” (See Rule 1).

Erroneous statement: Fire is a significant risk in this facility. This statement misses two major points. What fire: a wildfire (like the one at Fort McMurray, Canada), an electrical fire (man-made), or arson? Where, and hitting what?

Rule 3: Always start by identifying hazards using threats-to and threats-from. Perform strong logic checks on your risk definitions.

Quick fix: The hazard identification process is an important step, but not the first one. Only once all the logical connections are established can you be sure that you have been as methodical and exhaustive as possible.

Correct: There are several hazards potentially causing fires at this facility: natural conditions (including climate change), operational mistakes by subcontractors (actually the lack of instruction of the subcontractor would be the root cause), and criminal/terror activities. The potential targets of each would be.... the consequences of each at every potential target would be …


Working properly with a robust logic leads to a significant increase in the number of records in the Hazard and Risk Register (HRR), the database that should contain all your RA information based on your physical model. The efficiency of off-the-shelf, all-purpose worksheet software is limited, and specific software solutions have to be sought.


Misstep example: If a hazard is listed in an HRR without a "threat-to," it is impossible to assess its consequences. The same hazard (say, a rock falling) can lead to the definition of widely different risks because the consequences may vary in time and location.

Erroneous statement: "Traffic accidents" in this industrial area have consequences ranging from … to … This statement is wrong because the hazard will have different consequences depending on what it impinges on and what generates it (construction equipment hits a pipeline, snow removal knocks out a data telemetry station, etc.). In many cases, the consequences (or the hazard) will be counted again in another scenario.

Rule 4: Check your risk statements (record per record in your hazard and risk register) to avoid double counting.

Quick fix: Always link a hazard to a component of the system. If various hazards can hit the same component, or if the same hazard can hit many components, each one of them has its own line in the hazard register.

Correct: Traffic hazards by forklifts on the acid pipeline have consequences ranging from … to …; traffic hazards by subcontractors' trucks …, etc.
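
A possible way to structure HRR records so that each hazard is tied to one component ("threat-to") and double counting is flagged is sketched below; the field names and the example records are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HRRRecord:
    hazard: str       # what can go wrong (threat-from)
    component: str    # what it hits (threat-to)
    p_range: tuple    # (optimistic, pessimistic) annual probability
    c_range: tuple    # (min, max) consequence in a unified metric, e.g., M$

# Hypothetical records for illustration only.
register = [
    HRRRecord("forklift traffic", "acid pipeline", (1e-3, 1e-2), (0.5, 5.0)),
    HRRRecord("subcontractor trucks", "acid pipeline", (1e-4, 1e-3), (0.5, 5.0)),
    HRRRecord("forklift traffic", "telemetry station", (1e-3, 1e-2), (0.05, 0.2)),
]

# Rule 4 check: the same (hazard, component) pair must not appear twice.
keys = [(r.hazard, r.component) for r in register]
assert len(keys) == len(set(keys)), "double counting detected in the HRR"
```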


In Section 4.4, we discuss third‐party‐induced risks through interdependencies. As we will see, those constitute a special case of threat‐from analysis.

A key element for the success of an RA is the definition of the success criteria (i.e., the RA's metric). Not achieving the success criteria means being in a failed state. A clear definition of success/failure constitutes the basis for rational RAs, as it allows one to understand what constitutes a "failure" or an undesirable event warranting the recording of a line in the HRR. Without a clearly defined success criterion, any attempt to evaluate the risks that matter in a considered system will be misleading at best. For example, in a GHZ RA, the success criterion for a rockfall study was defined as: no casualties due to rockfall. However, as zero risk does not exist, the success criterion should be defined probabilistically: success is reducing the casualty probability to 10⁻⁵/year or lower.

The success criteria should always be complemented by a measure of the tolerance. If the tolerance covers a range of possible consequences (e.g., from 1 casualty to 100 casualties) coupled with their tolerated likelihood, then a tolerance criterion is defined, a notion we will develop in Section 4.5. Success criteria may become rather complex, depending on the aim of the assessment. A more sophisticated, four‐dimensional success criterion could be suggested as follows: (a) reduce casualties as described above, (b) reduce closure time of the path at the toe to max 1 day/month, (c) adapt to climate changes (runoff surges on the slope), and (d) survive for at least 5 years without the need for capital expenditures and extraordinary maintenance.
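
A minimal sketch of how such a probabilistic success criterion could be checked against an evaluated probability range is given below; the threshold is the 10⁻⁵/year value quoted above, while the evaluated range is a hypothetical placeholder:

```python
# Sketch: compare an evaluated casualty-probability range with a probabilistic
# success criterion. The evaluated range is hypothetical.
success_threshold = 1e-5          # tolerated casualty probability per year (from the text)
p_casualty_range = (4e-6, 3e-5)   # hypothetical optimistic/pessimistic estimates

if p_casualty_range[1] <= success_threshold:
    print("success criterion met over the whole uncertainty range")
elif p_casualty_range[0] > success_threshold:
    print("failed state: even the optimistic estimate exceeds the criterion")
else:
    print("uncertain: the range straddles the criterion; refine the estimate or mitigate")
```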

2.3. What is a risk assessment: do pigs fly?

The ubiquitous common-practice RA methods are FMEAs and probability impact graphs (PIGs, see Figure 1). Due to the common mistakes described above (and further down in this chapter) and their misleading nature, these methods do not allow one to grasp the full, true story of the multihazard (or convergent) RAs that should be an integral part of responsible DMs' lives. PIGs present at best a colorful chart, usable as a rough first estimate [6], but not sufficient for complex decision-making for the critical infrastructures of our modern society. People often accept PIGs uncritically and trustingly until something goes wrong. As shown above, problems arise from the use of an unclear glossary and the basic structure of the HRR, and continue with simplistic definitions of probabilities and censored consequences. Experience has shown that PIGs often end up in a major confidence crisis, possibly leading to societal or regulatory opposition [7–9].

2.3.1. Common practice


Misstep example: PIGs are usually drawn with symmetrical color schemes (Figure 1 is symmetrical around the yellow-colored diagonal), or almost symmetrical ones, in such a way that high-consequence, low-probability events have the same risk "color" as low-consequence, high-probability events (the extremes of the yellow diagonal in the case depicted in Figure 1). Following that scheme leads, for example, to prioritizing the risk of an asteroid obliterating your house (and family) in the same class as your catching a cold, a prioritization that would be considered misleading by most readers and DMs. Symmetry implies that a "Fukushima scenario" (catastrophic impact, very low probability) is considered equal to your next flu (low consequence, high probability).

Erroneous statement: Any prioritization read off a symmetrically colored matrix. The matrix cannot be colored in ways that lead to misleading prioritization; symmetrical coloring is a clear flag for erroneous statements in any interpretation based on that prioritization.

Rule 5: When using a risk matrix (Probability Impact Graph, PIG) for the risk prioritization (usually stated as a specific color), there is the need to check that the colors “match” real‐life (corporate or societal) expectations.

Quick fix: Use extreme cases to see if the coloring scheme still makes sense. If you really have to use the PIG representation, alter the coloring scheme until it makes sense. Do not try to guess a tolerance threshold: specific studies are required to develop a defensible one.

Correct: Matrices cannot be symmetrical. Consider all extreme cases and if you really feel like “coloring the cells,” pay attention to what the colors will tell your users.

Misstep example: Developing classes for the consequences and probabilities of your risk prioritization. Adding another color or class will not solve the systemic binning error.

Erroneous statement: In order to increase the "precision" of the studies, the Geologic Bureau asks that all permits be developed using a 5 × 5 matrix (1 = negligible, 2 = small, 3 = medium, 4 = large, 5 = catastrophic) instead of the 3 × 3 matrix (1 = small, 2 = medium, 3 = large) used to date. It would be interesting to ask how they are going to compare "old" studies with "new" ones, and what 1 through 5 mean today compared to 1 through 3 yesterday.

Rule 6: Do not bin.

Quick fix: Above all, avoid using indices; stay quantitative. The math is simple, and the RA will be tremendously improved if your "risk dots" are in their proper p, C positions and not just binned in. This fix will not only allow for rational prioritization but also enable insurance-limit computations and avoid the overwhelming effect of bin overcrowding.

Correct: You do not need to develop classes and indices; use "real" ranges of values: probabilities vary between 0 (cannot happen) and 1 (certain to happen), and each one of your hazards can be allotted a range (see Section 3.1). Consequences range between 0 and infinity (in a metric we will define in Section 3.2), and you can evaluate a range for each hazard record. Do not color "cells," as you will soon be able to develop a tolerance level (Section 4.5).
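
The sketch below illustrates this quantitative, unbinned approach: each record keeps its p and C ranges, risks are computed as ranges, and prioritization is done against a tolerance value. All records and the tolerance value are hypothetical placeholders; see Section 4.5 for how a defensible tolerance is actually developed.

```python
# Sketch: prioritize records quantitatively by their risk range (p * C),
# without binning into matrix cells. Records and tolerance are hypothetical.
records = {
    "rockfall on access road": ((1e-3, 1e-2), (0.1, 2.0)),    # (p range /yr, C range M$)
    "tailings dam breach":     ((1e-4, 1e-3), (50.0, 500.0)),
    "minor slope ravelling":   ((0.2, 0.5),   (0.01, 0.05)),
}
tolerance_musd_per_year = 0.05  # placeholder tolerance, see Section 4.5

ranked = sorted(
    ((name, p[0] * c[0], p[1] * c[1]) for name, (p, c) in records.items()),
    key=lambda item: item[2],  # rank by the pessimistic risk estimate
    reverse=True,
)
for name, r_min, r_max in ranked:
    flag = "above tolerance" if r_max > tolerance_musd_per_year else "tolerable"
    print(f"{name}: risk {r_min:.4f}-{r_max:.4f} M$/yr ({flag})")
```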


2.3.2. What is needed

A recent decision by an Environmental Review Board in Arctic Canada [10] quoted the following five requirements for a socially and technically acceptable (GHZ) RA.

  1. Compilation of a proper glossary containing a description of all the terms used in the project and its development, especially those that have a common use differing from the technical meaning (such as "risk," "crisis," "hazard"), in compliance with ISO 31000.

  2. Definition of the project context in compliance with ISO 31000, including all the assumptions on the project environment, chronology, etc.

  3. Properly defined HRR covering:

    • Clearly defined system of macro‐ and subsystems/elements and their links describing for each one of them: (a) expected performances, (b) possible failure modes, (c) quantification of the related ranges (to include uncertainties) of probabilities evaluated as numbers in the range 0–1 (mathematical characterization) with a clear explanation of the assumptions underlying their determination, and (d) associated magnitude of the hazards and related scenarios.

    • An independent analysis of failure/success objectives.

    • A holistic consequence function integrating all health and safety, environmental, economic and financial direct and indirect effects.

    • Applicable published correlations and information.

  4. The RA is expected to use a unified metric showing consequence as a function of all health and safety, environmental, economic, and financial direct and indirect effects. This will be done in a manner that allows a transparent comparison of holistic risks with the selected tolerability threshold (see below).

  5. Consequences will be expressed as ranges, to include uncertainties. When evaluating the consequences, the RA will:

    • Explicitly define risk acceptability/tolerability thresholds, in compliance with ISO 31000 international code. These will be determined in consultation with potentially affected communities, using a unified metric compatible with the one described above for consequences.

    • Risks and tolerability or acceptability will be developed separately, in such a way as not to influence or bias the judgment of the assessors or evaluators. Risks will then be grouped into "tolerable" and "intolerable" classes. The risks in the intolerable group will be ranked as a function of their intolerable part. Mitigation efforts will be allotted proportionally to that ranking.

We will now proceed with a systematic discussion of the requirements and their meaning.


3. Characterizing risks

In the ANPcene, insurers are facing "new" challenges when insuring against GHZs, especially those caused by human activity. They have realized that, because of the dynamic evolution, the usual actuarial point of view on risk faces significant challenges and can be misleading. The indiscriminate use of force majeure (FM) and insurance denial to protect themselves is actually detrimental to their business and their clients. What an epiphany! Looking only in the rear-view mirror while driving is indeed going to complicate the steering of the vehicle! Now, insurers have always worked like that, i.e., using past data (statistics) to evaluate their risks and business opportunities, and they have already got their share of misery from climate change and other GHZs. GHZs caused by ANPcenic fast-track evolution are typically an arena where using actuarial data and statistics can only be wrong and expose everyone, including the insurers, to enormous risk overexposure.

Unfortunately, insurers have asked hazard specialists (geoscientists, weather people) for help in solving their conundrum, a mistake we oftentimes see occurring in various business spaces. As mentioned earlier, hazard specialists want to measure what they know, but they often confuse hazards with risks, and by managing hazards instead of risks, they end up being ineffective or inefficient, i.e., squandering money, not getting results, or leading to unjustified insurance denial. Rule 1 is of paramount importance in mitigating this.

3.1. Evaluating probabilities, frequencies, their relationship

In this chapter, we do not discuss qualitative evaluations as we consider them not aligned with modern societal needs of transparency and rationality.

We start this section by looking at some particular values, which are often encountered in the hazard-risk literature. The first one is the "Act of God (AoG)," a term that entered contract practice more than a century ago to indicate unforeseeable and unmanageable events, such as hurricanes and earthquakes, which trigger the activation of the FM clause. The definition of an AoG can be understood with an example: if we consider a commercial contract for a facility in Salt Lake City, one could have said that a tornado in Salt Lake City was an AoG (there was scientific consensus that a tornado was not a credible, indeed an "impossible," event in Salt Lake City) until one happened on August 11, 1999. From that point on, a tornado in Salt Lake City became a rare, but credible, event and hence should not be considered an AoG anymore.

Of course at this point, terms such as “credible event (CE)” have to be clarified, as their definition is intimately linked with the limiting value of FM.

Figure 2.

Risk reduction vs. mitigative costs (investments), “acceptable” mitigative thresholds, and acceptable residual risks. Zero risk is a theoretical concept only.

Continuing our discussion: in recent times, "standardized levels of risk reduction/mitigation" have been formulated in various industrial spaces, and at least three of these definitions are now in common use among analysts (http://www.riskope.com/2016/07/07/geohazards-probabilities-frequencies-and-insurance-denial/). These three levels of risk mitigation also represent a convenient way to elude the explicit tackling of risk tolerance (see Section 4.5), especially when the delicate theme of human life has to be dealt with. However, these standardized levels of risk reduction can also be seen as a way to define "state of the art" practice. Anything below a predefined ALARA, ALARP, or BACT level can be considered "negligent" (NB: in recent years, it has been noted that public opinion tends to consider negligent even mitigative levels that are above these limits, thus leading to image risks even when "reasonable mitigative behavior" is followed by corporations and governments) (Figure 2).

Here are the definitions of the three “standard” levels of mitigation:

ALARA: ALARA means as low as reasonably achievable [11] and can be used for rockfalls and landslides along roads.

ALARP: ALARP stands for as low as reasonably practicable, a term often used in safety-critical and high-integrity systems. For a risk to be ALARP, it must be possible to demonstrate that the cost involved in reducing the risk further would be grossly disproportionate to the benefit gained. It is a best-common-practice judgment of the balance between risk and societal benefit. It could be used for GHZs impinging on critical infrastructures.

BACT: BACT stands for best available control technology. It can be used for toxic dumps and tailings storage facilities.

No matter which mitigative level is adopted as "standard, nonnegligent practice," a fundamental step is to define against which events it is necessary to mitigate (because they are "credible" events) and before which events we humans have to stay humble, as they can be considered AoGs.

As for risk, the technical definition of credibility differs quite substantially from the "colloquial" definition. Industries where major accidents/events are a concern generally define credible accidents as accidents within the realm of possibility (i.e., probability higher than 10⁻⁶/year) and with a propensity to cause significant damage (at least one fatality). This concept comprises both the probable damage caused by an accident and the probability of its occurrence. As the threshold value of 10⁻⁵ is a generally accepted value used in seismic, geological, and other sciences to define "maximum CEs (MCEs)," it can be assumed, for the optimization of the FM clause, that events with a probability of occurrence of one in a million (10⁻⁶) or less belong to the AoG category, whereas events with a probability of at least one in a hundred thousand (10⁻⁵) are credible and, more importantly, although rare, foreseeable and should be mitigated.
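
A sketch of this screening logic, using the 10⁻⁶ and 10⁻⁵ thresholds quoted above, is given below; treating the band between the two thresholds as a judgment call is our own simplifying assumption:

```python
# Sketch of the FM/credibility screening described above.
def classify_event(annual_probability):
    """Classify an event for FM-clause purposes based on its annual probability."""
    if annual_probability <= 1e-6:
        return "Act of God (beyond credibility; FM may apply)"
    if annual_probability < 1e-5:
        return "between AoG and credibility thresholds (judgment required)"
    return "credible, foreseeable event: mitigation expected"

for p in (1e-7, 3e-6, 1e-4):
    print(p, "->", classify_event(p))
```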

As mentioned earlier (Section 2.2.2), just around the time of the 2008 recession, the "Black Swan (BS)" buzzword phenomenon exploded. It caught the imagination of many, but in our mind, it brought forward a slanted image. Let's explain the point: BSs were originally defined [12], and are currently defined on the web, as "an unpredictable or unforeseen event, typically one with extreme consequences." Unpredictable can be assumed to mean beyond credibility, or at AoG level, hence one in a million (10⁻⁶) or less. Interested readers can study the Blackett report [13] to find a balanced set of approaches to low-probability risks. Thus, when one reads that "geopolitical BS events, such as the Arab Spring and the Japanese earthquake, have further complicated the market dynamics," we are reading an illogical statement. The same applies when we read about tailings dam breaches being considered BSs: they cannot be, as their rate of failure is 10⁻³ to 10⁻⁴, at least two orders of magnitude more likely than an AoG and definitely in the credible range. In addition, if major economic turmoil occurred 17 times in the last two centuries (a rate of occurrence of 17/200 = 0.085/year), it is also difficult to consider it a BS. However, societies have very short, selective memory, so we all thought that 2008 was an unpredictable and unforeseen event despite such a high recurrence of similar events. That's WRONG! Furthermore, social scientists [14] have argued that major disasters do not occur "out of the blue" but incubate over a period of time with potentially identifiable patterns of attributes. The thesis was not new, and its germinal concepts started between the 1930s and the 1970s, with systematic studies of industrial accidents conducted by various insurance companies. We have found in our day-to-day practice numerous cases where GHZs seem to develop following this "incubation" principle, provided a serious record of "near misses" is maintained.

In the case of the Fukushima earthquake, the sea defences were reportedly designed for the MCE, but there are also voices claiming that, a priori, the height of the tsunami at Fukushima might have been regarded as inconceivable. However, if our lack of knowledge about such rare events had been admitted, then, in view of the spectrum of potential consequences (an uncensored and unbiased spectrum, of course), there should have been an incentive to ensure that the design was robust to that lack of knowledge. Reportedly, the designers trusted only one line of defence (hence, the project was not robust or reliable, but fragile), leaving the electrical controls in the underground of the plant, ready to be flooded (yet, as engineers, we know that trusting one line of defence, or the properties of one material, device, etc., is not good sense). Considering Fukushima a BS was/is dead WRONG, and the resulting consequences of dismissing the uncertainties became apparent.

The "availability heuristic" [15] is a very well-known human cognitive bias tainting decisions under uncertainty. That bias can explain why the 2008 recession was considered unheard of, a BS: simply because most people did not remember (or were not even born in) 1929! The BS "fad" is indeed based on humans having a "short memory" and considering the latest events as "unique." Sometimes we are forced to use availability heuristics because available data are indeed very scarce and only recently gathered, but reliable statistical evidence will systematically outperform "intuition" when "looking backwards" in time at past events to draw conclusions. Looking backwards, however, is not enough and is actually critically limiting and incomplete when we are confronted with managing risks. A good RA, especially a GHZ one, has to "look forward," examining "classic" scenarios and hypothetical ones that have not yet occurred (or not yet occurred with larger magnitudes) before making management decisions. As a matter of fact, Kahneman and Tversky [15] have explored in detail how human judgment can be distorted when making decisions under uncertainty: humans tend to be risk-averse when facing the prospect of a gain, and paradoxically risk-prone when facing the prospect of a loss (even if the loss is almost certain to occur)! Thus, using improper methods like PIGs (see Figure 1), which will almost surely lead to confusion, losses, and poor planning, sits well with "mainstream" human nature.

Thus, we can state that the range of probabilities we need to define spans from 10⁻⁶ to 1, that is, from the threshold of credibility to "certainty." In performing probability estimates, we need to remember that it is better to be roughly right than precisely wrong, meaning that uncertainties should always be part of the estimates and be explicitly stated. Probabilities can be evaluated through probabilistic models, statistics, and the direct encoding of expert knowledge. The detailed discussion of these methodologies is not within the scope of this chapter but can be found in the technical literature [16, 17].


Misstep example: Giving one precise value for the probability.

The past can never be assumed to equal the future. At best, it can be used as a point estimate.

Erroneous statement: Based on the historic series of flooding in this area, it is assumed that flooding to +4 m will occur every 50 years.

Rule 7: Always consider a range of probabilities in order to include the range of uncertainties.

Quick fix: Uncertainties will always exist. Consider the limits of our human capability to estimate events. Give one pessimistic probability, usually common-cause-failure based (i.e., for the case where all redundancies fail because of a common flaw), and one optimistic probability with the foreseen mitigation active. If probabilities are transparently considered uncertain, then a Bayesian update mechanism can be implemented when new data become available.

Correct: Based on the historic flooding series in the area, it is assumed that flooding to +4 m has a yearly probability ranging between x and y. Thus, it is possible to evaluate (Poisson process) that the probabilities of occurrence of 1, 2, or 3 floods at +4 m are x1–y1, x2–y2, and x3–y3, respectively.
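
A minimal sketch of the Poisson evaluation suggested in the correct statement follows; the optimistic and pessimistic yearly rates (standing in for x and y) and the exposure window are hypothetical placeholders:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k events when the expected number of events is lam."""
    return lam**k * exp(-lam) / factorial(k)

# Hypothetical optimistic/pessimistic yearly rates for flooding to +4 m
# (placeholders for the x and y of the text), over a 10-year exposure window.
rates, horizon_years = (0.01, 0.04), 10
for k in (1, 2, 3):
    lo = poisson_pmf(k, rates[0] * horizon_years)
    hi = poisson_pmf(k, rates[1] * horizon_years)
    print(f"P({k} floods in {horizon_years} yr) ranges roughly {lo:.3f}-{hi:.3f}")
```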


3.1.1. Quantitative by analogy (applied to L’Aquila earthquake)

NB: Data for this summary have been gathered through media and publicly available records. Details we consider “irrelevant” to this discussion have been omitted because of space limitations.

In 2009, the city of L'Aquila, located in Italy, was hit by an earthquake. The city featured many historic public buildings and antique residential structures that had not been retrofitted, Italy being a country where the retrofitting of old (privately owned) structures to meet new seismic safety criteria is reportedly not enforced (decree OPCM 3274/03, art. 2, comma 6). A 1999 study on the vulnerability of public, strategic, and "special" buildings had indeed shown critical vulnerabilities. The area is seismic, as witnessed by major earthquakes recorded in 1349, 1452, 1461, 1501, 1646, 1703, 1706, 1958, and 2009 (eight major events in approximately 650 years, or 0.0123 (1.23%) per annum, excluding the 2009 event). This last quake led to 309 casualties, 1600 wounded (200 very severely), 65,000 people evacuated out of the city, and damages of over 10 B€. Prior to the tragic event, a "swarm" of foreshock earthquakes at almost 100 times the average rate was recorded in and around L'Aquila. The swarm triggered a crisis status due to the public's panic, further fueled by independent scientists' opinions. The swarm had started in December 2008 (magnitude M = 1.8), then continued in January with M = 3, gradually and continuously evolving with increasing intensity and frequency up to the date of the major event.

For the sake of our discussion, that 1.23% per annum, accelerating very significantly and increasing in intensity, has to be placed in comparison with other phenomena: the world portfolio of hydraulic dams has a rate of failure near credibility (10⁻⁵ to 10⁻⁶), and major tailings dam breaches occur at 10⁻³ to 10⁻⁴. There was clearly something significant occurring, and an official government body called the National Commission for the Forecast and Prevention of Major Risks had six top officers participating in a meeting with the public on March 31, 2009, six days before the nefarious earthquake (magnitude 6.3/Richter 5.9) and a day after the latest and strongest event in the swarm.
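
The order-of-magnitude comparison made above can be reproduced with a few lines; the base rate is the one derived in the text, while the benchmark rates quoted above are taken at one representative value each:

```python
from math import log10

# Long-term base rate for major earthquakes in the L'Aquila area
# (from the text: eight major events in roughly 650 years, excluding 2009).
base_rate = 8 / 650   # ~0.0123 per year (1.23%)

# Representative benchmark annual failure rates quoted in the text.
benchmarks = {"hydraulic dams": 1e-5, "tailings dams": 1e-3}

for name, rate in benchmarks.items():
    print(f"{name}: base seismic rate is ~{base_rate / rate:,.0f}x higher "
          f"({log10(base_rate / rate):.1f} orders of magnitude)")
```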

The team spoke directly with the public rather than via the civil protection department. The public's concerns were entirely dismissed and, as a result, some of the town's residents changed their usual behavior of seeking shelter outside when tremors happened, staying indoors instead.

The team was brought to trial for manslaughter in September 2011 for the advice they gave in that meeting.

Scientific American [18] rightly wrote that “this was not a case against science, the judge recognized the unpredictability of such an event already in the indictment, but a judgment against the failure of scientific communication (of risks).”

It is time that geoscientists, seismologists, and engineers, who are very capable and respectable hazard specialists, recognize that RAs are an area requiring specific knowledge. RAs should be prepared by risk specialists; hazard knowledge constitutes at most half of the equation.

Operating by comparing the long-term rate of occurrence with the accelerated frequencies, and using analogies with other catastrophic occurrences (dam breaches), would have helped to rationally characterize the ongoing changes and the uncertainties. Armed with that knowledge, it is likely that the communication to the public would have been different. Reportedly, the second-instance tribunal ended up dismissing the accusations. It is not our role to discuss the judges' sentencing.

3.1.2. Quantitative by model(s)

A large alpine landslide was the object of a quantitative RA, which used a probabilistic slope-stability method [19] to show which failure modes were most critical in terms of probability of sudden accelerations (paroxysms) and consequences to the transportation corridor and the watercourse lying at the toe (Figure 3). Failure modes ranged from relatively shallow slides (<500,000 m³ in volume and less than 10 m deep), possibly occurring on top of the massive historic slow-creeping mass, to accelerations of that mass itself (10 Mm³, up to 100 m deep).

Figure 3.

Probability of catastrophic acceleration vs. phreatic surface over average long‐term level at two locations determined using a probabilistic slope‐stability method [19].

Based on the evolution of the topographic displacements, a model was built and maintained to evaluate the potential runout of the masses and, hence, the consequences of failures on the surrounding infrastructure [20]. Monitoring revealed that the slide responded as foreseen by the probabilistic analyses in the case of a particular cycle of adverse meteorological events. All stakeholders were shown the evidence of the instrumental and probabilistic analyses together with the expected landslide evolution. Thanks to clear risk communication and understanding, consensus was reached on the appropriate level of mitigation in the form of a drainage tunnel, which gained strong "social" support and was built.
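
For readers unfamiliar with probabilistic slope-stability analyses, the sketch below shows, in a very generic way, how a conditional probability of instability can be related to the phreatic surface level through Monte Carlo sampling. This is not the method of [19]; the infinite-slope model and all parameter values are hypothetical and purely illustrative:

```python
import math
import random

def p_failure(water_height_m, n=20000, seed=1):
    """Generic infinite-slope Monte Carlo sketch: fraction of samples with FS < 1.
    NOT the method of [19]; all parameter values are hypothetical."""
    random.seed(seed)
    beta = math.radians(26.0)              # slope angle
    z, gamma, gamma_w = 20.0, 21.0, 9.81   # slip depth (m), unit weights (kN/m3)
    failures = 0
    for _ in range(n):
        c = max(random.gauss(15.0, 5.0), 0.0)        # cohesion (kPa), hypothetical
        phi = math.radians(random.gauss(28.0, 3.0))  # friction angle, hypothetical
        fs_num = c + (gamma * z - gamma_w * water_height_m) * math.cos(beta) ** 2 * math.tan(phi)
        fs_den = gamma * z * math.sin(beta) * math.cos(beta)
        failures += (fs_num / fs_den) < 1.0
    return failures / n

for hw in (2.0, 6.0, 10.0):  # phreatic surface height above the slip surface (m)
    print(f"phreatic height {hw:4.1f} m -> conditional P(FS < 1) ~ {p_failure(hw):.2f}")
```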

3.1.3. Quantitative by internet of things

Data management systems linking multitemporal objective data acquisition with a dynamic convergent RA platform are needed to:

  • capture data from many sources (manual, automatic)

  • create a common database for all, with no disagreements about whose data to use

  • present the data in an intelligible form via the web‐based portal

  • prepare reports, with up‐to‐date data and analysis, virtually automatically

  • generate efficient meetings and strongly reduce “emergency” clarification meetings

  • spare the time spent chasing factual information, so decisions can be made more rapidly with agreed data

  • allow management at all levels to hold paperless meetings with current data viewed and analyzed on the system

  • give access to project's historical data to all stakeholders from one unified data library

  • make data available across all phases of endeavors and across all stakeholders and all (authorized) parties

  • manage alert/alarm/action levels sending email and/or SMS to controlled users groups.

The link between the data acquisition (sensors, monitoring stations, etc.) and the RA platform should use Bayesian updates of probabilities, frequencies, and other selected parameters to distill the data used in the RA. Connecting a dynamic quantitative risk‐analysis platform with a high‐performance data‐gathering technique reduces costs, avoids blunders, and constitutes a healthy management practice, especially for long‐term projects requiring short‐ or long‐term monitoring (Figure 4).
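
A minimal sketch of the kind of Bayesian update such a link could perform is shown below, using a Beta-Binomial model for an annual exceedance probability; the prior and the monitoring observations are hypothetical placeholders:

```python
# Sketch: Beta-Binomial update of an annual exceedance probability as monitoring
# data arrive. Prior and observations are hypothetical placeholders.
alpha, beta = 1.0, 99.0       # prior: mean exceedance probability ~1% per year

observed_years = 12           # years of instrumental record received from the sensors
observed_exceedances = 2      # years in which the alert threshold was exceeded

alpha += observed_exceedances
beta += observed_years - observed_exceedances

posterior_mean = alpha / (alpha + beta)
print(f"updated annual exceedance probability ~ {posterior_mean:.3f}")  # ~0.027
```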

This allows all parties using the system, including senior managers and the next generation of managers, to easily and rapidly review the current status of the project and to start management procedures when preselected thresholds are reached. It allows management and engineers to focus on RM. There are tangible and intangible benefits deriving from the use of big data. The intangible benefits are considered potentially as valuable as the tangible (directly quantifiable) ones and include a focus on hazard identification and dynamic risk evaluation rather than (uninformed) risk taking, as well as the use of one dataset visible to all parties.

Figure 4.

Convergent, scalable, dynamic scheme for a RA/RM platform like ORE (Optimum Risk Estimates, ©Riskope).

3.2. Multidimensional consequences analysis in the Anthropocene

In a risk management model, it is essential to have a vision of the losses that an event could cause. In too many cases, risk studies take losses into account in a limited way, either through ill will or through a misunderstanding of the real implications of an accident or event.

A unified emergency/accident scale is vital to facilitate clear communication and mutual understanding of the nature of an emergency by the public, government agencies, and responding organizations. It has been stated that "50% of the problems with (risk) communication are due to individuals using the same words with different meanings. The remaining 50% are due to individuals using different words with the same meanings" [21]. In many countries, legislation still has not provided definitions of "disaster" or "emergency," nor of the difference in impact and immediacy of response. An objectively calculable emergency scale should, therefore, quantify and clearly communicate the notion of "emergency."

The elements that must be considered for the definition of the overall metric of losses are, at least and in no particular order: (a) direct and indirect costs, (b) health and safety, (c) environmental, (d) image and reputation, (e) legal, etc.

In some national-scale RAs we have reviewed, the aggregated extent of damage is calculated by converting each type of damage into the same unit, i.e., monetary value.

The marginal costs are equivalent to the approximate amount of money that society is willing to pay in order to reduce the extent of damage of an indicator by one unit [22]. That approach is simple and allows for specialist discussions; the public, however, requires a more direct approach. Figure 5 displays the results of a GHZ RA that uses a sophisticated multidimensional monetary scale, as well as words, to express the consequence metric.

Solutions have been proposed [23, 24], recognizing that the public needs to be well informed with accurate, time-critical information, especially in the aftermath of a catastrophic event, including, of course, GHZs. Primary information sources are generally event-specific scales that are inconsistent in their categorization and measurement, adding confusion to public responsiveness. Furthermore, these scales are not extendable to new emergencies in a changing world. Society reportedly needs the development of a unified emergency scale to facilitate communication and understanding. Such a scale could inform local communities with regional, community-specific information and could be extendable for further use by professional responders. The research cited above [23, 24] elicited 15 dimensions of an emergency using a Delphi-like process and then ranked the dimensions by importance utilizing Thurstone's law of comparative judgment.

Figure 5.

A quantitative result of a GHZ (various scenarios) RA where consequences are expressed with a sophisticated multidimensional monetary scale as well as words.


4. Real‐life Anthropocene accident consequences and risk perception

When making choices, individuals, societies, and governments are driven by their needs and their perceived or factual uncertainties and hazards. Thus, different socioeconomic, cultural, and religious contexts will lead to different choices, even within the same jurisdiction, over time. The tobacco-industry debate, the climate-change debate, and the nuclear- and mining-industry ones are all examples of that multifaceted reality. People in need will often forget or disregard hazards. In countries like Cambodia, plagued by land mines and unexploded ordnance, demographic pressure (and the specter of starvation) drives people, for example, to cultivate perimeters considered hazardous. Governments will often ponder the costs and benefits of large projects, often based on incomplete and misleading evaluations.


Misstep example: Giving one precise value for the consequences. The human brain is generally good at imagining the best and the worst scenarios, but we often see people censoring the range considered. In modern society, he who hides risks dies, sooner or later.

Erroneous statement: If we build a plant in this location, the consequence of an explosion could be that 50 nearby residents die.

Rule 8: Always consider a range of consequences.

Quick fix: Uncertainties will always exist. Don't censor!

Correct: If we build a plant in this location, the consequence of an explosion could be that 10–100 nearby residents die, 20–50% of their residences are destroyed, people will be jobless for 1 to 2 years, etc., and the site may be contaminated by chemicals within a radius of 2 km, causing 100 M$ to 200 M$ in clean-up costs.

Misstep example: The consequences of a small car accident are that you arrive late AND you have some repairs to make AND you might be bruised. Why is it, then, that facility evaluations often consider only the "worst" consequence among, for example, H&S, environmental, or BI?

Erroneous statement: Often seen in FMEA/PIG applications: using the largest consequence among H&S, physical losses, and environmental to "bin" the risks.

Rule 9: The consequences are almost always a mix of those associated with health and safety (H&S), business interruption (BI), environmental, etc., at least in an additive way.

Quick fix: Record all types of consequences and then work with a blended metric.

Correct: Define, a priori, a multidimensional cost of consequence function where the various components are added to obtain the total.
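
A sketch of such an additive, multidimensional cost-of-consequence function is given below; the consequence categories, conversion factors, and input ranges are hypothetical placeholders and would have to be calibrated (e.g., against societal willingness-to-pay figures) in a real study:

```python
# Sketch of an additive multidimensional consequence function (Rule 9).
# Conversion factors and input ranges are hypothetical placeholders.
CONVERSION_MUSD = {
    "h_and_s": 5.0,                 # per casualty-equivalent
    "environment": 1.0,             # per hectare-equivalent of significant damage
    "business_interruption": 1.0,   # already expressed in M$
    "reputation": 1.0,              # already expressed in M$-equivalent
}

def blended_consequence(ranges):
    """Add all consequence dimensions, returning a (min, max) total in M$."""
    total_min = sum(CONVERSION_MUSD[k] * lo for k, (lo, hi) in ranges.items())
    total_max = sum(CONVERSION_MUSD[k] * hi for k, (lo, hi) in ranges.items())
    return total_min, total_max

scenario = {
    "h_and_s": (0, 2),                  # casualty-equivalents
    "environment": (5, 50),             # hectare-equivalents
    "business_interruption": (1, 10),   # M$
    "reputation": (0.5, 5),             # M$-equivalent
}
print("blended consequence range (M$):", blended_consequence(scenario))
```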


4.1. The German model

German researchers in the field of theoretical risk management developed a series of metaphors to describe public perception of risks [25]. They are summarized below, with examples.


Sword of Damocles covers very high potential consequences paired with very low probability. Nuclear power plants (Fukushima), large-scale chemical facilities (Bhopal, Seveso), hydro-dam failures, and meteorite impacts are typical examples.

In Cyclops, it is only possible to ascertain either the probability of occurrence or the extent of damage, while the other side remains uncertain. A number of natural events such as volcanic eruptions (Vesuvius), earthquakes (various “large ones” like San Francisco, Tokyo, etc.), and floods belong in this category.

Pythia includes risks associated with the possibility of sudden nonlinear climatic changes, such as the risk of self-reinforcing global warming or of the instability of the West Antarctic ice sheet. The extent of damage is unknown, and the probability of occurrence cannot be ascertained with any accuracy.

Pandora's Box has strong uncertainties in the probability of occurrence and extent of damage (only presumptions), coupled with high persistency. Besides persistent organic pollutants, biosystem-changing endocrine disruptors can be quoted as examples.

Cassandra is characterized by a relatively lengthy delay between the triggering event (e.g., nuclear radiation exposure below a certain critical threshold) and the occurrence of damage. This case is naturally only of interest if both the probability and magnitude of damage are relatively high. In other cases, it gives a false sense of safety.

Medusa refers to the potential for public mobilization. This criterion expresses the extent of individual aversion to risk and the political protest potential fueled by this aversion, both of which are triggered among the lay public when certain risks are taken. This risk class is only of interest if there is a particularly large gap between lay risk perceptions and expert risk-analysis findings. Some innovations are rejected although they are hardly assessed scientifically as threats (cell phones, high-voltage lines, etc.).


In Table 1, we have attempted to relate the knowledge level, the degree of uncertainty, the main criteria, and finally the German metaphor name.

Knowledge level | Degree of uncertainty | Main criteria | Metaphors
Minimal | Ignorance | Probability of occurrence and extent of damage are highly unknown to science | Pandora's Box
Fair | Uncertainty | Probability of occurrence or extent of damage or both are uncertain (because of natural variations or genuine stochastic relationships) | Cyclops, Pythia, Cassandra
High | Known distribution of probabilities and corresponding damages | Probability of occurrence and extent of damage are known | Sword of Damocles, Medusa

Table 1.

A summary of the German metaphors.
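Table 1 can also be read as a simple lookup from knowledge level to plausible metaphors. The short Python sketch below (ours, not part of the original German study) encodes that mapping so it can be reused when screening scenarios.

```python
# Table 1 as a lookup: knowledge level -> plausible German risk metaphors.
def metaphors(knowledge_level: str) -> list[str]:
    table1 = {
        "minimal": ["Pandora's Box"],                 # ignorance: p and damage highly unknown
        "fair": ["Cyclops", "Pythias", "Cassandra"],  # uncertainty: p, damage, or both uncertain
        "high": ["Sword of Damocles", "Medusa"],      # known distributions of p and damage
    }
    return table1[knowledge_level.lower()]

print(metaphors("fair"))  # ['Cyclops', 'Pythias', 'Cassandra']
```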

We oftentimes see accident types with identical or very similar single‐accident direct consequences, but very different global impacts, generating surprisingly different public reactions. Would the metaphors help explain why?

4.2. Identical consequences generating different behavior

Indeed, identical single accidents’ consequences can lead to diverging societal impacts. Table 2 gives a list of well‐reported historic flooding examples around the world with identical or very similar single‐accident direct consequences. The global impacts were very different and generated surprisingly different public reactions, in particular with the Stava tailings dam breach.

Casualties | Location | Year
230 | Marrakesh flash flood, Morocco | 1995
235–244 | Philippine floods, Philippines | 2009
246 | Rio de Janeiro floods and mudslides, Brazil | 2010
268 | Val di Stava dam disaster, Italy | 1985
299 | Nagasaki, massive rain and landslide, Japan | 1982
300 | Quebrada Blanca canyon, landslide, Colombia | 1974
313 | Jambi, Batanghari, Tondano, Indonesia | 2003

Table 2.

Historic flooding examples.

Let's examine some examples:

Accident a1) Floods kill hundreds of people around the world (NB: floods in Virginia, Texas, and China in 2016 also killed dozens to hundreds). There is a short‐term commotion, but of limited impact. Societally, a1 has such a high rate of occurrence that it is considered a “fact of life”: society's perception becomes numb, while the uninformed public remains totally unaware. However, a1 could mobilize public opinion, because people could feel “surprised” and almost “betrayed” by scientists and public officers. At that stage, a1 could be interpreted as a Medusa “freeze,” a public stupor before the outcry, and mitigative actions would be decided on the spot (building new dikes, etc.). As risk perception depends on the viewer's position, for the population potentially exposed (near riverbeds), this risk is a Sword of Damocles. However, as humans often believe they are different from others and that accidents only happen to others, the personal likelihood is perceived as very low.

Accident a2) A single tailings dam accident with “identical” consequences in terms of casualties, but with a most likely very different probability (rate of occurrence): p(a1) >> p(a2), since p(a1) exceeds 1% per year while p(a2) is around 1/1000 per year or lower. Societally, a2 is a Cyclops: it is easy to imagine that the exposed population would die, but the probability of the accident is highly uncertain because of extant, apparently sufficient, safety rules (see the Samarco dam breach in Brazil, 2015). Death is an “expected” consequence, but people consider its occurrence most uncertain. If an accident occurs, there is immediate awareness, in some cases panic, because that occurrence is now certain to have happened. The case gets lots of attention and mitigation/punishment of the guilty is decided, but there is likely no Medusa “freeze.” For the potential victims, this risk is a Sword of Damocles, as for a1.
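A quick numerical sketch (Python) makes the contrast explicit: with a similar single‐event consequence (roughly 250 casualties, per Table 2) but the rates quoted above, the expected annual losses of a1 and a2 differ by well over an order of magnitude. The specific rate values (5% per year for a1, 0.1% per year for a2) are assumed for illustration.

```python
# Comparing annual risk (probability x consequence) for a1 and a2,
# with a similar single-event consequence but very different rates.
casualties = 250   # similar single-event consequence (see Table 2)

p_a1 = 0.05        # large regional flooding: assumed 5%/yr, i.e., well above 1%/yr
p_a2 = 1e-3        # a given tailings dam breach: ~1/1000 per yr or lower

risk_a1 = p_a1 * casualties   # expected casualties per year
risk_a2 = p_a2 * casualties

print(f"a1 (flooding):     {risk_a1:.2f} expected casualties/yr")
print(f"a2 (tailings dam): {risk_a2:.2f} expected casualties/yr")
```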

4.3. Accident perception and crises potential

We are now ready to discuss accident consequences and risk perception using various ANPcenic nefarious events (not necessarily all belonging to the GHZ family, but useful, nevertheless, for the sake of discussion).

Accident c1: Class 5+ nuclear accidents (Fukushima, a GHZ‐linked accident; Chernobyl; the Kyshtym disaster; the Windscale fire; the Three Mile Island accident; the first Chalk River accident; and the Lucens partial core meltdown) were likely interpreted under the Sword of Damocles metaphor at the time but would likely be perceived as Medusa now.

Accident c2: Nuclear‐waste storage accidents (possibly due to GHZs in the future) can easily be considered to belong to the Pandora interpretation, presently evolving into Cassandra; later, they may evolve into Cyclops.

Accident c3: Post‐accidental exposure is Cassandra, but for people onsite, it is a Sword of Damocles.

We can now set up Table 3 with various accident examples and their related metaphoric descriptors at the personal/local and societal/general levels, including public perception and likely reactions. Table 3 suggests that when personal/local risks belonging to the Cyclops or Sword of Damocles metaphors have the societal/general potential to end up being perceived as Medusa, they should be considered “societally intolerable,” because they soon trigger mitigation, moratoria, protests, etc. In conclusion, the German metaphors can be used to discuss the image and societal perception of risk scenarios.

Accidents and related GHZ risks | Personal/local metaphor | Societal/general metaphor | Public perception
a1 Natural flooding | Sword of Damocles | Medusa | Intolerable; leads to crisis if awareness rises, sense of betrayal
a2 Dam breaches/man‐made accidents | Sword of Damocles | Cyclops | Mitigative measures can be decided
c1 Class 5+ nuclear (including GHZ‐generated ones) | Sword of Damocles | Medusa | Intolerable; leads to crisis if awareness rises, sense of betrayal
c2 Nuclear underground storage (including GHZ‐generated ones) | Pandora to Cassandra | Cyclops evolving to Medusa | Mitigative measures will be proposed, not necessarily implemented until the Medusa stage is reached; then there will be a crisis
c3 Post‐accident exposure of nuclear underground storage | Sword of Damocles | Cassandra evolving to Medusa | Mitigative measures will be proposed, not necessarily implemented until the Medusa stage is reached; then there will be a crisis

Table 3.

Various accidents examples and their related metaphoric descriptors.

If any scenario is considered to carry a strong Medusa perception, it can also be considered to be at the limit of societal tolerance, whatever the factual consequences may be, even if they are relatively small. If the Medusa perception develops quickly, the ensuing crisis will probably become a major one, with potentially catastrophic corporate/governmental consequences.

4.4. Third‐party hazards' possible impacts

As mentioned later on (see Section 4.5.2), third‐party operational hazards can generate risks requiring strategic shifts. We live in a complex, interconnected world that sometimes generates interdependencies which are very difficult to detect and understand. These are of different kinds: physical, geographical, logical, and cyber interdependencies. Of course, the last two are not directly linked to GHZs.

Example of physical and geographical interdependency: Following recent reports from various sources, it seems that fracking extraction techniques have the potential to generate seismicity; climate change is reportedly increasing the rate of extreme events in various areas of the planet.

Example of geographical interdependency: An accidental spill from a “third‐party” dump may result in a severe restriction (exclusion zone) for other activities in the area.

Based on the above examples, it is easy to see that an HRR could stop at the “property perimeter,” which does not necessarily correspond to the “risk battery limit”; doing so, however, corresponds to an “ostrich” stance, that is, to purposely censoring the scope of the study and leaving open vulnerabilities to risks that may affect strategic goals.

Neighbors' (third parties') operational hazards can generate risks impinging on operations: it is enough to “be there,” and image and reputation may suffer a blow from those hits. Some of the generated risks may be intolerable, or even unmanageable, thus requiring a strategic shift (see Section 4.5.2 below). Only a rational and well‐balanced RA can help decide and avoid the pitfalls.

4.5. Risks and risk tolerance

Tolerable risk curves (tolerance thresholds) are always project‐ and owner‐specific and indicate the level of risk that has been deemed acceptable by an owner for a specific project or operation (possibly taking into account public opinion). This means, as an example, that within large companies, corporate risk tolerability may differ quite substantially from a branch operation's tolerability. The development of empirically estimated tolerability curves requires caution and continuous calibration; they should always be defined by a group and not by an individual.

In an RA, great attention must be exerted in ensuring that the acceptability curves are derived for the considered risks: curves derived for hazardous industrial activities cannot be used for GHZ risks or business risks.

The first explicit examples of risk tolerance/acceptability criteria were published in the mid‐eighties [26]. More recently, the Australian National Committee on Large Dams [27] also published its own criteria (Figure 6). In general, two criteria are defined: a prudent, risk‐averse one, i.e., a lower‐bound curve, and an aggressive, risk‐prone one, i.e., an upper‐bound curve.

Figure 6.

ANCOLD and Whitman published tolerance thresholds [26, 27].

4.5.1. Defining tolerance

“Instinct” and “intuition” are often poor advisors when attempting to define risk tolerance thresholds to replace the arbitrary PIGs (risk matrix) coloring. Such intuition‐based thresholds constitute an attempt to diverge from the classic “diagonal color scheme,” but the minimum‐consequence/maximum‐probability cell oftentimes ends up with the same risk priority as the minimum‐probability/maximum‐consequence cell (remember Rule 5, Section 2.3.1).


Misstep example: Coloring schemes or threshold criteria that mismatch accepted tolerance thresholds. We have seen RAs rejected because they were not defensible at that level (Figures 6 and 7).

Tolerability has to be defined in order to allow proper decision‐making.

Tolerability definition requires transparent communication with stakeholders.

Erroneous statement: Tolerance is a single line; any risk crossing it is deemed intolerable.

Rule 10: Use published societal tolerance thresholds (Figure 6) to see where you are standing and develop your own tolerance criteria for corporate affairs.

Quick fix: Do not use prefabricated PIGs with arbitrary cell limits or arbitrary colors. You do not need the cells! Instead, add a well thought‐out tolerance limit to your plot.

Correct: Link every probability to the consequences at stake to define an empirical tolerance threshold and then compare it to publicly available literature such as Figure 6.
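As an illustration of the “Correct” step, the Python sketch below compares an explicit (consequence, probability) pair to a tolerance threshold expressed as a power law on log‐log axes. The anchor point and slope are hypothetical calibration parameters for the sketch, not ANCOLD's or Whitman's actual curves.

```python
# Comparing a scenario's (annual probability, cost) point to a tolerance
# threshold drawn as a power law on log-log axes (hypothetical calibration).

def tolerance_threshold(cost_musd: float,
                        anchor_p: float = 1e-3,   # assumed: a 1 M$ loss tolerated at 1e-3/yr
                        anchor_cost: float = 1.0,
                        slope: float = 1.0) -> float:
    """Tolerable annual probability for a given cost (M$)."""
    return anchor_p * (anchor_cost / cost_musd) ** slope

def is_tolerable(annual_p: float, cost_musd: float) -> bool:
    return annual_p <= tolerance_threshold(cost_musd)

# Example: a 50 M$ consequence estimated at 1e-4 per year
print(is_tolerable(1e-4, 50))  # False: 1e-4/yr lies above the assumed threshold of 2e-5/yr
```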


Figure 7.

A colored risk matrix superimposed over ANCOLD and Whitman's thresholds. Vertical axis: probability, horizontal axis: cost (consequences).

Figure 7 shows a real‐life colored risk matrix from a corporate RA superimposed over ANCOLD's and Whitman's thresholds. One can see that the green‐colored cells lie in the intolerable part of both Whitman's and ANCOLD's curves, which, in the case of a mishap, would prove difficult to defend corporately, societally, and legally. One day, such a case will be challenged in a court of law: will it be negligence, or misrepresentation to the public and the victims? In the case of the Samarco dam breach, the FMEA has reportedly been cited in the Brazilian Federal Police inquiries.

There are ways to build tolerance thresholds based on societal consensus and on models. Again, these methodologies do not fall within the scope of this chapter but can be found in specialized books [28]. However, we discuss below an example, originally proposed in 2011, in the form of an attempt to update the Whitman & Morgan thresholds.

The 2011 threshold is not necessarily the “true” large‐scale societal acceptability: we used a few 2000–2011 events that (a) caused significant casualties and (b) judging by the reactions they generated, were clearly not tolerated in G8 countries. Furthermore, by the very nature of the considered events, the 2011 threshold most likely lies near, and just above, the new upper bound (the 2011 tolerance/acceptability threshold).

Let us look at examples of accidents and evaluate whether they were considered societally tolerable or not: (1) several dozen traffic‐accident casualties per weekend, several times per year, led the Italian government to invest large capital in a continuous, real‐time speed‐checking and enforcement system (Traffic Tutor) and in road safety, as the situation was deemed intolerable; (2) a quake causing 308 casualties (L'Aquila), thirty years after another catastrophic one (Irpinia), led to the conviction of a number of public officers for manslaughter and various other charges (there was no such reaction for the Irpinia quake thirty years earlier); (3) a terrorist act (9/11, New York) caused approximately 3000 casualties, and the USA “declared war on terrorism”; (4) a quake and a tsunami (Fukushima), with a wave considered larger than the MCE, caused an evacuation zone of 20 km and then 30 km radius, afflicting a very large number of people (some of whom may become ill in the future) and leading Germany and other countries to decide to stop their nuclear energy programs, showing that the event was considered intolerable.

For each of the examples above, it was possible to define a rate of occurrence (a range of rates); then, knowing the range of consequences, it was possible to draft the societal tolerance thresholds (optimistic, pessimistic) updated to 2000–2011.
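The drafting procedure can be sketched as follows in Python: each event deemed intolerable supplies a (casualties, annual rate) point; a straight line fitted through these points on log‐log axes and shifted downward gives a draft tolerance threshold lying below the intolerable events. The event rates and the half‐decade shift used here are assumed, illustrative values, not the data of the 2011 update.

```python
# Drafting a societal tolerance threshold from events judged intolerable:
# fit a line through (casualties, annual rate) points in log-log space,
# then shift it below the point cloud (illustrative, made-up values).
import numpy as np

events = [          # (casualties, estimated annual rate), all assumed
    (50, 1e-1),     # e.g., recurrent weekend traffic casualties (aggregate)
    (300, 1e-2),    # e.g., a destructive quake in a G8 country
    (3000, 1e-3),   # e.g., a large terror attack
]

logN = np.log10([n for n, _ in events])
logP = np.log10([p for _, p in events])
slope, intercept = np.polyfit(logN, logP, 1)  # straight line in log-log space
intercept -= 0.5                              # shift ~half a decade below the events

def tolerable_rate(casualties: float) -> float:
    """Annual rate below which an event of this size would be drafted as tolerable."""
    return 10 ** (intercept + slope * np.log10(casualties))

print(f"{tolerable_rate(300):.1e} per year")  # drafted tolerance at ~300 casualties
```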

It is important to distinguish between location‐based risks and societal risks. The first is an expression of the risk exposure of someone who lives or works in a place where a hazardous activity takes place. Societal risk is quite different: it looks at the consequences of mishaps from the very broad point of view of an entire society, possibly physically and emotionally removed from the mishap itself; as such, it is of interest mainly to public administrators. As noted at the beginning of Section 4.5, tolerance curves remain project‐ and owner‐specific, and their empirical estimation requires caution, continuous calibration (see the example above), and definition by a group rather than by an individual.

Figure 8.

Examples of empirically derived corporate tolerance curves.

In a risk study, great attention must be exerted to ensure that the acceptability curves are derived for the risks considered: as noted above, curves derived for hazardous industrial activities cannot be used for natural GHZs (typhoons, quakes, or flooding) or for business risks. Figure 8 shows a series of curves derived through discussions with European and North American companies that were willing to develop their own risk tolerability. In reality, these curves oftentimes correspond to the perceived tolerability rather than to the company's real, “absolute” financial capacity to withstand a damage due to GHZs or other natural or man‐made hazards.

4.5.2. Defining manageable vs. unmanageable, strategic vs. tactical & operational risks

Strategic planning is an organization's process of defining its strategy or direction and making decisions on allocating its resources to pursue this strategy. Typically, strategic choices look at three to five years (say 30–50 years, or more, for GHZs), although some extend their vision to 20 years (long term). Because of the time horizon and the nature of the questions dealt with, mishaps potentially occurring during the execution of a strategic plan are afflicted by significant uncertainties and may lie very remotely outside the control of management (e.g., MCEs, quakes, landslides, forest fires, flooding). Those mishaps, in conjunction with their potential consequences, are called “strategic risks.”

Tactical planning is short range, emphasizing the current operations of various parts of the organization. Short range is generally defined as a period of about one year or less into the future (say ten years for GHZs). Managers use tactical planning to outline what the various parts of the organization must do for the organization to be successful within that horizon. Tactical plans are usually developed in the areas of production, marketing, personnel, finance, and plant facilities, but they have their place with GHZs as well. Because of the time horizon and the nature of the questions dealt with, mishaps potentially occurring during the execution of a tactical plan should be affected by moderate uncertainties and may lie closer to the control of management (e.g., rockfalls and floods) than strategic ones. Those mishaps, in conjunction with their potential consequences, are called “tactical risks.”

Operational planning is the process of linking strategic goals and objectives to tactical goals and objectives. It describes milestones and conditions for success and explains how, or what portion of, a strategic plan will be put into operation during a given operational period (say less than five years in the GHZ space). Operational risks are those arising from the people, systems, and processes through which a company operates and can include those generated by GHZs. A tailings dam failure, an open‐pit slide, and a blackout (man‐made or GHZ‐generated) are all operational hazards generating operational risks.
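The three planning levels can be summarized as a simple classification of each identified risk by the time horizon over which its hazard is considered. The cut‐offs in the Python sketch below follow the indicative GHZ figures quoted above; the example hazards and horizons are assumed.

```python
# Tagging a GHZ risk as operational, tactical, or strategic from the
# planning horizon (indicative GHZ cut-offs from the text above).
def ghz_planning_level(horizon_years: float) -> str:
    if horizon_years <= 5:    # operational: less than ~5 years in the GHZ space
        return "operational"
    if horizon_years <= 10:   # tactical: say ~10 years for GHZs
        return "tactical"
    return "strategic"        # strategic: say 30-50 years or more for GHZs

for hazard, horizon in [("tailings dam breach", 3),
                        ("rockfall on an access road", 8),
                        ("maximum credible earthquake", 50)]:
    print(f"{hazard}: {ghz_planning_level(horizon)} risk")
```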


5. Ethical (geoethical) issues in the Anthropocene

Engineering ANPgenic global change is loaded with implicit societal issues to an unprecedented level because of demographic pressure and the rise of public opinion fostered by the emergence of the blogosphere. Because of nonlinear dynamics, perceived or real complex feedbacks, and apparent chaotic dynamics, many claim that our ANPcenic systems are difficult to forecast and that unintended, counter‐intuitive system behavior is likely. In our experience, it is not always so: poor RAs tend to mislead people into believing that things are more complex than they really are [29, 30].

Academic and popular literature suggests agreement that the public's distrust has developed over the past half century as a result of repeated failures to provide adequate and/or accurate risk information to the public. In the public health arena, regulators have traditionally been confronted with the difficult task of allocating risks and benefits; sometimes they have missed important risks, and sometimes they have spent a lot of money and energy dealing with negligible risks [31]. GHZs will certainly follow this trend, as shown by recent “mining/environmental” cases that allow public skepticism to be measured [10, 32]. In fact, “the scientific majority sometimes finds itself pitted against a public opinion which simply does not accept its conclusions” [33].

Meanwhile, over the last five decades or so, the risk management community at large, including engineers and designers performing RAs on their own civil projects/designs, oftentimes in conflict‐of‐interest situations, has settled on representing the results of RAs using misleading methods (see Section 2.3) and has maintained poor communication habits.

The implications of poor risk prioritization for the world industry's balance sheet facing GHZs can be staggering, aside from the possible liabilities. Inaccuracies can lead to mistaken resource allocation, create fuzziness for DMs and the public, offer little support to rational decision‐making, and, because of their arbitrariness, lead to public distrust and loss of confidence. It then comes as no surprise that, contrary to what is proposed by international codes like ISO 31000, communication and risk approaches are poorly developed through the life of projects and operations (Figure 9).

Figure 9.

The evolution of RA and communication through a project and operational life. Red‐shaded panels indicate poorly performed functions of the RA process.

It is normally accepted that experts disagree in their analyses (e.g., probability or frequency estimates for an event). However, if and when the public disagrees with an expert analysis of risk, it is dismissed as being highly emotional or as lacking scientific literacy [34, 35].

This concept is important because while there is an accepted difference in scientific literacy between the public and scientific (GHZs) experts, there is an assumption that the public are ignorant about risks and probabilities and that an increased scientific literacy would help decrease perceived risks [36]. An increase in scientific literacy may in fact increase perceived risks. The question remains as to whether the level of required scientific literacy is “so high that it is difficult to attain and difficult to motivate the public to attain it” [37]. It is simply unrealistic that the average citizen can obtain sufficient scientific (GHZs) literacy to thoroughly tackle any GHZs RAs. Thus, RMs must move their communication approach from that of paternalistically doling out pieces of information supporting their RM approach to partnering with the public [38] to demonstrate that the practices meet socially acceptable levels and practices. Partnering with the public requires effective communication, but more importantly, public consultation and participation. Two vital components of GHZs risk communication are trust and credibility, which corporations and governments must earn [32]. Research must aid risk analysis and policy making by, in part, “improving the communication of risk information among lay people, technical experts, and DMs” [39]. The price to pay if communication and partnering are not improved is never‐ending crises, turmoil, boycotts, and possibly revolts.


6. Conclusions

We will close this discussion on Human‐Generated Geohazards with an example drawn from a regional‐scale study performed a few years ago (Figure 10). Figure 10 displays three selected locations where man‐made GHZs were of particular interest: (a) an area where a river had been excessively channelized, increasing flooding chances downstream; (b) an industrial area where sludge and waste ponds have breach potential; and (c) a mountainous area where bridges and roads have increased various families of risks. At each site, the “columns” represent the risks due to a certain hazard, split (the three colors) by consequence type (direct, indirect, and social).

The study complied with the requirements described in Section 2.3.2 [10] and was geared toward showing the public the convergent ANPcenic risks deriving from natural, industrial, transportation, and agriculture‐generated GHZs.
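As a minimal sketch (Python, with entirely hypothetical probabilities and costs), the “columns” of Figure 10 can be assembled by evaluating, for each site and hazard, the risk as probability times cost separately for direct, indirect, and social consequences and summing the contributions per site.

```python
# Assembling per-site risk "columns" split by consequence type
# (direct, indirect, social); all numbers are hypothetical.
from collections import defaultdict

# (site, hazard, annual probability, {consequence type: cost in M$})
scenarios = [
    ("a: channelized river", "downstream flooding", 2e-2, {"direct": 10, "indirect": 5, "social": 8}),
    ("b: industrial area",   "waste pond breach",   1e-3, {"direct": 50, "indirect": 30, "social": 40}),
    ("c: mountain corridor", "landslide on road",   5e-2, {"direct": 2,  "indirect": 6,  "social": 1}),
]

risks = defaultdict(lambda: defaultdict(float))
for site, hazard, p, costs in scenarios:
    for ctype, cost in costs.items():
        risks[site][ctype] += p * cost   # risk = probability x consequence, per type

for site, by_type in risks.items():
    print(site, {k: round(v, 3) for k, v in by_type.items()})
```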

The model satisfied all five roles of system science, namely:

  • described the physical world [Glossary (Section 2.1); System Description (Section 2.2)];

  • portrayed the results of interactions among a few of its components [Consequence Analysis (Section 3.2)];

  • proposed a generic design [Mitigations (Section 4.5.2 and Literature)];

  • was a constituent of “science of complexity” as it enlarged the domain of demonstrable results in the service of humanity [Interdependencies (Section 4.4)]; and

  • was actionable, as it had linguistic clarity and suggested clear directions of action essential to resolve emergencies (Decision‐Making: Literature).

The model also addressed some of the Complexity Laws [2], like:

  • did not require humans to process more than three components at a time (triadic constraint) [e.g., Probability (Section 3.1), Consequence, Risk, and Perception (Section 4.)];

  • rendered a parsimonious description of any emergency [Unified Scale of Consequence (Section 3.2)];

  • addressed the challenge of vertical incoherence as it showed the right aggregated level to decision‐makers at different organizational levels [Interdependencies (Section 4.4), Communication, Information Dashboards (Section 2.3.2), and literature];

  • considered all relevant factors of emergencies in a balanced fashion (Rational Risk Prioritization: Literature) [3].

Figure 10.

An example of a convergent, ANPcenic‐GHZ, quantitative RA at regional scale.

It is time to stop seeking excuses and develop Risk Management 2.0!

References

  1. Warfield, J. N., A proposal for systems science. Systems Research and Behavioral Science, 20(6): 507–520, 2003.
  2. Warfield, J. N., Understanding Complexity: Thought and Behavior. AJAR Publishing Company, Palm Harbor, FL, ISBN 0‐971‐6962‐0‐9, 2002.
  3. Asproth, V., Håkansson, A., Complexity challenges of critical situations caused by flooding. Emergence: Complexity and Organization, 9(1): 37–43, ISSN 1532‐7000, 2007.
  4. Oboni, F., Caldwell, J., Oboni, C., Ten rules for preparing sensible risk assessments. Proceedings of Risk and Resilience Mining Solutions, Infomine, Vancouver, Canada, 2016.
  5. Oboni, F., Oboni, C., Caldwell, J., Risk assessment of the long‐term performance of closed tailings. Tailings and Mine Waste 2014, Keystone, Colorado, USA, October 5–8, 2014.
  6. NASA, NASA Systems Engineering Handbook SP‐2007‐6105. National Aeronautics and Space Administration, Washington, DC, Chapter 6.4, 2007.
  7. Chapman, C., Ward, S., The probability‐impact grid – a tool that needs scrapping. In How to Manage Project Opportunity and Risk, Chapter 2, pp. 49–51, 3rd Ed., Wiley, West Sussex, United Kingdom, 2011.
  8. Cox, L. A. Jr., What's wrong with risk matrices? Risk Analysis, 28(2): 1–20, 2008.
  9. Hubbard, D., Worse than useless: the most popular risk assessment method and why it doesn't work. In The Failure of Risk Management, Chapter 7, Wiley, West Sussex, United Kingdom, 2009.
  10. Mackenzie Valley Review Board, Report of Environmental Assessment and Reasons for Decision, Giant Mine Remediation Project, Appendix D, Yellowknife, NWT, Canada, pp. 227–228, June 2013.
  11. Wilson, A. C., Crouch, E., Risk/Benefit Analysis. Ballinger Publishing Company, Boston, MA, pp. 92–93, 1982.
  12. Taleb, N. N., The Black Swan: The Impact of the Highly Improbable. Random House, New York, United States, ISBN 978‐1400063512, 2007.
  13. Government Office for Science, Blackett Review of High Impact Low Probability Risks. Government Office for Science, UK, 2011.
  14. Turner, B. A., Pidgeon, N. F., Man‐Made Disasters, 2nd ed., Butterworth‐Heinemann, Oxford, 1998.
  15. Kahneman, D., Tversky, A., 1979, “Prospect Theory,” quoted in Oboni, F., Oboni, C., p. 32, Systech, Froideville, Switzerland, 2007.
  16. Ang, A. H‐S., Tang, W. H., Probability Concepts in Engineering Planning and Design, Vol. I, Wiley, United States, 1975.
  17. Ang, A. H‐S., Tang, W. H., Probability Concepts in Engineering Planning and Design, Vol. II, Wiley, United States, 1984.
  18. Ropeik, D., The L'Aquila verdict: a judgment not against science, but against a failure of science communication. Scientific American, October 22, 2012.
  19. Oboni, F., Bourdeau, P. L., Determination of the critical slip surface in stability problems. Proceedings of the IVth International Conference on Application of Statistics and Probability in Soil and Structural Engineering, Florence, Università di Firenze, Italy, Pitagora Editrice, pp. 1413–1424, 1983.
  20. Oboni, F., Angelino, C., Moreno, J., Using artificial intelligence in an integrated risk management program for a large alpine landslide. International Conference “Climate Change – Challenges and Solutions,” Ventnor, Isle of Wight, United Kingdom, May 21–24, 2007.
  21. Appleby, M., Forlin, G., et al., The law relating to emergencies and disasters. In Tolley's Handbook of Disaster and Emergency Management: Principles and Practice, R. Lakha and T. Moore (Eds.), Butterworth‐Heinemann, ISBN 0‐40697270‐2, Oxford, United Kingdom, 2003.
  22. Federal Office for Civil Protection, KATARISK: Disasters and Emergencies in Switzerland. A Risk Assessment from the Perspective of Civil Protection, Bern, Switzerland, August 2003.
  23. Plotnick, L., Gomez, E. A., White, C., Turoff, M., Furthering development of a unified emergency scale using Thurstone's law of comparative judgment: a progress report. Proceedings of ISCRAM, Information Systems Department, New Jersey Institute of Technology, Newark, NJ, USA, 2007.
  24. Rohn, E., Blackmore, D., A unified localizable emergency events scale. International Journal of Information Systems for Crisis Response Management (IJISCRAM), 1(4): 1–14, 2009.
  25. Klinke, A., Renn, O., Prometheus Unbound: Challenges of Risk Evaluation, Risk Classification and Risk Management. Center of Technology Assessment in Baden‐Württemberg, Germany, No. 153, ISBN 3‐932013‐95‐6, http://dx.doi.org/10.18419/opus-8558, 1999.
  26. Whitman, R. V., Evaluating calculated risk in geotechnical engineering. Journal of Geotechnical Engineering, 110(2): 143–188, 1984.
  27. ANCOLD (Australian National Committee on Large Dams), Guidelines on Risk Assessment, Melbourne, Australia, 2003.
  28. Oboni, F., Oboni, C., Improving Sustainability through Reasonable Risk and Crisis Management. ISBN 978‐0‐9784462‐0‐8, JSO, Froideville, Switzerland, 2007.
  29. Oboni, F., Oboni, C., Ethics and transparent risk communication start with proper risk assessment methodologies. EGU General Assembly 2014, Vienna, May 2014.
  30. Oboni, F., Oboni, C., Zabolotniuk, S., Can we stop misrepresenting reality to the public? CIM 2013, Toronto, 2013.
  31. Bouder, F., Slavin, D., Loefstedt, R. (Eds.), The Tolerability of Risk: A New Framework for Risk Management. Earthscan, New York, United States, 2007.
  32. Peters, R. G., Covello, V. T., McCallum, D. B., The determinants of trust and credibility in environmental risk communication: an empirical study. Risk Analysis, 17(1): 43–54, 1997.
  33. Sjöberg, L., Risk perception by the public and by experts: a dilemma in risk management. Human Ecology Review, 6(2): 1–9, 1999.
  34. Durant, J. R., What is scientific literacy? In Science and Culture in Europe, J. R. Durant and J. Gregory (Eds.), pp. 129–137, Science Museum, London, 1993.
  35. Shen, B. S. P., Scientific literacy and the public understanding of science. In Communication of Scientific Information, S. B. Day (Ed.), pp. 44–52, Karger, Basel, 1975.
  36. Frewer, L., The public and effective risk communication. Toxicology Letters, 149(1): 391–397, 2004.
  37. Frewer, L. J., Howard, C., Hedderley, D., Shepherd, R., What determines trust in information about food‐related risks? Underlying psychological constructs. Risk Analysis, 16(4): 473–486, 2006.
  38. Fischhoff, B., Risk perception and communication unplugged: twenty years of process. Risk Analysis, 15(2): 137–145, 1995.
  39. Slovic, P., Risk perception. Science, 236(4799): 280–285, 1987.
