
On the Very Idea of Risk Management: Lessons from the Space Shuttle Challenger

Written By

Robert Elliott Allinson

Submitted: 11 January 2012 Published: 12 September 2012

DOI: 10.5772/51666

From the Edited Volume

Risk Management - Current Issues and Challenges

Edited by Nerija Banaitiene


1. Introduction

1.1. The case for risk taking or risky decision making

In this chapter, we will argue that the very concept of risk management must be called into question. The argument will take the form that the use of the phrase ‘risk management’ operates to cover over the ethical dimensions of what is at the bottom of the problem, namely, risky decision making. Risky decision making takes place whenever and wherever decisions are taken by those whose lives are not immediately threatened by the situation in which the risk to other people’s lives is created by their decision. The concept of risk management implies that risk is already there, not created by the decision, but lies already inherent in the situation that the decision sets into motion. The risk that exists in the objective situation simply needs to be “managed”. By changing the semantics of ‘risk management’ to ‘risk taking’ or ‘risky decision making’, the ethics of responsibility for risking other people’s lives will come into focus. The argument of the chapter is that by heightening the ethical sensitivity of decision makers, these decision makers will be less likely to make decisions that will cause harm and/or death to those who are the principal actors in the situation created by the decision.

2. Definition of terms

We should first define our terms. The phrase ‘risk management’ will refer primarily to the risk to life and limb of human beings, present and future, and to the life and health of the planet. Secondarily, it will refer to the taking of risk with possessions, e.g., the wealth of human beings. For the most part, the discussion will refer to the primary sense of the term and only occasionally, in special contexts, to the secondary sense. It should be pointed out at the outset that the phrase ‘risk management’ is already biased in favor of risk. The phrase implies that the taking of risk is necessary and/or advantageous and that, if it carries any negative consequences, these consequences can be mitigated or eliminated by proper management. This bias toward risk is assumed in the acceptance of the use of the phrase ‘risk management’. A major purpose of this investigation is to call into question this built-in bias toward risk.

In the strictest sense, the proper subject of the inquiry should be ‘risky decision taking’ or ‘risky decision making’ rather than ‘risk management’. Such a refining of the subject of inquiry would make the concept of risk either neutral or questionable, since the ‘taking’ of risk implies negative consequences, whilst ‘risk management’ carries with it the hidden meaning that risk is already being protected against or absorbed by effective management policies. The phrase ‘risk management’ grants risk a protective coating such that the consequences of the risk are camouflaged. As a result, the following discussion will focus primarily on the entire question of ‘taking risk’ or making ‘risky decisions’ in the first place and only secondarily on the ‘management of risk’.

For the sake of convenience, in general we will employ the term ‘risk taking’ as a short-hand for ‘risky decision making’ which will always stand for ‘risking the consequences to the principal risk takers as a result of decision making’. The phrase ‘principal risk takers’ refers to those whose lives will be affected by the occurrence of the risk, the primary actors who will be directly involved in taking the risk in the risky situation; e.g., in the case of the Challenger, the astronauts and civilians aboard rather than the ‘decision makers’, the four middle managers.

It is important to make this distinction because it is what brings the entire question of the ethics of risk into view in the first place.

It is gratifying to learn that the U.S. government, in its educational training courses for FEMA, the Federal Emergency Management Agency, is currently teaching the terms and definitions for the proper understanding of risk management that the author of this chapter originated and that are at the basis of the ideas in this chapter.

Where, to any degree whatsoever, ‘risk’ is already acceptable, the question of the ethics of risk becomes diluted in value. When the taking of risk is itself under question, the ethics of risk takes on greater meaning.

In the specific case from which we will draw most of our information and discussion, the launching of the U.S. space shuttle Challenger, the action to be taken was considered to be an ‘acceptable risk’. The question of ‘acceptable risk’ was applied to the action to be taken, not to the decision to take the action. Had the notion of ‘acceptable risk’ been applied to the decision to launch, some ethical responsibility for the decision would have been present. As it turned out, the ethical responsibility for the decision to launch was conspicuous by its very absence.

The ethical issues involved in the decision to launch were further nullified by the choice of terminology utilized to classify the level of risk involved in the malfunction of the part that eventually did malfunction (the O-ring). The label utilized was Criticality 1, the definition of the consequence of its occurrence being loss of mission and life. When one of the four managers who overrode the unanimous decision of 14 engineers and managers not to launch was asked at the Presidential Commission hearings whether the phrase ‘loss of mission and life’ had a negative connotation, the answer given by the manager, Larry Mulloy, was that such a description had no negative connotation and simply meant that you have a single point failure with no back-up and the failure of that single system is catastrophic.

Richard C. Cook, Challenger Revealed, An Insider’s Account of How the Reagan Administration Caused the Greatest Tragedy of the Space Age, New York: Avalon, 2006, p. 243. Without Richard Cook’s early articles in the New York Times, it is possible that the entire Challenger investigation would not have occurred. It was a source of inspiration to be in communication with Richard Cook at the time of my writing the Challenger chapters in my book, Saving Human Lives: Lessons in Management Ethics.

The reaction of Richard Cook, the budget analyst at the time, shows how the language of choice removes ethical considerations from one’s consciousness:

How extraordinary: possible “loss of mission and life” doesn’t have a negative connotation.

Ibid.

Cook goes on to say that there was no negative connotation because the risk had been deemed an acceptable one and, moreover, there had never been any criteria for defining an “acceptable risk”. This was a result of using a failure modes and effects analysis without any quantitative risk measures. In other words, the odds of a catastrophe occurring to the Challenger were conjured out of thin air, not calculated, because they rested on a subjective engineering judgment that was not based upon any previous performance data.

Three conclusions present themselves. Firstly, there must exist considerable ethical blindness when loss of life has no negative connotation. Secondly, there must be considerable ethical blindness when loss of life is somehow considered an acceptable risk. Thirdly, there must be considerable epistemological blindness when a notion of “acceptable risk” can be in use without being based on any objective measurements. Cook, one of the few sources on the Challenger, if not the only one, who goes into detail on this point, points to one study, conducted in 1984, which concluded that the chance of a Solid Rocket Booster explosion was one in thirty-five launches.

Ibid., p. 356.

Nevertheless, the Marshall managers spoke of “acceptable risk”. If you “lost” one astronaut, that was “data” in the risk equation.

In 1977, NASA commissioned a group called the Wiggins Group to study the possibility of flight failures and, by examining data for all previous space launches, a likely failure rate of one in fifty-seven was derived. According to Cook, NASA complained that many of the launch vehicles the Wiggins Group included were not similar enough to the Challenger shuttle to be part of the database. So, Wiggins changed the probable failure rate to one in 100. Another study conducted by the Air Force placed the likely failure rate for the shuttle in a similar range to the Wiggins analysis. According to Cook, none of these studies were publicized and most of the newspaper reporters who covered NASA most likely had never heard of them. When NASA was forced to arrive at an official number, their chief engineer came up with the infamous and arbitrary estimate of one in 100,000. As Cook put it, ‘At a rate of twenty-four launches per year, this meant that NASA expected the shuttle to fail catastrophically only once every 4,167 years.’

Ibid., pp. 126-7.

In the language of risk introduced below in this chapter, the possible incidence was therefore negligible. By presenting, not calculating, the incidence of risk as virtually non-existent, it was possible to immunize oneself against the realities of the consequences of the risk. One could then make a decision to risk other people’s lives, because statistical probability had eliminated the problematic dimension of the risk factor, the consequences, from the equation. When a figure such as one in 100,000 is used, one might assume that this is a calculated risk, since it is put in the mathematical language of percentages and statistics. But this was not a “calculated risk” at all; it was fantasy parading as mathematics. A figure picked out of thin air had granted the decision makers moral immunity.
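The contrast between these figures can be made concrete with simple arithmetic. The sketch below is the present writer's illustration, using only the failure estimates and the launch rate of twenty-four flights per year cited above; the labels and function names are not NASA's.

```python
# Expected interval between catastrophic failures at a given launch rate,
# under several per-launch failure probabilities cited in the text.

LAUNCHES_PER_YEAR = 24  # the launch rate cited in Cook's account

estimates = {
    "1984 SRB study": 1 / 35,
    "Wiggins (original)": 1 / 57,
    "Wiggins (revised)": 1 / 100,
    "NASA official figure": 1 / 100_000,
}

def years_between_failures(p_failure_per_launch: float, launches_per_year: int) -> float:
    """Mean number of launches to failure is 1/p; convert to calendar years."""
    return (1 / p_failure_per_launch) / launches_per_year

for label, p in estimates.items():
    print(f"{label:>22}: one catastrophic failure expected roughly every "
          f"{years_between_failures(p, LAUNCHES_PER_YEAR):,.1f} years")

# The official figure implies one failure in about 4,167 years; the estimates
# grounded in past performance imply one every one and a half to four years.
```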

3. Incidence and consequence

An important distinction to keep in mind when considering the whole question of risk taking is the distinction between the likelihood of the occurrence of the risk in question and the harmful consequence of its actual occurrence. Clearly, when the likelihood of occurrence is high and the consequence of occurrence is severe in terms of harm done to the risk takers, be they the public at large and/or the environment, this is the kind of risk that should never be taken except under the most severely warranted circumstances. The example of self-defense, when confronted with a murderer who is armed and plans to murder you, comes to mind. Here, the likelihood that your defense will result in his escalating his response is very high, and the consequence of his escalation, in the hypothetical circumstance, could prove to be fatal. Nevertheless, in order to protect your own life, you must take the risk of self-defense under these circumstances.

There is no need to consider the case in which the incidence of occurrence is low and the consequence is also low.

For the sake of convenience, whenever the term ‘consequence’ is used it is understood that what is meant is ‘harmful consequence’ to the primary risk taker, the general public, the environment, future generations, or all of the above.

Here, an example of low possibility of occurrence and low severity of consequence would be the sound of a chirping bird interrupting one’s speech while one is inside a house in the Finnish winter with all the windows and doors closed. Of course, one may imagine a scenario in which one’s speech was a warning to another of a fatal and impending danger, and this warning would be blocked, but any example can be played with to tamper with the point it is designed to illustrate. All we would need to do in this case is qualify the original example to state that the speech one was about to utter was an exclamation about how blue the sky was that morning. It is important to consider what point an example is designed to make, since one can always find some way to find fault with the example one has chosen to illustrate one’s point.

We also need not consider the kind of risk that involves little consequence even if the possibility of its incidence is high. For example, when we carry a glass of milk across the floor, we may easily spill some milk. But, the consequence of the spill (excepting the scenario that we or the person to whom we are carrying the milk are starving) is of no great import. Our discussion need not include examples of high risks of incidence that involve harmless consequences.

The kind of risk with which we most frequently struggle is the risk whose incidence of occurrence is low, but whose consequence of occurrence is grave. An airplane exploding in mid-air is a good example of this type of risk. We assume that, in the current state of technology, commercial air travel is an advantage that we do not wish to surrender. We also know, in the general, unknown risk category, that a plane may explode in mid-air. This risk is minimized by careful and regular inspection of the mechanical parts of the airplane and by replacement of said parts and said plane as needed. Other aspects of this risk are minimized by guarding against a drunken pilot, hijacking by suicidal terrorists, etc. In such cases, risk is minimized. It is more accurate to say that the risk in these cases is minimized rather than managed, because its possibility of occurrence is reduced rather than its occurrence being managed. The latter understanding is how the term ‘risk management’ might well be construed. In fact, it is difficult to understand what the term ‘management’ means in the case of ‘risk management’.
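The fourfold distinction drawn in this section can be summarized schematically. The sketch below is purely illustrative; the category descriptions simply restate the examples given above, and the function name is the present writer's, not a term of art.

```python
# A schematic incidence-by-consequence classification of risks,
# following the fourfold distinction drawn in this section.

def classify_risk(likelihood_high: bool, consequence_grave: bool) -> str:
    """Return the stance toward a risk suggested by the section's argument."""
    if likelihood_high and consequence_grave:
        return "Never take it, except under the most severely warranted circumstances (e.g., self-defense)."
    if likelihood_high and not consequence_grave:
        return "Of little ethical interest (e.g., spilling a glass of milk)."
    if not likelihood_high and not consequence_grave:
        return "No need to consider (e.g., a bird's chirp interrupting idle speech)."
    # Low likelihood, grave consequence: the case we most frequently struggle with.
    return "Minimize it (e.g., aircraft inspection); this is reduction of risk, not 'management' of risk."

# Example: a mid-air airplane failure is low-likelihood but grave in consequence.
print(classify_risk(likelihood_high=False, consequence_grave=True))
```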

4. The ethics of risk

We assume, as an ethical premise, that there should never be a risk taken to potential life and limb unless it is absolutely necessary. A good example of this would be the Hippocratic tradition in medicine, with its associated axiom ‘Primum non nocere’, ‘First, do no harm’. One takes risk, as with surgery, only when it is necessary to promote or safeguard health. In other words, risk is justifiable only when it is absolutely necessary in the service of life.

What about cases of advantage rather than absolute necessity? For example, let us again consider the case of commercial airplane travel. There is certainly risk involved and it would seem to be the case that the concept of risk management would come into play. Upon closer examination, however, when one considers the safety precautions that are taken through mechanical inspection, etc., one realizes that it is ‘risk taking’ that is modified, that is, one is reducing the risk involved, rather than “managing” an existing risk. One minimizes the risk involved: one is not managing risk; one is minimizing risk.

5. General unknown risk versus specific and foreknown risk

There is a confusion that is frequently made between general and unknown risk that is operative in the universe and any specific risk that is known in advance to exist. For example, whenever one gets out of one’s bed in the morning, one may trip, fall, crack one’s skull and have a concussion. This is the general and unknown risk that is operative in the universe. We should not construe risk in these terms as this kind of risk exists, for the most part, outside of human control and intervention.

Risks that are known in advance to the principals involved in the risk taking are the only kind of risks that our discussion should consider. A classic case of the contrast between general, unknown risk and specifically foreknown risk is that of the faulty and dangerous O-rings that were known in advance (by managers and engineers, though not by the principal risk takers) to exist prior to the fatal flight of the space shuttle Challenger. The classic case of the Challenger disaster can be used to illustrate the fallacies of the concept of ‘risk management’ and the need to replace this concept with the more accurate concepts of ‘risk taking’ or ‘risky decision making’. While other cases could also be chosen, the availability of overwhelming, documented evidence in the case of the Challenger disaster makes it an ideal case study for the purposes of examining the concept of risk management.

One can argue, fallaciously, that whenever an astronaut goes into space, that astronaut is subject to the general, unknown, universal risk of space travel. This, however, is not comparable to an astronaut going into space equipped with full knowledge of the existence of a real, specific and pre-existing mechanical fault that could prove fatal to her or his spacecraft. It is only on the latter kind of risk, known fully to those taking it, that a discussion of risk management should focus. Such was not the case with the astronauts and civilian passengers of the U.S. space shuttle Challenger.

In the case of the Challenger launch, the overwhelming evidence has revealed that the astronauts were completely unaware of the specific dangers that the O-rings posed. According to Malcolm McConnell, the science reporter, no one in the astronaut corps had been informed of any problem with the SRB field joints.

Robert Elliott Allinson, Saving Human Lives, Lessons in Management Ethics, Dordrecht: Springer, 2005, p. 156.

According to Charles Harris, Michael Pritchard and Michael Rabins, authors of Engineering Risks, Concepts and Cases, ‘… no one presented them [the astronauts] with the information about the O-ring behavior at low temperatures. Therefore, they did not give their consent to launch despite the O-ring risk, since they were unaware of the risk.’

Claus Jensen writes in No Downlink, A Dramatic Narrative about the Challenger Accident and Our Time, that when the Rogers Commission summoned a group of space shuttle astronauts, ‘During this session, the astronauts reiterated that they had never been told about the problems with the solid rocket booster.’ And, in private correspondence between the present author and Roger Boisjoly, the late senior engineer who knew the most about the O-rings, Boisjoly wrote, ‘I KNOW for a FACT that the astronauts on Challenger did NOT KNOW about the problem with the O-rings at temperatures below 50 degrees F.’ (emphasis his)

Ibid., p. 184.

According to Richard Lewis’ book, Challenger, The Final Voyage, ‘Along with the general public, the astronauts who were flying the shuttle were unaware of the escalating danger of joint seal failure. So were the congressional committees charged with overseeing the shuttle program. NASA never told them that the shuttle had a problem.’ Later, in the same work, Lewis pointedly quotes from the Presidential Commission report:

Chairman Rogers raised the question of whether any astronaut office representative was aware [of the O-ring problem]. Weitz [an astronaut office representative] answered: “We were not aware of any concern with the O-rings, let alone the effect of weather on the O-rings.”

Ibid., pp. 184-5, 187.

Despite the very clear declarations above, nowhere in the 575 pages of Diane Vaughan’s book, The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA, is it ever mentioned that the astronauts and the civilians were not informed of the O-ring dangers.

Ibid., p. 187.

By never mentioning this crucial point of information, her book leaves one with the impression either (i) that the astronauts knew about the risk they were taking and thereby had given their informed consent, or (ii) that their knowledge, and thereby their consent to take such a risk, was completely irrelevant.

All of the above clearly points to the fact that the objection that one always takes a general, unknown existential risk in life, and is aware of this fact, is irrelevant whenever there is a specifically foreknown risk that carries with it great and unfortunate consequences. These points make a further argument to support the case that the very notion of risk management needs to be replaced with the idea of risk taking: when a known risk carries with it a high probability of life and death danger and there is nothing crucial to be gained by taking such a risk, such a risk is never “managed”; such a risk is simply never taken. In such a case, any possibility of risk is eliminated by its not being taken in the first place. Risk management becomes risky management. We can now turn to the examination of risky management.

6. Risky technology versus risky management

In the classic case of the space shuttle Challenger, two years after the horrific event, when the official U.S. government Committee on Shuttle Criticality Review and Hazard Analysis examined the risk that had been taken in launching the Challenger, the Chairman of that committee wrote in the very first paragraph of chapter four of their report, entitled ‘Risk Assessment and Risk Management’:

Almost lost in the strong public reaction to the Challenger failure was the inescapable fact that major advances in mankind’s capability to explore and operate in space – indeed, even in routine, atmospheric flight – will only be accomplished in the face of risk.

And, later, in the body of that same report, the Committee wrote: ‘The risks of space flight must be accepted by those who are asked to participate in each flight …’

Preface by Alton Slay, Chairman, Committee on Shuttle Criticality Review and Hazard Analysis, Post-Challenger Evaluation of Space Shuttle Risk Assessment and Management, Washington, D.C.: National Academy Press, 1988, p. v; p. 33.

It is rather easy to spot the fallacy that is being made in this case. It is the fallacy of equating a general unknown risk (space flight in general) with a specifically foreknown fatal risk (the flawed design of the O-rings about which the senior engineer Roger Boisjoly had issued red-flagged warnings).

Robert Elliott Allinson, Saving Human Lives, Lessons in Management Ethics, Dordrecht: Springer, 2005, p. 138.

If the Committee were pressed on this matter and replied that they were aware that there was some risk in the use of the technology employed at the time (the hazardous O-rings), what the Committee on Shuttle Criticality Review and Hazard Analysis would then seem to be saying is that it was justifiable to employ the ‘risky technology’ employed at the time because, in order to make progress in space exploration, one was required to take chances with very risky technology. Is this actually the case? Was it necessary to take cavalier chances with risky technology in order to make progress in the arena of space exploration? The implication is that the risk would have to be taken and that the “management” of the risk would seem to amount to something on the order of crossing one’s fingers and hoping that nothing would happen.

When one examines the Challenger case more closely, one discovers that there was no need to choose the technology that was chosen in the first place. The risky technology in this case was the engineering design of the O-rings. Of four designs submitted, the one chosen was the least safe (and the cheapest). This choice was a case of risky judgment on the part of the managers, and it was this choice of design, not any supposed obligation to employ a risky design in order to venture into space, that was an initiating cause of the Challenger disaster. The design itself was risky. Why? This design was chosen because financial cost factors, read profit, were taken as a priority over safety. This decision to place cost ahead of safety is an example of ethical misjudgment. Thus, we may consider the decision to choose this unsafe design a case of management malpractice. There was no necessity to choose this particular design. Alternative designs were available.

The O-ring design, giant rubber gaskets keeping combustible hot gases from leaking out, in actual fact ranked fourth out of the four submitted engineering designs and, according to an important article co-written by Trudy Bell, Senior Editor of the engineering journal IEEE Spectrum, and Karl Esch, the selection of this design was the chief cause of the Challenger disaster.

Robert Elliott Allinson, ‘Risk Management: demythologizing its belief foundations,’ International Journal of Risk Assessment and Management, Volume 7, No. 3, 2007, p. 302.

For the next space flight, this design was replaced with the safest design, demonstrating that the safest design could have been chosen in the first place and that it was economically feasible to have done so.

There was also no need to choose a design for the space shuttle that did not include an abort system, that is, explosive bolt hatches, space pressure suits and a parachute descent system. That it did not include such a safety system was a matter of policy, not impossibility, since earlier spacecraft had been equipped with launch escape systems. Again, nowhere in the 575 pages of Ms. Vaughan’s The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA is it ever mentioned that there was no necessity to omit an escape system for the crew and passengers. Omission of this detail can readily be regarded as ethically unpardonable. It implies that there was no way to save the lives of the crew and passengers when in fact they were indeed alive when the spacecraft broke apart (the space shuttle did not explode, as was popularly reported; the spectacular cloud was produced by the hydrogen and oxygen propellants released and burning when the external tank broke apart). The astronauts were conscious as they hit the ocean at the tremendous impact (some of the crew had actually activated and used their emergency air packs) that caused their death. By omitting this incredibly important fact in her volume, Vaughan leaves one with the impression that the life and death risk that was taken was one that could not possibly have been prevented!

Op. cit., pp. 188-189.

That it could, indeed, have been prevented places an entirely different perspective on the kind of unnecessary, and therefore, incredibly unwarranted risk that the astronauts were forced to take. The Challenger space shuttle astronauts never needed to take a risk with their lives. The risk did not exist: it was created by risky and unethical management decisions. The false impression created by interpreting the plume of smoke to mean that the space shuttle Challenger had exploded blinds one to the fact that the crew compartment had separated from the orbiter and thus the lives of all passengers could have been saved with a parachute descent system. There was no need to take a risk with their lives! The continuing belief that the shuttle exploded continues to play its role in veiling the real issue of the ethics of risky decision making.

The above examples make it clear that the problem of risk lies in the choices to be made by the risk takers, or, more precisely, the risk decision makers, not in the management of risk already taken. By focusing on the term ‘management’, one takes for granted that a risk must exist in the first place and needs only to be managed. It is not even clear in this case how the risk is managed unless crossing one’s fingers counts as management. In the case of the O-rings, there was no need for this risk to exist in the first place.

One can look into the matter in more detail. Suffice it to say for the present discussion and analysis that there were two levels of risk taking that were matters of choice, not management. The first was the choice of design. The choice of design could have been altered to prevent the breaking apart of the space shuttle, and it could have been altered to prevent the death of the crew and passengers. It was design choices that determined the cause of the disaster and its fatal consequences. The second was the choice to fly under weather conditions that heightened the risk involved. These were both management choices or decisions. The focus should be on risky management, in the sense that there was risky choice and risky decision making, not on the technology involved, because the risky technology only came into being in the first place because of these two very risky management choices: the choice of technology and the choice of launch timing.

There has been much erroneous discussion of the technical issues of the temperature and the O-rings and as a result, massive misconceptions have created complicated layers of confusion around the issue.

The mass of confusion surrounding the technical issues is, to the best of this author’s knowledge, discussed in proper factual detail in the two chapters on the Challenger disaster in the author’s Saving Human Lives: Lessons in Management Ethics (now in Kindle and Google Books). The author is indebted to extensive private correspondence with Roger Boisjoly for clearing up the confusion about the technical issues that made understanding the Challenger issues murky and seemingly resistant to plain understanding. To the best of the author’s knowledge, the clear explanations that Boisjoly gives, sorting out the mistakes in previous analyses of the Challenger disaster, such as Diane Vaughan’s confused studies on the issue, exist in print only in the author’s Saving Human Lives. Cf. especially Chapter Seven, ‘The Space Shuttle Challenger Disaster,’ and Chapter Eight, ‘Post-Challenger Investigations’, pp. 107-197.

Diane Vaughan’s book, The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA, has been responsible for generating much of this confusion. Note the first phrase in her sub-title: ‘Risky Technology’. The clear implication is that the fault lay not in the decision to employ such technology but in the problem of having to rely upon risky technology.

According to Roger Boisjoly, the senior engineer at Morton Thiokol, the contractor responsible for the solid rocket boosters, and the one who knew the most about the O-rings, she routinely mixed up data relating to field joints and nozzle joints and did not have a clue about the difference between the two joints.

Saving Human Lives, pp.192-3 et passim.

Diane Vaughan is dismissive of Professor Feynman’s famous gesture of dipping a piece of an O-ring into a glass of ice water during the televised Commission hearings and of his astonishment over the concept of ‘acceptable risk’. In contrast, Boisjoly argued that Feynman’s scientific experiment proved exactly what Feynman was trying to prove: rubber gets stiff at freezing temperatures. In Boisjoly’s words, ‘As the temperature decreases the sealing performance of the O-ring gets worse. At freezing temperature or below, it will get much worse. IT’S REALLY THAT SIMPLE.’

Ibid., p. 192. (emphasis his)

Whose opinion should we accept: that of a Nobel laureate physicist and of the senior engineer who knew the most about the O-rings, or that of a sociologist with a Master’s degree who is evaluating the validity of a scientific experiment carried out by a Nobel laureate physicist and confirmed by a rocket scientist?

Ibid., pp. 172-3. The author of this chapter was gratified to receive a hand-written letter from the famous Harvard sociologist David Riesman, author of The Lonely Crowd, who wrote to commend the merits of this author’s first review of Diane Vaughan’s book, The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA, which was to come out in Society. Later, prompted by this author’s second review, which took a very different tack and which appeared in Business Ethics Quarterly, he read her book, after which he wrote to this author to say that Diane Vaughan’s book was simply, in his words, ‘a bad book’.

To focus on risky technology is to put the cart before the horse. Risky choice is the horse: it pulls the technology along behind it. Without the horse, the cart would not move. Technology has no power on its own. It is the servant of decision making. We can point to three elementary characteristics of warning signals, their source, their content and their form, to illustrate that the risk was foreknown and knowledge of it was transmitted.

7. That the risk was foreknown and knowledge of it was transmitted

In the case of the Challenger disaster, nothing should blind us to the point that the most senior engineer involved was keenly aware of the fatal risk involved and sent red-flagged warnings to this effect. That these warnings were not heeded is sometimes obscured with the “argument” that one cannot hold up actions on the basis of warnings, since every possible action will always have risks and it is next to impossible to take note of every single warning that comes across one’s desk. The existence of warnings that supposedly cannot be noticed is referred to under the hypothesis of “weak signals”.

The hypothesis of “weak signals” to describe the warnings of the failure of the O-rings was put forward by Diane Vaughan in her book, The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA.

The hypothesis of “weak signals” is offered up as a rationale why warnings cannot always be well noted.

This “weak signals” hypothesis is easily refuted when one considers the source, the content and the form of the signals. The source of the signal in this case was the most senior engineer involved and the one who knew the most about the O-rings. It was not a crank call made by a tourist to the space center information desk. The source in this case was the project’s senior engineer himself. It was a warning from the inside, by an insider, who knew the most about the technology about which he was issuing the warning.

The content of the signal warns of a danger fatal to all aboard. There is no weakness in terms of the content of the message. In Boisjoly’s famous memorandum of July 31, 1985, he warned before the fact that if there were a launch, ‘The result would be a catastrophe of the highest order – loss of human life.’ In his earlier warning of July 22, 1985, he warned of a horrifying flight failure unless a solution were implemented to prevent O-ring erosion.

Op. cit., p. 170.

One can readily see that there is no mincing of words to minimize the possibility that the danger might be understood to be less than absolutely extraordinary. The consequences in terms of life and death danger are spelled out in detail. The specific risk factor is named. One could not possibly ask for a stronger signal.

Finally, the form of the message is red-flagged, to indicate that the most serious attention must be given to it. There is no rational way that this warning can be construed as a “weak” signal. It is the strongest possible signal in all three respects: its source, its content, and its form. Since the warning was multiple and not single, the fact of multiple warnings should further make the case that this is not a single, and therefore possibly unnoticed, “weak” signal. In the case of Boisjoly’s warnings, there were two such red-flagged warnings.

It should be noted that this signal was not a one-time occurrence. It was continuously sounded for eight years; it was not a one-time message that could conceivably have been missed. There is a memorandum written in 1978 by John Q. Miller, Chief of the Solid Rocket Motor Branch, to the Project Manager, George Hardy, in which, referring to the Thiokol field joint design that was chosen, he writes that this design was so hazardous that it could produce ‘… hot leaks and resulting catastrophic failure.’

Ibid., p. 151.

It was known eight years prior to the Challenger launch that this design choice was a dangerously faulty one which could end in a catastrophe!

In terms of the form of the signals, it should also be noted that when it came to the timing of the launch, 14 managers and engineers alike unanimously voted against it.

Ibid., pp. 174, 195.

One could not conceive of a stronger signal than this. That this decision was overturned in a meeting of 4 managers (with no engineers present who were not managers) does not take away from the fact that these 4 managers were fully aware of the previous signal of 14 unanimous votes against the launch.

Ibid., for an extended analysis of the unsound and unethical decision making process engaged in by the four middle managers.

There is no possibility that the signals that were made were not the strongest possible signals to be made. To refer to the warnings as “weak signals” is to turn a red light into a green light.
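The three criteria appealed to in this section, the source, the content and the form of a warning, can be set out schematically. The following sketch is the present writer's illustration only; the field names and the triage rule are not drawn from any NASA or Thiokol procedure.

```python
# A schematic triage of a warning by the three criteria discussed above:
# its source, its content, and its form.

from dataclasses import dataclass

@dataclass
class WarningSignal:
    source_is_insider_expert: bool        # e.g., the project's senior engineer on the component
    content_names_fatal_consequence: bool # e.g., 'loss of human life', with the specific part named
    form_is_high_priority: bool           # e.g., red-flagged and repeated over years

def is_weak_signal(signal: WarningSignal) -> bool:
    """A warning could only be called 'weak' if it failed on source, content and form alike."""
    return not (signal.source_is_insider_expert
                or signal.content_names_fatal_consequence
                or signal.form_is_high_priority)

# Boisjoly's red-flagged memoranda, as described above, are strong on all three counts.
boisjoly_memo = WarningSignal(True, True, True)
print(is_weak_signal(boisjoly_memo))  # False: not a weak signal by any of the three criteria
```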

8. Two straw men

It should be emphasized that the Committee on Shuttle Criticality Review and Hazard Analysis did not focus on the life and death consequences to the principals involved in taking the risk of launching the Challenger, but rather focused on the general, abstract case of space flight as an opportunity for learning about the universe. In short, they examined a “straw man” and not the real case in front of them.

In the case of the red-flagged warnings regarding the launching of the Challenger, the consequences were not the consequences marked out by the Committee on Shuttle Criticality Review and Hazard Analysis. The consequences marked out by that Committee were outlined in terms of the risk to be accepted in general space exploration. The implication is that those who accepted such a risk were space explorers and that they were fully aware of the risk that they were taking. Neither of these implications is accurate.

The first straw man was to examine the case as if all aboard were astronauts. This assertion was contrary to fact. Not every person aboard the shuttle was an astronaut. In addition to the five astronauts, one of whom was a woman, Judith Resnik, there were two passengers: one, Christa McAuliffe, a junior high and high school teacher and a 37-year-old mother of two, and the other, Greg Jarvis, an engineer from Hughes Aircraft (not a member of the Astronaut Corps) who had been given a ride on a space flight as a prize for winning a company competition. Mrs. McAuliffe was scheduled to deliver a nationwide “lesson from space” called the “Ultimate Field Trip” to the nation’s school children. She was also supposed to receive a telephone call from President Reagan in mid-flight.

Part of the consequences, then, was the risk of the death of two civilians, who were there not to operate the spacecraft but, in one case, to act as a Teacher in Space and, in the other, to claim a contest prize. Both civilians were given the deceptive, camouflage designation of “payload specialists”, which implied some kind of crew responsibilities that did not exist. For a cover, Mr. Jarvis was given the task of conducting a fluid dynamics experiment and Mrs. McAuliffe was to videotape six science demonstrations of the effects of weightlessness, etc.

Richard C. Cook, Challenger Revealed, An Insider’s Account of How the Reagan Administration Caused the Greatest Tragedy of the Space Age, New York: Avalon, 2006, pp. 177-8.

The Teacher-in-Space mission portion was to be featured in President Reagan’s State of the Union address the evening of Tuesday, January 28, though the White House later denied it.

The second straw man argument is to assume that knowing the risk of space flight in general was equivalent to knowing the risk of a launch under weather conditions that were known to be unsafe with the O-ring technology in use. Again, neither the astronauts nor the two civilians aboard were made aware of the risk that they were taking. They may have been aware of the existential, general unknown risk of space exploration, but they were not aware of the specific, needless risk they were actually committing to take by being launched into space with a known, faulty technology.

Op. cit., p. 156.

Indeed, would not the inclusion of the schoolteacher Christa McAuliffe, who had been given the understanding that the launch was safe, create the vivid impression for all aboard, and for the public at large around the world, that this was a very safe flight? Because non-essential personnel were included on the shuttle, the genuine risk being taken was perceived around the world to be minimal.

Was Mrs. Christa McAuliffe made aware of the risk that she was taking? Grace Corrigan wrote:

With respect to the Challenger launch, ‘… Christa felt no anxiety about the flight. “I don’t see it as a dangerous thing to do,” she said, pausing for a moment. “Well, I suppose it is, with all those rockets and fuel tanks. But if I saw it as a big risk, I’d feel differently.”’

Grace George Corrigan, A Journal for Christa, Christa McAuliffe, Teacher in Space, Lincoln and London: University of Nebraska Press, 1993, pp. 115, 118.

Not only did Christa feel no pre-flight fears; she was never even informed that there was any real problem about which she should have been informed. Corrigan relates Christa McAuliffe’s account of what the President and the pilot from a previous launch told her in the White House:

‘They were told about the dangers of the space program. She said that one could be intimidated thinking of all that he had said until you realize that NASA employed the most sophisticated safety features, and they would never take any chances with their equipment, much less an astronaut’s life.’

When interviewed by Space News Roundup, she said, ‘When the Challenger had the problem back in the summer with the heat sensors on the engine … and … one of Boston’s papers called me and asked me what I thought was wrong, … I said, “I have no idea. What has NASA said?”’

Op. cit., p. 185.

It is obvious that Christa McAuliffe was not informed of any O-ring faults and the consequent life and death risk she was committed to taking by her participation. Is the statement from the Committee on Shuttle Criticality Review and Hazard Analysis that ‘The risks of space flight must be accepted by those who are asked to participate in each flight’ even relevant when no one has been informed about the specific risks to which this particular flight will be prone? The fallacy of conflating the general, unknown risks of space flight with the specifically known risks of this flight makes “risk management” here into an unethical practice. There is, strictly speaking, no risk management being practiced. There is simply wanton risk taking with human life.

9. Subjective judgment versus performance data

The Nobel laureate physicist Richard Feynman was shocked to learn that NASA management had claimed that the risk factor of a launch crash was 1 in 100,000, a figure they had arrived at through subjective engineering judgment without relying upon any actual past performance data. If one calculated risk based upon actual past performance data, the risk was, according to Professor Feynman, 1 in 100.

Ibid., p. 183.

While management, in defending its decision to launch, pointed to the risk involved as being 1 in 100,000, there was no examination of how this figure was generated. If one took the actual performance data of rocket engines in the past, as the Nobel laureate physicist Richard Feynman did, the risk was far greater. When one does this, one can more clearly consider the case of the possibility of incidence versus the actuality of consequence. Does one wish to risk the lives of the astronauts and the civilians when the chance of their death is 1 in 100?

From the above example, one can draw the conclusion that, whenever possible, risk should be calculated in no other way than from actual, real-life, past performance data. The lesson to be learned is that in risk assessment, past performance data, when available, must always be consulted. One should avoid guesswork. Unless the risk estimates are based on past performance data as a database, according to Professor Feynman, ‘it’s all tomfoolery’.

Ibid., p. 183.
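The methodological lesson can be made concrete by contrasting a bare subjective figure with an estimate grounded in past performance. The sketch below is illustrative only; the flight and failure counts are hypothetical placeholders, not historical data, and the use of Laplace's rule of succession is the present writer's choice of a simple data-based estimator.

```python
# A minimal contrast between a subjective point estimate and an estimate
# grounded in past performance data. The counts are hypothetical placeholders.

def empirical_failure_rate(failures: int, flights: int) -> float:
    """Laplace's rule of succession, (k + 1) / (n + 2), so that even zero observed
    failures never yields a claim of zero risk."""
    return (failures + 1) / (flights + 2)

subjective_estimate = 1 / 100_000   # asserted without performance data

# Hypothetical record: 1 failure observed in 198 comparable solid-fuel flights.
data_based_estimate = empirical_failure_rate(failures=1, flights=198)

print(f"Subjective estimate : 1 in {1 / subjective_estimate:,.0f}")
print(f"Data-based estimate : 1 in {1 / data_based_estimate:,.0f}")
# The data-based figure is of the order of Feynman's 1 in 100, three orders of
# magnitude away from the official 1 in 100,000.
```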

There is no reason why we should not learn from the lessons of the Challenger disaster to generalize sound conclusions concerning the method we should employ when engaging in risk assessment.

10. Safety margins

Major General Kutyna, Director of Space Systems and Command Control, USAF, and a Presidential Commission member, argued that the O-ring evidence was analogous to evidence that an airliner wing was about to fall off. Professor Feynman pointed out, with respect to Diane Vaughan’s contention that there was a ‘safety factor of three’ because in previous cases the O-ring had burned only one third of the way through, that this did not prove that there was a safety factor of three. If we merge the O-ring and the airplane wing examples, the argument that General Kutyna, an Air Force general, and Professor Feynman, a professor of physics, make is that if the wing of an aircraft has burned one-third of the way through, that does not mean that it has a two-thirds safety margin, as Diane Vaughan, a sociologist with a Master’s degree in sociology, thinks. If a part that is designed to hold back inflammatory gases is weakened by one-third, then its capacity to hold those gases back is diminished by one-third. In such a weakened state, the margin between its holding up and its caving in to the pressure of the gases is seriously undermined. It is not that it possesses a two-thirds safety margin; it is that one-third of its capacity is gone. It may not be capable of standing up to a heavy load. Its safety margin at that point may be zero.

In Professor Feynman’s words:

‘If a bridge is designed to withstand a certain load … it may be designed for the materials used to actually stand up under three times the load … But if the expected load comes on to the new bridge and a crack appears in a beam, this is a failure of the design. The O-rings of the solid rocket boosters were not designed to erode. Erosion was a clue that something was wrong. Erosion was not something from which safety could be inferred.’

Ibid., p. 183.

If we are to generalize from these arguments to future scenarios of risk assessment, we must be careful never to consider problems that develop as evidence that the design is still basically sound. Problems are danger signals, not signals that everything is fine. When safety is compromised, it does not signify that there is still a viable margin of safety. When safety is weakened, what we have left is not a state of safety conditions which are a little less than perfect; we have conditions which are not safe.
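Feynman's bridge analogy can be restated as a simple check: a part designed for zero erosion has no quantifiable margin 'left over' once erosion is observed; observed damage is evidence of operating outside the design envelope. The sketch below is the present writer's restatement of that reasoning, not a reconstruction of any NASA analysis.

```python
# Feynman's point restated as a check: for a part designed for zero erosion,
# any observed erosion is a design failure, not evidence of remaining margin.

def fallacious_margin(eroded_fraction: float) -> float:
    """The 'safety factor of three' style of reasoning criticized above:
    treat the uneroded remainder as margin still in hand."""
    return 1.0 - eroded_fraction

def within_design_envelope(eroded_fraction: float, designed_erosion_allowance: float = 0.0) -> bool:
    """Feynman's reasoning: the O-rings were not designed to erode at all, so any
    erosion beyond the design allowance means the design has already failed."""
    return eroded_fraction <= designed_erosion_allowance

observed = 1 / 3   # an O-ring eroded one-third of the way through on a prior flight
print(f"Fallacious 'margin' remaining: {fallacious_margin(observed):.2f}")  # 0.67
print(f"Within design envelope?       {within_design_envelope(observed)}")  # False
```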

11. Safety back-ups

One must also be very careful when one considers safety back-ups. In engineering language, safety back-ups are referred to as redundancies. (It is important to take note of the difference between technical engineering usage and ordinary English usage since, in the latter, a redundancy is that which is unnecessary!) In the case of the Challenger, the back-up system was a secondary O-ring seal. The secondary seal was known also to be prone to failure. In effect, there was no secondary seal. Of course, if we consider the argument made above carefully, if the primary seal is unsafe, and the secondary seal is made from the same materials with the same design, how is it any safer? When examining any safety back-up, one must be certain that the back-up is not of the same faulty design as the technology that it is supposedly “backing up”. Since the O-ring was designated Criticality 1 (no back-up), it is not surprising that the secondary seal was not considered to be a back-up under this designation.

In addition, it must not be forgotten that initially all 14 engineers and managers unanimously voted against launching the Challenger. Such a vote of no confidence would be proof enough that all 14 engineers and managers had no confidence in the secondary seal. When Professor Feynman, in his famous improvised experiment during the televised hearings, dropped a piece of the rubber O-ring into a glass of ice water obtained from a waiter and demonstrated that it had no resiliency left at a freezing temperature and therefore could not expand to contain superhot inflammable gases, how could a second piece of the same rubber material be of any use? If one piece of rubber would not seal, why would another piece of the same rubber not also be stiff?

Ibid., p. 171.

There is no safety back-up if the materials of the back-up suffer from the same defect as the materials of the primary component they are supposedly backing up.
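The point about the secondary seal is, in reliability terms, a point about common-cause failure: redundancy multiplies safety only when the failure modes of primary and back-up are independent. The sketch below is illustrative only; the probability value is a placeholder, not a measurement.

```python
# Redundancy helps only when failure modes are independent. With a common
# cause (the same rubber stiffening in the same cold), primary and back-up
# fail together. The probability below is an illustrative placeholder.

def joint_failure_if_independent(p_primary: float, p_secondary: float) -> float:
    """Both seals fail only if each fails on its own, independently of the other."""
    return p_primary * p_secondary

def joint_failure_with_common_cause(p_cold_stiffening: float) -> float:
    """If cold stiffens the rubber, it stiffens both seals at once; they fail together."""
    return p_cold_stiffening

p = 0.1  # illustrative per-seal failure probability at low temperature (a placeholder)
print(f"Assuming independent seals:    {joint_failure_if_independent(p, p):.2%}")    # 1.00%
print(f"Assuming a shared cold defect: {joint_failure_with_common_cause(p):.2%}")    # 10.00%
# The nominal back-up buys nothing against the defect that both seals share.
```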

12. The right to make decisions over others’ life and death

As a final point in the question of risk taking, we must consider what gives any person or group of people the right to make decisions which will have life and death consequences for those who partake in the actions that will take place as a result of that decision making. It has been argued that astronauts already know the risks of space travel when they undertake such an adventurous role. Again, this is to confuse the general unknown risk of space travel with the specifically foreknown risk of launching with an unsafe part. It is also to ignore the fact that two civilians were aboard in addition to the five astronauts.

What if we consider the question of decision making over life and death when the one involved is oneself? Suppose, for example, the astronauts and the civilians had been told of the dangers. It is entirely possible that the astronauts would still have chosen to launch. Even if the decision were placed in the hands of the captain of the astronauts on that launch and he was fully informed of the dangers, should he have the right to decide whether the launch should proceed? In a war, a general does decide whether troops should engage in battle. There, of course, there may be a set of circumstances in which something very valuable (the lives of one’s countrymen, for example) must be protected. But here we are still considering the scenario in which there is nothing great to be gained, such as protecting one’s countrymen from destruction, by proceeding with the timing of this launch.

What if we leave the decision to launch to the astronauts themselves and make sure that they are fully informed of the dangers? (For the sake of this discussion, we set aside the fact that there were two civilian passengers aboard.) The crew, who perceive themselves as heroes, are likely to vote to launch. The author had the privilege of personally interviewing Kathryn Cordell Thornton, the celebrated astronaut who was part of the 100-strong astronaut corps at the time of the Challenger and later Director of the Center for Science, Mathematics and Engineering Education at the University of Virginia, and was privileged to be accompanied in this interview by the distinguished business ethicist Patricia Werhane; we discussed this very issue. The author of this chapter vividly recalls Thornton saying that ‘if these astronauts had refused to go up, there would be 100 others behind them waiting to take their place.’ This is not surprising when one considers the peer expectations of that kind of group. There is a psychological expectation that they should be fearless. This is no different from football players going onto the field with a head injury. The question now has become, does one have a right to take a risk with one’s own life? One could say that this is a matter of individual choice. But is it?

One can consider the choice of a circus performer who decides to risk her or his life by performing on a high wire with no safety net. Should this be a matter of individual choice? Suppose the circus performer has overestimated her or his abilities? Again, the possibility of the incidence may be low, but the consequences are fatal. The performer is placing a greater value on the spectacle of a performance without a net than on the value of her or his life. Should such a decision be left in the hands of an individual performer? Or should such a decision be overridden, if necessary, by the director of the circus?

Do we ever have a right to make life and death decisions that affect our own life and death? Or, is the value of life so precious that it should never be risked unless there is an absolute necessity? If we take the latter position, no one has the right to take such a risk with one’s own life except under the conditions of absolute necessity; e.g., as in the case of self-defense. The point here is that the ethics of the preciousness of life is valued over the concept of absolute choice.

The case becomes clearer when we consider the circumstance of intended suicide. Assuming a situation in which the person intending suicide is not the victim of an incurable disease and is not making a forced choice (such as when a Nazi may say to a Jew, ‘Kill yourself now or I will kill your child’), does anyone have a right to take her or his own life gratuitously? While it may be thought that this is still a matter of individual choice, and one can point to the custom of the ancient Romans and the Japanese in this regard, one must remember that in the Roman and Japanese traditions it was a matter of personal honor and not merely a gratuitous act. In the case of the ancient Romans and the Japanese, these honorable suicides were not sanctioned because the individuals involved were desperately unhappy with their lives.

In the case of the astronauts, leaving aside the civilians, the secondary school teacher and the Hughes engineer, who had no such honor code, they do not lose their honor if they refuse to board an unsafe vehicle. However, being young, adventurous and headstrong, they may not be able to understand that the risk they are taking is an unnecessary one. It should not be their decision. It should be the decision of the engineers. The managers, in this situation, have the responsibility to follow the opinions of their experts. The argument of some scholars and commentators that some of these experts concurred that it was safe to launch, or that they questioned the senior engineer’s warnings about the safety of launching, is based upon uninformed, unsupported or prejudicial evidence. This issue, and the prejudicial evidence, are examined in detail in the author’s Saving Human Lives: Lessons in Management Ethics.

Ultimately, the deciders, those who take the risk of other people’s lives into their own hands, have the responsibility of not risking other people’s lives. It is as simple as that. There is no such thing as an ethical choice to risk another person’s life, including one’s own, when it is not necessary. If managers do so for the sake of cost saving, they are making unethical choices, if we define the basic principle of ethics as valuing human life as the highest priority.

We should not leave the discussion of this one classic example without reiterating that the astronauts and the civilians aboard could easily have been provided with proper space suits, parachutes and ejection seats. No one died as a result of the break-up of the vehicle. Horrifically, all died because of the speed of their collision with the ocean. All were breathing until ocean contact. It is important to emphasize that the crew and the passengers were alive and conscious after the spectacular break-up. As pointed out above, during their three-minute descent, some crew members had actually activated and used their emergency air packs.

Ibid., p. 188.

That such provisions of space suits, parachutes and ejection seats were considered and then rejected by management was another risky decision, one that resulted not in the safe abort of the mission but in the deaths of every person aboard. Earlier spacecraft had been equipped with launch escape systems, thus proving that escape systems were not only possible but actual. The decision not to equip the Challenger with an abort system was not dictated by impossibility; it was a decision based on policy.

Ibid.

This death, or should one say manslaughter, of the astronauts and passengers was not the result of high-risk technology, but the result of a cost-benefit analysis that took into account the benefit of profit, not the benefit of life. This horrific outcome was the result of risky and unethical decision making, not the result of improper risk management. This risk did not have to exist in the first place. It was a management decision not to include these safety precautions: it was risky decision making, not a lack of risk management. (All later missions were equipped with such life-saving devices.)

13. Risking funds

Classic examples of risk management would appear to be the actions of fund managers. Under the arguments advanced above, if one likens one’s funds to the means of one’s livelihood, one should never take risks with other people’s money. While this would appear to be impossible without doing away with capitalism altogether, there is one interesting counter-example that should give one pause. There is one hedge fund company on Wall Street that guarantees any loss of any client’s investment up to four million US dollars.


How does this company do this? While the author of this chapter cannot answer this question, one would assume that the company is sufficiently profitable that it can make this guarantee. What is notable is not that it is capable of making this guarantee; what is notable is that it makes the guarantee. By making such a guarantee, this company is stating that it regards the potential loss of its clients’ funds as completely unacceptable. This standard of ethics is apparently commercially viable. This is an example not of practicing risk management in the context in which it is most commonly accepted (that of a fund manager), but rather of not taking risk in the first place (with other people’s money).

14. On the hindsight objection and the problems with risk management

Before departing from this topic, we must address an objection commonly brought forth, which, in the author’s designation, may be labeled the ‘Hindsight Objection’. It runs as follows. There are always warnings of disasters, and they are commonly ignored. When no disaster occurs, these warnings are forgotten. If one went back to every successful venture, e.g., a space flight, one would find ignored warnings. Therefore, if one infers from the presence of warnings that a disaster was foreseeable, one is inferring from unwarranted premises. Whenever one traces the causes of a disaster back to unheeded warnings, one is justifying the warnings through hindsight, which is always 20-20. Hence the label, the Hindsight Objection.

There are two major replies to this objection. The first reply is that the objection is entirely hypothetical. In order to make good on the Hindsight Objection, one must bring forth evidence to support it. In other words, one must take a successful venture, e.g., a space flight, and show, first of all, that there were red-flagged warnings that the flight should not have taken place. If the ‘red-flagged’ designator is not in use, the warnings must be shown to be of high-priority status, i.e., it should be demonstrated that such warnings were equivalent in status to the warnings not to launch disastrous flights, such as the dire warnings that were lodged in the attempt to stop the launch of the Challenger. Warnings must be vetted in terms of the criteria spelled out above: source, form and content. What is commonly brought forth as “hindsight evidence” in the case of the Challenger are operational parts designated as Criticality 1, of which there were many on the Challenger that did not fail. A Criticality 1 designation indicates that the failure of such a part would result in the loss of the vehicle and of human life. But a Criticality 1 designation is not equivalent to a red-flagged warning that the part so designated was likely to malfunction. One cannot point to parts with a Criticality 1 designation that did not fail as evidence that there were warnings that failure was imminent. A part with a Criticality 1 designation could, theoretically, be extremely safe. To count as a legitimate warning, there must be a specific, high-priority (red-flagged) warning concerning the faulty design or the operational capacity under certain weather conditions, etc., of a part which possesses a Criticality 1 designation.
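
To make the distinction vivid, the following is a minimal sketch, in the form of a small program, of how a warning might be vetted against the three criteria of source, form and content. It is not a reconstruction of NASA’s actual records or procedures; every field name and the sample data are hypothetical.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LaunchWarning:
        source_is_expert: bool      # criterion of source: a knowledgeable expert witness
        red_flagged: bool           # criterion of form: highest-priority designation
        names_specific_flaw: bool   # criterion of content: the specific flaw or condition is spelled out

    @dataclass
    class Part:
        criticality_1: bool                                   # failure would mean loss of vehicle and life
        warnings: List[LaunchWarning] = field(default_factory=list)

    def vetted_warnings(part: Part) -> List[LaunchWarning]:
        # Only warnings meeting all three criteria count as legitimate warnings;
        # a Criticality 1 designation by itself is not a warning that failure is likely.
        return [w for w in part.warnings
                if w.source_is_expert and w.red_flagged and w.names_specific_flaw]

    # A Criticality 1 part that carried no vetted warning (and did not fail)
    # is no evidence for the Hindsight Objection.
    safe_crit1_part = Part(criticality_1=True)
    print(len(vetted_warnings(safe_crit1_part)))   # 0

The point of the sketch is simply that a criticality classification records the severity of a failure should one occur, whereas a vetted warning asserts that a failure is to be expected; the Hindsight Objection trades on conflating the two.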

This first condition, the prior presence of red-flagged warnings meeting the above criteria in terms of source (a knowledgeable expert witness), form (red-flagged) and content (spelling out the specific flaw in the part), has, to the best of my research and knowledge, never been met. The Hindsight Objection is always made as a purely hypothetical objection, without any evidence for its truth value ever being put forth. In the case of space flights (this example is used because it is the one best known to the author), one must show that there were other launches in which the senior scientist issued repeated red-flagged alerts and all of the engineers and managers voting on the planned launch voted unanimously against it. Has anyone ever brought forth any evidence in support of the Hindsight Objection?

The second major reply is that even if one could offer an example of a flight which met the equivalent set of warnings that were issued in the case of the Challenger, one might miss the point that such warnings are not intended to be construed as 100% reliable predictions concerning the particular flight in question. Such warnings are pitched not at the level of whether this particular flight will end in disaster, but at the level of the class of flights: when this flight, or a similar flight, takes place, the likelihood of such a flight ending in disaster makes for a horrific eventuality. If the warnings are not vindicated on the basis of one flight, they might well be on the basis of the third or the seventh flight. When one leaves the auto mechanic’s shop with faulty brakes, the service manager might warn the customer that the brakes may fail. If the brakes do not fail on the first hill, that does not mean that her or his warning was without value. They may fail on the ninth hill. Thus, even if one could find a case in which dire warnings that fit the above criteria were present and the flight or sail went without incident, that would not prove that all such flights were to be considered safe.
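
The arithmetic behind this point can be illustrated with a brief sketch. The per-flight failure probability below is an assumption chosen purely for illustration, not an estimate of any vehicle’s actual reliability.

    # Hypothetical per-flight failure probability, chosen only for illustration.
    def prob_of_at_least_one_failure(p_per_flight: float, n_flights: int) -> float:
        # Probability that at least one of n independent flights ends in failure.
        return 1.0 - (1.0 - p_per_flight) ** n_flights

    p = 0.05  # an assumed 1-in-20 chance of failure on any single flight
    for n in (1, 5, 10, 25):
        print(f"{n:>2} flights: {prob_of_at_least_one_failure(p, n):.0%} chance of at least one disaster")
    # 1 flight: 5%; 5 flights: 23%; 10 flights: 40%; 25 flights: 72% (approximately)

However modest the per-flight figure, the chance of at least one disaster climbs steadily as the flights are repeated, which is why a warning that goes unrealized on a single flight is not thereby refuted.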

It is informative to reflect on the fact that Boisjoly’s warnings covered both the likelihood of the occurrence of the horrific failure of the mission and the consequence of the deaths of the crew and passengers. Thus, both aspects of risk were covered: the likelihood of the incident and the gravity of the consequence. Risk taking that fits both criteria, a real likelihood of occurrence and an enormity of consequence, is entirely incompetent and unconscionable. That the danger was not confined in its eventual occurrence to this particular flight is evidenced by the fact that it had been warned against eight years previously. The resistance to launch on this flight was based on the fact that the weather conditions compounded the already existent risk.

The problem was that the decision to launch the Challenger was an iconic case in which it was thought that risk could be managed. In other words, to some extent, the very concept of risk management was at fault. A better safeguard would have been never to have installed this unsafe part in the first place. That would have been an example not of risk management, but of not taking risk in the first place. (After that fatal flight, one of the designs that had originally been rejected was chosen for use.) More obviously, not launching in adverse weather conditions would be an example of not taking risk. Launching in dangerous weather conditions is an example of attempting to manage risk. It comes under the thinking of, ‘the weather is not good, but we can manage it’. It is not clear what this means. It seems to express the belief that ‘the weather is not good, but we can chance it.’ ‘Risk management’, when properly analyzed, seems to be equivalent to ‘risk chancing’ or ‘gambling’. The eminent Nobel laureate physicist Richard Feynman likened the decision to launch the Challenger space shuttle to playing Russian roulette. The proper way of thinking would have been, ‘the weather is not good; we are not going to take the risk’. When we frame the decision to be taken in terms of risk taking rather than risk managing, it is far more likely that we will act not to take the risk rather than take the risk and then attempt, somehow, to “manage” it.
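
The contrast between the two framings can be put schematically. The sketch below is in no way a reconstruction of NASA’s launch-commit criteria; the function name, the numerical threshold and the Russian-roulette figure of one in six are illustrative assumptions only.

    def take_the_risk(likelihood_of_failure: float,
                      consequence_is_loss_of_life: bool,
                      full_protection_in_place: bool) -> bool:
        # The risk-taking framing: where the consequence is human life and no
        # full protection (escape system, suits, etc.) is provided, the decision
        # is simply not to take the risk, whatever the odds are thought to be.
        if consequence_is_loss_of_life and not full_protection_in_place:
            return False   # 'the weather is not good; we are not going to take the risk'
        # Even otherwise, only a negligible likelihood is tolerated (illustrative threshold).
        return likelihood_of_failure < 0.01

    # Feynman's Russian-roulette analogy: a one-in-six chance per pull is never
    # an acceptable gamble with a life, however many earlier pulls came up empty.
    print(take_the_risk(1/6, consequence_is_loss_of_life=True,
                        full_protection_in_place=False))   # False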

It is therefore worthwhile to consider abandoning the concept of risk management and replacing it with the concept of risk taking. When one removes the euphemism of risk management and replaces it with risk taking, it becomes abundantly evident that it is actual human life with which one is taking risk. One is playing G-d with human life. The very concept of risk management is itself too risky. The seemingly objective social science language of ‘risk management’ is in reality a license to treat human life lightly.

15. In conclusion

Whenever, under whatever circumstances, risk management is being considered, one must first consider whether any risk need exist in the first place. In an endeavor of great advantage, such as commercial air travel, the existence of risk is minimized in the first place. After its minimization, what is practiced is not so much risk management as safety protection. This is not a mere shift in wording, because if we employ the language of safety we will be reminded of the ethical priority of the preciousness of human life (together with its corollaries of future generations and the health of the planet). The risk of airplane travel is minimized with proper attention to mechanical safety, pilot training, and so on. The airplane pilot has the right to make the decision not to fly when she or he decides that weather conditions are not safe. Such an important decision is placed in the commercial pilot’s hands. The judgment as to whether it is safe to fly, and therefore whether to risk the lives of the crew and passengers, is given to the most experienced and most expert person among the primary actors who will actually be taking the risk.

To shift briefly to another example, to ensure that the problems with the concept of risk management are not confined to the case of the Challenger: many years ago, the author of this chapter personally recommended to cruise ship management that all commercial cruise ships be outfitted with voice recorders (black boxes), as airplanes are. When the author originally made this recommendation, he was told that it was against strong naval customs. It is refreshing to learn that strong customs can be changed. Such a black box is, of course, strictly speaking, a device that serves to prevent future disasters rather than a safeguard for the sail in question.

Whatever we do in life, we cannot close our eyes and pretend that we can remove all dangers from any human endeavor. But a step forward can be made if we consider safety protection our highest priority. In doing so, we must pay attention to the taking of risk in the first place and not be content with managing risk that need not exist in the first place. When we do take steps that minimize risk, we should not consider these steps to be “managing” risk, but rather steps that reduce the consequences of risk. While cruise ships are now required to have enough lifeboats to accommodate 100% of their passengers, the diesel engines with which the lifeboats are equipped can power them at only up to 6 knots and cannot carry passengers to a faraway shore if harsh wind and sea conditions exist.

The idea of “risk management” implies that risk must be present. We must ever guard against complacency. If we keep uppermost in mind that the preciousness of life is our highest priority, we will ensure that both the possible incidence of risk and its consequences are kept at the lowest possible point. If we change our language from the language of “risk management”, which implies that somehow there must always be a risk present and that it is our task to manage it, to the language of risk taking, we will be more alive to the ethical responsibility involved in taking risks with our own or other people’s lives. We will be more inclined to work sincerely to minimize the possibility of risk and to reduce the effect of its consequences.

The notion of providing enough lifeboats for half of the passengers (the model of the Titanic) fits perfectly into the concept of “managing risk”. When we do this, we have performed some kind of cost-benefit analysis, or, to speak more strictly, some kind of probability-benefit analysis, and have reached a decision that by providing lifeboats for only half of the passengers we are fulfilling the responsibility of “managing risk”. It is not clear how this decision was reached. Perhaps it was reached by assuming that only half of the passengers would make it to the lifeboats; hence, by providing half of the needed boats, we have “managed” the risk.

In conclusion, if we use the language of risk taking, the provision of lifeboats for half of the passengers on board is still a case of taking risk with half of our passengers’ lives. On the other hand, if we take the language of risk taking seriously and consider that human life is precious, we would not sail, that is, we would not take the risk, unless there were enough lifeboats provided for 100% of the passengers. (If we rely upon life preservers, for example, those in ocean waters would survive only a few minutes because of the icy temperature of the sea.)
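
Stated as a decision rule, the point is almost trivially simple, which is part of its force. The passenger and capacity figures in the sketch below are hypothetical round numbers, not the Titanic’s actual manifest.

    def may_sail(persons_aboard: int, lifeboat_capacity: int) -> bool:
        # The risk-taking rule stated above: do not sail unless lifeboat
        # capacity covers 100% of the people aboard.
        return lifeboat_capacity >= persons_aboard

    # Hypothetical round numbers; a Titanic-style provision of lifeboats for
    # roughly half of those aboard fails the test.
    print(may_sail(persons_aboard=2200, lifeboat_capacity=1100))   # False
    print(may_sail(persons_aboard=2200, lifeboat_capacity=2200))   # True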

Everything in the area of risk management is a matter of ethics. Do we value human life? What do we mean by the phrase ‘acceptable risk’? Such a concept can be tolerated only if there are safety provisions that will fully protect and preserve life should the consequences of the risk taking threaten human life. Otherwise, there is no such concept as an ‘acceptable risk’. Was the decision to provide lifeboats for only half of the passengers aboard commercial vessels based on past performance data? In the case of the Titanic, lifeboats for only half of the passengers were provided and, as a result, the lives of 1,523 men, women and children were lost.


1,523 men, women and children were killed on the basis of risk management. The Titanic disaster was in 1912. Now, 100 years later, while the custom is to provide enough lifeboats for all passengers, the lifeboats are fitted with engines so underpowered that they can still leave their occupants at the mercy of the high seas.

It is hoped that this introduction to the new idea of risk taking, as opposed to risk management, will prompt reflection and a commitment to a greater ethical sensitivity when we consider how our decision making may affect other people’s lives. Do we ever have a right to take decisions that risk other people’s lives? If the message of this chapter is heard, then we never have the right to risk other people’s lives unless we provide full and adequate protection against the consequences of the risk that we are taking with their lives. If we change our language habits, never using the phrase ‘risk management’ but replacing it with the phrase ‘risk taking’, we may have a more ethically responsible world.

References

  1. Allinson, Robert Elliott, Review of Diane Vaughan, The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA (Chicago and London: The University of Chicago Press, 1996), Business Ethics Quarterly, 8(4), pp. 743-756.
  2. Allinson, Robert Elliott, Review of Diane Vaughan, The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA (Chicago and London: The University of Chicago Press, 1996), Society, 35(1), November-December 1997, pp. 98-102.
  3. Allinson, Robert Elliott, ‘Risk Management: demythologizing its belief foundations,’ International Journal of Risk Assessment and Management, 7(3), 2007.
  4. Allinson, Robert Elliott, Saving Human Lives: Lessons in Management Ethics, Dordrecht: Springer, 2005.
  5. Bell, Trudy E. and Esch, Karl, ‘The Fatal Flaw in Flight 51-L,’ IEEE Spectrum, February 1987.
  6. Cook, Richard C., Challenger Revealed: An Insider’s Account of How the Reagan Administration Caused the Greatest Tragedy of the Space Age, New York: Avalon, 2006.
  7. Corrigan, Grace George, A Journal for Christa: Christa McAuliffe, Teacher in Space, Lincoln and London: University of Nebraska Press, 1993.
  8. Harris, Charles E., Pritchard, Michael S. and Rabins, Michael J., Engineering Ethics: Concepts and Cases, Belmont: Wadsworth Publishing Company, 1996.
  9. Jensen, Claus, No Downlink: A Dramatic Narrative about the Challenger Accident and Our Time, Barbara Haveland (trans.), New York: Farrar, Straus and Giroux, 1996.
  10. Lattman, Peter and Anderson, Jenny, ‘For 92nd St. Y, a Break from Wall Street Worry,’ NY Times, November 29, 2011.
  11. Lewis, Richard S., Challenger: The Final Voyage, New York: Columbia University Press, 1988.
  12. Slay, Alton, Post-Challenger Evaluation of Space Shuttle Risk Assessment and Management, Washington, D.C.: National Academy Press, 1988.
  13. Schinzinger, Roland and Martin, Mike W., Introduction to Engineering Ethics, Boston, New York, Toronto: McGraw Hill, 2001.
  14. Vaughan, Diane, The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA, Chicago: University of Chicago Press, 1996.

Notes

  • For the sake of convenience, in general we will employ the term ‘risk taking’ as a short-hand for ‘risky decision making’ which will always stand for ‘risking the consequences to the principal risk takers as a result of decision making’. The phrase ‘principal risk takers’ refers to those whose lives will be affected by the occurrence of the risk, the primary actors who will be directly involved in taking the risk in the risky situation; e.g., in the case of the Challenger, the astronauts and civilians aboard rather than the ‘decision makers’, the four middle managers.
  • It is gratifying to learn that the U.S. government is currently teaching, in its educational training courses for FEMA, the Federal Emergency Management Agency, the terms and definitions for the proper understanding of risk management that the author of this chapter originated and that are at the basis of the ideas in this chapter.
  • Richard C. Cook, Challenger Revealed, An Insider’s Account of How the Reagan Administration Caused the Greatest Tragedy of the Space Age, New York: Avalon, 2006, p. 243. Without Richard Cook’s early articles in the New York Times, it is possible that the entire Challenger investigation would not have occurred. It was a source of inspiration to be in communication with Richard Cook at the time of my writing the Challenger chapters in my book, Saving Human Lives: Lessons in Management Ethics.
  • Ibid.
  • Ibid., p. 356.
  • Ibid., pp. 126-7.
  • For the sake of convenience, whenever the term ‘consequence’ is used it is understood that what is meant is ‘harmful consequence’ to primary risk taker, general public, the environment, future generations or all of the above.
  • Robert Elliott Allinson, Saving Human Lives, Lessons in Management Ethics, Dordrecht: Springer, 2005, p. 156.
  • Ibid., p. 184.
  • Ibid., pp. 184-5, 187.
  • Ibid., p. 187.
  • Preface by Alton Slay, Chairman, Committee on Shuttle Criticality, Review and Hazard Analysis, Post-Challenger Evaluation of Space Shuttle Risk Assessment and Management, Washington, D.C.: National Academy Press, 1988, p. v; p. 33.
  • Robert Elliott Allinson, Saving Human Lives, Lessons in Management Ethics, Dordrecht: Springer, 2005, p. 138.
  • Robert Elliott Allinson, ‘Risk Management: demythologizing its belief foundations,’ International Journal of Risk Assessment and Management, Volume 7, No. 3, 2007, p. 302.
  • Op. cit., pp. 188-189.
  • The mass of confusion surrounding the technical issues is, to the best of this author’s knowledge, discussed in proper factual detail in the two chapters on the Challenger disaster in the author’s Saving Human Lives: Lessons in Management Ethics (now in Kindle and Google Books). The author is indebted to extensive private correspondence with Roger Boisjoly for clearing up the confusion about the technical issues that made the Challenger issues murky and seemingly resistant to plain understanding. To the best of the author’s knowledge, the clear explanations that Boisjoly gives, sorting out the mistakes in previous analyses of the Challenger disaster, such as Diane Vaughan’s confused studies of the issue, exist in print only in the author’s Saving Human Lives. Cf., especially Chapter Seven, ‘The Space Shuttle Challenger Disaster,’ and Chapter Eight, ‘Post-Challenger Investigations’, pp. 107-197.
  • Saving Human Lives, pp.192-3 et passim.
  • Ibid., p. 192. (emphasis his)
  • Ibid., pp. 172-3. The author of this chapter was gratified to receive personally a hand-written letter from the famous sociologist David Riesman of Harvard, author of The Lonely Crowd, who wrote to commend the merits of this author’s first review of Diane Vaughan’s book, The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA, which was to come out in Society. Later, prompted by this author’s second review, which took a very different tack and which appeared in Business Ethics Quarterly, Riesman read her book, after which he wrote to this author to say that Diane Vaughan’s book was simply, in his words, ‘a bad book’.
  • The hypothesis of “weak signals” to describe the warnings of the failure of the O-rings was put forward by Diane Vaughan in her book, The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA.
  • Op. cit., p. 170.
  • Ibid., p. 151.
  • Ibid., pp. 174, 195.
  • Ibid., for an extended analysis of the unsound and unethical decision making process engaged in by the four middle managers.
  • Richard C. Cook, Challenger Revealed, An Insider’s Account of How the Reagan Administration Caused the Greatest Tragedy of the Space Age, New York: Avalon, 2006, pp. 177-8.
  • Op. cit., p. 156.
  • Grace George Corrigan, A Journal for Christa, Christa McAuliffe, Teacher in Space, Lincoln and London: University of Nebraska Press, 1993, pp. 115, 118.
  • Op. cit., p. 185.
  • Ibid., p. 183.
  • Ibid., p. 183.
  • Ibid., p. 183.
  • Ibid., p. 171.
  • Ibid., p. 188.
  • Ibid.
  • Cf., Peter Lattman and Jenny Anderson, ‘For 92nd St. Y, a Break from Wall Street Worry,’ NY Times, November 29. 2011. In this article, this guarantee is offered by the hedge fund manager, John A. Paulson.
  • Ibid., p. 87.
