Open access

Augmenting the Risk Management Process

Written By

Jan Emblemsvåg

Submitted: 14 October 2010 Published: 28 July 2011

DOI: 10.5772/16742

From the Edited Volume

Risk Management Trends

Edited by Giancarlo Nota


1. Introduction

I have seen something else under the sun:

The race is not to the swift

or the battle to the strong,

nor does food come to the wise

or wealth to the brilliant

or favour to the learned;

but time and chance happen to them all.

King Solomon

Ecclesiastes 9:11

Time and chance happen to them all… – a statement fitting one corporate scandal after the other, culminating in a financial crisis that has demonstrated that major risks were ignored or not even identified and managed, see for example (The Economist 2002, 2009). Before these scandals, risk management was an increasingly hot topic on a wider scale in corporations. For example, the Turnbull Report, made at the request of the London Stock Exchange (LSE), ‘… is about the adoption of a risk-based approach to establishing a system of internal control and reviewing its effectiveness’ (Jones and Sutherland 1999), and it is a mandatory requirement for all companies listed on the LSE. Yet, its effectiveness might be questioned, as the financial crisis shows.

Furthermore, we must acknowledge the paradox that the increasing reliance on risk management has in fact led decision-makers to take risks they normally would not take, see (Bernstein 1996). This has also been clearly demonstrated by one financial institution after the other in the run-up to the financial crisis. Sophisticated risk management and financial instruments led people astray, see for example (The Economist 2009). Thus, risk management can be a double-edged sword: we either run the risk of ignoring risks (and risk management), or we fall victim to potential deception by risk management.

Nonetheless, numerous risk management approaches exist, but all suffer from a major limitation: they cannot produce consistent decision support to the extent desired and subsequently become less trustworthy. As an example, three independent consulting companies performed a risk analysis of the same hydro-electric power plant and reached widely different conclusions, see (Backlund and Hannu 2002).

This chapter therefore focuses on reducing these limitations and improving the quality of risk management. However, it is unlikely that any approach can be developed that is 100% consistent, free of deception and without the risk of reaching different conclusions. There will always be an element of art, albeit less than today.

The element of art is inescapable partly due to a psychological phenomenon called framing, a bias we humans have ingrained in us to various degrees, see (Kahneman and Tversky 1979). Their findings have later been confirmed in industry, see for example (Pieters 2004). Another issue is the fact that we are often in situations where we either lack numerical data, or the situation is too complex to allow the usage of numerical data at all. This forces us to apply subjective reasoning in the process concerning probability and impact estimates, regardless of whether the estimates themselves are based on nominal, ordinal, interval or ratio scales. For more on these scales, see (Stevens 1946).

We might be tempted to believe that the usage of numerical data and statistics would greatly reduce the subjective nature of risk management, but research is less conclusive; it seems that it has merely been altered. The subjective nature on the individual level is reduced as each case is based on rational or bounded rational analysis, but on an industry level it has become more systemic for a number of reasons:

  1. Something called herding is very real in the financial industries (Hwang and Salmon 2004), which use statistical risk management methods. Herding can be defined as a situation when ‘…a group of investors following each other into (or out of) the same securities over some period of time [original italics]’, see (Sias 2004). More generally, herding can be defined as ‘…behaviour patterns that are correlated across individuals’, see (Devenow and Welch 1996).

  2. Investors have a tendency to overreact (De Bondt and Thaler 1985), which is human, but not rational.

  3. Lack of critical thinking in economic analyses is a very common problem, particularly when statistical analyses are involved – it is a kind of intellectual herding. For example, two economists, Deirdre McCloskey and Stephen Ziliak, studied to what degree papers in the highly respected journal American Economic Review failed to separate statistical significance from plausible explanations of economic reality, see (The Economist 2004). Their findings are depressing: in the 1980s, 70 % of the papers failed to distinguish between economic and statistical significance, and in the 1990s more than 80 % failed. This is a finding that researchers in particular must address, because the numbers among practitioners are probably even worse, and if researchers (and teachers) cannot do it correctly we can hardly expect practitioners to show the way.

Clearly, subjectivity is a problem for risk management in one way or the other as discussed. The purpose of this chapter is therefore to show how augmenting the risk management process will reduce the degree of subjectivity to a minimum and thereby improve the quality of the decision support.

Next, some basic concepts – risk and uncertainty – are introduced. Without useful definitions of risk and uncertainty, an enlightening discussion is impossible. Then, in Section 3, a common – almost ‘universal’ – risk management approach is presented. In Section 4, an improved approach – the augmented risk management approach – is presented. Critical evaluation of the approach and future ideas are discussed in Section 5, and closing remarks are provided in Section 6. A simple, functional case is provided throughout for illustration purposes.


2. Introducing risk and uncertainty

Risk and uncertainty are often used interchangeably. For example, (Friedlob and Schleifer 1999) claim that for auditors ‘risk is uncertainty’. It may be that distinguishing between risk and uncertainty makes little sense for auditors, but the fact is that there are many basic differences as explained next. First, risk is discussed from traditional perspectives, and the sources of risks are investigated. Second, the concept of uncertainty is explored. Finally, a more technical discussion about probability and possibility is conducted to try to settle an old score in some of the literature.

2.1. Risk

The word ‘risk’ derives from the early Italian word risicare, which originally means ‘to dare’. In this sense risk is a choice rather than a fate (Bernstein 1996). Other definitions also imply a choice aspect. Risk as a general noun is defined as ‘exposure to the chance of injury or loss; a hazard or dangerous chance’ (Webster 1989). By the same token, in statistical decision theory risk is defined as ‘the expected value of a loss function’ (Hines and Montgomery 1990). Thus, various definitions of risk imply that we expose ourselves to risk by choice. Risk is measured, however, in terms of ‘consequences and likelihood’ (Robbins and Smith 2001; Standards Australia 1999) where likelihood is understood as a ‘qualitative description of probability or frequency’, but frequency theory is dependent on probability theory (Honderich 1995). Thus, risk is ultimately a probabilistic phenomenon as it is defined in most literature.

It is important to emphasize that ‘risk is not just bad things happening, but also good things not happening’ (Jones and Sutherland 1999) – a clarification that is particularly crucial in risk analysis of social systems. Many companies do not fail from primarily taking ‘wrong actions’, but from not capitalizing on their opportunities, i.e., the loss of an opportunity. As (Drucker 1986) observes, ‘The effective business focuses on opportunities rather than problems’. Risk management is ultimately about being proactive.

It should also be emphasized that risk is perceived differently in relation to gender, age and culture. On average, women are more risk averse than men, and more experienced managers are more risk averse than younger ones (MacCrimmon and Wehrung 1986). Furthermore, evidence suggests that successful managers take more risk than unsuccessful managers. Perhaps there are ties between a young manager’s ‘contemporary competence’ and their exposure to risks and success? At any rate, our ability to identify risks is limited by our perceptions of risks. This is important to be aware of when identifying risks – many examples of sources of risks are found in (Government Asset Management Committee 2001) and (Jones and Sutherland 1999).

According to a 1999 Deloitte & Touche survey the potential failure of strategy is one of the greatest risks in the corporate world. Another is the failure to innovate. Unfortunately, such formulations have limited usefulness in managing risks as explained later – is ‘failure of strategy’ a risk or a consequence of a risk? To provide an answer we must first look into the concept of uncertainty since ‘the source of risk is uncertainty’ (Peters 1999). This derives from the fact that risk is a choice rather than a fate and occurs whenever there are one-to-many relations between a decision and possible future outcomes, see Figure 1.

Finally, it should be emphasized that it is important to distinguish between the concept of probability, measures of probability and probability theory, see (Emblemsvåg 2003). There is much dispute about the subject matter of probability (see (Honderich 1995)). Here, the view that probability is a ‘degree of belief’ is subscribed to, along with the view that it can be measured in several ways, of which the classical probability calculus of Pascal and others is the best known. For simplicity and generality the definition of risk found in (Webster 1989) is used here – the ‘exposure to the chance of injury or loss; a hazard or dangerous chance’. Furthermore, ‘degree of impact and degree of belief’ is used to measure risk.

One basic tenet of this chapter is that there are situations where classic probability calculus may prove deceptive in risk analyses. This is not to say, however, that probability theory should be discarded altogether – we simply believe that probability theory and other theories can complement each other if we understand when to use what. Concerning risk analysis, it is argued that other theories provide a better point of departure than the classic probability theory, but first the concept of uncertainty is explored, which is done next.

2.2. Uncertainty

Uncertainty as a general noun is defined as ‘the state of being uncertain; doubt; hesitancy’ (Webster 1989). Thus, there is neither loss nor gain necessarily associated with uncertainty; it is simply that which is not known with certainty – not the unknown.

Some define uncertainty as ‘the inability to assign probability to outcomes’, and risk is regarded as the ‘ability to assign such probabilities based on differing perceptions of the existence of orderly relationships or patterns’ (Gilford, Bobbitt et al. 1979). Such definitions are too simplistic for our purpose because in most situations the relationships or patterns are not orderly; they are complex. Also, the concepts of gain and loss, choice and fate and more are missed using such simplistic definitions.

Consequently, uncertainty and complexity are intertwined and as an unpleasant side effect, imprecision emerges. Lotfi A. Zadeh formulated this fact in a theorem called the Law of Incompatibility (McNeill and Freiberger 1993):

As complexity rises, precise statements lose meaning

and meaningful statements lose precision.

Since all organizations experience some degree of complexity, this theorem is crucial to understand and act in accordance with. With complexity we refer to the state in which the cause-and-effect relationships are loose, for example, operating a sailboat. A mechanical clock, however, in which the relationship between the parts is precisely defined, is complicated – not complex. From the Law of Incompatibility we understand that there are limits to how precise decision support both can and should be (to avoid deception), due to the inherent uncertainty caused by complexity. Therefore, by increasing the uncertainty in analyses and other decision support material to better reflect the true and inherent uncertainty we will actually lower the actual risk.

In fact, Nobel laureate Kenneth Arrow warns us that ‘[O]ur knowledge of the way things work, in society or in Nature, comes trailing clouds of vagueness. Vast ills have followed a belief in certainty’ (Arrow 1992). Basically, ignoring complexity and/or uncertainty is risky, and accuracy may be deceptive. The NRC Governing Board on the Assessment of Risk shares a similar view, see (Zimmer 1986). Thus, striking a sound balance between meaningfulness and precision is crucial, and possessing a relatively clear understanding of uncertainty is needed since uncertainty and complexity are so closely related.

Note that there are two main types of uncertainty, see Figure 1, fuzziness and ambiguity. Definitions in the literature differ slightly but are more or less consistent with Figure 1. Fuzziness occurs whenever definite, sharp, clear or crisp distinctions are not made. Ambiguity results from unclear definitions of the various alternatives (outcomes). These alternatives can either be in conflict with each other or they can be unspecified. The former is ambiguity resulting from discord whereas the latter is ambiguity resulting from nonspecificity. The ambiguity resulting from discord is essentially what (classic) probability theory focuses on, because ‘probability theory can model only situations where there are conflicting beliefs about mutually exclusive alternatives’ (Klir 1991). In fact, neither fuzziness nor nonspecificity can be conceptualized by probability theories that are based on the idea of ‘equipossibility’ because such theories are ‘digital’ in the sense that degrees of occurrence are not allowed – something either occurs or it does not. Put differently, uncertainty is too wide a concept for classical probability theory, because it is closely linked to equipossibility theory, see (Honderich 1995).

Kangari and Riggs (1989) have discussed the various methods used in risk analysis and classified them as either ‘classical’ (probability based) or ‘conceptual’ (fuzzy set based). Their findings point in the same direction:

… probability models suffer from two major limitations. Some models require detailed quantitative information, which is not normally available at the time of planning, and the applicability of such models to real project risk analysis is limited, because agencies participating in the project have a problem with making precise decisions. The problems are ill-defined and vague, and they thus require subjective evaluations, which classical models cannot handle.

To deal with both fuzziness and nonspecific ambiguity, however, Zadeh invented fuzzy sets – ‘the first new method of dealing with uncertainty since the development of probability’ (Zadeh 1965) – and the associated possibility theory. Fuzzy sets and possibility theory handle the widest scope of uncertainty and so must risk analyses. Thus, these theories seem to offer a sound point of departure for an augmented risk management process.

Figure 1.

The basic types of uncertainty (Klir and Yuan 1995)

For the purpose of this chapter, however, the discussion revolves around how probability can be estimated, and not the calculus that follows. In this context possibility theory offers some important ideas explained in Section 2.3. Similar ideas seem also to have been absorbed by a type of probability theory denoted ‘subjective probability theory’, see e.g. (Roos 1998). In fact, here, we need not distinguish between possibility theory and subjective probability theory because the main difference between those theories lies in the calculus, but the difference in calculus is of no interest to us. This is due to the fact that we only use the probability estimates to rank the risks and do not perform any calculus.

In the remainder of this chapter the term ‘classic probability theory’ is used to separate it from subjective probability theory.

2.3. Probability theory versus possibility theory

The crux of the difference between classic probability theory and possibility theory lies in the estimation technique. For example, consider the Venn diagram in Figure 2. The two outcomes A and B in outcome space S overlap, i.e., they are not mutually exclusive. The probability of A is in other words dependent on the probability of B, and vice versa. This situation is denoted nonspecific ambiguity in Figure 1.

Figure 2.

Two non-mutually exclusive outcomes in outcome space S

In classic probability theory we look at A in relation to S and correct for overlaps so that the sum of all outcomes will be 100% (all exhaustive). In theory this is straightforward, but in practice calculating the probability of A ∩ B is problematic in cases where A and B are interdependent and the underlying cause-and-effect relations are complex. Thus, in such cases we find that the larger the probability of A ∩ B, the larger the mistake of using classic probability theory may become.

In possibility theory, however, we simply look at the outcomes in relation to each other, and consequently S becomes irrelevant and overlaps do not matter. The possibility of A will simply be A relative to A + B in Figure 2. Clearly, possibility theory is intuitive and easy, but we pay a price – loss of precision (an outcome in comparison to the outcome space) both in definition (as discussed here) and in its further calculus operations (not discussed here). This loss of precision is, however, more true to high levels of complexity, and that is often crucial because ‘firms are mutually dependent’ (Porter 1998). Also, it is important that risk management approaches do not appear more reliable than they are, because then decision-makers can be led to accept decisions they normally would reject, as discussed earlier.
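To make the difference concrete, here is a small worked comparison in LaTeX notation; the sizes of A and B are illustrative assumptions, not values from the chapter.

```latex
% Classic probability: A is judged against the whole outcome space S,
% and the overlap with B must be corrected for:
P(A \cup B) = P(A) + P(B) - P(A \cap B)

% Possibility (relative comparison): A is judged only against B,
% so S and the overlap become irrelevant:
\pi(A) \approx \frac{A}{A + B}, \qquad \pi(B) \approx \frac{B}{A + B}
```

With illustrative sizes A = 2 and B = 3 (in arbitrary units), the possibility of A is roughly 2/(2 + 3) = 0.4 and that of B roughly 0.6, regardless of how large S is or how much A and B overlap.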

This discussion clearly illustrates that ‘[classic] probabilistic approaches are based on counting whereas possibilistic logic is based on relative comparison’ (Dubois, Lang et al.). There are also other differences between classic probability theory and possibility theory, which are not discussed here. It should be noted that in several places in the literature the word ‘probability’ is used in cases that are clearly possibilistic. This is probably more due to the fact that ‘probability’ is a common word – which has a double meaning (Bernstein 1996) – than reflecting an actual usage of classic probability theory and calculus.

One additional difference that is pertinent here is the difference between ‘event’ and ‘sensation’. The term ‘event’ applied in probability theory requires a certain level of distinctiveness in defining what is occurring and what is not. ‘The term ‘sensation’ has therefore been proposed in possibility theory, and it is something weaker than an event’ (Kaufmann 1983). The idea behind ‘sensation’ is important in corporate settings because the degree of distinctness that the definition of ‘event’ requires is not always obtainable.

Also, the term ‘possibility’ is preferred here over ‘probability’ to emphasize that positive risks – opportunities, or possibilities – should be pursued actively. Furthermore, using a possibilistic foundation (based on relative ordering as opposed to the absolute counting in classic probability theory), provides added decision support because ‘one needs to present comparison scenarios that are located on the probability scale to evoke people’s own feeling of risk’ (Kunreuther, Meyer et al. 2004).

To summarize so far: the (Webster 1989) definition of risk is used – the ‘exposure to the chance of injury or loss; a hazard or dangerous chance’ – while risk is measured in terms of ‘degree of impact’ and ‘degree of belief’. Furthermore, the word ‘possibility’ is used to denote the estimated degree of belief of a specific sensation. Alternatively, probability theoretical terms can be employed under the explicit understanding that the terms are not 100% correct – this may be a suitable approach in many cases when practitioners are involved because fine-tuned terms can be too difficult to understand.

Next, a more or less standard risk management process is reviewed.


3. Brief review of risk management approaches

All risk management approaches known to the author are variations of the framework presented in Figure 3. They may differ in wording, number of steps and content of steps, but the basic principles remain the same; see (Meyers 2006) for more examples and details. The discussion here is therefore related to the risk management process shown in Figure 3. The depicted risk management process can be found in several versions in the literature, see for example public sources such as (CCMD Roundtable on Risk Management 2001; Government Asset Management Committee 2001; Jones and Sutherland 1999), and it is employed by risk management specialists such as the maritime classification society Det Norske Veritas (DNV) (personal experience as a consultant in DNV). The fact that adherence to the same standards leads to different implementations is also discussed by (Meyers 2006).

Briefly stated, the process proceeds as follows: In the initial step, all up-front issues are identified and clarified. Proposal refers to anything for which decision support is needed; a project proposal, a proposal for a new strategy and so on. The objectives are important to clarify because risks arise in pursuit of objectives as discussed earlier. The criteria are essentially definitions of what is ‘good enough’. The purpose of defining the key elements is to provide relevant categorization to ease the risk analysis. Since all categorization is deceptive to some degree, see (Emblemsvåg and Bras 2000), it is important to avoid unnecessary categories. The categories should therefore be case specific and not generic.

Figure 3.

Traditional risk management process. Based on (Government Asset Management Committee 2001)

The second step is the analysis of risks by identification, assessment, ranking and screening out minor risks. This step is filled with shortcomings and potential pitfalls of the serious kind. It relies heavily on subjectivism, and that is a challenge in itself because it can produce widely different results, as (Backlund and Hannu 2002) point out. The challenge is that there exists no consistent decision support for improving the model other than revising the input – sadly, sometimes done to obtain preconceived results. For example, suppose we identified three risks – A, B and C – and want to assess their probabilities and impacts, see Figure 4. The assessment is usually performed by assigning numbers that describe probability and impact, but the logic behind the assignment is unclear at best, and it is impossible to perform any sort of analysis to further improve this assignment. Typically, the discussion ends up by placing the risks in a matrix like the ones shown in Figure 4, but without any consistency checks it is difficult to argue which one, if any, of the two matrices in Figure 4 fits reality the best. Thus, the recommendations can become quite different, and herein lies one of the most problematic issues of this process. In the augmented risk management process this problem is overcome, as we shall see later.

Figure 4.

The arbitrary assignment of probability and impact in a risk ranking matrix

The third step – response planning, or risk management strategies – depends directly on the risk analysis. If the assignment is as arbitrary as the study of (Backlund and Hannu 2002) shows, then the suggested responses will vary greatly. Thus, a more reliable way of analysing risks must be found, which is discussed in Section 4. Nonetheless, there are four generic risk management strategies; 1) risk prevention (reduce probability), 2) impact mitigation (reduce impact), 3) transfer (risk to a third party such as an insurance company) or simply 4) accept (the risk). Depending on the chosen risk management strategy, specific action plans are developed.

The fourth step is often an integral part of step three, but in some projects it may be beneficial to formalize reporting into a separate step, see (Government Asset Management Committee 2001) for more information.

The fifth step – implementation (of the action schedules, management measures and allocation of management resources and responsibilities) – is obviously an important step in risk management. It is vital that the effectiveness of these measures is monitored and checked to secure effective implementation. Possible new risks must also be identified, and so the risk management process starts all over again. Just like the famous PDCA circle, the risk management process never stops.

In addition to the obvious problems with the risk analysis as argued earlier, the entire risk management process lacks three important aspects that aggravate the problems:

  1. The capabilities of the organization – the strengths and weaknesses – are either ignored or treated as implicit at best. This is a problem in itself because we cannot rely on responses that cannot be implemented. Understanding that risks are relative to the organization’s capabilities is a leap for risk analysis in the direction of strategic analysis, which has often incorporated this factor. In other words, risk management should be regarded just as much as management of capabilities as management of risks. If an analysis is to provide recommendations for actions, it is clear that the capabilities, which can be managed, are needed in the risk analysis as well. The use of risk management in strategy is not discussed in this chapter; interested readers are referred to (Emblemsvåg and Kjølstad 2002) for how this can be done.

  2. There is no management of information quality. Management of information quality is crucial in risk management because uncertainty is prevalent. Uncertainty can be defined as a state for which we lack information, see (Emblemsvåg and Kjølstad 2002). Thus, uncertainty analysis should play an integral part in risk management to ensure that the uncertainty in the risk management process is kept at an economically feasible level. The same argument also holds for the usage of sensitivity analyses, both on the risk and the uncertainty analyses. This idea is also supported by (Backlund and Hannu 2002).

  3. There is no explicit management of existing knowledge that can be applied to improve the quality of the analyses, nor of the knowledge acquired in the process at hand that can be used in the follow-up process. The augmented risk management approach therefore incorporates Knowledge Management (KM). KM is believed to be pivotal to ensure an effective risk management process by providing context and learning possibilities. In essence, risk management is not just about managing risks – the entire context surrounding the risks must be understood and managed effectively. Neef (2005) states that ‘Risk management is knowledge management’, but the point is that the reverse is also important.

This is where the greatest methodological challenge for the augmented risk management process lies – how to manage knowledge. According to (Wickramasinghe 2003), knowledge management in its broadest sense refers to how a firm acquires, stores and applies its own intellectual capital, and according to (Takeuchi 1998), Nonaka insisted that knowledge cannot be ‘managed’ but ‘led'. Worse, we are still not sure what knowledge management really involves (Asllani and Luthans 2003). These aspects, along with the augmented risk management process are elaborated upon in the next section.


4. The augmented risk management process

The augmented risk management process is presented in Figure 5, and it is organized into five steps as indicated by a number, title and colour band (greyish or white). Furthermore, each step consists of three parallel processes; 1) the actual risk management process, 2) the information management process to improve the model quality and 3) the KM process to improve the usefulness of the model. These steps and processes are explained next, section by section. At the end of each section a running, real-life case is provided for illustrational purposes.

Figure 5.

The augmented risk management process

4.1. Step 1 – provide context

A decision triggers the entire process (note that not making a conscious decision is also a decision). The context can be derived from the decision itself and the analyses performed prior to the decision, which are omitted in Figure 5. The context includes the objectives, the criteria, measurements for determining the degree of success or failure, and the necessary resources. Identifying relevant knowledge about the situation is also important. The knowledge is either directly available (explicit) or it is tacit – a dichotomy attributable to (Polanyi 1966), who found that tacit knowledge is a kind of knowledge that cannot be readily articulated because it is elusive and subjective, whereas explicit knowledge is the written word, the articulated and the like. The various types of knowledge may interplay as suggested by the SECI (Socialization, Externalization, Combination, Internalization) model, which represents the four phases of conversion between explicit and tacit knowledge, with conversion cycles often starting from the socialization phase (Li and Gao 2003); see (Nonaka and Takeuchi 1995). Tacit knowledge can be either implicit or really tacit (Li and Gao 2003), and it is often the most valuable because it is a foundation for building sustainable competitive advantage, but it is unfortunately less available, see (Cavusgil, Calantone et al. 2003). Since it resides in the minds of employees, as much tacit knowledge as possible should be transferred to the organization and hence become explicit knowledge, as explained later. How this can be done in reality is a major field of research. In fact, (Earl 2001) provides a comprehensive review of KM and proposes seven schools of knowledge management. As noted earlier, even reputed scholars of the field question the management of knowledge…

Therefore, this chapter simply tries to map out some steps in the KM process that are required without claiming that this is the solution. The point here is merely that we must have a conscious relationship towards certain basic steps such as identifying what we know, evaluating what takes place, learning from it and then increasing the pool of what we know. How this (and possibly more steps) should be done most effectively is a matter for future work. Currently, we do not have a tested solution for the KM challenge, but a potentially workable idea is presented in Section 5.

From the objectives, resources, criteria and our knowledge we can determine what information is needed and map what information and data are available. Lack of information at this stage, which is common, will introduce uncertainty into the entire process. By identifying lacking information and data we can determine early in the process whether we should pursue better information and data. However, at this stage we do not yet know what information and data would be most valuable to obtain; that remains unknown until Step 3.

Compared to traditional risk management approaches the most noticeable difference in this step is that explicit relations between context and knowledge are established to identify the information and knowledge needs. Typical knowledge procedures and systems that can be used include (Neef 2005):

  1. Knowledge mapping – a process by which an organization determines ‘who knows what’ in the company.

  2. Communities of practice – naturally-forming networks of employees with similar interests or experience, or with complementary skills, who would normally gather to discuss common issues.

  3. Hard-tagging experts – a knowledge management process that combines knowledge mapping with a formal mentoring process.

  4. Learning – a post-incident assessment process where lessons learned are digested.

  5. Encouraging a knowledge-sharing culture – values and expectations for ethical behaviour are communicated widely and effectively throughout the organization.

  6. Performance monitoring and reporting – what you measure is what you get.

  7. Community and stakeholder involvement – help company leaders sense and respond to early concerns from these outside parties (government, unions, non-governmental or activist groups, the press, etc.), on policy matters that could later develop into serious conflicts or incidents.

  8. Business research and analysis – search for, organize and distribute information from internal and external sources concerning local political, cultural, and legal concerns.

Running case

The decision-maker is a group of investors that wants to find out if it is worth investing more into a new-to-the-world transportation concept in South Korea. They are also concerned about how to attract new investors. A company has been incorporated to bring the new technology to the market. The purpose of the risk management process is to map out potential risks and capabilities and identify how they should be handled. The direct objectives of the investors related to this process are to 1) identify if the new concept is viable and, if it is, 2) identify how to convince other investors to join.

The investors are experienced people who have worked in mass transit for years, so some knowledge about the market was available. Since the case involves a new-to-the-world mass transit solution, there is little technical and business-process knowledge to draw from other than generic business case methods from the literature.

4.2. Step 2 – Identify risks and capabilities

Once a proper context is established, the next step is to identify the risks and the capabilities of the organization. Here, the SWOT framework is very useful, see (Emblemsvåg and Kjølstad 2002), substituting risks for threats and opportunities, and organizational capabilities for strengths and weaknesses. Identifying the capabilities determines what risk management strategies can be successfully deployed.

This step is similar to the equivalent step in traditional approaches, but some differences exist. First, risks are explicitly separated from uncertainties. Risks arise due to decisions made, while uncertainty is due to lacking information, see (Emblemsvåg and Kjølstad 2002). Risks lurk in uncertainty as it were, but uncertainties are not necessarily associated with loss and hence are not interchangeable with risks. Separating uncertainties from risks may seem of academic interest only, but uncertainty has to do with information management and hence improvement of model quality, see Figure 5, while risks are the very object of the model. The findings of (Backlund and Hannu 2002) also support this assertion.

Second, the distinction between capabilities and risks is important because capabilities are the means to the end (managing risks in pursuit of objectives). Often, risks, uncertainties and capabilities are mingled which inhibits effective risk management.

Third, for any management tool to be useful it must be anchored in real world experiences and knowledge. Neither the risk management process nor the information management process can provide such anchoring. Consequently, it is proposed to link both the risk management and information management processes to a KM process so that knowledge can be effectively applied in the steps. Otherwise we run the risk of, for example, only identifying obvious risks and falling prey to local ‘myths’, stereotypes and the like. For more information on how to do this, consult the ‘continuous improvement’ philosophy and approaches of Deming as described in (Latzco and Saunders 1995) and double loop learning processes as presented by (Argyris 1977, 1978).

Running case

The viability of the concept was related to 5 risk categories; 1) finance, 2) technology, 3) organizational (internal), 4) market and 5) communication. The latter is important in this case because an objective is to attract investors.

We started by reviewing all available documentation about the technology, business plans, marketing plans and whatever we thought were relevant after the objectives had been clarified. We identified more than 200 risks. Then, we spent about a week with top management, in which we also interviewed the director of a relevant governmental research institute and other parties, for a review of the technology and various communication and marketing related risks.

Based on this information we performed a SWOT analysis, after which 39 risks remained significant. The vast reduction in the number of risks occurred because the documentation did not contain all that was relevant. In due course, this fact was established as a specific communication risk. To reduce the number of risks even further we performed a traditional screening of the 39 risks down to 24 and then proceeded to Step 3. This screening totally eliminated the organizational (internal) risks, so we ended up with 4 risk categories.

4.3. Step 3 – perform analyses

As indicated in Figure 5, we propose to have four types of analyses that are integrated in the same model; 1) a risk analysis, 2) a sensitivity analysis of the risk analysis, 3) an uncertainty analysis and 4) a sensitivity analysis of the uncertainty analysis. The purpose of these analyses is not just to analyse risks but to also provide a basis for double-loop learning, that is, learning with feedback both with respect to information and knowledge. Most approaches lack this learning capability and hence lack any systematic way of improving themselves. The critical characteristic missing is consistency.

All these four analyses can be conducted in one single model if the model is built around a structure similar to the Analytic Hierarchy Process (AHP). The reason for this is that AHP is built using mathematics, and a great virtue of mathematics is its consistency – a trait no other system of thought can match. Despite the inherent translation uncertainty between qualitative and quantitative measures, the only way to ensure consistent subjective risk analyses is to translate the subjective measures into numbers and then perform some sort of consistency check. The only approach that can handle qualitative issues with controlled consistency is AHP and variations thereof.

Thomas Lorie Saaty developed AHP in the late 1960s to primarily provide decision support for multi-objective selection problems. Since then, (Saaty and Forsman 1992) have utilized AHP in a wide array of situations including resource allocation, scheduling, project evaluation, military strategy, forecasting, conflict resolution, political strategy, safety, financial risk and strategic planning. Others have also used AHP in a variety of situations such as supplier selection (Bhutta and Huq 2002), determining measures of business performance (Cheng and Li 2001), and in quantitative construction risk management of a cross-country petroleum pipeline project in India (Dey 2001).

The greatest advantage of the AHP concept, for our purpose, is that it incorporates a logic consistency check of the answers provided by the various participants in the process. As (Cheng and Li 2001) claim; ‘it [AHP] is able to prevent respondents from responding arbitrarily, incorrectly, or non-professionally’. The arbitrariness of Figure 4 will consequently rarely occur. Furthermore, the underlying mathematical structure of AHP makes sensitivity analyses both with respect to the risk- and the uncertainty analysis meaningful, which in turn guides learning efforts. This is impossible in traditional frameworks. How Monte Carlo methods can be employed is shown in (Emblemsvåg and Tonning 2003). The theoretical background for this is explained thoroughly in (Emblemsvåg 2003), to which the interested reader is referred.

The relative rankings generated by the AHP matrix system can be used as so called subjective probabilities or possibilities as well as relative impacts or even relative capabilities. The estimates will be relative, but that is sufficient since the objective of a risk analysis is to effectively direct attention towards the critical risks so that they will be attended to. However, by including a known absolute reference in the AHP matrices we can provide absolute ranking if desired.

The first step in applying the AHP matrix system is to identify the risks we want to rank, which is done in Step 2. Second, due to the hierarchical nature of AHP we must organize the items as a hierarchy. For example, all risks are divided into commercial risks, technological risks, financial risks, operational risks and so on. These risk categories are then broken down into detailed risks. For example, financial risks may consist of cash flow exposure risks, currency risks, interest risks and so forth. It is important that the number of children below a parent in a hierarchy is not more than 9, because human cognition has great problems handling more than 9 issues at the same time, see (Miller 1956). In our experience, it is wise to limit oneself to 7 or fewer children per parent simply because being consistent across more than 7 items in a comparison is very difficult. Third, we must perform the actual pair-wise comparison, as illustrated in the sketch below.
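As a simple illustration, the hierarchy might be held in code as a mapping from parent to children, with the 7-children limit enforced explicitly; the sketch below (in Python, an assumption since the chapter prescribes no tool) uses hypothetical category and risk names along the lines of those mentioned in the text.

```python
# Risk hierarchy: each parent has at most 7 children, in line with the advice above.
risk_hierarchy = {
    "Financial risks": ["Cash flow exposure risk", "Currency risk", "Interest risk"],
    "Technological risks": ["Undesirable mechanical behaviour"],
    "Market risks": ["Customer decides not to buy", "Longer sales lead-times than expected"],
    "Communication risks": ["Business essentials not presented clearly"],
}

# Guard against hierarchies that exceed what people can compare consistently.
assert all(len(children) <= 7 for children in risk_hierarchy.values())
```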

To operationalize the pair-wise comparisons, we used the ordinal scales and the average Random Index (RI) values provided in Tables 1 and 2 – note that this will by default produce 1 on the diagonals. According to (Peniwati 2000), the RIs are defined to allow a 10% inconsistency in the answers. Note that the values in Table 1 must be interpreted in their specific context. Thus, when we speak of a probability of scale 1 it should linguistically be interpreted as ‘equally probable’. This may seem unfamiliar to most, but it is easier to see how this works by using the running example. First, however, a quick note on the KM side of this step should be mentioned.

Intensity of Importance (1) Definition (2) Explanation (3)
1 Equal importance Two items contribute equally to the objective
3 Moderate importance Experience and judgment slightly favor one over another
5 Strong importance Experience and judgment strongly favor one over another
7 Very strong importance An activity is strongly favored and its dominance is demonstrated in practice
9 Absolute importance The importance of one over another affirmed on the highest possible order
2, 4, 6, 8 Intermediate values Used to represent compromise between the priorities listed above
Reciprocals of above numbers If item i has one of the above non-zero numbers assigned to it when compared with item j, then j has the reciprocal value when compared with i

Table 1.

Scales of measurement in pair-wise comparison. Source: (Saaty 1990)

Matrix Size Random Index Recommended CR Values
1 0.00 0.05
2 0.00 0.05
3 0.58 0.05
4 0.90 0.08
5 1.12 0.10
6 1.24 0.10
7 1.32 0.10
8 1.41 0.10
9 1.45 0.10
10 1.49 0.10

Table 2.

Average Random Index values. Source: (Saaty 1990)
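To illustrate how Tables 1 and 2 work together, the following is a minimal sketch of Saaty's eigenvector method in Python with numpy – an assumption, since the chapter does not prescribe any tool. It derives a priority vector from a reciprocal pair-wise comparison matrix and checks the consistency ratio (CR) against the Table 2 random indices.

```python
import numpy as np

# Average Random Index (RI) values from Table 2, keyed by matrix size
RI = {1: 0.00, 2: 0.00, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_priorities(matrix):
    """Priority vector and consistency ratio (CR) of a reciprocal
    pair-wise comparison matrix, using the principal eigenvector."""
    A = np.asarray(matrix, dtype=float)
    n = A.shape[0]
    eigenvalues, eigenvectors = np.linalg.eig(A)
    k = int(np.argmax(eigenvalues.real))               # principal eigenvalue
    lambda_max = eigenvalues.real[k]
    weights = np.abs(eigenvectors[:, k].real)
    weights = weights / weights.sum()                  # priorities sum to 1
    ci = (lambda_max - n) / (n - 1) if n > 1 else 0.0  # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0              # consistency ratio
    return weights, cr

# Hypothetical example: three risks compared on the Table 1 scale.
# Risk 1 is moderately favoured over risk 2 (3) and strongly over risk 3 (5);
# the lower triangle holds the reciprocal values.
M = [[1,     3,     5],
     [1 / 3, 1,     3],
     [1 / 5, 1 / 3, 1]]
weights, cr = ahp_priorities(M)
print(weights.round(2), round(cr, 3))   # CR should stay below the 0.05 limit for size 3
```

The priority vector plays the role of the ‘Possibility’ (or impact) column in the matrices that follow; the same helper is reused in the sketches below.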

From a KM perspective the most critical aspect of this step is to critically review the aforementioned analyses. A critical review will in this context revolve around finding answers for a variety of ‘why?’ questions as well as judging to what extent the analyses provide useful input to the risk management process and what must be done about significant gaps. Basically, we must understand how the analyses work, why they work and to what extent they work as planned. The most critical part of this is ensuring correct and useful definitions of risks and capabilities (Step 2). In any case, this step will reveal the quality of the preceding work – poor definitions will make pair-wise comparison hard.

Running case

From Step 2 we recall that there are 4 risk categories: 1) finance (FR), 2) technology (TR), 3) market (MR) and 4) communication (CR). Since AHP is hierarchical we are tempted to also rank these, but in order to give all 4 categories – and thereby the 39 underlying risks – the same weight of 25% per category, we do not rank them (or rather, we give them the same rank, i.e. 1). Therefore, for our running example we must go to the bottom of the hierarchy, and in the market category, for example, we find the following risks:

  1. Customer decides to not buy any project (MR1).

  2. Longer lead-times in sales than expected (MR2).

  3. Negative reactions from passengers due to the 90 degree turn (MR3).

  4. Passengers exposed to accidents/problems on demo plant (MR4).

  5. Wrong level of 'finished touch' on Demo plant (MR5).

The pair-wise comparison of these is a three-step process. The first step is to determine the possibilities, see Table 3, the second step is to determine the impacts, and the third is to combine the two into a risk ranking, as shown in Table 4. When discussing impacts it is important to use the list of capabilities and think of impact in their context.

From Table 3 we see that MR2 (the second market risk) is perceived as the one with the highest possibility (47%) of occurrence. Indeed, it took about 10 years from when this analysis was first carried out – using the risk management approach presented in (Emblemsvåg and Kjølstad 2002) – until it was decided to build the first system. We see from Table 2 that the CR value of the matrix, 0.088, is less than the recommended CR value of 0.10. This implies that the matrix is internally consistent and we are ready to proceed. A similar matrix should have been constructed concerning impacts, but this is omitted here. The impacts would also have been on a 0 to 1 percentage scale, so that when we multiply the possibilities and the impacts we get small numbers that can be normalized back onto a 0 to 1 scale in percentages. This is done in Table 4 for the top ten risks.

R1 R2 R3 R4 R5 Possibility
R1 1 0.14 0.20 3.00 0.33 8 %
R2 7.00 1 3.00 5.00 4.00 47 %
R3 5.00 0.33 1 4.00 3.00 26 %
R4 0.33 0.20 0.25 1 0.33 6 %
R5 3.00 0.25 0.33 3.00 1 14 %
Sum 16.33 1.93 4.78 16.00 8.67
CR value 0.088

Table 3.

Calculation of possibilities (subjective probabilities)
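As a rough check on Table 3, the same sketch can be applied to the possibility matrix above (the `ahp_priorities` helper from the earlier sketch is assumed to be in scope); the priorities should come out close to the reported values, with a CR near the reported 0.088.

```python
# Pair-wise possibility judgements for MR1..MR5, copied from Table 3
market = [[1.00, 0.14, 0.20, 3.00, 0.33],
          [7.00, 1.00, 3.00, 5.00, 4.00],
          [5.00, 0.33, 1.00, 4.00, 3.00],
          [0.33, 0.20, 0.25, 1.00, 0.33],
          [3.00, 0.25, 0.33, 3.00, 1.00]]

possibilities, cr = ahp_priorities(market)
for name, p in zip(["MR1", "MR2", "MR3", "MR4", "MR5"], possibilities):
    print(f"{name}: {p:.0%}")   # the chapter reports 8 %, 47 %, 26 %, 6 % and 14 %
print(f"CR = {cr:.3f}")         # the chapter reports 0.088, below the 0.10 limit
```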

From Table 4 we see that the single largest risk is Financial Risk (FR) number 5, which is ‘Payment guarantees not awarded’. It accounts for 27% of the total risk profile. Furthermore, the ten largest risks account for more than 80% of the total risk profile.

The largest methodological challenge in this step is to combine the risks and capabilities. In (Emblemsvåg and Kjølstad 2002), the link was made explicit using a matrix, but the problem with that approach is that it requires an almost inhuman ability to first think of risks independently of capabilities and then think extremely clearly afterwards when linking the risks and capabilities. The idea was good, but too difficult to use. It is therefore much more natural – almost inescapable – less time consuming and overall better to think of capabilities implicitly when we rate impacts and possibilities. A list of the capabilities is handy nonetheless, to remind ourselves of what we as a minimum should take into consideration when performing the risk analysis.

At the start of this section, we proposed to have four types of analyses that are integrated in the same model: 1) a risk analysis, 2) a sensitivity analysis of the risk analysis, 3) an uncertainty analysis and 4) a sensitivity analysis of the uncertainty analysis. So far, the latter three remain. The key to their execution is to model the input in the risk analysis matrices in two ways:

  1. Using symmetric distributions, such as the symmetric ±1 distributions (around the values initially set in the AHP matrices) and uniform distributions shown to the left in Figure 6. It is important that they are symmetric in order to make sure that the mathematical impact on the risk analysis of each input is traced correctly. This will enable us to trace accurately what factors impact the overall risk profile the most – i.e., the key risk factors.

  2. Modelling uncertainty as we perceive it as shown to the right in Figure 6. This will facilitate both an estimate on the consequences of the uncertainty in the input in the process as well as sensitivity analysis to identify what input needs improvement to most effectively reduce the overall uncertainty in the risk analysis – i.e., key uncertainty factors.

Risks Possibility Impact Risk norm Risk, acc.
FR5 Payment guarantees not awarded 10 % 12 % 27 % 27 %
FR4 No exit strategy for foreign investors 8 % 8 % 15 % 42 %
TR7 Undesirable mechanical behavior (folding and unfolding) 6 % 6 % 9 % 50 %
TR1 Competitors attack NoWait due to safety issues 9 % 3 % 6 % 56 %
MR3 Negative reactions from passengers due to the 90 degree turn 6 % 4 % 6 % 62 %
MR2 Longer lead-times in sales than expected 12 % 2 % 5 % 67 %
CR1 Business essentials are not presented clearly 9 % 2 % 5 % 72 %
MR4 Passengers exposed to accidents/problems on demo plant 1 % 13 % 4 % 76 %
CR4 Business plan lack focus on benefits 6 % 2 % 3 % 79 %
MR1 Customer decides to not buy any project 2 % 5 % 3 % 82 %

Table 4.

The ten largest risks in descending order
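The ‘Risk norm’ column in Table 4 can be read as the product of each risk's possibility and impact, normalized over all risks so that the column sums to 100 %. Below is a minimal sketch of that step; only the three largest risks from Table 4 are included, so the printed shares will not match the 27 %, 15 % and 9 % of the full 24-risk model.

```python
# (possibility, impact) pairs for a few risks, taken from Table 4
risks = {"FR5": (0.10, 0.12), "FR4": (0.08, 0.08), "TR7": (0.06, 0.06)}

products = {name: possibility * impact for name, (possibility, impact) in risks.items()}
total = sum(products.values())        # in the chapter the sum runs over all 24 risks
risk_norm = {name: value / total for name, value in products.items()}

for name, share in sorted(risk_norm.items(), key=lambda item: -item[1]):
    print(f"{name}: {share:.0%}")     # normalized 'risk norm' shares in descending order
```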

Before we can use the risk analysis model, we have to check the quality of the matrices. With 4 risk categories we get 8 pair-wise comparison matrices (4 with possibility estimates and 4 with impact estimates). Therefore, we first run a Monte Carlo simulation of 10,000 trials and record the number of times the matrices become inconsistent. The result is shown in the histogram on top in Figure 7. We see that the initial ranking of possibilities and impacts created only approximately 17% consistent matrices (the column to the left), and this is not good enough. The reason for this is that too many matrices had CR values of more than approximately 0.030. Consequently, we critically evaluated the pair-wise comparison matrices to reduce the CR values of all matrices to below 0.030. This resulted in massive improvements – about 99% of the matrices in all 10,000 trials remained consistent. This is excellent, and we can proceed to use the risk analysis model.
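The exact perturbation scheme behind the consistency screening in Figure 7 is not spelled out in the chapter, so the following is only a sketch under the assumption that each upper-triangle judgement is shifted by ±1 on the Table 1 scale; it reuses the `ahp_priorities` helper and the `market` matrix from the earlier sketches.

```python
import numpy as np

def perturb(matrix, step=1.0, rng=None):
    """Shift each upper-triangle judgement by +/- step on the Saaty scale,
    clamp it to the 1/9..9 range and restore the reciprocal structure."""
    rng = rng or np.random.default_rng()
    A = np.asarray(matrix, dtype=float).copy()
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            value = A[i, j] + rng.choice([-step, step])
            A[i, j] = min(max(value, 1.0 / 9.0), 9.0)
            A[j, i] = 1.0 / A[i, j]
    return A

def consistent_fraction(matrix, trials=10_000, cr_limit=0.10):
    """Fraction of randomly perturbed matrices whose CR stays below cr_limit."""
    rng = np.random.default_rng(0)
    ok = sum(ahp_priorities(perturb(matrix, rng=rng))[1] < cr_limit
             for _ in range(trials))
    return ok / trials

print(consistent_fraction(market))   # a low fraction flags matrices that need tightening
```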

A small sample of the results is shown in Figures 8 and 9. In Figure 8 we see a probability distribution for the 4 largest risks given a ±1 variation in all pair-wise comparisons. Clearly, there is very little overlap between the two largest risks, indicating that the largest risk is a clear number 1. The more overlap, the higher the probability that the results in Table 4 are ranked inconclusively. Individual probability charts that are much more accurate are also available after a Monte Carlo simulation.

Figure 6.

Modelling input in two different ways to support analysis of risk and uncertainty

In Figure 9 we see the sensitivity chart for the overall risk profile, or the sum of all risks, and this provides us with an accurate ranking of all key risk factors. Similar sensitivity charts are available for all individual risks as well. Note, however, that since Monte Carlo simulations are statistical methods there are random effects. This means that the inputs in Figure 9 that have a very small contribution to variance may be random. In plain words: when the contribution to variance is less than an absolute value of roughly 3 – 5 % we have to be careful. The more trials we run, the more reliable the sensitivity charts become.
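The chapter does not state how the contribution-to-variance figures in Figure 9 are computed; one common convention in Monte Carlo tools, used here purely as an assumption, is the normalized squared rank correlation between each input and the output.

```python
import numpy as np
from scipy.stats import spearmanr

def contribution_to_variance(inputs, output):
    """inputs: dict mapping an input name to its sampled values across trials;
    output: the simulated overall risk profile across the same trials.
    Returns signed shares whose absolute values sum to 1 (100 %)."""
    rho = {name: spearmanr(values, output)[0] for name, values in inputs.items()}
    total = sum(r * r for r in rho.values())
    return {name: np.sign(r) * (r * r) / total for name, r in rho.items()}

# Hypothetical usage: 'samples' would hold the perturbed pair-wise judgements and
# 'risk_profile' the simulated sum of all risks from the Monte Carlo run.
# shares = contribution_to_variance(samples, risk_profile)
```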

Similar results to Figures 8 and 9 can also be produced for the uncertainty analysis of the risk analysis. Such analysis can answer questions such as what information should be improved to increase the quality of the risk analysis, and what effects we can expect from improving the information (this can be simulated). Due to space limitations this is omitted here. Interested readers are referred to (Emblemsvåg 2010) for an introduction. For thorough discussions on Monte Carlo simulations, see (Emblemsvåg 2003).

Figure 7.

Improving the quality of the pair-wise comparison matrices

Figure 8.

The 4 largest risks in a subjective probability overlay chart given ±1 variation

The final part of this step is to critically evaluate these analyses. Due to the enormous output of analytical information in this step, the analyses also lend themselves to critical evaluation of the results. It should be noted that the AHP structure makes logic errors in the analysis very improbable. Hence, what we are looking for is illogical results, and the most important tool in this context is the sensitivity analyses.

Figure 9.

Sensitivity chart for the overall risk profile given ±1 variation

The next step in the augmented risk management process is Step 4. The running case is omitted from here onwards since the remaining steps are not significantly different from traditional approaches, except for the KM part, which was not conducted in this case because the company was a start-up with no KM system or prior experience.

4.4. Step 4 – Develop and implement strategies

After Step 3 we have abundant decision support for developing risk management strategies and information management strategies. For example, from Figure 10 we can immediately identify what risk management strategies are most suited (i.e., risk prevention and impact mitigation). Since most inputs are possibilities, risk prevention is the most effective approach. When the issues are impacts, impact mitigation is the most effective approach, and as usual it is a mix between the two that works the best.

These strategies are subsequently translated into both action plans (what to do) and contingency plans (what to do if a certain condition occurs). According to different surveys, less than 25% of projects are completed on time, on budget and to the satisfaction of the customer, see (Management Center Europe 2002), which emphasizes contingency planning as a vital part of risk management. “Chance favours the prepared mind”, in the words of Louis Pasteur. How to make action and contingency plans is well described by (Government Asset Management Committee 2001) and will not be repeated here.

Information management strategies will concern the cost versus benefit of obtaining better information. This is case specific, and no general guidelines exist except to consider the benefits and costs before making any information gathering decisions. Before making such decisions, it is also wise to review charts similar to Figure 8. If the uncertainty distributions do not overlap, there is no need to improve the information quality; it is good enough. If they do overlap, sensitivity charts can be used to pinpoint which inputs need better information quality.

KM in this step will include reviewing what has been done before, what went wrong, what worked well and why (knowledge and meta-knowledge). The extent to which this is relevant for a specific case will vary greatly, but the more cases that are assembled in the KM system, the greater the chances that something useful can be found to aid the process. In this step, however, it is equally important to use the results of the risk and capability analyses performed in Step 3. These results can help pose critical questions which in turn can be important for effective learning.

4.5. Step 5 – Measure performance

The final step in the augmented risk management process is to measure the actual performance, that is, to identify the actual outcomes in real life. This may sound obvious, but often programs and initiatives are launched without proper measurement of results and follow-up; as (Jackson 2006) notes, “Many Fortune 100 companies can plan and do, but they never check or act” [original italics] – a reference to the PDCA (Plan-Do-Check-Act) circle, which is fundamental in systematic improvement work. Checking relies on measurement and acting relies on checking; hence, without measuring performance it is impossible to gauge the effectiveness of the strategies and consequently learn from the process. This is commented on later.

Finally, it should be noted that although it may look as if the process ends after Step 5 in Figure 5, the risk management process is to operate continuously until the objectives are met.


5. Critical evaluation and future ideas

The research presented here is by no means finished. It is work in progress, although some issues seem to have received a more final form than others. The future work that should be undertaken includes the following:

  1. Using the AHP matrices for pair-wise comparison is highly effective, but constructing them manually (as done here) is not workable for practitioners, and many academics would probably also struggle with it. Therefore, if the augmented risk management approach is to become commonly used and accepted, it will need software that can create the matrices, help people fill in the reciprocal values and simplify the Monte Carlo simulations; a minimal sketch of what such support could compute is given after this list.

  2. There is significant work to be done on the KM side. Today we have a fairly good grasp of handling risks that occur quite frequently, as shown by (Neef 2005). The ultimate test of such systems is how they could help people deal with risks that are highly infrequent – so-called high impact, low probability (HILP) events. This type of event is common in the natural world, yet even professional bodies treat such risks the wrong way, as shown in (Emblemsvåg 2008). Such events are also far more common in the corporate world than we tend to believe, with enormous consequences, as (Taleb 2007) shows. Thus, KM systems that could tap a large variety of sources to support people in dealing with such difficult cases would be useful.

  3. Many risks cross the organizational boundaries between business units (known and unknown when the risk management process is initiated), and this raises the issues of interoperability and managing risk in that context, see (Meyers 2006). He defines interoperability as ‘the ability of a set of communicating entities to (1) exchange specified information and (2) operate on that information according to a specified, agreed-upon, operational semantics’. The augmented risk management process has not been tested in such a setting, which should be done to prove that it works across organizational boundaries. In fact, due to its more consistent risk analysis, terminology and decision support, it is expected that many of the problems (Meyers 2006) raises are solved by the augmented process, but this must be proven. KM in an interoperable environment, however, is a difficult case.

  4. Whether the augmented risk management process would work for statistical risk management processes is also a topic for future work. Intuitively it should, because statistical risk management also has a human touch, as the discussion in Section 1 shows.
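
As a minimal sketch of the kind of software support item 1 calls for, the following Python fragment fills in the reciprocal values of an AHP pairwise comparison matrix, derives priority weights from the principal eigenvector, and computes Saaty's consistency ratio. The judgement values are hypothetical and only illustrate the calculation; a full tool would also have to generate the matrices from the identified risks and feed the weights into the Monte Carlo simulations.

```python
import numpy as np

def complete_reciprocal(upper: np.ndarray) -> np.ndarray:
    """Fill the lower triangle of an AHP matrix with the reciprocals of
    the upper-triangle judgements and put 1s on the diagonal."""
    A = upper.astype(float).copy()
    n = A.shape[0]
    for i in range(n):
        A[i, i] = 1.0
        for j in range(i + 1, n):
            A[j, i] = 1.0 / A[i, j]
    return A

def ahp_weights(A: np.ndarray):
    """Return priority weights (principal eigenvector, normalised to sum
    to 1) and Saaty's consistency ratio CR = CI / RI."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}   # Saaty's random indices
    cr = ci / ri.get(n, 1.49)                   # 1.49 as rough fallback for larger n
    return w, cr

# Hypothetical judgements for three risks, upper triangle only
# (e.g. "risk 1 is judged 3 times as important as risk 2").
upper = np.array([[1, 3, 5],
                  [0, 1, 2],
                  [0, 0, 1]])
A = complete_reciprocal(upper)
weights, cr = ahp_weights(A)
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```

A consistency ratio above roughly 0.1 would normally prompt the analyst to revisit the judgements, which is exactly the kind of guidance such software should give the user automatically.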

There are probably other, less pressing issues to solve, but these are the focus going forward. Item 1 is mostly a software issue and is not commented on further here; if it is resolved, the risk analysis itself and the information management part would be covered. The issue concerning interoperability is a matter of testing and of developing the specifications and definitions needed when crossing organizational boundaries. KM, however, is a difficult case – particularly if we include the issues of interoperability.

Since the origin of risks is multi-layered, a systemic approach towards KM is important. Also, some risks materialize quite seldom from an individual perspective but quite often from a corporate perspective, such as financial crises. This is another argument for a systemic approach on a wide scale that assembles knowledge from many arenas. Finally, the very rare risks – those that are high impact and low probability – can only be handled in a systematic way, because any one person is unlikely to experience more than one such risk in decades and hence memory becomes too inefficient. Another issue is that, to evoke the right feeling of risk, people must internalize the risks, and this can be very difficult. To borrow from (Nonaka and Takeuchi 1995), explicit knowledge must be internalized and become tacit. Otherwise, the risk profile will not be understood.

From the literature we learn that the SECI model is an effective approach to handling tacit knowledge. Kusunoki, Nonaka et al. (1995), for example, demonstrated that the SECI model is good at explaining successes where system-based capacities are linked with multi-layered knowledge. This is directly relevant for risk management, as just mentioned. Thus, the SECI model seems to be a promising avenue for improving or complementing the more information-based KM systems that (Neef 2005) discusses. However, according to (Davenport and Prusak 1997), the philosophical position of Nonaka is in striking contrast to that of scholars subscribing to the information-based view of knowledge, which leads to IT-based KM systems. Therefore, we must bridge the gap between these two main avenues of KM: 1) the information-based KM systems, which are good at capturing large quantities of explicit knowledge, and 2) the SECI process, which is good at converting knowledge into action and vice versa, as well as generating new knowledge and making it explicit. How this bridge will work is still unclear and hence needs future work.


6. Closure

This chapter has presented an augmented risk management process. Compared to the traditional process there are many technical improvements, such as the use of AHP matrices to ensure more correct and consistent risk assessments, and the use of Monte Carlo simulations to improve the risk analysis and facilitate information management. However, it is still work in progress, and the real issue that needs to be resolved in the future, for risk management truly to become as important as it should be, is the establishment of a reliable KM process – particularly for HILP events. It is these events that cause havoc and need increased and systematic attention. How this can be achieved is currently unclear, except that it seems we must listen to Albert Einstein’s famous statement that “Imagination is more important than knowledge”.


Acknowledgments

I gratefully acknowledge the cooperation with Lars Endre Kjølstad on some parts of this chapter and on the project which led to the case presented here. I also greatly appreciate the publisher’s request to contribute to this book, as well as their cooperation.

References

  1. Argyris, C. (1977). "Double loop learning in organizations." Harvard Business Review 55 (Sept./Oct.).
  2. Argyris, C. (1978). Organizational Learning: A Theory of Action Perspective. Addison Wesley Longman Publishing Company. 304 p.
  3. Arrow, K. J. (1992). "I Know a Hawk from a Handsaw." In: Eminent Economists: Their Life and Philosophies, M. Szenberg (ed.). Cambridge, Cambridge University Press: 42-50.
  4. Asllani, A. and F. Luthans (2003). "What knowledge managers really do: an empirical and comparative analysis." Journal of Knowledge Management 7(3): 53-66.
  5. Backlund, F. and J. Hannu (2002). "Can we make maintenance decisions on risk analysis results?" Journal of Quality in Maintenance Engineering 8(1): 77-91.
  6. Bernstein, P. L. (1996). Against the Gods: The Remarkable Story of Risk. New York, John Wiley & Sons. 383 p.
  7. Bhutta, K. S. and F. Huq (2002). "Supplier selection process: a comparison of the Total Cost of Ownership and the Analytic Hierarchy Process approaches." Supply Chain Management: An International Journal 7(3): 126-135.
  8. Cavusgil, S. T., R. J. Calantone and Y. Zhao (2003). "Tacit knowledge transfer and firm innovation capability." Journal of Business & Industrial Marketing 18(1): 6-21.
  9. CCMD Roundtable on Risk Management (2001). A Foundation for Developing Risk Management Learning Strategies in the Public Service. Ottawa, Strategic Research and Planning Group, Canadian Centre for Management Development (CCMD). 49 p.
  10. Cheng, E. W. L. and H. Li (2001). "Analytic Hierarchy Process: An Approach to Determine Measures for Business Performance." Measuring Business Excellence 5(3): 30-36.
  11. Davenport, T. H. and L. Prusak (1997). Working Knowledge: How Organizations Manage What They Know. Boston, MA, Harvard Business School Press. 224 p.
  12. De Bondt, W. F. M. and R. H. Thaler (1985). "Does the Stock Market Overreact?" Journal of Finance 40(3): 793-805.
  13. Devenow, A. and I. Welch (1996). "Rational herding in financial economics." European Economic Review 40(3): 603-615.
  14. Dey, P. K. (2001). "Decision support system for risk management: a case study." Management Decision 39(8): 634-649.
  15. Drucker, P. F. (1986). Managing for Results: Economic Tasks and Risk-Taking Decisions. New York, HarperInformation. 256 p.
  16. Dubois, D., J. Lang and H. Prade. Possibilistic Logic. Toulouse, Institut de Recherche en Informatique de Toulouse, Université Paul Sabatier. 76 p.
  17. Earl, M. (2001). "Knowledge management strategies: toward a taxonomy." Journal of Management Information Systems 18(1): 215-233.
  18. Emblemsvåg, J. (2003). Life-Cycle Costing: Using Activity-Based Costing and Monte Carlo Methods to Manage Future Costs and Risks. Hoboken, NJ, John Wiley & Sons. 320 p.
  19. Emblemsvåg, J. (2008). "On probability in risk analysis of natural disasters." Disaster Prevention and Management: An International Journal 17(4): 508-518.
  20. Emblemsvåg, J. (2010). "The augmented subjective risk management process." Management Decision 48(2): 248-259.
  21. Emblemsvåg, J. and B. Bras (2000). "Process Thinking – A New Paradigm for Science and Engineering." Futures 32(7): 635-654.
  22. Emblemsvåg, J. and L. E. Kjølstad (2002). "Strategic risk analysis – a field version." Management Decision 40(9): 842-852.
  23. Emblemsvåg, J. and L. Tonning (2003). "Decision Support in Selecting Maintenance Organization." Journal of Quality in Maintenance Engineering 9(1): 11-24.
  24. Friedlob, G. T. and L. L. F. Schleifer (1999). "Fuzzy logic: application for audit risk and uncertainty." Managerial Auditing Journal 14(3): 127-135.
  25. Gilford, W. E., H. R. Bobbitt and J. W. Slocum jr. (1979). "Message Characteristics and Perceptions of Uncertainty by Organizational Decision Makers." Academy of Management Journal 22(3): 458-481.
  26. Government Asset Management Committee (2001). Risk Management Guideline. Sydney, New South Wales Government Asset Management Committee. 43 p.
  27. Hines, W. W. and D. C. Montgomery (1990). Probability and Statistics in Engineering and Management Science. New York, John Wiley & Sons. 732 p.
  28. Honderich, T. (ed.) (1995). The Oxford Companion to Philosophy. New York, Oxford University Press. 1009 p.
  29. Hwang, S. and M. Salmon (2004). "Market stress and herding." Journal of Empirical Finance 11(4): 585-616.
  30. Jackson, T. L. (2006). Hoshin Kanri for the Lean Enterprise: Developing Competitive Capabilities and Managing Profit. New York, Productivity Press. 206 p.
  31. Jones, M. E. and G. Sutherland (1999). Implementing Turnbull: A Boardroom Briefing. City of London, The Center for Business Performance, The Institute of Chartered Accountants in England and Wales (ICAEW). 34 p.
  32. Kahneman, D. and A. Tversky (1979). "Prospect Theory: An Analysis of Decisions under Risk." Econometrica 47: 263-291.
  33. Kangari, R. and L. S. Riggs (1989). "Construction risk assessment by linguistics." IEEE Transactions on Engineering Management 36(2): 126-131.
  34. Kaufmann, A. (1983). "Advances in Fuzzy Sets – An Overview." In: Advances in Fuzzy Sets, Possibility Theory, and Applications, P. P. Wang (ed.). New York, Plenum Press.
  35. Klir, G. J. (1991). "A principle of uncertainty and information invariance." International Journal of General Systems 17: 258.
  36. Klir, G. J. and B. Yuan (1995). Fuzzy Sets and Fuzzy Logic: Theory and Applications. New York, Prentice-Hall. 268 p.
  37. Kunreuther, H., R. Meyer and C. van den Bulte (2004). Risk Analysis for Extreme Events: Economic Incentives for Reducing Future Losses. Philadelphia, The National Institute of Standards and Technology. 93 p.
  38. Kusunoki, T., I. Nonaka and A. Nagata (1995). "Nihon Kigyo no Seihin Kaihatsu ni Okeru Soshiki Noryoku (Organizational capabilities in product development of Japanese firms)." Soshiki Kagaku 29(1): 92-108.
  39. Latzco, W. and D. M. Saunders (1995). Four Days With Dr. Deming: A Strategy for Modern Methods of Management. Prentice-Hall. 228 p.
  40. Li, M. and F. Gao (2003). "Why Nonaka highlights tacit knowledge: a critical review." Journal of Knowledge Management 7(4): 6-14.
  41. MacCrimmon, K. R. and D. A. Wehrung (1986). Taking Risks: The Management of Uncertainty. New York, The Free Press. 400 p.
  42. Management Center Europe (2002). "Risk management: More than ever, a top executive responsibility." Trend Tracker: An Executive Guide to Emerging Management Trends (October): 1-2.
  43. McNeill, D. and P. Freiberger (1993). Fuzzy Logic. New York, Simon & Schuster. 320 p.
  44. Meyers, B. C. (2006). Risk Management Considerations for Interoperable Acquisition. Pittsburgh, PA, Software Engineering Institute, Carnegie Mellon University. 28 p.
  45. Miller, G. A. (1956). "The magical number seven, plus or minus two: Some limits on our capacity for processing information." Psychological Review 63: 81-97.
  46. Neef, D. (2005). "Managing corporate risk through better knowledge management." The Learning Organization 12(2): 112-124.
  47. Nonaka, I. and H. Takeuchi (1995). The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. New York, Oxford University Press. 298 p.
  48. Peniwati, K. (2000). "The Analytical Hierarchy Process: Its Basics and Advancements." INSAHP 2000, Jakarta.
  49. Peters, E. E. (1999). Complexity, Risk and Financial Markets. New York, John Wiley & Sons. 222 p.
  50. Pieters, D. A. (2004). The Influence of Framing on Oil and Gas Decision Making: An Overlooked Human Bias in Organizational Decision Making. Marietta, GA, Lionheart Publishing. 55 p.
  51. Polanyi, M. (1966). The Tacit Dimension. New York, Anchor Day Books.
  52. Porter, M. E. (1998). Competitive Strategy: Techniques for Analyzing Industries and Competitors. New York, Free Press. 407 p.
  53. Robbins, M. and D. Smith (2001). BS PD 6668:2000 – Managing Risk for Corporate Governance. London, British Standards Institution. 33 p.
  54. Roos, N. (1998). "An objective definition of subjective probability." 13th European Conference on Artificial Intelligence, John Wiley & Sons.
  55. Saaty, T. L. (1990). The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation. Pittsburgh, RWS Publications. 480 p.
  56. Saaty, T. L. and E. Forsman (1992). The Hierarchon: A Dictionary of Hierarchies. Expert Choice, Inc.
  57. Sias, R. W. (2004). "Institutional Herding." The Review of Financial Studies 17(1): 165-206.
  58. Standards Australia (1999). AS/NZS 4360:1999 – Risk Management. Sydney, Standards Australia. 44 p.
  59. Stevens, S. S. (1946). "On the Theory of Scales of Measurement." Science 103(2684): 677-680.
  60. Takeuchi, H. (1998). "Beyond knowledge management: lessons from Japan." www.sveiby.com.au/.
  61. Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. London, Allen Lane. 366 p.
  62. The Economist (2002). "Barnevik’s bounty." The Economist 362: 62.
  63. The Economist (2004). "Signifying nothing?" The Economist 370: 63.
  64. The Economist (2009). Greed – and Fear: A Special Report on the Future of Finance. London, The Economist. 24 p.
  65. Webster (1989). Webster’s Encyclopedic Unabridged Dictionary of the English Language. New York, Gramercy Books. 1854 p.
  66. Wickramasinghe, N. (2003). "Do we practise what we preach? Are knowledge management systems in practice truly reflective of knowledge management systems in theory?" Business Process Management Journal 9(3): 295-316.
  67. Zadeh, L. A. (1965). "Fuzzy Sets." Information and Control 8: 338-353.
  68. Zimmer, A. C. (1986). "What Uncertainty Judgements Can Tell About the Underlying Subjective Probabilities." In: Uncertainty in Artificial Intelligence, L. N. Kanal and J. F. Lemmer (eds.). New York, North-Holland. 4: 249-258.

Notes

  • Note that the views presented in this chapter are those solely of the author and do not represent the company or any of its stakeholders in any fashion.
  • Personal experience as consultant in Det Norske Veritas (DNV).
  • The dichotomy of tacit and explicit knowledge is attributable to (Polanyi 1966), who found that tacit knowledge is a kind of knowledge that cannot be readily articulated because it is elusive and subjective. Explicit knowledge, by contrast, is the written word, the articulated and the like.
  • SECI (Socialization, Externalization, Combination, and Internalization) represents the four phases of conversion between explicit and tacit knowledge. Conversion cycles often start from the socialization phase (Li and Gao 2003).
  • The PDCA (Plan-Do-Check-Act) circle is fundamental in systematic improvement work.
