Quantitative Operational Risk Management

Written By

Aleksandra Brdar Turk

Published: 17 August 2010

DOI: 10.5772/intechopen.83881

From the Edited Volume

Advances in Risk Management

Edited by Giancarlo Nota

1. Introduction

In the last decade, risk management in the banking and insurance sectors has witnessed a rise in the importance of operational risk. It has moved up from the residual category of »other risks« to stand alongside credit and market risk, the two risk categories traditionally deemed most important in the industry and receiving most of the attention of risk managers in financial institutions and of regulators. The main reasons for this change are the powerful growth of the financial markets, their increasing deregulation and globalization, the growing organizational complexity of these institutions and their corporate and capital partnerships, which increase their overall exposure to risk, as well as the intense development of financial services, which are becoming accessible to an ever wider circle of investors.

The development of a comprehensive operational risk management system is the basis of a comprehensive, company-wide risk management model for any financial institution. An operational risk management system comprises the following steps in the analysis of operational risk: the identification of operational risk factors and events, the collection of data on operational risk events, the analysis of the gathered data and the use of the analysis results in decision making throughout the institution. Historically, financial institutions have had the upper hand in the development and implementation of risk management systems, including operational risk management, since risk management was first developed and incorporated in their business environment. This does not mean, however, that financial institutions hold an exclusive claim on using risk management tools to their advantage. As we will show later on, analogous operational risk management systems can also be developed for other industries, such as logistics, where business processes are the natural focus of operational risk management, or high-technology and fast-developing industries such as telecommunications, software, hardware, pharmaceuticals and biotechnology, where the development of new products and services requires substantial financial investment, with patent protection offering substantial rewards and patent lawsuits threatening substantial financial losses.

Any operational risk management method allows a company to improve its position in the business environment by identifying potential threats and potential losses, or simply by turning the company's attention to those internal processes which in the past have caused, or experienced, the most numerous or most financially burdensome operational risk events. The different types of operational risk identification and measurement methods require different degrees of involvement from the company; they also differ in their basic principle (qualitative or quantitative), in their complexity and comprehensiveness, and in the breadth of application of the analysis results. Among them, advanced measurement and quantitative methods allow a company to quantify and manage this important type of risk in the widest possible way, but they also require the most effort from the company: extensive development, integration and knowledge are needed for such a system to be used efficiently within the company's overall risk management system.

The quality of any risk management system can only be determined through its use, by testing the quality of the gathered data and of the forecasts made on the basis of such data. This is done by comparing the gathered data and forecasts to actual data arising from continuing operations. It is therefore imperative that all levels of business be included in such a data gathering operation; this means that a consensus across all levels of management is necessary and that all levels of operations in the company are aware of, and integrated into, the risk management system.

One of the other problems with such data analysis models, apart from the system setup effort, lies in extreme events, which are rare by nature and extreme in their consequences, both quantitative (e.g. financial) and qualitative (e.g. loss of reputation). Such events may not yet have occurred in the recent history of a financial institution, say in the last 10 years. A logical, albeit erroneous, conclusion drawn from such data would be that extreme events do not happen and will not happen in the future. Such a conclusion may cause the underestimation of minimal capital requirements or capital reserves in financial institutions, or the underestimation of risk event provisions in other companies.

Therefore, the development of an operational risk management model which takes into consideration and integrates data on extreme events is an important issue for companies with a short history of data. This is obviously the situation for recently founded companies with a short business history or for companies which have only recently developed an operational risk data gathering system. Such companies can be found in most fast-developing industries, as well as in business environments that have changed due to shifts in legislation, mode of operation, or political or macroeconomic systems. Here, any data gathered in the past can be considered an unreliable basis for use within such a model.

In the following sections of the chapter, we first present the different risk measurement methods and propose the one we consider most suitable for operational risk management. We then present an innovative approach to using this method by adapting it to a poor data environment and showing how simulations can be used to obtain additional data for analysis. We illustrate the use of the method on an example and analyze the results of the model. In conclusion, we show some possibilities for integrating the method within the company's risk management system.

2. Risk Measurement Methods

2.1. The Choice of a Suitable Method

Operational risk can be defined as the risk remaining after eliminating market, credit, interest and exchange risks (Allen & Bali, 2007). The Basel Committee on Banking Supervision defines operational risk as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk (BIS, 2004; Van Greuning & Bratanovic, 2003). It is in the New Basel Accord (BIS, 2004) that operational risk is given a greater consideration and the methods for its identification, measurement and management are explored. It is also in this document that operational risk is included in the calculation of minimum capital requirements for banks.

The New Basel Accord allows financial institutions to choose one of the following methods for the calculation of minimal capital requirements: the Basic Indicator Approach, the Standardized Approach or the Advanced Measurement Approach (AMA). All of the proposed methods can be modified to some extent to measure and manage operational risk, but the AMA is the most suitable, as it also allows such models to be implemented in companies other than financial institutions, i.e. in companies which require them for the calculation of capital adequacy or of provisions.

Within the AMA, a company has the possibility of developing its own specialized operational risk measuring model, with the premise that the model be comprehensive, transparent and systematic. The AMA includes the Internal Measurement Approach, the Scorecard Approach and the Loss Distribution Approach (LDA). With the LDA, the company creates a matrix of business processes and possible operational risk events and determines, for each combination of event and business process, the probability of occurrence and the severity of the loss. This is the basis for determining the distribution function of the total loss incurred in a year (or another period). The company then uses the loss distribution to calculate the Value at Risk (VaR) at a 99.9% confidence level.

According to Chernobai et al. (2007), the advantages of using the LDA are its high sensitivity, the possibility of integrating internal data, external data and expert estimates into the loss distribution model, and the high reliability of results, provided that reliable data are used. Some disadvantages of the method include VaR's failure to meet the sub-additivity criterion in the case of fat-tailed distributions (Nešlehová et al., 2006), the interdependence and correlation between model input variables, the lack of a diversification effect with extreme event distributions (Embrechts et al., 2002; Ibragimov, 2005), the questionable reliability of high-quantile statistical indicators such as VaR in the presence of extreme events (McNeil et al., 2005), and the general problem of data gathering and data quality in an environment of scarce and extreme events, which are often well-protected information (de Fontnouvelle et al., 2003). Many of these disadvantages can be averted by applying a few modifications to the LDA, which will be presented in the next section.

A key issue in constructing such a model is the identification of the correct loss distribution function for the gathered data. By correctly choosing the loss distribution function, a company can calculate the probability and the total loss incurred by operational risk events and consequently maintain an adequate level of capital or provisions for operational risk losses. Classical statistical distributions are quite adequate for data falling within the main body of the loss distribution, but the fit may deteriorate significantly in the tail, especially in the case of heavy-tailed data, where the use of Extreme Value Theory (EVT) is much more suitable (Moscadelli, 2004). Due to the extreme nature, low frequency and high severity of operational loss events, which can cause significant losses to a financial institution, it is imperative to achieve a good fit in the tail of the distribution.

2.2. The Use of Value at Risk

The principal method of estimating the capital charge (or provisions) for operational risk within the LDA is the Value at Risk (VaR) measure. From a market risk measure, VaR has become a much more versatile measure of risk (Jorion, 2001; Manganelli & Engle, 2001), thanks to actuarial methods of estimating loss distribution functions from historical data.

VaR is, in a way, a further development of classical derivatives valuation models such as the Black-Scholes model and refers to the volatility of a portfolio value F_Δt within a timeframe Δt or on a target date:

$$\mathrm{VaR}(\alpha, \Delta t) = E(F_{\Delta t}) - Q(F_{\Delta t}, \alpha) \qquad (E1)$$

where Q(F_Δt, α) is the quantile corresponding to the confidence level α.

Considering that the aggregate operational risk losses are distributed according to an actuarial model, with X the loss for a single operational risk event and N_Δt the number of loss events in a period of time Δt, e.g. one year, the cumulative losses follow the stochastic process:

$$S_{\Delta t} = \sum_{k=1}^{N_{\Delta t}} X_k \qquad (E2)$$

and the cumulative distribution function (CDF) can be written as:

$$F_{S_{\Delta t}}(s) = P(S_{\Delta t} \le s) = \begin{cases} \sum_{n=1}^{\infty} P(N_{\Delta t} = n)\, F_X^{n*}(s), & s > 0 \\ P(N_{\Delta t} = 0), & s = 0 \end{cases} \qquad (E3)$$

where F_X is the distribution function of the stochastic variable X, while F_X^{n*} denotes the n-fold convolution of F_X with itself.

Clearly, such a distribution function is non-linear in X and N, and an analytical approach to determining its parameters is not viable. Instead, alternative methods can be used (Klugman et al., 2004; Enrique, 2006), such as the kernel method (Butler & Schachter, 1998) and the Monte Carlo simulation, which will be demonstrated on our example in the following parts of the chapter.

Within the actuarial models, the operational risk VaR can be calculated as follows (Chernobai et al., 2007):

$$1 - \alpha = F_{S_{\Delta t}}(\mathrm{VaR}) = \sum_{n=1}^{\infty} P(N_{\Delta t} = n)\, F^{n*}(\mathrm{VaR}) \qquad (E4)$$

or by using the inverse distribution function:

$$\mathrm{VaR} = F_{S_{\Delta t}}^{-1}(1 - \alpha) \qquad (E5)$$

Keeping in mind some specifics of cumulative operational risk loss data under fat-tailed distributions, where the maximum observed value can significantly affect the cumulative loss S_n (see Embrechts et al., 1997), the fat-tailed VaR can be approximated as:

$$\mathrm{VaR} \approx F_X^{-1}\!\left(1 - \frac{\alpha}{E[N_{\Delta t}]}\right) \qquad (E6)$$
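
To make the aggregation concrete, the following sketch simulates the compound loss of Eq. (E2) and compares the simulated quantile of Eq. (E5) with the single-loss tail approximation of Eq. (E6). The Poisson frequency and lognormal severity parameters are purely illustrative assumptions, not values from the chapter.

```python
# Minimal sketch: aggregate yearly losses S (Eq. E2) are simulated from an
# assumed Poisson frequency and an assumed lognormal severity; the 99.9 % VaR
# is read off as the empirical quantile (Eq. E5) and compared with the
# single-loss tail approximation (Eq. E6).
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(0)
alpha, n_years = 0.001, 100_000            # confidence level 99.9 %, simulated years
lam = 25.0                                 # assumed yearly event frequency E[N]
severity = lognorm(s=1.2, scale=20_000)    # assumed single-loss distribution F_X

counts = rng.poisson(lam, size=n_years)    # N for each simulated year
losses = np.array([severity.rvs(size=n, random_state=rng).sum() for n in counts])

var_sim = np.quantile(losses, 1 - alpha)       # VaR = F_S^{-1}(1 - alpha)
var_approx = severity.ppf(1 - alpha / lam)     # Eq. E6 approximation
print(f"simulated VaR: {var_sim:,.0f} EUR, approximation: {var_approx:,.0f} EUR")
```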

Some of the advantages of using VaR as a risk measure according to Wilson (1998) include the possibility of comparing different types of risk and different subjects, i.e. different financial institutions or, when the method is extended to non-financial sectors, other companies in highly competitive industries. VaR can be used directly as a measure for creating provisions for risk events, and it can also be applied in certain financial analysis measures, such as ROE or RAROC.

The use of VaR also presents a few problems (Yamai & Yoshiba, 2002). These include the fact that it reports only the 99.9th-percentile loss, while all higher losses lie to the right of that threshold, creating a potentially misleading picture of potential losses. It also fails to take into consideration the dependencies between risk factors and processes, which can significantly affect the total size of potential operational risk losses by underestimating or overestimating the projected losses. The use of VaR enables companies to study operational loss data, but it cannot in itself prevent high operational losses; it must therefore be integrated within an efficient and comprehensive risk management system. Finally, the sub-additivity criterion, which VaR fails to meet, is very important from a methodological point of view, as discussed by Artzner et al. (1999), Chavez-Demoulin et al. (2006) and Embrechts et al. (2002).

As an alternative to VaR, some authors propose the use of the Expected Shortfall (ES) or Conditional Value at Risk (CVaR) (see Chernobai et al., 2007; Embrechts et al., 2008). It can be calculated as:

$$\mathrm{CVaR} = E\left[\, S_{\Delta t} \mid S_{\Delta t} \ge \mathrm{VaR} \,\right] \qquad (E7)$$

CVaR measures the expected loss in case an event in the right tail of the distribution, at or beyond VaR, occurs. Unlike VaR, which may fail the sub-additivity property, CVaR is a sub-additive risk measure suitable for use with fat-tailed and extreme event distributions.
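
Given a sample of simulated yearly losses, CVaR follows directly from Eq. (E7) as the average of the losses at or beyond VaR. A minimal sketch, assuming a loss sample such as `losses` from the previous snippet:

```python
# Sketch of Eq. E7: CVaR (expected shortfall) as the mean aggregate loss
# conditional on being at or beyond VaR, from a simulated loss sample.
import numpy as np

def var_cvar(losses, alpha=0.001):
    """Return (VaR, CVaR) at confidence level 1 - alpha from a loss sample."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, 1 - alpha)
    tail = losses[losses >= var]          # losses in the right tail beyond VaR
    return var, tail.mean()               # CVaR = E[S | S >= VaR]
```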

2.3. The Use of Extreme Value Theory (EVT)

Operational loss data are typically right-skewed, leptokurtic (i.e. concentrated around the value 0) and fat-tailed on the positive (right) side of the distribution (see Cruz, 2002; Moscadelli, 2004; De Fontnouvelle et al., 2006). The use of EVT methods is therefore recommended.

There are two basic analytical methods in EVT: the block maxima model, which studies the most severe losses within blocks of data organized by time interval, and the peaks-over-threshold (POT) model, which analyzes data above a high threshold. In our quantitative operational risk measurement model we have chosen the POT method, as follows.

Let u be a high threshold and F_u the distribution function of the data above this threshold, also known as the conditional excess distribution function (Chernobai et al., 2007):

$$F_u(x) = P(X - u \le x \mid X > u) = \frac{F(u + x) - F(u)}{1 - F(u)} \qquad (E8)$$

as shown in Figure 1.

Figure 1.

The conditional excess distribution function above the high threshold u.

Embrechts et al. (1997) have shown that, for high values of u, the conditional excess distribution function F_u takes the shape of a two-parameter generalized Pareto distribution (GPD) with the following distribution function:

$$F(x) = \begin{cases} 1 - \left(1 + \xi \dfrac{x - \mu}{\beta}\right)^{-1/\xi}, & \xi \ne 0 \\ 1 - e^{-(x - \mu)/\beta}, & \xi = 0 \end{cases} \qquad (E9)$$

where

$$x \ge \mu \ \text{ for } \xi \ge 0, \qquad \mu \le x \le \mu - \frac{\beta}{\xi} \ \text{ for } \xi < 0 \qquad (E10)$$

x being an individual operational risk loss above the threshold, μ the location parameter, usually assumed to be 0, β > 0 the scale (size) parameter and ξ the distribution's shape parameter. The GPD becomes interesting for operational risk event data when ξ > 0, where it is fat-tailed. For ξ < 1 the mean value exists, whereas the variance and standard deviation can only be calculated for ξ < 0.5.
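
The GPD distribution function of Eq. (E9) is simple to evaluate directly; the sketch below is a plain transcription of it, with the ξ = 0 case treated as the exponential limit (scipy.stats.genpareto implements the same function for μ = 0, with its parameter c playing the role of ξ):

```python
# Plain transcription of the GPD distribution function in Eq. E9.
import numpy as np

def gpd_cdf(x, xi, beta, mu=0.0):
    z = np.clip((np.asarray(x, dtype=float) - mu) / beta, 0.0, None)  # support starts at mu
    if xi == 0.0:
        return 1.0 - np.exp(-z)                # exponential limit for xi = 0
    t = np.maximum(1.0 + xi * z, 0.0)          # for xi < 0 the support ends at mu - beta/xi
    return 1.0 - np.power(t, -1.0 / xi)
```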

There are several advantages to using the EVT in modeling operational risk event data. The EVT describes the characteristics of the tail of a distribution function and is a direct tool for the analysis of high-severity, low-frequency extreme event data around and beyond a high threshold. By using the POT method, one can analyze the catastrophic losses which lie beyond the high threshold, which is a clear advantage over other risk measures, such as VaR. The EVT allows either theoretical (parametric) methods for determining the parameters of the distribution function, and specifically of its tail, or non-parametric measures, such as the Hill estimator, which possesses interesting asymptotic qualities. The advantages and disadvantages of using EVT for high-quantile risk measures are also discussed in Diebold, Schuermann and Stroughair (1998), Embrechts (2000), Aragonés, Blanco and Dowd (2000), Fernandez (2003b), Chavez-Demoulin and Roehl (2004), Emmer, Klüppelberg and Trüstedt (1999) and Bensalah (2000).

Notwithstanding some of the shortcomings of using the EVT (the limited number of observations, which can lead to inaccuracy in parameter estimates; the reliance on graphical methods; potentially long calculation times with large data samples and complex models; and the focus on high-severity events, with the potential underestimation of medium- and low-severity events), we believe that in the analysis of operational risk events the advantages of EVT over VaR outweigh its weaknesses, especially when the modifications of the method described in the following section are applied.

3. Adaptations and Simulation

3.1. Construction of the Model

The operational risk measurement model presented hereby is based on the Loss Distribution Approach (LDA) of the New Basel Accord.

It starts out as a bottom-up analysis of business processes, their classification, sorting and grouping into meaningful groups with common characteristics. This allows us to observe operational risk events in practice more easily and facilitates their identification and further study.

The bottom-up approach requires detailed knowledge of the business processes within the company, which is where the lowest management levels can provide very useful feedback. The business processes are then evaluated, and the most likely risk factors for each business process are identified and entered into the model.

The model of P processes and R risk factors is then a P x R matrix, as shown in Figure 2. We then continue by estimating the frequency of loss events for each process/risk factor combination (F(λ)_P) and the probability distribution function of the loss severity (F(a, s)_R).

Figure 2.

The building elements of the operational risk measurement model.
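
As an illustration only, these building elements can be represented as a simple mapping from process/risk-factor pairs to their assumed frequency and severity parameters; all names and numbers below are hypothetical placeholders, not values from the chapter.

```python
# Illustrative data structure for the P x R model matrix: each business
# process / risk factor cell stores an assumed yearly event frequency lambda
# and the mean (a) and standard deviation (s) of the loss severity in EUR.
from dataclasses import dataclass

@dataclass
class Cell:
    lam: float   # expected number of loss events per year, F(lambda)_P
    a: float     # mean loss severity, F(a, s)_R
    s: float     # standard deviation of the loss severity

matrix = {
    ("research and development", "IT error or failure"): Cell(lam=4.0, a=12_000, s=9_000),
    ("general administration", "internal fraud and theft"): Cell(lam=0.3, a=50_000, s=40_000),
    # ... one Cell per process/risk-factor combination (P x R entries in total)
}
```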

At this point, one must consider the potential hazard of relying on experts from within the company, as they tend to overemphasize the importance of individual processes and are prone to offering overly detailed analyses of their own field of operation. Sound judgment is definitely called for when setting up the model: the mapping of business processes and the subsequent identification of risk factors should not be too specific, both in order to find common ground between processes within a sub-segment and in order to keep the size of the model manageable and cost-effective.

The following steps in implementing the model require several modifications of the LDA, which are illustrated in the following subsection.

3.2. Modifications of Methodology

Firstly, there is the digression from the Basel "operating territory" into the field of non-financial companies, which we propose and will later show by example. By selecting a company that operates in a high-technology field such as software production, we are choosing a company that is on average smaller than a bank and at the same time a single organization, thus keeping the model simple while still encompassing a full range of business processes and risk factors. We believe that keeping the model manageable in size is one of the most important features that make it cost-effective and its results comprehensible; for the analysis of a large company we therefore propose that the company first be segmented into a few larger divisions (e.g. production, sales, research etc.) and that an individual operational risk measurement model be set up for each such division.

High-technology and other fast-growing industries and their companies, as well as companies from developing countries, possess a relatively short business history, whether due to a short-lived stable economic, capital market and legislative environment or to the sheer novelty of their existence; this history ranges from ten to twenty years in the "oldest" new industries and the most developed fast-growing countries to almost none in countries where the capital market is only a few years old and still in its first phases of development. This is even truer for an individual company. The lack of history, and thus of operational risk event data, poses a significant problem for the consistency and validity of the proposed model, since a large database of consistent quality is the basis of the method (Dell'Aquila & Embrechts, 2006; Ebnöther et al., 2001). The second proposed modification of the LDA is aimed at solving this problem: we propose widening the database to include external data from the whole industry and including subjective expert assessments and estimates of operational risk losses.

In collecting experts' estimates, we adjust for their subjectivity by instructing the experts to base their estimates on the available historical data; we adjust the estimates for differences in company size; and we correct or eliminate historical data which are biased by adjustment periods due to restructuring, reorganization, mergers, legislative changes (e.g. tax or capital markets regulation, which is known to change often in developing countries) and similar events within each company. By adding experts' estimates of operational risk losses and adjusting historical data, we not only increase the quality and size of the database, creating an adequate base on which to build the model, but also include the very important expectation or prediction component of potential losses, which significantly diminishes one of the problems of relying solely on historical data (de Fontnouvelle et al., 2003).

Within this modification we propose another, similar deviation from the LDA model: the inclusion of the company's potential or opportunity losses, including those incurred while detecting a potential (or actual) loss event and eliminating its potential consequences before they reach a significant severity, by analyzing its sources or risk factors and limiting their influence through additional internal control mechanisms.

Thirdly, we address the problem of VaR potentially failing the sub-additivity property of coherent risk measures. Instead of determining a 99.9%-confidence-level VaR for each element of the business process/risk factor matrix and subsequently summing these VaRs, we use a simulation to obtain the yearly operational risk losses for the entire company, which are then used for a yearly VaR calculation. In this way we eliminate not only the sub-additivity problem but also another potential threat: the overestimation of VaR that can occur when several VaR measures are simply added together, since that approach does not take the diversification effect into consideration (Jorion, 2001; Embrechts et al., 2002). The proposed modification also removes the task of including correlation parameters in the final VaR estimate (Böcker & Klüppelberg, 2007; Chavez-Demoulin et al., 2006). By simulating total yearly operational risk losses, we obtain a loss distribution function consistent with the sum of the individual business process/risk factor loss distribution functions, while simultaneously simplifying the model and the analysis and reducing calculation time and costs.

The use of a simulation is indicated, since discrete events like operational risk loss events can be described by a Poisson process, which allows for an elegant analysis of the frequencies of event occurrences, including the calculation of event occurrences over longer or shorter time periods and the estimation of the combined event occurrences of two different business processes governed by two separate Poisson processes, the latter thanks to the additivity of the Poisson distribution under convolution (Chernobai et al., 2007).

By using a Monte Carlo simulation in which event occurrences are generated by a stochastic Poisson process with a yearly frequency λ for each business process, we can create a time series with enough operational risk loss event data to include in the model. For the frequency of event occurrences and the loss severity distribution, we use the available historical data and experts' estimates of the mean loss severity (a) and of the spread of the data given by the standard deviation (s), obtaining a different distribution function for each business process/risk factor combination of the model matrix; these vary in shape and scale from normal distributions to asymmetric, skewed distributions such as the log-normal.

This brings us to the core of the operational risk measurement model, the determination of the probability distribution function (PDF) of the losses. Here we use the EVT methods described in section 2 of the chapter. After successfully processing the data and determining the PDF, we can use the results for the estimation of potential losses using either VaR or CVaR, as described in section 2 of the chapter and, subsequently, determine the capital requirements or provisions that are necessary to protect the company from potential financial losses should an extreme operational risk event ever occur.

4. Analysis of Results

4.1. Business Process and Risk Factor Mapping

The concept of analyzing a company's business processes and the realization of their importance have been known for decades; their management gained particular prominence with Porter's value chain (Porter, 1985). Harmon (2007) and Jeston & Nelis (2008) emphasize the importance of system-oriented process management within the company, keeping in mind that processes are inseparably linked and together contribute to the efficiency of the whole system.

A thorough business process analysis and business process mapping can identify weak or critical elements in a company’s value chain. By adding the information on risk factors that affect some or all business processes and by setting up additional control and safety mechanisms within the processes, a company can effectively manage those risks and thereby manage its overall business risk.

The bottom-up approach that we have chosen for business process mapping in our operational risk measuring model starts with a list of detailed business processes that are being executed throughout the company and are later combined with other processes into process groups with similar characteristics.

Since operational risk management is already widespread in financial institutions, especially banks, and since, as stated above, the logic behind operational risk management and the use of the proposed model can be generalized to other industrial sectors as well, we will illustrate the construction of the model on a software company, starting with the mapping of its business processes. (For an example of the model's use in financial institutions, see Brdar Turk, 2009.)

In mapping the business processes we have also analyzed the organizational structure, the supply, production and sales processes and the products of the company. We have identified the following four business process groups: (1) research and product development, (2) product and client support, (3) marketing and sales, (4) general administration.

The risk factors’ identification was based on the Basel Accord and subsequently modified to the specifics of our example company. We have identified the following seven risk factors: (1) internal fraud and theft, (2) external fraud and theft, (3) clients, services and practices, (4) IT error or failure, (5) employment and work environment safety, (6) execution and management of processes and (7) physical damage to assets. Note that all the risk factors are chosen and defined in such a way as to be relevant for all business segments, i.e. they occur in each of the four business segments in our example.

4.2. Simulation

Note that this entire step can be skipped if enough empirical data are available for direct use in the model.

Based on available company, industry and competitors' loss event data, we next estimate the probability distribution type and its most important parameters (mean, standard deviation) for the size and frequency of the loss events caused by each risk factor in each business segment (i.e. for the 28 process/risk factor combinations).

With this data, we use a simulation software package such as GoldSim, AnyLogic or MATLAB to generate 1,000 repetitions of a business year of our sample software company, thus creating a large sample of loss event data. The sum of all loss events within one business year follows a composite Poisson distribution with intensity λ, the sum of the independent Poisson intensities λ_i of the individual business processes, each loss event also being independent of the other loss events. Although in the real world loss events as well as business processes are correlated, this mathematical simplification eases the computation of the model parameters.
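
For readers without access to a commercial simulation package, a minimal open-source sketch of this step is shown below. It assumes, purely for illustration, lognormal severities parameterized from the mean a and standard deviation s of each cell of the hypothetical matrix sketched earlier; neither the severity family nor the parameter values are prescribed by the chapter.

```python
# 1,000 simulated business years: each cell of the process/risk-factor matrix
# generates a compound Poisson loss with an assumed lognormal severity.
import numpy as np

def lognormal_params(a, s):
    """Convert the severity mean/st.dev. into the lognormal mu and sigma."""
    sigma2 = np.log(1.0 + (s / a) ** 2)
    return np.log(a) - 0.5 * sigma2, np.sqrt(sigma2)

def simulate_years(matrix, n_years=1000, seed=1):
    rng = np.random.default_rng(seed)
    yearly = np.zeros(n_years)
    for cell in matrix.values():                       # e.g. the 28 combinations
        mu, sigma = lognormal_params(cell.a, cell.s)
        counts = rng.poisson(cell.lam, size=n_years)   # events per simulated year
        for year, n in enumerate(counts):
            yearly[year] += rng.lognormal(mu, sigma, size=n).sum()
    return yearly                                      # composite Poisson aggregate

yearly_losses = simulate_years(matrix)                 # `matrix` from the earlier sketch
```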

4.3. Basic Data Analysis

The generated (or gathered) data are first analyzed with elementary statistical methods, as shown in the example below.

Figure 3.

Histogram of loss event severity (shown in EUR) and frequency from example data.

The histogram alone already reveals the skewed nature and the extended right tail of the loss event data distribution, which is further confirmed by the descriptive statistics: the mean loss severity is slightly greater than the median, and the positive skewness (with moderate kurtosis) indicates a right-skewed distribution with a fat right tail.

Mean (EUR) 67.647,78
Median (EUR) 63.872,13
Standard Deviation (EUR) 41.424,74
Sample Variance 1.716.008.838,65
Kurtosis 0,62
Skewness 0,76
99th percentile (EUR) 207.005,48

Table 1.

Descriptive statistics for example data.

Similar analyses can be made for individual business processes or risk factors, thus identifying the risk factor that causes the most frequent, the smallest or the most extreme operational risk losses. The analysis can also show which business process experiences the most loss events and which process's losses differ most significantly from a normal distribution. These should be the focus of subsequent operational risk management activities and of the implementation of additional safety mechanisms and controls.

4.4. Distribution Type and Parameter Estimation

By using a statistical software package such as R, Stata or SPSS, we first establish that we are indeed dealing with an extreme value distribution and that it is appropriate to use EVT methods for parameter estimation. This is done with graphical methods such as the mean excess plot, defined as the mean of all differences between the data values exceeding a high threshold u and the threshold u itself, for different values of u:

$$e(u) = E\left[\, X - u \mid X > u \,\right] \qquad (E11)$$

In the case of a fat-tailed distribution, the mean excess plot looks like a straight upward-sloping line (Chernobai et al., 2007; Cruz, 2002). The mean excess plot in Figure 4 clearly indicates that the example data is heavy-tailed.

Figure 4.

Mean excess plot for simulated example data.
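
A sketch of how such a mean excess plot can be produced from Eq. (E11), evaluated over a grid of candidate thresholds; an approximately linear, upward-sloping result points to a fat-tailed sample:

```python
# Empirical mean excess function of Eq. E11: for each candidate threshold u,
# the average exceedance of the losses above u. Plotting e(u) against u
# gives the mean excess plot.
import numpy as np

def mean_excess(losses, n_thresholds=50):
    x = np.sort(np.asarray(losses, dtype=float))
    candidates = x[np.linspace(0, len(x) - 2, n_thresholds, dtype=int)]  # skip the maximum
    e = np.array([np.mean(x[x > u] - u) for u in candidates])
    return candidates, e
```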

We then choose several empirical distributions and estimate their parameters. Among the distributions available in EVT we have chosen the most commonly used and tested ones (see Cruz, 2002; Moscadelli, 2004; Ebnöther et al., 2001; Chapelle et al., 2005): the generalized Pareto distribution (GPD), the generalized extreme value distribution (GEV), the Gumbel and the Weibull distribution. GPD and GEV have most frequently been found to fit extreme losses in financial institutions. The Gumbel distribution is a special case of the GEV with a shape parameter of 0, which makes it easier to determine the other parameters. Both the Gumbel and the Weibull are thin-tailed distributions, which consequently tend to underestimate extreme losses in the right tail.

As noted before, we have chosen to analyze the data in the distribution's right tail and, in accordance with EVT, we have chosen the POT method for parameter estimation. The first step is the selection of the high threshold u. Since this choice can affect the estimated parameters, it is important to use additional diagnostic methods for the threshold determination. With the GPD, we can use the Hill plot to determine the shape parameter ξ (Chernobai et al., 2007; Cruz, 2002), which stabilizes horizontally at the most suitable threshold u; the Hill estimator for each number of upper order statistics k is:

$$\hat{\xi}^{H}_{k} = \frac{1}{k} \sum_{j=1}^{k} \ln X_{(j)} - \ln X_{(k)} \qquad (E12)$$

Additionally, we can use the plot of the shape parameter ξ against the value of the high threshold u, the appropriate threshold being the value at which the plot stabilizes horizontally.

Figure 5.

Hill plot and Shape estimator plot for example data

In our example data the Hill plot does not stabilize significantly at any threshold value, whereas the ξ-u plot shows 150.000 EUR as the appropriate threshold, as can be seen in Figure 5.
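
For completeness, a minimal sketch of the Hill estimator of Eq. (E12), evaluated over a range of k upper order statistics so that it can be plotted against k (the Hill plot); it assumes all losses are strictly positive:

```python
# Hill estimator of Eq. E12 for k = 2 .. k_max upper order statistics.
import numpy as np

def hill_plot_data(losses, k_max=200):
    x = np.sort(np.asarray(losses, dtype=float))[::-1]    # descending order statistics
    ks = np.arange(2, min(k_max, len(x)) + 1)
    xi_hat = np.array([np.mean(np.log(x[:k])) - np.log(x[k - 1]) for k in ks])
    return ks, xi_hat
```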

For the estimation of the GPD parameters we have chosen the maximum likelihood estimation (MLE) method (its use in the distribution's tail is shown by Nylund, 2001), which for our example data and a threshold u of 150.000 EUR converges to a single solution. The GEV parameters were estimated using the block maxima method and MLE. The Gumbel and Weibull distributions were fitted to the whole data sample (not only the right tail), since they are thin-tailed distributions.
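
Assuming scipy is available, the POT fit itself reduces to a maximum likelihood fit of the GPD to the excesses over the chosen threshold; in scipy.stats.genpareto the shape parameter c corresponds to ξ and the scale to β:

```python
# POT fit sketch: MLE of the GPD on the excesses over the threshold u,
# with the location parameter fixed at zero.
import numpy as np
from scipy.stats import genpareto

def fit_gpd_pot(losses, u=150_000.0):
    losses = np.asarray(losses, dtype=float)
    excesses = losses[losses > u] - u                 # data above the high threshold
    xi, _, beta = genpareto.fit(excesses, floc=0.0)   # MLE with location fixed at 0
    return xi, beta, excesses.size                    # shape, scale, n_u
```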

The parameters of all fitted distributions are shown in Table 2.

Distribution / parameter            Parameter value    Standard error of estimate
GPD
β (scale)                           26.370,00          2.752,17
ξ (shape)                           -0,081             0,085
u (high threshold)                  150.000,00
n_u (number of values above u)      136
f_<u (density of data below u)      0,861
GEV
μ (location)                        94.296,74          1.031,12
σ (scale)                           34.047,64          940,28
ξ (shape)                           -0,118             0,0190
Gumbel
μ (location)                        93.463,22          1.816,19
σ (scale)                           32.427,01          1.048,58
Weibull
shape                               3,16               0,0712
scale                               12.387,20          878,03

Table 2.

Distribution parameters estimated from example data

4.5. Goodness-of-fit Tests

It is important to test the goodness of fit of all fitted distributions in order to maximize the analytical power of the model and its potential use in risk management and extreme event prediction. The most common tool is the quantile-quantile (QQ) plot, which plots the quantiles of the fitted distribution against the actual data. The fit is adequate if the plot lies close to the 45° diagonal. The QQ plot can also be drawn for the tail data only, which is important for assessing the GEV and GPD fits, since these were fitted to tail data only.
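
A minimal QQ-plot sketch for the tail excesses against the fitted GPD, using standard plotting positions; points near the diagonal indicate an adequate fit:

```python
# QQ plot of empirical tail excesses against the fitted GPD quantiles.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import genpareto

def gpd_qq_plot(excesses, xi, beta):
    x = np.sort(np.asarray(excesses, dtype=float))
    probs = (np.arange(1, len(x) + 1) - 0.5) / len(x)       # plotting positions
    theo = genpareto.ppf(probs, xi, loc=0.0, scale=beta)    # fitted GPD quantiles
    plt.scatter(theo, x, s=10)
    plt.plot([theo.min(), theo.max()], [theo.min(), theo.max()], "k--")
    plt.xlabel("fitted GPD quantiles")
    plt.ylabel("empirical quantiles")
    plt.show()
```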

Figure 6.

QQ plots for GEV for the whole sample (gevr / data) and the tail of the distribution (gevr1502 / data 150).

The results of the graphical tests are shown in Figure 6. They show an apparently adequate fit of the GEV for the whole sample, whereas the fit in the tail is clearly inadequate. The plots in Figure 7 show an inadequate fit to the Weibull and Gumbel distributions as well. The GPD QQ plot (Figure 8) shows an adequate fit.

Figure 7.

QQ plots for Gumbel (gumbr) and Weibull (weibullr) distributions.

Figure 8.

The QQ plot for GPD.

The GPD was additionally tested with plots of its density function and inverse distribution function and with residual plots, which show an adequate fit in the tail and no autocorrelation or heteroscedasticity in the residuals (Figure 9); we can therefore conclude that the example data are distributed according to the GPD.

The goodness of fit can also be assessed with non-parametric methods such as Pearson's chi-square test (see D'Agostino & Stephens, 1986). The test has two important shortcomings: firstly, its dependence on the number of classes k into which the data are segmented, and secondly, the need for a large data sample for the asymptotic properties of the chi-square statistic to hold. This can also be seen from the chi-square test results in Table 3.

Figure 9.

Goodness-of-fit tests for the GPD with estimated parameters β = 28.027,00 and ξ = -0,0781.

The second group of tests are the empirical distribution function (EDF) based tests, which can be used for all distributions and measure the vertical distance between the empirical distribution derived from the data and the theoretical distribution (see Anderson & Darling, 1952; D'Agostino & Stephens, 1986; Chernobai et al., 2007). The most commonly used are the Kolmogorov-Smirnov (KS) test and the Anderson-Darling (AD) test.

Let us denote the empirical distribution function as Fn(x) and the theoretic distribution function as F(x). The KS test is defined as:

$$KS = \sqrt{n}\, \max\{D^{+}, D^{-}\} \qquad (E13)$$

where D+ and D- are the largest positive and negative deviations of the empirical distribution from the theoretical distribution.

Using the probability integral transformation, we obtain the following formula for estimating the KS test from sample data (D'Agostino & Stephens, 1986):

$$KS = \sqrt{n}\, \max\left\{ \sup_{j}\left(\frac{j}{n} - z_{(j)}\right),\ \sup_{j}\left(z_{(j)} - \frac{j-1}{n}\right) \right\} \qquad (E14)$$

where z_(j) is the value of the theoretical distribution function at the j-th ordered sample value.

The AD test is defined as:

$$AD = \sqrt{n}\, \sup_{x} \frac{\left| F_n(x) - F(x) \right|}{\sqrt{F(x)\left(1 - F(x)\right)}} \qquad (E15)$$

whereas for sample data we have:

$$AD = \sqrt{n}\, \max\left\{ \sup_{j} \frac{\frac{j}{n} - z_{(j)}}{\sqrt{z_{(j)}\left(1 - z_{(j)}\right)}},\ \sup_{j} \frac{z_{(j)} - \frac{j-1}{n}}{\sqrt{z_{(j)}\left(1 - z_{(j)}\right)}} \right\} \qquad (E16)$$

The KS test focuses mostly on the middle of the distribution and gives the values in this area a greater weight within the final result, whereas the AD test focuses more on the distribution tails, making it more suitable for fat-tailed distribution goodness-of-fit testing.
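
Both sample statistics can be transcribed directly from Eqs. (E14) and (E16); p-values would still have to be obtained from the appropriate tables or by simulation, so the sketch below returns only the statistics. The `cdf` argument is the fitted distribution function, so that z_(j) = cdf(x_(j)) for the sorted sample.

```python
# Sample forms of the KS (Eq. E14) and AD (Eq. E16) statistics.
import numpy as np

def ks_ad_statistics(sample, cdf):
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    z = np.clip(cdf(x), 1e-12, 1.0 - 1e-12)          # guard the AD denominator
    j = np.arange(1, n + 1)
    d_plus, d_minus = j / n - z, z - (j - 1) / n
    ks = np.sqrt(n) * max(d_plus.max(), d_minus.max())
    w = np.sqrt(z * (1.0 - z))
    ad = np.sqrt(n) * max((d_plus / w).max(), (d_minus / w).max())
    return ks, ad

# e.g. ks, ad = ks_ad_statistics(excesses, lambda x: genpareto.cdf(x, xi, loc=0, scale=beta))
```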

Test   GPD               GEV                 Gumbel              Weibull
χ²     17.452 [0,189]    does not converge   does not converge   does not converge
KS     0,854 [0,822]     0,027 [0,398]       0,517 [0,000]       0,999 [0,000]
AD     48,259 [0,952]    0,351 [0,572]       3,275 [0,000]       2,989 [0,000]

Table 3.

Goodness-of-fit test results for example data

The results of the KS and AD tests in Table 3 show that GPD is indeed the most suitable distribution for the example data.

4.6. Use of Results

The most obvious use of the analysis results is the determination of VaR for the company. We can see from Table 1 that the 99th percentile, i.e. the 10th largest of the 1,000 simulated yearly losses, amounts to 207.005,48 EUR, which is the VaR at a 99% confidence level. This is the loss the company may suffer in the 1% worst-case scenario, and it should set aside provisions (or capital) of this amount to protect itself from financial distress should such a loss actually occur.

To illustrate the underestimation problem with VaR, we calculated VaR and CVaR from the distribution function fitted to the example data and compared them to the empirical VaR (derived directly from sorted example data).

Criteria                    Value (EUR)
VaR - empirical data        207.005,48
VaR - from fitted GPD       215.276,32
CVaR - from fitted GPD      218.358,44

Table 4.

Value at Risk calculations from empirical data and from fitted GPD

From the results shown in Table 4, the following clearly holds:

$$\mathrm{VaR}_{emp} \le \mathrm{VaR}_{GPD} \le \mathrm{CVaR}_{GPD} \qquad (E17)$$

which suggests that the most suitable criterion for the assessment of capital adequacy or the creation of provisions against risk losses, if potential losses are not to be underestimated, is the Conditional Value at Risk.
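
A hedged sketch of this comparison, using the standard POT tail estimators for VaR and CVaR from a fitted GPD (see e.g. McNeil et al., 2005) next to the empirical quantile of the simulated yearly losses; it assumes ξ < 1 and ξ ≠ 0, with u, ξ and β taken from the earlier POT fit:

```python
# Empirical VaR versus GPD-based VaR and CVaR (expected shortfall) at level p.
import numpy as np

def pot_var_cvar(losses, u, xi, beta, p=0.99):
    losses = np.asarray(losses, dtype=float)
    n, n_u = len(losses), int(np.sum(losses > u))
    var_emp = np.quantile(losses, p)                                 # empirical VaR
    var_gpd = u + beta / xi * ((n / n_u * (1.0 - p)) ** (-xi) - 1.0)
    cvar_gpd = var_gpd / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)   # requires xi < 1
    return var_emp, var_gpd, cvar_gpd
```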

There are other uses for the analysis’ results as well. One can focus on a single business sub-process or segment, identify the contribution of individual risk factors to the loss estimates of that segment and use the results to implement additional risk control and prevention mechanisms, significantly reducing the potential risk losses. Also, in times of increased overall risk due to changes in the organization itself or in the business environment, the company can reassess its risk losses estimates and temporarily increase its capital or provisions.

4.7. Back testing

The analysis and evaluation of the quality of the capital adequacy determination methods themselves (the Internal Capital Adequacy Assessment Process, ICAAP) is one of the crucial elements of control and supervision as defined by the Basel Accord's second pillar. Its goal is to ensure that banks and other financial institutions maintain an adequate capital structure for all the risks they are exposed to and that they constantly monitor and improve their risk management practices.

This regulatory mechanism can be adapted to other companies as well, especially where advanced risk management methods, such as the proposed model, are used. One of the simplest methods for evaluating a model is back testing. It involves comparing predictions and analysis results to subsequently gathered loss event data, for example by comparing a certain percentile (e.g. the 80th) of predicted losses to the actual number of loss events above that percentile threshold.
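
As an illustration, a simple exceedance-count back test can be sketched as follows; it assumes, hypothetically, that loss observations are independent, so that the number of exceedances is binomially distributed under the model:

```python
# Exceedance-count back test: compare the number of realized losses above a
# chosen percentile of the predicted loss distribution with the count the
# model implies.
import numpy as np
from scipy.stats import binomtest

def backtest_exceedances(predicted_losses, actual_losses, percentile=0.80):
    threshold = np.quantile(predicted_losses, percentile)
    exceedances = int(np.sum(np.asarray(actual_losses, dtype=float) > threshold))
    test = binomtest(exceedances, n=len(actual_losses), p=1.0 - percentile)
    return threshold, exceedances, test.pvalue       # a small p-value flags the model
```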

Naturally, in risk management, where companies deal with extreme and rare events, this may take considerable time (e.g. a decade), but the evaluation of smaller and more frequent losses can be effective as well. The evaluation should be performed periodically and also after any major loss event or major change in the organization or its business environment. If discrepancies between the model's predictions and actual data are discovered, the input data should be re-evaluated: potential new risk factors identified, factors which have become irrelevant to the company's operations eliminated, the frequency and severity of loss events re-estimated for each business process/risk factor element of the matrix, and new empirical data integrated into the model. The fitted theoretical distribution should also be re-evaluated, as it too may significantly affect the results.

A re-evaluation and redesign of the entire model is sometimes not necessary, since the integration of new loss event data alone may significantly adjust the model's results; radically changing other inputs may then be cost-inefficient and may make the model more complex and harder to use and manage.

5. Further Integration of Operational Risk Management

Apart from using the proposed model's results directly as a tool for determining capital requirements or forming provisions, the model can be integrated into the company's overall risk management system. The creation of such a system starts with a risk management strategy (Andersen, 1998; Cruz, 2002; Marshall, 2001), which determines how the company will deal with all the different risks that influence its operations. In this strategy, also called a risk management framework or policy, the company must first define the different risks it is subjected to and then choose a methodology for their management, which includes the identification, measurement, recording and analysis of risk event data, reporting, internal controls and risk management system support, the organization of these tasks and a clear definition of employees' responsibilities and authorizations.

Figure 10.

The learning and development processes for risk management activities.

The process of risk management is a continuous learning process, as illustrated in Figure 10. The identification and prompt examination of loss events leads to an update of the risk event database and, if necessary, of the underlying risk measurement model, all of which results in an evaluation of the overall risk the company is facing and enables management to take corrective and protective measures. An important part of the loop is the reporting process, which needs to start at the lowest management levels and continue right up to top management, where risk awareness is crucial for adequate executive decisions. A more passive approach to risk management may include only loss event recording, analysis and reporting. The next step is a more defensive risk management policy, which also includes more in-depth risk analysis and proposals for corrective measures, such as damage control and additional protective mechanisms and controls within business processes. An active risk management framework, the final stage of risk management, may include risk event prediction, the development of complex causal risk models, the use of securitization, provisions or insurance, and the calculation of risk-adjusted performance measures such as risk-adjusted return on capital (RAROC), economic value added (EVA) and volatility of profits (Cruz, 2002; Marshall, 2001).

References

1. Andersen, A. (1998). Operational Risk and Financial Institutions. Risk Books. ISBN 1-89933-204-9
2. Anderson, T. W. & Darling, D. A. (1952). Asymptotic Theory of Certain "Goodness of Fit" Criteria Based on Stochastic Processes. The Annals of Mathematical Statistics, 23(2), June 1952, 193-212. ISSN 0003-4851
3. Allen, L. & Bali, T. G. (2007). Cyclicality in Catastrophic and Operational Risk Measurements. Journal of Banking & Finance, 31(4), April 2007, 1191-1235. ISSN 0378-4266
4. Artzner, P., Delbaen, F., Eber, J. & Heath, D. (1999). Coherent Measures of Risk. Mathematical Finance, 9(3), July 1999, 203-228. ISSN 0960-1627
5. Böcker, K. & Klüppelberg, C. (2007). Multivariate Models for Operational Risk (to be published). Quantitative Finance, February 2010. ISSN 1469-7696
6. Brdar Turk, A. (2009). A Quantitative Operational Risk Management Model. WSEAS Transactions on Business and Economics, 6(5), May 2009, 241-253. ISSN 1109-9526
7. Butler, J. S. & Schachter, B. (1998). Estimating Value-at-Risk with a Precision Measure by Combining Kernel Estimation with Historical Simulation. Review of Derivatives Research, 1(4), February 1998, 371-390. ISSN 1380-6645
8. Chapelle, A., Crama, Y., Hübner, G. & Peters, J. (2005). Measuring and Managing Operational Risk in the Financial Sector: An Integrated Framework. Working paper, Ecole de Gestion de l'Université de Liège, Liège
9. Chavez-Demoulin, V., Embrechts, P. & Nešlehová, J. (2006). Quantitative Models for Operational Risk: Extremes, Dependence and Aggregation. Journal of Banking and Finance, 30(10), October 2006, 2635-2658. ISSN 0378-4266
10. Chernobai, A. S., Rachev, S. T. & Fabozzi, F. J. (2007). Operational Risk: A Guide to Basel II Capital Requirements, Models and Analysis. Wiley & Sons, Hoboken. ISBN 0-47178-051-0
11. Cruz, M. G. (2002). Modeling, Measuring and Hedging Operational Risk. John Wiley & Sons. ISBN 0-47151-560-4
12. D'Agostino, R. B. & Stephens, M. A. (1986). Goodness-of-Fit Techniques. Marcel Dekker, New York. ISBN 0-82477-487-6
13. De Fontnouvelle, P., De Jesus-Rueff, V., Jordan, J. & Rosengren, E. (2003). Using Loss Data to Quantify Operational Risk. Working Paper, Federal Reserve Bank of Boston, Boston
14. De Fontnouvelle, P., De Jesus-Rueff, V., Jordan, J. & Rosengren, E. (2006). Capital and Risk: New Evidence on Implications of Large Operational Losses. Journal of Money, Credit & Banking, 38(7), October 2006, 1819-1846. ISSN 0022-2879
15. Dell'Aquila, R. & Embrechts, P. (2006). Extremes and Robustness: A Contradiction? Financial Markets and Portfolio Management, 20(1), March 2006, 103-118. ISSN 1555-4961
16. Ebnöther, S., Vanini, P., McNeil, A. & Antolinez-Fehr, P. (2001). Modelling Operational Risk. Working Paper, Zurich Cantonal Bank, Zurich
17. Embrechts, P., Furrer, H. & Kaufmann, R. (2008). Different Kinds of Risk. In: Handbook of Financial Time Series, Andersen, T. G. (Ed.), Springer. ISBN 978-3-54071-296-1
18. Embrechts, P., Klüppelberg, C. & Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance. Springer-Verlag, Berlin. ISBN 978-3-54060-931-5
19. Embrechts, P., McNeil, A. & Straumann, D. (2002). Correlation and Dependence in Risk Management: Properties and Pitfalls. In: Risk Management: Value at Risk and Beyond, Dempster, M. A. H. (Ed.), 176-223, Cambridge University Press. ISBN 0-52178-180-9
20. Enrique, N. (2006). Practical Calculation of Expected and Unexpected Losses in Operational Risk by Simulation Methods. Banca & Finanzas: Documentos de Trabajo, 1(1), January 2007, 1-12
21. Harmon, P. (2007). Business Process Change, Second Edition: A Guide for Business Managers and BPM and Six Sigma Professionals. Morgan Kaufmann Publishers. ISBN 1-55860-758-7
22. Ibragimov, R. (2005). Portfolio Diversification and Value at Risk Under Thick-Tailedness. Working Paper 05-10, Yale International Centre of Finance, New Haven
23. Bank for International Settlements (BIS), Basel Committee on Banking Supervision (2004). International Convergence of Capital Measurement and Capital Standards: A Revised Framework. Basel
24. Jeston, J. & Nelis, J. (2008). Management by Process: A Practical Road-map to Sustainable Business Process Management. Butterworth-Heinemann, Elsevier. ISBN 0-75068-761-4
25. Jorion, P. (2001). Value at Risk: The New Benchmark for Managing Financial Risk, Second Edition. McGraw-Hill, New York. ISBN 0-07135-502-2
26. Klugman, S. A., Panjer, H. H. & Willmot, G. E. (2004). Loss Models: From Data to Decisions, 2nd edition. John Wiley & Sons, Hoboken (NJ). ISBN 0-47121-577-5
27. Manganelli, S. & Engle, R. F. (2001). Value at Risk Models in Finance. Working Paper, European Central Bank, Frankfurt a. M.
28. Marshall, C. L. (2001). Measuring and Managing Operational Risks in Financial Institutions: Tools, Techniques, and Other Resources. Wiley & Sons. ISBN 0-47184-595-7
29. McNeil, A. J., Frey, R. & Embrechts, P. (2005). Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press, Princeton (NJ). ISBN 978-0-69112-255-7
30. Moscadelli, M. (2004). The Modelling of Operational Risk: Experience with the Analysis of the Data Collected by the Basel Committee. Economic Working Paper 517, Banca d'Italia, Rome
31. Nešlehová, J., Embrechts, P. & Chavez-Demoulin, V. (2006). Infinite-mean Models and the LDA for Operational Risk. Journal of Operational Risk, 1(1), Spring 2006, 3-25. ISSN 1744-6740
32. Porter, M. E. (1985). Competitive Advantage: Creating and Sustaining Superior Performance. Free Press, New York. ISBN 0-68484-146-0
33. Van Greuning, H. & Brajnovic Bratanovic, S. (2003). Analyzing and Managing Banking Risk: A Framework for Assessing Corporate Governance and Financial Risk, Second edition. World Bank, Washington (D.C.). ISBN 0-82135-418-3
34. Wilson, T. (1998). Value at Risk. In: Risk Management and Analysis: Measuring and Modelling Financial Risk, Alexander, C. (Ed.), John Wiley & Sons. ISBN 0-47197-957-0
35. Yamai, Y. & Yoshiba, T. (2002). Comparative Analyses of Expected Shortfall and Value-at-Risk: Their Validity under Market Stress. Monetary and Economic Studies, 20(3), October 2002, 181-237. ISSN 0288-8432
