Open access peer-reviewed chapter

Quantifying Risk Using Loss Distributions

Written By

Retsebile Maphalla, Moroke Mokhoabane, Mulalo Ndou and Sandile Shongwe

Submitted: 01 November 2022 Reviewed: 02 November 2022 Published: 06 December 2022

DOI: 10.5772/intechopen.108856

From the Edited Volume

Applied Probability Theory - New Perspectives, Recent Advances and Trends

Edited by Abdo Abou Jaoudé


Abstract

Risk is unavoidable, so the quantification of risk in any institution is of great importance, as it allows the management of an institution to make informed decisions. Lack of risk awareness can lead to the collapse of an institution; hence, our aim in this chapter is to cover some of the ways used to quantify risk. There are several types of risks; however, in this chapter, we focus mainly on the quantification of operational risk using parametric loss distributions. The main objective of this chapter is to outline how operational risk is quantified using statistical distributions. We illustrate risk quantification with parametric loss distributions using "Taxi claims data", which turn out to be well fitted by one of these loss distributions, and we show in full how to quantify the risk for this specific dataset. More importantly, we also illustrate how to implement quantification of risk for two other scenarios: (i) when the underlying distribution is assumed unknown and the nonparametric empirical distribution approach is used, and (ii) when the generalized extreme value (GEV) distribution approach is used. The latter two scenarios were not the main objective but were done in an effort to compare our results with some of the more commonly used techniques in real-world risk analysis scenarios.

Keywords

  • risk quantification
  • loss distributions
  • parametric
  • nonparametric
  • value-at-risk

1. Introduction

The topic presented in this chapter serves to give novice risk readers an idea of how institutions quantify their operational risks using parametric loss distributions. For various financial institutions, risk is classified into different components. Firstly though, risk can be defined in terms of the probability of an event occurring and the potential loss associated with it. Put differently, [1] defined risk as a condition in which there is a possibility of an adverse deviation from the outcome that is expected or hoped for. Secondly, [2] provides an excellent account of four main categories of financial risks (more applicable in the banking sector), i.e. credit risk, market risk, operational risk, and others. For more discussion on some of the latter mentioned risks, see [3]'s Chapter 7 and the corresponding tools and techniques discussed in [3]'s Chapter 8. In this chapter, we focus mainly on the quantification of operational risk.

There have been multiple instances in the previous century of big multinational firms experiencing total collapse due to a lack of risk control. For instance, employees may embezzle funds from the firm, rogue employees may make unauthorized deals, etc. For good examples of the latter, see Chapter 1 of [2] and Chapter 20 of [3]. Note though, in the South African context, the best examples of poor operational risk management are Steinhoff, Tongaat Hulett, Venda Building Society (VBS) mutual bank, Eskom, South African Airways (SAA) and, more recently, the Capitec bank computer systems failure during a peak period of the month in 2022. It is worth mentioning that a variety of sources have indicated that, in most instances, losses incurred due to operational risk originate from poor management practices, outsourcing of nonstrategic activities, or external factors.

In this chapter, a study on operational loss data is conducted. We aim to determine the loss distribution that best fits the data by performing goodness-of-fit tests on the proposed models and estimating the parameters using appropriate statistical methods, so that it becomes possible to forecast or quantify the loss to be anticipated.

To date, financial institutions are making it a norm to manage their exposure to different types of risks; see [4]. Quantification of risk is of great importance: a poor evaluation of risk in any financial institution may easily lead to the bankruptcy of that firm and would consequently become a major concern for national and international financial regulatory bodies. This research work is compiled to contribute to the improvement of the quantification of operational risk using the loss distribution approach (LDA). According to [5], operational risk is the probability of loss resulting from insufficient or unsuccessful internal processes, people, and systems or from external events. Consequently, in the next section, we review the five most common parametric loss distributions, namely the Pareto, Burr, gamma, Weibull, and log-normal distributions. These loss distributions are reviewed mainly in the context of quantification of operational risk.

This topic is applicable to a wide variety of fields, as all institutions face some type of risk which, if left unnoticed and unmanaged, could lead to the total collapse of the firm or of the worldwide economy (as seen in the last two global financial crises—the domino effect). Operational risk is quantified in several institutions; according to [6], this is done because we cannot predict the future for certain, but we can prepare for and anticipate it. Risk quantification gives us an insight into what we can anticipate. Quantification of risk is done in several financial institutions, e.g. banks, universities, insurance companies, etc. The limitation of our research is as follows: it is applicable in scenarios where the underlying operational loss data fit (or almost fit) the loss distributions considered here (i.e. the Pareto, Burr, gamma, Weibull, and log-normal distributions). In the event of the data not passing the goodness-of-fit tests for any of the latter distributions, then in the concluding section (i.e. Section 4), we list different alternative approaches that readers could consider.

Note that the field of risk identification and quantification has become more important as globalization is expanding. To date, different financial institutions are realizing the importance of quantifying risk to avoid huge losses that may even result in bankruptcy. The aspect of risk quantification is pivotal in making the best business decisions.

Therefore, the rest of the chapter is structured as follows: in Section 2, we review several publications that have covered operational risk using different loss distributions. Moreover, we take note of various approaches that were used to quantify risk exposure. Next, in Section 3, we use a dataset to illustrate quantification of risk using loss distributions. Given that this research work is a continuation of previous literature studies, in Section 4, we provide some concluding remarks and offer several possible future research topics.


2. Literature review

2.1 Introduction

The three major classes of financial risks and their corresponding definitions are defined as follows:

  • Market risk is the risk arising from exposure to capital markets; see [3]. For instance, an event that leads to insurance companies paying out large claims, or banks being exposed to losses due to an adverse movement in the stock market.

  • Credit risk arises when losses are observed due to the inability of a debtor to perform an obligation in accordance with agreed terms; see [2].

  • Operational risk is the risk of loss resulting from inadequate or failed internal processes, people, or systems or external events; see [7].

Note that [8] argued that the probability of an operational risk event increases with the number of personnel and with greater transaction volume. The latter is also based on the study by [9], who investigated the effect of bank size on operational loss amounts and deduced that, on average, operational losses are predicted to increase roughly in proportion to the fourth root of bank size. Note that there are different classes of operational losses that the financial industry must be aware of; see [2]:

  1. high frequency and high magnitude

  2. high frequency and low magnitude

  3. low frequency and low magnitude

  4. low frequency and high magnitude.

It has been argued that category (i) is not plausible in the financial industry, while categories (ii) and (iii) are relatively unimportant and can often be prevented. However, category (iv) tends to cause the most devastating losses, with the best example being the 1995 collapse of Barings Bank (also portrayed in the movie "Rogue Trader"). Consequently, banks must be extremely cautious of these types of losses as they have caused bankruptcy in many financial institutions. Low-frequency/high-severity operational losses can be extreme in size when compared to the rest of the data. If we construct a histogram of the loss data, the low-frequency/high-severity loss events would be placed at the far-right end, which is often referred to as the "tail event". Because operational loss data exhibit such tail events, we say that the data are heavy-tailed.

In different fields that use "Data Science" techniques (e.g. insurance, banks, etc.), different types of distributions are used to model data due to the different products that are offered by several financial institutions. These financial institutions are increasingly measuring and managing their exposure to different types of risks; see [4]. A poor evaluation of risk in any financial institution may easily lead to the bankruptcy of that firm and is consequently a major concern for national and international financial regulatory bodies.

It is important to mention that risk data from different products offered by financial institutions (e.g. micro-insurance, re-insurance, investment, savings, stock exchange, etc.) are distributed differently; see for instance [10]. Consequently, a thorough understanding of a variety of distributions is a must for an aspiring data scientist. According to [2], the most commonly applied basic distributions in quantifying operational risk are those that are skewed to the right (right-tailed). There are two main ways to categorize the right-tailed loss distributions, i.e. parametric and nonparametric approaches. More specifically, in this chapter, we will consider the following most common parametric loss distributions: (i) exponential, (ii) gamma, (iii) Weibull, (iv) Pareto, (v) Burr, and (vi) log-normal. It is worth mentioning that these are not the only existing loss distributions; for example, a combination of two of the above, i.e. the composite Weibull-Pareto distribution, is discussed in the context of risk in [11].

The next subsections discuss the following: Section 2.2 provides some distributional properties of the considered parametric loss distributions and, more importantly, reviews some publications that applied these distributions in the context of risk analysis. Next, Section 2.3 gives a brief discussion of nonparametric loss distributions (which are seldom used), and Section 2.4 discusses some well-known methods of quantifying risk. Section 2.5 discusses other types of risks that use the LDA. Finally, Section 2.6 gives some concluding remarks.

2.2 Parametric loss distributions

A summary of some publications that discussed the application of parametric loss distributions in operational risk is provided in Table 1. This table was constructed so that one can easily identify which types of loss distributions are discussed in these separate publications. The corresponding loss distribution function properties are listed in Table 2, with the expressions adopted from [12, 13] and Chapter 6 of [2].

Publications classified (rows): [2], [4], [5], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22]. Loss distribution columns: gamma, Pareto, Weibull, log-normal, Burr, other.

Table 1.

A summary of publications discussed in this chapter and their classification according to the type of loss distribution.

Loss distribution | PDF f(x) | CDF F(x) | Domain of variable/parameter(s) | Mean | Variance
Exponential | λe^(−λx) | 1 − e^(−λx) | x > 0, λ > 0 | 1/λ | 1/λ²
Gamma | [β^α/Γ(α)] x^(α−1) e^(−βx) | Γ(α, βx) | x > 0, α > 0, β > 0 | α/β | α/β²
Weibull | αβx^(α−1) e^(−βx^α) | 1 − e^(−βx^α) | x > 0, β > 0, α > 0 | β^(−1/α) Γ(1 + 1/α) | β^(−2/α) [Γ(1 + 2/α) − Γ²(1 + 1/α)]
Pareto | αβ^α/x^(α+1) | 1 − (β/x)^α | α > 0, β < x < ∞, β > 0 | αβ/(α − 1) (for α > 1) | αβ²/[(α − 1)²(α − 2)] (for α > 2)
Burr | αγβ^α x^(γ−1)/(β + x^γ)^(α+1) | 1 − [β/(β + x^γ)]^α | x > 0, γ > 0, α > 0, β > 0 | β^(1/γ) Γ(1 + 1/γ) Γ(α − 1/γ)/Γ(α) | β^(2/γ) [Γ(1 + 2/γ) Γ(α − 2/γ)/Γ(α) − Γ²(1 + 1/γ) Γ²(α − 1/γ)/Γ²(α)]
Log-normal | [1/(√(2π) σx)] e^(−(log x − μ)²/(2σ²)) | Φ[(log x − μ)/σ] | x > 0, −∞ < μ < ∞, σ > 0 | e^(μ + σ²/2) | (e^(σ²) − 1) e^(2μ + σ²)

Table 2.

Some properties of different loss distributions.

where Γ(α, βx) denotes the regularized lower incomplete gamma function, Γ(α, βx) = [1/Γ(α)] ∫₀^(βx) t^(α−1) e^(−t) dt, for which [12] provides a truncated-series approximation.

Note that when the different parameters in Table 2 are varied, the distributions tend to vary significantly, especially in the tail area. The latter will be illustrated in detail in the next section.

2.2.1 Pareto distribution

The Pareto distribution is a very heavy-tailed distribution that takes on positive values, and its parameter α determines the degree of tail heaviness. The Pareto tail is monotonically decreasing, meaning that the tail decreases as x increases and is thicker for values of x closer to zero. To derive the Pareto distribution, assume that a variate x follows an exponential distribution with mean β^(−1); furthermore, suppose that β follows a gamma distribution; then x follows a Pareto distribution; see [17]. Note that when α < 1, a very heavy tail is encountered, with the mean and variance being infinite. This means that losses of infinite size are theoretically possible. The extreme heaviness of the Pareto tail makes it ideal for modeling losses of high magnitude; see [2]. There are also different versions of the Pareto distribution that are used in risk analysis, the most popular of these variations being the generalized Pareto distribution (GPD). The GPD is especially good for modeling data greater than a high threshold, also known as estimating the tails of extreme losses.
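As a quick check of this mixture derivation, the short R sketch below simulates the two stages with arbitrary, purely illustrative mixing parameters a and b; under these assumptions the resulting variate has the Pareto-type (Lomax) survival function P(X > x) = [b/(b + x)]^a.

```r
# Sketch of the gamma-exponential mixture that yields a Pareto-type variate.
# 'a' and 'b' are arbitrary illustrative mixing parameters, not fitted values.
set.seed(1)
a <- 2; b <- 5
beta <- rgamma(1e5, shape = a, rate = b)   # beta ~ gamma
x <- rexp(1e5, rate = beta)                # x | beta ~ exponential(rate = beta)

# Compare the simulated tail with the closed-form survival (b / (b + x))^a:
xx <- c(1, 5, 10)
cbind(simulated   = sapply(xx, function(t) mean(x > t)),
      theoretical = (b / (b + xx))^a)
```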

2.2.2 Burr distribution

The Burr distribution is heavy-tailed and skewed to the right; see [19]. It generalizes the Pareto-type distributions described in Subsection 2.2.1: it has three parameters, which give it more flexibility than the traditional Pareto distribution. The Burr distribution has an additional parameter γ, and when γ = 1, it reduces to a Pareto distribution. One of the well-known uses of the Burr distribution is modeling natural catastrophes, and as a result it is a popular model in the insurance industry for the pricing of premiums; see [2, 22]. The family of Burr distributions goes back to 1941, and it is sometimes referred to as the extended Pareto or beta prime distribution. All the PDFs of the loss distributions in the Burr family have monotonically decreasing, right-skewed tails; see [15]. The Burr distribution is well recognized in probability theory with many applications in agriculture, biology, etc.; see [20].

2.2.3 Gamma distribution

The gamma distribution is a light-tailed distribution which is skewed to the right. It is a two-parameter probability distribution that generalizes the exponential distribution, where x is the random variable, β is a rate (inverse scale) parameter, α is the shape parameter, and Γ(·) is the gamma function; see Table 2. The gamma distribution is said to be a generalization of the exponential distribution because for α = 1 it reduces to the exponential distribution (with rate λ = β in the parametrization of Table 2), which is usually used to model the time between events; see [2]. According to [7], the gamma distribution is one of the most important loss distributions in risk analysis because it forms the base for creating many of the popular distributions we have. The exponential distribution itself is described by the density f and distribution F given in Table 2 above, where λ represents the "failure" rate. The distribution is tractable and has unique mathematical properties, e.g. the failure distribution is described by a single parameter known as the mean time to failure, denoted by θ, and the failure rate is obtained from the mean life as λ = 1/θ. The exponential distribution can be used to model the time elapsed until the next event (e.g. an accident); see [23].

The exponential PDF is monotonically decreasing and has an exponentially decaying, light tail. This means that when it is applied in risk analysis, the event of high losses is given an almost zero probability; see [2]. Due to this property, [14] stated that the exponential distribution is not used very much for operational losses, but the constant decrease of the tail is useful for modeling lifetime data of items which have a constant failure rate. The exponential distribution has attractive and easily understandable mathematical properties; thus, it is mostly used in risk analysis as a building block for developing other models.

2.2.4 Weibull distribution

Another generalization of the exponential distribution is the Weibull distribution, which has two parameters (see Table 2) compared to the one parameter of the exponential distribution. The Weibull distribution is skewed to the right, and the additional parameter gives it more flexibility, allowing a heavier or lighter tail than the exponential distribution. That is, the Weibull distribution has a heavier tail than the exponential distribution if α < 1, coincides with the exponential distribution if α = 1, and has a lighter tail than the exponential distribution if α > 1; see [2]. Furthermore, [2] stated that in risk analysis the heavy-tailed Weibull distribution (α < 1) is a popular model, as it has been shown to be optimal for modeling asset returns as well as being used in reinsurance.
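The base R sketch below illustrates this tail ordering; the shape values 0.5 and 1.5 and the evaluation point x = 5 are arbitrary assumptions, and each Weibull is rescaled so that all three distributions being compared have mean 1.

```r
# Tail comparison of Weibull (shape alpha, rescaled to mean 1) versus the
# exponential distribution with mean 1; alpha < 1 gives the heavier tail.
weibull_tail <- function(alpha, x) {
  scale <- 1 / gamma(1 + 1 / alpha)                    # forces the mean to be 1
  pweibull(x, shape = alpha, scale = scale, lower.tail = FALSE)
}
x <- 5
c(weibull_shape_0.5 = weibull_tail(0.5, x),
  exponential       = exp(-x),
  weibull_shape_1.5 = weibull_tail(1.5, x))
```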

2.2.5 Log-normal distribution

A log-normal distribution is a moderately heavy-tailed distribution that is skewed to the right. The distribution is obtained by assuming that the natural logarithm of the data follows a normal distribution. The distribution is right-tailed and takes on only positive x values. The log-normal distribution, like the normal distribution, has parameters μ and σ (see Table 2). The distribution is useful for modeling claim sizes, and its thick tail and right skewness make it fit many situations. The log-normal can also resemble the normal distribution if σ is very small, and this property is not always desirable for analyzing risk; see [16].

2.3 Nonparametric loss distribution

In the nonparametric approach (e.g. the empirical distribution function), all the available data on a certain risk type are used directly. In other words, we do not have to estimate any parameters, as all the data are available (which is hardly ever the case); see [2, 16]. According to [7], the CDF of the empirical distribution, F_n(x), is given by:

F_n(x) = (1/n) #{ i : x_i ≤ x },    (1)

where #{ i : x_i ≤ x } denotes the number of observations less than or equal to x, and n is the total number of observations in the sample.
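In R, Eq. (1) corresponds to the base ecdf function; the short sketch below uses a small hypothetical vector of loss amounts purely for illustration.

```r
# Minimal sketch of the empirical CDF of Eq. (1); 'losses' is hypothetical data.
losses <- c(1200, 450, 9800, 30000, 780, 15500, 2300, 660, 41000, 5200)

Fn <- ecdf(losses)        # step function F_n(x) = (1/n) #{ i : x_i <= x }
Fn(10000)                 # proportion of losses not exceeding 10 000
quantile(losses, 0.95)    # empirical 95th percentile (a crude VaR estimate)
```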

Some of the advantages of nonparametric loss distributions are as follows:

  • It does not assume any underlying distribution, thus letting the data speak for itself.

  • It is simple and easy to understand and does not require any complicated sampling theory.

  • It might be the only alternative for small sample sizes.

Below are some of the disadvantages:

  • It is less efficient and may provide inaccurate results, especially when the underlying distribution is in fact known.

2.4 Risk quantification

According to [18, 24], the LDA is widely used to quantify operational risk; moreover, both [18, 24] showed that when quantifying operational risk, the severity distribution of a loss occurrence and the frequency distribution of such occurrences are estimated first for a certain risk type or business line and then for the institution as a whole. The process of deriving these probability distributions is done in three steps: firstly, the loss severity distribution is derived; secondly, the loss frequency distribution is derived; and lastly, the aggregate loss distribution is found by compounding the severity and frequency distributions.
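These three steps are often carried out by Monte Carlo simulation. The sketch below is a minimal illustration, assuming a Poisson frequency and a log-normal severity with arbitrary parameter values; it is not the calibration used later for the Taxi claims data.

```r
# Minimal LDA sketch: Poisson frequency, log-normal severity, compounded by
# simulation; all parameter values below are illustrative assumptions.
set.seed(1)
n_years <- 10000      # number of simulated years
lambda  <- 25         # assumed mean number of losses per year
mu      <- 8.5        # assumed log-normal severity parameters
sigma   <- 1.3

annual_loss <- replicate(n_years, {
  n <- rpois(1, lambda)                          # step 1: loss frequency
  sum(rlnorm(n, meanlog = mu, sdlog = sigma))    # steps 2-3: severities, compound
})

quantile(annual_loss, c(0.95, 0.999))            # quantiles of the aggregate loss
```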

The Value-at-Risk (VaR), which combines expected and unexpected losses, is used when approximating the PDFs, and [18, 24] stated that the Capital-at-Risk (CaR) given in Eq. (2) is simply the VaR computed for a certain risk type (business line) cell and a certain occurrence (event) type:

CaR_ij(α) = EL_ij + UL_ij(α),    (2)

Note that in Eq. (2), we use the indices i and j to denote a given business line and a given event type, EL_ij is the expected loss, and UL_ij(α) is the unexpected loss at significance level α.

Another method of quantifying risk is the internal measurement approach (IMA). According to [18], when using IMA, the business type and the event type risk are both quantified using

CaR_ij = EL_ij × γ_ij × RPI_ij,    (3)

where γ_ij is the scaling factor and RPI_ij is the risk profile index.

The LDA is of great importance when computing regulatory capital, but as noted by [18], even though the LDA is such a great tool, it also has its downside. This is due to a lack of data: even if a bank keeps large amounts of loss data, it may still be unrepresentative of potential extreme losses. Three popular approaches related to the LDA are extreme value theory (EVT), VaR, and the IMA, which are briefly discussed in Section 3.

Risks need to be measurable so that they can be evaluated and examined. It is ideal to have a high-quality historical data that can be subjected to in-depth statistical analysis. Numerous quantification models tend to lack high-quality statistical data. The types of risk determine which quantification technique to use. In finance, the main methods to quantify risk are as outlined below; see [2, 3]:

  • Dynamic financial analysis

This simulates the enterprise’s overall risks as well as their interactions. Typically, forecast balance sheets and projected income statements are produced as outputs using cashflows.

  • Financial Conditions Reports (FCR)

The Financial Conditions Report (FCR) displays both the current state of solvency and potential future developments. The volume and profitability of new business as well as any special characteristics it might have would typically be projected.

  • Quantitative methods

Quantitative methods are employed for risks in insurance and underwriting, markets, and economies, such as interest rate, basis risk, and market fluctuations. Time series and scenario analysis might be included, as well as the fitting of statistical models and subsequent calculation of risk metrics like VaR.

  • Credit risk models

Instead of measuring the risk in a credit portfolio, these models assess the credit risk of a single entity (business or person). These risks may be quantified quantitatively as well as subjectively, and counterparty risk is one of them. A credit risk model's job is to take the state of the overall economy and the circumstances surrounding the company under consideration as inputs and provide a credit spread as an output. In this context, structural and reduced-form models are the two main groups of credit risk models. Based on the value of a company's assets and obligations, structural models are used to assess the likelihood that a default will occur.

  • Asset Liability Modeling (ALM)

This approach, which is common in the insurance industry and primarily measures liquidity and capital requirements, might be used by various types of financial companies.

  • Scenario analysis

Operational hazards and other risks that are challenging to measure, such as legal risk, regulatory risk, agency risk, moral hazard, strategic risk, political risk, and reputational risk, are often covered under this approach.

  • Sensitivity testing

Each parameter is changed separately to measure how much the model outputs fluctuate, i.e. how sensitive they are to the different variables.

2.5 Other risks

Another risk that is quantified using LDA is credit risk, which is the estimation of expected and unexpected loss from credit defaults; see [10]. Note that [21] used the inverse Gaussian distribution to quantify credit risk and used Copula functions and Laplace transformation to run an algorithm that quantifies the corresponding probabilities. In this chapter, our focus is not on credit risk. Hence, a reader who might be interested in how the probability distribution of defaults is quantified can go through the articles by [10, 18].


3. Methodology

3.1 Introduction

In the previous section, we outlined numerous articles and textbooks that discussed various aspects of operational risk using loss distributions. Those articles and textbooks formed a literature review that helped us figure out how to quantify operational risk using loss distributions, which is the main objective of our research work. Firstly though, we provide a more detailed description of EVT, VaR, and their corresponding properties.

Under the EVT approach, [2] explained that the main focus is the analysis of the tail area of the distribution, together with appropriate methods for modeling extreme losses and their impact in insurance, finance, and quantitative risk management. We fit classical distributions to the data using the maximum likelihood criterion, starting from light-tailed distributions (e.g. the Weibull distribution) and moving to medium-tailed distributions (e.g. the log-normal). Under this approach, the Kolmogorov-Smirnov (KS) test and the Anderson-Darling (AD) test are adapted to measure the distance between the empirical and theoretical distribution functions only in the tail area, after deciding on the desired quantile. Readers are referred to [25] for the KS test and to [26] for the AD test. Note that mean-excess plots are used to assess the validity of modeling the tails, while the Hill method is used to obtain rough estimates of the shape parameter of a distribution; see [2]. In addition, [2] stated that the KS and AD tests can be used to examine the goodness-of-fit of the models that we want to fit to the data. These tests can be used to determine which loss distribution best fits our operational loss data; they use different measures of discrepancy between the fitted continuous distributions and the empirical distribution. The KS test is best at measuring the discrepancy around the median, while the AD test is good at measuring the discrepancy in the tails.
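For readers who want to try these diagnostics, the base R sketch below computes a mean-excess curve and a simple Hill estimate of the tail index on simulated placeholder data; the simulated sample and the choice k = 100 are assumptions for illustration only.

```r
# Sketch of two EVT diagnostics: the mean-excess function and the Hill estimator.
set.seed(1)
x <- rlnorm(1000, meanlog = 8.5, sdlog = 1.3)   # placeholder heavy-tailed losses

# Mean-excess function e(u) = E[X - u | X > u] over a grid of thresholds u
u  <- quantile(x, seq(0.50, 0.98, by = 0.02))
me <- sapply(u, function(t) mean(x[x > t] - t))
plot(u, me, type = "b", xlab = "threshold u", ylab = "mean excess e(u)")

# Hill estimator of the tail index based on the k largest observations
hill <- function(x, k) {
  xs <- sort(x, decreasing = TRUE)
  1 / (mean(log(xs[1:k])) - log(xs[k + 1]))
}
hill(x, k = 100)    # rough tail-index estimate; sensitive to the choice of k
```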

According to [2], VaR is the largest loss an investment portfolio might sustain over a specific length of time. The time frame can be a single day, a month, a quarter, or even an entire year. According to [2], it is the (1 − α)th percentile of the loss distribution over the desired time frame, where (1 − α) is the level of confidence, which practitioners typically set at 99.99%. Also note that [27] defined VaR as a number that indicates how much a financial institution can lose with a given probability over a specific time horizon, and that VaR measurements can be used in risk management, in assessing the performance of risk takers, and for regulatory requirements.

The method of moments is a different analytical method for quantifying risk. In this method, the mean and variance of input distributions defined at the task level are utilized to calculate the moments of the probability distribution corresponding to the task completion date. The major moments of work breakdown structure simulations may be determined using this method almost instantly and precisely, and it is still utilized in the cost risk analysis community; see [28].

Now, in this methodology section, we use a dataset to illustrate to readers how to quantify real-life operational risk using loss distributions. In addition, we compute descriptive statistics such as the mean, skewness, and kurtosis to obtain a general idea of how the considered dataset is distributed. For instance, Tables 3 and 4 below summarize how skewness and kurtosis indicate the way a dataset may be distributed. As indicated in [2], there are important statistical tests to consider when assessing the goodness-of-fit of a dataset, i.e. the KS and AD tests. More importantly, maximum likelihood estimation (MLE) or the method of moments shall be used to estimate the model parameters.

Skewness | Data shape
Zero | Symmetric
Negative | Left-tailed
Positive | Right-tailed

Table 3.

Value of skewness implications.

Kurtosis | Tail heaviness
Zero | Equal to normal curve
Negative | Lighter-tailed
Positive | Heavy-tailed

Table 4.

Values of kurtosis implications.

The pivotal aim of our study is to determine the loss distribution that best fits the considered dataset. To do so, we compare the Pareto, gamma, Weibull, log-normal, Burr, and exponential distributions and investigate, using specific metrics, which distribution best fits our data. We perform these tasks with the help of the R software (the dataset used and the R codes can be requested from the authors) using packages such as moments and fitdistrplus. Having determined which distribution best fits our data, we shall use the best-fitting loss distribution to calculate the probability of loss for that specific dataset. It may, of course, happen that the data do not fit any distribution well; in such an instance, we would conclude that using a nonparametric approach would be of greater benefit.
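A hedged sketch of this workflow is given below, using the moments and fitdistrplus packages; the claims vector is simulated placeholder data (the actual Taxi claims data are available from the authors on request), and the rescaling step is simply a practical convenience for the optimizer.

```r
# Sketch of the fitting workflow; 'claims' is simulated placeholder data.
library(moments)       # skewness(), kurtosis()
library(fitdistrplus)  # fitdist(), gofstat()

set.seed(1)
claims <- rlnorm(500, meanlog = 8.5, sdlog = 1.3)  # placeholder right-skewed data
claims <- claims / 1000   # rescaling often helps the gamma/Weibull fits converge

skewness(claims); kurtosis(claims)    # shape diagnostics (compare Tables 3 and 4)

fit_ln <- fitdist(claims, "lnorm")    # maximum likelihood fits
fit_ga <- fitdist(claims, "gamma")
fit_we <- fitdist(claims, "weibull")

gofstat(list(fit_ln, fit_ga, fit_we),
        fitnames = c("log-normal", "gamma", "Weibull"))  # KS, AD, AIC, etc.
```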

3.2 Goodness-of-fit test

Goodness-of-fit tests are useful to determine the validity of a theoretical model. There are different types of tests that can be used to perform the goodness-of-fit test, e.g. Kuiper, Cramer von Mises, and Pearson’s chi-square test, but in this research work, we mainly focus on the KS and AD tests.

3.2.1 Kolmogorov-Smirnov test

This test captures the deviation around the median of the data. The KS test is computed using the maximum vertical distance between F_n(x) and F(x), where F_n(x) is the empirical CDF of the observed data and F(x) is the CDF of the fitted (hypothesized) distribution.

According to [2], the KS test statistic is computed as follows: let D⁺ be the largest difference between F_n(x) and F(x), and let D⁻ be the largest difference between F(x) and F_n(x). Then, mathematically,

D⁺ = sup_x [F_n(x) − F(x)],    (4)
D⁻ = sup_x [F(x) − F_n(x)].    (5)

Thus, the KS statistic is calculated as:

KS = √n · max(D⁺, D⁻),    (6)

which can be written as,

KS = √n · max( sup_j [ j/n − z_j ], sup_j [ z_j − (j − 1)/n ] ),    (7)

where n is the number of observations, z_j = F(x_(j)) for the ordered observations x_(1) ≤ x_(2) ≤ … ≤ x_(n), and j = 1, 2, …, n.

For hypothesis testing, the null hypothesis is that the dataset used here for illustration purposes (the "Taxi claims" data) follows the specified distribution, and the alternative hypothesis is that the "Taxi claims" data do not follow the specified distribution; a significance level of 5% is used throughout.
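The sketch below computes the KS statistic of Eqs. (4)-(7) directly for a log-normal fit; the data and the quick parameter estimates are placeholders, and base R's ks.test is shown only as a cross-check of the unscaled statistic max(D⁺, D⁻).

```r
# KS statistic of Eqs. (4)-(7) for a fitted log-normal; placeholder data.
set.seed(1)
claims <- rlnorm(500, meanlog = 8.5, sdlog = 1.3)
mu_hat <- mean(log(claims)); sd_hat <- sd(log(claims))   # rough estimates

n <- length(claims)
z <- plnorm(sort(claims), meanlog = mu_hat, sdlog = sd_hat)  # z_j = F(x_(j))
Dplus  <- max((1:n) / n - z)          # Eq. (4)
Dminus <- max(z - (0:(n - 1)) / n)    # Eq. (5)
sqrt(n) * max(Dplus, Dminus)          # Eq. (6)

ks.test(claims, "plnorm", meanlog = mu_hat, sdlog = sd_hat)  # unscaled D statistic
```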

3.2.2 Anderson-Darling test

This test is best suited for computing discrepancies around the tails. The test is mostly used for heavy-tailed data, and the test statistic of the AD test is given by:

AD = √n · sup_x [ |F_n(x) − F(x)| / √( F(x)(1 − F(x)) ) ].    (8)

The computing formula is given by:

AD² = −n − (1/n) Σ_{j=1}^{n} (2j − 1) log z_j − (1/n) Σ_{j=1}^{n} (2(n − j) + 1) log(1 − z_j).    (9)
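The computing formula in Eq. (9) is easy to implement directly; the sketch below does so for a log-normal fit on the same kind of placeholder data used in the KS sketch, with rough parameter estimates rather than the Taxi claims values.

```r
# Sketch: squared AD statistic of Eq. (9), applied to a fitted log-normal.
set.seed(1)
claims <- rlnorm(500, meanlog = 8.5, sdlog = 1.3)        # placeholder data
mu_hat <- mean(log(claims)); sd_hat <- sd(log(claims))   # rough estimates

ad_squared <- function(z) {      # z: fitted CDF evaluated at the data points
  n <- length(z); z <- sort(z)
  -n - mean((2 * (1:n) - 1) * log(z)) - mean((2 * (n - (1:n)) + 1) * log(1 - z))
}

ad_squared(plnorm(claims, meanlog = mu_hat, sdlog = sd_hat))  # large => poor fit
```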

3.3 Sensitivity analysis

In this section, we perform a sensitivity analysis of the distributions that will be fitted to our data. The purpose is to show the effect of the different parameters on the tail and the peak of each probability distribution. When testing the effect of a parameter, we fix all other parameters and vary only the parameter of interest.

In Figure 1, we varied λ over 0.5, 1, and 1.5. We can clearly see that at 0.5 we have a thicker tail, while at 1.5 we observe a thin tail. Thus, as λ increases, the exponential distribution has a thinner tail. In Figure 2a, β is fixed at 2 with α = 2, 4, 6, and we can clearly see that when α = 2 the resulting gamma distribution has a thin tail; however, for large α, the distribution has a thicker tail. In Figure 2b, α is fixed and β is varied. It is observed that the gamma distribution has a thin tail when β is small and a thick tail for large β.

Figure 1.

Exponential distribution.

Figure 2.

Gamma distribution.
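Plots in the style of Figures 1 and 2 can be generated with a few lines of base R; the sketch below overlays the densities for the parameter values quoted above, treating β as the rate parameter as in Table 2 (an assumption, since the figures themselves do not state the parametrization).

```r
# Sketch of sensitivity plots in the style of Figures 1 and 2 (base R).
x <- seq(0.01, 10, length.out = 400)

# Exponential density for lambda = 0.5, 1, 1.5 (Figure 1 style)
plot(x, dexp(x, rate = 0.5), type = "l", ylim = c(0, 1.5),
     ylab = "f(x)", main = "Exponential, lambda = 0.5, 1, 1.5")
lines(x, dexp(x, rate = 1),   lty = 2)
lines(x, dexp(x, rate = 1.5), lty = 3)

# Gamma density with rate beta = 2 and shape alpha = 2, 4, 6 (Figure 2a style)
plot(x, dgamma(x, shape = 2, rate = 2), type = "l",
     ylab = "f(x)", main = "Gamma, beta = 2, alpha = 2, 4, 6")
lines(x, dgamma(x, shape = 4, rate = 2), lty = 2)
lines(x, dgamma(x, shape = 6, rate = 2), lty = 3)
```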

In Figure 3a, with β fixed, for small α the Weibull distribution has a thick tail and a thin tail for large values of α. In Figure 3b, given that α is fixed, as β increases we observe thicker tails; however, for small β we observe thin tails. For β fixed, in Figure 4a, a small α yields a thicker tail; however, a larger one yields a thin tail. Next, in Figure 4b, with α fixed, a small β yields a thin tail while a large one yields a thicker tail.

Figure 3.

Weibull distributions.

Figure 4.

Pareto distributions.

In Figure 5a, with α fixed at 0.5 and γ at 10, β is varied over 10, 20, and 30; it is observed that when β is 30 there is a thicker tail and when β is 10 there is a thin tail. That is, as the value of β is increased, the tails become thicker, and the opposite also holds. In Figure 5b, α is fixed at 0.5 and β at 50, and γ is varied over 5, 10, and 15; it is observed that when γ is 5 there is a thicker tail and when γ is 15 there is a thin tail. As we increase the value of γ, the tails become thinner, and the opposite also holds. In Figure 5c, γ is fixed at 10 and β at 25, and α is varied over 0.5, 1, and 1.5; when α is 0.5 there is a thicker tail and when α is 1.5 there is a thin tail. As we increase the value of α, the tails become thinner, and the opposite also holds.

Figure 5.

Burr distributions.

In Figure 6a, we fixed the mean at 1 and varied the standard deviation (stdev) over 0.5, 1, and 1.5; it is observed that when the stdev is 1.5 there is a thicker tail and when the stdev is 0.5 there is a thin tail. In Figure 6b, we fixed the stdev at 1 and varied the mean over 0.5, 1, and 1.5. We can clearly see that when the mean is 1.5 there is a thicker tail and when the mean is 0.5 there is a thin tail.

Figure 6.

Log-normal distributions.

3.4 Analysis of Taxi claims data

Again, the Taxi claims data and R codes used to analyze the data can be obtained from the authors on request. Table 5 provides the descriptive summary of the Taxi claims data.

Mean | 13232.41
Standard deviation | 28415.63
Median | 4500
Skewness | 6.474064
Kurtosis | 63.63799

Table 5.

Descriptive statistics of the Taxi claims data.

Both the calculated skewness and kurtosis are positive; thus, based on Tables 3 and 4, we can conclude that the dataset is skewed to the right and heavy-tailed.

From Figure 7a, it is observed that the Taxi claims data have many extreme observations in the right tail; Figure 7b, which excludes the extreme values, shows the lower quantiles of the boxplot more clearly. Next, Figure 8 provides the corresponding PDF via a histogram (see Figure 8a) and the CDF (see Figure 8b) of the Taxi claims data.

Figure 7.

Boxplot of Taxi claims data.

Figure 8.

Histogram and CDF of taxi claims data.

Next, we fit the gamma, Weibull, log-normal, Burr, and Pareto distributions to the Taxi claims data. The parameter estimates and goodness-of-fit test results are provided in Table 6, and the corresponding QQ plots are shown in Figure 9.

Figure 9.

QQ plots of the Taxi claims data fitted for different loss distribution.

From Figure 9, it is observed that the log-normal is a better fit than the other distributions considered (see also Table 6).

Distribution | Parameters (MLE) | AD test value | KS test value
Pareto | α = 1.7596, β = 10600.4071 | 338.3246 | 0.06150089
Gamma | α = 0.64547767, β = 0.00004879 | 1727.9785 | 0.1356
Weibull | α = 0.7230, β = 10097.898 | 1040.7859 | 0.0886
Log-normal | μ = 8.5437, σ = 1.3211 | 115.3129 | 0.0410

Table 6.

MLE parameters and goodness-of-fit values.

It is observed that the log-normal distribution has the lowest KS and AD values out of all the tested distributions, so we can assume that both the median of our data and the tail are best explained by the log-normal distribution. Overall, we can see that the heavy-tailed distributions, the log-normal and the Pareto, fit our data very well, while the fits of the thin-tailed gamma and Weibull distributions are poor. We can conclude that our data are best modeled by the log-normal distribution.

Figure 10 shows the Q-Q plot for the log-normal distribution. Although the log-normal distribution was the best distribution for modeling the Taxi claims data according to the AD test and the KS test, it was not good at modeling the extreme losses or extreme values (see the tails of Figure 10).

Figure 10.

Log-normal QQ-plots.

3.5 VaR sensitivity

The maximum amount that can be lost during a specific holding period with a given level of confidence is known as VaR (i.e. value-at-risk). Four methods will be used to determine VaR:

The empirical approach—it entails sorting the data from lowest to highest and taking quantiles. Here, the focus is on the 90th, 95th, 97.5th, and 99th percentiles.

The parametric approach—it entails the use of the fitted model. Here, the log-normal was the best distribution for the Taxi claims data. Thus, to calculate the VaR, we used the "VaRes" package in R, supplied the fitted shape and scale parameters, and computed the 90th, 95th, 97.5th, and 99th percentiles.

The stochastic approach—it entails simulating from a log-normal distribution with shape and scale parameter found from Taxi claims data and computing the 90th, 95th, 97.5th, and 99th percentiles.

The generalized extreme value (GEV) distribution approach—it entails simulating from the GEV distribution and computing the 90th, 95th, 97.5th, and 99th percentiles.
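A sketch of the four computations is given below in base R. The data, the log-normal estimates, and especially the GEV parameters are placeholder assumptions for illustration only; the actual Taxi claims figures are those reported in Tables 6 and 7. The GEV quantile formula used is for a nonzero shape parameter.

```r
# Sketch of the four VaR approaches at levels 0.90-0.99 on placeholder data.
set.seed(1)
claims <- rlnorm(500, meanlog = 8.5, sdlog = 1.3)
mu_hat <- mean(log(claims)); sd_hat <- sd(log(claims))
p <- c(0.90, 0.95, 0.975, 0.99)

var_empirical  <- quantile(claims, p)                          # (i)  empirical
var_parametric <- qlnorm(p, meanlog = mu_hat, sdlog = sd_hat)  # (ii) fitted log-normal
sim <- rlnorm(1e5, meanlog = mu_hat, sdlog = sd_hat)
var_stochastic <- quantile(sim, p)                             # (iii) simulation

# (iv) GEV quantile Q(p) = loc + scale * ((-log p)^(-shape) - 1) / shape;
# the loc/scale/shape values are purely hypothetical stand-ins for a fitted GEV.
qgev_manual <- function(p, loc, scale, shape)
  loc + scale * ((-log(p))^(-shape) - 1) / shape
var_gev <- qgev_manual(p, loc = 9000, scale = 8000, shape = 0.4)

rbind(var_empirical, var_parametric, var_stochastic, var_gev)
```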

Table 7 and Figure 11 give a summary of the different approaches to calculating VaR. Note that the column "VaR0.9" in Table 7 under the empirical approach can be interpreted as follows: "we can be 90% confident that the amount claimed will not exceed R30 616.67." In the case of VaR0.9, it is evident that the empirical VaR is close to that of the extreme value approach, and as the percentiles increase from VaR0.975 to VaR0.99, the GEV distribution increasingly overestimates the risk relative to the empirical estimate. Thus, for the Taxi claims data, one would be more inclined to use the empirical approach because it does not assume any underlying distribution, and the log-normal distribution seems poor at capturing the extreme tail of the data. The rest of the amounts can be interpreted in the same way for the corresponding percentage levels.

Approach | VaR0.9 | VaR0.95 | VaR0.975 | VaR0.99
Empirical approach | 30616.67 | 52515.09 | 83327.03 | 139690.1
Parametric approach (log-normal) | 27911.04 | 45105.2 | 68395.31 | 110980.1
Stochastic | 27981.36 | 45163.61 | 68196.93 | 109861.3
Extreme value | 30009.28 | 59051.74 | 114121.5 | 269268.2

Table 7.

VaR for the Taxi claims data.

Figure 11.

Graphical representation of VaR.

In this section, we aimed to show how to quantify operational risk using parametric loss distributions (i.e. the exponential, log-normal, gamma, Weibull, Pareto, and Burr distributions). As an application example for the chapter, we fitted these distributions to the Taxi claims data. Overall, we can conclude that, out of all the distributions fitted, the log-normal distribution seemed to be the best-fitting distribution. We also observed some of the drawbacks of using parametric distributions: they tend to fail to capture both the tail and the peak of the data very well, which suggests that the true underlying distribution could be different from the fitted one. This means that if a parametric distribution method is applied to quantify operational risk, it is better to fit different distributions to the tail and the body to get better estimates. This is evident in our data where, after a certain threshold, it is observed that the log-normal distribution underestimates the probability in the tail area, whereas the GEV distribution fits better.


4. Conclusion

We have discovered that there are many methods used in quantifying operational risk. Therefore, it proves handy for a risk analyst, or anyone working in the risk analysis department of any institution, to possess broad knowledge of multiple statistical concepts and methods to apply in any given situation, as each risk requires a different quantification approach. The Taxi claims data we have analyzed provide a good foundation for analyzing operational risk, but they do not represent all possible situations one might find in real life. Because taxi claims are limited by the replacement value of a taxi, our data do not contain very extreme values; therefore, high losses are bounded, and the possibility of ruin would most likely arise from the risk class of "high frequency and low magnitude". Using the Taxi claims data, we have provided a good example of how operational risk may be quantified and observed that the log-normal distribution is the better fit in this case. Nevertheless, it failed to model the extreme values of the Taxi claims data. Hence, the GEV distribution can be used to model those extreme values; however, in our research work, we did not dwell much on the GEV distribution. We used four approaches to calculate VaR in our analysis.

According to existing empirical evidence, the overall pattern of operational loss severity data is characterized by significant kurtosis, severe right-skewness, and a very heavy right tail caused by multiple outlying incidents. Fitting some of the common parametric loss distributions such as the Weibull, log-normal, Pareto, and gamma distributions is one way to calibrate operational losses. One disadvantage of utilizing these distributions is that they may not suit both the centre and the tails perfectly; mixture distributions may be explored in this scenario. EVT can be used to fit a GPD to extreme losses surpassing a high predetermined threshold. The parameters of the GPD, which are estimated under the EVT approach, are very sensitive to extreme data and to the choice of threshold, which is a drawback of this approach.

A disadvantage of using the standard loss distributions is that they often do not model datasets well in the presence of outliers and extreme values. Consequently, Chapters 7 and 8 of [2] discuss other possible replacements for the loss distributions, e.g. alpha-stable distributions, the GEV distribution, and the GPD. The latter two distributions are part of the EVT family of distributions.

Overall, it appears from the literature study done for this chapter that operational risk managers are focused on developing a model that accurately represents the likelihood of tail occurrences; producing a model that realistically accounts for the probability of losses reaching a large amount is essential, because the latter is central to estimating the VaR.

Since we mainly focused on parametric distributions, it might be interesting to use the nonparametric approach to quantify risk, or some mixture-of-distributions method. This means that some possible additions to this research work could be looking at more complex statistical distributions with better tail-capturing ability. We also looked at multiple ways of finding VaR and, from our findings, we noticed that the empirical approach was the better way of quantifying the Taxi claims data's risk; note, though, that the conclusion reached is data-dependent, which means that a different conclusion may be reached for a different dataset.


Acknowledgments

We would like to thank the University of the Free State for their continued support and assistance.

References

  1. Vaughan E, Vaughan T. Fundamentals of Risk and Insurance. New Jersey: John Wiley & Sons; 2003
  2. Chernobai A, Rachev S, Fabozzi F. Operational Risk: A Guide to Basel II Capital Requirements, Models, and Analysis. New Jersey: John Wiley and Sons, Inc; 2007
  3. Sweeting P. Financial Enterprise Risk Management. New York: Cambridge University Press; 2011
  4. Wilson T. Portfolio credit risk. Economic Policy Review. 1998;4(1):71-82
  5. BIS. Basel II: International Convergence of Capital Measurement and Capital Standards: Framework. BIS; 2006
  6. Byatt G. Why and how we should quantify risk. Principal Consultant, Risk Insight Consulting [Internet]. 2019. Available from: https://irp-cdn.multiscreensite.com/8bbcaf75/files/uploaded/Risk%20Insight%20-%20why%20and%20how%20we%20should%20quantify%20risk%20-%20full.pdf [Accessed: July 25, 2022]
  7. Bank of International Settlements. Working Paper on the Regulatory Treatment of Operational Risk [Internet]. 2001. Available from: www.BIS.org [Accessed: May 09, 2022]
  8. Allen L, Boudoukh J, Saunders A. Understanding Market, Credit, and Operational Risk: The Value-at-Risk Approach. Oxford: Blackwell Publishing; 2004
  9. Shih J, Samad-Khan A, Medapa P. Is the Size of an Operational Loss Related to Firm Size? [Internet]. 2000. Available from: http://www.stamfordrisk.com/docs/Is_the_Size_of_an_Operational_Loss_Related_to_Firm_Size_(Jan_00).pdf [Accessed: May 5, 2022]
  10. Maccario A, Sironi A, Zazzaea C. Credit risk models: An application to insurance pricing. SSRN Electronic Journal. 2003;384380
  11. Calderín-Ojeda E. A note on parameter estimation in the composite Weibull–Pareto distribution. Risks. 2018;6(1):11. DOI: 10.3390/risks6010011
  12. Rebello A, Martinez A, Goncalves A. Analytical solution of modified point kinetics equations for linear reactivity variation in subcritical nuclear reactors adopting an incomplete gamma function approximation. Natural Science Journal. 2012;4(1):919-923
  13. Ahmad Z, Mahmoudi E, Hamedani GG. A family of loss distributions with an application to the vehicle insurance loss data. Pakistan Journal of Operational Research. 2019;15(3):731-744
  14. Ahmad A. Bayesian Analysis and Reliability Estimation of Generalized Probability Distributions. Balrampur: AIJR Publisher; 2019
  15. Bolviken E, Haff I. One Family, Six Distributions: A Flexible Model for Insurance Claim Severity. Oslo, Norway: Department of Mathematics, University of Oslo; 2018. DOI: 10.48550/arXiv.1805.10854
  16. Burnecki K, Misiorek A, Weron R. Loss Distributions. Munich Personal RePEc Archive. Wrocław, Poland: Hugo Steinhaus Center, Wrocław University of Technology; 2010
  17. Cizek P, Hardle W, Weron R. Statistical Tools for Finance and Insurance. Berlin: Springer; 2005
  18. Frachot A, Georges P, Roncalli T. Loss distribution approach for operational risk. SSRN Electronic Journal. 2001;1032523:2-12
  19. Hakim A, Fithriani I, Novita M. Properties of Burr distribution and its application to heavy-tailed survival time data. Journal of Physics. 2021;1725(1):012016
  20. Jamal F, Chesneau C, Nasir M, Saboor A, Altun E, Khan M. On a modified Burr XII distribution having flexible hazard rate shapes. Mathematica Slovaca. 2019;70:1. DOI: 10.1515/ms-2017-0344
  21. Schonbucher P. Taken to the limit: Simple and not so simple loan loss distributions. SSRN Electronic Journal. 2002:23. DOI: 10.2139/ssrn.378640. Available from: https://ssrn.com/abstract=378640
  22. Supriatna A, Parmikanti K, Novita L, Sukono, Betty S, Talib A. Calculation of Net Annual Health Insurance Premium Using Burr Distribution. Paris: IEOM Society International; 2018
  23. Murphy K, Carter C, Brown S. The exponential distribution: The good, the bad and the ugly. A practical guide to its implementation. Annual Reliability and Maintainability Symposium, 2002 Proceedings. Seattle, WA, USA: IEEE; 2002. DOI: 10.1109/RAMS.2002.981701
  24. Valencia A, Jaramillo W. Quantifying operational risk using the loss distribution. In: Proceedings of the Seventh European Academic Research Conference on Global Business, Economics, Finance and Banking (EAR17 Swiss Conference). Zurich, Switzerland; 2017. ISBN: 978-1-943579-46-4. Paper ID: Z702
  25. Eadie W, Drijard D, James F, Roos M, Sadoulet B. Statistical Methods in Experimental Physics. Amsterdam: Elsevier Science Ltd; 1983
  26. Anderson T, Darling D. Asymptotic theory of certain goodness of fit criteria based on stochastic processes. Annals of Mathematical Statistics. 1952;23(1):193-212
  27. Manganelli S, Engle F. Value at risk models in finance. European Central Bank Working Paper Series. 2001;75(1):1-40
  28. Covert R. Using Method of Moments in Schedule Risk Analysis. MCR, LLC; 2011. DOI: 10.1002/9780470400531.eorms0591
