Open access peer-reviewed chapter

Investigating the Viability of Applying a Lower Bound Risk Metric for Altman’s z-Score

Written By

Hardo Holpus, Ahmad Alqatan and Muhammad Arslan

Submitted: 15 February 2021 Reviewed: 26 March 2021 Published: 17 June 2021

DOI: 10.5772/intechopen.97433

From the Edited Volume

Accounting and Finance Innovations

Edited by Nizar M. Alsharari


Abstract

The study aimed to build a risk metric for finding the lower boundary limits of Altman’s z-score bankruptcy model. The new metric incorporates the volatility of Altman’s variables and predicts how severely a firm is exposed to bankruptcy in adverse situations. The research examined whether the new risk metric is feasible and whether it provides satisfactory outcomes compared with Altman’s z-score values over the same period. The methods used to conduct the analysis were based on the Value at Risk methodology. The main tools used in constructing the model were Monte Carlo simulation, the Lehmer random number generator, the normal and Student t-distributions, matrices and the Cholesky decomposition. The sample firms were selected from the FTSE 250 index. The variables used in the analysis were all of Altman’s z-score variables, and the period under observation was 2001–2007. The selected risk horizon was the first quarter of 2008. The first results were promising and showed that the model works to the specified extent. The research demonstrated that Altman’s z-score alone does not provide a full and accurate overview. Therefore, the lower bound risk metric developed in this research produces valuable supplementary information for well-informed decision making. To verify the model, it must be back- and forward tested, neither of which was carried out in this research. Furthermore, the research elaborated on limitations and suggested further improvement options for the model.

Keywords

  • risk metric
  • Altman’s z-score
  • Value at Risk
  • bankruptcy prediction
  • Monte Carlo simulation
  • Lehmer RNG
  • Cholesky decomposition

1. Introduction

This study investigates the viability of applying a lower bound risk metric to Altman’s z-score variables in order to determine the lower limits for Altman’s z-score.

The lower bound risk metric is intended to help in assessing default risk during economic distress or in situations of extreme volatility and business calamity. Altman’s model takes the values of its variables from the past year’s financial statements. Based on these values, the result of Altman’s formula shows whether a company is in the distressed zone and whether bankruptcy is expected within the next two years. When a company’s financial results are very volatile, Altman’s formula may show a good result, yet the result and the health of the company may degrade very rapidly because of the volatility inherent in the company. The volatility may stem from several factors, including poor management or the cyclicality of the business. In such instances, Altman’s z-score alone is not the best option for depicting the riskiness of the business. It therefore needs another metric, one which considers the volatility in the variables and which can estimate the risk of a potentially sharp drop in Altman’s z-score value. The lack of a proper risk estimate in Altman’s z-score model prompted this study to examine methods to construct the lower bound risk metric.

The risk metric is extensively based on the Value at Risk (VaR) methodology. VaR is used to estimate the maximum loss of value over a certain period of time at a determined probability level. The VaR methodology was developed by J. P. Morgan during the 1980s and 1990s to measure the riskiness of its assets, as required by the Basel I set of international banking regulations [1]. Since then, it has become a central theory in risk management in banking, insurance and asset management. Stress testing is a complement to VaR and looks at the riskiest but still plausible events. No model predicting Altman’s z-score variables in distress situations was identified in the literature. Previous studies mainly suggest new models or other improvements to Altman’s formula. Altman’s z-score gives a default risk figure based on historical information. It does not try to predict extreme situations in which a firm’s position may deteriorate much faster than the equation is able to predict from historical information.

The aim of this study is to apply the VaR methodology to the variables in Altman’s z-score and to analyse whether it is a helpful risk metric for describing the depth of variance and adding a factor of predictability. To do that, Altman’s z-score results for the FTSE 250 companies during the crisis of 2008 are compared with the results produced by the new risk metric using the pre-crisis data for Altman’s formula. The assumption is that Altman’s variables in a modelled distress situation help to better understand the depth of the insolvency risk a company carries. The study uses the London Stock Exchange index FTSE 250 companies to carry out the empirical research. Panel data from Capital IQ is used in the Monte Carlo simulation to model the base distribution of Altman’s variables. The simulated values of the variables are used to form the new risk metric. It gives the lower bound values for Altman’s z-scores, which reflect the underlying volatility and the worst-case scenarios at defined confidence levels. To confirm its validity, the results from the risk metric and stress tests should be back- and forward tested. The analysis reveals the limitations of the study, but it also points out options for further improvement.


2. Literature review and research hypotheses

2.1 Bankruptcy prediction models

Bankruptcy prediction models can be classified into three general groups [2]: the statistical models [3, 4, 5, 6, 7, 8, 9], the Artificial Neural Network (ANN) models [10, 11, 12, 13] and the kernel based learning models [14, 15, 16].

The statistical methods have the longest application history and are still the most common today. There are many methods, models and variations of them. Some of the prevalent methods are simple ratio analysis [5], univariate analysis [4], multivariate analysis [3] and the logit method [7, 9]. Soon after Beaver’s univariate research [4], Altman [3] used multivariate discriminant analysis (MDA) and was able to obtain more accurate results and predict failure over a longer time horizon. His model received wide acceptance and, since then, the multivariate approach has been broadly used to develop bankruptcy models for different sectors, indexes, countries and so on. Even where artificial intelligence and machine learning methods achieve better results, the statistical models are fairly simple to understand and have continued to prove highly reliable.

2.2 Altman’s Z-score bankruptcy model

The aim of this research is not to use the best possible model for the lower bound risk metric calculations, but rather to adapt the metric to the most common and widely used bankruptcy model. On that basis, the most commonly used model is Altman’s z-score bankruptcy model [17].

He introduced the five financial ratios with the highest default prediction power: working capital to total assets (x1), retained earnings to total assets (x2), earnings before interest and taxes (EBIT) to total assets (x3), market value of equity to book value of total liabilities (x4), and sales to total assets (x5):

Z-score = 1.2x1 + 1.4x2 + 3.3x3 + 0.6x4 + 1.0x5 (E1)

Altman [3] divided the formula’s results into three categories. Firms that achieve a score above 2.99 are considered to be in the safe zone and far from default; results in the range 1.81–2.99 fall in a grey zone and need attention; results below 1.81 are in the distress zone, and such firms are at risk of becoming insolvent.
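To make the formula concrete, here is a minimal Python sketch (not from the chapter) that computes the z-score from the five ratios and assigns Altman’s three zones; the example inputs are hypothetical.

```python
def altman_z_score(x1, x2, x3, x4, x5):
    """Altman (1968) z-score from the five ratios:
    x1 = working capital / total assets
    x2 = retained earnings / total assets
    x3 = EBIT / total assets
    x4 = market value of equity / book value of total liabilities
    x5 = sales / total assets
    """
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def altman_zone(z):
    """Classify a z-score into Altman's three zones."""
    if z > 2.99:
        return "safe"
    elif z >= 1.81:
        return "grey"
    return "distress"

# Example with hypothetical ratios
z = altman_z_score(0.12, 0.25, 0.10, 1.5, 1.1)
print(round(z, 2), altman_zone(z))
```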

Altman found his model to be over 70% accurate in predicting bankruptcy two years before a default, and 95% accurate one year before a default [3]. In 6% of cases, the model predicted a default for a company that survived. The accuracy rate diminished rapidly beyond two years. Heine [18] investigated the accuracy of Altman’s model over a period of 31 years, from 1968 to 1999, and found that the model still worked fairly well. His findings showed over 80% accuracy in predicting defaults one year before the actual event. Regardless of these results, Hillegeist et al. [19] argue that book value based models are by design limited in predicting defaults, as annual reports are prepared on a going-concern basis. There have been numerous other comparisons which find contradictory as well as supportive evidence for using book and financial ratios, for instance, the works of Balcaen and Ooghe [20], Appiah et al. [21], Mousavi et al. [22], Charalambakis [23], and Agarwal and Taffler [24].

Commonly, insolvency models use annual data, but according to [25] the accuracy of book value models can be increased by using quarterly data. They did not find a significant difference between the quality of quarterly and annual reports. The multivariate discriminant analysis conducted in their research provided more accurate and timely results using unaudited quarterly reports instead of annual reports alone.

2.3 Value at risk

VaR has three main methods of calculation: the historical, the variance–covariance and the Monte Carlo (MC) method [26]. Each of the methods has its own advantages and drawbacks. The historical method is easy to apply to collected data and does not need a distributional assumption. The historical simulation assumes that future events can be described by past events and that the recent past represents the near future fairly accurately [1]. Such an assumption is better than no assumption, but the historical data may contain events that are not relevant for the future and should therefore be treated with care. The variance–covariance method is easy to compute and to use for managing portfolio risk. On the other hand, its biggest deficiency is the failure to capture fat tails in the distribution [27]. A normal distribution creates a bias towards underestimating the true VaR. The Monte Carlo (MC) method overcomes most of the mentioned deficiencies, but has some of its own. The MC method is flexible and can be used for shorter and longer time periods, and it is considered the most accurate of the three for longer time periods ([1], p. 270). Another advantage of MC is that it can be applied to non-normal distributions, which more accurately describe many practical applications [28]. The two main setbacks are the model risk and the time needed to compute MC simulations ([1], p. 270).
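As a hedged illustration of the three approaches, the sketch below computes a 95% VaR on a simulated return series by the historical, variance–covariance and Monte Carlo routes; the return series, parameters and seed are illustrative assumptions, not data from the chapter.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.02, 2500)      # illustrative daily returns
alpha = 0.05                                  # 95% confidence level

# Historical VaR: the empirical lower quantile of past returns
var_hist = -np.quantile(returns, alpha)

# Variance-covariance VaR: assumes normally distributed returns
var_param = -(returns.mean() + returns.std(ddof=1) * norm.ppf(alpha))

# Monte Carlo VaR: resimulate from a fitted distribution, then take the quantile
sims = rng.normal(returns.mean(), returns.std(ddof=1), 100_000)
var_mc = -np.quantile(sims, alpha)

print(f"historical {var_hist:.4f}  parametric {var_param:.4f}  Monte Carlo {var_mc:.4f}")
```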

Breuer [29] stresses the setbacks that arise from the assumptions used in calculating VaR. The first issue he raises is the assumption that market conditions are static throughout the future. Such an assumption is correct only if future market characteristics repeat themselves and are similar to present or historical values. It should also be noted that every risk prediction measure has to cope with this dilemma to some degree; it is not specific to VaR. The second major problem that Breuer mentions is the assumption in many VaR models that the data follow a multivariate normal distribution. This holds true only in some cases and in the majority of situations produces an imprecise outcome. He demonstrates such a case with a very illustrative example about the 1987 market crash. The crash involved a fall in stock prices of between 10 and 20 standard deviations. Considering that a seven standard deviation fall in a multivariate normal distribution would happen on average one day in three billion years, the assumption of normality in this particular case seems exceptionally poor. Nonetheless, VaR is flexible enough to allow distributions other than the normal to be used, although the prevailing practice is to use a normal distribution for its simplicity and ease of use. Apart from the previously mentioned setbacks, Krause [30] discusses the limitations in choosing a confidence level and a horizon. The longer the horizon, the bigger the variance and the less reliable the outcome. The selected confidence level also sets the quantile value beyond which VaR does not describe the losses. To understand the most extreme scale of losses, there are supplementary methods such as the expected shortfall (ES), maximum loss or other stress tests to provide this information [1]. However, the applicability of stress tests depends on the characteristics of the data, which may limit their suitability in certain analyses.

2.4 Stress testing

Nearly all models try to predict the bankruptcy based on the trend in the business operations, finances and other accessible information. Not so much has been done to investigate hypothetical stress or worst-case scenarios that happen to every business during its lifecycle.

Stress testing is essentially a complementary measure to VaR that captures the extreme losses in the tails. There is no way of knowing in advance the probability and depth of such extreme scenarios. An important requirement is the plausibility of the scenario: although such scenarios are rare, they do happen, and a scenario should be neither too shallow nor practically impossible in terms of severity. The main difference between VaR and historical stress testing methods is the time period. VaR uses relatively short time periods, usually from a day up to one year, and some VaR models weight recent observations more heavily. Historical stress testing, by contrast, uses periods from the distant past and includes market crashes and periods of extreme volatility. Breuer [29] examines four common types of stress testing methods: historical, expected shortfall, maximum loss and the Monte Carlo method. He finds that the choice of method depends on the aim and the data, but acknowledges that the Monte Carlo method performed relatively better than the other methods in many observed cases.

2.5 Predictive power and model verification

A model risk always remains, but it can be minimized by using back- and forward testing for model validation. Back-testing compares the actual outcomes with the predicted VaR estimate before the sample period [26]. Forward testing compares actual outcomes with an estimate after the sample period. If the losses exceed the VaR estimate more often than the set confidence level allows, the model is not accurate and needs modification [31]. Halilbegovic and Vehabovic [26] highlight that the values which exceed VaR should be equally distributed along the horizon and be independent. The first and most referenced back-test is Kupiec’s [32] “proportion of failures” coverage test. The banking industry also uses the “traffic lights” test published by the Basel Committee [33].
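For concreteness, a minimal sketch of Kupiec’s proportion-of-failures coverage test [32]; the example inputs (64 observations, 3 exceedances, 5% coverage) are illustrative and only echo the outlier counts reported later in the chapter.

```python
import math
from scipy.stats import chi2

def kupiec_pof(n, x, p):
    """Kupiec proportion-of-failures test (assumes 0 < x < n).
    n: observations, x: VaR exceedances, p: coverage level (e.g. 0.05).
    Returns the likelihood-ratio statistic and its chi-square(1) p-value."""
    phat = x / n
    log_l0 = (n - x) * math.log(1 - p) + x * math.log(p)        # null: exceedance prob = p
    log_l1 = (n - x) * math.log(1 - phat) + x * math.log(phat)  # observed exceedance rate
    lr = -2 * (log_l0 - log_l1)
    return lr, 1 - chi2.cdf(lr, df=1)

# e.g. 3 exceedances out of 64 observations against a 5% coverage level
print(kupiec_pof(64, 3, 0.05))
```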

2.6 Research hypotheses

The lower bound risk metric is expected to provide a lower limit for Altman’s z-score within the selected confidence level. It provides a precautionary gauge and includes a measure of downside volatility. The supplementary information from the risk metric gives a more informative decision-making tool and indicates the weaknesses of the subject firms on a more extensive scale. To test the hypotheses, Altman’s z-score values of FTSE 250 firms during the selected 2008 recession period are compared with the calculated pre-crisis risk metric values. If the risk metric is reliable, there should not be more outlier firms than the confidence limit allows. It is then determined whether the simulated pre-crisis risk metric values provide relevant and sufficient information, beyond the standard Altman’s z-score values, to be a practical risk measure. The research hypotheses are as follows:

H0- The lower bound risk metric does not provide the lower limit for Altman’s z-score within the selected confidence level.

H1- The lower bound risk metric provides the lower limit for Altman’s z-score within the selected confidence level.


3. Methodology

This study investigated the use of the lower bound risk metric on Altman’s z-score variables to determine the lower limit for the score. The core methodology used in this research is based on Value at Risk (VaR) and Altman’s z-score bankruptcy model. The VaR methodology has been widely adopted for measuring financial and market data and for reporting business or market risk. Using a verified and tested solution on the same type of data and in a similar way, although in a different setting, supports the validity of the approach. The different setting is the use of the VaR approach on Altman’s formula to calculate the lower bound risk metric.

The research was carried out on a sample of firms from the FTSE 250 index. The index consists of comparable midsize companies, whose values and operations tend to follow fluctuations in the economy closely. The firms were therefore good subjects for estimating the performance of a lower bound risk metric.

The data was retrieved from the S&P Capital IQ [34] database, primarily from financial statements. Annual financial statements did not give enough data points for a correct volatility evaluation. To compensate for the lack of data points, quarterly financial reports were used instead of annual reports. Consequently, this also provided the basis for a risk horizon of one quarter ([28], p. 216; [1], p. 311). The risk horizon gave the results against which the risk metric was compared. Quarterly reports provided more data to analyse, which became important considering the limited information available.

The study by Nallareddy et al. [35] notes that the Financial Conduct Authority (FCA) in the UK did not require quarterly reporting before 2007. They point out that it was mandatory between 2007 and 2014, after which it was overruled by the 2013 EU Transparency Directive, which required the FCA to stop mandating quarterly reporting from 2014. Relatively few FTSE 250 companies reported quarterly results voluntarily and on a consistent basis before the millennium. However, it became more common thereafter, regardless of legal requirements. From 2001 onwards there was enough data for the study, and therefore it was a suitable starting point for the data collection. The data collection period had to be relatively long to provide a sufficient amount of data for the time series analysis. It was also preferable to have the risk horizon fall in a period of high volatility to examine whether the risk metric model actually worked or needed to be modified before further testing. The tipping point of the financial crisis was in 2008, which made it a suitable period for the risk horizon. Hence, the period for collecting data to calculate the risk metric was 2001–2007 and the risk horizon was the first quarter of 2008. The firms listed during that time period are not the same as the firms listed in the FTSE 250 in July 2017, which were the firms used in this study. There have been changes in the index when comparing the companies listed during the seven-year period to July 2017. There are firms which left the stock market, went bankrupt or were simply excluded from the FTSE 250 list. Nonetheless, in order to allow future data analysis of the same firms in periods later than 2008, the data was collected from firms that were listed in the FTSE 250 from 1 January 2001 until 1 July 2017. This decision imposed some limitations, as it excluded firms that could potentially have offered valuable information. On the other hand, the selected time period and firms provided an opportunity for forward testing, and some limited back-testing, in further studies.

Seven years of quarterly data provided 28 data points per firm. Altman’s z-score was not used for finance firms, as this sector is known to be particularly leveraged, heavily regulated and laden with disguised risks. Therefore, all finance firms were excluded from the FTSE 250 list for data analysis purposes, which left 176 firms. The z-score was calculated for all 28 quarters for the 176 firms. Many firms did not report quarterly results, or did so only for short periods and inconsistently, which was not enough for calculating an acceptable standard deviation of the variables. Also, many of the observed firms were not listed during the examined period and thus did not have the data. Therefore, only firms that had enough data to calculate Altman’s z-score for a minimum of 20 quarters were included, and the rest were excluded. Twenty quarters was an arbitrary figure that provided just enough data points for a meaningful analysis. The requirement of a minimum of 20 quarters of data left 78 firms with enough quarterly data to conduct the research. Outliers were identified and revealed in a box plot diagram at the end phase of the data analysis. It illustrated well the serious deviations in the data, which could be investigated further to find the root causes of such irregularities.

The examined variables of Altman’s z-score were total assets, total liabilities, working capital and retained earnings from the statement of financial position; sales and EBIT from the income statement; and the market value of equity, which was obtained from market data. To calculate the bankruptcy z-score, the quarterly data derived from the income statement had to be annualized. To do that, each quarter’s sales and EBIT were combined with the results of the previous three quarters. Calculating the average value and standard deviation of each of Altman’s variables for each firm gave the basic data for deriving the risk metric using Monte Carlo simulation.

The first assumption of the Monte Carlo simulation was that the base distribution is a suitable proxy for the nature of the data [36]. Monte Carlo simulation was the most flexible of the previously reviewed methods, and it can assume any distribution as its base distribution [27]. A typical simplifying assumption is that a variable is independent and identically distributed (i.i.d.) and follows a normal distribution. Such assumptions are often valid, especially for big sample sizes [36]. However, testing the data for normality is a prerequisite for a more accurate model. Samples of all seven variables were examined using histograms and the Anderson–Darling normality test. Illustrations of both graphs depicting the distribution of standardized total assets can be found in Appendix A. Although graphical interpretation is subjective, it allows fast and simple conclusions to be drawn. The sample of histograms and Anderson–Darling tests suggested that the variables did not fulfil the requirements for normality and did not follow a normal distribution. The normality requirement was addressed by the Monte Carlo simulation itself, where the generated iterations created a big enough sample size to fulfil it. The histograms suggested that the distributions were not normal, but more likely followed fat-tailed, leptokurtic t-distributions. When using a multivariate Student t-distribution for the model, all the marginal distributions have to share the same degrees of freedom parameter. For the analysis in this research, both distributions were used: first, the process of obtaining the risk metric is described using the normal distribution, and then using the Student t-distribution, as there are only slight differences in implementing a t-distribution compared with a normal distribution.

It was assumed that the variables follow a stochastic process referred to as Geometric Brownian Motion ([1], p. 309). It is stochastic in the sense that changes in the variance are random and do not depend on past information [37]. The equation for the Monte Carlo simulation with Geometric Brownian Motion is

x = μ + σz (E2)

where x is the value of the variable, μ is the expected value of the variable, σ is the standard deviation, and z is the standard score expressed as

z = (x − μ) / σ (E3)

To make the variance change randomly, the Monte Carlo simulation method with a purpose-built random number generator was used. Random numbers were generated repeatedly, a large number of times, for each of the seven variables. This created a typical Monte Carlo simulation, in which there were a number of variables that could easily be tested in a simulation but for which the data or resources to test them experimentally may have been lacking. Simulations are simple to create using random numbers. There are two random number generating techniques. The non-deterministic technique produces a different output each time a random number is generated. The deterministic technique creates pseudo random numbers whose output sequence is fixed by a seed number. The latter makes analysis easier by enabling the simulations to be run repeatedly and allowing iterations to be modified and recreated.

For the simulation, a Lehmer random number generator (RNG) with a different seed for each variable was used to generate the random integers. The Lehmer RNG belongs to the group of linear congruential generators. It generates uniformly distributed pseudo random integers, which are then scaled to uniformly distributed numbers between 0 and 1. The equation for the Lehmer RNG is

k_i = (a · k_{i−1}) mod m (E4)

a is the constant multiplier (here 7^5 = 16,807).

m is the modulus (here the Mersenne prime 2^31 − 1).

k_i is the i-th pseudo random integer.

The Lehmer RNG generates pseudo random integers in the range (0, m − 1), where the modulus m is the Mersenne prime. Dividing by the Mersenne prime gives uniformly distributed probability values ε_i in the range (0, 1).

ε_i = k_i / m (E5)

ε_i is the pseudo random number in the range (0, 1).

k_i is the generated pseudo random integer in the range (0, m − 1).

m is the Mersenne prime modulus.

The generated probability values were fed into the inverse of the base distribution selected for the Monte Carlo simulation [27]. The inverse function generated the statistical standard score values. Hence, formula (E2) can also be written in the form

x = μ + σΦ⁻¹(α) (E6)

The Monte Carlo simulation was used to create 15,000 iterations of the described standard score values for each of the seven Altman’s z-score variables. The independent iterations of standard scores had to be adjusted for the correlation between Altman’s variables. The complexity of the multivariate correlation adjustments required the use of matrices [37]. Several matrix calculations were performed for each of Altman’s variables, and this was done for each firm under observation. The matrices computed were the correlation matrix, the variance matrix, the variance-correlation matrix, the covariance matrix, the Cholesky decomposition matrix and the result matrix [37]. The result matrix provided the product of the correlated standard scores and the standard deviation of each variable. The results showed how much each variable can deviate from its expected mean. Therefore, when the expected mean was added to each of the calculated results, it provided variable values that followed the chosen base distribution of the Monte Carlo simulation. Finally, the produced values were inserted into Altman’s formula to produce 15,000 Altman’s z-scores for each of the observed firms. The selected confidence level determined the lower quantile value, which became the lower bound risk metric for Altman’s z-score:

P(Z_h < Z_α) = α (E7)

where Z denotes the simulated iterations, Z_h is Altman’s z-score for the risk horizon h, and Z_α is the lower bound risk metric for Altman’s z-score: the α quantile of the simulated iterations for the risk horizon h.
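To summarize the workflow, the sketch below is a hedged reimplementation (not the authors’ code) of the simulation chain: it estimates means and the covariance matrix from a firm’s quarterly variables, draws 15,000 correlated samples via a Cholesky factor, converts them into Altman z-scores and returns the α quantile as the lower bound risk metric. NumPy’s default generator stands in for the Lehmer RNG described above, and the function name and input layout are assumptions.

```python
import numpy as np

def lower_bound_z(quarterly, alpha=0.01, n_sims=15_000, seed=231):
    """quarterly: array of shape (n_quarters, 7) holding, per quarter,
    total assets, total liabilities, working capital, retained earnings,
    annualized sales, annualized EBIT and market value of equity.
    Returns the alpha quantile of the simulated Altman z-scores."""
    rng = np.random.default_rng(seed)           # stand-in for the Lehmer RNG
    mu = quarterly.mean(axis=0)
    cov = np.cov(quarterly, rowvar=False)       # 7 x 7 variance-covariance matrix
    chol = np.linalg.cholesky(cov)              # requires a positive definite matrix
    z = rng.standard_normal((n_sims, 7))        # independent standard scores
    sims = mu + z @ chol.T                      # correlated simulated variable values

    ta, tl, wc, re, sales, ebit, mve = sims.T
    z_scores = (1.2 * wc / ta + 1.4 * re / ta + 3.3 * ebit / ta
                + 0.6 * mve / tl + 1.0 * sales / ta)
    return np.quantile(z_scores, alpha)
```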


4. Data and analysis

This section examines how the methods and formulas set out in the Methodology are applied to the retrieved data. Because the simulation generates a large amount of data, it cannot be presented in full. Instead, specific examples are presented to aid comprehension of the subject matter.

4.1 Random number generator

The Lehmer random number generator is used to generate pseudo random numbers ki with chosen seed numbers

k_i = (a · k_{i−1}) mod m, starting from the seed k_0 (E8)

For instance, the Lehmer RNG with parameters a = 7^5, m = 2^31 − 1, k_0 = 231 and 15,000 iterations generates the uniformly distributed random numbers plotted in Figure 1.

Figure 1.

Random numbers generated by Lehmer RNG.

Park and Miller, in 1988, suggested the specific parameter values a = 7^5 = 16,807 for the multiplier and m = 2^31 − 1 (a Mersenne prime) for the modulus. The random integer recurrence requires an initial value k_0, typically called a seed value. The seed value is used to produce the initial random number. To randomize the variance, each of the seven Altman’s variables needs its own seed to run its own 15,000 iterations. Thus, each variable is assigned a seed number. The seed number does not have to be random, but for a Lehmer RNG the seed needs to be coprime to the modulus m, that is, the only positive integer dividing both the seed and the modulus is 1 (with a prime modulus, any seed between 1 and m − 1 qualifies). The seven variables are assigned the seeds as follows: total assets 231, total liabilities 331, working capital 431, retained earnings 531, sales 631, EBIT 731, and market value of equity 831. When all the parameters are applied to the above mentioned random number generator, it generates pseudo random integers in the range (0, m − 1), where m is the Mersenne prime.
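A minimal sketch of the Park–Miller (“minimal standard”) Lehmer generator with the parameters and per-variable seeds described above; everything apart from the seed values is a standard textbook implementation rather than the authors’ code.

```python
M = 2**31 - 1            # modulus: the Mersenne prime 2,147,483,647
A = 7**5                 # multiplier: 16,807

def lehmer_uniforms(seed, n):
    """Generate n pseudo random numbers in (0, 1) with the Lehmer RNG."""
    k = seed
    out = []
    for _ in range(n):
        k = (A * k) % M          # k_i = (a * k_{i-1}) mod m
        out.append(k / M)        # eps_i = k_i / m
    return out

seeds = {"total_assets": 231, "total_liabilities": 331, "working_capital": 431,
         "retained_earnings": 531, "sales": 631, "EBIT": 731, "market_value": 831}
streams = {name: lehmer_uniforms(s, 15_000) for name, s in seeds.items()}
```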

4.2 Probability distributions

In order to derive a probability value, the generated random integers in the range (0, m − 1) need to be divided by the Mersenne prime modulus. The division gives pseudo random numbers ε_i in the range (0, 1).

ε_i = k_i / m (E9)

The formula creates numbers which follow a uniform distribution. For instance, the uniform distribution for total assets with seed 231 is depicted in Figure 2.

Figure 2.

Uniform probability distribution.

The generated pseudo random numbers are uniformly distributed and can be fed into the inverse standard normal cumulative distribution Φ⁻¹(α) to provide the standard score statistic z. Each outcome of the inverse cumulative distribution gives a standard score, the quantile corresponding to any given value of α; it shows how common samples are that are less than or equal to this value. As a reminder, the standard score statistic and Altman’s z-score are not the same and should not be confused. Figures 3 and 4 depict the cumulative standard normal distribution function and the standard normal probability density function respectively.

Figure 3.

Cumulative standard normal distribution function (CDF).

Figure 4.

Standard normal probability density function (PDF).

Setting z = Φ⁻¹(α), then

x = μ + σΦ⁻¹(α) (E10)

When α is replicated 15,000 times, x takes values along a normal distribution. Depending on the chosen significance level α, the α quantile is the value of x below which x is not expected to go with a confidence level of 1 − α. For instance, the Workspace Group PLC value of assets x can be described by the function x = 619 + 229Φ⁻¹(α). Replicating α randomly 15,000 times and taking the lower 5% quantile of the distribution gives an asset value of 259 M. Therefore, in theory, it means that with 95% confidence the value of assets is not expected to go below 259 M.
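As a hedged check of equation (E10), the α quantile can also be obtained directly from the inverse normal CDF. The mean and standard deviation below are the Workspace Group figures quoted in the text; the 259 M figure in the chapter is the empirical quantile of 15,000 simulated draws rather than this closed-form value.

```python
from scipy.stats import norm

mu, sigma, alpha = 619.0, 229.0, 0.05   # Workspace Group asset mean and std from the text

# Analytic alpha quantile of a normal distribution: x = mu + sigma * Phi^{-1}(alpha)
x_alpha = mu + sigma * norm.ppf(alpha)
print(f"5% quantile of assets: {x_alpha:.0f}")
```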

The function can also be written as

x = σΦ⁻¹(1 − α) − μ (E11)

in which case x represents a risk metric similar to Value at Risk, expressed in absolute terms.

It is possible to plug the generated probability values in directly to derive the values for each variable. But in doing so, the correlation between variables is totally ignored. The correlation has an impact on the deviations, characterizing how each deviation relates to the deviations in the other variables. Correlation is simple to calculate when using only two variables. Here there are seven variables, which requires a matrix based methodology. The process of producing the correlation between the variables can be divided into six separate calculation steps, which make it easier to understand.


5. Correlation matrices for simulated variables

The first step is to create a correlation matrix. Each individual correlation between variables is entered into the matrix table. The diagonal is always 1, because the correlation of a variable with itself is 1. Normally the area above the diagonal is left empty, but in this case it is filled for computational reasons. The covariance between each pair of variables and the correlation coefficient are calculated. An example of the resulting matrix for Workspace Group PLC is illustrated in Table 1.

Correlation matrix | Total assets | Total liabilities | Working capital | Retained earnings | Total revenue | EBIT | Market capital
Assets | 1.00 | 0.95 | −0.56 | 0.91 | 0.97 | 0.95 | 0.91
Liabilities | 0.95 | 1.00 | −0.32 | 0.81 | 0.94 | 0.96 | 0.94
Working | −0.56 | −0.32 | 1.00 | −0.72 | −0.48 | −0.37 | −0.33
Retained | 0.91 | 0.81 | −0.72 | 1.00 | 0.82 | 0.78 | 0.82
Total | 0.97 | 0.94 | −0.48 | 0.82 | 1.00 | 0.99 | 0.92
EBIT | 0.95 | 0.96 | −0.37 | 0.78 | 0.99 | 1.00 | 0.94
Market | 0.91 | 0.94 | −0.33 | 0.82 | 0.92 | 0.94 | 1.00

Table 1.

Correlation matrix.

σ_{x,y} = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / (n − 1) (E12)

ρ_{x,y} = σ_{x,y} / (σ_x σ_y) (E13)
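Equations (E12) and (E13) are the sample covariance and the Pearson correlation coefficient; a short sketch with hypothetical series shows the calculation and checks it against NumPy’s built-in.

```python
import numpy as np

x = np.array([228.0, 301.0, 264.0, 310.0, 290.0])   # hypothetical quarterly values
y = np.array([136.0, 180.0, 150.0, 195.0, 170.0])

cov_xy = ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - 1)   # E12: sample covariance
rho_xy = cov_xy / (x.std(ddof=1) * y.std(ddof=1))                 # E13: correlation

assert np.isclose(rho_xy, np.corrcoef(x, y)[0, 1])
```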

The second step is to create a variance matrix, as shown in Table 2. It is obtained simply by placing the standard deviation of each variable on the diagonal.

Variance matrix | Total assets | Total liabilities | Working capital | Retained earnings | Total revenue | EBIT | Market capital
Assets | 228.54 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Liabilities | 0.00 | 136.98 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Working | 0.00 | 0.00 | 11.96 | 0.00 | 0.00 | 0.00 | 0.00
Retained | 0.00 | 0.00 | 0.00 | 164.03 | 0.00 | 0.00 | 0.00
Total | 0.00 | 0.00 | 0.00 | 0.00 | 10.20 | 0.00 | 0.00
EBIT | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 6.11 | 0.00
Market | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 213.07

Table 2.

Variance matrix.

The third step is to produce a variance-correlation matrix. It is the product of the correlation and variance matrices and it can be calculated only when the matrix is positive definite. The equation to multiply the above matrices is illustrated and the calculated matrix is presented in Table 3.

Variance-correlation matrix | Total assets | Total liabilities | Working capital | Retained earnings | Total revenue | EBIT | Market capital
Assets | 228.54 | 217.58 | −128.92 | 207.96 | 221.92 | 216.53 | 207.35
Liabilities | 130.41 | 136.98 | −44.11 | 110.85 | 128.61 | 131.13 | 128.68
Working | −6.74 | −3.85 | 11.96 | −8.67 | −5.69 | −4.47 | −3.90
Retained | 149.26 | 132.74 | −118.88 | 164.03 | 134.78 | 127.42 | 134.29
Total | 9.90 | 9.57 | −4.86 | 8.38 | 10.20 | 10.06 | 9.34
EBIT | 5.79 | 5.85 | −2.29 | 4.75 | 6.03 | 6.11 | 5.74
Market | 193.32 | 200.16 | −69.42 | 174.44 | 195.22 | 200.05 | 213.07

Table 3.

Variance-correlation matrix.

Variance-correlation matrix = D · C, where D = diag(σ_1, σ_2, …, σ_d) is the diagonal matrix of standard deviations and C = (ρ_{i,j}), i, j = 1, …, d, is the correlation matrix, so that each element equals σ_i · ρ_{i,j} (E14)

The fourth step is to produce variance–covariance matrix. It is the product of the variance-correlation matrix and the variance matrix and the results are presented in Table 4.

Variance–covariance matrix | Total assets | Total liabilities | Working capital | Retained earnings | Total revenue | EBIT | Market capital
Assets | 52229.12 | 29804.56 | −1541.41 | 34110.54 | 2263.11 | 1323.70 | 44180.06
Liabilities | 29804.56 | 18764.59 | −527.43 | 18182.95 | 1311.60 | 801.65 | 27418.56
Working | −1541.41 | −527.43 | 142.96 | −1421.38 | −58.06 | −27.33 | −829.99
Retained | 34110.54 | 18182.95 | −1421.38 | 26904.86 | 1374.51 | 778.99 | 28613.43
Total | 2263.11 | 1311.60 | −58.06 | 1374.51 | 104.00 | 61.53 | 1990.79
EBIT | 1323.70 | 801.65 | −27.33 | 778.99 | 61.53 | 37.37 | 1222.98
Market | 44180.06 | 27418.56 | −829.99 | 28613.43 | 1990.79 | 1222.98 | 45398.90

Table 4.

Variance–covariance matrix.

The fifth step is to produce a Cholesky decomposition [38]. It is produced by taking a “matrix square root” of the variance–covariance matrix, i.e. the lower triangular matrix L such that L·Lᵀ equals the variance–covariance matrix, as illustrated in Table 5 [39].

Cholesky decomposition matrix | Total assets | Total liabilities | Working capital | Retained earnings | Total revenue | EBIT | Market capital
Assets | 228.54 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Liabilities | 130.41 | 41.91 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Working | −6.74 | 8.40 | 5.18 | 0.00 | 0.00 | 0.00 | 0.00
Retained | 149.26 | −30.59 | −30.41 | 52.60 | 0.00 | 0.00 | 0.00
Total | 9.90 | 0.48 | 0.91 | −1.16 | 1.88 | 0.00 | 0.00
EBIT | 5.79 | 1.10 | 0.47 | −0.71 | 1.27 | 0.52 | 0.00
Market | 193.32 | 52.66 | 6.05 | 29.56 | 42.64 | 17.61 | 47.07

Table 5.

Cholesky decomposition matrix.
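Steps two to five reduce to a short matrix computation: with D the diagonal matrix of standard deviations and C the correlation matrix, the variance-correlation matrix is D·C, the variance–covariance matrix is D·C·D, and the Cholesky factor is its lower-triangular “square root”. The two-variable sketch below uses the rounded assets and liabilities figures from Tables 1 and 2, so it only approximates the published tables.

```python
import numpy as np

stdev = np.array([228.54, 136.98])      # standard deviations of assets and liabilities
corr = np.array([[1.00, 0.95],
                 [0.95, 1.00]])         # rounded correlation between the two variables

D = np.diag(stdev)
var_corr = D @ corr                     # step 3: (D C)_ij = sigma_i * rho_ij
var_cov = var_corr @ D                  # step 4: (D C D)_ij = sigma_i * rho_ij * sigma_j
chol = np.linalg.cholesky(var_cov)      # step 5: lower-triangular Cholesky factor

# Correlated draws are then chol @ z, with z a vector of independent standard scores.
print(chol.round(2))
```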

The methodology of steps one to five is analogous to producing a correlated bivariate distribution from two samples of uncorrelated normal variables [40]. The bivariate case is more straightforward and makes the above matrix calculations easier to understand. For instance, the first sample of the uncorrelated variable is produced as described above by feeding a uniformly distributed random number into the inverse standard normal cumulative distribution Φ⁻¹(α) to arrive at the standard score statistic z_i. Applying the standard score in the equation for the first variable gives

x_1 = μ_1 + σ_1·z_1 (E15)

To make the second, independent sample correlate with the first one, the two variables need to be combined. The combining factor is the correlation ρ between the two standard scores, z_1 and z_2 [40]. The resulting equation for the second variable is

x_2 = μ_2 + σ_2·(z_1·ρ + z_2·√(1 − ρ²)) (E16)

Instead of two variables, the five-step calculation ending with the Cholesky decomposition produces the correlation structure between the seven variables.
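A brief numerical check (a sketch with an arbitrary correlation) that the bivariate combination in equation (E16) is exactly the two-variable Cholesky factorization:

```python
import numpy as np

rho = 0.95
z1, z2 = 0.7, -1.2                    # two independent standard scores

# Bivariate combination from (E16)
combined = z1 * rho + z2 * np.sqrt(1 - rho**2)

# Second row of the two-variable Cholesky factor of the correlation matrix
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
via_cholesky = (L @ np.array([z1, z2]))[1]

assert np.isclose(combined, via_cholesky)
```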

Finally, the sixth step produces the results for each variable by adding the mean of each variable to the product of the Cholesky matrix and the matrix of standard score iterations. The results for Workspace Group PLC are displayed in Table 6.

Iteration | Total assets | Total liabilities | Working capital | Retained earnings | Total revenue | EBIT | Market capital | Altman z-score
1 | −241.36 | −307.70 | −45.49 | −379.30 | 4.32 | −2.34 | −970.90 | −4.37
2 | 563.74 | 301.84 | −19.78 | 137.20 | 35.47 | 19.17 | 97.82 | 0.67
3 | 642.62 | 295.42 | −32.76 | 145.85 | 51.18 | 28.71 | 393.65 | 1.28
4 | 978.49 | 567.92 | −26.31 | 344.62 | 64.14 | 37.60 | 656.50 | 1.35
5 | 301.96 | 134.73 | −16.47 | −2.07 | 31.73 | 17.81 | 107.79 | 0.70
… | | | | | | | |
15,000 | 537.72 | 250.66 | −37.58 | 223.63 | 42.57 | 24.29 | 303.46 | 1.45

Table 6.

Result matrix.

Altman’s z-score value in the table is calculated simply by applying Altman’s bankruptcy formula to the calculated variables in each row. Given the 15,000 iterations, the 5% and 1% quantiles are the 749th and 149th smallest values respectively. In the Workspace Group PLC example, Altman’s z-score 5% and 1% quantile values are −0.23 and −6.86 respectively.
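The quantile values are simply lower order statistics of the simulated z-scores; a sketch, assuming z_scores stands in for the 15,000 simulated values of one firm:

```python
import numpy as np

# z_scores: stand-in for the 15,000 simulated Altman z-scores of one firm
z_scores = np.random.default_rng(0).normal(1.0, 2.0, 15_000)

q05 = np.quantile(z_scores, 0.05)   # roughly the 750th smallest value
q01 = np.quantile(z_scores, 0.01)   # roughly the 150th smallest value
print(round(q05, 2), round(q01, 2))
```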

As stated earlier, the variables follow a fat-tailed distribution that is similar to a leptokurtic t-distribution. Figure 5 illustrates the comparison of a leptokurtic and a standard normal distribution, both created using the Lehmer random number generator with seed 231. The entire process for a t-distribution is very similar to that described above for a normal distribution. The only difference is that the standard z-score is replaced with t, the statistic of the t-distribution. To arrive at standardized Student t values, the independent standard Student t simulations are multiplied by √((ν − 2)/ν), where ν is the degrees of freedom ([28], p. 228).

Figure 5.

Comparison of leptokurtic and standard normal distribution.

There are several ways to estimate the degrees of freedom. Suggested methods for the multivariate Student t-distribution are maximum likelihood estimation and the method of moments [41]. Both methods estimate the parameters of the statistical model. For this analysis, the degrees of freedom are estimated approximately: with seven variables and two known parameters, the mean and the variance, the difference of five is taken as the degrees of freedom.
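A sketch of the t-distribution variant under the stated assumptions: independent Student t draws with ν = 5 are rescaled to unit variance by √((ν − 2)/ν) before entering the correlation step.

```python
import numpy as np

rng = np.random.default_rng(231)
nu = 5                                       # approximate degrees of freedom
t_raw = rng.standard_t(nu, size=(15_000, 7))

# Standardize: Var(t_nu) = nu / (nu - 2), so multiply by sqrt((nu - 2) / nu)
t_std = t_raw * np.sqrt((nu - 2) / nu)

# t_std then replaces the standard normal scores in the correlation step
print(t_std.std(axis=0).round(2))            # each column now has variance close to 1
```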


6. Results

Before presenting the calculated risk metric values, it is useful to understand how many of the 78 firms were already in the distressed zone in terms of Altman’s z-score limits. The distressed zone for manufacturing firms was defined by Altman as a score lower than 1.81. This study was based on the standard formula of Altman’s z-score, which was used for all the companies regardless of their primary industry classification. The industry classifications of the 78 firms are displayed in Table 7.

Industry classification | Frequency
Consumer Discretionary (Primary) | 18
Consumer Staples (Primary) | 6
Energy (Primary) | 4
Healthcare (Primary) | 3
Industrials (Primary) | 25
Information Technology (Primary) | 5
Materials (Primary) | 4
Real Estate (Primary) | 11
Utilities (Primary) | 2
Total | 78

Table 7.

Industry classifications.

All energy, industrials and materials companies were considered to be manufacturing companies and the rest in the list were non-manufacturing. A similar approach has been applied by many researchers and organizations, including the research by Miller [17] and the reports by market intelligence provider S&P Capital IQ. Table 8 provides an overview of distressed firms in each four-quarter period.

Quarters in distressed zone | Manufacturing | Non-manufacturing | Total
0 quarters | 20 | 24 | 44
4 quarters | 6 | 2 | 8
8 quarters | 2 | 0 | 2
12 quarters | 2 | 3 | 5
16 quarters | 1 | 3 | 4
20 quarters | 1 | 4 | 5
24 quarters | 1 | 3 | 4
28 quarters | 0 | 6 | 6
Total | 33 | 45 | 78

Table 8.

Distressed firms.

Around 33% of the 78 companies experienced a distressed period of longer than one year according to Altman’s bankruptcy model. It is also known that the type two error, which classifies a firm as bankrupt when it does not go bankrupt, is around 15–20%. In that respect, the 33% figure is too high. The reason could be that the standard Altman’s z-score is not that accurate for non-manufacturing firms, of which 42% were in the distressed zone, whereas the share of distressed firms in the manufacturing industry was 19%. This could be investigated further.

Returning to the calculated Altman’s z-score limits: in Table 9, both the 95% and 99% confidence level limits from the normal and t-distribution are compared with the actual first quarter results of 2008. In addition, the two calculated Monte Carlo (MC) limit values are compared with the 95% and 99% confidence level limits calculated from the actual quarterly Altman’s z-scores from 2001–2007. Table 9 provides a good comparison of the effectiveness of the calculated MC limit values. Of the 78 firms, only 64 reported first quarter results in 2008. The outliers section on the left-hand side of Table 9 shows how many of the 64 firms crossed the applied confidence level limits. The failure rate section on the right-hand side of Table 9 presents the differences between the MC limit values and the limit values from the seven years of quarterly results. The right-hand section therefore considers all 78 firms, not only the 64.

Outliers (64 firms reporting Q1 2008):
Confidence level limit | Outliers
95% N-dist. | 3
95% N-dist. MC | 3
95% t-dist. MC | 3
99% N-dist. | 1
99% N-dist. MC | 0
99% t-dist. MC | 0

Failure rate (all 78 firms):
MC limit | Pass 95% N-dist. | Fail 95% N-dist. | Failure rate
95% N-dist. MC | 50 | 28 | 0.36
95% t-dist. MC | 49 | 29 | 0.37

MC limit | Pass 99% N-dist. | Fail 99% N-dist. | Failure rate
99% N-dist. MC | 56 | 22 | 0.28
99% t-dist. MC | 61 | 17 | 0.22

Table 9.

Outliers and failure rate.

Three companies are identified as outliers, which remains within the 95% confidence level, as 3 out of 64 is less than 5%. The results therefore show that both distributions give valid results for the first quarter of 2008. Moreover, the calculated MC limits perform at least as well as the 95% level limit figures from the actual quarterly results. Nevertheless, this does not mean that the model is valid. To test validity, the model needs to be back- and forward tested. Testing the validity is discussed afterwards.

Analyzing the number of firms ending up in the distressed zone using the confidence limits and the above-mentioned distributions gives a good estimate of how badly firms may do in terms of Altman’s z-score.

Table 10 shows the number of firms over the period 2001–2007 whose calculated limits for Altman’s z-score are less than 1.81. For the calculated 95% and 99% confidence limits, the number of manufacturing firms with a score of less than 1.81 ranges from 17 to 28. There is a great difference between the 95% and 99% limits for manufacturing firms: almost 60% more firms fall into the distressed zone at the 99% level than at the 95% confidence level. For non-manufacturing firms, the difference is smaller. Again, the discrepancy is likely to come from the standard Altman’s formula being used for non-manufacturing firms. The comparison also reveals that the results for the normal and t-distribution are almost the same at each confidence level. It is known from the properties of the two distributions that the difference between them is not big at the 95% confidence level, but as the confidence level increases, the difference is expected to widen considerably. For the period and firms investigated, the results do not confirm this, which may imply that the variables were nevertheless following a distribution fairly similar to a normal distribution. However, definite conclusions can only be drawn when the model has been back- and forward tested.

Confidence limit | Manufacturing (out of 33) | Non-manufacturing (out of 45) | Total (out of 78)
95% N-dist. MC | 17 | 13 | 30
95% Student t-dist. MC | 17 | 13 | 30
99% N-dist. MC | 26 | 16 | 42
99% Student t-dist. MC | 28 | 18 | 46

Table 10.

Confidence limits.

All the results that the previous tables are based on can be found in Appendix B. To make these results easily accessible in one place, the outcomes are represented in a box plot diagram, as seen in Figure 6. The x on the diagram represents the mean and the line within the box represents the median value, which is also the second quartile or 50th percentile. The lower end of the box is the first quartile and the higher end of the box is the third quartile. The T-shaped projections are the whisker lines that represent the highest local maximum and the lowest local minimum. All values outside the local maximum or minimum are outliers and are marked by dots. The outliers are determined using John W. Tukey’s convention, which defines outliers as data points that are further than 1.5 times the interquartile range from either end of the box [42]. The interquartile range is the length of the box, from quartile 1 to quartile 3. Tukey chose the length of the whiskers so that it would be neither too exclusive nor too inclusive and established that 1.5 times the interquartile range is a good compromise [42].

Figure 6.

Box plots of Altman’s z-score results.
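A short sketch of Tukey’s 1.5 × interquartile range convention used to flag the outlier dots in Figure 6; the data vector is hypothetical.

```python
import numpy as np

scores = np.array([3.2, 1.4, 2.8, -0.2, 1.9, 2.4, -6.9, 0.7, 3.0])  # hypothetical z-scores

q1, q3 = np.percentile(scores, [25, 75])
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = scores[(scores < lower_fence) | (scores > upper_fence)]
print(outliers)                              # values plotted as dots on the box plot
```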

The box plots reveal how the statistical 95% and 99% confidence limit results are considerably more constrained than the calculated MC results. This is especially true for the 99% confidence limit when compared with the MC 99% normal or 99% t-distribution box plots. The box plots also illustrate that the results at the 95% limit are closer to each other than at the 99% limit, where the differences in the results are wider. The two 99% Monte Carlo box plots of the normal and t-distribution show that the t-distribution confidence limits have a somewhat wider and deeper negative range. The box plots also reveal that the means are close to the first quartile boundary, except for the first quarter results of 2008, which have a mean higher than the median. This is explained by the fact that the top quartile has more variation than the lower quartile, resulting in values that lie further above the median than below it.

The diagram also reveals the risks posed by the first four box plots compared with the Q1 2008 box plot. If the following quarters were extremely bad, the results of recent quarters may not have had enough impact within the trailing seven years of data to estimate the confidence limits correctly. Therefore, it is risky to rely on any confidence limit other than the 99% limit of the Monte Carlo normal or t-distribution. The range of the two latter confidence limits is more appropriate considering the vulnerability of Altman’s z-score values to potential sharp falls. On the other hand, there are trade-offs, because the values of some z-scores tend to go extremely negative and may not appropriately express the economic limits of actual situations. Nevertheless, the deep negative values should be a good indication of how bad the situation can become for a specific firm.

The diagram displays how the lower bound risk metric provides a way to look beyond Altman’s z-score values and determine the real bankruptcy risk a firm may carry. The range of values is much wider and the values fluctuate considerably more when comparing the 99% confidence level results with the original first quarter results for 2008. An example of an individual firm brings more clarity, as it is difficult to translate the group result on a diagram to individual firms. The data for the following example was taken from the results in Appendix B. For instance, IMI plc had a first quarter 2008 Altman’s z-score of 3.21 and 95% and 99% t-distribution confidence limit scores of 2.33 and 2.05 respectively. HomeServe plc had a somewhat higher bankruptcy score for 2008, 3.95, but its 95% and 99% t-distribution confidence limit scores were far worse, 0.76 and −1.24 respectively. Although it may seem that HomeServe plc had the better bankruptcy score for 2008 and was better protected from insolvency, the lower bound risk metric indicated the opposite. This shows how the lower bound risk metric adds another necessary risk measure to Altman’s z-score for interpreting and considering the risk inherent in the results.

The diagram shows inconsistencies in the data of at least some of the outlier firms. Whether there were mistakes in the quarterly reports or the mistakes lay somewhere else, the data in these few cases do not appear trustworthy. In other instances, there appeared to be no apparent reason for the big deviations when inspecting the variables and data used. In such situations, it is important to look at the financial statements and other fundamental values of the outlier firms of interest to determine the source of the inconsistencies.


7. Discussion

This study did not identify any previous research that had specifically tried to expand Altman’s study by using confidence limits for Altman’s z-score values. Considering that the limits provide very valuable information for risk evaluation and prevention, this study deemed it necessary to fill this research gap. The study produced a lower bound risk metric for Altman’s z-score and identified it as a good gauge of the lower limit when using a 99% confidence level. It provided satisfactory results for the tested period and for the sample used, but the model needs more rigorous testing to modify and verify its performance.

7.1 Evaluation

In the bankruptcy literature, research has tried to establish the best model for bankruptcy prediction by using ever more complicated methods such as highly computerized Artificial Neural Network or kernel based learning models. Instead, this research examined the most popular bankruptcy model, Altman’s z-score, in relation to the worst-case scenario. It examined the variability of and correlation between variables with the aim of producing a simulation that repeatedly calculates Altman’s z-scores, from which it is possible to find the worst-case scenario at a selected confidence level. Not only is this approach applicable to Altman’s z-score model, but the methodology can be used similarly for almost any other bankruptcy model. The Value at Risk methodology, used throughout this study, is very flexible and applicable in a variety of situations.

The biggest drawbacks were not related to the methodological approach, but rather to the availability of data and the resources to conduct the study in the most appropriate way possible. Some of the limitations faced during this study were purely related to the lack of resources. Having the necessary resources would have helped to improve and modify the system, resulting in fewer limitations and better outcomes, and would have given the study more credibility. Some limitations, such as the lack of data, were practically impossible to overcome. Although simulations help in situations of limited data, the simulations are only as good as the quality of the data. One option to overcome some of the data related issues would be to choose a model that uses data with a long historical record and whose future outcomes are more predictable. Even then, the model depends on the selected sample: the size and industry of the companies and the culture and regulations of different countries have an impact on the variables and would be difficult to capture with one standard model. Hence, the model developed in this study is most appropriate for FTSE 250 index firms or firms with similar characteristics. As pointed out in the data analysis, the model is best used on manufacturing firms, as the results may not describe the riskiness of service firms as accurately. For service firms, it might have been more accurate to use the modified z-score model specifically developed for non-manufacturing firms, whose distressed region is defined as z-score results of less than 1.1. This would have changed the obtained results, although not considerably.

The positive side of this model is that it observes a range and gives the lower quantile figure based on a confidence level. It is much more difficult to overestimate this figure compared with the standard Altman’s z-score. The lower bound risk metric depends on the volatility of the variables. Therefore, firms whose risk metric results are above the distressed zone limit of 1.81 can be considered relatively safe in terms of insolvency during times of economic distress. The example introduced in the findings shows that HomeServe plc was more exposed to negative economic and adverse business situations. It had a higher risk of insolvency had such situations materialized and continued for an extended period. As can be seen from that example, Altman’s z-score does not give a full and accurate overview. Therefore, the lower bound risk metric developed in this research provides valuable supplementary information for well-informed decision making.

The outcomes in Table 9 showed that there were no material differences between the results from the normal and the t-distribution used in the Monte Carlo simulation. This indicates that the distribution does not have the expected fat tails, but is somewhat closer to a normal distribution; it could, for instance, also be a t-distribution with a different degrees of freedom parameter. Nevertheless, this could only be determined with back- and forward testing and further refinements to the model.

The 99% MC confidence limits have a lower and wider z-score range compared with the statistically calculated 99% normal confidence limits. The right-hand section of Table 9 showed that more than 70% of firms had a lower z-score when MC limits were used than under the 99% normal confidence limits. Even though the remaining values, over 20%, were higher, the model did not perform worse. The left-hand section of Table 9 showed that the number of outliers was the same, which indicates that the Monte Carlo method followed the distribution of the variables more closely than the statistical limit method. This gives more confidence in using the MC method. Even when the MC limits become very negative, there is a reason behind it: it indicates a considerable risk and that the financials and the volatility of the variables in the observed firms need more investigation.

Whilst the outliers remained within the confidence limits, as indicated in Table 9, the potential downside effect may not be captured when using the 95% MC limits, as can be deduced from Table 10 and from the box plots presented in Figure 6. The 99% MC limits provide confidence even in more adverse situations such as a financial crisis. The same requirement has also been applied by the Basel Accords ([28], p. 385).

Considering that the number of outliers in Table 9 stayed within the specified confidence limits, we can reject the null hypothesis and accept the alternative hypothesis. The VaR methodological approach of creating a lower bound risk metric to determine the lower limits for Altman’s z-score has been demonstrated to work within the specified boundaries.


8. Conclusion

The research analysed FTSE 250 companies with the aim of providing a lower bound risk metric for Altman’s z-score. The time period examined was from 2001 to 2007 and Altman’s z-score limits were estimated for the first quarter of 2008. Data was collected from quarterly reports. After all limitations and exclusions, 78 firms were analysed.

Essentially, the methods applied in this research were based on the Value at Risk methodology. The VaR methodology was used to generate a new risk metric that set a lower bound confidence limit for Altman’s z-score bankruptcy model. The model used Monte Carlo simulation and correlation matrices to produce the new risk metric.

The results obtained were compared with the statistical confidence limits and with Altman’s z-scores from the first quarter of 2008. The number of outliers stayed within the selected confidence limits, which showed that the model works within the specified limits. The first limitation was set by the chosen sample: the aim of the research was to focus on UK based firms, and the FTSE 250 index therefore appeared to be the most appropriate in terms of size, data availability and interest. To actually verify the model, it must be back- and forward tested, neither of which was carried out in this research. It was suggested that a 99% confidence limit be used for the model in order to cover potential adverse situations. The contribution of this paper is that the new risk metric provides a measure of risk for Altman’s z-score that was not considered before. It produces essential information on the quality of the z-score and helps to make more profound decisions.

The risk horizon used throughout this analysis has been one quarter, because this is the shortest financial reporting period and it provides the most data points for the analysis. The risk horizon can be changed. The obvious way is to replace the quarterly results with half-year or longer-period results and repeat the whole calculation process. Another way is to convert the short-term risk horizon into a long-term one, which is called scaling. Assuming that the variables are normally distributed, the values can be approximately scaled by the following equation

x = μr + √r σz (E17)

where r is the multiplier used to obtain the required risk horizon ([28], p. 22). Therefore, in the current analysis, all variable values x would need to be recalculated, which also implies recalculating all subsequent Altman’s z-scores and the confidence limit for each firm.
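As a minimal numerical illustration of this scaling (our sketch): the quarterly mean, standard deviation and simulated draw below are hypothetical, and the square-root-of-time form of Eq. (E17) is as reconstructed above.

```python
import math

mu, sigma, z = 0.05, 0.12, -2.326   # hypothetical quarterly mean, st. dev. and a 1% tail draw
r = 4                                # scale a one-quarter horizon to one year (four quarters)

x_quarter = mu + sigma * z                    # unscaled quarterly value
x_year = mu * r + math.sqrt(r) * sigma * z    # value scaled per Eq. (E17)
print(round(x_quarter, 4), round(x_year, 4))
```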


Appendix

AD test statistic: 1.8584
P-value: <0.0005

Company Name | Industry Classifications | 95% N-dist. | 95% N-dist. MC | 95% t-dist. MC | 99% N-dist. MC | 99% t-dist. MC | 99% N-dist. | Q1 2008
A.G. BARR p.l.c. (LSE:BAG) | Consumer Staples (Primary) | 4.29 | 4.20 | 4.21 | 3.47 | 3.08 | 3.94 | 5.31
Aggreko plc (LSE:AGK) | Industrials (Primary) | 2.60 | 2.21 | 2.27 | 1.02 | 0.12 | 2.20 | 3.50
Amec Foster Wheeler plc (LSE:AMFW) | Energy (Primary) | 1.12 | 1.38 | 1.37 | 1.15 | 1.07 | 0.67 | 3.74
AVEVA Group plc (LSE:AVV) | Information Technology (Primary) | 4.09 | −19.18 | −15.07 | −104.54 | −88.12 | 2.82 | 7.51
Balfour Beatty plc (LSE:BBY) | Industrials (Primary) | 1.52 | 1.56 | 1.56 | 1.41 | 1.36 | 1.34 | 2.00
BBA Aviation plc (LSE:BBA) | Industrials (Primary) | 1.48 | 1.50 | 1.50 | 1.35 | 1.33 | 1.36 | 1.95
Bellway p.l.c. (LSE:BWY) | Consumer Discretionary (Primary) | 2.97 | 2.56 | 2.62 | 1.06 | 0.31 | 2.65 | 4.33
Bodycote plc (LSE:BOY) | Industrials (Primary) | 1.19 | 1.18 | 1.19 | 0.91 | 0.86 | 0.94 | 1.66
Bovis Homes Group PLC (LSE:BVS) | Consumer Discretionary (Primary) | 2.94 | 2.92 | 2.95 | 2.31 | 2.07 | 2.37 | 4.65
BTG plc (LSE:BTG) | Healthcare (Primary) | −13.66 | −6.39 | −7.03 | −22.86 | −45.32 | −22.50 | 2.81
Cairn Energy plc (LSE:CNE) | Energy (Primary) | 1.54 | −3.72 | −3.90 | −65.73 | −68.63 | −0.79 | 5.19
Carillion plc (LSE:CLLN) | Industrials (Primary) | 1.61 | 1.55 | 1.53 | −7.36 | −5.72 | 1.28 | 1.43
Clarkson PLC (LSE:CKN) | Industrials (Primary) | 1.80 | −5.62 | −4.15 | −32.77 | −28.93 | 1.26 | 2.54
Cranswick plc (LSE:CWK) | Consumer Staples (Primary) | 2.92 | 2.04 | 2.16 | −14.92 | −9.86 | 2.43 | 3.86
Dairy Crest Group plc (LSE:DCG) | Consumer Staples (Primary) | 2.33 | 2.35 | 2.35 | 2.16 | 2.12 | 2.11 | 2.38
Dechra Pharmaceuticals plc (LSE:DPH) | Healthcare (Primary) | 3.32 | 3.13 | 3.16 | 2.41 | 2.01 | 2.96 | 5.10
Diploma PLC (LSE:DPLM) | Industrials (Primary) | 4.93 | 4.71 | 4.71 | 3.63 | 3.16 | 4.43 | 5.14
Domino’s Pizza Group plc (LSE:DOM) | Consumer Discretionary (Primary) | 2.16 | 0.19 | 0.44 | −6.59 | −9.48 | 0.41 | 8.44
Euromoney Institutional Investor PLC (LSE:ERM) | Consumer Discretionary (Primary) | 0.96 | −7.03 | −4.80 | −48.77 | −41.36 | 0.64 | 1.25
Firstgroup plc (LSE:FGP) | Industrials (Primary) | 1.79 | 1.64 | 1.66 | 1.27 | 1.12 | 1.61 | 2.44
Galliford Try plc (LSE:GFRD) | Industrials (Primary) | 2.40 | −2.19 | −0.97 | −28.46 | −20.32 | 2.11 | 2.22
Grafton Group plc (LSE:GFTU) | Industrials (Primary) | 2.36 | 1.91 | 1.97 | −4.22 | −3.84 | 2.16 | 3.43
Greencore Group plc (LSE:GNC) | Consumer Staples (Primary) | 1.10 | 1.12 | 1.12 | 0.97 | 0.95 | 0.90 | 2.39
Greene King plc (LSE:GNK) | Consumer Discretionary (Primary) | 0.99 | 0.99 | 0.99 | −3.71 | −2.65 | 0.83 | 1.20
Greggs plc (LSE:GRG) | Consumer Discretionary (Primary) | 5.76 | 5.74 | 5.74 | 5.50 | 5.41 | 5.43 | 6.86
Halma plc (LSE:HLMA) | Information Technology (Primary) | 4.34 | 4.78 | 4.77 | 4.35 | 4.26 | 3.57 | 5.92
Hays plc (LSE:HAS) | Industrials (Primary) | 1.58 | 1.97 | 1.84 | −53.82 | −41.64 | −0.81 | 5.95
Hill & Smith Holdings PLC (LSE:HILS) | Materials (Primary) | 1.47 | 1.40 | 1.40 | 1.06 | 0.62 | 1.21 | 1.77
HomeServe plc (LSE:HSV) | Industrials (Primary) | 1.82 | 0.84 | 0.76 | −0.80 | −1.24 | 0.90 | 3.95
Hunting plc (LSE:HTG) | Energy (Primary) | 2.36 | 2.10 | 2.11 | 1.40 | 1.04 | 2.08 | 2.02
IMI plc (LSE:IMI) | Industrials (Primary) | 2.29 | 2.31 | 2.33 | 2.09 | 2.05 | 2.00 | 3.21
Inchcape plc (LSE:INCH) | Consumer Discretionary (Primary) | 3.11 | 3.08 | 3.08 | 2.66 | 2.45 | 2.74 | 3.41
J D Wetherspoon plc (LSE:JDW) | Consumer Discretionary (Primary) | 1.60 | 1.68 | 1.69 | 1.55 | 1.51 | 1.39 | 2.18
Kier Group plc (LSE:KIE) | Industrials (Primary) | 2.59 | 2.59 | 2.59 | 2.48 | 2.43 | 2.49 | 3.01
Ladbrokes Coral Group plc (LSE:LCL) | Consumer Discretionary (Primary) | −2.53 | −5.17 | −3.80 | −35.48 | −25.93 | −3.90 | −0.44
Marston’s PLC (LSE:MARS) | Consumer Discretionary (Primary) | 0.96 | 0.96 | 0.95 | 0.83 | 0.69 | 0.87 | 1.01
Meggitt PLC (LSE:MGGT) | Industrials (Primary) | 1.23 | −1.06 | −0.48 | −15.72 | −10.10 | 0.87 | 1.35
Mitie Group plc (LSE:MTO) | Industrials (Primary) | 4.04 | 4.20 | 4.18 | 3.38 | −2.60 | 3.63 | 4.05
N Brown Group plc (LSE:BWNG) | Consumer Discretionary (Primary) | 2.82 | 3.17 | 3.17 | 2.90 | 2.82 | 2.27 | 3.69
Northgate plc (LSE:NTG) | Industrials (Primary) | 1.11 | 1.07 | 1.07 | −2.80 | −3.06 | 0.91 | 1.46
PageGroup plc (LSE:PAGE) | Industrials (Primary) | 7.07 | 4.66 | 4.61 | −17.65 | −17.25 | 5.92 | 8.36
PZ Cussons Plc (LSE:PZC) | Consumer Staples (Primary) | 1.48 | 2.56 | 2.63 | 0.86 | 0.69 | −1.18 | 4.67
Redrow plc (LSE:RDW) | Consumer Discretionary (Primary) | 3.35 | 3.22 | 3.20 | 2.84 | 2.71 | 3.13 | 3.82
Renishaw plc (LSE:RSW) | Information Technology (Primary) | 7.18 | 7.58 | 7.53 | 6.19 | 2.06 | 5.77 | 13.20
RPC Group Plc (LSE:RPC) | Materials (Primary) | 1.89 | 1.77 | 1.79 | 1.35 | 1.18 | 1.74 | 2.67
Savills plc (LSE:SVS) | Real Estate (Primary) | 2.29 | 1.33 | 1.36 | −6.70 | −7.53 | 1.80 | 4.04
Senior plc (LSE:SNR) | Industrials (Primary) | 1.90 | 1.86 | 1.87 | 1.57 | 1.50 | 1.70 | 2.63
Serco Group plc (LSE:SRP) | Industrials (Primary) | 1.29 | 1.41 | 1.37 | −6.89 | −4.82 | 0.66 | 3.07
Spectris plc (LSE:SXS) | Information Technology (Primary) | 1.55 | 1.73 | 1.74 | 1.27 | 1.11 | 0.97 | 3.58
Spirax-Sarco Engineering plc (LSE:SPX) | Industrials (Primary) | 2.86 | 3.08 | 3.11 | 2.37 | 2.21 | 1.98 | 5.47
St. Modwen Properties PLC (LSE:SMP) | Real Estate (Primary) | 1.26 | 1.08 | 1.04 | −7.39 | −6.16 | 1.06 | 1.39
Stagecoach Group plc (LSE:SGC) | Industrials (Primary) | 0.90 | 1.06 | 1.06 | 0.84 | 0.74 | 0.60 | 1.73
Tate & Lyle plc (LSE:TATE) | Consumer Staples (Primary) | 2.21 | 2.22 | 2.22 | 2.12 | 2.07 | 2.01 | 2.98
Ted Baker PLC (LSE:TED) | Consumer Discretionary (Primary) | 5.61 | 5.10 | 5.24 | 2.47 | 0.38 | 4.63 | 8.40
The Berkeley Group Holdings plc (LSE:BKG) | Consumer Discretionary (Primary) | 1.65 | 2.72 | 2.68 | 1.86 | 1.17 | 0.37 | 5.78
The Go-Ahead Group plc (LSE:GOG) | Industrials (Primary) | 2.35 | 2.33 | 2.33 | 1.21 | 0.31 | 2.00 | 3.01
Travis Perkins plc (LSE:TPK) | Industrials (Primary) | 1.93 | 2.04 | 2.04 | −12.96 | −7.23 | 1.35 | 2.54
Tullow Oil plc (LSE:TLW) | Energy (Primary) | 0.92 | −6.35 | −5.12 | −34.31 | −25.13 | 0.49 | 2.02
UBM plc (LSE:UBM) | Consumer Discretionary (Primary) | 0.00 | −0.32 | −0.26 | −0.89 | −1.14 | −0.44 | 1.51
UDG Healthcare plc (LSE:UDG) | Healthcare (Primary) | 3.76 | 3.75 | 3.74 | 3.34 | 3.24 | 3.55 | 4.05
Ultra Electronics Holdings plc (LSE:ULE) | Industrials (Primary) | 2.93 | 2.69 | 2.71 | 2.02 | 1.72 | 2.61 | 3.52
Victrex plc (LSE:VCT) | Materials (Primary) | 7.06 | 7.25 | 7.28 | −2.44 | −8.15 | 5.75 | 8.60
WH Smith PLC (LSE:SMWH) | Consumer Discretionary (Primary) | 2.66 | 2.59 | 2.58 | 1.86 | 1.34 | 2.06 | 5.42
William Hill plc (LSE:WMH) | Consumer Discretionary (Primary) | −0.43 | 0.86 | 0.81 | −0.36 | −0.99 | −2.71 | 2.06

References

  1. Jorion P. Value at risk: the new benchmark for managing financial risk. New York: McGraw-Hill; 2001
  2. Xu X, Chen Y, Zheng H. The comparison of enterprise bankruptcy forecasting method. Journal of Applied Statistics. 2011;38(2):301-308. DOI: 10.1080/02664760903406470
  3. Altman EI. Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy. Journal of Finance. 1968:189-209. DOI: 10.1111/j.1540-6261.1968.tb00843.x
  4. Beaver WH. Financial ratios predictors of failure. Journal of Accounting Research. 1966;4:71-111
  5. Fisher RA. The Use of Multiple Measurements in Taxonomic Problems. Annals of Eugenics. 1936;7:179. DOI: 10.1111/j.1469-1809.1936.tb02137.x
  6. Merton R. On the Pricing of Corporate Debt: The Risk Structure of Interest Rates. The Journal of Finance. 1974;29(2):449-470. DOI: 10.2307/2978814
  7. Ohlson J. Financial Ratios and the Probabilistic Prediction of Bankruptcy. Journal of Accounting Research. 1980;18(1):109-131. DOI: 10.2307/2490395
  8. Springate GLV. Predicting the possibility of failure in a Canadian firm (Unpublished master’s thesis). Canada: Simon Fraser University; 1978
  9. Zmijewski ME. Methodological issues related to the estimation of financial distress prediction models. Journal of Accounting Research. 1984;22:59-86
  10. Chou C, Hsieh S, Qiu C. Hybrid genetic algorithm and fuzzy clustering for bankruptcy prediction. Applied Soft Computing. 2017. DOI: 10.1016/j.asoc.2017.03.014
  11. Coats PK, Fant LF. Recognizing Financial Distress Patterns Using a Neural Network Tool. The Journal of the Financial Management Association. 1993;22(3):142-155
  12. Perez M. Artificial neural networks and bankruptcy forecasting: a state of the art. Neural Computing & Applications. 2006;15(2):154-163. DOI: 10.1007/s00521-005-0022-x
  13. Tam KY, Kiang MY. Managerial applications of neural networks: the case of bank failure predictions. Management Science. 1992;38(7):926-947
  14. Barboza F, Kimura H, Altman E. Machine Learning Models and Bankruptcy Prediction. Expert Systems with Applications. 2017. DOI: 10.1016/j.eswa.2017.04.006
  15. Van Gestel T, Baesens B, Martens D. From linear to non-linear kernel based classifiers for bankruptcy prediction. Neurocomputing. 2010;73. DOI: 10.1016/j.neucom.2010.07.002
  16. Zhao D, Yu F, Huang C, Wei Y, Wang M, Chen H. An Effective Computational Model for Bankruptcy Prediction Using Kernel Extreme Learning Machine Approach. Computational Economics. 2017;49(2):325-341. DOI: 10.1007/s10614-016-9562-7
  17. Miller W. Comparing Models of Corporate Bankruptcy Prediction: Distance to Default vs. Z-Score. 2009. Retrieved 25 June 2017, from https://corporate.morningstar.com/us/documents/MethodologyDocuments/MethodologyPapers/CompareModelsCorpBankruptcyPrediction.pdf
  18. Heine ML. Predicting Financial Distress of Companies: Revisiting the Z-Score and Zeta Models. Unpublished paper. New York: Stern School of Business; 2000
  19. Hillegeist S, Keating E, Cram D, Lundstedt K. Assessing the Probability of Bankruptcy. Review of Accounting Studies. 2004;9(1):5-34. DOI: 10.1023/B:RAST.0000013627.90884.b7
  20. Balcaen S, Ooghe H. 35 years of studies on business failure: an overview of the classic statistical methodologies and their related problems. The British Accounting Review. 2006;93. DOI: 10.1016/j.bar.2005.09.001
  21. Appiah KO, Chizema A, Arthur J. Predicting corporate failure: a systematic literature review of methodological issues. International Journal of Law & Management. 2015;57(5):461. DOI: 10.1108/IJLMA-04-2014-0032
  22. Mousavi MM, Ouenniche J, Xu B. Performance evaluation of bankruptcy prediction models: An orientation-free super-efficiency DEA-based framework. International Review of Financial Analysis. 2015;75. DOI: 10.1016/j.irfa.2015.01.006
  23. Charalambakis EC. On the Prediction of Corporate Financial Distress in the Light of the Financial Crisis: Empirical Evidence from Greek Listed Firms. International Journal of the Economics of Business. 2015;22(3):407-428
  24. Agarwal V, Taffler RJ. Comparing the performance of market-based and accounting-based bankruptcy prediction models. Journal of Banking and Finance. 2008. DOI: 10.1016/j.jbankfin.2007.07.014
  25. Baldwin J, Glezen GW. Bankruptcy Prediction Using Quarterly Financial Statement Data. Journal of Accounting, Auditing & Finance. 1992;7(3):269-285
  26. Halilbegovic S, Vehabovic M. Backtesting Value at Risk Forecast: the Case of Kupiec POF-Test. European Journal of Economic Studies. 2016;17(3):393-404. DOI: 10.13187/es.2016.17.393
  27. Cheung YH, Powell RJ. Anybody can do Value at Risk: A Teaching Study using Parametric Computation and Monte Carlo Simulation. Australasian Accounting, Business and Finance Journal. 2012;6(5):101-118
  28. Alexander C. Value-at-risk models. 1st ed. Chichester, England: Wiley; 2010
  29. Breuer T. Providing against the worst: Risk capital for worst case scenarios. Managerial Finance. 2006;32(9):716-730. DOI: 10.1108/03074350610681934
  30. Krause A. Exploring the Limitations of Value at Risk: How Good Is It in Practice? The Journal of Risk Finance. 2003;(2):19
  31. Lambadiaris G, Papadopoulou L, Skiadopoulos G, Zoulis Y. VAR: history or simulation? Risk. 2003;16(9):122-127
  32. Kupiec P. Techniques for Verifying the Accuracy of Risk Management Models. Journal of Derivatives. 1995;3:73-84
  33. Nieto MR, Ruiz E. Review: Frontiers in VaR forecasting and backtesting. International Journal of Forecasting. 2016;501. DOI: 10.1016/j.ijforecast.2015.08.003
  34. S&P Capital IQ. FTSE 250 Index Financial Data. 2017. Retrieved 1 July 2017, from the S&P Capital IQ database
  35. Nallareddy S, Pozen R, Rajgopal S. Consequences of Mandatory Quarterly Reporting: The U.K. Experience. Columbia Business School Research. 2017;17(33)
  36. Marathe R, Ryan SM. On the Validity of the Geometric Brownian Motion Assumption. The Engineering Economist. 2005;50(2):159-192. DOI: 10.1080/00137910590949904
  37. McCrary S. Implementing a Monte Carlo simulation: Correlation, skew, and kurtosis. 2015. Retrieved 29 June 2017, from http://www.thinkbrg.com/media/publication/687_McCrary_MonteCarlo_Whitepaper_20150923_WEB.pdf
  38. Moore T. Generating Multivariate Normal Pseudo Random Data. Teaching Statistics. 2001;23(1):8-10
  39. Hunt N. Generating Multivariate Normal Data in Excel. Teaching Statistics. 2001;23(2):58-59
  40. Parramore K. On Simulating Realizations of Correlated Random Variables. Teaching Statistics. 2000;22(2):61-63
  41. Villa C, Rubio FJ. Objective priors for the number of degrees of freedom of a multivariate t distribution and the t-copula. 2017. Retrieved 20 July 2017, from https://arxiv.org/abs/1701.05638
  42. Krzywinski M, Altman N. Points of Significance: Visualizing samples with box plots. Nature Methods. 2014;11(2):119-120. DOI: 10.1038/nmeth.2813

Notes

  • The typically selected confidence levels are 95% or 99%. Both levels are used in this study for comparative reasons.
