Open access peer-reviewed chapter

Perspective Chapter: Application of Monte Carlo Methods in Strategic Business Decisions

Written By

Chioma Ngozi Nwafor

Submitted: 02 June 2022 Reviewed: 30 June 2022 Published: 23 August 2023

DOI: 10.5772/intechopen.106201

From the Edited Volume

Data and Decision Sciences - Recent Advances and Applications

Edited by Tien M. Nguyen


Abstract

Some strategic business problems cannot be expressed in analytical forms and, in most cases, are difficult to define in a deterministic manner. This chapter explores the use of probabilistic modelling using Monte Carlo methods in the modelling of decision problems. The focus is on using Monte Carlo simulation to provide a quantitative assessment of uncertainties and key risk drivers in business decisions. Using an example based on hypothetical data, we illustrate decision problems where uncertainties make simulation modelling useful to obtain decision insights and explore alternative choices. We will explore how to propagate uncertainty in input decision variables to get a probability distribution of the output(s) of interest.

Keywords

  • Monte Carlo methods
  • probability distributions
  • decision variable
  • probabilistic and stochastic modelling
  • decision analytics

1. Introduction

Many critical business challenges involve decision-making under uncertainty. Examples in the business world include the decision to introduce a new product, capital budgeting decisions, strategic decisions to increase net profit and a financial institution's decision to grant a loan. Accounting for these sources of uncertainty while balancing the objectives of the business can be very challenging. This chapter discusses probabilistic modelling using Monte Carlo methods in the modelling of decision problems. The focus is on using Monte Carlo simulation to provide a quantitative assessment of uncertainties and critical risk drivers in business decisions. Given adequate data and reasonable assumptions, probabilistic modelling techniques, such as Monte Carlo analysis, can be viable statistical tools for analyzing uncertainty in business decisions. To make the most of probabilistic modelling using Monte Carlo methods, executives must combine all available insights about the relevant uncertainties and their impact on their decisions.

The literature uses the terms probabilistic and stochastic modelling interchangeably, so it is essential to define them briefly. Probabilistic modelling is a statistical technique used to consider the impact of random events or uncertainty in predicting the potential occurrence of future outcomes [1, 2]. Probabilistic models incorporate probability distributions into the model of an event or phenomenon. These models rest on the premise that we rarely know everything about a situation, so there is always an element of randomness or uncertainty to consider. Stochastic modelling, on the other hand, forecasts the probability of various outcomes under different conditions using random variables [3]. The word stochastic comes from the Greek stokhazesthai, meaning to aim or guess. Essentially, a stochastic model estimates probability distributions of potential outcomes by accounting for random variation in the input variables. Examples of stochastic models include Monte Carlo simulation and Markov chain models, amongst others [3]. These models use probability distributions to account for uncertainties in the input variables. In a strict sense, stochastic modelling is an area of probability and statistics used in decision-making under uncertainty. Like probabilistic modelling, it presents data and predicts outcomes that account for a certain level of unpredictability or randomness. In this chapter, we use the terms probabilistic and stochastic modelling interchangeably because both rest on probability, that is, on the fact that randomness plays a role in predicting future events. The stochastic approach we will use is the Monte Carlo simulation method.

Using an example based on hypothetical data, we illustrate decision problems where uncertainties make probabilistic/stochastic modelling useful to obtain decision insights and explore alternative choices. We will explore how to propagate uncertainty in input decision variables to get a probability distribution of the output(s) of interest. The rest of the chapter is structured as follows. Section 2 discusses probability distributions and the distributions used in the example (Normal, Lognormal, Bernoulli, and Triangular distributions). Section 3 explores how to deal with uncertainty using a probabilistic/stochastic model. Section 4 provides an application of Monte Carlo methods in making business decisions, while Section 5 is the conclusion.


2. Probability modelling and Monte Carlo simulation

Business uncertainty arises when the outcome of a strategic business decision is unclear, often due to a lack of information or knowledge about the business environment. The conventional approach to strategic business decisions assumes that executives can predict the future of any business accurately enough to choose a clear strategic direction for it by applying standard deterministic spreadsheet models. Such deterministic models allow you to calculate a future event precisely, without the involvement of randomness. However, they do not account for the fact that business environments are complex and constantly changing. In particular, deterministic models assume that known average rates, with no random deviations, can be applied to the broader population. For example, if 10,000 businesses each have a 95% chance of surviving another year, we can be reasonably confident that close to 9500 of them will indeed survive.

In contrast, probability models using Monte Carlo simulation allow for random variation due to uncertainty in the parameters, or for sample sizes for which it may not be reasonable to apply average rates. For example, consider a sample of 10,000 businesses in the U.K., each having a probability of 0.7 of surviving another year. The average number of companies that survive another year will be 10,000 × 0.7 = 7000. However, because of random variation, the actual number will deviate from this average, and a probabilistic description of the population at the end of the year is preferable. We would use a probability distribution to describe the population in this case. The probability distribution provides the probability of every possible number of survivors at the end of the year, not just an average number.
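This survivor example is easy to check numerically. The sketch below (illustrative only, assuming NumPy is available) simulates many hypothetical years and shows that, although the average is 7000, individual outcomes vary around it:

```python
import numpy as np

rng = np.random.default_rng(42)

# One simulated year = one draw of how many of 10,000 firms survive,
# where each firm survives independently with probability 0.7.
survivors = rng.binomial(n=10_000, p=0.7, size=50_000)

print("average survivors:", survivors.mean())   # close to 10,000 x 0.7 = 7000
print("typical deviation:", survivors.std())    # roughly sqrt(n*p*(1-p)), about 46
```

Plotting a histogram of `survivors` would show the full probability distribution of outcomes rather than the single average figure.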

When the future is genuinely uncertain, a deterministic approach is at best marginally helpful and at worst dangerous, given that underestimating uncertainty can lead to strategies that neither defend a company against threats nor take advantage of the opportunities that higher levels of uncertainty provide. Analytical tools such as probabilistic modelling using Monte Carlo simulation and game theory, amongst others, offer enormous opportunities for business executives working in industries facing significant uncertainty. What follows is a discussion of probabilistic modelling using Monte Carlo simulation and an analysis of the probability distributions used in this chapter.

We can categorize data for modelling business decisions into input data (explanatory data) and output data (predicted/outcome data). A major aspect of dealing with business uncertainty is using quantitative methods to model that uncertainty. For example, a firm's net profit one year from today is uncertain: it is a function of many uncertain input variables, including the demand for the company's goods or services, the cost of goods sold and the tax rate, amongst others. Some of these input variables are outside the control of the decision-maker. Probability models can be used to propagate uncertainty in the input variables: probability modelling uses probability distributions of input assumptions to calculate the probability distribution of chosen output metrics or summary measures [4]. It is important to remember that uncertainty in business decisions is unavoidable because real-world situations cannot be perfectly measured, modelled or predicted. As a result, business decision-makers face significantly complex problems compounded by varying levels of uncertainty, and if uncertainty is not well acknowledged, complications arise in the decision-making process. Decision-makers often want to know the impact of certain business decisions on their bottom line, how much the trade-off between alternative actions reduces their potential profit and the possible consequences of their decisions.

Monte Carlo simulation allows executives to see the full range of possible outcomes of their decisions and assess the impact of risk, thus allowing for better decision-making under uncertainty. Probability distributions are a realistic way of describing uncertainty in variables. Monte Carlo methods are defined as statistical approaches that provide approximate solutions to complex optimization or simulation problems by using random sequences of numbers [5, 6, 7]. The Monte Carlo method performs analysis by building models of possible results, substituting a range of values (a probability distribution) for any factor with inherent uncertainty. The algorithm then calculates results repeatedly, each time using a different set of random values from the probability functions. In other words, during a Monte Carlo simulation values are sampled at random from the input probability distributions, and each set of samples is called an iteration. The resulting outcome from that sample is recorded. Each iteration feeds different values for the input assumptions into the model. A Monte Carlo simulation can involve thousands or tens of thousands of recalculations before producing a probability distribution of possible outcomes.
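A minimal sketch of this sampling loop, with purely illustrative input distributions (the triangular parameters and the fixed price below are assumptions for demonstration, not prescriptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000   # number of iterations

# Each iteration draws one value from every uncertain input...
unit_cost = rng.triangular(17_500, 18_000, 20_000, size=N)  # cost per unit
demand = rng.triangular(1_000, 2_500, 3_000, size=N)        # units demanded
price = 21_000                                              # fixed assumption

# ...and records the resulting output, here a simple gross margin
margin = (price - unit_cost) * demand

print("mean margin:", round(margin.mean()))
print("5th and 95th percentiles:", np.percentile(margin, [5, 95]).round())
```

The array `margin` holds one simulated outcome per iteration, so a histogram of it is the output probability distribution the text describes.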

Using probability distributions allows variables to have different probabilities of different outcomes occurring. Probabilistic modelling using the Monte Carlo method tells you what could happen and how likely it is to happen. What follows is a brief discussion of the common probability distributions used in decision analytics.

2.1 Common probability distributions in business decision making

Organizations and governments use mathematical models, optimization algorithms and other tools in their decision-making to achieve strategic organizational goals. However, essential variables such as future sales and future material costs are often uncertain. In reality, a plan that promises high performance in theory can go wrong when assumptions change or prove false, or when we encounter unanticipated events such as regional wars or the COVID-19 global pandemic. Probabilistic modelling using Monte Carlo simulation methods can assist decision-makers in making strategic business decisions in the face of uncertainty. Probability models represent events as probabilities rather than certainties, using probability distributions to increase expected returns or reduce downside risk. The main goals of this section are to present a brief review of distribution fitting and a discussion of the following probability distributions: the Normal, Lognormal, Bernoulli and Triangular distributions. We start with distribution fitting and then discuss each probability distribution in turn below.

2.1.1 Review of distribution fitting

Distribution fitting is used to select the statistical distribution that best fits a data set. Examples of statistical distributions include the normal, lognormal, Bernoulli and triangular distributions. A distribution characterizes a variable well when the distribution's conditions match those of the variable. The maximum likelihood estimation (MLE) method estimates the distribution's parameters from a data set. Once the estimation is complete, goodness-of-fit techniques help determine which distribution fits your data best. Distributions are defined by parameters. Four kinds of parameters are used in distribution fitting: location, scale, shape and threshold. The location parameter indicates where the distribution lies along the x-axis (the horizontal axis). The scale parameter determines how much spread there is in the distribution: the larger the scale parameter, the more spread out the distribution; the smaller the scale parameter, the less spread out it is. The shape parameter allows the distribution to take different shapes; broadly, larger values of the shape parameter tend to skew the distribution to the left, while smaller values tend to skew it to the right. Finally, the threshold parameter defines the minimum value of the distribution along the x-axis; the distribution cannot take any values below this threshold.
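A hedged sketch of this fitting workflow, assuming SciPy is available: the data below is simulated, the parameters are recovered by maximum likelihood with `stats.norm.fit`, and a Kolmogorov-Smirnov test serves as the goodness-of-fit check:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.normal(loc=50.0, scale=8.0, size=2_000)   # stand-in dataset

# Maximum likelihood estimates of the normal distribution's parameters
mu_hat, sigma_hat = stats.norm.fit(data)

# Goodness of fit: Kolmogorov-Smirnov test against the fitted normal
ks_stat, p_value = stats.kstest(data, "norm", args=(mu_hat, sigma_hat))

print(f"fitted mean = {mu_hat:.2f}, fitted sd = {sigma_hat:.2f}")
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.3f}")
```

In practice one would fit several candidate distributions and compare their goodness-of-fit measures before choosing.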

2.1.2 The normal distribution

The normal distribution is the single most important distribution in statistics. It is a continuous distribution characterized by its mean and standard deviation; we therefore say that the normal distribution is a two-parameter distribution. Changing the mean shifts the normal curve to the right or left; changing the standard deviation spreads the curve out more or less. The possible values of the normal distribution range over the entire real number line, from minus infinity (−∞) to plus infinity (+∞). Random variables from the normal distribution form the foundation of probabilistic modelling. This distribution is also known as the bell curve, and it occurs naturally in many circumstances. For example, the normal distribution is seen in tests like the General Certificate of Secondary Education (GCSE) or National 5 examinations in the U.K., where most students score the average grade (C), smaller numbers score a B or a D, and a much smaller percentage score an A or an F. The score grades create a distribution that looks like a bell. The bell curve is symmetrical: half of the data fall to the left of the mean, and half fall to the right.

Eq. (1) is the formula for a normal density function.

f(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)),  for −∞ < x < +∞.   (E1)

where μ and σ are the mean and standard deviation of the distribution and are fixed constants, and exp denotes the exponential function. The mean can take both positive and negative values, including zero, while the standard deviation can only take positive values.

The standard deviation controls the spread of the distribution. If the standard deviation is small, the data cluster tightly around the mean, and the normal curve is taller. A bigger standard deviation indicates more dispersion away from the mean, making the normal curve flatter and wider. Below are the properties of a normal distribution.

  • The mean, mode and median are all equal.

  • The curve is symmetric around the mean, μ.

  • Half of the values are to the left of the centre, and exactly half the values are to the right.

  • The total area under the curve is 1.
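These properties can be verified empirically; a small sketch with NumPy, using arbitrary illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 100.0, 15.0
samples = rng.normal(mu, sigma, size=200_000)

# Mean and median coincide, and the curve is symmetric about the mean
print("mean:", samples.mean())                         # ~ 100
print("median:", np.median(samples))                   # ~ 100
print("share below the mean:", (samples < mu).mean())  # ~ 0.5
```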

2.1.3 Lognormal distribution

The lognormal distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed [8, 9]. If the random variable X is log-normally distributed, then Y = ln(X) has a normal distribution. A log-normally distributed random variable takes only positive values. The probability density function for a lognormal distribution is defined by two parameters, namely the mean μ and the standard deviation σ of the variable's logarithm:

N(ln x; μ, σ) = (1 / (σ√(2π))) exp(−(ln x − μ)² / (2σ²)),  x > 0.   (E2)

The shape of the lognormal distribution is governed by σ, the shape parameter, which is known as the standard deviation of the lognormal distribution and can be calculated from historical data. The shape parameter affects the general shape of the distribution but does not change its location, while the location parameter μ tells you where on the x-axis the graph is located. Lognormal distributions can model growth rates that frequently occur in biology and finance, and they also model time to failure in reliability studies. The lognormal distribution is widely used in situations where values are positively skewed and cannot fall below zero, for example, in financial analysis for security valuation or in real estate for property valuation. Stock prices are usually positively skewed rather than normally (symmetrically) distributed, because they cannot fall below the lower limit of zero but can increase to any price without limit. The lognormal distribution curve can therefore be used to identify the compound return that a stock can be expected to achieve over a period of time. Similarly, real estate prices illustrate positive skewness and are log-normally distributed, as property values cannot become negative.
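The defining property, that the logarithm of a lognormal variable is normally distributed, and the positive skew discussed above are both easy to demonstrate numerically (illustrative parameters only):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.05, 0.20    # parameters of ln(X), e.g. for a hypothetical asset

x = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

print("all values positive:", bool((x > 0).all()))            # X is always > 0
print("mean of ln(X):", np.log(x).mean())                     # close to mu
print("mean exceeds median:", bool(x.mean() > np.median(x)))  # positive skew
```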

2.1.4 Bernoulli distribution

A Bernoulli distribution is a discrete probability distribution for a Bernoulli trial, a random experiment that has only two outcomes (a 'success' or a 'failure'). The two outcomes are labelled n = 0 and n = 1, where n = 1 ('success') occurs with probability p and n = 0 ('failure') occurs with probability q = 1 − p, with 0 < p < 1. The probability mass function for a Bernoulli distribution is given as:

P(n) = 1 − p  for n = 0,
P(n) = p      for n = 1.   (E3)

The distribution of heads and tails in coin tossing is an example of a Bernoulli distribution with p = q = 1/2.
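A quick sketch of sampling from a Bernoulli distribution (NumPy exposes it as a binomial with a single trial; p = 0.25 here is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.25   # probability of 'success' (n = 1)

# A Bernoulli draw is a binomial draw with one trial
trials = rng.binomial(n=1, p=p, size=100_000)

print("possible outcomes:", np.unique(trials).tolist())   # [0, 1]
print("observed success rate:", trials.mean())            # close to 0.25
```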

2.1.5 Triangular distribution

The triangular probability distribution is a continuous distribution whose probability density function is shaped like a triangle. The triangular distribution is defined by three parameters, namely the minimum value (Min), the most likely value (Likely) and the maximum value (Max). This distribution is practical in real-world applications because we can often estimate the minimum, most likely and maximum values that a random variable will take, and with these three values we can model its behaviour. Essentially, we can use this distribution when we have only limited information about a variable's distribution but can estimate its upper and lower bounds as well as its most likely value. The three conditions underlying the triangular distribution are:

  1. The minimum number of items is fixed.

  2. The maximum number of items is fixed.

  3. The most likely number of items falls between the minimum and maximum values, forming a triangular-shaped distribution.

Values near the minimum and maximum are less likely to occur than those near the most likely value. The equation for the triangular distribution is given below:

f(x) = 2(x − Min) / [(Max − Min)(Likely − Min)]   for Min < x < Likely,
f(x) = 2(Max − x) / [(Max − Min)(Max − Likely)]   for Likely < x < Max.   (E4)
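NumPy's `triangular` sampler takes exactly these three parameters. A sketch with hypothetical demand estimates:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical demand: minimum 1000, most likely 2500, maximum 3000 units
demand = rng.triangular(left=1_000, mode=2_500, right=3_000, size=100_000)

print("all draws within bounds:",
      bool(demand.min() >= 1_000 and demand.max() <= 3_000))
print("sample mean:", demand.mean())   # ~ (Min + Likely + Max) / 3 = 2166.7
```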

3. Dealing with uncertainty using probabilistic models

Uncertainty in decision analytics emanates from a lack of knowledge about the system being modelled, and it involves random events or variables, for example when addressing questions such as 'What are the average future demands for our products?' and 'Should we invest in a capital project or not?'. The most common type of uncertainty is uncertainty due to randomness. Uncertainty can be qualitative or quantitative. Qualitative uncertainty may be due to a lack of knowledge about the factors that affect demand. Quantitative uncertainty, in contrast, may come from a lack of precise knowledge of a model parameter or a lack of confidence that the mathematical model is a correct formulation of the problem. Uncertainty can impact our decisions and actions in desirable as well as undesirable ways, and it can be reduced by collecting more information or data. One of the most commonly used quantitative methods to address uncertainty is probabilistic modelling using the Monte Carlo method.

Most business decisions are based on a forecast of future variables, such as net present value (NPV), net profit or demand for a product. The future is uncertain. To provide a decision-maker with helpful information, you need to generate a comprehensive range of potential outcomes and their relative likelihoods. Our aim in decision analytics is to reduce uncertainty in business decisions by envisioning possible scenarios and making forecasts based on what is considered probable within a range of probabilities. All sound probabilistic models have the following in common:

  1. Correct probability distributions,

  2. Correct use of the input data for these distributions,

  3. Accounting for the associations and relationships between variables.

Selecting the correct probability distributions for the input variables is essential to maximize confidence in your results. The input probability distributions should be as realistic as possible. Remember that each distribution has a distinctive range of possible sampled values and associated probabilities, so choosing the wrong distribution will create the wrong simulation data. A natural question at this point is: how do we know the 'right' probability distributions for our variables? This is a challenging question, a complete discussion of which is beyond the scope of this chapter. However, some guidelines will enable you to create reasonable models, and we discuss each of them in turn below.

Discrete or Continuous Data: Probability distributions describe the dispersion of the values of a random variable, so the type of variable determines the type of probability distribution. Distributions for a single random variable divide into discrete and continuous distributions. When identifying the 'right' probability distribution for your dataset, the first question to ask is whether the variable is discrete or continuous. A discrete quantity has a finite or countable number of possible values, for example, the gender of a person or the country of a person's birth. A continuous quantity can take on any value on the real number line, and so has infinitely many possible values within a specified range; an example is the household incomes of Africans living in Scotland. Discrete probability distributions, or probability mass functions, are used for discrete variables, and probability density functions are used for continuous variables. For discrete probability distributions, each possible value has a non-zero likelihood.

Is the Variable Bounded or Unbounded? The second way of identifying a probability distribution that fits your dataset is to determine whether the continuous variable is bounded, that is, whether it has a minimum and maximum value. Some continuous variables have exact lower bounds; for example, the price of a stock on a particular trading day cannot be less than zero. Some quantities also have exact upper bounds; for example, the percentage of a population exposed to the SARS-CoV-2 virus (COVID-19) cannot be greater than 100%. Most real-world variables have de facto bounds: it is plausible to assert that there is zero probability that the quantity would be smaller than some lower bound or larger than some upper bound, even though there is no precise way to determine the bound.
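When a sampled input must respect such bounds, one simple (though not unique) approach is rejection sampling: redraw any value that falls outside the plausible range. The helper below is a sketch under the assumption that the bounded quantity is lognormal-shaped, with the lognormal parameters matched to a desired mean and standard deviation:

```python
import numpy as np

def truncated_lognormal(mean, sd, lo, hi, size, rng):
    """Sample a lognormal quantity, redrawing values outside [lo, hi]."""
    # Convert the desired mean/sd of X into the mu/sigma of ln(X)
    sigma2 = np.log(1 + (sd / mean) ** 2)
    mu = np.log(mean) - sigma2 / 2
    out = np.empty(0)
    while out.size < size:
        draw = rng.lognormal(mu, np.sqrt(sigma2), size)
        out = np.concatenate([out, draw[(draw >= lo) & (draw <= hi)]])
    return out[:size]

rng = np.random.default_rng(5)
# Hypothetical discount rate: mean 15%, sd 7%, bounded to [10%, 30%]
rates = truncated_lognormal(0.15, 0.07, 0.10, 0.30, 50_000, rng)
print(bool(rates.min() >= 0.10 and rates.max() <= 0.30))   # True
```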

The discussion so far assumes you have historical datasets. Historical data is often a reasonable indicator of the distribution of future outcomes for an input variable, both in terms of the general shape and the parameter estimates. However, there is always an implicit assumption that the historical data is an 'accurate' representation of the future, and historical data has some possible flaws that need to be considered. First, is the data genuinely representative of the potential future, that is, how similar will future conditions be to those in the past? Second, what is the sample period? Does the data only go back over a short period? The sample period is vital because certain observations could be over- or under-represented.

Theory and Subject Matter Knowledge: It is not uncommon to encounter situations in practice where there is no historical data. In such circumstances, a suitable process has to be followed to derive reasonable probability distributions and parameters. In most situations, mathematical theory or logic will suggest an appropriate distribution. For instance, a lognormal distribution is commonly used in the literature to describe distributions of financial assets such as share prices, because asset prices cannot be negative [10, 11]. Caution must be applied when using theory to choose a probability distribution, since the theory may rest on assumptions that are not valid in your situation. For example, using a Binomial distribution to model the sum of several Bernoulli processes is only appropriate if those processes really are identical and independent.


4. Application of probabilistic modelling using Monte Carlo simulation in business decision making

4.1 The problem

CEDFA Ltd. deals in new and used cars. The firm has secured a franchise with a major car manufacturer and ordered two car models: a saloon and a hatchback. Due to changes in market forces, the unit costs and the demand for the cars are unknown. The firm estimated the demand for each model based on the previous year's data. According to the franchise agreement, any vehicles not sold by the end of the year will need to be discounted to ensure they are sold. The size of the discount is affected by various factors, including the presence of alternative brands in the market. The firm anticipates that the annual franchise fee could increase this year as the contract expires and has estimated the probability of this increase to be 0.25; if it does increase, the additional cost will be $5,000,000. Based on the historical data, the firm's sales and market analysts have come up with the base-case estimates shown in Table 1 below.

Variable description                         Amount ($)
Unit cost of a saloon                        $18,000
Unit cost of a hatchback                     $22,500
Sales price per saloon                       $21,000
Sales price per hatchback                    $26,000
Increase in franchise fee                    $5,000,000
Number of units purchased
Number of saloon cars purchased              3,000
Number of hatchback cars purchased           2,000
Demand for the cars (at full price)
Demand for saloon cars                       2,500
Demand for hatchback cars                    1,500
Discount rates for unsold cars
Saloon discount rate                         15%
Hatchback discount rate                      25%
Probability of increasing franchise fee      0.25

Table 1.

Base case estimates.

The company is worried about how the uncertainty in the business environment would affect the demands for their cars and the subsequent discount rates to be used in discounting the unsold vehicles as per their franchise agreement. Based on the collected data, the company’s analysts estimate that the unit cost for each car and the demand for each car follow the triangular probability distribution and the discount rates follow the lognormal probability distribution. The analysts suggest the Bernoulli probability distribution to model the potential increase in the franchise fee. These probability distributions represent the uncertainty in these variables, the full range of possibilities and how likely they are. Table 2 below shows the parameters for the probability distributions.

                                           Minimum value   Most likely value   Maximum value
Unit cost of saloon                        $17,500         $18,000             $20,000
Unit cost of hatchback                     $21,500         $22,500             $25,000
Demand for saloon at full price            1,000           2,500               3,000
Demand for hatchback at full price         500             1,500               2,000
Discount rate (lognormal distribution)     Mean            Standard deviation  Trunc min   Trunc max
Discount to sell leftover saloons          15%             7%                  10%         30%
Discount to sell leftover hatchbacks       25%             10%                 15%         40%
Probability of increase in franchise fee   0.25

Table 2.

Probability distribution parameters.

Given that the company will have to discount any leftover cars as per the franchise agreement, it wants to know the effect of the different discount rates on net profit. Based on historical experience and the current market environment, the firm anticipates that the minimum discounts it can give for unsold saloon and hatchback cars are 10% and 15%, respectively, while the maximum discount rates are estimated at 30% and 40%, respectively. As part of the firm's strategic business decisions, the executives want to understand the impact of the unit costs, the potential increase in the franchise fee, the demand for the cars and the discount rates on the firm's net profit. The firm also wants to decide the minimum and maximum discount rates to apply to the unsold cars and to identify the variable with the most significant impact on total revenue and net profit.

For this example, we are required to do the following:

  1. Build a total revenue/net profit model

  2. Use the triangular probability distribution to propagate uncertainty in the unit cost and demand for the saloon and hatchback vehicles.

  3. Use the lognormal distribution to account for the uncertainty in the discount rate.

  4. Perform 50,000 iterations and 2 simulations, and determine the probability of incurring a loss at a 95% confidence level.

  5. Perform a sensitivity analysis and determine the variable with the largest effect on net profit. (Contributions to variance).

4.2 Decision scenarios

The company wants to decide the discount rate to apply to unsold cars at the end of the year. To make this decision, the firm created two scenarios: strategy one, allowing the discount rate to be as high as possible (untruncated or unbounded), and strategy two, truncating (or bounding) the discount rate with the minimum and maximum rates shown in Table 2 above. The task is to evaluate the effects of these two strategies on the firm's net profit.

4.3 Solutions

As you can see, most business decisions depend on several uncertain factors or variables. Suppose we can determine or estimate the probability distributions of the individual input variables; we can then aggregate these into the probability distribution of the output variable. A commonly used approach is Monte Carlo simulation. In the Monte Carlo simulation method, the probability distribution of the output variable is determined by using a large sample of randomly generated values for the individual variables. These values are drawn from the known or estimated probability distributions of the different input variables or directly from corresponding historical data. Monte Carlo simulation places no restrictive assumptions on the forms of the individual probability distributions, and correlations between these variables can easily be considered. In Monte Carlo methods, we perform many simulations and iterations because of the principle of the Central Limit Theorem.
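The reason many iterations are needed can be shown with a toy model: as the number of iterations grows, the sample mean of the simulated output settles toward a stable value (all distribution parameters below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

def one_iteration(rng):
    """One Monte Carlo iteration: draw every input once, return the output."""
    revenue = rng.triangular(90, 110, 140)   # uncertain revenue (toy numbers)
    cost = rng.normal(100, 5)                # uncertain cost (toy numbers)
    return revenue - cost

for n in (100, 1_000, 10_000, 100_000):
    profits = np.array([one_iteration(rng) for _ in range(n)])
    print(f"{n:>7} iterations -> mean profit {profits.mean():6.2f}")
```

The printed means wander noticeably at 100 iterations but change very little between 10,000 and 100,000, which is the practical sense in which large iteration counts stabilize the output distribution.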

To account for the uncertainties in the input variables, the company’s analysts estimate that the unit cost for each car and the demand for each car follow the triangular probability distribution. The discount rates follow the lognormal probability distribution as shown in Table 2. The analysts suggest the Bernoulli probability distribution to model the potential increase in the franchise fee. Furnished with these pieces of information, we start the modelling process.

First, we develop a deterministic model using the base-case estimates of the input variables provided by the firm. A deterministic model is a model that does not account for randomness/uncertainty in the input variables used to forecast the output variable of interest3 [12]. Consequently, a deterministic model will always produce the same output from a given starting condition. The output from the deterministic model (Table 3) assumes that the unit cost, demand and the discount rate applied to sell leftover cars for both the saloon and hatchback models are constant over time. However, we know that these variables are outside the control of the decision-maker and can change over time. For instance, the demand for these cars is not known with certainty; factors affecting demand include the presence of substitutes in the market, the business cycle4 and the advertisement budget, amongst others. Based on discount rates of 15% and 25% applied to leftover saloon and hatchback cars, the firm ends up with total revenue of $110,175,000 and a profit of $11,175,000.

Variables | Calculations
Total saloon cost | $54,000,000.00
Total hatchback cost | $45,000,000.00
Total cost (including franchise fee increase) | $99,000,000.00
Total saloon revenue | $61,425,000.00
Total hatchback revenue | $48,750,000.00
Total revenue | $110,175,000.00
Profit | $11,175,000.00

Table 3.

Total revenue and profit: deterministic model.
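The deterministic base case amounts to simple arithmetic on the Table 3 aggregates; with fixed inputs the model always returns the same single answer, with no sense of how likely it is:

```python
# Base-case aggregates from Table 3.
total_cost = 99_000_000          # includes the franchise-fee increase
saloon_revenue = 61_425_000
hatchback_revenue = 48_750_000

total_revenue = saloon_revenue + hatchback_revenue
profit = total_revenue - total_cost

print(f"Total revenue: ${total_revenue:,}")   # $110,175,000
print(f"Profit:        ${profit:,}")          # $11,175,000
```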

It is easy to observe the challenges of relying on this model to make business decisions:

  1. This model does not consider the variabilities or uncertainties in the input variables.

  2. While we can see the potential net profit, we cannot determine the probability of ending up with this amount at the end of the period.

  3. This model does not allow us to envision or see the possible values the net profit can take, alongside the probabilities of taking on those values.

Why does this matter? Seeing the full range of possible net-profit values, together with their probabilities, allows us to evaluate the potential downside risk of our decision, that is, the probability of making a loss.

We now illustrate how to use probability distributions to propagate uncertainty in the input variables, using Monte Carlo simulation to produce information and insights from our model and its assumptions. As noted in the previous section, the precision of a probabilistic model relies heavily on choosing probability distributions that accurately represent the problem's uncertainty, randomness and variability; in practice, the inappropriate use of probability distributions is a common failure of probabilistic models. We start by replacing the 'fixed' estimates of the input variables in Table 3 with the parameters of the probability distributions in Table 2. Second, we mark the variables of interest (total revenue and net profit) as the 'output' cells; these are the variables we want to analyze. The simulation5 exercise randomly samples all the input distributions and recalculates the spreadsheet repeatedly, keeping track of the resulting output values. Each separate recalculation is known as an 'iteration', and a single iteration represents one possible future set of circumstances in the model. Since the sampling is random, commonly occurring input values and combinations of inputs appear more frequently in the simulation data, while rarer combinations appear less often. We present the output from the probabilistic Monte Carlo simulation models for the two strategies in Figures 1 and 2, which show the probability density graphs together with summary statistics.

Figure 1.

Profit strategy one (Untruncated Discount Rates).

Figure 2.

Profit strategy two (Truncated Discount Rates).

The primary purpose of quantifying uncertainties is to obtain a sound basis for our decisions. In principle, all available information on an uncertain (random) variable is contained in the corresponding probability distribution (see Figures 1 and 2). In decision analytics, a well-informed decision is based on comparing the decision-maker's risk appetite or risk tolerance threshold to the outcomes they are exposed to. For instance, we may be interested in evaluating the probability of incurring a loss or of reaching a defined target, and the outputs from our model allow us to do exactly that. Statistics such as the mean or standard deviation of the output variable are also computed and used to describe the range of future outcomes. For strategy one, the untruncated discount rate produces a 95% interval for net profit that runs from a loss of $4,900,126 (the 2.5th percentile) to a profit of $12,780,912 (the 97.5th percentile). In other words, there is a 2.5% probability of an outcome worse than a $4,900,126 loss, and a 97.5% probability of an outcome below a $12,780,912 profit.
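Given a simulated net-profit sample, the interval and loss probability quoted above are just empirical percentiles. The sketch below uses a stand-in normal sample in place of the chapter's @Risk output (its figures are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Stand-in for the simulated net-profit sample; purely illustrative.
profit = rng.normal(loc=5_000_000, scale=4_500_000, size=100_000)

lower, upper = np.percentile(profit, [2.5, 97.5])   # central 95% interval
p_loss = np.mean(profit < 0)                        # downside-risk probability

print(f"95% interval: ({lower:,.0f}, {upper:,.0f})")
print(f"P(loss) = {p_loss:.3f}")
```

The same two lines of NumPy recover any other decision statistic, e.g. the probability of reaching a defined profit target via `np.mean(profit > target)`.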

We turn our attention to the second scenario, strategy two, where we truncate (limit) the discount rates that can be applied to the leftover cars: no matter how many cars are left unsold, the discounts cannot exceed the maximum rates. From the probability density graph in Figure 2, we can see that the lower and upper bounds of the 95% interval for net profit are $414,780 and $12,623,794. The probability of ending up with a profit below $414,780 is 2.5%, and the probability of a profit below $12,623,794 is 97.5%.
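The two strategies differ only in whether the sampled discount rate is bounded. A minimal sketch, using `np.clip` as a simple stand-in for a properly truncated distribution (truncation in tools like @Risk resamples rather than clips) and illustrative parameters and bounds:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 100_000

# Strategy one: the lognormal discount rate is unbounded above.
untruncated = rng.lognormal(mean=np.log(0.20), sigma=0.5, size=n)

# Strategy two: the same draws limited to assumed policy bounds.
truncated = np.clip(untruncated, 0.10, 0.35)

# Bounding cuts off the extreme discounts that erode profit.
print(f"max discount, untruncated: {untruncated.max():.2f}")
print(f"max discount, truncated:   {truncated.max():.2f}")
print(f"std, untruncated: {untruncated.std():.3f}")
print(f"std, truncated:   {truncated.std():.3f}")
```

The lower spread of the truncated draws is what shows up downstream as the lower standard deviation of net profit under strategy two.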

Let us briefly consider how we can make decisions using the results from our probabilistic modelling. Observe the statistics reported in Figure 1 (untruncated strategy) and Figure 2 (truncated strategy). For the untruncated strategy, the highest possible loss the firm can incur is ($28,055,462), and the maximum profit is $17,279,472. For the truncated strategy, the maximum loss is ($5,795,584), and the maximum potential profit is $15,496,276. Although the firm may earn a slightly higher profit if it allows market forces to determine the discount applied to unsold cars, it is clear from these statistics that this strategy is riskier than the bounded discount rate strategy. Table 4 compares the two strategies.

Comparison metrics | Untruncated strategy | Truncated strategy
Maximum net loss | −$28,055,462 | −$5,795,584
Maximum net profit | $17,279,472 | $15,496,276
Expected profit | $5,286,830 | $7,323,010
Standard deviation | $4,551,470 | $3,224,254

Table 4.

Comparison of the two strategies (unbounded and bounded).

Observe from Table 4 that the bounded/truncated strategy outperforms the untruncated strategy on the key metrics of maximum loss ($5,795,584 vs. $28,055,462) and expected profit ($7,323,010 vs. $5,286,830). On the other hand, the unbounded strategy outperforms the bounded strategy in terms of maximum net profit ($17,279,472 vs. $15,496,276). The unbounded/untruncated strategy also carries more risk than the bounded/truncated strategy; using the standard deviation as a measure of risk, we see $3,224,254 for the truncated strategy versus $4,551,470 for the untruncated strategy. The next step is to identify our key inputs using sensitivity analysis. Sensitivity analysis studies how the various sources of uncertainty in a model contribute to the model's overall uncertainty or volatility. Our primary goal is to identify the most 'critical' inputs, those to concentrate on most when making decisions. We use the regression coefficient, regression mapped values, correlation and contribution-to-variance tornado charts to explore which inputs to prioritize in our decision-making. The results are shown in Figures 3-6.
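The regression-coefficient tornado can be approximated by regressing the standardized output on the standardized inputs of a simulated sample. The model, distributions and parameters below are hypothetical stand-ins for the chapter's @Risk model:

```python
import numpy as np

rng = np.random.default_rng(seed=5)
n = 50_000

# Illustrative uncertain inputs (not Table 2's actual parameters).
demand = rng.triangular(2_500, 3_000, 3_400, size=n)
unit_cost = rng.triangular(17_000, 18_000, 20_000, size=n)
discount = rng.lognormal(np.log(0.15), 0.25, size=n)
price = 21_000

# Toy profit model: the discount erodes a fraction of revenue.
profit = demand * (price * (1 - 0.2 * discount) - unit_cost)

# Standardised regression coefficients: regress the z-scored output on
# the z-scored inputs; coefficient magnitudes rank the inputs (tornado).
X = np.column_stack([demand, unit_cost, discount])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (profit - profit.mean()) / profit.std()
coefs, *_ = np.linalg.lstsq(Xz, yz, rcond=None)

for name, c in zip(["demand", "unit_cost", "discount"], coefs):
    print(f"{name:>10}: {c:+.3f}")
```

Interpreting the output follows the chapter's footnote: the sign gives the direction of the effect on net profit, and the magnitude gives its relative importance.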

Figure 3.

Sensitivity tornado graph (Estimated Regression Coefficient).

Figure 4.

Regression mapped values.

Figure 5.

Correlation analysis.

Figure 6.

Contribution to variance.

We will discuss the sensitivity graphs from strategy one (untruncated discount rate) for brevity. The tornado graphs allow us to analyse how the inputs in our decision model drive the variations/behaviour of our output variables (in this case, the net profit). Variables with the most considerable impact on the output distribution have the longest bars in the graph.

Looking at the regression coefficients (Figure 3), we see that an increase in the franchise fee has the biggest impact on net profit: if the franchise fee increases by 1%, net profit decreases by 0.47%. The demand for the hatchback model is another variable with a large impact on net profit. If that demand increases by 1%, net profit increases by 0.44%, while a similar increase in demand for the saloon model increases profit by 0.29%6. Increasing the discount rates for the hatchback and saloon models by one per cent reduces net profit by 0.38% and 0.27%, respectively. The company should also pay attention to the unit costs of both models: a one per cent increase in the unit cost of saloon and hatchback cars reduces net profit by 0.35% and 0.32%, respectively.

In monetary terms, the firm may lose up to $2,163,404.38 for a 1% increase in the franchise fee while holding all the other variables constant - see Figure 4 below. Similarly, an increase in the demand for the hatchback model will lead to about a $2,023,527.94 increase in the net profit. In addition, increasing the discount rates will result in a reduction in profit. For the hatchback cars, increasing the discount rate by 1% will lead to about $1,736,223.46 loss in profit; a similar increase in the discount rate for the saloon cars will lead to a $1,226,373.03 loss in profit. Based on these pieces of information, the firm can make an informed decision while taking its risk appetite and equity capital into consideration.

Understanding the interdependency of the input variables in the model is very important. Correlation measures the extent to which two variables vary together, including the strength and direction of their relationship, and exploring it is an essential part of exploratory data analysis. A high correlation indicates a strong relationship; a weak correlation indicates that the variables are not closely related. Figure 5 reveals that the franchise-fee increase, the unit costs and the discount rates are negatively correlated with net profit, while demand is positively correlated with it.

We next look at the contribution-to-variance tornado graph (Figure 6), which allows us to explore how much each input variable contributes to changes in the output variable. Again, we see that changes in the franchise fee account for 22.7% of the variation in the output variable, while the demand for hatchback cars accounts for 19.2%, amongst others.
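For independent inputs, each input's contribution to variance can be approximated by its squared correlation with the output, normalized to sum to 100%. The toy linear model below is purely illustrative; the variable names only echo the chapter's inputs:

```python
import numpy as np

rng = np.random.default_rng(seed=11)
n = 50_000

# Hypothetical independent input shocks, standard normal for simplicity.
x1 = rng.normal(0, 1.0, n)   # e.g. franchise-fee shock
x2 = rng.normal(0, 1.0, n)   # e.g. hatchback-demand shock
x3 = rng.normal(0, 1.0, n)   # e.g. discount-rate shock
profit = -3.0 * x1 + 2.5 * x2 - 1.0 * x3

# Squared correlation with the output approximates each input's share
# of the output variance when the inputs are independent.
contrib = np.array([np.corrcoef(x, profit)[0, 1] ** 2 for x in (x1, x2, x3)])
contrib /= contrib.sum()     # normalise shares to 100%

for name, c in zip(["fee", "demand_hb", "discount_hb"], contrib):
    print(f"{name:>12}: {c:.1%}")
```

With the assumed coefficients, the shares come out proportional to 9 : 6.25 : 1, so the "fee" shock dominates the variance, mirroring how the chapter's Figure 6 ranks the franchise fee first.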

Finally, we look at scenario analysis. We are particularly interested in examining a combination of input variables that contribute significantly towards reaching a specific goal, also referred to as the target scenario associated with the output values. In this case, we want to examine the combination of input variables that could result in the following:

  1. Achieve a net profit above the 75th percentile, i.e., $8,624,840.69 (the 75th percentile of the net-profit distribution).

  2. A second scenario is a net loss or no profit, that is, the worst-case scenario.

  3. The final scenario achieves a net profit above the 90th percentile, i.e., $10,777,718.05 (the 90th percentile of the net-profit distribution).

The results of the three scenario analyses are reported in Figures 7-9 below.

Figure 7.

Net profit at the 75th percentile.

Figure 8.

Net loss (Worst-Case Scenario).

Figure 9.

Net profit at the 90th percentile.

Our first desired scenario explores the possibility of a net profit in the 75th percentile7. In the net-profit probability distribution, the 75th percentile is $8,624,840.69. The scenario analysis identifies the input combinations that have the most significant effect in achieving this net profit: looking at Figure 7 above, the strategy that would allow the firm to reach the desired scenario is to increase the demand for hatchback cars at full price by 0.54% and reduce the unit cost of saloon cars by 0.50%. Given that the unit cost of the saloon cars is outside the firm's control, the company may try to negotiate a discount with the manufacturer by increasing the number of vehicles ordered.
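Scenario analysis of this kind can be emulated by conditioning the simulated sample on the target event and comparing subset medians with overall medians; inputs whose medians shift the most are the ones that drive the scenario. The inputs and profit model below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(seed=13)
n = 100_000

# Hypothetical uncertain inputs driving a simulated net profit.
demand = rng.triangular(2_500, 3_000, 3_400, size=n)
unit_cost = rng.triangular(17_000, 18_000, 20_000, size=n)
profit = demand * (21_000 - unit_cost)

# Target scenario: iterations where profit exceeds its 75th percentile.
target = profit > np.percentile(profit, 75)

# Median shift of each input inside the target subset, in sd units.
shifts = {}
for name, x in [("demand", demand), ("unit_cost", unit_cost)]:
    shifts[name] = (np.median(x[target]) - np.median(x)) / x.std()
    print(f"{name:>10}: median shift = {shifts[name]:+.2f} sd")
```

A positive shift means the scenario is reached when that input runs high (demand), a negative shift when it runs low (unit cost), which is the information a tool like @Risk summarizes in its scenario reports.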

Figure 8 below provides the second scenario, which is the worst-case scenario. Observe that increasing the franchise fee by 2.3%, a drop in demand for the hatchback of 0.66%, and an increase in the discount rate applied to leftover hatchbacks will result in a loss. We are always concerned with the worst-case scenario in risk and decision analytics. At this stage, the firm needs to decide whether to continue with this franchise or look for alternative manufacturers. Nevertheless, they must block any attempt to increase the franchise fee by more than 2%.

The final scenario, shown in Figure 9 below, is the best case and achieves a net profit in the 90th percentile. To reach this target, the firm would have to increase the demand for hatchbacks and saloons by 0.85% and 0.60%, respectively, while negotiating discounts on the saloons and hatchbacks of 0.70% and 0.60%, respectively. Using probabilistic Monte Carlo methods as an exploratory decision-making tool can improve the decision-maker's understanding of the significant business value drivers. The method highlights the most relevant uncertain input variables and the sensitivity of the variable of interest to each of them. As you can see, much of the value in probabilistic modelling comes from its role in structuring a more constructive management discussion.


5. Conclusion

This chapter presents an application of probabilistic modelling using Monte Carlo simulation methods for strategic business decisions. We examined two decision strategies and evaluated the effects of the critical uncertain input variables on the outcome variables. What is apparent from the discussion is that making big strategic business decisions from the ‘gut’ or using deterministic models could be problematic. This is because deterministic models do not allow us to envision all possible scenarios that could affect our decisions and account for uncertainties in our business decisions. However, the expectation that the likelihood of all possible outcomes in a complex dynamic business problem can be accurately estimated using probabilistic models is unrealistic. Instead, probabilistic modelling using the Monte Carlo simulation method helps to combine all available insights about the relevant uncertainties and their impact on the variable of interest. Probabilistic modelling offers essential support to making risk-informed decisions if used appropriately.

References

  1. Alon N, Spencer JH. The Probabilistic Method. 2nd ed. New York: Wiley-Interscience; 2000. ISBN 0-471-37046-0
  2. Billingsley P. Probability and Measure. 3rd ed. New York: Wiley; 1995
  3. Florescu I. Probability and Stochastic Processes. John Wiley & Sons; 2014
  4. Xue J, Zartarian VG, Özkaynak H, Dang W, Glen G, Smith L, et al. A probabilistic exposure assessment for children who contact chromated copper arsenate (CCA)-treated playsets and decks, Part 2: Sensitivity and uncertainty analyses. Risk Analysis. 2006;26:533-541
  5. Sin G, Espuña A. Editorial: Applications of Monte Carlo method in chemical, biochemical and environmental engineering. Frontiers in Energy Research. 2020;8:68. DOI: 10.3389/fenrg.2020.00068
  6. Johansen AM. International Encyclopedia of Education. 3rd ed.; 2010
  7. Ruan K. Digital Asset Valuation and Cyber Risk Measurement; 2019. pp. 75-86. DOI: 10.1016/B978-0-12-812158-0.00004-1
  8. Aitchison J, Brown JAC. The Lognormal Distribution, with Special Reference to Its Use in Economics. New York: Cambridge University Press; 1957
  9. Crow EL, Shimizu K, editors. Lognormal Distributions: Theory and Applications. New York: Dekker; 1988
  10. Rubinstein M. Implied binomial trees. Journal of Finance. 1994;49(3):771-818
  11. Rosenkrantz WA. Why stock prices have a lognormal distribution. University of Massachusetts Amherst; 2003. Available from: http://janroman.dhis.org/finance/General/log_normal_notes.pdf [Accessed: 20 May 2022]
  12. Artzrouni M. Mathematical demography. In: Encyclopedia of Social Measurement; 2005. pp. 641-651
  13. Harvey AC, Trimbur TM. General model-based filters for extracting trends and cycles in economic time series. Review of Economics and Statistics. 2003;85(2):244-255. DOI: 10.1162/003465303765299774

Notes

  • These two parameters should not be mistaken for the mean or standard deviation from a normal distribution. When the data is transformed using natural logarithms, the mean is the mean of the transformed data, and the standard deviation is the standard deviation of the transformed data.
  • The central limit theorem states that the sum of a large number of independent random variables with finite variances is approximately normally distributed.
  • The variables of interest in this example are the total revenue and the profit.
  • Business cycles are intervals of expansion (economic booms) followed by a recession (burst) in economic activity [13]. Business cycles have implications for the welfare of the wider population and businesses.
  • We used the Palisade @Risk software for the probabilistic modeling.
  • The key to interpreting the graphs is to consider both the signs (positive or negative) and the magnitude of the estimated coefficients.
  • Percentiles indicate the percentage of values that fall below a particular value. They tell you where a value stands relative to other values. The general rule is that if value Y is at the Nth percentile, then Y is greater than N% of the values.
