Open access peer-reviewed chapter

The Application of Markov and Semi-Markov Models in Transportation Infrastructure Management

Written By

Omar Thomas and John Sobanjo

Submitted: 13 December 2022 Reviewed: 28 January 2023 Published: 20 March 2023

DOI: 10.5772/intechopen.1001134

From the Edited Volume

Markov Model - Theory and Applications

Hammad Khalil and Abuzar Ghaffari


Abstract

This chapter outlines how Markov and semi-Markov models can be used in Transportation Infrastructure Management Systems, in particular Pavement and Bridge Management Systems. Markov models have been used in both Pavement and Bridge Management Systems for years, and more recently semi-Markov models have been introduced in Bridge Management Systems. Research has shown that if enough data are available to develop semi-Markov models for transportation infrastructure, then this stochastic technique can be used to predict future network-level conditions and to develop preservation models for transportation infrastructure. The application of these techniques is not limited to transportation infrastructure; they can also be applied in other areas.

Keywords

  • bridge management
  • pavement management
  • bridge deterioration
  • pavement deterioration
  • preservation model

1. Introduction

This chapter demonstrates how Markov and semi-Markov models are used in Transportation Infrastructure Management. Transportation agencies are interested in modeling the deterioration of their infrastructure when no action is taken to maintain, repair, or rehabilitate it. Agencies also want to know how best to estimate the extension in the service life of an infrastructure when improvement actions (maintenance, repair, or rehabilitation) are performed. The chapter uses examples containing real-life data to demonstrate models for the following actions: deterioration or ‘do-nothing’ actions, improvement actions, and rehabilitation.

Markov chains have been used to model the performance of pavements in Pavement Management Systems (PMSs), such as the Arizona Department of Transportation (ADOT) Network Optimization System (NOS) [1, 2]. Wang et al. [2] outlined an approach to compute the transition probabilities of a Markov chain model using pavement performance data. Nasseri et al. [3] used Markov chain applications to investigate the crack histories of flexible pavements and to deduce the cause of rapid deterioration of surface cracks. Markov chains can also be used to model the deterioration of other infrastructure, including bridge elements, storm water pipes, and wastewater pipes [4, 5, 6]. Yang et al. [7, 8] mentioned the use of semi-Markov processes for modeling the crack performance of flexible pavements, as a precursor to outlining how recurrent Markov chains can be used to model crack performance of flexible pavements. Semi-Markov processes are used in deterioration models for other assets such as bridge elements and transformers [9, 10, 11, 12]. Although Sections 2 and 3 of this chapter use pavement condition data to describe the application of stochastic processes for ‘do-nothing’ actions, the techniques can be applied to other transportation infrastructure, such as bridge elements [13].


2. Markov chain model

This section demonstrates a Markov chain model for flexible (asphalt) road pavement. The stochastic process known as a Markov chain [14] can be described as follows: if $X_n = i$ denotes that the process is in state $i$ at time $n$, and from state $i$ the process has a fixed probability $P_{i,j}$ of next being in state $j$, then

$$P\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0\} = P_{i,j} \tag{1}$$

for all states $i_0, i_1, \ldots, i_{n-1}, i, j$ and all $n \ge 0$.

Consider the ranges of crack indices associated with flexible (asphalt) road pavement that have been assigned the respective condition states in Table 1. The following equation, used by Wang et al. [2], can be used to generate the transition probabilities for a Markov chain model:

Crack Index (CRK) range      Condition state
9.5 ≤ CRK ≤ 10               10
8.5 ≤ CRK < 9.5              9
7.5 ≤ CRK < 8.5              8
6.5 ≤ CRK < 7.5              7
5.5 ≤ CRK < 6.5              6
4.5 ≤ CRK < 5.5              5
CRK < 4.5                    4

Table 1.

The range of crack indices and corresponding condition states.

$$p_{i,j}(a_k) = \frac{m_{i,j}(a_k)}{m_i(a_k)} \tag{2}$$

for $i, j = 10, 9, 8, 7, 6, 5, 4$, where

  • $k$ = the $k$th rehabilitation action; in this case, the ‘do-nothing’ action, i.e. $k = 1$.

  • $p_{i,j}(a_k)$ = transition probability from state $i$ to state $j$ after action $k$ is taken.

  • $m_{i,j}(a_k)$ = total number of miles of pavement for which the state prior to action $k$ was $i$ and the state after action $k$ was $j$.

  • $m_i(a_k)$ = total number of miles of pavement for which the state prior to action $k$ was $i$.
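As an illustrative sketch of Eq. (2), the transition probabilities can be computed by accumulating mileage by before/after state pairs. The function and the mileage records below are hypothetical, not taken from the chapter's dataset:

```python
from collections import defaultdict

def transition_probabilities(records):
    """Estimate Markov transition probabilities per Eq. (2).

    records: iterable of (miles, state_before, state_after) tuples for a
    single action k (here the 'do-nothing' action, k = 1).
    Returns {(i, j): p_ij}.
    """
    miles_ij = defaultdict(float)   # m_ij(a_k): miles going from i to j
    miles_i = defaultdict(float)    # m_i(a_k): total miles starting in i
    for miles, i, j in records:
        miles_ij[(i, j)] += miles
        miles_i[i] += miles
    return {(i, j): m / miles_i[i] for (i, j), m in miles_ij.items()}

# Hypothetical mileage data: (miles, state before, state after one year)
data = [(4.0, 10, 10), (0.5, 10, 9), (0.5, 10, 8), (2.0, 9, 9), (1.0, 9, 8)]
p = transition_probabilities(data)   # e.g. p[(10, 10)] = 4.0 / 5.0 = 0.8
```

By construction the probabilities out of each state sum to 1, as required of a transition matrix row.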


3. Semi-Markov model

This section demonstrates a semi-Markov model for flexible (asphalt) road pavement. Consider a stochastic process having states $0, 1, 2, \ldots$, which is such that whenever it enters state $i$ ($i \ge 0$): (1) it will enter the next state $j$ with probability $P_{ij}$ ($i, j \ge 0$), and (2) given that the next state is $j$, the sojourn time from $i$ to $j$ has distribution $F_{ij}$. For a semi-Markov process, the sojourn times may follow a specific distribution, and the method of Maximum Likelihood Estimation (MLE) can be used to estimate the parameters of that distribution, such as that of a Weibull distribution [13]. Before discussing how the semi-Markov process can be applied to model deterioration, it is beneficial to define the basic concepts of the MLE method. Maximum likelihood is a well-known technique in statistics for deriving estimators [15].

3.1 Maximum likelihood estimation

If there is an independent and identically distributed (iid) sample $X_1, \ldots, X_n$ from a population with probability density function (pdf) or probability mass function (pmf) $f(x \mid \theta_1, \ldots, \theta_k)$, then the likelihood function is defined by

$$L(\theta \mid \mathbf{x}) = L(\theta_1, \ldots, \theta_k \mid x_1, \ldots, x_n) = \prod_{i=1}^{n} f(x_i \mid \theta_1, \ldots, \theta_k) \tag{3}$$

Let us assume that the sojourn time follows a Weibull distribution. The pdf of the Weibull distribution, according to Billinton and Allan [16] and Tobias and Trindade [17], is defined by:

$$f(t) = \frac{\beta}{\alpha}\left(\frac{t}{\alpha}\right)^{\beta-1} e^{-(t/\alpha)^{\beta}} \tag{4}$$

where $\alpha$ and $\beta$ are the respective scale and shape parameters, and $t$ represents the number of years that each unit of a mile of pavement segment takes to sojourn in one condition state before transitioning to another state. If $\eta = 1/\alpha$, then

$$f(t) = \beta\eta(\eta t)^{\beta-1} e^{-(\eta t)^{\beta}} \tag{5}$$

Using Eq. (3) [18], it follows that the likelihood is:

$$L(t_1, \ldots, t_n \mid \eta, \beta) = \beta^{n}\eta^{\beta n}\, e^{-\eta^{\beta}\left(t_1^{\beta} + \cdots + t_n^{\beta}\right)} \prod_{i=1}^{n} t_i^{\beta-1} \tag{6}$$

After differentiating the log-likelihood and equating it to zero, the MLEs of the parameters $\hat{\beta}$ and $\hat{\eta}$ are:

$$\hat{\beta} = \left[\frac{\sum_{i=1}^{n} t_i^{\hat{\beta}} \ln t_i}{\sum_{i=1}^{n} t_i^{\hat{\beta}}} - \frac{1}{n}\sum_{i=1}^{n} \ln t_i\right]^{-1} \tag{7}$$

and

$$\hat{\eta} = \left[\frac{n}{\sum_{i=1}^{n} t_i^{\hat{\beta}}}\right]^{1/\hat{\beta}} \tag{8}$$

respectively.
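Eq. (7) defines $\hat{\beta}$ only implicitly, since $\hat{\beta}$ appears on both sides; a common way to solve it is fixed-point iteration. The following sketch (the sojourn times and the damped update are illustrative assumptions, not from the chapter) solves Eq. (7) and then Eq. (8):

```python
import math

def weibull_mle(times, tol=1e-10, max_iter=5000):
    """Solve Eq. (7) for beta-hat by damped fixed-point iteration,
    then Eq. (8) for eta-hat (complete, uncensored sojourn times)."""
    n = len(times)
    logs = [math.log(t) for t in times]
    mean_log = sum(logs) / n
    beta = 1.0                            # starting guess
    for _ in range(max_iter):
        tb = [t ** beta for t in times]
        ratio = sum(x * l for x, l in zip(tb, logs)) / sum(tb)
        beta_new = 1.0 / (ratio - mean_log)
        if abs(beta_new - beta) < tol:
            beta = beta_new
            break
        beta = 0.5 * (beta + beta_new)    # damping improves stability
    eta = (n / sum(t ** beta for t in times)) ** (1.0 / beta)
    return beta, eta

# Hypothetical sojourn times (years) of units in one condition state
times = [2.1, 3.4, 4.0, 5.2, 6.8, 7.5, 8.1, 9.0]
beta_hat, eta_hat = weibull_mle(times)
alpha_hat = 1.0 / eta_hat                 # scale parameter alpha = 1/eta
```

At convergence the returned $\hat{\beta}$ satisfies Eq. (7) to within the chosen tolerance.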

If there are $n$ units of pavement in a particular condition state, of which $k$ units have transitioned to a lower condition state (with complete individual sojourn times $t_1 < t_2 < \cdots < t_k$) and the sojourn times of the remaining $n-k$ units are not known, then the sojourn times $T_1, \ldots, T_{n-k}$ for those $n-k$ units should also be accounted for in the evaluation [13]. For the Weibull distribution, if it is assumed that the incomplete sojourn times $T_1, \ldots, T_{n-k}$ have been observed in addition to the complete sojourn times $t_1, \ldots, t_k$, then the likelihood function can be expressed as:

$$L = \prod_{i=1}^{k} f(t_i \mid \theta) \prod_{j=1}^{n-k} \left[1 - F(T_j \mid \theta)\right] \tag{9}$$

where $i$ runs over all complete sojourn times, $j$ runs over all incomplete sojourn times, and $\theta$ can be a vector [13]. Hence,

$$L(t_1, \ldots, t_n \mid \eta, \beta) = \beta^{k}\eta^{\beta k}\, e^{-\eta^{\beta}\left(t_1^{\beta} + \cdots + t_k^{\beta}\right)} \prod_{i=1}^{k} t_i^{\beta-1} \prod_{j=1}^{n-k} e^{-\left(\eta T_j\right)^{\beta}} \tag{10}$$

producing the MLEs of the parameters $\hat{\beta}$ and $\hat{\eta}$:

$$\hat{\beta} = \left[\frac{\sum_{i=1}^{k} t_i^{\hat{\beta}} \ln t_i + \sum_{j=1}^{n-k} T_j^{\hat{\beta}} \ln T_j}{\sum_{i=1}^{k} t_i^{\hat{\beta}} + \sum_{j=1}^{n-k} T_j^{\hat{\beta}}} - \frac{1}{k}\sum_{i=1}^{k} \ln t_i\right]^{-1} \tag{11}$$

and

$$\hat{\eta} = \left[\frac{k}{\sum_{i=1}^{k} t_i^{\hat{\beta}} + \sum_{j=1}^{n-k} T_j^{\hat{\beta}}}\right]^{1/\hat{\beta}} \tag{12}$$
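Eqs. (11) and (12) can likewise be solved by damped fixed-point iteration; in the sketch below, the four complete and three right-censored sojourn times are hypothetical:

```python
import math

def weibull_mle_censored(complete, censored, tol=1e-10, max_iter=5000):
    """Damped fixed-point iteration for Eq. (11), then Eq. (12), with
    complete sojourn times t_i and right-censored times T_j."""
    k = len(complete)
    mean_log = sum(math.log(t) for t in complete) / k
    beta = 1.0
    for _ in range(max_iter):
        tb = [t ** beta for t in complete]
        Tb = [T ** beta for T in censored]
        num = (sum(x * math.log(t) for x, t in zip(tb, complete))
               + sum(X * math.log(T) for X, T in zip(Tb, censored)))
        beta_new = 1.0 / (num / (sum(tb) + sum(Tb)) - mean_log)
        if abs(beta_new - beta) < tol:
            beta = beta_new
            break
        beta = 0.5 * (beta + beta_new)    # damped update for stability
    eta = (k / (sum(t ** beta for t in complete)
                + sum(T ** beta for T in censored))) ** (1.0 / beta)
    return beta, 1.0 / eta                # (shape beta-hat, scale alpha-hat)

# Hypothetical data: 4 complete and 3 right-censored sojourn times (years)
beta_hat, alpha_hat = weibull_mle_censored([3.0, 5.0, 6.5, 8.0], [4.0, 7.0, 9.0])
```

Note that only the $k$ complete observations appear in the log-likelihood terms $\frac{1}{k}\sum \ln t_i$ and in the numerator of Eq. (12), while the censored times enter only through the sums of $T_j^{\hat{\beta}}$.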

3.2 Semi-Markov Kernel

To demonstrate how the semi-Markov model can be developed from the sojourn times in a particular condition state before transitioning, one may consider the semi-Markov kernel in the form shown in Eq. (13). Ibe [19] defines the one-step transition probability $Q_{i,j}(t)$ of the semi-Markov process as:

$$Q_{i,j}(t) = P\{X_{n+1} = j,\; G_n \le t \mid X_n = i\}, \quad t \ge 0 \tag{13}$$

where $Q_{i,j}(t)$ is the conditional probability that the process will be in state $j$ next, given that it is in state $i$ currently and that the waiting time in the current state $i$ is no more than $t$; $G_n$ is the time the process spends in state $i$ before transitioning to $j$. It also follows that:

$$Q_{i,j}(t) = p_{i,j}\, H_{i,j}(t) \tag{14}$$

where $p_{i,j}$ is defined as the transition probability of the embedded Markov chain, and

$$H_{i,j}(t) = P\{G_n \le t \mid X_n = i,\; X_{n+1} = j\} \tag{15}$$
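As a small numerical illustration of Eqs. (13)–(15), with all values hypothetical and a Weibull sojourn-time CDF assumed for $H_{i,j}$, as in the rest of this section:

```python
import math

def weibull_cdf(t, alpha, beta):
    """H(t) = 1 - exp(-(t/alpha)^beta): Weibull sojourn-time CDF."""
    return 1.0 - math.exp(-((t / alpha) ** beta))

def kernel(p_ij, t, alpha, beta):
    """One-step semi-Markov kernel of Eq. (14): Q_ij(t) = p_ij * H_ij(t)."""
    return p_ij * weibull_cdf(t, alpha, beta)

# Hypothetical values: embedded-chain probability 0.7, Weibull(9.4, 2.1) sojourn
q5 = kernel(0.7, 5.0, 9.4, 2.1)   # P(next state is j and sojourn <= 5 years)
```

As $t \to \infty$, $H_{i,j}(t) \to 1$ and the kernel approaches the embedded-chain probability $p_{i,j}$.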

3.2.1 Semi-Markov process

Howard [20] provided the following formulation to determine the probability that a continuous-time semi-Markov process will be in state $j$ at time $n$ given that it entered state $i$ at time zero:

$$\phi_{ij}(n) = \delta_{ij}\,{}^{>}w_i(n) + \sum_{k=1}^{N} p_{ik} \sum_{m=0}^{n} h_{ik}(m)\,\phi_{kj}(n-m), \quad i, j = 1, 2, \ldots, N;\; n = 0, 1, 2, \ldots \tag{16}$$

$$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases} \tag{17}$$

where $\phi_{ij}(n)$ is the probability that the process will be in state $j$ at time $n$ given that it entered state $i$ at time $n = 0$, referred to as the interval transition probability from state $i$ to state $j$ in the interval $(0, n)$, and ${}^{>}w_i(n)$ is the probability that the process will exit its starting state $i$ at a time greater than $n$.

The second term in Eq. (16) describes the probability of the sequence of events in which the process makes an initial transition from state $i$ to some state $k$ at some time $m$ and thereafter proceeds from state $k$ to state $j$ in the remaining time $n - m$. To account for all possible scenarios, the probability is summed over all states $k$ to which the first transition could have been made and over all times of the initial transition, $m$, between 0 and $n$; $p_{ik}$ is the probability of transitioning from $i$ to $k$, and $h_{ik}(m)$ represents the probability distribution of the sojourn time from $i$ to $k$ at time $m$. The matrix formulation of Eq. (16) can be expressed as:

$$\Phi(n) = {}^{>}W(n) + \sum_{m=0}^{n} \left[P \circ H(m)\right]\Phi(n-m), \quad n = 0, 1, 2, \ldots \tag{18}$$

where $\circ$ denotes element-by-element matrix multiplication.

In addition, let

$$C(m) = P \circ H(m) \tag{19}$$

where $C(m)$ is defined as the core matrix [20]. The elements of $C(m)$ are $c_{ij}(m) = p_{ij}\, h_{ij}(m)$, where $p_{ij}$ represents the transition probability of the embedded Markov chain and $h_{ij}(m)$ represents the probability distribution of the sojourn time in state $i$ before transitioning to $j$ at time $m$.

As a result, the interval transition matrix representing a single transition at time $m$, $\Phi_{0,m}(m)$, for ‘do-nothing’ actions can be expressed as:

$$\Phi_{0,m}(m) = \begin{bmatrix} 1-\sum_{j=4}^{9} p_{10,j}H_{10,j}(m) & p_{10,9}H_{10,9}(m) & \cdots & p_{10,5}H_{10,5}(m) & p_{10,4}H_{10,4}(m) \\ 0 & 1-\sum_{j=4}^{8} p_{9,j}H_{9,j}(m) & \cdots & p_{9,5}H_{9,5}(m) & p_{9,4}H_{9,4}(m) \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1-p_{5,4}H_{5,4}(m) & p_{5,4}H_{5,4}(m) \\ 0 & 0 & \cdots & 0 & 1 \end{bmatrix} \tag{20}$$

where rows and columns correspond to condition states $10, 9, \ldots, 4$, $p_{ij}$ represents the probability for the embedded Markov chain of the semi-Markov process, and $H_{ij}$ represents the cumulative distribution of the sojourn time between condition state $i$ and condition state $j$ at time $m$.

It can be seen from Eq. (16) that as the number of years, $m$, increases, a growing number of permutations must be considered when computing the overall transition probabilities for an interval $(0, n)$, and so another approach is suggested to model the overall transition over time. In the context of modeling asset deterioration, the focus is on ‘do-nothing’ scenarios, in which no action is taken on the infrastructure. It can therefore be assumed that once the infrastructure exits a condition state, that condition state is not visited again, and the semi-Markov process moves in one direction only. There is also the possibility of the condition of some infrastructure, in this case the pavement segments, ‘skipping’ condition states in a single transition; however, the probability of ‘skipping’ two (2) or more condition states is extremely small and therefore need not be considered. If it is assumed that only one condition state may be ‘skipped’ in a single transition, then the transition probability (for the embedded Markov chain) of ‘skipping’ a condition state $k$, $p_{i,j}$, is one minus the transition probability (for the embedded Markov chain) of transitioning to the next state, $p_{i,k}$. Therefore,

$$p_{i,j} = 1 - p_{i,k}, \quad i = 10, 9, 8, 7, 6, 5;\; j = i-2,\; k = i-1, \text{ with condition states bounded below at 4} \tag{21}$$

This is because the total probability of eventually leaving a particular condition state (which is not a terminal state) for a lower condition state is 1, since deterioration is being considered.

Another assumption is that the sojourn time in condition state $i$ before transitioning to $j$ is the time from when the pavement segment first entered condition state $i$ to when it first entered condition state $j$. The transition diagram in Figure 1 shows the possible ‘single step’ transitions in developing the semi-Markov model, where $p_{ij}$ is the probability for the embedded Markov chain from condition state $i$ to condition state $j$, and $h_{ij}(m)$ is the probability density function of the sojourn time between condition state $i$ and condition state $j$ at time $m$. The term ‘single step’ transition is based on the assumption that only a single transition can take place in a year. For the transition labeled $p_{10,9}h_{10,9}(m)$, the pavement segment spends some time in condition state 10 before transitioning to condition state 9, and for the transition labeled $p_{10,8}h_{10,8}(m)$, the pavement segment spends some time in condition state 10 before transitioning to condition state 8 without passing through 9 [13].

Figure 1.

Transition diagram that represents the possible ‘single step’ transitions between condition states.

Rather than determining the interval transition probabilities for the interval $(0, n)$ directly, the conditional transition probabilities for each yearly interval $m = 1, 2, \ldots, n$ are determined and multiplied by each other to estimate the transition probabilities for the interval $(0, n)$. It is also assumed that only a single transition takes place in a year. In other words, let

$$\Phi(0,n) = \Phi(0,1)\,\Phi(1,2)\cdots\Phi(n-1,n) \tag{22}$$

where $\Phi(m-1, m)$ is a 1-year ‘single’ transition probability matrix for the time $m-1$ to $m$ (i.e., the $m$th interval), $m = 1, 2, \ldots, n$. Based on Eq. (20), if the condition states can drop by either one (1) or two (2) states, then the interval transition probability matrix for the first year is:

$$\Phi_{0,m}(m) = \begin{bmatrix} 1-\sum_{j=8}^{9} p_{10,j}H_{10,j}(m) & p_{10,9}H_{10,9}(m) & p_{10,8}H_{10,8}(m) & 0 & \cdots & 0 \\ 0 & 1-\sum_{j=7}^{8} p_{9,j}H_{9,j}(m) & p_{9,8}H_{9,8}(m) & p_{9,7}H_{9,7}(m) & \cdots & 0 \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & \cdots & 0 & 0 & 1-p_{5,4}H_{5,4}(m) & p_{5,4}H_{5,4}(m) \\ 0 & \cdots & 0 & 0 & 0 & 1 \end{bmatrix} \tag{23}$$

where $m = 1$. To determine the respective transition probability matrices for the subsequent intervals, a different formulation is used, in which the sojourn time is assumed to be left-truncated at the start of each interval.

3.2.2 Service life analysis and left truncation

Statistical analyses of the service lives of transportation infrastructure can be done using reliability theory, otherwise called survival analysis. In survival analysis, subjects are sometimes selected and followed prospectively until the event or censoring occurs, but the start time at the point of selection is not always $t = 0$ (i.e., not at ‘birth’) but some value $t = t_0 > 0$. The life or censoring time $T_i$ of such subjects is then greater than $t_0$ [21, 22], and the life $T_i$ is considered to be left-truncated at $t_0$. Applying the same principle to the service life of a transportation infrastructure:

$$F_{T \mid T > t_0}(t) = \begin{cases} 0 & t_0 \ge t \\ \dfrac{F_T(t) - F_T(t_0)}{1 - F_T(t_0)} & t_0 < t \end{cases} \tag{24}$$

To determine the probabilities associated with the sojourn time for the interval $(1, 2]$, we set $t_0 = 1$ in Eq. (24) and consider $1 < t \le 2$. At this point only sojourn times greater than $t = 1$ are being considered, and the cumulative distribution of the sojourn time in the interval can be considered truncated [22]. It can therefore be described as:

$$H_{i,j}^{T \mid T>1}(t) = \frac{H_{i,j}(t) - H_{i,j}(1)}{1 - H_{i,j}(1)}, \quad 1 < t \le 2 \tag{25}$$

It follows that for the interval $(m-1, m]$ the cumulative distribution of the sojourn time in the interval can be described as:

$$H_{i,j}^{T \mid T>m-1}(t) = \frac{H_{i,j}(t) - H_{i,j}(m-1)}{1 - H_{i,j}(m-1)}, \quad m-1 < t \le m \tag{26}$$

At $t = m$ the cumulative distribution of the sojourn time then becomes:

$$H_{i,j}^{T \mid T>m-1}(m) = \frac{H_{i,j}(m) - H_{i,j}(m-1)}{1 - H_{i,j}(m-1)} \tag{27}$$

Therefore, the transition probability matrix for the interval $(m-1, m]$ is:

$$\Phi_{m-1,m}(m) = \begin{bmatrix} 1-\sum_{j=8}^{9} p_{10,j}H_{10,j}^{T\mid T>m-1}(m) & p_{10,9}H_{10,9}^{T\mid T>m-1}(m) & p_{10,8}H_{10,8}^{T\mid T>m-1}(m) & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & \cdots & 0 & 1-p_{5,4}H_{5,4}^{T\mid T>m-1}(m) & p_{5,4}H_{5,4}^{T\mid T>m-1}(m) \\ 0 & \cdots & 0 & 0 & 1 \end{bmatrix} \tag{28}$$

Eq. (27) has been used to determine the transition probabilities for one-step, state-based yearly transitions in the deterioration models of Black et al. [11, 12], in which the transition probability of the embedded Markov chain was assumed to be 1. The probability obtained using Eq. (27) can therefore be used to describe the probability that the sojourn ends by the close of the period, given that it had ‘survived’ up to the start of the period.
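Eq. (27) is straightforward to evaluate once a sojourn-time distribution is assumed. The sketch below uses a Weibull CDF with the parameters reported later for the 10-to-9 transition (Table 3); because the shape parameter exceeds 1, the conditional yearly transition probability grows the longer the segment has already sojourned in the state:

```python
import math

def weibull_cdf(t, alpha, beta):
    """H(t) = 1 - exp(-(t/alpha)^beta)."""
    return 1.0 - math.exp(-((t / alpha) ** beta))

def yearly_transition_prob(m, alpha, beta):
    """Eq. (27): P(sojourn ends by year m | sojourn exceeded year m-1)."""
    h_prev = weibull_cdf(m - 1, alpha, beta)
    return (weibull_cdf(m, alpha, beta) - h_prev) / (1.0 - h_prev)

# Weibull parameters reported later for the 10 -> 9 sojourn (Table 3)
probs = [yearly_transition_prob(m, 9.432, 2.128) for m in range(1, 6)]
```

These conditional probabilities are exactly the $H_{i,j}^{T\mid T>m-1}(m)$ entries needed for the yearly matrices of Eq. (28).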

3.2.3 Sojourn times of flexible pavement condition states

For developing the semi-Markov model, the number of miles for a particular segment can be rounded to the nearest one-tenth of a mile and the sojourn time distribution for each one-tenth unit of a mile of pavement segment is then analyzed. Figure 2 gives a schematic of how the pavement segment can be divided into one-tenth of a mile sub-sections. An algorithm can be created and used to help organize and analyze the data on the infrastructure to estimate the parameters of the sojourn time distributions in each condition state. The following outlines the steps:

  1. The length of each segment to the nearest one-tenth of a mile is determined.

  2. The yearly decreases in CRK for each segment subsequent to the year of new construction or overlay are extracted, such that a series of decreasing CRKs for each segment can be used to model ‘do-nothing’ actions on the pavement segments over time.

  3. The pavement segments are assigned to condition states over time, in accordance with Table 1, as outlined earlier.

  4. At some point in time, all the pavement segments tracked are either just becoming ‘new’ or have entered the current condition state from a higher or equal condition state. If a unit of pavement segment exits a particular condition state to a lower condition state at a known time, then the sojourn time of that unit of pavement segment in the current condition state is essentially known. However, if a unit of pavement segment is in a particular condition state and the tracking of the pavement ended because the condition state either increased or because the ‘study’ was terminated, then the sojourn time of that unit of pavement segment is not precisely known and is considered right-censored [23]. Figure 3 gives a representation of the change in the condition states of pavement segments over time, outlining examples of complete and censored times spent in particular condition states.

  5. Based on the complete and censored durations obtained from the data, the distribution of the sojourn time in condition state $i$ before transitioning to condition state $j$, $H_{i,j}(t)$, can be determined, where $j$ is either $i-1$ or $i-2$, $i = 10, 9, 8, 7, 6, 5$, and the lowest state is 4. The proportion of the number of units of the infrastructure that left condition state $i$ and transitioned to condition state $j$, to the total number of units that left condition state $i$ for any other condition state, gives $p_{i,j}$ for each condition state $i$. From Figure 3, one can see that segment 3 spends 6 years in condition state 10, has a one-state drop, then spends 7 years in condition state 9, and then has another one-state drop to condition state 8. Segment 3 then spends 1 year in condition state 8 before transitioning to condition state 7, where it spent at least 6 years. For segment 4 in Figure 3, the segment spends 10 years in condition state 10 before having a two-state drop to condition state 8, followed by a series of one-state drops for which the respective sojourn times can also be inferred.
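Steps 1–5 can be sketched in code; the helper below is a hypothetical illustration for a single unit of pavement observed yearly, emitting complete sojourn records when the condition state drops and right-censored records when the state improves or the study ends:

```python
def sojourn_times(history):
    """Extract (state_i, state_j, years, censored) records from a yearly
    condition-state history for one unit of pavement. A spell ending in a
    drop yields a complete sojourn; a spell ending in an improvement or at
    the end of the study is right-censored (state_j unknown -> None)."""
    records = []
    start = 0
    for year in range(1, len(history)):
        if history[year] < history[year - 1]:      # drop: complete sojourn
            records.append((history[start], history[year], year - start, False))
            start = year
        elif history[year] > history[year - 1]:    # improvement: censored
            records.append((history[start], None, year - start, True))
            start = year
    # the final spell is censored by the end of the study
    records.append((history[start], None, len(history) - start, True))
    return records

# Segment 3 of Figure 3 (hypothetical yearly states): 6 years in 10,
# 7 years in 9, 1 year in 8, then at least 6 years in 7
hist = [10] * 6 + [9] * 7 + [8] * 1 + [7] * 6
recs = sojourn_times(hist)
```

Pooling such records across all units gives the complete and censored samples needed for the censored-data MLE of Eqs. (11) and (12).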

Figure 2.

Schematic of pavement segment divisions into equal units of one-tenth of a mile.

Figure 3.

An example of the change in the condition states of pavement segments over time.

The maximum likelihood estimates of the scale ($\alpha$) and shape ($\beta$) parameters of the Weibull distribution, used to describe the sojourn time distributions, can then be computed. An algorithm can be written to determine the series of transition probability matrices based on the Weibull parameters ($\alpha$ and $\beta$ values) obtained for the sojourn times between condition states; the MLEs of the $\alpha$ and $\beta$ values serve as inputs to this algorithm. The transition probabilities according to Eqs. (23) and (28) can then be determined and used to simulate the survival curves and the expected deterioration over time.


4. Examples of transition probabilities associated with the Markov chain and semi-Markov models

4.1 Transition probabilities in a Markov chain model

An example of a set of transition probabilities for a Markov chain model is represented as a transition probability matrix, as shown in Eq. (29). The rows and columns of the matrix correspond to the condition states $i$ and $j$ respectively (ordered $10, 9, \ldots, 4$), where $\Pi_{ij}$ is the transition probability matrix used for the Markov chain model.

$$\Pi_{ij} = \begin{bmatrix} 0.905 & 0.072 & 0.017 & 0.006 & 0 & 0 & 0 \\ 0 & 0.737 & 0.157 & 0.090 & 0.016 & 0 & 0 \\ 0 & 0 & 0.660 & 0.274 & 0.042 & 0.014 & 0.010 \\ 0 & 0 & 0 & 0.707 & 0.188 & 0.086 & 0.019 \\ 0 & 0 & 0 & 0 & 0.724 & 0.112 & 0.164 \\ 0 & 0 & 0 & 0 & 0 & 0.582 & 0.418 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \tag{29}$$
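Given a matrix such as that in Eq. (29), the network-level condition distribution under repeated ‘do-nothing’ actions can be propagated year by year. A minimal sketch (states ordered 10 down to 4; the five-year horizon is an arbitrary choice):

```python
# Transition probability matrix of Eq. (29), rows/columns = states 10..4
P = [
    [0.905, 0.072, 0.017, 0.006, 0.0,   0.0,   0.0],
    [0.0,   0.737, 0.157, 0.090, 0.016, 0.0,   0.0],
    [0.0,   0.0,   0.660, 0.274, 0.042, 0.014, 0.010],
    [0.0,   0.0,   0.0,   0.707, 0.188, 0.086, 0.019],
    [0.0,   0.0,   0.0,   0.0,   0.724, 0.112, 0.164],
    [0.0,   0.0,   0.0,   0.0,   0.0,   0.582, 0.418],
    [0.0,   0.0,   0.0,   0.0,   0.0,   0.0,   1.0],
]

def step(dist, P):
    """One-year update of the condition-state distribution: d' = d P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0] + [0.0] * 6        # all pavement starts in state 10 (index 0)
for _ in range(5):              # network condition after 5 'do-nothing' years
    dist = step(dist, P)
```

Because the matrix is upper triangular, the share remaining in state 10 after $n$ years is simply $0.905^n$, and probability mass gradually accumulates in the absorbing terminal state 4.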

4.2 Transition probabilities in a semi-Markov model

An example of the transition probabilities of the embedded Markov chain of a semi-Markov process is shown for each transition in Table 2.

Transition i to j    Transition probability
10 to 9              0.707
9 to 8               0.752
8 to 7               0.645
7 to 6               0.468
6 to 5               0.214
5 to 4               1.000
10 to 8              0.293
9 to 7               0.248
8 to 6               0.355
7 to 5               0.532
6 to 4               0.786

Table 2.

Transition probabilities of the embedded Markov chain of the semi-Markov process.

Figures 47 show examples of the frequency distributions of the observed sojourn time for each unit mile before transition, including both the uncensored and right-censored sojourn times.

Figure 4.

Frequency of sojourn times in condition state 10 before transitioning to 9.

Figure 5.

Frequency of sojourn times in condition state 10 before transitioning to 8.

Figure 6.

Frequency of sojourn times in condition state 9 before transitioning to 8.

Figure 7.

Frequency of sojourn times in condition state 9 before transitioning to 7.

From Figures 4 and 5 it can be seen that more units of pavement transitioned completely from condition state 10 to 9 than from 10 to 8, which is expected. The total pavement length that is right-censored in Figures 4 and 5 is the same; this is because it is not known whether each censored unit of pavement would have transitioned from condition state 10 to 9 or from 10 to 8.

Based on Maximum Likelihood Estimation (MLE), examples of the Weibull distribution parameters, together with the 95% C.I. limits of the scale and shape parameters, are shown in Table 3, while Table 4 provides the means and standard deviations for each sojourn-time (Weibull) distribution. The standard deviations for the sojourn time in condition state 8 before transitioning to 6, and in condition state 7 before transitioning to 5, seem relatively high in comparison to the others, which is a function of the quantity of data representing each transition. As more data for each transition become available, the standard deviations associated with those transitions are expected to decrease. A goodness-of-fit test can also be performed on the distribution of the complete sojourn times.

Transition i to j    α̂        Lower    Upper     β̂       Lower    Upper
10 to 9              9.432     9.332    9.533     2.128    2.094    2.163
9 to 8               4.887     4.777    4.999     1.579    1.539    1.620
8 to 7               3.496     3.394    3.602     1.345    1.304    1.387
7 to 6               5.039     4.811    5.278     1.257    1.208    1.308
6 to 5               6.304     5.754    6.906     1.523    1.412    1.641
5 to 4               3.164     3.030    3.304     2.062    1.940    2.193
10 to 8              13.126    12.904   13.351    3.182    3.088    3.278
9 to 7               6.103     5.845    6.372     1.249    1.204    1.295
8 to 6               9.672     8.744    10.697    1.465    1.350    1.591
7 to 5               9.103     8.355    9.918     1.236    1.165    1.312
6 to 4               5.417     5.092    5.763     1.693    1.591    1.802

(Lower and Upper are the 95% C.I. limits for α̂ and β̂, respectively.)

Table 3.

Example of the maximum likelihood estimation of the scale (α) and shape (β) parameters for the holding time (Weibull) distributions in condition state i.

Transition i to j    Mean (years)    Standard deviation (years)
10 to 9              8.35            4.13
9 to 8               4.39            2.84
8 to 7               3.21            2.41
7 to 6               4.69            3.75
6 to 5               5.68            3.80
5 to 4               2.80            1.43
10 to 8              11.75           4.05
9 to 7               5.69            4.58
8 to 6               8.76            6.08
7 to 5               8.50            6.91
6 to 4               4.84            2.94

Table 4.

Means and standard deviations of sojourn times (Weibull).
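The entries of Table 4 follow from those of Table 3 through the standard Weibull moment formulas, $\text{mean} = \alpha\,\Gamma(1 + 1/\beta)$ and $\text{var} = \alpha^2\left[\Gamma(1 + 2/\beta) - \Gamma(1 + 1/\beta)^2\right]$. For example, for the 10-to-9 transition:

```python
import math

def weibull_mean_std(alpha, beta):
    """Mean and standard deviation of a Weibull(alpha, beta) sojourn time."""
    g1 = math.gamma(1.0 + 1.0 / beta)
    g2 = math.gamma(1.0 + 2.0 / beta)
    mean = alpha * g1
    std = alpha * math.sqrt(g2 - g1 * g1)
    return mean, std

# Parameters for the 10 -> 9 transition from Table 3
mean, std = weibull_mean_std(9.432, 2.128)   # ~8.35 and ~4.13 years (Table 4)
```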

As an example, the transition probability matrices of the first 5 years for the semi-Markov model are shown in Eqs. (30)–(34), with rows and columns ordered by condition state from 10 down to 4. In Eq. (30), $\phi_{10,10} = 0.991$ means that there is a 0.991 probability that the pavement segment remains in condition state 10. The other $\phi_{i,j}$ terms represent the probability of transition from $i$ to $j$ in the given year.

Year 1

$$\Phi_{ij}(1) = \begin{bmatrix} 0.991 & 0.008 & 0.001 & 0 & 0 & 0 & 0 \\ 0 & 0.822 & 0.078 & 0.100 & 0 & 0 & 0 \\ 0 & 0 & 0.795 & 0.170 & 0.035 & 0 & 0 \\ 0 & 0 & 0 & 0.814 & 0.123 & 0.063 & 0 \\ 0 & 0 & 0 & 0 & 0.886 & 0.059 & 0.056 \\ 0 & 0 & 0 & 0 & 0 & 0.911 & 0.089 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \tag{30}$$

Year 2

$$\Phi_{ij}(2) = \begin{bmatrix} 0.970 & 0.028 & 0.002 & 0 & 0 & 0 & 0 \\ 0 & 0.716 & 0.150 & 0.134 & 0 & 0 & 0 \\ 0 & 0 & 0.690 & 0.249 & 0.061 & 0 & 0 \\ 0 & 0 & 0 & 0.749 & 0.166 & 0.085 & 0 \\ 0 & 0 & 0 & 0 & 0.773 & 0.107 & 0.120 \\ 0 & 0 & 0 & 0 & 0 & 0.744 & 0.256 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \tag{31}$$

Year 3

$$\Phi_{ij}(3) = \begin{bmatrix} 0.944 & 0.049 & 0.007 & 0 & 0 & 0 & 0 \\ 0 & 0.652 & 0.197 & 0.151 & 0 & 0 & 0 \\ 0 & 0 & 0.633 & 0.290 & 0.077 & 0 & 0 \\ 0 & 0 & 0 & 0.717 & 0.188 & 0.095 & 0 \\ 0 & 0 & 0 & 0 & 0.695 & 0.138 & 0.167 \\ 0 & 0 & 0 & 0 & 0 & 0.602 & 0.398 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \tag{32}$$

Year 4

$$\Phi_{ij}(4) = \begin{bmatrix} 0.915 & 0.071 & 0.014 & 0 & 0 & 0 & 0 \\ 0 & 0.603 & 0.234 & 0.163 & 0 & 0 & 0 \\ 0 & 0 & 0.591 & 0.319 & 0.090 & 0 & 0 \\ 0 & 0 & 0 & 0.694 & 0.203 & 0.103 & 0 \\ 0 & 0 & 0 & 0 & 0.631 & 0.163 & 0.206 \\ 0 & 0 & 0 & 0 & 0 & 0.484 & 0.516 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \tag{33}$$

Year 5

$$\Phi_{ij}(5) = \begin{bmatrix} 0.883 & 0.093 & 0.024 & 0 & 0 & 0 & 0 \\ 0 & 0.562 & 0.265 & 0.173 & 0 & 0 & 0 \\ 0 & 0 & 0.557 & 0.343 & 0.100 & 0 & 0 \\ 0 & 0 & 0 & 0.676 & 0.215 & 0.109 & 0 \\ 0 & 0 & 0 & 0 & 0.577 & 0.183 & 0.240 \\ 0 & 0 & 0 & 0 & 0 & 0.388 & 0.612 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \tag{34}$$
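Eq. (22) chains the yearly matrices together by ordinary matrix multiplication. A minimal sketch using the Year-1 and Year-2 matrices of Eqs. (30) and (31), with states ordered 10 down to 4:

```python
# Yearly 'single' transition matrices from Eqs. (30) and (31)
PHI1 = [
    [0.991, 0.008, 0.001, 0.0,   0.0,   0.0,   0.0],
    [0.0,   0.822, 0.078, 0.100, 0.0,   0.0,   0.0],
    [0.0,   0.0,   0.795, 0.170, 0.035, 0.0,   0.0],
    [0.0,   0.0,   0.0,   0.814, 0.123, 0.063, 0.0],
    [0.0,   0.0,   0.0,   0.0,   0.886, 0.059, 0.056],
    [0.0,   0.0,   0.0,   0.0,   0.0,   0.911, 0.089],
    [0.0,   0.0,   0.0,   0.0,   0.0,   0.0,   1.0],
]
PHI2 = [
    [0.970, 0.028, 0.002, 0.0,   0.0,   0.0,   0.0],
    [0.0,   0.716, 0.150, 0.134, 0.0,   0.0,   0.0],
    [0.0,   0.0,   0.690, 0.249, 0.061, 0.0,   0.0],
    [0.0,   0.0,   0.0,   0.749, 0.166, 0.085, 0.0],
    [0.0,   0.0,   0.0,   0.0,   0.773, 0.107, 0.120],
    [0.0,   0.0,   0.0,   0.0,   0.0,   0.744, 0.256],
    [0.0,   0.0,   0.0,   0.0,   0.0,   0.0,   1.0],
]

def matmul(A, B):
    """Plain matrix product for the chaining in Eq. (22)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

PHI_0_2 = matmul(PHI1, PHI2)    # two-year interval transition matrix
```

The product stays upper triangular, and its row sums remain 1 up to the rounding already present in the published entries.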

5. Stochastic preservation model

This section outlines a stochastic preservation model based on bridge elements. The model can be used for transportation infrastructure management, in which the underlying decision model is based on a semi-Markov decision process (SMDP). The “Bare Concrete Deck” is an example of a bridge element with different condition states to which the preservation model can be applied in a Bridge Management System. The preservation model is adaptive, a concept that has been used by researchers in the field of infrastructure management, particularly as it relates to pavement and rail infrastructure [24, 25, 26, 27]. Madanat et al. [27] also describe the methodology used in the Pontis® Bridge Management System as adaptive, since transition probabilities are updated over time. The methodology outlined here is based on the application of the SMDP.

The maintenance activities on a bridge element can encompass a wide range of work types with varying durations. Again, the Weibull distribution can be assumed for the sojourn time, which in this case is the time it takes for maintenance actions to be completed. Unlike the ‘do-nothing’ action, it is assumed that the sojourn times for maintenance activities are independent of the current and subsequent condition states of the bridge element. It can also be assumed that rehabilitation works predominantly encompass the replacement of the respective bridge element, which has a more predictable duration than maintenance works. The time taken for improvement (maintenance and rehabilitation) works to be completed can be interpreted as starting from the point at which the problem with the bridge element was first identified, including the time for the required works to be sent out for bid if necessary, followed by the time taken to undertake the works.

5.1 Network level optimization

One of the main goals of a Bridge Management System (BMS) is to determine the minimum-cost long-term policy for each bridge element [28]. This policy consists of a set of recommended actions that minimizes the long-term Maintenance, Repair and Rehabilitation (MR&R) cost requirements while keeping the bridge element out of risk of failure. If the minimum-cost long-term policy can be determined, it represents the most cost-efficient set of actions for the bridge element: if any of the actions are delayed, more expense results in the long term, and if more improvement actions than recommended are done, higher long-term costs also result. This is based on the steady-state concept. Bridge elements are expected to remain in service for long periods of time, providing transportation connectivity on a continuous basis, so having an optimal policy that is sustainable far into the future is of great importance. In a BMS, the following three (3) things typically take place each year: (1) bridge elements deteriorate when no improvement actions are done to them, otherwise called ‘do-nothing’ actions; (2) improvement actions are done to some of the bridge elements, which results in corresponding costs; and (3) the improvement actions cause an overall improvement in network conditions [13].

At the network level, any given condition state will have elements passing in and out based on the MR&R actions. For steady state to occur across the entire bridge network, or across a subset of that network, the total quantity of a bridge element entering a particular condition state must equal the total quantity of that bridge element leaving that same condition state. As a result, the distribution of a bridge element among its condition states remains constant from year to year within the group, and the policy becomes sustainable over the long term [28]. The optimal policy is the one that satisfies the requirements of a steady state and consequently minimizes the annual expenditure of the transportation agency in charge of MR&R works.

The Markov Decision Process (MDP) has been used in the Pontis Bridge Management System to determine the optimal policy, with a means to update the transition probabilities over time or as needed. A Markov chain model, however, assumes that the sojourn time in one state before transitioning to another follows an exponential distribution in continuous time, making it more restrictive than the semi-Markov process, in which the sojourn times can be assumed to follow a different distribution, such as the Weibull distribution. The SMDP facilitates the natural updating or changing of the transition probabilities over time. Again, it is assumed that the sojourn time in a particular condition state follows the Weibull distribution, and the deterioration process can be represented as a semi-Markov process. The probability density function (pdf) of the Weibull distribution is [29]:

$$f(t) = \frac{\beta}{\alpha}\left(\frac{t}{\alpha}\right)^{\beta-1} e^{-(t/\alpha)^{\beta}} \tag{35}$$

where $\alpha$ and $\beta$ are the scale and shape parameters, respectively. When the shape parameter equals 1, the distribution reduces to the exponential distribution and the rate of deterioration is constant. If the shape parameter is greater than 1, the rate of deterioration increases with time, and if it is less than 1, the rate of deterioration decreases with time. Typically, the latter is not expected for transportation infrastructure.

One component of the preservation model for bridge elements is the deterioration model similar to that explained above, where the Maximum Likelihood Estimation of the parameters of the Weibull distribution, used to describe the sojourn time in one condition state before transitioning to a lower condition state, is determined. A similar approach can be used to determine the sojourn time distribution for maintenance works.

5.2 Discount coefficient

Let us consider continuous-time discounting at a rate $s > 0$, such that the present value of one unit received $t$ time units in the future equals $e^{-st}$. For discounting over 1 year, let $t = 1$; therefore $e^{-s} = d$, where $d$ represents the corresponding discrete-time discount factor that would be used in an MDP. So, for example, $d = 0.9$ in the MDP corresponds to $s = -\ln(0.9) = 0.105$ in the SMDP, which is the corresponding discount rate based on continuous time [30].

5.3 The Laplace transform (s-transform)

When taking into consideration the discounting for continuous time in the SMDP model, the Laplace transform (s-transform) of the distribution of the sojourn times between states has to be computed [19]. It is therefore prudent to review the Laplace transform. If $f_X(x)$ is the pdf of a continuous random variable $X$ that takes only non-negative values, then $f_X(x) = 0$ for $x < 0$, and the Laplace transform of $f_X(x)$, denoted by $M_X(s)$, is [19]:

$$M_X(s) = E\left[e^{-sX}\right] = \int_{0}^{\infty} e^{-sx} f_X(x)\,dx \tag{36}$$

An essential property of the Laplace transform is that its value at $s = 0$ equals 1:

$$M_X(s)\Big|_{s=0} = \int_{0}^{\infty} f_X(x)\,dx = 1 \tag{37}$$

The Laplace transform of the Weibull distribution can be complex, as shown in Eq. (38) [31].

E\left[e^{-tX}\right] = \frac{1}{\lambda_k t_k} \cdot \frac{\sqrt{p\,k^{q/p}}}{(2\pi)^{(q+p)/2-1}}\; G_{p,q}^{\,q,p}\!\left(\frac{p^{p}}{\left(q\,\lambda_k t_k\right)^{q}} \,\middle|\, \begin{matrix} \frac{1-k}{p},\ \frac{2-k}{p},\ \ldots,\ \frac{p-k}{p} \\ \frac{0}{q},\ \frac{1}{q},\ \ldots,\ \frac{q-1}{q} \end{matrix}\right) \qquad (38)

where G is known as the Meijer G-function, and the shape parameter is expressed as the rational number k = p/q.

As a result, it may have to be solved numerically. The numerical solutions are determined by substituting the scale and shape (Weibull) parameter estimates and the continuous-time discount rate, s, into the expression provided in Eq. (39).

M_X(s) = E\left[e^{-sX}\right] = \int_0^{\infty} e^{-sx}\, \frac{\beta}{\alpha}\left(\frac{x}{\alpha}\right)^{\beta-1} e^{-(x/\alpha)^{\beta}}\,dx \qquad (39)
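A numerical evaluation of Eq. (39) can be sketched in pure Python with the trapezoid rule; the Weibull parameters below are arbitrary placeholders, and in practice a library quadrature routine such as scipy.integrate.quad would be used instead:

```python
import math

def weibull_pdf(x, alpha, beta):
    """Weibull pdf with scale alpha and shape beta."""
    return (beta / alpha) * (x / alpha) ** (beta - 1) * math.exp(-((x / alpha) ** beta))

def laplace_weibull(s, alpha, beta, upper=80.0, n=8000):
    """Trapezoid-rule approximation of E[e^{-sX}] on [0, upper]."""
    h = upper / n
    total = 0.0
    for i in range(1, n):
        x = i * h
        total += math.exp(-s * x) * weibull_pdf(x, alpha, beta)
    # half-weight right endpoint; the pdf vanishes at x = 0 for beta > 1
    total += 0.5 * math.exp(-s * upper) * weibull_pdf(upper, alpha, beta)
    return h * total

# Property of Eq. (37): the transform evaluated at s = 0 equals 1
assert abs(laplace_weibull(0.0, 8.0, 1.8) - 1.0) < 1e-3
# For a discount rate s > 0, the transform lies strictly between 0 and 1
assert 0.0 < laplace_weibull(0.105, 8.0, 1.8) < 1.0
```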

For the case when the bridge element condition is in the terminal state, it can be assumed mathematically that the bridge element spends 1 year in that state before 'transitioning' back into the terminal state. When a particular transition takes 1 year every time, the equivalent Laplace transform in this scenario is provided by Eq. (40) [30]:

M_X(s) = E\left[e^{-sX}\right] = e^{-s} \qquad (40)

5.4 Semi-Markov decision process with discounting

The computation of the present values based on the SMDP is provided by the following formulation [19, 20]:

v_i(a,\lambda) = r_i(a,\lambda) + \sum_{j=1}^{N} p_{ij}(a) \int_{\tau=0}^{\infty} v_j(a,\lambda)\, e^{-\lambda\tau} f_{H_{ij}}(\tau \mid a)\, d\tau, \qquad i = 1, \ldots, N \qquad (41)

where v_i(a,λ) is the total expected long-term discounted cost; i is the condition state of a bridge element; a is an action from the set of feasible MR&R actions for condition state i; r_i(a,λ) is the expected cost of MR&R action a in condition state i; λ is the continuous-time discount rate; p_ij(a) is the transition probability of the embedded Markov chain that the bridge element will transition from condition state i to condition state j after action a is taken; and f_{H_ij}(τ|a) is the probability density function of the sojourn time, τ, from condition state i to condition state j when action a is taken. Therefore,

v_i(a,\lambda) = r_i(a,\lambda) + \sum_{j=1}^{N} p_{ij}(a)\, M_{H_{ij}}(a,\lambda)\, v_j(a,\lambda) \qquad (42)

where M_{H_ij}(a,λ) is the Laplace transform (s-transform) of f_{H_ij}(τ|a) evaluated at s = λ, and the transition probability matrix, P_{i,j}(a), is given by,

P_{i,j}(a) = \begin{bmatrix} p_{1,1}(a) & p_{1,2}(a) & \cdots & p_{1,N}(a) \\ p_{2,1}(a) & p_{2,2}(a) & \cdots & p_{2,N}(a) \\ \vdots & \vdots & \ddots & \vdots \\ p_{N,1}(a) & p_{N,2}(a) & \cdots & p_{N,N}(a) \end{bmatrix} \qquad (43)

If we let

q_{ij}(a,\lambda) = p_{ij}(a)\, M_{H_{ij}}(a,\lambda) \qquad (44)

we obtain

v_i(a,\lambda) = r_i(a,\lambda) + \sum_{j=1}^{N} q_{ij}(a,\lambda)\, v_j(a,\lambda), \qquad i = 1, \ldots, N \qquad (45)

According to [19], the long-term value r_i(a,λ) is given by

r_i(a,\lambda) = B_i(a) + \sum_{j=1}^{N} p_{ij}(a) \int_{\tau=0}^{\infty} \int_{x=0}^{\tau} e^{-\lambda x}\, b_{ij}(x,a)\, f_{H_{ij}}(\tau \mid a)\, dx\, d\tau \qquad (46)

where B_i(a) is the immediate cost and is determined by

B_i(a) = \sum_{j=1}^{N} p_{ij}(a)\, B_{ij}(a), \qquad i = 1, \ldots, N; \quad B_{ij}(a) < \infty \qquad (47)

Mathematically, the second term in Eq. (46) represents the sum of the costs that accumulate at the rate b_{ij}(x,a) < ∞ per unit time until the transition to state j occurs. For the SMDP model, it is assumed that only 'immediate' costs are incurred when there is a transition from one condition state to another; the second term in Eq. (46) can therefore be taken as zero. Expressing Eq. (45) in matrix form, we have [19]:

V(a,\lambda) = \begin{bmatrix} v_1(a,\lambda) & v_2(a,\lambda) & \cdots & v_N(a,\lambda) \end{bmatrix}^{T} \qquad (48)
R(a,\lambda) = \begin{bmatrix} r_1(a,\lambda) & r_2(a,\lambda) & \cdots & r_N(a,\lambda) \end{bmatrix}^{T} \qquad (49)
Q(a,\lambda) = \begin{bmatrix} q_{1,1}(a,\lambda) & q_{1,2}(a,\lambda) & \cdots & q_{1,N}(a,\lambda) \\ q_{2,1}(a,\lambda) & q_{2,2}(a,\lambda) & \cdots & q_{2,N}(a,\lambda) \\ \vdots & \vdots & \ddots & \vdots \\ q_{N,1}(a,\lambda) & q_{N,2}(a,\lambda) & \cdots & q_{N,N}(a,\lambda) \end{bmatrix} \qquad (50)

Based on Eq. (45), considering steady state conditions,

V(a,\lambda) = R(a,\lambda) + Q(a,\lambda)\, V(a,\lambda) \qquad (51)

then,

V(a,\lambda) = \left[I - Q(a,\lambda)\right]^{-1} R(a,\lambda) \qquad (52)

As a result, there is a Q(a,λ) that gives the minimum long-term cost, corresponding to an optimal policy. The minimum long-term cost is defined by Ibe [19] as:

v_i = \min_{a}\left\{ r_i(a,\lambda) + \sum_{j=1}^{N} q_{ij}(a,\lambda)\, v_j(a,\lambda) \right\}, \qquad i = 1, \ldots, N \qquad (53)
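Equations (51)-(53) can be sketched for a hypothetical three-state, two-action example; every Q and R value below is invented for illustration, with each Q already combining the embedded transition probabilities and sojourn-time Laplace transforms as in Eq. (44), so that its rows sum to less than 1 and (I - Q) is invertible:

```python
import numpy as np

# Hypothetical q_ij(a, lambda) and r_i(a, lambda) for two actions
Q = {"do_nothing": np.array([[0.00, 0.85, 0.00],
                             [0.00, 0.00, 0.80],
                             [0.00, 0.00, 0.90]]),   # terminal state revisits itself
     "rehab":      np.array([[0.81, 0.00, 0.00],
                             [0.81, 0.00, 0.00],
                             [0.81, 0.00, 0.00]])}   # every state returns to state 1
R = {"do_nothing": np.array([0.0, 0.0, 100.0]),
     "rehab":      np.array([40.0, 40.0, 40.0])}

def long_term_cost(action):
    # Eq. (52): V(a, lambda) = [I - Q(a, lambda)]^{-1} R(a, lambda)
    return np.linalg.solve(np.eye(3) - Q[action], R[action])

V = {a: long_term_cost(a) for a in Q}

# Each V satisfies the fixed point of Eq. (51)
for a in Q:
    assert np.allclose(V[a], R[a] + Q[a] @ V[a])

# In the spirit of Eq. (53), the policy picks, state by state, the cheaper action
best = [min(Q, key=lambda a: V[a][i]) for i in range(3)]
```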

5.5 Do-nothing action (action ‘d’)

As mentioned before, for 'do-nothing' actions (a_d) it can be assumed that the sojourn time in a condition state before transition follows a Weibull distribution, except when sojourning in the terminal state. Using Eq. (39), the numerical solution for the Laplace transform of the Weibull distribution for each scenario, except failure, can be determined. Let:

M_{H_{ij}}(a_d,\lambda) = L_f(a_d, \alpha_{ij}, \beta_{ij}), \qquad i,j = 1, \ldots, NS \qquad (54)

where NS represents the number of states, not including the terminal state. The terminal state can be referred to as an absorbing condition state, or the point at which the bridge element has come to the end of its useful life. In other words, when considering the 'do-nothing' action on a bridge element, once the condition state enters the terminal state, it will remain in that state unless rehabilitated. It follows that,

M_{H_{ij}}(a_d,\lambda) = e^{-s} = e^{-\lambda}; \qquad s = \lambda;\quad i = j = F \qquad (55)

where F represents the terminal state. From Eqs. (44), (50) and (54),

Q(a_d,\lambda) = \begin{bmatrix} p_{1,1} L_f(a_d,\alpha_{1,1},\beta_{1,1}) & \cdots & p_{1,F} L_f(a_d,\alpha_{1,F},\beta_{1,F}) \\ \vdots & \ddots & \vdots \\ p_{NS,1} L_f(a_d,\alpha_{NS,1},\beta_{NS,1}) & \cdots & p_{NS,F} L_f(a_d,\alpha_{NS,F},\beta_{NS,F}) \\ 0 & \cdots & e^{-\lambda} \end{bmatrix} \qquad (56)

where ad represents ‘do-nothing’ action.

It can be assumed that the deterioration of bridge elements is a one-step process: the bridge element can only drop by one condition state at a given point in time. In other words, a bridge element in a particular condition state will eventually transition to the next lower condition state when no actions other than 'do-nothing' actions are taken on that element. It can also be assumed that for 'do-nothing' actions, once the condition of the bridge element exits a condition state, that condition state will not be visited again. The transition probability of the embedded Markov chain of the semi-Markov process, p_ij, for the 'do-nothing' action is therefore assumed to be 1 between the current and succeeding condition states, and for the terminal state. Therefore, for a bridge element that has five (5) condition states plus the terminal state,

P_{i,j}(a_d) = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \qquad (57)

and so Eq. (56) can be simplified to:

Q(a_d,\lambda) = \begin{bmatrix} 0 & L_f(a_d,\alpha_{1,2},\beta_{1,2}) & 0 & 0 & 0 & 0 \\ 0 & 0 & L_f(a_d,\alpha_{2,3},\beta_{2,3}) & 0 & 0 & 0 \\ 0 & 0 & 0 & L_f(a_d,\alpha_{3,4},\beta_{3,4}) & 0 & 0 \\ 0 & 0 & 0 & 0 & L_f(a_d,\alpha_{4,5},\beta_{4,5}) & 0 \\ 0 & 0 & 0 & 0 & 0 & L_f(a_d,\alpha_{5,F},\beta_{5,F}) \\ 0 & 0 & 0 & 0 & 0 & e^{-\lambda} \end{bmatrix} \qquad (58)
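Assembling Q(a_d, λ) as in Eq. (58) is mechanical once the five numerical Laplace-transform values are known; the L_f values below are placeholders, not computed from real data:

```python
import math

lam = 0.105
# Placeholder Laplace-transform values for the five one-step deterioration transitions
L_f = [0.86, 0.84, 0.82, 0.80, 0.78]

N = 6                                   # five condition states plus the terminal state F
Q_d = [[0.0] * N for _ in range(N)]
for i, value in enumerate(L_f):
    Q_d[i][i + 1] = value               # one-step drop to the next lower state
Q_d[N - 1][N - 1] = math.exp(-lam)      # terminal state revisits itself yearly, Eq. (55)

# Each row has exactly one nonzero entry, reflecting the one-step assumption
assert all(sum(1 for v in row if v != 0.0) == 1 for row in Q_d)
```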

5.6 Maintenance action (action ‘m’)

For the maintenance action (a_m), the sojourn time distribution can also be assumed to be Weibull. Each maintenance action may affect the condition of each bridge element in the network differently: the same maintenance action may cause the condition state of a bridge element to increase, stay the same, or decrease at a slower rate than it would have if the action had not been done. To estimate the transition probabilities of the embedded Markov chain due to the maintenance action, one can observe the changes that take place in a sample of bridge elements to which the maintenance action was applied. To achieve this, the method of least squares using matrix computations can be used to determine the expected transition probability matrix between pairs of observations [32]. Consider Eq. (59):

\begin{bmatrix} x_1 & \cdots & x_F \\ x_1 & \cdots & x_F \\ \vdots & & \vdots \end{bmatrix} \begin{bmatrix} p_{1,1}(a_m) & \cdots & p_{1,F}(a_m) \\ \vdots & \ddots & \vdots \\ p_{F,1}(a_m) & \cdots & p_{F,F}(a_m) \end{bmatrix} = \begin{bmatrix} y_1 & \cdots & y_F \\ y_1 & \cdots & y_F \\ \vdots & & \vdots \end{bmatrix} \qquad (59)

in which each row of x_i's gives the proportions of the bridge element (in each state) in condition state i before the maintenance action a_m, and each row of y_i's gives the proportions of the same bridge element in condition state i after the maintenance action a_m, for i = 1, 2, …, NS, F. Each row of the respective matrices of x_i and y_i values represents a different inspection record of a particular bridge element. If the x_i's and y_i's are known for a sample of the same type of bridge element, then the transition probability matrix representing action a_m can be computed by the method of least squares using matrix computations, as in Eq. (60). The closest pair of inspection dates before and after a maintenance action can be used to capture the pairs of records having the respective condition states.

P_{i,j}(a_m) = \left(X^{T} X\right)^{-1} X^{T} Y \qquad (60)
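The least-squares estimate of Eq. (60) can be sketched with numpy; the before/after proportions below are synthetic, generated from a known transition matrix so that the recovery can be verified:

```python
import numpy as np

# Known (synthetic) maintenance transition matrix for a 3-state element
P_true = np.array([[0.9, 0.1, 0.0],
                   [0.3, 0.6, 0.1],
                   [0.1, 0.3, 0.6]])

# Four inspection pairs: rows of X hold proportions before the action, rows of Y after
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])
Y = X @ P_true

# Eq. (60): P = (X^T X)^{-1} X^T Y, i.e. an ordinary least-squares fit
P_est = np.linalg.solve(X.T @ X, X.T @ Y)

assert np.allclose(P_est, P_true)
assert np.allclose(P_est.sum(axis=1), 1.0)  # rows remain valid probability vectors
```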

The resulting transition matrix does not take into consideration the uncertainties related to the time it takes for the maintenance works to be undertaken, which can be described by the distribution of the sojourn time in each condition state. The sojourn time distribution for the maintenance action can also be assumed to follow a Weibull distribution. Using Eq. (39), the numerical solution for the Laplace transform of the Weibull distribution of the sojourn times, irrespective of the current and subsequent condition states, can be determined for NS states, excluding the terminal state. Since the solutions for the NS states are numerical and the scale (α) and shape (β) estimates are not state specific, the Laplace transform of the Weibull distribution for each i,j is represented by:

M_{H_{ij}}(a_m,\lambda) = L_f(a_m, \alpha, \beta) \qquad \text{for condition states } 1, \ldots, NS \qquad (61)

where NS represents the number of states, except the terminal state. Eq. (55) can be used to determine q_{F,F}(a_m,λ). For a five (5) condition state (plus terminal state) element,

Q(a_m,\lambda) = \begin{bmatrix} p_{1,1}(a_m) L_f(a_m,\alpha,\beta) & \cdots & p_{1,F}(a_m) L_f(a_m,\alpha,\beta) \\ p_{2,1}(a_m) L_f(a_m,\alpha,\beta) & \cdots & p_{2,F}(a_m) L_f(a_m,\alpha,\beta) \\ p_{3,1}(a_m) L_f(a_m,\alpha,\beta) & \cdots & p_{3,F}(a_m) L_f(a_m,\alpha,\beta) \\ p_{4,1}(a_m) L_f(a_m,\alpha,\beta) & \cdots & p_{4,F}(a_m) L_f(a_m,\alpha,\beta) \\ p_{5,1}(a_m) L_f(a_m,\alpha,\beta) & \cdots & p_{5,F}(a_m) L_f(a_m,\alpha,\beta) \\ 0 & \cdots & e^{-\lambda} \end{bmatrix} \qquad (62)

5.7 Rehabilitation action (action ‘r’)

The duration of particular rehabilitation actions can be determined if sufficient data is available on these rehabilitation projects. If this information is not available, the duration between the two (2) closest inspections prior to the start and subsequent to the completion of the rehabilitation action can be used as an estimate of the duration of the rehabilitation action. The latter was done in this case, assuming two (2) years for the sojourn time. The transition probability of the embedded Markov chain for the rehabilitation action can be assumed to be fixed, with the condition state of the bridge element reverting to 1. As such, the transition probability for the embedded Markov chain of the semi-Markov process, p_ij, for the rehabilitation action for a five (5) condition state element (plus terminal state) can be represented as:

P_{i,j}(a_r) = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad (63)

Based on Eqs. (40), (44) and (50),

Q(a_r,\lambda) = \begin{bmatrix} e^{-2\lambda} & 0 & 0 & 0 & 0 & 0 \\ e^{-2\lambda} & 0 & 0 & 0 & 0 & 0 \\ e^{-2\lambda} & 0 & 0 & 0 & 0 & 0 \\ e^{-2\lambda} & 0 & 0 & 0 & 0 & 0 \\ e^{-2\lambda} & 0 & 0 & 0 & 0 & 0 \\ e^{-2\lambda} & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad (64)
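Q(a_r, λ) in Eq. (64) follows directly from the assumed 2-year rehabilitation sojourn; a minimal sketch, with λ chosen arbitrarily:

```python
import math

lam = 0.105
N = 6
# Every state returns to condition state 1 after an assumed 2-year rehabilitation,
# so each row holds e^{-2*lambda} in the first column and zeros elsewhere
Q_r = [[math.exp(-2 * lam) if j == 0 else 0.0 for j in range(N)] for _ in range(N)]

assert all(abs(row[0] - math.exp(-0.21)) < 1e-12 for row in Q_r)
assert all(v == 0.0 for row in Q_r for v in row[1:])
```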

6. Conclusions

This chapter outlined feasible approaches demonstrating how the condition history of transportation infrastructure can be used to develop Markov and semi-Markov deterioration models that predict network-level performance when no actions are taken on the infrastructure. The use of semi-Markov processes to model deterioration relaxes the assumption on the distribution of the sojourn time in condition states for 'do-nothing' actions and is therefore less restrictive than Markov chain deterioration models. A preservation model for transportation infrastructure using Semi-Markov Decision Processes was presented. The objective of the preservation model is to determine the minimum long-term costs for the preservation of a transportation infrastructure asset within a group or network of similar infrastructure. The main inputs for the preservation model are: (a) the scale and shape parameter estimates of the Weibull distribution used to describe 'do-nothing' actions, and (b) the transition probabilities of the embedded Markov chain and the scale and shape parameter estimates of the Weibull distribution for maintenance actions. If sufficient data is available, the Semi-Markov Decision Process is a useful tool for modeling transportation infrastructure preservation.


Acknowledgments

I would like to acknowledge and express my gratitude to my wife, Tamika, and my children, Aiden and Grace, for their support and encouragement during the process of writing this chapter. I must say thanks to Almighty God for His continuous grace and guidance!


Nomenclature

CRK: Crack Index
NS: number of states

Abbreviations

PMS: Pavement Management System
ADOT: Arizona Department of Transportation
NOS: Network Optimization System
MLE: Maximum Likelihood Estimation
iid: independent and identically distributed
pdf: probability density function
pmf: probability mass function
MDP: Markov Decision Process
SMDP: Semi-Markov Decision Process
BMS: Bridge Management System
MR&R: maintenance, repair and rehabilitation

References

  1. Wang CP. Pavement Network Optimization and Analysis [PhD dissertation]. Tempe, AZ, USA: Arizona State University; 1992
  2. Wang KCP, Zaniewski J, Way G. Probabilistic behavior of pavements. Journal of Transportation Engineering, ASCE. 1994;120(3):358-375
  3. Nasseri S, Gunaratne M, Yang J, Nazef A. Application of improved crack prediction methodology in Florida's highway network. Transportation Research Record: Journal of the Transportation Research Board. 2009;2093:67-75
  4. Golabi K, Thompson PD, Hyman WA. Pontis Version 2.0 Technical Manual, A Network Optimization System for Bridge Improvements and Maintenance. Washington, DC: Federal Highway Administration; December 1993
  5. Micevski T, Kuczera G, Coombes P. Markov model for storm water pipe deterioration. Journal of Infrastructure Systems, ASCE. 2002;8(2):49-56
  6. Baik H-S, Seok HJ, Abraham DM. Estimating transition probabilities in Markov chain-based deterioration models for management of wastewater systems. Journal of Water Resources Planning and Management, ASCE. 2006;132(1):15-24
  7. Yang J, Gunaratne M, Lu JJ, Dietrich B. Use of recurrent Markov chains for modeling the crack performance of flexible pavements. Journal of Transportation Engineering, ASCE. 2005;131(11):861-872
  8. Yang J, Lu JJ, Gunaratne M, Dietrich B. Modeling crack deterioration of flexible pavements: Comparison of recurrent Markov chains and artificial neural networks. Transportation Research Record: Journal of the Transportation Research Board. 2006;1974:18-25
  9. Ng S-K, Moses F. Bridge deterioration modeling using semi-Markov theory. Structural Safety and Reliability. Rotterdam: A.A. Balkema; 1998;1:113-120
  10. Sobanjo JO. State transition probabilities in bridge deterioration based on Weibull sojourn times. Structure and Infrastructure Engineering: Maintenance, Management, Life-Cycle Design and Performance. 2011;7(10):747-764
  11. Black M, Brint AT, Brailsford JR. Comparing probabilistic methods for the asset management of distributed items. Journal of Infrastructure Systems, ASCE. 2005;11(2):102-109
  12. Black M, Brint AT, Brailsford JR. A semi-Markov approach for modelling asset deterioration. Journal of the Operational Research Society. 2005;56:1241-1249
  13. Thomas O. Stochastic Preservation Model for Transportation Infrastructure [PhD dissertation]. Tallahassee, FL, USA: Florida State University; 2011
  14. Ross SM. Stochastic Processes. 2nd ed. USA: John Wiley and Sons; 1996
  15. Casella G, Berger RL. Statistical Inference. 2nd ed. Pacific Grove, CA, USA: Duxbury/Cengage Learning; 2001
  16. Billinton R, Allan RN. Reliability Evaluation of Engineering Systems: Concepts and Techniques. London: Pitman Books Limited; 1983
  17. Tobias PA, Trindade DC. Applied Reliability. 2nd ed. Florida: Chapman and Hall/CRC Press; 1995
  18. Birolini A. Reliability Engineering: Theory and Practice. 5th ed. Berlin, Heidelberg: Springer-Verlag; 2007
  19. Ibe OC. Markov Processes for Stochastic Modeling. Massachusetts: Elsevier Academic Press; 2009
  20. Howard RA. Dynamic Probabilistic Systems, Volume II: Semi-Markov and Decision Processes. Canada: John Wiley and Sons; 1971
  21. Cleves MA, Gould WW, Gutierrez RG. An Introduction to Survival Analysis Using Stata. Revised ed. College Station, TX, USA: Stata Press; 2001
  22. Castillo E, Hadi AS, Balakrishnan N, Sarabia JM. Extreme Value and Related Models with Applications in Engineering and Science. New Jersey: John Wiley and Sons; 2005
  23. Lee ET. Statistical Methods for Survival Data Analysis. 2nd ed. USA: John Wiley and Sons; 1992
  24. Durango PL, Madanat SM. Optimal maintenance and repair policies in infrastructure management under uncertain facility deterioration rates: An adaptive control approach. Transportation Research Part A. 2002;36:763-778
  25. Guillaumot VM, Durango-Cohen PL, Madanat SM. Adaptive optimization of infrastructure maintenance and inspection decisions under performance model uncertainty. Journal of Infrastructure Systems, ASCE. 2003;9(4):133-139
  26. Gonzalez J, Romera JCR, Perez JM. Optimal railway infrastructure maintenance and repair policies to manage under uncertainty with adaptive control. UC3M Working Papers, Statistics and Econometrics. 2006;5:1-15
  27. Madanat SM, Park S, Kuhn K. Adaptive optimization of infrastructure maintenance and inspection decisions under performance model uncertainty. Journal of Infrastructure Systems, ASCE. 2006;12(3):192-198
  28. Thompson PD, Harrison FD. Pontis Version 2.0 User's Manual, A Network Optimization System for Bridge Improvements and Maintenance. Washington, DC: Federal Highway Administration; December 1993
  29. Evans M, Hastings N, Peacock B. Statistical Distributions. 3rd ed. New York: John Wiley and Sons; 2000
  30. Puterman ML. Markov Decision Processes: Discrete Stochastic Dynamic Programming. New Jersey: John Wiley and Sons; 2005
  31. Sagias NC, Karagiannidis GK. Gaussian class multivariate Weibull distributions: Theory and applications in fading channels. IEEE Transactions on Information Theory. 2005;51(10):3608-3619
  32. Thompson PD, Johnson MB. Markovian bridge deterioration: Developing models from historical data. Structure and Infrastructure Engineering. 2005;1:85-91
