Reliability and Maintainability in Operations Management



Introduction
The study of component and process reliability is the basis of many efficiency evaluations in Operations Management discipline. For example, in the calculation of the Overall Equipment Effectiveness (OEE) introduced by Nakajima [1], it is necessary to estimate a crucial parameter called availability. This is strictly related to reliability. Still as an example, consider how, in the study of service level, it is important to know the availability of machines, which again depends on their reliability and maintainability.
Reliability is defined as the probability that a component (or an entire system) will perform its function for a specified period of time, when operating in its design environment. The elements necessary for the definition of reliability are, therefore, an unambiguous criterion for judging whether something is working or not and the exact definition of environmental conditions and usage. Then, reliability can be defined as the time dependent probability of correct operation if we assume that a component is used for its intended function in its design environment and if we clearly define what we mean with "failure". For this definition, any discussion on the reliability basics starts with the coverage of the key concepts of probability.
A broader definition of reliability is that "reliability is the science to predict, analyze, prevent and mitigate failures over time." It is a science, with its theoretical basis and principles. It also has sub-disciplines, all related -in some way -to the study and knowledge of faults. Reliability is closely related to mathematics, and especially to statistics, physics, chemistry, mechanics and electronics. In the end, given that the human element is almost always part of the systems, it often has to do with psychology and psychiatry.
In addition to the prediction of system durability, reliability also tries to give answers to other questions. Indeed, from reliability we can also derive the availability performance of a system. To study reliability you need to transform reality into a model, which allows the analysis by applying laws and analyzing its behavior [2]. Reliability models can be divided into static and dynamic ones. Static models assume that a failure does not result in the occurrence of other faults. Dynamic reliability, instead, assumes that some failures, so-called primary failures, promote the emergence of secondary and tertiary faults, with a cascading effect. In this text we will only deal with static models of reliability.
In the traditional paradigm of static reliability, individual components have a binary status: either working or failed. Systems, in turn, are composed by an integer number n of components, all mutually independent. Depending on how the components are configured in creating the system and according to the operation or failure of individual components, the system either works or does not work.
Let's consider a generic system X consisting of n elements. Static reliability modeling implies that the operating state of the i-th component is represented by the state function X_i, defined as:

X_i = { 1 if the component works; 0 if the component fails } (1)

The state of operation of the system is modeled by the state function Φ(X):

Φ(X) = { 1 if the system works; 0 if the system fails } (2)

The most common configuration of the components is the series system. A series system works if and only if all components work. Therefore, the state of a series system is given by the state function:

Φ(X) = ∏_{i=1}^{n} X_i (3)

where the symbol ∏ indicates the product of the arguments.
System configurations are often represented graphically with Reliability Block Diagrams (RBDs), where each component is represented by a block and the connections between them express the configuration of the system. The operation of the system depends on the ability to cross the diagram from left to right, passing only through elements in operation. Figure 1 contains the RBD of a four-component series system.
A parallel system, instead, works if and only if at least one of its components works. Accordingly, the state of a parallel system is given by the state function:

Φ(X) = ∐_{i=1}^{n} X_i = 1 − ∏_{i=1}^{n} (1 − X_i) (4)

where the symbol ∐ indicates the complement of the product of the complements of the arguments. Figure 2 contains an RBD for a system of four components arranged in parallel. Another common configuration of the components is the series-parallel system. In these systems, components are configured using combinations of series and parallel configurations.
An example of such a system is shown in Figure 3.
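The series and parallel state functions above can be sketched in a few lines of Python (a minimal illustration with binary 0/1 component states, not tied to any specific figure):

```python
# State functions for static reliability models.
# Components are binary: 1 = working, 0 = failed.

def series_state(x):
    """A series system works iff all components work: product of the X_i."""
    phi = 1
    for xi in x:
        phi *= xi
    return phi

def parallel_state(x):
    """A parallel system works iff at least one component works:
    complement of the product of the complements."""
    prod = 1
    for xi in x:
        prod *= (1 - xi)
    return 1 - prod

# Four components, one failed: the series system fails, the parallel works.
x = [1, 1, 0, 1]
print(series_state(x))    # 0
print(parallel_state(x))  # 1
```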
State functions for series-parallel systems are obtained by decomposition of the system. With this approach, the system is broken down into subsystems or configurations that are in series or in parallel. The state functions of the subsystems are then combined appropriately, depending on how they are configured. A schematic example is shown in Figure 4. A particular component configuration, widely recognized and used, is the parallel k out of n.
A system k out of n works if and only if at least k of the n components work. Note that a series system can be seen as an n out of n system and a parallel system is a 1 out of n system. The state function of a k out of n system is given by:

Φ(X) = { 1 if Σ_{i=1}^{n} X_i ≥ k; 0 otherwise } (5)

The RBD for a k out of n system is identical in appearance to the RBD of a parallel system of n components, with the addition of the label "k out of n". For other, more complex system configurations, such as the bridge configuration (see Figure 5), we may use more intricate techniques, such as the minimal path set and the minimal cut set, to construct the system state function.
A Minimal Path Set - MPS is a subset of the components of the system such that the operation of all the components in the subset implies the operation of the system. The set is minimal because the removal of any element from the subset eliminates this property. An example is shown in Figure 5. A Minimal Cut Set - MCS is a subset of the components of the system such that the failure of all components in the subset implies the failure of the system. Again, the set is called minimal because the removal of any component from the subset eliminates this property (see Figure 6).
MCS and MPS can be used to build equivalent configurations of more complex systems that cannot be reduced to the simple series-parallel model. The first equivalent configuration is based on the consideration that the operation of all the components of at least one MPS entails the operation of the system. This configuration is, therefore, constructed by creating a series subsystem for each minimal path set, using only the components of that set; these subsystems are then connected in parallel. An example of an equivalent system is shown in Figure 7. The second equivalent configuration is based on the logical principle that the failure of all the components of any MCS implies the failure of the system. This configuration is built by creating a parallel subsystem for each MCS, using only the components of that set; these subsystems are then connected in series (see Figure 8).
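The two equivalent representations can be checked programmatically. The sketch below assumes, for illustration only, a bridge structure with the classic path and cut sets (components 1 to 5, with 5 as the bridge element); the actual sets of Figure 5 may differ:

```python
# System state from minimal path sets (MPS) and minimal cut sets (MCS).
# The system works iff all components of at least one MPS work,
# and fails iff all components of at least one MCS fail.

def state_from_mps(mps_list, x):
    """x maps component id -> 0/1 state."""
    return 1 if any(all(x[c] == 1 for c in path) for path in mps_list) else 0

def state_from_mcs(mcs_list, x):
    return 0 if any(all(x[c] == 0 for c in cut) for cut in mcs_list) else 1

# Hypothetical bridge structure: these path/cut sets are an illustrative
# assumption, not taken from Figure 5.
mps = [{1, 3}, {2, 4}, {1, 5, 4}, {2, 5, 3}]
mcs = [{1, 2}, {3, 4}, {1, 5, 4}, {2, 5, 3}]

x = {1: 1, 2: 0, 3: 0, 4: 1, 5: 1}  # only path {1, 5, 4} is intact
print(state_from_mps(mps, x))  # 1
print(state_from_mcs(mcs, x))  # 1 (both representations agree)
```

Enumerating all 32 component states confirms that the MPS-based and MCS-based state functions coincide, as the text asserts.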
After examining the state of the components and of the system, the next step in static reliability modeling is to consider the probability of operation of the component and of the system. The reliability R_i of the i-th component is defined by:

R_i = P(X_i = 1) (7)

while the reliability of the system R is defined as in equation 8:

R = P(Φ(X) = 1) (8)

The methodology used to calculate the reliability of the system depends on the configuration of the system itself. For a series system, the reliability of the system is given by the product of the individual reliabilities (law of Lusser, defined by the German engineer Robert Lusser in the 1950s):

R = ∏_{i=1}^{n} R_i (9)

For an example, see Figure 9. For a parallel system, reliability is:

R = 1 − ∏_{i=1}^{n} (1 − R_i) (10)

In fact, this follows from the definition of system reliability and from the properties of event probabilities. In many parallel systems, components are identical. In this case, the reliability of a parallel system with n elements is given by:

R = 1 − (1 − R_C)^n (11)

For a series-parallel system, system reliability is determined using the same decomposition approach used to construct the state function for such systems. Consider, for instance, the system drawn in Figure 11, consisting of 9 elements with reliability R_1 = R_2 = 0.9, R_3 = R_4 = R_5 = 0.8 and R_6 = R_7 = R_8 = R_9 = 0.7, and calculate the overall reliability of the system. For all other types of systems, which cannot be brought back to a series-parallel scheme, the overall reliability must be calculated with a more intensive computational approach [3], normally with the aid of special software.
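The series and parallel composition rules can be expressed as two small helper functions; the exact layout of Figure 11 is not reproduced here, so this sketch only checks the formulas themselves:

```python
# Reliability composition for series and parallel blocks (static model).
from math import prod

def r_series(rs):
    """Series: product of component reliabilities (law of Lusser)."""
    return prod(rs)

def r_parallel(rs):
    """Parallel: complement of the product of the unreliabilities."""
    return 1 - prod(1 - r for r in rs)

# Checks against the text's formulas:
print(r_series([0.9, 0.8]))    # ≈ 0.72
print(r_parallel([0.9, 0.8]))  # ≈ 0.98
# Identical components in parallel: 1 - (1 - Rc)^n
print(1 - (1 - 0.7) ** 4)      # ≈ 0.9919
print(r_parallel([0.7] * 4))   # same value
```

Series-parallel systems are then evaluated by applying these two functions repeatedly, following the same decomposition used for the state function.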
Reliability functions of the system can also be used to calculate measures of reliability importance.
These measurements are used to assess which components of a system offer the greatest opportunity to improve the overall reliability. The most widely recognized definition of the reliability importance I′_i of a component is the marginal reliability gain, in terms of overall system reliability, obtained by a marginal increase of the component reliability:

I′_i = ∂R / ∂R_i

For other system configurations, an alternative approach facilitates the calculation of the reliability importance of the components. Let R(1_i) be the reliability of the system modified so that R_i = 1 and R(0_i) be the reliability of the system modified with R_i = 0, always keeping the other components unchanged. In this context, the reliability importance I_i is given by:

I_i = R(1_i) − R(0_i)

In a series system, this formulation is equivalent to writing:

I_i = ∏_{j≠i} R_j

Thus, the most important component (in terms of reliability) in a series system is the least reliable one. For example, consider three elements with reliability R_1 = 0.9, R_2 = 0.8 and R_3 = 0.7. It is therefore: I_1 = 0.8 · 0.7 = 0.56, I_2 = 0.9 · 0.7 = 0.63 and I_3 = 0.9 · 0.8 = 0.72, which is the highest value.
If the system is arranged in parallel, the reliability importance becomes:

I_i = ∏_{j≠i} (1 − R_j) (16)

It follows that the most important component in a parallel system is the most reliable one. With the same data as the previous example, this time with a parallel arrangement, we can verify Eq. 16 for the first item: I_1 = (1 − 0.8) · (1 − 0.7) = 0.06. For the calculation of the reliability importance of components belonging to complex systems, which cannot be reduced to the simple series-parallel scheme, the reliability of several modified systems must be computed. For this reason the calculation is often done using automated algorithms.
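The two importance formulas can be verified on the text's three-component example:

```python
# Reliability importance I_i = R(1_i) - R(0_i), checked on the text's example
# (three components with R1 = 0.9, R2 = 0.8, R3 = 0.7).
from math import prod

def importance_series(rs, i):
    """Series system: product of the other components' reliabilities."""
    return prod(r for j, r in enumerate(rs) if j != i)

def importance_parallel(rs, i):
    """Parallel system: product of the other components' unreliabilities."""
    return prod(1 - r for j, r in enumerate(rs) if j != i)

rs = [0.9, 0.8, 0.7]
print([round(importance_series(rs, i), 2) for i in range(3)])   # [0.56, 0.63, 0.72]
print([round(importance_parallel(rs, i), 2) for i in range(3)]) # [0.06, 0.03, 0.02]
```

As the text notes, in series the least reliable component matters most, while in parallel the most reliable one does.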

Fleet reliability
Suppose you have studied the reliability of a component, and found that it is 80% for a mission duration of 3 hours. Knowing that we have 5 identical items simultaneously active, we might be interested in knowing what the overall reliability of the group would be. In other words, we want to know what is the probability of having a certain number of items functioning at the end of the 3 hours of mission. This issue is best known as fleet reliability.
Consider a set of m identical and independent systems at the same instant, each having reliability R. The group may represent a set of systems in use, independent and identical, or could represent a set of devices under test, independent and identical. A discrete random variable of great interest in reliability is N, the number of functioning items. Under the assumptions specified, N is a binomial random variable, which expresses the probability of a Bernoulli process. The corresponding probabilistic model is, therefore, the one that describes the extraction of balls from an urn filled with a known number of red and green balls. Suppose that the percentage R of green balls coincides with the reliability after 3 hours. After each extraction from the urn, the ball is put back in the container. The extraction is repeated m times, and we look for the probability of finding n green balls. The sequence of random variables thus obtained is a Bernoulli process in which each extraction is a trial. Since the probability of obtaining n successes in m extractions from an urn, with replacement of the ball, follows the binomial distribution B(m, R), the probability mass function of N is the well-known:

P(N = n) = C(m, n) · R^n · (1 − R)^(m−n)

The expected value of N is given by E(N) = μ_N = m · R and the standard deviation is σ_N = √(m · R · (1 − R)). Let's consider, for example, a corporate fleet consisting of 100 independent and identical systems. All systems have the same mission, independent from the other missions. Each system has a mission reliability equal to 90%. We want to calculate the average number of missions completed and also the probability that at least 95% of systems complete their mission. This involves analyzing the distribution of the binomial random variable characterized by R = 0.90 and m = 100.
The expected value is given by E(N) = m · R = 100 · 0.90 = 90. The probability that at least 95% of the systems complete their mission can be calculated as the sum of the probabilities that exactly 95, 96, 97, 98, 99 and 100 elements of the fleet complete their mission.
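This fleet calculation is straightforward to carry out numerically:

```python
# Fleet reliability: N ~ Binomial(m, R). For m = 100 systems with mission
# reliability R = 0.9, expected completions and P(N >= 95).
from math import comb

def binom_pmf(n, m, R):
    return comb(m, n) * R**n * (1 - R)**(m - n)

m, R = 100, 0.90
expected = m * R
p_at_least_95 = sum(binom_pmf(n, m, R) for n in range(95, m + 1))
print(expected)                 # 90.0
print(round(p_at_least_95, 4))  # ≈ 0.0576
```

So on average 90 of the 100 missions are completed, but the chance that 95 or more succeed is only about 6%.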

Time dependent reliability models
When reliability is expressed as a function of time, the continuous, non-negative random variable of interest is T, the instant of failure of the device. Let f(t) be the probability density function of T, and let F(t) be the cumulative distribution function of T. F(t) is also known as the failure function or unreliability function [4].
In the context of reliability, two additional functions are often used: the reliability and the hazard function. Let's define the reliability R(t) as the survival function:

R(t) = P(T > t) = 1 − F(t)

The Mean Time To Failure - MTTF is defined as the expected value of the failure time:

MTTF = E(T) = ∫₀^∞ t · f(t) dt

Integrating by parts, we can prove the equivalent expression:

MTTF = ∫₀^∞ R(t) dt

Hazard function
Another very important function is the hazard function, denoted by λ(t), defined as the trend of the instantaneous failure rate at time t of an element that has survived up to that time t. The failure rate is the ratio between the instantaneous probability of failure in a neighborhood of t, conditioned on the element being healthy at t, and the amplitude of the same neighborhood.
The hazard function λ(t) [5] coincides with the intensity function z(t) of a Poisson process. The hazard function is given by:

λ(t) = lim_{Δt→0} P(t < T ≤ t + Δt | T > t) / Δt

Thanks to Bayes' theorem, it can be shown that the relationship between the hazard function, the probability density of failure and reliability is the following:

λ(t) = f(t) / R(t)

Thanks to the previous equation, with some simple mathematical manipulations, we obtain the following relation:

∫₀^t λ(u) du = −ln R(t) (24)

In fact, since ln R(0) = ln 1 = 0, we have:

R(t) = exp(−∫₀^t λ(u) du)

From equation 24 derive the other two fundamental relations:

f(t) = λ(t) · exp(−∫₀^t λ(u) du)    F(t) = 1 − exp(−∫₀^t λ(u) du)

The most popular conceptual model of the hazard function is the bathtub curve. According to this model, the failure rate of the device is relatively high and descending in the first part of the device's life, due to potential manufacturing defects, called early failures. They manifest themselves in the first phase of operation of the system and their causes are often linked to structural deficiencies, design or installation defects. In terms of reliability, a system that manifests infant failures improves over the course of time.
Later, at the end of the life of the device, the failure rate increases due to wear phenomena. These failures are caused by alterations of the component due to material and structural aging. The beginning of the wear period is identified by an increase in the frequency of failures, which continues as time goes by. Wear-out failures occur around the average operating age; the only way to avoid this type of failure is to replace the population in advance.
Between the period of early failures and that of wear-out, the failure rate is approximately constant: failures are due to random events and are called random failures. They occur in non-nominal operating conditions, which put a strain on the components, resulting in inevitable alterations and the consequent loss of operational capabilities. This type of failure occurs during the useful life of the system and corresponds to unpredictable situations. The central period with constant failure rate is called useful life. The juxtaposition of the three periods in a graph representing the trend of the failure rate of the system gives rise to a curve whose characteristic shape recalls the section of a bathtub, as shown in Figure 12. The CFR (Constant Failure Rate) model is based on the assumption that the failure rate does not change over time. Mathematically, this model is the simplest and is based on the principle that faults are purely random events. The IFR (Increasing Failure Rate) model is based on the assumption that the failure rate grows over time. The model assumes that faults become more likely over time because of wear, as is frequently found in mechanical components. The DFR (Decreasing Failure Rate) model is based on the assumption that the failure rate decreases over time. This model assumes that failures become less likely as time goes by, as occurs in some electronic components.
Since the failure rate may change over time, one can define a reliability parameter that behaves as if there were a kind of counter that accumulates hours of operation. The residual reliability function R(t + t₀ | t₀), in fact, measures the reliability of a given device which has already survived a determined time t₀. The function is defined as follows:

R(t + t₀ | t₀) = P(T > t + t₀ | T > t₀)

Applying Bayes' theorem we have:

R(t + t₀ | t₀) = P(T > t₀ | T > t + t₀) · P(T > t + t₀) / P(T > t₀)

And, given that P(T > t₀ | T > t + t₀) = 1, we obtain the final expression, which determines the residual reliability:

R(t + t₀ | t₀) = R(t + t₀) / R(t₀)

The residual Mean Time To Failure - residual MTTF measures the expected value of the residual life of a device that has already survived a time t₀:

MTTF(t₀) = ∫₀^∞ R(t + t₀ | t₀) dt

The characteristic life of a device is the time t_C corresponding to a reliability R(t_C) equal to 1/e, that is, the time for which the area under the hazard function is unitary:

∫₀^{t_C} λ(u) du = 1  ⇒  R(t_C) = e^{−1} ≈ 0.368

Let us consider a CFR device with a constant failure rate λ. The time to failure is an exponential random variable. In fact, the probability density function of a failure is that of an exponential distribution:

f(t) = λ · e^{−λ·t}

The corresponding cumulative distribution function F(t) is:

F(t) = 1 − e^{−λ·t}

The reliability function R(t) is the survival function:

R(t) = e^{−λ·t}

For CFR items, the residual reliability and the residual MTTF both remain constant as the device accumulates hours of operation. In fact, from the definition of residual reliability, for every t₀ ∈ [0, ∞), we have:

R(t + t₀ | t₀) = R(t + t₀) / R(t₀) = e^{−λ·(t + t₀)} / e^{−λ·t₀} = e^{−λ·t} = R(t)

Similarly, for the residual MTTF, the invariance in time holds:

MTTF(t₀) = ∫₀^∞ R(t + t₀ | t₀) dt = ∫₀^∞ e^{−λ·t} dt = 1/λ = MTTF

This behavior implies that preventive actions are useless for CFR devices. Figure 13 shows the trend of the function f(t) = λ · e^{−λ·t} and of the cumulative distribution function F(t) = 1 − e^{−λ·t} for a constant failure rate λ = 1.
In this case, since λ = 1, the probability density function and the reliability function overlap: f(t) = R(t) = e^{−t}. The probability of having a fault, not yet occurred at time t, in the next dt, can be written as follows:

P(t < T < t + dt | T > t)

Recalling Bayes' theorem, in which we consider the probability of a hypothesis H, given the evidence E:

P(H | E) = P(E | H) · P(H) / P(E)

we can replace the evidence E with the fact that the fault has not yet taken place, from which we obtain P(E) → P(T > t). We also exchange the hypothesis H with the occurrence of the fault in the neighborhood of t, obtaining P(H) → P(t < T < t + dt). So we get:

P(t < T < t + dt | T > t) = P(T > t | t < T < t + dt) · P(t < T < t + dt) / P(T > t)

Since P(T > t | t < T < t + dt) = 1, being a certainty, it follows:

P(t < T < t + dt | T > t) = f(t) dt / R(t) = λ · e^{−λ·t} dt / e^{−λ·t} = λ dt

As can be seen, this probability does not depend on t, i.e. it is not a function of the lifetime already elapsed. It is as if the component had no memory of its own history; it is for this reason that the exponential distribution is called memoryless.
The use of the constant failure rate model facilitates the calculation of the characteristic life of a device. In fact, for a CFR item, t_C is the reciprocal of the failure rate:

∫₀^{t_C} λ du = λ · t_C = 1  ⇒  t_C = 1/λ

Therefore, the characteristic life, in addition to being calculated as the time value t_C for which the reliability is 0.368, can more easily be evaluated as the reciprocal of the failure rate.
The definition of MTTF, in the CFR model, can be integrated by parts to give:

MTTF = ∫₀^∞ e^{−λ·t} dt = 1/λ

In the CFR model, then, the MTTF and the characteristic life coincide and are both equal to 1/λ. From the law of reliability R(t) = e^{−λ·t}, you get the reliability at 10,000 hours. The probability that the component survives another 10,000 hours is calculated with the residual reliability; knowing that this, in the CFR model, is independent of time, it equals R(10,000). Suppose now that it has worked without failure for 6,000 hours. The expected value of the residual life of the component is calculated using the residual MTTF, which is invariant: MTTF(6,000) = MTTF = 1/λ.
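These CFR properties are easy to check numerically. The failure rate below is an assumed value chosen only for illustration (the original example does not state one):

```python
# CFR (exponential) model: MTTF = characteristic life = 1/lambda, and the
# residual reliability does not depend on the hours already accumulated.
from math import exp

lam = 1e-4                           # assumed failure rate (failures/hour)
R = lambda t: exp(-lam * t)

mttf = 1 / lam                       # 10,000 hours
t_c = 1 / lam                        # characteristic life: R(t_c) = 1/e
print(round(R(t_c), 3))              # ≈ 0.368

# Surviving another 10,000 h has the same probability whether the item is
# new or has already run 6,000 h (memoryless property):
print(round(R(6000 + 10000) / R(6000), 4))
print(round(R(10000), 4))            # same value
```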

CFR in series
Let us consider n different elements, each with its own constant failure rate λ_i and reliability R_i(t) = e^{−λ_i·t}, arranged in series, and let us evaluate the overall reliability R_S. From equation 9 we have:

R_S = ∏_{i=1}^{n} e^{−λ_i·t} = e^{−(Σ_{i=1}^{n} λ_i)·t}

Since the reliability of the overall system takes the form R_S = e^{−λ_S·t}, we can conclude that:

λ_S = Σ_{i=1}^{n} λ_i

In a system of CFR elements arranged in series, then, the failure rate of the system is equal to the sum of the failure rates of the components. The MTTF can thus be calculated using the simple relation:

MTTF = 1/λ_S = 1 / Σ_{i=1}^{n} λ_i

Consider the following example. A system consists of a pump and a filter, used to separate two parts of a mixture: the concentrate and the squeezing. Knowing that the failure rate of the pump is constant, λ_P = 1.5 · 10⁻⁴ failures per hour, and that the filter is also CFR with λ_F = 3 · 10⁻⁵ failures per hour, let's assess the failure rate of the system, the MTTF and the reliability after one year of continuous operation.
To begin, we compare the physical arrangement with the reliability one, as represented in the following figure. As can be seen, it is a simple series, for which we can write:

λ_S = λ_P + λ_F = 1.5 · 10⁻⁴ + 3 · 10⁻⁵ = 1.8 · 10⁻⁴ failures per hour

The MTTF is the reciprocal of the failure rate:

MTTF = 1/λ_S ≈ 5,556 hours

As a year of continuous operation is 24 · 365 = 8,760 hours, the reliability after one year is:

R(8,760) = e^{−1.8·10⁻⁴ · 8,760} ≈ 0.21
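The pump-and-filter calculation can be reproduced in a few lines:

```python
# Series CFR example from the text: pump + filter.
from math import exp

lam_pump = 1.5e-4    # failures per hour
lam_filter = 3e-5    # failures per hour

lam_sys = lam_pump + lam_filter      # series: failure rates add
mttf = 1 / lam_sys
R_year = exp(-lam_sys * 24 * 365)    # one year = 8,760 hours

print(lam_sys)            # ≈ 1.8e-4
print(round(mttf, 1))     # ≈ 5555.6 hours
print(round(R_year, 3))   # ≈ 0.207
```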

CFR in parallel
If two components arranged in parallel are identical and have constant failure rate λ, the reliability of the system R_P can be calculated with equation 10, wherein R_C is the reliability of the component, R_C = e^{−λt}:

R_P = 1 − (1 − e^{−λt})² = 2·e^{−λt} − e^{−2λt}

The calculation of the MTTF leads to MTTF = 3/(2λ). In fact we have:

MTTF = ∫₀^∞ (2·e^{−λt} − e^{−2λt}) dt = 2/λ − 1/(2λ) = 3/(2λ) (54)

Therefore, the MTTF increases compared to that of a single CFR component. The failure rate of the parallel system λ_P, the reciprocal of the MTTF, is:

λ_P = 2λ/3

As you can see, the failure rate is not halved, but reduced by one third.
For example, let us consider a safety system which consists of two batteries, each able to compensate for the lack of electric power from the grid. The two batteries are identical and have a constant failure rate λ_B = 9 · 10⁻⁶ failures per hour. We'd like to calculate the failure rate of the system, the MTTF and the reliability after one year of continuous operation.
As in the previous case, we start with a reliability block diagram of the problem, as visible in Figure 15. As a year of continuous operation is 24 · 365 = 8,760 hours, the reliability after one year is:

R_P(8,760) = 2·e^{−λ_B·8,760} − e^{−2·λ_B·8,760} ≈ 0.994

It is interesting to calculate the reliability of a system of identical elements arranged in a k out of n parallel configuration. The system is partially redundant, since a group of k elements is able to withstand the load of the system. The reliability is:

R = Σ_{i=k}^{n} C(n, i) · R_C^i · (1 − R_C)^{n−i}

Let us consider, for example, three electric generators, arranged in parallel and with failure rate λ = 9 · 10⁻⁶. In order for the system to be active, it is sufficient that only two items are in operation. Let's get the reliability after one year of operation.
We'll have n = 3 and k = 2. So, after a year of operation (t = 8,760 h), with R_C = e^{−λ·8,760} ≈ 0.924, reliability can be calculated as follows:

R = 3·R_C²·(1 − R_C) + R_C³ ≈ 0.984

A particular arrangement of components is the so-called parallel with stand-by: the second component comes into operation only when the first fails; otherwise, it is idle. Figure 16 shows the RBD diagram of a parallel system with stand-by: when component 1 fails, the switch S activates component 2. For simplicity, it is assumed that S is not affected by faults.
If the components are similar, then λ₁ = λ₂ = λ. It's possible to demonstrate that for the stand-by parallel system we have:

R(t) = e^{−λt} · (1 + λt)    MTTF = 2/λ

Thus, in the parallel with stand-by, the MTTF is doubled compared to the single component.
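The three redundancy schemes just discussed can be compared numerically with the text's failure rate of 9 · 10⁻⁶ failures per hour (the stand-by formula assumes an ideal, fault-free switch, as in the text):

```python
# CFR redundancy examples: full parallel, 2 out of 3, and stand-by parallel.
from math import comb, exp

lam = 9e-6                          # failures per hour
t = 24 * 365                        # 8,760 hours
Rc = exp(-lam * t)                  # single-component reliability, ≈ 0.924

R_parallel = 2 * Rc - Rc**2         # two batteries in parallel
R_2of3 = sum(comb(3, i) * Rc**i * (1 - Rc)**(3 - i) for i in range(2, 4))
R_standby = exp(-lam * t) * (1 + lam * t)   # stand-by pair, ideal switch

print(round(R_parallel, 3))   # ≈ 0.994
print(round(R_2of3, 3))       # ≈ 0.984
print(round(R_standby, 3))    # ≈ 0.997
```

Note that for the same pair of components, stand-by redundancy yields a higher reliability than full parallel, consistent with its doubled MTTF.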

Repairable systems
The devices for which it is possible to perform operations that restore functionality deserve special attention. A repairable system [6] is a system that, after failure, can be restored to a functional condition by some maintenance action, including replacement of the entire system. Maintenance actions performed on a repairable system can be classified into two groups: Corrective Maintenance - CM and Preventive Maintenance - PM. Corrective maintenance is performed in response to system failures and might correspond to a specific activity of either repair or replacement. Preventive maintenance actions, however, are not performed in response to a failure, but are intended to delay or prevent system failures. Note that preventive activities are not necessarily cheaper or faster than corrective actions.
Like corrective actions, preventive activities may correspond to both repair and replacement activities. Finally, note that actions of operational maintenance (servicing) such as, for example, putting gas in a vehicle, are not considered PM [7].
Preventive maintenance can be divided into two subcategories: scheduled and on-condition. Scheduled maintenance (hard-time maintenance) consists of routine maintenance operations, scheduled on the basis of precise measures of elapsed operating time. Condition-Based Maintenance - CBM [8] (also known as predictive maintenance) is one of the most widely used tools for the monitoring of industrial plants and for the management of maintenance policies. The main aim of this approach is to optimize maintenance by reducing costs and increasing availability. In CBM it is necessary to identify, if it exists, a measurable parameter which expresses, with accuracy, the condition of degradation of the system. What is needed, therefore, is a physical system of sensors and transducers capable of monitoring the parameter and, thereby, the reliability performance of the plant. The choice of the monitored parameter is crucial, as is its time evolution, which lets you know when the maintenance action must be undertaken, whether corrective or preventive.
Adopting a CBM policy requires investment in instrumentation and in prediction and control systems: you must run a thorough feasibility study to see whether the costs of implementing the apparatus are truly offset by the reduction in maintenance costs.
The CBM approach consists of the following steps:
• gather the data from the sensors;
• diagnose the condition;
• estimate the Remaining Useful Life - RUL;
• decide whether to maintain or to continue to operate normally.
CBM scheduling is modeled with algorithms aiming at high effectiveness in terms of cost minimization, subject to constraints such as, for example, the maximum time for the maintenance action, the periods of high production rate, the timing of supply of spare parts, the maximization of availability and so on.
In support of the prognosis, the use of diagrams that make clear, even graphically, when the sensor outputs reach alarm levels is now widespread. These diagrams also set out the alert thresholds that identify the ranges of values for which a maintenance action must be undertaken [9].
Starting from a state of degradation detected by a measurement at time t_k, we calculate the likelihood that the system will still be functioning at the next instant of inspection t_{k+1}. The choice to act with preventive maintenance is based on the comparison between the expected value of the cost of unavailability and the costs associated with the repair. Therefore, two scenarios arise:
• continue to operate: if we are in the area of non-alarming values. It is also possible that, being in the area of preventive maintenance, we opt for a postponement of maintenance because a replacement intervention has already been scheduled within a short interval of time;
• stop the task: if we are in the area of values above the threshold established for on-condition preventive maintenance.
The modeling of repairable systems is commonly used to evaluate the performance of one or more repairable systems and of the related maintenance policies. The information can also be used in the initial phase of design of the systems themselves.
In the traditional paradigm of modeling, a repairable system can only be in one of two states: working (up) or inoperative (down). Note that a system may not be functioning not only for a fault, but also for preventive or corrective maintenance.

Availability
Availability may generically be defined as the percentage of time that a repairable system is in an operating condition. However, in the literature, there are four specific measures of repairable system availability. We consider only the limit availability, defined as the limit of the probability A(t) that the system is working at time t, as t tends to infinity:

A = lim_{t→∞} A(t)
The limit availability just seen is also called intrinsic availability, to distinguish it from the technical availability, which also includes the logistics cycle times incidental to maintenance actions (such as waiting for the maintenance, waiting for spare parts, testing...), and from the operational availability that encompasses all other factors that contribute to the unavailability of the system such as time of organization and preparation for action in complex and specific business context [10].
The models of the impact of preventive and corrective maintenance on the age of the component distinguish between perfect, minimal and imperfect maintenance. Perfect maintenance (perfect repair) returns the system as good as new after maintenance. Minimal repair restores the system to a working condition, but does not reduce the actual age of the system, leaving it as bad as old. Imperfect maintenance refers to maintenance actions that have an intermediate impact between perfect maintenance and minimal repair.
The average duration of the maintenance activity is the expected value of the probability distribution of the repair time and is called Mean Time To Repair - MTTR; it is closely connected with the concept of maintainability. This is the probability that a system, under assigned operating conditions, is restored to a state in which it can perform the required function. Figure 17 shows the state functions of two repairable systems with increasing failure rate, maintained with perfect and minimal repair.

The general substitution model
The general substitution model states that the failure time of a repairable system is an unspecified random variable. The duration of the (perfect) corrective maintenance is also a random variable. In this model it is assumed that preventive maintenance is not performed.
Let's denote by T i the duration of the ith interval of operation of the repairable system. For the assumption of perfect maintenance (as good as new), {T 1 , T 2 , … , T i , … , T n } is a sequence of independent and identically distributed random variables.
Let us now designate with D_i the duration of the i-th corrective maintenance action and assume that these random variables are independent and identically distributed. Therefore, each cycle (whether it is an operating cycle or a corrective maintenance action) has an identical probabilistic behavior, and the completion of a maintenance action coincides with the time when the system returns to the operating state. Regardless of the probability distributions governing T_i and D_i, the fundamental result of the general substitution model is as follows:

A = E(T) / (E(T) + E(D)) = MTTF / (MTTF + MTTR)

The substitution model for CFR
Let us consider the special case of the general substitution model where T_i is an exponential random variable with constant failure rate λ, and D_i an exponential random variable with constant repair rate μ. Since the repairable system has a constant failure rate (CFR), aging and the impact of corrective maintenance are irrelevant to its reliability performance. For this system it can be shown that the limit availability is:

$$A_\infty = \frac{\mu}{\lambda + \mu} = \frac{MTTF}{MTTF + MTTR}$$

Let us analyze, for example, a repairable system subject to a replacement policy, with failure and repair times distributed according to the negative exponential distribution, MTTF = 1000 hours and MTTR = 10 hours, and calculate the limit availability of the system. The formulation of the limit availability in this system is given by eq. 63, so we have:

$$A_\infty = \frac{1000}{1000 + 10} \approx 0.99$$

This means that the system is available for 99% of the time.
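The same calculation can be sketched in a couple of lines of Python (a minimal illustration of the formula above, with the example's numbers):

```python
# Limit availability of a CFR repairable system under perfect repair.
mttf = 1000.0  # mean time to failure, hours (1 / lambda)
mttr = 10.0    # mean time to repair, hours (1 / mu)

# A_inf = mu / (lambda + mu) = MTTF / (MTTF + MTTR)
availability = mttf / (mttf + mttr)
print(f"{availability:.4f}")  # -> 0.9901
```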

General model of minimal repair
After examining the substitution model, we now consider a second model for repairable systems: the general model of minimal repair. According to this model, the time of system failure is a random variable, corrective maintenance is instantaneous, the repair is minimal, and no preventive activity is performed.
In a repairable system corresponding to the general model of minimal repair, the arrival times of faults form a stochastic point process. As is known, once the repair time is neglected, the number of faults detected by time t, { N(t), t ≥ 0 }, is a non-homogeneous Poisson process, described by the Poisson distribution.

Minimal repair with CFR
A well-known special case of the general model of minimal repair is obtained if the failure time T is a random variable with exponential distribution, with failure rate λ. In this case { N(t), t ≥ 0 } is a homogeneous Poisson process with rate λ.
Note also that the expected number of failures, E[N(t)] = λt, grows linearly with the width of the interval considered.
Finally, we can obtain the probability mass function of N(t), which is a Poisson distribution:

$$P[N(t) = n] = \frac{(\lambda t)^n}{n!} e^{-\lambda t}$$

Also, the probability mass function of N(t + s) − N(s), that is, the number of faults in an interval of width t shifted forward by s, is identical:

$$P[N(t+s) - N(s) = n] = \frac{(\lambda t)^n}{n!} e^{-\lambda t}$$

Since the two expressions are equal, the conclusion is that in the homogeneous Poisson process (CFR) the number of faults in a given interval depends only on the interval width.
The behavior of a Poisson probability mass function with a rate equal to 5 faults per year, representing the probability of having n ∈ ℕ faults within a year, is shown in Figure 18.
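The probabilities behind Figure 18 can be reproduced with a short sketch (the 5 faults/year rate is the one quoted above; the function name is ours):

```python
from math import exp, factorial

def poisson_pmf(n, mean_faults):
    """P[N(t) = n] for a homogeneous Poisson process, mean_faults = lambda * t."""
    return mean_faults ** n * exp(-mean_faults) / factorial(n)

# Probability of observing n faults within one year, at 5 faults/year.
probs = [poisson_pmf(n, 5.0) for n in range(11)]
```

The distribution peaks around n = 4 and n = 5, the mean number of faults per year.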

Minimal repair: Power law
A second special case of the general model of minimal repair is obtained if the failure time T is a random variable with a Weibull distribution, with shape parameter β and scale parameter α.
In this case the sequence of failure times is described by a Non-Homogeneous Poisson Process (NHPP) with intensity z(t) equal to the hazard (failure) rate of the Weibull distribution:

$$z(t) = \frac{\beta}{\alpha} \left(\frac{t}{\alpha}\right)^{\beta - 1}$$

Since the cumulative intensity of the process is defined by:

$$Z(t) = \int_0^t z(u)\,du$$

the cumulative function is:

$$Z(t) = \left(\frac{t}{\alpha}\right)^{\beta}$$

As can be seen, the average number of faults occurring within time t ≥ 0 of this non-homogeneous Poisson process, E[N(t)] = Z(t), follows the so-called power law.
If β > 1, the intensity function z(t) increases with time and, since its integral gives the average number of failures, faults tend to occur more and more frequently over time. Conversely, if β < 1, faults become less frequent over time.
In fact, if we take α = 10 hours (λ = 1/α = 0.1 failures/h) and β = 2, we have:

$$E[N(10000)] = (0.1 \cdot 10000)^2 = 1000^2 = 1\,000\,000$$

The trend is no longer linear: the expected number of faults grows as a power of the time width considered.
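A quick sketch of the power-law mean function, using the numbers above (the helper name is ours):

```python
def expected_faults(t, alpha, beta):
    """E[N(t)] = (t / alpha) ** beta for the power-law NHPP."""
    return (t / alpha) ** beta

print(expected_faults(10_000, 10, 2))  # -> 1000000.0
# With beta = 2, doubling the time width quadruples the expected faults:
print(expected_faults(10_000, 10, 2) / expected_faults(5_000, 10, 2))  # -> 4.0
```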
The probability mass function of N(t) thus becomes:

$$P[N(t) = n] = \frac{Z(t)^n}{n!} e^{-Z(t)} = \frac{(t/\alpha)^{\beta n}}{n!} e^{-(t/\alpha)^{\beta}}$$

For example, let us consider a system that fails, according to a power law, with β = 2.2 and α = 1500 hours. What is the average number of faults occurring during the first 1000 hours of operation? What is the probability of having two or more failures during the first 1000 hours of operation? What is the average number of faults in the second 1000 hours of operation?
The average number of failures that occur during the first 1000 hours of operation is the expected value of the distribution:

$$E[N(1000)] = \left(\frac{1000}{1500}\right)^{2.2} \approx 0.41 \text{ faults}$$

The probability of having two or more failures in the same interval is:

$$P[N(1000) \geq 2] = 1 - P[N(1000)=0] - P[N(1000)=1] = 1 - e^{-0.41}(1 + 0.41) \approx 0.064$$

The average number of faults in the succeeding 1000 hours of operation is calculated as E[N(t+s)] − E[N(s)] = Z(t+s) − Z(s), that, in this case, is:

$$E[N(2000)] - E[N(1000)] = \left(\frac{2000}{1500}\right)^{2.2} - \left(\frac{1000}{1500}\right)^{2.2} \approx 1.88 - 0.41 = 1.47 \text{ faults}$$
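The three answers can be checked with a few lines of Python (a minimal sketch with the example's parameters):

```python
from math import exp

ALPHA, BETA = 1500.0, 2.2  # power-law parameters from the example

def Z(t):
    """Cumulative intensity: expected number of faults by time t."""
    return (t / ALPHA) ** BETA

m_first = Z(1000)                    # faults expected in the first 1000 h
m_second = Z(2000) - Z(1000)         # faults expected in the second 1000 h
p_two_or_more = 1 - exp(-m_first) * (1 + m_first)  # P[N(1000) >= 2]

print(f"{m_first:.2f}  {p_two_or_more:.3f}  {m_second:.2f}")  # -> 0.41  0.064  1.47
```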

Conclusion
After reviewing the main definitions of reliability and maintenance, let us finally see how reliability knowledge can also be used to carry out an economic optimization of replacement activities.
Consider a process that follows the power law with β > 1. As time goes by, faults begin to take place more frequently and, at some point, it will be convenient to replace the system.
Let us denote by τ the time when the replacement (here assumed instantaneous) takes place. We can build a cost model to determine the optimal preventive maintenance time τ* which minimizes reliability costs.
Let us denote by C_f the cost of a failure and by C_r the cost of replacing the repairable system. If the repairable system is replaced every τ time units, in that time we will incur the replacement cost C_r plus as many failure costs C_f as the expected number of faults in the interval (0, τ]. The latter quantity coincides with the expected value E[N(τ)].
The average cost per unit of time c(τ), in the long term, can then be calculated using the following relationship:

$$c(\tau) = \frac{C_r + C_f\, E[N(\tau)]}{\tau}$$

It then follows:

$$c(\tau) = \frac{C_r + C_f (\tau/\alpha)^{\beta}}{\tau}$$

Differentiating c(τ) with respect to τ and setting the derivative equal to zero, we can find the minimum of the cost, that is, the optimal time τ* of preventive maintenance. Manipulating algebraically, we obtain the following final result:

$$\tau^* = \alpha \left(\frac{C_r}{(\beta - 1)\, C_f}\right)^{1/\beta}$$

Consider, for example, a system that fails according to a Weibull distribution with β = 2.2 and α = 1500 hours. Knowing that the system is subject to instantaneous replacement and that the cost of a fault is C_f = 2500 € and the cost of replacement is C_r = 18000 €, we want to evaluate the optimal replacement interval.
The application of eq. 81 provides the answer to the question:

$$\tau^* = 1500 \left(\frac{18000}{(2.2 - 1) \cdot 2500}\right)^{1/2.2} = 1500 \cdot 6^{1/2.2} \approx 3387 \text{ hours}$$
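As a closing sketch, the optimal interval can be computed and sanity-checked numerically (the function names are ours, not from the text):

```python
def optimal_replacement_time(alpha, beta, c_fail, c_repl):
    """tau* = alpha * (C_r / ((beta - 1) * C_f)) ** (1 / beta), valid for beta > 1."""
    return alpha * (c_repl / ((beta - 1) * c_fail)) ** (1.0 / beta)

def unit_cost(tau, alpha, beta, c_fail, c_repl):
    """Long-run average cost per unit time: (C_r + C_f * E[N(tau)]) / tau."""
    return (c_repl + c_fail * (tau / alpha) ** beta) / tau

tau_star = optimal_replacement_time(1500, 2.2, 2500, 18000)
print(round(tau_star))  # -> 3387

# The cost at tau* is no higher than at nearby replacement intervals:
assert unit_cost(tau_star, 1500, 2.2, 2500, 18000) <= min(
    unit_cost(0.9 * tau_star, 1500, 2.2, 2500, 18000),
    unit_cost(1.1 * tau_star, 1500, 2.2, 2500, 18000),
)
```

Replacing roughly every 3400 hours balances the rising failure cost against the fixed replacement cost.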