Open access peer-reviewed chapter

An Experimental Study on Developing a Cognitive Model for Human Reliability Analysis

Written By

Domenico Falcone, Fabio De Felice, Antonella Petrillo and Alessandro Silvestri

Submitted: 25 October 2016 Reviewed: 14 April 2017 Published: 21 June 2017

DOI: 10.5772/intechopen.69230

From the Edited Volume

Theory and Application on Cognitive Factors and Risk Management - New Trends and Procedures

Edited by Fabio De Felice and Antonella Petrillo


Abstract

Serious incidents occurring inside, or caused by, industrial plants represent a very critical issue. In this context, human reliability analysis (HRA) is an important tool to assess the human factors that influence behaviour in disaster scenarios. In fact, the reliability of the interaction between human and machine systems is an important factor that affects the overall performance and safety of industrial plants. However, even though HRA techniques have been available for decades, there is no universal method or procedure to reduce the human errors that affect performance. This study aims to design a novel approach to investigate the relationship between human reliability and operator performance, considering the dependence on the time available to make decisions.

Keywords

  • disaster management
  • human reliability analysis
  • cognitive model
  • PSFs

1. Introduction

The increasing complexity of industrial systems requires the adoption of adequate approaches to manage emergency situations in case of accidents and disasters. In this context, the analysis of human reliability represents a crucial task [1]. In fact, the human factor is a predominant element in the study of accidents/disasters, not only in terms of probability but also in terms of the severity of the expected effects [2]. HRA is a set of techniques that describes the conditions of the operator during work, taking into account errors and unsafe actions [3]. In other words, HRA aims to describe the physical and environmental conditions in which operators carry out their tasks, considering errors, skills, experience and ability [4]. The importance of the topic is the reason why we conducted a search on the Scopus database, the largest abstract and citation database of peer-reviewed literature. The search string used in the literature survey was ‘human reliability analysis’, defined according to the standards of the Scopus database. Only documents in which the string ‘human reliability analysis’ appeared in the key words were analysed. The analysis on Scopus pointed out that from 1964 (the year in which the first article appeared) until February 2017 (the period of the survey), 40,958 documents were published, divided into 32,865 articles, 3671 conference papers and a remaining share of books, editorials, letters, etc. The result showed that the scientific production on this topic is very wide and covers many scientific areas (engineering, medicine, social science, etc.). Furthermore, it is interesting to note that most of the publications (13,842) were published in the USA. Of course, since the search was very general, the large number of documents found does not allow a specific analysis of our particular scientific interest. Thus, considering our specific field of interest, we refined the search by applying a preliminary filter. The search string used was ‘human reliability analysis AND industrial plant’, again restricted to the key words. In this case, only 46 documents were found, from 1984 to 2017, which means that on average two articles per year were published. Similarly, we conducted a deeper analysis applying a second filter, with the search string ‘human reliability analysis AND industrial plant AND cognitive model’. Considering the three criteria (1) article title, (2) abstract and (3) key words, only six articles were found; considering the ‘key words’ criterion alone, only four articles were found.

Among the documents found, we selected some of them. An interesting point of view is offered by Massaiu [5]. In his paper, a new approach is proposed to address the weaknesses of HRA methods, in other words their lack of empirical support. In detail, the ability to identify regularities among environmental conditions (procedures), crew expertise (teamwork) and crew behaviours was investigated. Liu et al. [6] apply the cognitive reliability and error analysis method (CREAM) to calculate the failure probability of human cognitive activities in mine hoisting operations. Cheng and Hwang [7] outline the human error identification (HEI) techniques that currently exist to assess latent human errors in the chemical cylinder change task.

The literature review shows that human reliability analysis is an issue of growing importance in the scientific world, but there are some limits. The major limit of HRA is related to uncertainty, which does not allow full use of the reliability analysis [8]. Furthermore, several human reliability models follow a static approach, in which human errors are described as omission/commission errors [9]. In our opinion, instead, it is essential to consider the physical and cognitive environment as a function of the time in which human errors develop. This consideration led us to the development of an integrated reliability model that takes into account the dynamic influence of operators. Thus, our study aims to propose a novel approach to investigate the relationship between performance shaping factors (PSFs) and operator performance. In HRA, PSFs encompass the various factors that affect human performance and can increase or decrease the probability of human error [10, 11]. Our research aims to develop a multi-dimensional, structured model that can be applied to different types of activities and different disaster scenarios in order to avoid potential operational errors. The model takes into account the technical and environmental factors that can influence the decisions and actions of operators, and combines the cognitive aspects of operational analysis, a mathematical approach and the probabilistic quantification of the error. A real case study concerning the adoption of best practices in a petrochemical plant’s control room during an emergency situation is presented. The rest of the chapter is organized as follows: in Section 2, the experimental design is analysed; in Section 3, the model is applied in detail to a real case study and its main results are discussed; finally, in Section 4, conclusions and future developments are summarized.


2. Experimental study: the model framework

The most influential models of operator behaviour [12, 13] assume three levels of behavioural errors: (1) automatic reactions demanding little or no attention; (2) attentive reactions when one knows how to handle a certain, well-known situation; and (3) creative, analytical reactions when confronted with new, unknown problems without off-the-shelf solutions. The above classification is certainly helpful, but it is not sufficient to capture the dynamism that characterizes the reliability of human-machine systems. It is necessary to reduce human error and hence to develop the capability to find (intuitively) solutions to unexpected problems. Our model is based on the above considerations. In detail, the model framework consists of nine steps (as shown in Figure 1): Step 1—Preliminary analysis; Step 2—Generic tasks assessment (GTTs); Step 3—Definition of the Weibull distribution function; Step 4—Choice of performance shaping factors (PSFs); Step 5—Determination of PSFcomp; Step 6—Determination of HEPcont; Step 7—Determination of HEPcont w/d; Step 8—Rating of HEPnom after the 8th hour of work; and Step 9—Determination of HEPtot.

Figure 1.

Methodological flowchart (author’s elaboration).

The model is applied in a real case study concerning emergency management within a petrochemical company. In detail, the model aims to investigate the adoption of best practices for the company’s control room in order to ensure a consistent response under demanding circumstances.


3. Model development: description of a real case study

In the present section, a detailed analysis of each step is provided.

3.1. Step 1: preliminary analysis

The first step aims to define the actions carried out by the operators. Each operation is assigned a human error probability (HEPnom), which represents the unreliability of the operator and is a critical input for a proper human reliability analysis [14]. Obviously, the probability of error is a function of time: as the working hours increase, the likelihood of error increases. The scenario analysed concerns the management of a fire that occurred in the petrochemical plant. The key element is maintaining a state of readiness and an awareness of the working environment. Figure 2 shows the industrial plant under study and the control room.

Figure 2.

Scenario under study.

During a fire emergency, it is important to set common standards and to ensure that personnel are continuously trained, assessed and re-assessed against these summaries of best practice. All fire alarms are to be taken seriously. Evacuation of the facility is mandatory until the signal to re-enter has been given by appropriate personnel and the alarm bells have ceased ringing. Figure 3 shows procedures that are to be followed any time a fire alarm sounds.

Figure 3.

Fire emergency protocol.

In detail, two operators were engaged in the control room, but only one was in charge of handling the emergency procedures during the fire. The operator was responsible for activating the emergency procedure, which includes: (1) total block of the furnaces; (2) closure of all turbines; (3) closure of the propane valve; (4) locking of the propane handling sequence; (5) closure of the flow control valves; and (6) closure of the emergency procedure. The three main emergency conditions that may occur are: (1) low hazard, in which, despite the emergency, the decision maker simply keeps monitoring the situation; (2) moderate hazard, in which the decision maker can take wrong decisions; and (3) high hazard, in which the decision maker is very likely to make a mistake.

3.2. Step 2: generic tasks assessment (GTTs)

The present step aims to define the generic tasks (GTTs), that is, a set of generic error probabilities for different types of task. The tasks were defined according to the scientific literature [15], which proposes for each task a range of human unreliability bounded by the 5th percentile (for the first hour of work) and the 95th percentile (for the eighth hour of work). Reliability is maximum at the first hour of work (t = 1) and minimum at the eighth hour of work (t = 8), as defined in Eq. (1):

k = 1 − HEPnom(t),  t ∈ [1; 8]   (1)

The parameter k represents the value of the operator’s reliability.

In our case study, only three significant generic tasks (4, 7 and 8) were considered in order to approximate operator’s activities in the control room, as shown in Table 1.

No. Generic task Limitations of unreliability (%) k (t = 1) k (t = 8) α β
1 Totally unfamiliar 0.35–0.97 0.65 0.03 0.1661 1.5
2 System recovery 0.14–0.42 0.86 0.58 0.0213 1.5
3 Complex task requiring high level of comprehension and skill 0.12–0.28 0.88 0.72 0.0108 1.5
4 Fairly simple task performed rapidly or given scant attention 0.06–0.13 0.94 0.87 0.0042 1.5
5 Routine, highly practised 0.007–0.045 0.993 0.955 0.0021 1.5
6 Restoring a system by following the procedures of controls 0.008–0.007 0.992 0.993 −5.44E−05 1.5
7 Completely familiar, well designed, highly practised, routine task 0.00008–0.009 0.9999 0.991 0.00005 1.5
8 Respond correctly to system command even when there is an augmented or automated supervisory system 0.00000–0.0009 1 0.9991 4.86E−05 1.5

Table 1.

Generic tasks.

3.3. Step 3: definition of the Weibull distribution function

After defining the GTTs, the probability of error associated with each GTT was defined according to the Weibull probability distribution, which best describes the probability of error. In detail, the probability of error is described by the human error probability (HEP) index, defined according to the Weibull distribution as follows (Eq. (2)):

HEPnom = 1 − e^(−α·t^β)   (2)

where the parameters α and β represent, respectively, the scale and the shape of the curve. The formula assumes the minimum probability of error during the first hour of work and the maximum at the eighth hour of work. Consequently, the probability distribution of error in Eq. (2) is adapted as follows (Eq. (3)):

HEPnom(t) = 1 − k·e^(−α(1−t)^β),  t ∈ [0; 1]
HEPnom(t) = 1 − k·e^(−α(t−1)^β),  t ∈ ]1; ∞[   (3)

The value of k is calculated according to the value that the curve takes at t = 1, while the parameter β = 1.5 is taken from the scientific literature on the human error assessment and reduction technique (HEART) model developed by Williams [16]. The value of α is determined by setting the value of the function at t = 8 for each GTT. Starting from this function, it is possible to calculate the value of α through the inverse formula, see Eq. (4):

HEPnom(t) = 1 − k·e^(−α(t−1)^β),  t ∈ ]1; ∞[   (4)

The α coefficient is then given by Eq. (5):

α = −ln[k(t = 8) / k(t = 1)] / (t − 1)^β,  evaluated at t = 8   (5)

Figure 4 shows the reliability performance according to Weibull distribution.

Figure 4.

Reliability performance (t = 0–8).

Table 2 shows the HEPnom values for the case study, calculated for the three different generic tasks.

Generic task 4 Generic task 7 Generic task 8
HEPnom (t = 1) 0.06 0.0001 0
HEPnom (t = 2) 0.0639 0.0006 0.00005
HEPnom (t = 3) 0.0710 0.0014 0.0001
HEPnom (t = 4) 0.0802 0.0026 0.0003
HEPnom (t = 5) 0.0909 0.0039 0.0004
HEPnom (t = 6) 0.1029 0.0055 0.0005
HEPnom (t = 7) 0.1160 0.0072 0.0007
HEPnom (t = 8) 0.1300 0.0090 0.0009

Table 2.

HEPnom.
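As a numerical cross-check of Steps 2 and 3, the following minimal Python sketch (not part of the original study) derives α from the reliability bounds of Table 1 via Eq. (5) and then evaluates Eq. (3) for t = 1…8; it reproduces the HEPnom values of Table 2 up to rounding (the α reported in Table 1 for generic task 7 appears more coarsely rounded).

```python
import math

BETA = 1.5  # shape parameter, taken from the HEART literature

# Reliability bounds k(t=1), k(t=8) for the generic tasks used in the case
# study (rows 4, 7 and 8 of Table 1).
GTTS = {4: (0.94, 0.87), 7: (0.9999, 0.991), 8: (1.0, 0.9991)}

def alpha(k1, k8, beta=BETA):
    """Scale parameter from Eq. (5), evaluated at t = 8."""
    return -math.log(k8 / k1) / (8 - 1) ** beta

def hep_nom(t, k1, a, beta=BETA):
    """Nominal human error probability HEPnom(t) from Eq. (3), for t >= 1."""
    return 1 - k1 * math.exp(-a * (t - 1) ** beta)

for gtt, (k1, k8) in GTTS.items():
    a = alpha(k1, k8)
    hep_row = [hep_nom(t, k1, a) for t in range(1, 9)]
    print(f"GTT {gtt}: alpha = {a:.2e}", ["%.4f" % h for h in hep_row])
```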

3.4. Step 4: choice of performance shaping factors (PSFs)

In the present step, the PSFs were defined. PSFs take into account all the environmental and behavioural factors that influence the operator’s cognitive behaviour; in particular, they allow different emergency scenarios to be simulated. Analytically, PSFs increase the value of the error probability by introducing external factors that could strain the ‘decision maker’. The PSFs and their values are taken from the standardized plant analysis risk-human reliability analysis (SPAR-H) method [17, 18]. Table 3 shows the PSFs considered.

PSFs PSF level Multipliers
Available time Inadequate time HEP = 1
Time available > 5× time required 0.1
Time available > 50× time required 0.01
Stress/stressors Extreme 5
High 2
Nominal 1
Complexity High complex 5
Moderately complex 2
Nominal 1
Good 0.5

Table 3.

Performance shaping factors.

Table 4 shows the PSF values assigned to each of the three emergency conditions (see Step 1).

PSF Low hazard Moderate hazard High hazard
Available time 0.01 0.1 1
Stress 1 2 5
Complexity 1 2 5

Table 4.

PSFs for the three emergency conditions.

3.5. Step 5: determination PSFcomp

Having defined the PSFs and their multipliers, it is important to evaluate the overall PSF index (PSFcomp), as follows (Eq. (6)):

PSFcomp = ∏(i = 1…n) PSFi   (6)

The PSFcomp index summarizes the weight of each influencing factor with respect to the operator’s actions/decisions. Table 5 reports the PSFcomp values for the three emergency levels.

Low hazard Moderate hazard High hazard
PSFcomp = (PSF1 × PSF2 × PSF3) 0.01 0.4 25

Table 5.

PSFcomp.

3.6. Step 6: determination HEPcont

This step contextualizes the error probability analysis, as follows (Eq. (7)):

HEPcont = (HEPnom × PSFcomp) / [HEPnom × (PSFcomp − 1) + 1]   (7)

The value of HEPcont gives the probability of error of the decision maker as a function of the influencing factors. HEPcont is closely linked to two parameters: the time (1 ≤ t ≤ 8) and the value of the PSFs. In other words, HEPcont increases with time and with the ‘danger’ of the assumed emergency scenario. Table 6 shows HEPcont for generic task 4 and the different emergency levels.

Generic task HEPnom (t) HEPcont
Low hazard Moderate hazard High hazard
Fairly simple task performed rapidly or given scant attention t = 1 0.0600 6.38E−04 2.49E−02 6.15E−01
t = 2 0.0639 6.82E−04 2.66E−02 6.31E−01
t = 3 0.0710 7.64E−04 2.97E−02 6.56E−01
t = 4 0.0802 8.71E−04 3.37E−02 6.86E−01
t = 5 0.0909 9.99E−04 3.85E−02 7.14E−01
t = 6 0.1029 1.15E−03 4.39E−02 7.41E−01
t = 7 0.1160 1.31E−03 4.99E−02 7.66E−01
t = 8 0.1300 1.49E−03 5.64E−02 7.89E−01

Table 6.

HEPcont.
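The values in Table 6 can be reproduced with the short sketch below, which combines Eq. (6) and Eq. (7); the scenario multipliers are those of Table 4 and the HEPnom values are those of Table 2 for generic task 4. This is an illustrative reconstruction, not code from the original study.

```python
# PSF multipliers (available time, stress, complexity) for the three
# emergency conditions of Table 4.
SCENARIOS = {"low": (0.01, 1, 1), "moderate": (0.1, 2, 2), "high": (1, 5, 5)}

def psf_comp(multipliers):
    """Composite PSF, Eq. (6): product of the individual multipliers."""
    result = 1.0
    for m in multipliers:
        result *= m
    return result

def hep_cont(hep_nom, psf):
    """Contextualized HEP, Eq. (7) (SPAR-H-style adjustment)."""
    return hep_nom * psf / (hep_nom * (psf - 1) + 1)

# HEPnom of generic task 4 for t = 1..8 (Table 2).
HEP_NOM_GTT4 = [0.06, 0.0639, 0.0710, 0.0802, 0.0909, 0.1029, 0.1160, 0.1300]

for name, multipliers in SCENARIOS.items():
    psf = psf_comp(multipliers)            # 0.01, 0.4 and 25, as in Table 5
    row = [hep_cont(h, psf) for h in HEP_NOM_GTT4]
    print(name, psf, ["%.2e" % v for v in row])
```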

From a graphical point of view, Figure 5 shows the trend of HEPcont in the worst-case scenario.

Figure 5.

HEPcont (high hazard).

3.7. Step 7: determination HEPcont w/d

As stated, the PSFs have been modelled starting from the PSFs proposed by the SPAR-H methodology. It is worth noting that the values attributed to each PSF are proportional to the severity of its impact. However, this index does not take into account any interdependencies among the chosen PSFs. To cover this gap, a correlation among PSFs developed by Boring [19], who analysed 82 incident reports from US nuclear plants, has been taken into account in our case study, as shown in Table 7.

For diagnosis Available time Stress/stressors Complexity Experience/training Procedures Ergonomics/HMI Fitness for duty Work processes
Available time 1
Stress/stressors 0.67* 1
Complexity −0.02 0.15* 1
Experience/training −0.03 0.06 0.21* 1
Procedures −0.07 0.01 0.25* 0.28* 1
Ergonomics/HMI 0.01 0.06 −0.05 0.20* 0.09 1
Fitness for duty −0.03 0.03 −0.03 0.18* 0.09 0.44* 1
Work processes −0.06 0 0.24* 0.55* 0.36* 0.15* 0.10 1
For action
Available time 1
Stress/stressors 0.50* 1
Complexity 0.38* 0.35* 1
Experience/training 0.31* 0.21* 0.32* 1
Procedures 0.05 −0.01 0.12* 0.08* 1
Ergonomics/HMI 0.10* 0.04 0.08* 0.08* 0.29* 1
Fitness for duty 0.20* 0.29* 0.22* 0.17* 0.12* 0.27* 1
Work processes 0 0.13* 0.16* 0.20* 0.35* 0.12* 0.15* 1

Table 7.

PSFs correlation developed by Boring.

Asterisk (*) indicates significant correlations with p-value < 0.05.

Thus, the HEP index is given by Eq. (8):

HEPTask1|{PSFi; PSFj} = HEPTask1|PSFi + (1 − kij) × HEPTask1|PSFj   (8)

where

  • PSFi is the value obtained from the PSFcomp calculation (with independent PSFs);

  • PSFj is the additional PSF, which is supposed to be dependent on the previous one;

  • kij is the value of the parameter representing the interdependence between two (or more) PSFs.

To quantify the influence of PSFs, the HEPcont w/d is calculated through Eq. (9):

HEPcont w/d = (HEPnom × [PSFi + (1 − kij) × PSFj]) / (HEPnom × [PSFi + (1 − kij) × PSFj − 1] + 1)   (9)

Referring to our case study, HEPcont w/d is given by Eq. (10):

HEPcont w/d(t = 4)|{PSFi; PSFj} = (0.0802 × [25 + (1 − kij) × 3]) / (0.0802 × [25 + (1 − kij) × 3 − 1] + 1)   (10)

The kij value can be assigned by considering the value of the correlation coefficients, or based on a combination of expert judgement and data extrapolated from previous observations. The correlation between experience/training and the other PSFs is assumed to be moderate. In particular, a decision tree (Figure 6) is used to choose the best value of kij. The final result is kij = 0.6.

Figure 6.

Decision tree.

Then, the value of HEPcont w/d is given by Eq. (11):

HEPcont w/d(t = 4)|{PSFi; PSFj} = (0.0802 × [25 + (1 − 0.6) × 3]) / (0.0802 × [25 + (1 − 0.6) × 3 − 1] + 1) = 0.695   (11)
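A minimal sketch of the dependency correction follows; it assumes, as in Eqs. (10) and (11), that the composite multiplier PSFi + (1 − kij)·PSFj enters the same adjustment formula as Eq. (7). The function name is ours, not from the original text.

```python
def hep_cont_wd(hep_nom, psf_i, psf_j, k_ij):
    """HEPcont w/d, Eqs. (9)-(11): contextualized HEP with a dependent PSF."""
    psf_star = psf_i + (1 - k_ij) * psf_j      # composite multiplier
    return hep_nom * psf_star / (hep_nom * (psf_star - 1) + 1)

# High-hazard scenario at t = 4: HEPnom = 0.0802, PSFcomp = 25,
# experience/training multiplier = 3 (Table 8), kij = 0.6 (decision tree).
print(hep_cont_wd(0.0802, 25, 3, 0.6))         # ~0.695, as in Eq. (11)
```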

Table 8 shows the new values for the PSFs and Table 9 shows the values of HEPcont w/d for the fourth generic task.

PSF Low hazard Moderate hazard High hazard
Available time 0.01 0.1 1
Stress 1 2 5
Complexity 1 2 5
Experience/training 0.5 1 3

Table 8.

New PSFs value.

Generic task HEPnom (t) HEPcont w/d
Low hazard Moderate hazard High hazard
Fairly simple task performed rapidly or given scant attention t = 1 0.0600 1.32E−02 4.86E−02 6.26E−01
t = 2 0.0639 1.41E−02 5.18E−02 6.42E−01
t = 3 0.0710 1.58E−02 5.77E−02 6.67E−01
t = 4 0.0802 1.80E−02 6.52E−02 6.95E−01
t = 5 0.0909 2.06E−02 7.41E−02 7.24E−01
t = 6 0.1029 2.35E−02 8.41E−02 7.50E−01
t = 7 0.1160 2.68E−02 9.50E−02 7.75E−01
t = 8 0.1300 3.04E−02 1.07E−01 7.97E−01

Table 9.

HEPcontw/d.

3.8. Step 8: rating HEPnom after the 8th hour of work

In this step, the analysis was extended beyond 8 hours of work. Figure 7 shows the reliability between 0 and 16 hours of work.

Figure 7.

Reliability performance (t = 0–16).

For the analysis after 8 hours, the only element that changes is the k factor: we used k (t = 8) instead of the k (t = 1) considered for the first 8 hours. The remaining steps are unchanged. The use of the new factor k (t = 8) reflects the increase in operator fatigue: after the 8th hour of work, there is a step change in the reliability of the operator. Table 10 reports the HEPcont values for the first 16 hours of work.

Generic task HEPnom (t) HEPcont
Low hazard Moderate hazard High hazard
Fairly simple task performed rapidly or given scant attention t = 1 0.0600 6.38E−04 2.49E−02 6.15E−01
t = 2 0.0639 6.82E−04 2.66E−02 6.31E−01
t = 3 0.0710 7.64E−04 2.97E−02 6.56E−01
t = 4 0.0802 8.71E−04 3.37E−02 6.86E−01
t = 5 0.0909 9.99E−04 3.85E−02 7.14E−01
t = 6 0.1029 1.15E−03 4.39E−02 7.41E−01
t = 7 0.1160 1.31E−03 4.99E−02 7.66E−01
t = 8 0.1300 1.49E−03 5.64E−02 7.89E−01
t = 9 0.2085 2.63E−03 9.53E−02 8.68E−01
t = 10 0.2228 2.86E−03 1.03E−01 8.78E−01
t = 11 0.2377 3.11E−03 1.11E−01 8.86E−01
t = 12 0.2530 3.38E−03 1.19E−01 8.94E−01
t = 13 0.2687 3.66E−03 1.28E−01 9.02E−01
t = 14 0.2847 3.96E−03 1.37E−01 9.09E−01
t = 15 0.3010 4.29E−03 1.47E−01 9.15E−01
t = 16 0.3175 4.63E−03 1.57E−01 9.21E−01

Table 10.

HEPcont (t = 1–16).
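The extension beyond the 8th hour can be sketched as a piecewise use of the Weibull curve, where the reliability factor switches from k(t = 1) to k(t = 8); the snippet below is our reconstruction for generic task 4 and reproduces the HEPnom column of Table 10 up to small rounding differences.

```python
import math

BETA = 1.5
K1, K8 = 0.94, 0.87                        # generic task 4 (Table 1)
ALPHA = -math.log(K8 / K1) / 7 ** BETA     # Eq. (5)

def hep_nom_16h(t):
    """HEPnom over a 16-hour shift: same Weibull curve as Eq. (3), but after
    the 8th hour k(t=1) is replaced by k(t=8), giving the step of Figure 7."""
    k = K1 if t <= 8 else K8
    return 1 - k * math.exp(-ALPHA * (t - 1) ** BETA)

print([round(hep_nom_16h(t), 4) for t in range(1, 17)])
# ~0.06 ... 0.13 up to t = 8, then a jump to ~0.209 ... ~0.319 (cf. Table 10)
```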

3.9. Step 9: determination HEPtot: discussion and results

During emergency situations, work shifts may be longer than 8 hours, so the operators are subject to high stress loads. For this reason, we considered the variation of the PSFs with the passage of time. To calibrate the uncertainty due to the change of time, the success likelihood index method (SLIM) [20] is used to analyse the worst working conditions of the operators. Operator fatigue is the first element to consider. Fatigue is quantified using the Stanford sleepiness scale (SSS) [21], represented in Table 11. The result is a score related to drowsiness.

Degree of sleepiness Scale rating (R)
Feeling active, vital, alert, or wide awake 1
Functioning at high levels, but not at peak; able to concentrate 2
Awake, but relaxed; responsive but not fully alert 3
Somewhat foggy, let down 4
Foggy; losing interest in remaining awake; slowed down 5
Sleepy, woozy, fighting sleep; prefer to lie down 6
No longer fighting sleep, sleep onset soon; having dream-like thoughts 7

Table 11.

Stanford sleepiness scale.

The next step is to define the weight of each PSF relative to fatigue (W). The weights are expressed on a percentage scale and must sum to 100%. At this point, the modified SLI index is calculated using Eq. (12):

SLIj = Σi Rij × Wi   (12)

Table 12 shows the SLI index calculation for generic task 4, considering the presented PSFs.

Weighting (Wk) PSFs Rating (Rk) SLI = W∙R
0.2 Available time 4 0.8
0.5 Stress 4 2
0.3 Complexity 4 1.2

Table 12.

SLI for the GTT 4.

Fairly simple task performed rapidly or given scant attention (t = 10); ∑ SLI = 4.
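The SLI of Table 12 is a simple weighted sum; a minimal sketch (with our own variable names) is:

```python
# Fatigue ratings R (Stanford sleepiness scale) and weights W for the PSFs of
# generic task 4 at t = 10 (Table 12); the weights sum to 1.
weights_and_ratings = [(0.2, 4), (0.5, 4), (0.3, 4)]   # available time, stress, complexity

sli = sum(w * r for w, r in weights_and_ratings)       # Eq. (12)
print(sli)                                             # 4.0, as in Table 12
```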

The SLI index must be transformed into a HEP. It is assumed that SLI and HEP are related as follows (Eq. (13)):

log(P) = a × SLI + b   (13)

where P represents the HEP value and a and b are constants. At this point, it is necessary to calibrate the constants in order to obtain the value of HEP. To do this, a comparison between the extreme values of SLI and HEP is carried out. In the previous step, HEP index values for 1 ≤ t ≤ 16 were obtained; here, the extreme values of that range are used, as follows:

  • HEPmin = 6.38E−04 ➔ t = 1 (low hazard), SLI = 1

  • HEPmax = 9.21E−01 ➔ t = 16 (high hazard), SLI = 7

Using the inverse formula, the final equation that calculates the HEP index for each task is obtained. The formula is defined according to Eq. (14):

log(HEP) = a × SLI + b   (14)

The following system allows a and b to be determined:

log10(0.000638) = a × 1 + b  →  −3.19 = a + b
log10(0.921) = a × 7 + b  →  −0.035 = 7a + b
Subtracting the two equations: −3.155 = −6a  →  a = 0.52, b = −3.71   (15)

Having obtained the values a and b, the formulation for the final calculation of the index HEP becomes:

log(HEP) = 0.52 × SLI − 3.71;  for SLI = 4: log(HEP) = −1.63, HEP = 0.023   (16)
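The two-point calibration of Eqs. (13)–(16) can be reproduced as follows; small differences with respect to the values in the text arise only from the rounding of a and b (the text uses a = 0.52 and b = −3.71).

```python
import math

# Anchor points from Step 8: HEPmin at SLI = 1 and HEPmax at SLI = 7.
hep_min, sli_min = 6.38e-4, 1
hep_max, sli_max = 9.21e-1, 7

a = (math.log10(hep_max) - math.log10(hep_min)) / (sli_max - sli_min)
b = math.log10(hep_min) - a * sli_min
print(round(a, 2), round(b, 2))      # ~0.53 and ~-3.72 (text: 0.52 and -3.71)

def hep_from_sli(sli):
    """HEP from the calibrated SLIM relation log(HEP) = a*SLI + b, Eq. (16)."""
    return 10 ** (a * sli + b)

print(round(hep_from_sli(4), 3))     # ~0.024 here; 0.023 with the rounded a, b
```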

It is possible to note that the value of HEP is lower compared to the previous HEP calculated for t ranging from 1 to 16. In order to obtain a more accurate model a calibration is carried out. Figure 8 compares the HEPnom curve with the calibrated HEP curve.

Figure 8.

HEPnom—calibrated HEP (SLI).

The HEPtot value is calculated by adding the HEPnom values to the HEP values obtained with the SLI calibration (ΔHEP). In the EHEA model, HEPtot replaces the HEPnom value, as shown in Table 13.

EHEA SLIM
HEPnom Log10 (HEPnom) ∆HEP Log10 (∆HEP) SLI HEPtot
0.060 −1.221 0.0585 −1.232 1 0.118
0.063 −1.194 0.0585 −1.232 1 0.122
0.071 −1.148 0.0585 −1.232 1 0.129
0.080 −1.095 0.0663 −1.178 2 0.146
0.090 −1.041 0.0663 −1.178 2 0.157
0.102 −0.987 0.0663 −1.178 2 0.169
0.116 −0.935 0.0751 −1.124 3 0.191
0.130 −0.886 0.0751 −1.124 3 0.205
0.208 −0.680 0.0751 −1.124 3 0.283
0.222 −0.652 0.0851 −1.069 4 0.307
0.237 −0.623 0.0851 −1.069 4 0.322
0.253 −0.596 0.0964 −1.015 5 0.349
0.268 −0.570 0.0964 −1.015 5 0.365
0.284 −0.545 0.1093 −0.961 6 0.394
0.301 −0.521 0.1093 −0.961 6 0.410
0.317 −0.498 0.1239 −0.906 7 0.441

Table 13.

HEPtot.
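Finally, HEPtot in Table 13 is simply the sum of HEPnom and the calibrated increment ΔHEP; for example (values taken from the first and last rows of the table):

```python
# (HEPnom, dHEP) pairs from the first and last rows of Table 13.
rows = [(0.060, 0.0585), (0.317, 0.1239)]

for hep_nom, d_hep in rows:
    print(round(hep_nom + d_hep, 3))   # 0.118 and 0.441, matching HEPtot
```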

Figure 9 shows the graph of HEPtot. The model assumes that, starting from t = 8, the corresponding ΔHEP values are added to the nominal HEPnom value.

Figure 9.

HEPnom vs. HEPtot.

The main result of the model shows how human reliability depends on time and how important it is to consider operator performance beyond the canonical 8 hours of work.


4. Conclusion

In general, the modelling approaches used in HRA focus on describing sequential low-level tasks, which are not the main source of systemic errors. On the contrary, we believe that it is important to analyse in greater depth the human behaviour that causes errors, in order to develop managerial practices that can be applied to reduce the failures occurring at the interface between human behaviour and technology. Thus, the aim of this work was to develop an innovative methodology for human reliability analysis in emergency scenarios. A hybrid model has been proposed that integrates the advantages of the following methodologies: the human error assessment and reduction technique (HEART), the standardized plant analysis risk-human reliability analysis (SPAR-H) and the success likelihood index method (SLIM). The key point that we have tried to convey in this research is the analysis of all the environmental and behavioural factors that influence human reliability. The results obtained from the analysis of a real case study provide both an empirical contribution and a theoretical contribution with reference to the framework used to detect human error in risk and reliability analysis. Furthermore, the study offers a useful perspective for the academic community, making it more aware of new assumptions in human reliability analysis.


Acknowledgments

This research represents a result of research activity carried out with the financial support of MiuR, namely PRIN 2012 “DIEM-SSP, Disasters and Emergencies Management for Safety and Security in industrial Plants”.

References

  1. Dhillon BS. Human Reliability, Error, and Human Factors in Power Generation. New York: Springer; 2014
  2. Jung WD, Kang DI, Kim JW. Development of a Standard Method for Human Reliability Analysis of Nuclear Power Plants. Report of Korea Atomic Energy Research Institute, KAERI/TR-2961/2005. Republic of Korea; 2005
  3. Hollnagel E. Cognitive Reliability and Error Analysis Method (CREAM). Amsterdam: Elsevier; 1998
  4. De Felice F, Petrillo A, Zomparelli F. A hybrid model for human error probability analysis. IFAC-PapersOnLine. 2016;49(12):1673–1678
  5. Massaiu S. A model-based approach for the collection of human reliability data. In: Advances in Safety, Reliability and Risk Management: Proceedings of the European Safety and Reliability Conference, ESREL 2011; 18–22 September 2011; Troyes, France. 2012. pp. 595–603
  6. Liu Z, Chen L, Ren D. Prediction analysis of human error probability for mine hoisting systems. In: Proceedings of the 2009 IEEE 16th International Conference on Industrial Engineering and Engineering Management (IE&EM 2009); 21–23 October 2009; Beijing, China. pp. 1184–1188
  7. Cheng C-M, Hwang S-L. Applications of integrated human error identification techniques on the chemical cylinder change task. Applied Ergonomics. 2015;47:274–284
  8. De Felice F, Falcone D, Petrillo A, Bruzzone A, Longo F. A simulation model of human decision making in emergency conditions. In: 28th European Modeling and Simulation Symposium, EMSS 2016; 2016. pp. 148–154
  9. Thiruvengadachari S, Khasawneh MT, Bowling SR, Jiang X. Human-machine systems reliability: Current status and research perspective. In: IIE Annual Conference and Exposition; 2005
  10. Park J, Lee D, Jung W, Kim J. An experimental investigation on relationship between PSFs and operator performances in the digital main control room. Annals of Nuclear Energy. 2017;101:58–68
  11. Kim JW, Jung WD. A taxonomy of performance influencing factors for human reliability analysis of emergency tasks. Journal of Loss Prevention in the Process Industries. 2003;16(6):479–495
  12. Rasmussen J. Outlines of a hybrid model of the process operator. In: Sheridan TB, Johannsen G, editors. Monitoring Behaviour and Supervisory Control. New York: Plenum Press; 1976
  13. van der Schaaf TW, Kanse L. Biases in incident reporting databases: An empirical study in the chemical process industry. Safety Science. 2004;42:57–67
  14. Di Pasquale V, Miranda S, Iannone R, Riemma S. A simulator for human error probability analysis (SHERPA). Reliability Engineering & System Safety. 2015;139:17–32
  15. Thommesen J, Andersen HB. Human Error Probabilities (HEPs) for Generic Tasks and Performance Shaping Factors (PSFs) Selected for Railway Operations. DTU Management Engineering Report No. 3. Denmark: Department of Management Engineering, Technical University of Denmark; 2012
  16. Williams JC. HEART – A proposed method for achieving high reliability in process operation by means of human factors engineering technology. In: Proceedings of a Symposium on the Achievement of Reliability in Operating Plant, Safety and Reliability Society (SaRS); 1985; NEC, Birmingham
  17. Gertman D, Blackman H, Marble J, Byers J, Smith C. The SPAR-H Human Reliability Analysis Method. Washington, DC; 2004
  18. U.S. NRC. The SPAR-H Method. NUREG/CR-6883; 2005
  19. Boring RL. Dynamic human reliability analysis: Benefits and challenges of simulating human performance. Risk, Reliability and Societal Safety. 2007;2:1043–1049
  20. Embrey DE, Humphreys PC, Rosa EA, Kirwan B, Rea K. SLIM-MAUD: An Approach to Assessing Human Error Probabilities Using Structured Expert Judgement. NUREG/CR-3518. Washington, DC: U.S. Nuclear Regulatory Commission; 1984
  21. Roelen ALC, Wever R, Hale AR, Goossens LHJ, Cooke RM, Lopuhaa R, Simons M, Valk PJL. Causal Modelling of Air Safety. Demonstration Model. NLR-CR-2002-662. Amsterdam: NLR; 2002
