Open access peer-reviewed chapter

An Experimental Study on Developing a Cognitive Model for Human Reliability Analysis

By Domenico Falcone, Fabio De Felice, Antonella Petrillo and Alessandro Silvestri

Submitted: October 25th 2016. Reviewed: April 14th 2017. Published: June 21st 2017.

DOI: 10.5772/intechopen.69230


Abstract

Serious incidents that occur inside industrial plants, or are caused by them, represent a very critical issue. In this context, human reliability analysis (HRA) is an important tool to assess the human factors that influence behaviour in disaster scenarios. Indeed, the reliability of the interaction between humans and machine systems is an important factor that affects the overall performance and safety of industrial plants. However, even though HRA techniques have been available for decades, there is no universal method or procedure to reduce the human errors that affect performance. This study aims to design a novel approach to investigate the relationship between human reliability and operator performance, considering the dependence on the time available to make decisions.

Keywords

  • disaster management
  • human reliability analysis
  • cognitive model
  • PSFs

1. Introduction

The increasing complexity of industrial systems requires the adoption of adequate approaches to manage emergency situations in case of accidents and disasters. In this context, the analysis of human reliability represents a crucial task [1]. In fact, the human factor is a predominant element in the study of accidents/disasters, not only at the probability level, but also in terms of the severity of the expected effects [2]. HRA is a set of techniques that describes the conditions of the operator during work, taking into account errors and unsafe actions [3]. In other words, HRA aims to describe the physical and environmental conditions in which operators carry out their tasks, considering errors, skills, experience and ability [4]. The importance of the topic motivated a survey of the Scopus database, the largest abstract and citation database of peer-reviewed literature. The search string used in the literature survey was ‘human reliability analysis’, defined according to the standards of the Scopus database. Only articles in which this string appeared in the keywords were analysed. The analysis pointed out that from 1964 (the year in which the first article appeared) until February 2017 (the period of the survey), 40,958 documents were published, divided into 32,865 articles, 3671 conference papers and a remaining share of books, editorials, letters, etc. The results show that the scientific production on this topic is very wide and covers many scientific areas (engineering, medicine, social science, etc.). Furthermore, it is interesting to note that most of the publications (13,842) originate from the USA. Of course, since the search was very general, the large number of documents found does not permit an analysis specific to our scientific interest. Thus, considering our specific field of interest, we refined the search by applying a preliminary filter.
The search string used was ‘human reliability analysis AND industrial plant’. Only articles in which this string appeared in the keywords were analysed. In this case, only 46 documents were found, from 1984 to 2017, meaning that on average two articles per year have been published. Similarly, we conducted a deeper analysis by applying a second filter, with the search string ‘human reliability analysis AND industrial plant AND cognitive model’. Considering the three criteria (1) article title, (2) abstract and (3) keywords, only six articles were found; restricting to the ‘keywords’ criterion alone, only four articles were found.

Among the documents found, we selected some of them. An interesting point of view is analysed by Massaiu [5]. In his paper, a new approach is proposed to address the weaknesses of the HRA method, or in other words, its lack of empirical support. In detail, a test of the ability to identify regularities among environmental conditions (procedures), crew expertise (teamwork) and crew behaviours was investigated. Liu et al. [6] apply the cognitive reliability and error analysis method (CREAM) to calculate the failure probability of human cognitive activities for mine hoisting operation. Cheng and Hwang [7] outline the human error identification (HEI) techniques that currently exist to assess latent human errors in the chemical cylinder change task.

The literature review shows that human reliability analysis is an issue of growing importance in the scientific world. But there are some limits. The major limit of HRA is the uncertainty, which does not allow full use of reliability analysis [8]. Furthermore, several human reliability models follow a static approach, in which human errors are described as omission/commission errors [9]. In our opinion, however, it is essential to consider the physical and cognitive environment as a function of the time in which human errors develop. This consideration led us to the development of an integrated reliability model that takes into account the dynamic influence of operators. Thus, our study aims to propose a novel approach to investigate the relationship between performance shaping factors (PSFs) and operator performance. In HRA, PSFs encompass the various factors that affect human performance; they can increase or decrease the probability of human error [10, 11]. Our research aims to develop a multi-dimensional, structured model that can be applied to different types of activities and different disaster scenarios to avoid potential operational errors. The model takes into account technical and environmental factors that can influence the decisions and actions of operators, and combines the cognitive aspects of operational analysis, a mathematical approach and a probabilistic quantification of the error. A real case study concerning the adoption of best practices for a petrochemical plant’s control room during an emergency situation is presented. The rest of the chapter is organized as follows: Section 2 analyses the experimental design; Section 3 presents the detailed model through a real case study and discusses its main results; and Section 4 summarizes conclusions and future developments.


2. Experimental study: the model framework

The most influential models of operator behaviour [12, 13] assume three levels of behavioural error: (1) automatic reactions demanding little or no attention; (2) attentive reactions, when one knows how to handle a certain, well-known situation; and (3) creative, analytical reactions, when confronted with new, unknown problems without off-the-shelf solutions. This classification is certainly helpful, but it is not sufficient to capture the dynamism that characterizes the reliability of human-machine systems. It is necessary to reduce human error and hence to develop the capability to find (intuitively) solutions to unexpected problems. Our model is based on this consideration. In detail, the model framework consists of nine steps (as shown in Figure 1): Step 1—Preliminary analysis; Step 2—Generic tasks assessment (GTTs); Step 3—Definition of the Weibull distribution function; Step 4—Choice of performance shaping factors (PSFs); Step 5—Determination of PSFcomp; Step 6—Determination of HEPcont; Step 7—Determination of HEPcont w/d; Step 8—Rating of HEPnom after the 8th hour of work; and Step 9—Determination of HEPtot.

Figure 1.

Methodological flowchart (author’s elaboration).

The model is applied in a real case study concerning the emergency management within a petrochemical company. In detail, the model aims to investigate the adoption of best practices for company’s control room in order to ensure a consistent response under demanding circumstances.

3. Model development: description of a real case study

In the present section, a detailed analysis of each step is provided.

3.1. Step 1: preliminary analysis

The first step aims to define the actions carried out by operators. Each operation is assigned a human error probability (HEPnom), which represents the unreliability of the operator and is a critical input for a proper human reliability analysis [14]. Obviously, the probability of error is a function of time: as the working hours increase, the likelihood of error increases. The scenario analysed concerns the management of a fire that occurred in the petrochemical plant. The key element is maintaining a state of readiness and an awareness of the working environment. Figure 2 shows the industrial plant under study and the control room.

Figure 2.

Scenario under study.

During a fire emergency, it is important to set common standards and to ensure that personnel are continuously trained, assessed and re-assessed against these summaries of best practice. All fire alarms are to be taken seriously. Evacuation of the facility is mandatory until the signal to re-enter has been given by appropriate personnel and the alarm bells have ceased ringing. Figure 3 shows procedures that are to be followed any time a fire alarm sounds.

Figure 3.

Fire emergency protocol.

In detail, two operators were engaged in the control room, but only one was in charge of handling emergency procedures during the fire. The operator was responsible for activating the emergency procedure, which includes: (1) total block of the furnaces; (2) closure of all turbines; (3) closure of the propane valve; (4) locking of the propane handling sequence; (5) closure of the flow control valves; and (6) closure of the emergency procedure. The three main emergency conditions that may occur are: (1) low hazard, in which, despite the emergency, the decision maker simply monitors the situation; (2) moderate hazard, in which the decision maker may take wrong decisions; and (3) high hazard, in which the decision maker is very likely to make a mistake.

3.2. Step 2: generic tasks assessment (GTTs)

The present step aims to define generic tasks (GTTs), i.e. a set of generic error probabilities for different types of task. The tasks were defined according to the scientific literature [15], which proposes for each task a range of human unreliability values, bounded at the 5th percentile (for the first hour of work) and at the 95th percentile (for the eighth hour of work). Reliability is maximum at the first hour of work (t = 1) and minimum at the eighth hour of work (t = 8), as defined in Eq. (1):

k = 1 − HEPnom(t),  t ∈ [1; 8]  (1)

The parameter k represents the value of the operator’s reliability.

In our case study, only three significant generic tasks (4, 7 and 8) were considered in order to approximate the operator’s activities in the control room, as shown in Table 1.

| No. | Generic task | Limitations of unreliability (%) | k(t = 1) | k(t = 8) | α | β |
|---|---|---|---|---|---|---|
| 1 | Totally unfamiliar | 0.35–0.97 | 0.65 | 0.03 | 0.1661 | 1.5 |
| 2 | System recovery | 0.14–0.42 | 0.86 | 0.58 | 0.0213 | 1.5 |
| 3 | Complex task requiring high level of comprehension and skill | 0.12–0.28 | 0.88 | 0.72 | 0.0108 | 1.5 |
| 4 | Fairly simple task performed rapidly or given scant attention | 0.06–0.13 | 0.94 | 0.87 | 0.0042 | 1.5 |
| 5 | Routine, highly practised | 0.007–0.045 | 0.993 | 0.955 | 0.0021 | 1.5 |
| 6 | Restoring a system by following the procedures of controls | 0.008–0.007 | 0.992 | 0.993 | −5.44E−05 | 1.5 |
| 7 | Completely familiar, well designed, highly practised, routine task | 0.00008–0.009 | 0.9999 | 0.991 | 0.00005 | 1.5 |
| 8 | Respond correctly to system command even when there is an augmented or automated supervisory system | 0.00000–0.0009 | 1 | 0.999 | 4.86E−05 | 1.5 |

Table 1.

Generic tasks.

3.3. Step 3: definition of the Weibull distribution function

After defining the GTTs, the probability of error associated with each GTT was defined according to the Weibull probability distribution, which best describes the probability of error. In detail, the probability of error is described by the human error probability (HEP) index, defined according to the Weibull distribution as follows (Eq. (2)):

HEPnom = 1 − e^(−α·t^β)  (2)

where the parameters α and β represent, respectively, the scale and the shape of the curve. With this formula, the probability of error assumes its minimum value during the first hour of work and its maximum at the eighth hour. Consequently, the probability distribution of error in Eq. (2) is adapted as follows (Eq. (3)):

HEPnom(t) = 1 − k·e^(−α·(1 − t)^β),  t ∈ [0; 1]
HEPnom(t) = 1 − k·e^(−α·(t − 1)^β),  t ∈ ]1; ∞[  (3)

The value of k is calculated from the value that the curve takes at t = 1, while the parameter β = 1.5 is taken from the scientific literature on the human error assessment and reduction technique (HEART) model developed by Williams [16]. The value of α is determined by fixing the value of the function at t = 8 for each GTT. Starting from this function, it is possible to calculate the value of α through the inverse formula, see Eq. (4):

HEPnom(t) = 1 − k·e^(−α·(t − 1)^β),  t ∈ ]1; ∞[  (4)

The α coefficient is given by Eq. (5), as follows:

α = −ln[k(t = 8) / k(t = 1)] / (t − 1)^β,  with t = 8  (5)

Figure 4 shows the reliability performance according to Weibull distribution.

Figure 4.

Reliability performance (t= 0–8).

Table 2 shows the HEPnom values for the case study, calculated for the three generic tasks.

| | Generic task 4 | Generic task 7 | Generic task 8 |
|---|---|---|---|
| HEPnom(t = 1) | 0.06 | 0.0001 | 0 |
| HEPnom(t = 2) | 0.0639 | 0.0006 | 0.00005 |
| HEPnom(t = 3) | 0.0710 | 0.0014 | 0.0001 |
| HEPnom(t = 4) | 0.0802 | 0.0026 | 0.0003 |
| HEPnom(t = 5) | 0.0909 | 0.0039 | 0.0004 |
| HEPnom(t = 6) | 0.1029 | 0.0055 | 0.0005 |
| HEPnom(t = 7) | 0.1160 | 0.0072 | 0.0007 |
| HEPnom(t = 8) | 0.1300 | 0.0090 | 0.0009 |

Table 2.

HEPnom.
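The computation behind Table 2 can be sketched in a few lines of Python. This is an illustrative reconstruction (the function names are ours, not from the chapter), using the generic task 4 parameters from Table 1 together with Eqs. (3) and (5):

```python
import math

def weibull_alpha(k1, k8, beta=1.5):
    """Scale parameter alpha from Eq. (5): fixes the curve at t = 8."""
    return math.log(k1 / k8) / (8 - 1) ** beta

def hep_nom(t, k1, alpha, beta=1.5):
    """Nominal human error probability for t >= 1, second branch of Eq. (3)."""
    return 1 - k1 * math.exp(-alpha * (t - 1) ** beta)

# Generic task 4: k(t=1) = 0.94, k(t=8) = 0.87 (Table 1)
alpha = weibull_alpha(0.94, 0.87)   # ~0.0042, as reported in Table 1
for t in range(1, 9):
    print(t, round(hep_nom(t, 0.94, alpha), 4))   # t=1 -> 0.06 ... t=8 -> 0.13
```

By construction the curve passes exactly through the two anchor points (HEPnom = 0.06 at t = 1 and 0.13 at t = 8), and the intermediate values match the generic task 4 column of Table 2.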

3.4. Step 4: choice of performance shaping factors (PSFs)

In the present step, the PSFs were defined. PSFs allow taking into account all the environmental and behavioural factors that influence the operator’s cognitive behaviour. In particular, PSFs simulate different emergency scenarios. Analytically, PSFs increase the value of the error probability by introducing external factors that could strain the ‘decision maker’. The PSFs and their values are taken from the standardized plant analysis risk-human reliability analysis (SPAR-H) method [17, 18]. Table 3 shows the PSFs considered.

| PSF | PSF level | Multiplier |
|---|---|---|
| Available time | Inadequate time | HEP = 1 |
| | Time available > 5× time required | 0.1 |
| | Time available > 50× time required | 0.01 |
| Stress/stressors | Extreme | 5 |
| | High | 2 |
| | Nominal | 1 |
| Complexity | Highly complex | 5 |
| | Moderately complex | 2 |
| | Nominal | 1 |
| | Good | 0.5 |

Table 3.

Performance shaping factors.

Table 4 shows the PSFs defined according to the three emergency conditions (see Step 1).

| PSF | Low hazard | Moderate hazard | High hazard |
|---|---|---|---|
| Available time | 0.01 | 0.1 | 1 |
| Stress | 1 | 2 | 5 |
| Complexity | 1 | 2 | 5 |

Table 4.

PSFs for the three emergency conditions.

3.5. Step 5: determination PSFcomp

Having defined the PSFs and their multipliers, it is important to evaluate the overall PSF index (PSFcomp), as follows (Eq. (6)):

PSFcomp = ∏(i=1..n) PSFi  (6)

The PSFcomp index summarizes the weight of each influencing factor with respect to the operator’s actions/decisions. Table 5 reports the PSFcomp values for the three emergency levels.

| | Low hazard | Moderate hazard | High hazard |
|---|---|---|---|
| PSFcomp = (PSF1 × PSF2 × PSF3) | 0.01 | 0.4 | 25 |

Table 5.

PSFcomp.
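As a minimal sketch (variable names are ours), the composite index of Eq. (6) is simply the product of the hazard-level multipliers listed in Table 4, which reproduces the three Table 5 values:

```python
import math

# PSF multipliers per hazard level (available time, stress, complexity) - Table 4
psf = {"low": (0.01, 1, 1), "moderate": (0.1, 2, 2), "high": (1, 5, 5)}

# Eq. (6): PSF_comp is the product of the individual multipliers
psf_comp = {level: math.prod(values) for level, values in psf.items()}
print(psf_comp)   # low: 0.01, moderate: 0.4, high: 25 (Table 5)
```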

3.6. Step 6: determination HEPcont

This step contextualizes the error probability analysis, as defined in Eq. (7):

HEPcont = (HEPnom · PSFcomp) / [HEPnom · (PSFcomp − 1) + 1]  (7)

The HEPcont value gives the probability of error of the decision maker as a function of the influencing factors. HEPcont is closely linked to two parameters: the time (1 ≤ t ≤ 8) and the value of the PSFs. In other words, the HEPcont value increases both with time and with the ‘danger’ of the assumed emergency scenario. Table 6 shows HEPcont for generic task 4 under the different emergency levels.

Generic task 4: fairly simple task performed rapidly or given scant attention.

| t | HEPnom(t) | HEPcont (low hazard) | HEPcont (moderate hazard) | HEPcont (high hazard) |
|---|---|---|---|---|
| 1 | 0.0600 | 6.38E−04 | 2.49E−02 | 6.15E−01 |
| 2 | 0.0639 | 6.82E−04 | 2.66E−02 | 6.31E−01 |
| 3 | 0.0710 | 7.64E−04 | 2.97E−02 | 6.56E−01 |
| 4 | 0.0802 | 8.71E−04 | 3.37E−02 | 6.86E−01 |
| 5 | 0.0909 | 9.99E−04 | 3.85E−02 | 7.14E−01 |
| 6 | 0.1029 | 1.15E−03 | 4.39E−02 | 7.41E−01 |
| 7 | 0.1160 | 1.31E−03 | 4.99E−02 | 7.66E−01 |
| 8 | 0.1300 | 1.49E−03 | 5.64E−02 | 7.89E−01 |

Table 6.

HEPcont.
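Eq. (7) can be checked with a short illustrative snippet (our own naming), which reproduces rows of Table 6 from the PSFcomp values of Table 5:

```python
def hep_cont(hep_nom, psf_comp):
    """Contextualized HEP, Eq. (7): rescales HEP_nom by the composite PSF."""
    return (hep_nom * psf_comp) / (hep_nom * (psf_comp - 1) + 1)

# PSF_comp per Table 5: low = 0.01, moderate = 0.4, high = 25
for label, psf in [("low", 0.01), ("moderate", 0.4), ("high", 25)]:
    # t = 1, generic task 4: HEP_nom = 0.0600; matches the first row of Table 6
    print(label, hep_cont(0.0600, psf))
```

Note that Eq. (7) keeps HEPcont bounded in [0, 1] even for large multipliers (e.g. PSFcomp = 25), unlike a plain product HEPnom · PSFcomp.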

From a graphical point of view, Figure 5 shows the trend of HEPcont in the worst-case scenario.

Figure 5.

HEPcont(high hazard).

3.7. Step 7: determination of HEPcont w/d

As stated, the PSFs have been modelled starting from those proposed by the SPAR-H methodology. It is worth noting that the values attributable to each PSF are proportional to the severity of its impact. However, this index does not take into account any interdependencies among the chosen PSFs. To cover this gap, a correlation among PSFs developed by Boring [19], who analysed 82 incident reports from US nuclear plants, has been taken into account for our case study, as shown in Table 7.

For diagnosis

| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| 1. Available time | 1 | | | | | | | |
| 2. Stress/stressors | 0.67* | 1 | | | | | | |
| 3. Complexity | −0.02 | 0.15* | 1 | | | | | |
| 4. Experience/training | −0.03 | 0.06 | 0.21* | 1 | | | | |
| 5. Procedures | −0.07 | 0.01 | 0.25* | 0.28* | 1 | | | |
| 6. Ergonomics/HMI | 0.01 | 0.06 | −0.05 | 0.20* | 0.09 | 1 | | |
| 7. Fitness for duty | −0.03 | 0.03 | −0.03 | 0.18* | 0.09 | 0.44* | 1 | |
| 8. Work processes | −0.06 | 0 | 0.24* | 0.55* | 0.36* | 0.15* | 0.10 | 1 |

For action

| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| 1. Available time | 1 | | | | | | | |
| 2. Stress/stressors | 0.50* | 1 | | | | | | |
| 3. Complexity | 0.38* | 0.35* | 1 | | | | | |
| 4. Experience/training | 0.31* | 0.21* | 0.32* | 1 | | | | |
| 5. Procedures | 0.05 | −0.01 | 0.12* | 0.08* | 1 | | | |
| 6. Ergonomics/HMI | 0.10* | 0.04 | 0.08* | 0.08* | 0.29* | 1 | | |
| 7. Fitness for duty | 0.20* | 0.29* | 0.22* | 0.17* | 0.12* | 0.27* | 1 | |
| 8. Work processes | 0 | 0.13* | 0.16* | 0.20* | 0.35* | 0.12* | 0.15* | 1 |

Table 7.

PSFs correlation developed by Boring.

An asterisk (*) indicates significant correlations with p value < 0.05.

Thus, the HEP index is given by Eq. (8):

HEP_Task1|{PSFi; PSFj} = HEP_Task1|PSFi + (1 − kij) · HEP_Task1|PSFj  (8)

where

  • PSFi is the value obtained from the PSFcomp calculation (with independent PSFs);

  • PSFj is the additional PSF, which is supposed to be dependent on the previous one;

  • kij is the parameter representing the interdependence between two (or more) PSFs.

To quantify the influence of PSFs, the HEPcont w/d is calculated through Eq. (9):

HEPcont w/d = HEPnom · [PSFi + (1 − kij) · PSFj]  (9)

Referring to our case study, HEPcont w/d is given by Eq. (10):

HEPcont w/d(t = 4)|{PSFi; PSFj} = 0.0802 · [25 + (1 − kij) · 3] / {0.0802 · [25 + (1 − kij) · 3 − 1] + 1}  (10)

The kij value can be assigned by considering the value of the correlation coefficients, or based on a combination of expert judgment and data extrapolated from previous observations. The correlation between experience/training and the other PSFs is assumed to be moderate. In particular, a decision tree (Figure 6) is defined in order to choose the best value for kij. The final result is kij = 0.6.

Figure 6.

Decision tree.

Then, the value of HEPcont w/d is given by Eq. (11):

HEPcont w/d(t = 4)|{PSFi; PSFj} = 0.0802 · [25 + (1 − 0.6) · 3] / {0.0802 · [25 + (1 − 0.6) · 3 − 1] + 1} = 0.695  (11)

Table 8 shows the new values for the PSFs and Table 9 shows the values of HEPcont w/d for the fourth generic task.

| PSF | Low hazard | Moderate hazard | High hazard |
|---|---|---|---|
| Available time | 0.01 | 0.1 | 1 |
| Stress | 1 | 2 | 5 |
| Complexity | 1 | 2 | 5 |
| Experience/training | 0.5 | 1 | 3 |

Table 8.

New PSFs value.

Generic task 4: fairly simple task performed rapidly or given scant attention.

| t | HEPnom(t) | HEPcont w/d (low hazard) | HEPcont w/d (moderate hazard) | HEPcont w/d (high hazard) |
|---|---|---|---|---|
| 1 | 0.0600 | 1.32E−02 | 4.86E−02 | 6.26E−01 |
| 2 | 0.0639 | 1.41E−02 | 5.18E−02 | 6.42E−01 |
| 3 | 0.0710 | 1.58E−02 | 5.77E−02 | 6.67E−01 |
| 4 | 0.0802 | 1.80E−02 | 6.52E−02 | 6.95E−01 |
| 5 | 0.0909 | 2.06E−02 | 7.41E−02 | 7.24E−01 |
| 6 | 0.1029 | 2.35E−02 | 8.41E−02 | 7.50E−01 |
| 7 | 0.1160 | 2.68E−02 | 9.50E−02 | 7.75E−01 |
| 8 | 0.1300 | 3.04E−02 | 1.07E−01 | 7.97E−01 |

Table 9.

HEPcont w/d.
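The dependency-adjusted calculation of Eqs. (10)–(11) can be sketched as follows (an illustrative reconstruction with our own naming):

```python
def hep_cont_wd(hep_nom, psf_i, psf_j, k_ij):
    """Dependency-adjusted contextual HEP, Eq. (10): the additional PSF_j
    enters with weight (1 - k_ij) before the Eq. (7)-style rescaling."""
    psf_eff = psf_i + (1 - k_ij) * psf_j
    return (hep_nom * psf_eff) / (hep_nom * (psf_eff - 1) + 1)

# t = 4, generic task 4, high hazard: PSF_comp = 25,
# additional PSF (experience/training) = 3, k_ij = 0.6 from the decision tree
print(hep_cont_wd(0.0802, 25, 3, 0.6))   # ~0.6955, i.e. the 0.695 of Eq. (11)
```

With kij = 1 (full dependence) the additional PSF drops out entirely and the result collapses back to the plain HEPcont of Eq. (7); with kij = 0 it contributes in full.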

3.8. Step 8: rating HEPnom after the 8th hour of work

In this step, the analysis is extended beyond 8 hours of work. Figure 7 shows the reliability between 0 and 16 hours of work.

Figure 7.

Reliability performance (t= 0–16).

For the analysis after 8 hours, the only thing that changes is the k factor: we use k(t = 8), while for the first 8 hours we use k(t = 1). The remaining steps are unchanged. The new factor k(t = 8) models the increase in operator fatigue: after the 8th hour of work there is a step change in the reliability of the operator. Table 10 reports the HEPcont values for the first 16 hours of work.

Generic task 4: fairly simple task performed rapidly or given scant attention.

| t | HEPnom(t) | HEPcont (low hazard) | HEPcont (moderate hazard) | HEPcont (high hazard) |
|---|---|---|---|---|
| 1 | 0.0600 | 6.38E−04 | 2.49E−02 | 6.15E−01 |
| 2 | 0.0639 | 6.82E−04 | 2.66E−02 | 6.31E−01 |
| 3 | 0.0710 | 7.64E−04 | 2.97E−02 | 6.56E−01 |
| 4 | 0.0802 | 8.71E−04 | 3.37E−02 | 6.86E−01 |
| 5 | 0.0909 | 9.99E−04 | 3.85E−02 | 7.14E−01 |
| 6 | 0.1029 | 1.15E−03 | 4.39E−02 | 7.41E−01 |
| 7 | 0.1160 | 1.31E−03 | 4.99E−02 | 7.66E−01 |
| 8 | 0.1300 | 1.49E−03 | 5.64E−02 | 7.89E−01 |
| 9 | 0.2085 | 2.63E−03 | 9.53E−02 | 8.68E−01 |
| 10 | 0.2228 | 2.86E−03 | 1.03E−01 | 8.78E−01 |
| 11 | 0.2377 | 3.11E−03 | 1.11E−01 | 8.86E−01 |
| 12 | 0.2530 | 3.38E−03 | 1.19E−01 | 8.94E−01 |
| 13 | 0.2687 | 3.66E−03 | 1.28E−01 | 9.02E−01 |
| 14 | 0.2847 | 3.96E−03 | 1.37E−01 | 9.09E−01 |
| 15 | 0.3010 | 4.29E−03 | 1.47E−01 | 9.15E−01 |
| 16 | 0.3175 | 4.63E−03 | 1.57E−01 | 9.21E−01 |

Table 10.

HEPcont (t= 1–16).
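The extended-shift rule above (switch from k(t = 1) to k(t = 8) after the 8th hour, everything else unchanged) can be sketched as follows; this is our own illustrative reconstruction for generic task 4, and it reproduces the step change visible in the HEPnom column of Table 10:

```python
import math

BETA = 1.5
K1, K8 = 0.94, 0.87                    # generic task 4 (Table 1)
ALPHA = math.log(K1 / K8) / 7 ** BETA  # Eq. (5)

def hep_nom_extended(t):
    """HEP_nom over a 16-hour shift: after the 8th hour the curve restarts
    from the lower reliability k(t=8), modelling accumulated fatigue."""
    k = K1 if t <= 8 else K8
    return 1 - k * math.exp(-ALPHA * (t - 1) ** BETA)

print(round(hep_nom_extended(8), 4))   # 0.13   (last hour of the normal shift)
print(round(hep_nom_extended(9), 4))   # 0.2085 (step change, as in Table 10)
```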

3.9. Step 9: determination HEPtot: discussion and results

During emergency situations, work shifts may be longer than 8 hours, so the operators are subject to high stress loads. For this reason, we considered the variation of the PSFs with the passage of time. To calibrate the uncertainty due to the passage of time, the worst working conditions of the operators are analysed using the success likelihood index method (SLIM) [20]. Operator fatigue is the first element to consider. Fatigue is quantified using the Stanford sleepiness scale (SSS) [21], represented in Table 11. The result is a score related to drowsiness.

| Degree of sleepiness | Scale rating (R) |
|---|---|
| Feeling active, vital, alert, or wide awake | 1 |
| Functioning at high levels, but not at peak; able to concentrate | 2 |
| Awake, but relaxed; responsive but not fully alert | 3 |
| Somewhat foggy, let down | 4 |
| Foggy; losing interest in remaining awake; slowed down | 5 |
| Sleepy, woozy, fighting sleep; prefer to lie down | 6 |
| No longer fighting sleep, sleep onset soon; having dream-like thoughts | 7 |

Table 11.

Stanford sleepiness scale.

The next step is to define the incidence of each PSF relative to fatigue (W). The values are expressed on a percentage scale, and the weights must sum to 100%. The modified SLI index is then calculated using Eq. (12):

SLIj = Σi Rij · Wi  (12)

Table 12 shows the SLI index calculation for generic task 4, considering the presented PSFs.

| Weighting (W) | PSF | Rating (R) | SLI = W·R |
|---|---|---|---|
| 0.2 | Available time | 4 | 0.8 |
| 0.5 | Stress | 4 | 2 |
| 0.3 | Complexity | 4 | 1.2 |

Table 12.

SLI for the GTT 4.

Fairly simple task performed rapidly or given scant attention (t = 10); ∑SLI = 4.

The SLI index must be transformed into a HEP. It is assumed that SLI and HEP are related as follows (Eq. (13)):

log(P) = a · SLI + b  (13)

where P represents the HEP value and a and b are constants. At this point it is necessary to calibrate the constants to obtain the value of HEP. To do this, a comparison between the SLI values is carried out. In the previous step, HEP index values for 1 ≤ t ≤ 16 were obtained; here, the extreme values of that range are used, as follows:

  • HEPmin = 6.38E−04 ➔ t = 1 (low hazard), SLI = 1

  • HEPmax = 9.21E−01 ➔ t = 16 (high hazard), SLI = 7

Using the inverse formula, the final equation that calculates the HEP index for each task is obtained. The formula is defined according to Eq. (14):

log(HEP) = a · SLI + b  (14)

The following system of equations defines a and b:

log10(0.000638) = a·1 + b  ⇒  −3.19 = a + b
log10(0.921) = a·7 + b  ⇒  −0.035 = 7a + b
Subtracting the first equation from the second: 6a = 3.155, hence a = 0.52 and b = −3.71.  (15)

Having obtained the values of a and b, the formulation for the final calculation of the HEP index becomes:

log(HEP) = 0.52 · SLI − 3.71;  for SLI = 4: log(HEP) = −1.63  ⇒  HEP = 0.023  (16)
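The SLIM calibration of Eqs. (13)–(16) amounts to fitting a straight line in log10 space through the two anchor points; an illustrative sketch (our own naming) follows:

```python
import math

# Calibration anchors from the case study: the extreme HEP values
# over t = 1..16 are mapped onto the SLI scale endpoints.
hep_min, sli_min = 6.38e-4, 1   # t = 1, low hazard
hep_max, sli_max = 9.21e-1, 7   # t = 16, high hazard

# Solve log10(HEP) = a*SLI + b through the two anchor points
a = (math.log10(hep_max) - math.log10(hep_min)) / (sli_max - sli_min)
b = math.log10(hep_min) - a * sli_min

def hep_slim(sli):
    """Calibrated HEP from a Stanford-sleepiness-based SLI score, Eq. (14)."""
    return 10 ** (a * sli + b)

print(a, b)          # ~0.527 and ~-3.72 (the chapter rounds to 0.52 and -3.71)
print(hep_slim(4))   # ~0.024, consistent with the 0.023 of Eq. (16)
```

The small discrepancy at SLI = 4 comes only from the chapter rounding a and b to two decimals before evaluating Eq. (16).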

It can be noted that this HEP value is lower than the previous HEP calculated for t ranging from 1 to 16. In order to obtain a more accurate model, a calibration is carried out. Figure 8 compares the HEPnom curve with the calibrated HEP curve.

Figure 8.

HEPnom vs. calibrated HEP (SLI).

The HEPtot value is calculated by adding the HEPnom values to the HEP values obtained from the SLI calibration (ΔHEP). HEPtot replaces the HEPnom value of the EHEA model, as shown in Table 13.

| HEPnom (EHEA) | Log10(HEPnom) | ∆HEP (SLIM) | Log10(∆HEP) | SLI | HEPtot |
|---|---|---|---|---|---|
| 0.060 | −1.221 | 0.0585 | −1.232 | 1 | 0.118 |
| 0.063 | −1.194 | 0.0585 | −1.232 | 1 | 0.122 |
| 0.071 | −1.148 | 0.0585 | −1.232 | 1 | 0.129 |
| 0.080 | −1.095 | 0.0663 | −1.178 | 2 | 0.146 |
| 0.090 | −1.041 | 0.0663 | −1.178 | 2 | 0.157 |
| 0.102 | −0.987 | 0.0663 | −1.178 | 2 | 0.169 |
| 0.116 | −0.935 | 0.0751 | −1.124 | 3 | 0.191 |
| 0.130 | −0.886 | 0.0751 | −1.124 | 3 | 0.205 |
| 0.208 | −0.680 | 0.0751 | −1.124 | 3 | 0.283 |
| 0.222 | −0.652 | 0.0851 | −1.069 | 4 | 0.307 |
| 0.237 | −0.623 | 0.0851 | −1.069 | 4 | 0.322 |
| 0.253 | −0.596 | 0.0964 | −1.015 | 5 | 0.349 |
| 0.268 | −0.570 | 0.0964 | −1.015 | 5 | 0.365 |
| 0.284 | −0.545 | 0.1093 | −0.961 | 6 | 0.394 |
| 0.301 | −0.521 | 0.1093 | −0.961 | 6 | 0.410 |
| 0.317 | −0.498 | 0.1239 | −0.906 | 7 | 0.441 |

Table 13.

HEPtot.

Figure 9 shows the graph of HEPtot. The model increases the nominal HEPnom value by summing the corresponding ΔHEP values, starting from t = 8.

Figure 9.

HEPnom vs. HEPtot.

The main result of the model shows how human reliability depends on time and how important it is to consider operator performance beyond the canonical 8 hours.

4. Conclusion

In general, the modelling approaches used in HRA focus on describing sequential low-level tasks, which are not the main source of systemic errors. On the contrary, we believe it is important to analyse in greater depth the human behaviour that causes errors, in order to develop managerial practices that could be applied to reduce the failures occurring at the interface between human behaviour and technology. Thus, the aim of this work was to develop an innovative methodology for human reliability analysis in emergency scenarios. A hybrid model has been proposed that integrates the advantages of the following methodologies: the human error assessment and reduction technique (HEART), the standardized plant analysis risk-human reliability analysis (SPAR-H) and the success likelihood index method (SLIM). The key point that we have tried to convey in this research is the analysis of all the environmental and behavioural factors that influence human reliability. The results obtained from the analysis of a real case study give both an empirical and a theoretical contribution with reference to the framework used to detect human error in risk and reliability analysis. Furthermore, the study offers a useful perspective for the academic community, making it aware of new assumptions in human reliability analysis.

Acknowledgments

This research is a result of research activity carried out with the financial support of MIUR, namely PRIN 2012 “DIEM-SSP, Disasters and Emergencies Management for Safety and Security in industrial Plants”.

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this chapter: Domenico Falcone, Fabio De Felice, Antonella Petrillo and Alessandro Silvestri (June 21st 2017). An Experimental Study on Developing a Cognitive Model for Human Reliability Analysis. In: Fabio De Felice and Antonella Petrillo (eds), Theory and Application on Cognitive Factors and Risk Management - New Trends and Procedures. IntechOpen. DOI: 10.5772/intechopen.69230.

