
An Overview of Human Reliability Analysis Techniques in Manufacturing Operations

Written By

Valentina Di Pasquale, Raffaele Iannone, Salvatore Miranda and Stefano Riemma

Submitted: 08 June 2012 Published: 13 March 2013

DOI: 10.5772/55065

From the Edited Volume

Operations Management

Edited by Massimiliano M. Schiraldi


1. Introduction

In recent years, accidents due to technical failures have decreased thanks to technological developments in redundancy and protection, which have made systems more reliable. However, it is not possible to discuss system reliability without addressing the failure rates of all of its components; among these components is the human operator, whose error rate changes the failure rate of the components with which he or she interacts. The contribution of the human factor to the dynamics of accidents – both statistically and in terms of the severity of the consequences – is clearly high [2].

Although valid values are difficult to obtain, estimates agree that human errors are responsible for 60–90% of accidents; the remainder are attributable to technical deficiencies [2,3,4]. Accidents are, of course, the most visible consequences of human error in industrial systems, but minor faults can also seriously reduce operational performance in terms of productivity and efficiency. In fact, human error has a direct impact on productivity because errors raise product rejection rates, thereby increasing production costs and possibly reducing subsequent sales. There is therefore a need to assess human reliability in order to reduce the likely causes of errors [1].

The starting point of this work was to survey today's methods of human reliability analysis (HRA): the quantitative methods of the first generation (such as THERP and HCR), the more qualitative methods of the second generation (such as CREAM and SPAR-H), and the new dynamic HRA methods, together with recent improvements to individual phases of HRA approaches. The purpose of these methods is to assess the likelihood of human error – in industrial systems, for a given operation, in a certain interval of time, and in a particular context – on the basis of models that describe, in a more or less simplified way, the complex mechanism behind a single human action that is potentially subject to error [1].

The concern in safety and reliability analyses is whether an operator is likely to make an incorrect action and which type of action is most likely [5]. The goals defined by Swain and Guttmann (1983) in discussing the THERP approach, one of the first HRA methods developed, are still valid: The objective of a human reliability analysis is ‘to evaluate the operator’s contribution to system reliability’ and, more precisely, ‘to predict human error rates and to evaluate the degradation to human–machine systems likely to be caused by human errors in association with equipment functioning, operational procedures and practices, and other system and human characteristics which influence the system behavior’ [7].

The HRA methods analysed allowed us to identify guidelines for determining the likelihood of human error and for assessing contextual factors. The first step is to identify a nominal probability of human error for the operation to be performed; the second is to evaluate, through appropriate multipliers, the impact of environmental and behavioural factors on this probability [1]. The most important objective of the work will be to provide a simulation module for the evaluation of human reliability that can be used in a dual manner [1]:

  • In the preventive phase, as an analysis of the possible situations that may occur and as an evaluation of the percentage of pieces discarded as an effect of human error;

  • In post-production, to understand which factors influence human performance so that errors can be reduced.

The tool will also make it possible to determine the optimal configuration of breaks through a methodology that, with assessments of an economic nature, identifies the conditions under which work should be suspended for the psychophysical recovery of the operator and, thus, for the restoration of acceptable reliability values [1].
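To make the economic logic concrete, the following minimal sketch compares, for candidate work-spell lengths, the production margin lost to breaks against the cost of scrap produced as reliability decays. Every figure and the linear HEP-decay model are invented placeholders, not values from the chapter.

```python
# Illustrative trade-off behind automatic break scheduling: longer spells
# between breaks lose less production time to pauses but let human
# reliability decay, raising scrap costs. All figures are invented.

def scrap_cost(spell_min: float, shift_min: float = 480.0,
               pieces_per_min: float = 1.0, cost_per_scrap: float = 5.0,
               hep0: float = 0.002, decay_per_min: float = 0.0002) -> float:
    """Cost of defective pieces if the HEP grows linearly within each spell."""
    mean_hep = hep0 + decay_per_min * spell_min / 2.0  # average HEP over a spell
    return shift_min * pieces_per_min * mean_hep * cost_per_scrap

def break_cost(spell_min: float, shift_min: float = 480.0,
               break_min: float = 15.0, margin_per_min: float = 0.8) -> float:
    """Production margin lost while the line is stopped for breaks."""
    n_breaks = max(int(shift_min // spell_min) - 1, 0)
    return n_breaks * break_min * margin_per_min

best = min(range(60, 241, 10), key=lambda t: scrap_cost(t) + break_cost(t))
print(f"Cheapest work spell between breaks: {best} min")
```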


2. Literature review of HRA methods

Evidence in the literature shows that human actions are a source of vulnerability for industrial systems, which gave rise to HRA as a means of deepening the examination of the human factor in the workplace [1]. HRA is concerned with identifying, modelling, and quantifying the probability of human errors [3]. A nominal human error probability (HEP) is calculated on the basis of the operator's activities; to obtain a quantitative estimate of the HEP, many HRA methods use performance shaping factors (PSFs), which characterise significant facets of human error and provide a numerical basis for modifying nominal HEP levels [24]. PSFs are environmental, personal, or task-related factors with the potential to affect performance positively or negatively; identifying and quantifying the effects of PSFs are therefore key steps in the HRA process [3]. Another key step concerns the interpretation and simulation of human behaviour, which is a dynamic process driven by cognitive and behavioural rules and influenced by physical and psychological factors. Human behaviour, although analysed in numerous studies, remains difficult to represent in all its nuances [1]. The literature reflects a considerable effort to propose models of human behaviour that yield numerical error probabilities with which unsafe behaviour can be predicted and prevented. For this reason, the study of human reliability can be seen as a specialised scientific subfield – a hybrid of psychology, ergonomics, engineering, reliability analysis, and system analysis [4].
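The common quantitative core of these methods can be condensed into a few lines. The sketch below assumes a purely multiplicative PSF model with illustrative values; individual methods define their own nominal HEPs, PSF sets, and multipliers.

```python
from math import prod

def adjusted_hep(nominal_hep: float, psf_multipliers: dict[str, float]) -> float:
    """Modify a nominal human error probability by PSF multipliers.

    Multipliers above 1 degrade performance, below 1 improve it; the
    result is capped at 1.0, since a probability cannot exceed certainty.
    """
    return min(nominal_hep * prod(psf_multipliers.values()), 1.0)

# Illustrative values: a routine manual task under time pressure and noise.
hep = adjusted_hep(0.001, {"available_time": 10.0, "stress": 2.0, "ergonomics": 1.0})
print(f"Adjusted HEP: {hep:.4f}")  # 0.001 * 10 * 2 * 1 = 0.0200
```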

The birth of HRA methods dates from the 1960s, but most techniques for assessing the human factor, in terms of propensity to fail, have been developed since the mid-1980s. HRA techniques can be divided essentially into two categories: first and second generation. More recently, dynamic HRA and third generation methods have appeared, understood as an evolution of the previous generations.

2.1. First generation HRA methods

The first generation HRA methods were strongly influenced by the viewpoint of probabilistic safety assessment (PSA) and treated the human as a mechanical component, thus losing all aspects of dynamic interaction with the working environment, both physical and social [33]. In many of these methods – such as the Technique for Human Error Rate Prediction (THERP) [2, 3, 13–15], the Accident Sequence Evaluation Program (ASEP) [16], and Human Cognition Reliability (HCR) [2] – the basic assumption is that, because humans have natural deficiencies, they logically fail to perform tasks just as mechanical or electrical components do. An HEP can thus be assigned based on the characteristics of the operator's task and then modified by performance shaping factors (PSFs). In the first HRA generation, the characteristics of the task, represented by the HEP, are regarded as the major factor, while the context, represented by the PSFs, is considered a minor factor in estimating the probability of human failure [8]. This generation concentrated on quantification, in terms of success or failure of the action, paying less attention to the causes and mechanisms of human behaviour studied in the behavioural sciences [1].

THERP and approaches developed in parallel – such as HCR, developed by Hannaman, Spurgin, and Lukic in 1985 – describe the cognitive aspects of operator performance with a cognitive model of human behaviour, the skill–rule–knowledge (SKR) model by Rasmussen (1984) [2]. This model classifies human behaviour as skill-based, rule-based, or knowledge-based, according to the cognitive level engaged (see Fig. 1).

The attention and conscious thought that an individual devotes to an activity decreases in moving from the knowledge-based level down to the skill-based level. This behavioural model fits well with Reason's (1990) theory of human error, according to which there are several types of error, depending on whether or not the actions carried out conform to the intentions [2]. Reason distinguishes between: slips, execution errors that occur at the skill level; lapses, execution errors caused by a failure of memory; and mistakes, errors in the planning of an action. In THERP, instead, wrong actions are divided into errors of omission and errors of commission, which represent, respectively, the failure to carry out an operation required to achieve the result, and the execution of an operation unrelated to the one requested, which prevents the result from being obtained [1, 4].

Figure 1.

Rasmussen’s SKR model [2].

The main characteristics of the methods can be summarised as follows [9]:

  • Binary representation of human actions (success/failure);

  • Attention on the phenomenology of human action;

  • Low concentration on human cognitive actions (lack of a cognitive model);

  • Emphasis on quantifying the likelihood of incorrect performance of human actions;

  • Dichotomy between errors of omission and commission;

  • Indirect treatment of context.

Among the first generation techniques are: absolute probability judgement (APJ), human error assessment and reduction technique (HEART), justified human error data information (JHEDI), probabilistic human reliability analysis (PHRA), operator action tree system (OATS), and success likelihood index method (SLIM) [31,32]. Among these, the most popular and most widely used is THERP, characterised, like other first generation approaches, by an accurate mathematical treatment of probabilities and error rates, as well as by well-structured procedures for interfacing the event trees used for human error evaluation with fault trees [11]. The basis of THERP is event tree modelling, where each limb represents a combination of human activities, the influences upon these activities, and their results [3]. The basic analytical tool for the analysis of human reliability is represented with the graphics and symbols in Figure 2.

Experience and use have shown that first generation HRA methods are not able to provide sufficient prevention or to perform their duties adequately [10]. The basic criticism of the adequacy of the traditional methods is that these approaches tend to be descriptive of events: only the formal aspects of external behaviour are observed and studied in terms of errors, without considering the reasons and mechanisms that produced them at the level of cognition. These methods ignore the cognitive processes that underlie human performance and, in fact, lack a cognitive model with adequate human and psychological realism. They are often criticised for not considering the impact of factors such as the environment, organisational factors, and other relevant PSFs; for neglecting errors of commission; and for not using proper expert judgement methods [4,10,25]. Swain remarked that “all of the above HRA inadequacies often lead to HRA analysts assessing deliberately higher estimates of HEPs and greater uncertainty bounds, to compensate, at least in part, for these problems” [4]. This is clearly not a desirable solution.

Figure 2.

Scheme for the construction of a THERP event tree [2]: Each node in the tree corresponds to an action, with the sequence shown from the top downwards. Two branches originate from each node: the branch to the left, marked with a lowercase letter, indicates success; the branch to the right, marked with a capital letter, indicates failure.
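The arithmetic of such a tree is simple to state in code. The following sketch uses a hypothetical two-task sequence with invented HEPs; a path probability is the product of the branch probabilities along the path, and total failure is the complement of the all-success path.

```python
# THERP-style event tree: each node splits into a success branch (lowercase)
# and a failure branch (uppercase). The two-task sequence and its HEPs
# below are invented for illustration.

tasks = [("a", 0.005),   # task A: read the correct work order (HEP 0.005)
         ("b", 0.020)]   # task B: set the machine parameter  (HEP 0.020)

def path_probability(path: str) -> float:
    """Probability of a branch sequence such as 'aB' (A succeeds, B fails)."""
    p = 1.0
    for (_, hep), step in zip(tasks, path):
        p *= hep if step.isupper() else (1.0 - hep)
    return p

# Overall failure = 1 - probability that every task succeeds.
print(f"P(aB) = {path_probability('aB'):.5f}")                 # 0.01990
print(f"Total failure = {1.0 - path_probability('ab'):.5f}")   # 0.02490
```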

Despite the criticisms and inefficiencies of some first generation methods, such as THERP and HCR, they are regularly used in many industrial fields thanks to their ease of use and highly quantitative character.

2.2. Second generation HRA methods

In the early 1990s, the need to improve HRA approaches prompted a number of important research and development activities around the world. These efforts led to considerable progress in first generation methods and to the birth of new techniques, identified as second generation. The definition of these methods was at first unclear and uncertain, essentially because they were defined in terms of what they should not be – that is, they should not be like the first generation of HRA methods [5]. While the first generation HRA methods are mostly behavioural approaches, the second generation methods aspire to be conceptual [26]. The separation between the generations is evident in the abandonment of the quantitative approach of PRA/PSA in favour of greater attention to the qualitative assessment of human error. The focus shifted to the cognitive aspects of humans, to the causes of errors rather than their frequency, to the study of the interaction of factors that increase the probability of error, and to the interdependencies of PSFs [1].

Second generation HRA methods are based on a cognitive model better suited to explaining human behaviour. Any attempt at understanding human performance needs to include the role of human cognition, defined as “the act or process of knowing including both awareness and judgement”, on the part of the operator [1]. From the HRA practitioner's perspective, the immediate way to take human cognition into account was to introduce a new category of error, the “cognitive error”, defined both as a failure of an activity that is predominantly cognitive in nature and as the inferred cause of an activity that fails [4]. CREAM, for example, developed by Erik Hollnagel in 1993, maintains a logical division between the causes and the consequences of human error [5]. The causes of misbehaviour (genotypes) are the reasons that determine the occurrence of certain behaviours, while the effects (phenotypes) are the incorrect forms of the cognitive process and the resulting inappropriate actions [2,17,25].

Moreover, the second generation HRA methods have aimed at the qualitative assessment of the operator's behaviour and at models that describe the interaction with the production process. Cognitive models have been developed that represent the logical–rational processes of the operator and summarise their dependence on personal factors (such as stress or incompetence) and on the current situation (normal operation, abnormal conditions, or emergency conditions), together with models of the man–machine interface that reflect the control system of the production process [33]. In this perspective, the human must be seen within an integrated men–technology–organisation (MTO) system: a team of operators (men) who collaborate to achieve the same objective, intervening in the mechanical process (technology) within a company's system of organisation and management (organisation), all of which together represent the available resources [1,6].

The CREAM operator model is more significant and less simplistic than that of the first generation approaches. The cognitive model used is the contextual control model (COCOM), based on the assumption that human behaviour is governed by two basic principles: the cyclical nature of human cognition and the dependence of cognitive processes on context and working environment. The model refers to the IPS paradigm and considers separately the cognitive functions (perception, interpretation, planning, and action), their connection mechanisms, and the cognitive processes that govern their evolution [2,4,5,8]. The standardised plant analysis risk–human reliability analysis method (SPAR-H) [11,12,34] is built on an explicit information-processing model of human performance derived from the behavioural sciences literature: a representation of perception and perceptual elements, memory, sensory storage, working memory, search strategy, long-term memory, and decision-making [34]. The components of the behavioural model of SPAR-H are presented in Figure 3.

A further difference between the generations relates to the choice and use of PSFs. None of the first generation HRA approaches tries to explain how PSFs exert their effect on performance; moreover, PSFs such as managerial methods and attitudes, organisational factors, cultural differences, and irrational behaviour are not adequately treated in these methods. The PSFs of the first generation were mainly derived by focusing on the environmental impacts on operators, whereas those of the second generation were derived by focusing on the cognitive impacts on operators [18]. The PSFs of both generations have been reviewed and collected in a single taxonomy of performance influencing factors for HRA [16].

Figure 3.

Model of human performance [12].
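On the quantification side, SPAR-H reduces the estimate to a nominal HEP (0.01 for diagnosis, 0.001 for action) multiplied by PSF multipliers, with an adjustment when three or more PSFs are negative [34]. A minimal sketch of that calculation, based on our reading of [34], with illustrative multipliers:

```python
def spar_h_hep(nominal_hep: float, multipliers: list[float]) -> float:
    """SPAR-H-style quantification (sketch based on our reading of [34]).

    The composite PSF is the product of the individual multipliers; with
    three or more negative PSFs (multiplier > 1), SPAR-H adjusts the
    result so that it cannot exceed 1.0.
    """
    composite = 1.0
    for m in multipliers:
        composite *= m
    negative = sum(1 for m in multipliers if m > 1.0)
    if negative >= 3:
        return (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)
    return min(nominal_hep * composite, 1.0)

# Illustrative diagnosis task (nominal HEP 0.01) with three degraded PSFs.
print(f"HEP = {spar_h_hep(0.01, [10.0, 2.0, 2.0]):.3f}")  # ~0.288, not 0.4
```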

Among the methods of the second generation can be mentioned: a technique for human error analysis (ATHEANA), cognitive environmental simulation (CES), connectionism assessment of human reliability (CAHR), and Méthode d’Evaluation de la Réalisation des Missions Opérateur pour la Sûreté (MERMOS) [31,32].

Many proposed second generation methods still lack sufficient theoretical or experimental bases for their key ingredients. Missing from all of them is a fully implemented model of the underlying causal mechanisms linking measurable PSFs, or other characteristics of the context, to operator response. The problem extends to the quantification side, where the majority of the proposed approaches still rely on implicit functions relating PSFs to probabilities [25]. In short, some of the key shortcomings that motivated the development of the new methods remain unaddressed. Furthermore, unlike the first generation methods, which have been largely validated [13–15], the second generation has yet to be empirically validated [32].

There are four main sources of deficiencies in current HRA methods [3]:

  • Lack of empirical data for model development and validation;

  • Lack of inclusion of human cognition (i.e. need for better human behaviour modelling);

  • Large variability in implementation (the parameters for HRA strongly depend on the methodology used);

  • Heavy reliance on expert judgement in selecting PSFs and use of these PSFs to obtain the HEP in human reliability analysis.

2.3. Last generation

In recent years, the limitations and shortcomings of the second generation HRA methods have led to further developments aimed at improving pre-existing methods. The only method currently defined as third generation is the nuclear action reliability assessment (NARA), which is, in fact, an advanced version of HEART for the nuclear field. The second generation shortcomings highlighted above have been the starting point for new research by HRA experts and for the improvement of existing methods.

Some of the more recent studies have focused on the lack of empirical data for the development and validation of HRA models and were intended to define HRA databases, which may provide the methodological tools needed to make greater use of more types of information in future HRAs and to reduce uncertainties in the information used to conduct human reliability assessments. Currently, there are several databases available to HRA analysts that contain human error data with cited sources, improving the validity and reproducibility of HRA results. Examples are the human event repository and analysis (HERA) [17] and the human factors information system (HFIS).

PSFs are an integral part of the modelling and characterisation of errors and play an important role in the human reliability assessment process; for this reason, HRA experts have focused their efforts on PSFs in recent years. Despite continuing advances in research and applications, one of the main weaknesses of current HRA methods is their limited ability to model the mutual influence among PSFs, intended both as a dependency among the states of the PSFs and as a dependency among the PSFs' influences (impacts) on human performance (Fig. 4) [20,26].

Figure 4.

Possible types of dependency among PSFs: (A) dependency between the states (the presence) of the PSFs and (B) dependency between the state of PSFj and the impact of PSFi on the HEP [20].

Some HRA methods – such as CREAM, SPAR-H, and IDAC – try to provide guidance on how to treat dependencies at the level of the factor assessments, but they do not consider that a PSF category might depend on itself or that the presence of a specific PSF might modulate the impact of another PSF on the HEP; therefore, they do not adequately consider the relationships and dependencies between PSFs [20]. De Ambroggi and Trucco's (2011) study, instead, develops a framework for modelling the mutual influences existing among PSFs and a related method to assess the importance of each PSF in influencing the performance of an operator in a specific context, considering these interactions (see Fig. 5).

Figure 5.

The procedure for modelling and evaluating mutual influences among PSFs (De Ambroggi and Trucco 2011).
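The computational core of an ANP-based treatment of such dependencies is a limit-supermatrix calculation. The sketch below is a heavily simplified illustration under invented numbers: mutual influences among three PSFs are encoded in a column-stochastic matrix, and repeated powering yields steady-state importance weights. It is not the full procedure of [20].

```python
import numpy as np

# Hypothetical influence matrix W: W[i, j] is the strength with which PSF j
# influences PSF i; columns are normalised so that W is column-stochastic.
psfs = ["stress", "time_pressure", "experience"]
W = np.array([[0.2, 0.6, 0.1],
              [0.5, 0.2, 0.1],
              [0.3, 0.2, 0.8]])

# ANP-style limit: raise the supermatrix to successive powers until it
# converges; any column of the limit then holds the steady-state weights.
M = W.copy()
for _ in range(100):
    M_next = M @ W
    if np.allclose(M_next, M, atol=1e-10):
        break
    M = M_next

for name, weight in zip(psfs, M[:, 0]):
    print(f"{name:>14}: {weight:.3f}")
```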

Another limitation of current HRA methods is the strong dependence on expert opinion for assigning values to the PSFs; during this assignment process, subjectivity plays an important role and makes consistency difficult to assure. To overcome this problem and obtain a more precise estimation, Park and Lee (2008) suggested a new and simple method: AHP–SLIM [19]. It combines the analytic hierarchy process (AHP) – a multicriteria decision-making method for complex problems in which both qualitative and quantitative aspects are considered, providing objective and realistic results – with the success likelihood index method (SLIM), a simple and flexible expert judgement method for estimating HEPs [6,19]. By estimating HEPs through AHP, it is therefore possible to quantify the subjective judgement and to check the consistency of the collected data (see Fig. 6).

Figure 6.

AHP–SLIM procedure scheme [19].
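A minimal end-to-end sketch of the AHP–SLIM chain under simplifying assumptions: AHP pairwise comparisons (invented values on Saaty's scale) give PSF weights via the geometric-mean approximation of the principal eigenvector; the weighted PSF ratings give a success likelihood index (SLI); and two anchor tasks with known HEPs calibrate the standard SLIM relation log10(HEP) = a·SLI + b. All numbers are illustrative.

```python
import numpy as np

# AHP step: pairwise comparison of three PSFs on Saaty's 1-9 scale (invented).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
gm = A.prod(axis=1) ** (1.0 / A.shape[0])   # geometric mean of each row
weights = gm / gm.sum()                      # approximates the principal eigenvector

# SLIM step: PSF ratings for the task of interest (0 = worst, 1 = best).
ratings = np.array([0.4, 0.7, 0.9])
sli = float(weights @ ratings)

# Calibration: two anchor tasks with known HEPs fix log10(HEP) = a*SLI + b.
sli1, hep1 = 0.9, 1e-4
sli2, hep2 = 0.2, 1e-1
a = (np.log10(hep1) - np.log10(hep2)) / (sli1 - sli2)
b = np.log10(hep1) - a * sli1
print(f"SLI = {sli:.3f}, estimated HEP = {10 ** (a * sli + b):.2e}")
```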

The real development, however, concerns the so-called dynamic reliability methods. Cacciabue [7] outlined the importance of the simulation and modelling of human performance for the field of HRA. Specifically, simulation and modelling address the dynamic nature of human performance in a way not found in most HRA methods [23]. A cognitive simulation consists of the reproduction of a cognition model using a numerical application or computation [21,22].

As depicted in Figure 7, simulation and modelling may be used in three ways to capture and generate data that are meaningful to HRA [23]:

  • The simulation runs produce logs, which may be analysed by experts and used to inform an estimate of the likelihood of human error;

  • The simulation may be used to produce estimates of PSFs, which can be quantified to produce human error probabilities (HEPs);

  • A final approach is to set specific performance criteria by which the virtual performers in the simulation succeed or fail at given tasks. Through iterations of the task that systematically explore the range of human performance, it is possible to arrive at a frequency of failure (or success), which may be used as a frequentist approximation of an HEP (sketched below).
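A minimal sketch of that third, frequentist use: a toy virtual performer succeeds when a sampled completion time fits the time budget, and the failure frequency over many runs approximates an HEP. The lognormal timing model and all parameters are invented stand-ins for a real cognitive simulation.

```python
import random

def simulate_task(time_budget_s: float, rng: random.Random) -> bool:
    """One virtual-performer trial: succeed if the sampled completion time
    fits the budget. A toy stand-in for a full cognitive simulation."""
    completion_time = rng.lognormvariate(3.0, 0.4)  # ~20 s median, invented
    return completion_time <= time_budget_s

rng = random.Random(42)
runs = 100_000
failures = sum(not simulate_task(30.0, rng) for _ in range(runs))
print(f"Frequentist HEP estimate: {failures / runs:.4f}")  # ~0.16
```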

Concurrently with the emergence of simulation and modelling, several authors (e.g. Jae and Park 1994; Sträter 2000) have posited the need for dynamic HRA and begun developing new HRA methods, or modifying existing ones, to account for the dynamic progression of human behaviour leading up to and following human failure events (HFEs) [23]. There is still no modelling and simulation tool that fully combines all the basic elements of simulation-based HRA. There is, however, significant work in progress, such as the PROCOS simulator, developed by Trucco and Leva in 2006, or the IDAC system, which combines a realistic plant simulator with a cognitive simulation system capable of modelling PSFs. In addition, for systems such as MIDAS, in which error modelling was already present, further efforts aim to instill the SPAR-H PSFs in the simulation system [24]. PROCOS [21,22] is a probabilistic cognitive simulator for HRA studies, developed to support the analysis of human reliability in complex operational contexts. The simulation model comprises two cognitive flow charts reproducing the behaviour of a process industry operator. The aim is to integrate the quantification capabilities of HRA methods with a cognitive evaluation of the operator (see Fig. 8).

Figure 7.

Uses of simulation and modelling in HRA [23].

Figure 8.

Architecture of PROCOS simulator [21].

The model used for the configuration of the flow diagram that represents the operators is based on a combination of PIPE and SHELL. The two combined models make it possible to represent the main cognitive processes that an operator can carry out to perform an action (PIPE) and to describe the interaction between the operator and the procedures, equipment, environment, and plants present in the working environment, also taking into account the possibility of the operator interacting with other operators or supervisors (SHELL).

The IDAC model [25–30] is an operator behaviour model based on many relevant findings from cognitive psychology, behavioural science, neuroscience, human factors, field observations, and various first and second generation HRA approaches. In modelling cognition, IDAC combines the effects of rational and emotional dimensions (within the limited scope of modelling the behaviour of operators in a constrained environment) through a small number of generic rules of behaviour that govern the operator's dynamic responses. The model addresses constrained behaviour, largely regulated through training, procedures, standardised work processes, and professional discipline; this significantly reduces the complexity of the problem compared with modelling general human response. IDAC covers the operator's various dynamic response phases, including situation assessment, diagnosis, and recovery actions in dealing with an abnormal situation. At a high level of abstraction, IDAC is composed of models of the information processing (I), problem-solving and decision-making (D), and action execution (A) of a crew (C). Given incoming information, the crew model generates a probabilistic response, linking the context to the action through explicit causal chains. Owing to the variety, quantity, and detail of the input information, as well as the complexity of applying its internal rules, the IDAC model can at present only be implemented through computer simulation (see Fig. 9).

Figure 9.

IDAC model of operator cognitive flow (Chang and Mosleh 2007).

Figure 10.

High-level view of the IDAC dynamic response [25].


3. Literature review of rest breaks

One of the most important factors influencing the physical and mental condition of an employee – and, thus, his or her ability to cope with work – is the degree to which the employee is able to recover from fatigue and stress at work. Recovery can be defined as the period of time that an individual needs to return to the prestressor level of functioning following the termination of a stressor [35]. Jansen argued that fatigue should not be regarded as a discrete disorder but as a continuum ranging from the mild, frequent complaints seen in the community to the severe, disabling fatigue characteristic of burnout, overstrain, or chronic fatigue syndrome [35]. Recovery must therefore be properly positioned within this continuum, not only in the form of lunch breaks, rest days, weekends, or summer holidays, but also in the simple form of breaks or micro-pauses within work shifts.

Work breaks are generally defined as “planned or spontaneous suspension from work on a task that interrupts the flow of activity and continuity” [36]. Breaks can potentially be disruptive to the flow of work and the completion of a task. The potential negative consequences for the person being interrupted include the loss of time available to complete a task, a temporary disengagement from the task, procrastination (i.e. excessive delays in starting or continuing work on a task), and a reduction in productivity. However, breaks can serve multiple positive functions for the person being interrupted, such as stimulation when performing a routine or boring job, opportunities to engage in activities that are essential to emotional wellbeing, job satisfaction, and sustained productivity, and time for the subconscious to process complex problems that require creativity [36]. In addition, regular breaks seem to be an effective way to control the accumulation of risk during an industrial shift. The few studies on work breaks indicate that people need occasional changes during the shift, or an oscillation between work and recreation, mainly when fatigued or working continuously for an extended period [36]. A series of laboratory and workplace studies have been conducted in more recent times to evaluate the effects of breaks; however, there appears to be only a single recent study that examined in depth the impact of rest breaks on the risk of injury. Tucker's study [37,38] focused on the risk of accidents in the workplace, noting that the inclusion of work breaks can reduce this risk. Tucker examined accidents in a car assembly plant where workers were given a 15-minute break after each 2-hour period of continuous work. The number of accidents within each of the four 30-minute periods between successive breaks was calculated, and the risk in each 30-minute period was expressed relative to that in the first 30-minute period immediately after the break. The results, shown in Figure 11, make clear that the accident risk increased significantly, and approximately linearly, between successive breaks. Rest breaks thus successfully neutralise the accumulation of risk over 2 hours of continuous work: the risk immediately after a pause was reduced to a level close to that recorded at the start of the previous work period. The recovery effect of breaks is, however, short-term.

Figure 11.

The trend in relative risk between breaks [38].
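The pattern is easy to encode. The sketch below reproduces only the shape Tucker reports: relative risk growing roughly linearly across the four 30-minute periods of a 2-hour spell and resetting after each break. The slope value is an invented placeholder.

```python
# Shape of the effect reported by Tucker [37,38]: relative accident risk
# rises roughly linearly across the four 30-minute periods of a 2-hour
# spell and resets after each 15-minute break. The slope is invented.

RISK_SLOPE_PER_PERIOD = 0.10   # +10% relative risk per 30 minutes worked

def relative_risk_profile(n_spells: int = 4, periods_per_spell: int = 4):
    """Relative risk for each 30-minute period of a shift with breaks."""
    return [(spell + 1, period + 1, 1.0 + RISK_SLOPE_PER_PERIOD * period)
            for spell in range(n_spells)
            for period in range(periods_per_spell)]

for spell, period, risk in relative_risk_profile():
    print(f"spell {spell}, 30-min period {period}: relative risk {risk:.2f}")
```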

A 2006 study by Folkard and Lombardi showed the impact of frequent pauses in different shift systems [39]. The results confirm that breaks, even short ones, have a positive physical and psychological effect on the operator's work (see Fig. 12).

Figure 12.

Effect of breaks in different shift systems [39].

The proper design of a work–rest schedule, involving the frequency, duration, and timing of rest breaks, may be effective in improving workers' comfort, health, and productivity. Today, however, work breaks are not given proper consideration, even though there are ongoing efforts to create systems that better manage operations in various areas, especially in manufacturing. The analysis of the literature, in fact, reveals an almost total lack of systems for managing work breaks automatically. The only exception is software that stimulates workers at video display terminals (VDTs) to take frequent breaks and recommends exercises to perform during them. The validity and effectiveness of this type of software has been demonstrated by several studies, including one by Van Den Heuvel [41], which evaluated the effects on work-related disorders of the neck and upper limbs and on the productivity of computer workers stimulated to take regular breaks and perform physical exercises using an adapted version of WorkPace (Niche Software Ltd., New Zealand), and that of McLean (2001) [40], which examined the benefits of micro-breaks in preventing the onset or progression of cumulative trauma disorders in computerised environments, using the program Ergobreak 2.2.

In the future, therefore, researchers should focus their efforts on introducing break management systems that counter the increase in accident risk during long periods of continuous work and thereby improve productivity.


4. Research perspectives in HRA

The previous paragraphs described the development of HRA methods from their origin to the last generation. Today, there are literally dozens of HRA methods from which to choose. However, many difficulties remain: most of the techniques do not have solid empirical bases and are essentially static, unable to capture the dynamics of an accident in progress or of human behaviour in general. The limitations of current methods are therefore the natural starting point for future studies and work.

As described in this chapter, the path has been paved for the next generation of HRA through simulation and modelling. Human performance simulation reveals important new data sources and possibilities for exploring human reliability, but significant challenges remain, concerning both the dynamic nature of HRA versus the mostly static nature of conventional first and second generation methods and the weaknesses of the simulators themselves [23]. The PROCOS simulator, in particular, requires further optimisation, as noted by Trucco and Leva themselves in [21]. Additionally, sensitivity analyses still have to be performed on the main elements on which the simulator is based – the blocks of the flow chart, the decision block criteria, and the PSF importance – to test the robustness of the method [21]. Mosleh and Chang, for their part, are working to eliminate the weak points of IDAC outlined in [25], first of all by developing a more comprehensive and realistic operator behaviour model that can be used not only for nuclear power plants but also for more general applications. This is a subject of their current research effort.

Many researchers are integrating their studies with those of other researchers to optimise HRA techniques. Some future plans include, for example, extending AHP–SLIM to other HRA methods to exploit its performance [19]. The method proposed by De Ambroggi and Trucco for modelling and assessing dependent performance shaping factors through the analytic network process [20] is moving towards better identification of dependencies among PSFs using the PROCOS simulator or Bayesian networks.

Bayesian networks (BNs) represent a particularly important field of study for future developments. Many experts are studying these networks with the aim of exploiting their features and properties in HRA techniques [44,45]. Bayesian methods are appealing because they can combine prior assumptions about human error probability (i.e. based on expert judgement) with available human performance data. Some results already show that combining a conceptual causal model with a BN approach can not only qualitatively model the causal relationships between organisational factors and human reliability but also quantitatively measure human operational reliability, identifying and prioritising the most likely root causes of human error [44]. The authors of the IDAC model are also investigating BNs as an alternative way of calculating branch probabilities and representing PIF states; in the current method, branch probabilities depend on branch scores calculated from explicit equations reflecting the causal model, the influence of PIFs, and other rules of behaviour.
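A minimal sketch of why BNs appeal here, using a toy two-factor network with invented probabilities: the same model answers the forward question (the marginal HEP given priors on context factors) and the diagnostic one (the most likely root cause given that an error occurred).

```python
from itertools import product

# Toy two-factor Bayesian network for HRA; every probability is invented.
p_pressure = {"high": 0.3, "low": 0.7}          # organisational time pressure
p_training = {"good": 0.8, "poor": 0.2}         # quality of operator training
p_error = {                                     # P(error | pressure, training)
    ("high", "good"): 0.02, ("high", "poor"): 0.15,
    ("low",  "good"): 0.005, ("low",  "poor"): 0.05,
}

# Forward question: marginal HEP, by enumeration over the parent states.
hep = sum(p_pressure[pr] * p_training[tr] * p_error[(pr, tr)]
          for pr, tr in product(p_pressure, p_training))
print(f"Marginal HEP: {hep:.4f}")

# Diagnostic question: given that an error occurred, was pressure high?
posterior = sum(p_pressure["high"] * p_training[tr] * p_error[("high", tr)]
                for tr in p_training) / hep
print(f"P(pressure = high | error) = {posterior:.2f}")
```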

Additional research efforts relate to the performance shaping factors. Currently, more than a dozen HRA methods use PIFs/PSFs, but there is no standard set of PIFs shared among methods, and the factors are not at present defined specifically enough to ensure consistent interpretation of similar PIFs across methods. There are few rules governing the creation, definition, and usage of PIF sets. Within the HRA community, there is a widely acknowledged need for an improved HRA method with a more robust scientific basis, and several international efforts are under way to collect human performance data that can be used to improve HRA [46].

Of course, many ongoing studies aim to improve the application of HRA methods in complex environments, such as nuclear power plants. The methods already developed in these areas are being adapted to different situations by expanding their scope.

References

  1. Iannone, R., Miranda, S., Riemma, S.: Proposta di un modello simulativo per la determinazione automatica delle pause di lavoro in attività manifatturiere a prevalente contenuto manuale. Treviso, Italy: ANIMP Servizi Srl, pp. 46–60 (2004).
  2. Madonna, M., et al.: Il fattore umano nella valutazione dei rischi: confronto metodologico fra le tecniche per l’analisi dell’affidabilità umana. Prevenzione Oggi, 5 (1/2), 67–83 (2009).
  3. Griffith, C.D., Mahadevan, S.: Inclusion of fatigue effects in human reliability analysis. Reliability Engineering & System Safety, 96 (11), 1437–1447 (2011).
  4. Hollnagel, E.: Cognitive Reliability and Error Analysis Method (CREAM) (1998).
  5. Hollnagel, E.: Reliability analysis and operator modelling. Reliability Engineering and System Safety, 52, 327–337 (1996).
  6. Bye, A., Hollnagel, E., Brendeford, T.S.: Human–machine function allocation: a functional modelling approach. Reliability Engineering and System Safety, 64 (2), 291–300 (1999).
  7. Cacciabue, P.C.: Modelling and simulation of human behaviour for safety analysis and control of complex systems. Safety Science, 28 (2), 97–110 (1998).
  8. Kim, M.C., Seong, P.H., Hollnagel, E.: A probabilistic approach for determining the control mode in CREAM. Reliability Engineering and System Safety, 91 (2), 191–199 (2006).
  9. Kim, I.S.: Human reliability analysis in the man–machine interface design review. Annals of Nuclear Energy, 28, 1069–1081 (2001).
  10. Sträter, O., Dang, V., Kaufer, B., Daniels, A.: On the way to assess errors of commission. Reliability Engineering and System Safety, 83 (2), 129–138 (2004).
  11. Boring, R.L., Blackman, H.S.: The origins of the SPAR-H method’s performance shaping factor multipliers. In: Joint 8th IEEE HFPP/13th HPRCT Conference (2007).
  12. Blackman, H.S., Gertman, D.I., Boring, R.L.: Human error quantification using performance shaping factors in the SPAR-H method. In: 52nd Annual Meeting of the Human Factors and Ergonomics Society (2008).
  13. Kirwan, B.: The validation of three human reliability quantification techniques – THERP, HEART and JHEDI: Part 1 – Technique descriptions and validation issues. Applied Ergonomics, 27 (6), 359–373 (1996).
  14. Kirwan, B.: The validation of three human reliability quantification techniques – THERP, HEART and JHEDI: Part 2 – Results of validation exercise. Applied Ergonomics, 28 (1), 17–25 (1997).
  15. Kirwan, B.: The validation of three human reliability quantification techniques – THERP, HEART and JHEDI: Part 3 – Practical aspects of the usage of the techniques. Applied Ergonomics, 28 (1), 27–39 (1997).
  16. Kim, J.W., Jung, W.: A taxonomy of performance influencing factors for human reliability analysis of emergency tasks. Journal of Loss Prevention in the Process Industries, 16, 479–495 (2003).
  17. Hallbert, B.P., Gertmann, D.I.: Using information from operating experience to inform human reliability analysis. In: International Conference on Probabilistic Safety Assessment and Management (2004).
  18. Lee, S.W., Kim, R., Ha, J.S., Seong, P.H.: Development of a qualitative evaluation framework for performance shaping factors (PSFs) in advanced MCR HRA. Annals of Nuclear Energy, 38 (8), 1751–1759 (2011).
  19. Park, K.S., Lee, J.: A new method for estimating human error probabilities: AHP–SLIM. Reliability Engineering and System Safety, 93 (4), 578–587 (2008).
  20. De Ambroggi, M., Trucco, P.: Modelling and assessment of dependent performance shaping factors through analytic network process. Reliability Engineering & System Safety, 96 (7), 849–860 (2011).
  21. Trucco, P., Leva, M.C.: A probabilistic cognitive simulator for HRA studies (PROCOS). Reliability Engineering and System Safety, 92 (8), 1117–1130 (2007).
  22. Leva, M.C., et al.: Quantitative analysis of ATM safety issues using retrospective accident data: the dynamic risk modelling project. Safety Science, 47, 250–264 (2009).
  23. Boring, R.L.: Dynamic human reliability analysis: benefits and challenges of simulating human performance. In: Proceedings of the European Safety and Reliability Conference (ESREL 2007) (2007).
  24. Boring, R.L.: Modelling human reliability analysis using MIDAS. In: International Workshop on Future Control Station Designs and Human Performance Issues in Nuclear Power Plants (2006).
  25. Mosleh, A., Chang, Y.H.: Model-based human reliability analysis: prospects and requirements. Reliability Engineering and System Safety, 83 (2), 241–253 (2004).
  26. Mosleh, A., Chang, Y.H.: Cognitive modelling and dynamic probabilistic simulation of operating crew response to complex system accidents – Part 1: Overview of the IDAC model. Reliability Engineering and System Safety, 92, 997–1013 (2007).
  27. Mosleh, A., Chang, Y.H.: Cognitive modelling and dynamic probabilistic simulation of operating crew response to complex system accidents – Part 2: IDAC performance influencing factors model. Reliability Engineering and System Safety, 92, 1014–1040 (2007).
  28. Mosleh, A., Chang, Y.H.: Cognitive modelling and dynamic probabilistic simulation of operating crew response to complex system accidents – Part 3: IDAC operator response model. Reliability Engineering and System Safety, 92, 1041–1060 (2007).
  29. Mosleh, A., Chang, Y.H.: Cognitive modelling and dynamic probabilistic simulation of operating crew response to complex system accidents – Part 4: IDAC causal model of operator problem-solving response. Reliability Engineering and System Safety, 92, 1061–1075 (2007).
  30. Mosleh, A., Chang, Y.H.: Cognitive modelling and dynamic probabilistic simulation of operating crew response to complex system accidents – Part 5: Dynamic probabilistic simulation of the IDAC model. Reliability Engineering and System Safety, 92, 1076–1101 (2007).
  31. http://www.hse.gov.uk/research/rrpdf/rr679.pdf
  32. http://www.cahr.de/cahr/Human%20Reliability.PDF
  33. http://conference.ing.unipi.it/vgr2006/archivio/Archivio/pdf/063-Tucci-Giagnoni-Cappelli-MossaVerre.PDF
  34. http://www.nrc.gov/reading-rm/doc-collections/nuregs/contract/cr6883/cr6883.pdf
  35. Jansen, N.W.H., Kant, I., Van den Brandt, P.A.: Need for recovery in the working population: description and associations with fatigue and psychological distress. International Journal of Behavioral Medicine, 9 (4), 322–340 (2002).
  36. Jett, Q.R., George, J.M.: Work interrupted: a closer look at the role of interruptions in organizational life. Academy of Management Review, 28 (3), 494–507 (2003).
  37. Tucker, P., Folkard, S., Macdonald, I.: Rest breaks and accident risk. Lancet, 361, 680 (2003).
  38. Folkard, S., Tucker, P.: Shift work, safety, and productivity. Occupational Medicine, 53, 95–101 (2003).
  39. Folkard, S., Lombardi, D.A.: Modelling the impact of the components of long work hours on injuries and “accidents”. American Journal of Industrial Medicine, 49, 953–963 (2006).
  40. McLean, L., Tingley, M., Scott, R.N., Rickards, J.: Computer terminal work and the benefit of microbreaks. Applied Ergonomics, 32, 225–237 (2001).
  41. Van Den Heuvel, S.G., et al.: Effects of software programs stimulating regular breaks and exercises on work-related neck and upper-limb disorders. Scandinavian Journal of Work, Environment & Health, 29 (2), 106–116 (2003).
  42. Jaber, M.Y., Bonney, M.: Production breaks and the learning curve: the forgetting phenomenon. Applied Mathematics Modelling, 20, 162–169 (1996).
  43. Jaber, M.Y., Bonney, M.: A comparative study of learning curves with forgetting. Applied Mathematics Modelling, 21, 523–531 (1997).
  44. Li, P.-C., Chen, G.-H., Dai, L.-C., Zhang, L.: A fuzzy Bayesian network approach to improve the quantification of organizational influences in HRA frameworks. Safety Science, 50, 1569–1583 (2012).
  45. Kelly, D.L., Boring, R.L., Mosleh, A., Smidts, C.: Science-based simulation model of human performance for human reliability analysis. Enlarged Halden Program Group Meeting, October 2011.
  46. Groth, K.M., Mosleh, A.: A data-informed PIF hierarchy for model-based human reliability analysis. Reliability Engineering and System Safety, 108, 154–174 (2012).
