Open access peer-reviewed chapter

Tree-Network Overrun Model Associated with Pilots’ Actions and Flight Operational Procedures

Written By

Michelle Carvalho Galvão Silva Pinto Bandeira, Anderson Ribeiro Correia and Marcelo Ramos Martins

Submitted: April 11th, 2018 Reviewed: September 24th, 2018 Published: December 19th, 2018

DOI: 10.5772/intechopen.81663


Abstract

Runway excursions are defined as the departure of an aircraft from the runway surface. These excursions can take place at takeoff or at landing and comprise two types of events: veer off and overrun. The latter, which occurs when the aircraft exceeds the limits at the end of the runway, is the event of interest in the current study. This chapter presents an accident model with a new approach in aeronautical systems, based on the pilots' tasks related to the operational procedures required for approach and landing, in order to obtain the chain of events that leads to this type of accident. Thus, the tree-network overrun model (TNO model) is proposed, unlike most traditional models, which consider only hardware failures or do not satisfactorily explain the interrelationship between the factors influencing the operator. The proposed model is developed as a fault tree and transformed into a Bayesian network down to the level of the basic elements. The results show the qualitative model of the main tasks performed by the pilots and their relation to the accident. It is also suggested how to find and estimate the probability of the factors that can impact each of these tasks.

Keywords

  • overrun
  • TNO model
  • fault tree
  • Bayesian networks
  • safety
  • aviation

1. Introduction

Around the world, runway excursions are among the most frequent occurrences in commercial and general aviation. The International Air Transport Association (IATA) and the International Civil Aviation Organization (ICAO), through the Runway Excursion Risk Reduction Toolkit [1], define runway excursions as the exit of an aircraft from the runway surface. These excursions may take place at takeoff or landing and consist of two types of events: veer off and overrun. For landing, they can be described as:

  • Veer off (LDVO): when there is an exit in which the aircraft exceeds the lateral limits of a runway in the landing phase.

  • Overrun (LDOR): when the aircraft exceeds the end of the runway during the landing phase. This is the event of interest of the current study.

The latest Boeing data, from a survey conducted from 2006 to 2015, show that the final approach and landing phases together account for 49% of fatal accidents in the world's commercial jet fleet [2]. The number of onboard fatalities in these same phases of flight accounts for 47% of the total. The statistic was evaluated according to the aircraft's exposure time in each of the mentioned phases (percentage of flight time estimated for a 1.5-h flight). The phases of interest in this study (descent, initial approach, final approach, and landing) together correspond to 59% of fatal accidents and 61% of fatalities on board.

1.1 Literature review

Most of the aviation accident statistics cited in the literature today begin with data collected in the late 1950s and early 1960s, from which a marked decline in the accident rate can be observed [3]. Beginning in the 1950s, a number of research efforts were undertaken to document the precise location of aircraft accidents so that effective safety planning could be carried out for the airport and its surroundings. Notably, the report "The Airport and Its Neighbors" identified the locations of more than 30 military and commercial aircraft accidents that occurred outside the physical boundaries of the airport, with fatalities or injuries on the ground [4]. Despite limited data, this report led to the establishment of "clear zones," now known as "runway protection zones." Other efforts also brought important contributions to the literature: the "Air Installation Compatible Use Zone (AICUZ) Program" of the US Department of Defense served to define potential accident areas for military aircraft, known as "accident potential zones (APZs)" [5]; "Location of Aircraft Accidents/Incidents Relative to Runways" compiled data on the location of accidents involving commercial airplanes relative to the airport runway [6]; and surveys conducted by the Airline Pilots Association indicated that 5% of accidents occur en route, 15% occur in the vicinity of airports, while the remaining 80% occur on runways, overrun areas, and clear zones [7]. However, the increasing complexity of technological systems, such as aviation systems, maritime systems, air traffic control, telecommunications, nuclear plants, and aerospace defense systems, among others, has raised points of discussion about failure modes and new safety-related issues, such as the analysis of human and organizational factors in a system.

To reduce these negative effects, studies have been carried out with larger numbers of samples (accidents or incidents), for example, the accident analysis studies developed in [8, 9, 10, 11, 12, 13, 14]. These studies observed that the distance traveled by the aircraft beyond the runway differs for each type of operation, whether landing or takeoff, as well as for each type of accident, whether overrun, undershoot, or veer off. The studies previously mentioned were important to present the differences among runway excursion events and to report which runway conditions influence each type of accident. They also showed that aircraft operational factors are important in the analysis of an accident. Despite these contributions, they were mainly limited to environmental factors and models based on historical data. The relationship between occurrences and human performance factors, for example, was not explained.

Many researchers have attempted to develop theories or models to describe the causes of an accident [15]. One of the earliest models of accident causation is the "Domino theory" proposed by Heinrich in the 1940s, which describes an accident as a chain of discrete events occurring in a particular temporal order [16]. This theory belongs to the class of sequential accident models, or event-based accident models, which underpinned most of the accident analysis models introduced later [17]. These models typically use causality methods such as failure mode and effect analysis (FMEA), fault tree analysis (FTA), event tree analysis (ETA), and cause-consequence analysis (CCA). A large part of this approach has been strongly criticized for being based only on causal relationships among events [18, 19, 20].

1.2 Concept of the study

Safety is generally understood as a state of the transportation system; therefore, it has a qualitative nature. In aviation, there are neither widely accepted safety measures nor a common agreement on the limits of the indicators that can be considered acceptable [21]. In this context, interdisciplinary research and studies are necessary to understand the complexity of sociotechnical systems [18, 20]. In addition, through a broad systemic view, one can understand the multidimensional aspects of safety and later achieve the modeling of accidents in a more global way.

Since the middle of the last century, safety models of the technical and human parts of systems have been introduced [17]. Further studies provided important reviews of the various existing accident models [22, 23, 24, 25, 26]. The last of these presents an extensive survey, with 121 accident models described along with their applications. In [25], the authors develop quantitative indicators to assess the status of the flight team and the impact of these indicators on air traffic safety. In [22], the authors specifically review models of accident analysis, and in [27], the author develops a model for the analysis of incidents using Petri nets, both for air traffic. In [28], the authors present a proposal to relate human factors, abilities, organizational factors, and environmental factors to the task being performed by the pilot; this application proposes several relationships between these factors. Those authors drew on the literature and on research with pilots in flight simulators to obtain the relationships among the factors. A summary of the major accident models identified is highlighted in Table 1 [12, 29, 30, 31, 32, 33, 34, 35, 36].

Table 1.

Identification of aviation accident models.

The most recent of these models is the closest to the purpose of this study. The methods or techniques used in these analyses are shown in Table 2. The latter table was adapted according to the categories presented in [24] to classify the methods and/or techniques used. Thus, accident models can be divided into four categories: (i) causality models, (ii) collision risk models, (iii) human error models, and (iv) third-party risk models.

Table 2.

Category of accident models vs. accident investigation methods.

The TNO model is conceptually similar to [40], which uses the same tools to develop a ship collision accident model. Those authors used a fault tree to obtain the main human failures related to the ship crew's tasks and Bayesian networks (BNs) to obtain the probability of collision and the relationships between the contributing factors. Two other models similar to the proposed model are the flight model [33] and the CATS model [35]. The first presents a Bayesian network model with a selection of contributing factors in order to obtain the probability of an aviation accident. Despite including human and organizational factors, this model represents neither the main operational procedures nor all the flight phases. The second, the CATS model [35], presents an aviation accident model developed as a fault tree, in which human failure is the only element obtained through Bayesian networks. This implies that the top event is static in relation to the other factors, making it impossible to obtain the contribution of this element to the accident or to model the relationships among the various factors of the tree.

The objective of this chapter is to present a probabilistic accident model for the landing overrun of medium and large aircraft, with the purpose of evaluating operational safety during approach and landing through the pilot-aircraft interface, considering the main operational procedures and the pilots' tasks. From these elements, it is then possible to observe the pilots' abilities and human factors, the performance of the airline, the airport infrastructure, and the environmental conditions in the field of commercial aviation.


2. Development of the TNO model

The methodology presents the fault tree developed to represent the chain of events that captures the consequences of human errors. Thus, this section presents the development of the FTA and its basic elements. Then, the FTA is transformed into a Bayesian network (BN). For each basic event, a BN is developed for the task it is associated with, in which the performance factors are aggregated. These factors, as well as the development of the model, are presented throughout this chapter.

The methodology of this research comprises four stages (familiarization, qualitative analysis, quantitative analysis, and incorporation) to obtain the proposed accident model. These steps were adapted from the methodology proposed by [41], which aimed at a human reliability analysis (HRA).

In the familiarization stage, besides the literature review, the technical documentation of entities related to the sector was consulted to understand the operation and to describe the approach and landing procedures of medium and large commercial aircraft and their flight stages, in addition to the current norms issued by the competent authorities. The following references were used: the ALAR report [11], the risk analysis report [8], ACRP Report 3 [10], the TAM general operations manual [42], the Flight Crew Training Manual for aircraft models A319, A320, and A321 [43], the Flight Crew Operation Manual [44], and the Flight Crew Training Manual for the 737NG [45]. In addition, fieldwork was carried out in an A320 aircraft simulator; consultation with specialists (pilots and industry analysts) was an important point for the development of the model, ensuring coherence in the relationships between the operational procedures and the pilots' activities. Finally, the accident analysis draws on NTSB database data on the causes of accidents of the LDOR type, which supported the analysis of the relationships among the elements of the proposed model. Step 2 basically presents the FTA technique and the BN method used to construct the proposed model. Step 3, in summary, concerns populating the network elements developed in the model. Finally, step 4 presents the results and inferences.

2.1 Fault tree in the construction of the TNO model

The fault tree analysis (FTA) technique is widely used in aerospace, nuclear, and electronic systems [46]. FTA is a deductive, top-down technique in which the top event refers to a single event of interest, from which intermediate events are traced down to component failures as well as to human actions. Logical trees can be used for both a qualitative and a quantitative evaluation of the system; they employ a deductive procedure to determine the possible causes of an event of interest located at the top of the tree, which may be the failure or the success in the execution of a given mission. The qualitative evaluation aims at identifying the cause-effect relationships between the events that may contribute to the occurrence of the top event (the event of interest) as well as its logical dependencies, while the quantitative evaluation aims to determine the probability of occurrence of that top event from the probabilities of occurrence of the events that make up the tree. Moreover, the final objective of a quantitative analysis of an FTA is mainly the probability of occurrence of events, while the qualitative analysis also yields the set of minimal cut sets, which can be prioritized according to their order. Table 3 shows the logic gates used in the current study.

Table 3.

Logical gates used in the model.

It is important to emphasize that the quantitative evaluation is deterministic and performed from the basic events, not allowing a diagnostic evaluation based on evidence, and that in both qualitative and quantitative analyses the basic events are considered Boolean; that is, they have only two possible states. The logic of the model is then represented by the rules of Boolean algebra, where each variable may take one of the binary values corresponding to the concepts of true (1) or false (0) [47]. If the top event is the failure of a system in the execution of a given mission, the tree is called a fault tree, and if the top event is the success of the system, the tree is called a success tree. In the latter case, the probability P of the top event is the reliability of the system being analyzed, while in the former, the reliability of the system is 1 - P(top event).
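The Boolean evaluation described above can be sketched numerically. The following is a minimal illustration with a hypothetical three-event tree and made-up probabilities (not taken from the chapter's model), assuming statistically independent basic events:

```python
from functools import reduce
from itertools import combinations

def p_or(*probs):
    """P(at least one event occurs), for independent basic events."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

def p_and(*probs):
    """P(all events occur), for independent basic events."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

# Toy fault tree: TOP = (A OR B) AND C
p_a, p_b, p_c = 0.01, 0.02, 0.5
p_top = p_and(p_or(p_a, p_b), p_c)

# If the tree models failure, the system reliability is 1 - P(top event)
reliability = 1.0 - p_top

# Minimal cut sets of the same (monotone) tree by brute force:
# smallest sets of basic events whose occurrence triggers the top event
def top(a, b, c):
    return (a or b) and c

cuts = []
for r in (1, 2, 3):
    for combo in combinations("ABC", r):
        state = {e: e in combo for e in "ABC"}
        if top(state["A"], state["B"], state["C"]) and \
           not any(set(c) <= set(combo) for c in cuts):
            cuts.append(combo)
# cuts == [('A', 'C'), ('B', 'C')]
```

The brute-force cut-set search is valid only because AND/OR trees are monotone; it is meant to make the "set of minimal cuts" concrete, not to be an efficient algorithm.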

2.2 Operational procedures selected for the TNO model

The development of the proposed model followed steps in which each element was designated by a number in the FTA, symbolized in the parentheses:

  1. The landing overrun was set as the top event (#1).

  2. For an overrun to occur, it was determined that two situations must occur simultaneously: the "unwanted state in the operation of the aircraft" (#2) and the "flight crew did not go around" (#16). This association is warranted by the Flight Crew Training Manual for the A319, A320, and A321 [43] and the Flight Crew Training Manual for the 737NG [45], which indicate a go-around for a destabilized approach in order to avoid a runway excursion. Therefore, the connection of these factors was represented by an "AND" logic gate. It is worth noting that in the BN model, this event was assigned a 75% probability of overrun occurrence when both dangerous events occur, and a 25% probability that the accident does not occur under the same conditions, according to [48]. This condition is not represented in the FTA because of its Boolean structure.

  3. The "unwanted state in the operation of the aircraft" event implies two situations: "unwanted state in the descent" (#3) or "unwanted state in the landing" (#39). Either of these two situations makes the landing operation unwanted; thus, the "OR" logic gate was used.

  4. For the “unwanted state in the descent” event to occur (#3), two situations were observed: “undesired state in the briefing” (#4) or “unwanted state in flight management” (#17). Either of these two dangerous events can lead to an undesired state of descent.

  5. The "unwanted state in the briefing" (#4) was designed in consultation with experts. In this way, two dangerous events were obtained: the nonexistent briefing (#5), when the flight crew decides not to make the configurations necessary for the descent procedure, and the inadequate briefing (#8), when the flight crew performs the task but does not meet the appropriate safety conditions, classified as incomplete (#14) or incorrect (#9). For the "unwanted state in flight management" (#17), three situations were considered: "inadequate checklist" (#18), "inadequate flight control" (#25), or "inadequate final approach" (#32), all linked to an "OR" logic gate. These events and their ramifications were arranged according to the consultation with experts on the possible dangerous events and are based on the descriptions in operational safety reports [49, 50, 51, 52]. According to the literature, the causes of these factors are linked to omission or error in action, criteria not met for a stabilized approach, and inadequate monitoring, among others. Additionally, the basic events were obtained from field observations and consultation with specialists. According to the pilots, once an error occurs in a procedure, it is quickly detected by the flight crew; the detection of an error in the activities developed in the proposed model is practically certain to occur. However, the error correction action may be flawed, as represented in the FTA and BN (#20, #27, #34).

  6. Finally, the event "unwanted landing state" (#39) was considered to occur when there is an "unfavorable runway" (#40) or "inadequate braking" (#41). Therefore, the connection of these factors was represented by an "OR" logic gate. This link was justified by flight simulator cockpit monitoring, where an overrun event was observed in both conditions with the approach stabilized until the moment of landing; consultation with experts also suggested the occurrence of this dangerous event. In addition, the hazardous event "inadequate braking" (#41) presents the "landing gear procedure error" (#42) and the "reverse procedure error" (#43) as basic events. In the fault tree, the designated logic gate was "OR." However, the relationship of these two events was modeled in the BN with an 80% probability that braking is adequate when the landing gear procedure is adequate and the reverse procedure is inadequate, and a 20% probability that braking is adequate when the landing gear procedure is inadequate and the reverse procedure is adequate. This condition is not represented in the FTA because of its Boolean structure.
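The non-Boolean conditional probabilities in steps 2 and 6 are exactly what the BN adds over the FTA. The following minimal sketch encodes the top-event table from step 2, using the 75%/25% figure the text quotes from [48]; the zero rows reflect the text's assumption that the overrun requires both hazardous events, and the parent probabilities at the end are purely illustrative, not the chapter's values:

```python
# CPT for P(overrun = YES | unwanted_state, no_go_around); the 0.75 entry
# is the figure quoted from [48], the zero rows are the AND-gate assumption.
cpt_overrun = {
    (True, True): 0.75,
    (True, False): 0.0,
    (False, True): 0.0,
    (False, False): 0.0,
}

def p_overrun(p_state, p_no_ga):
    """Marginal P(overrun), assuming the two parent events are independent."""
    total = 0.0
    for s in (True, False):
        for g in (True, False):
            w = (p_state if s else 1 - p_state) * (p_no_ga if g else 1 - p_no_ga)
            total += cpt_overrun[(s, g)] * w
    return total

# Illustrative parent probabilities (not from the chapter)
p = p_overrun(0.1, 0.2)  # 0.75 * 0.1 * 0.2 = 0.015
```

Replacing the 0.75 with 1.0 recovers the deterministic AND gate of the fault tree, which is why this condition cannot be represented in the Boolean FTA.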

The framework of the model proposed in FTA is in Figure 1. The pilot tasks that must be analyzed in the proposed model are listed in Table 4. The model elements with negligible failure are chosen based on field research and consultation with experts.

Figure 1.

Fault tree with basic events that lead to landing overrun.

Table 4.

Basic elements of the fault tree (FTA).

2.3 Bayesian network in the construction of the TNO model

Bayesian network (BN) is defined as a graphical structure for representing the probabilistic relationships among a large number of variables and for making probabilistic inferences with those variables [53]. Bayesian networks, also known as opinion networks, causal networks, or dependency graphs, are graphic reasoning models based on uncertainty that use the concept of probability as the analyst's degree of belief, allowing expert judgments to be used as information to support decision-making processes related to complex systems [54, 55, 56]. BNs have proved useful in studies of system reliability [40, 57] and in risk analysis studies [58, 59, 60]. They have also been applied to complex systems such as nuclear plants [61, 62] and maritime transport [63, 64], and in the last 10 years, several studies on human reliability in aviation have also been developed using BNs [24, 28, 33, 35, 65, 66, 67, 68, 69, 70, 71].

A BN is a directed acyclic graph (DAG), defined as G = (V, E), where V is the set of nodes, representing either discrete or continuous variables, and E is a set of ordered pairs of distinct elements of V, called arcs (or edges), representing the dependencies between the nodes. The conditional probabilities associated with the variables are the quantitative components. The nodes and arcs are the qualitative components of the network and provide a set of conditional independence assumptions, which means that each arc built from variable X to variable Y is a direct dependence, such as a cause-effect relationship; in that case, the node representing variable X is said to be a parent node of node Y [53].

Each node within a Bayesian network is classified as a "parent," a "child," or both. These classifications refer to their relations to other nodes: child nodes are those connected to antecedent nodes, i.e., influenced by other nodes; parent nodes are those connected to descendant nodes, i.e., which have an influence on other nodes [72]. Once the topology has been specified, the conditional probability table (CPT) for each node must be specified. Each row in the table contains the conditional probability of each node value for a conditioning case, where a conditioning case is simply a possible combination of values for the parent nodes.

Considering a BN containing n nodes, X1 to Xn, taken in that order, a particular value in the joint distribution is represented by P(X1 = x1, X2 = x2, …, Xn = xn), or more compactly, P(x1, x2, …, xn), and the chain rule of probability theory allows these joint probabilities to be factorized as shown in Eq. (1). This process is then repeated, reducing each joint probability to a conditional probability and a smaller joint, until one large product is formed, as shown in Eq. (2).

P(x1, x2, …, xn) = P(xn | x1, …, xn-1) P(x1, …, xn-1)  (1)

P(x1, x2, …, xn) = P(xn | x1, …, xn-1) P(xn-1 | x1, …, xn-2) … P(x2 | x1) P(x1) = ∏_{i=1}^{n} P(xi | x1, …, xi-1)  (2)

P(x1, x2, …, xn) = ∏_{i=1}^{n} P(xi | Parents(Xi))  (3)

The quantitative analysis is based on the conditional independence assumption. Considering three random variables X, Y, and Z, X is said to be conditionally independent of Y given Z if P(X, Y | Z) = P(X | Z) P(Y | Z). The joint probability distribution of a set of variables, based on conditional independence, can be factorized as shown in Eq. (3), provided the constraint defined in Eq. (4) is verified. This equation allows any joint probability to be obtained from values found in conditional probability tables, in the case of discrete variables, or from the conditional probability density function, for continuous variables. A complete example can be found in [69].

Parents(Xi) ⊆ {X1, …, Xi-1}  (4)

Thus, each entry in the joint distribution is represented by the product of the appropriate elements of the conditional probability tables (CPTs) in the belief network; the CPTs therefore provide a decomposed representation of the joint distribution. The possibility of using evidence from the system to reassess the probabilities of network events is another important feature of BNs. Given some evidence, beliefs can be recalculated to evaluate their impact on the network nodes. The process of obtaining an a posteriori probability from an a priori probability is called Bayesian inference [53]. As emphasized by [73], inferences can be made using Bayesian networks in three distinct ways: causal, diagnostic, and intercausal.

2.4 Fault tree conversion in Bayesian networks

It is possible to combine a structured methodology such as the fault tree with the modeling and analytical power of the Bayesian network [74]. The authors also point out that any fault tree can be converted into a Bayesian network without loss of information. It is important to note that the flexibility of Bayesian network modeling can accommodate several types of dependencies among variables that cannot be included in fault tree modeling. Studies have shown that the transformation of a problem described by a fault tree into a Bayesian network is not a complex process [74, 75]. To convert the fault tree into a Bayesian network, the basic premises of the standard FTA methodology are highlighted, as follows [74]:

  • events are binary (example: appropriate/not appropriate);

  • events are statistically independent;

  • the relations between events and causes are represented by logic gates through Boolean logic, i.e., AND and OR gates; and

  • the root of the fault tree is the unwanted event; i.e., it is the top event to be analyzed.

Thus, one node must be created for each event and for each basic element in the FTA. It is important to note that in BN, each element in the FTA must be represented only once, even if there are repetitions in the fault tree. Then, the nodes must be connected, according to the logic gates present in the FTA.

Consider a subsystem composed of a logic gate whose Boolean algebra is of any nature (union, intersection, exclusive union, or others) with k branched components, which may be events or subsystems; it can be converted into its corresponding Bayesian network. If the logic gate represents a union, then only the nonoccurrence of all events avoids the occurrence of the top event, i.e., P(Top | E1c ⋂ E2c ⋂ … ⋂ Ekc) = 0, and any other combination of the Ek leads to its occurrence. Note that, according to De Morgan's theorem (propositions for simplifying expressions in Boolean algebra), Xc indicates the complementary event of X, where (X ⋃ Y ⋃ Z)c = Xc ⋂ Yc ⋂ Zc and (X ⋂ Y ⋂ Z)c = Xc ⋃ Yc ⋃ Zc. If the logic gate represents an intersection, only the simultaneous occurrence of all events leads to the top event; that is, only P(Top | E1 ⋂ E2 ⋂ … ⋂ Ek) is not null, being equal to 1. Figure 2 illustrates the conversion of FTA into BN. Each of the figures has two independent basic events, A and B, and the top event C.
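This conversion rule amounts to filling each child's CPT deterministically from the gate's truth table. A generic sketch (not tied to the chapter's specific network) for k-input OR and AND gates:

```python
from itertools import product

def gate_cpt(kind, k):
    """Deterministic CPT of a k-input gate: parent state tuple -> P(Top = 1)."""
    cpt = {}
    for parents in product([False, True], repeat=k):
        if kind == "OR":      # union: top occurs unless all parents are absent
            cpt[parents] = 1.0 if any(parents) else 0.0
        elif kind == "AND":   # intersection: top occurs only if all parents occur
            cpt[parents] = 1.0 if all(parents) else 0.0
        else:
            raise ValueError(kind)
    return cpt

or_cpt = gate_cpt("OR", 2)
and_cpt = gate_cpt("AND", 2)
# or_cpt[(False, False)] == 0.0 and every other OR row == 1.0
# and_cpt[(True, True)] == 1.0 and every other AND row == 0.0
```

Once the gates are encoded as CPTs, any row can be softened (as in the 75%/25% and 80%/20% conditions of the TNO model), which is precisely the dependency flexibility the fault tree cannot express.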

Figure 2.

BN for the "AND" logic gate (left) and the "Exclusive OR" logic gate (right).


3. Results

The result of the transformation of the FTA into a BN is presented qualitatively in Figure 3. The Bayesian network of the proposed model presents two states, negative and positive, for each node. The negative state represents the probability that the node's failure occurs (characterized by the word YES), and the positive state represents the probability that the failure does not occur (characterized by the word NO). The node in red represents the landing overrun, and its positive and negative states represent the probability of the accident occurring, given the factors of the developed network.

Figure 3.

Bayesian network for the chain of events that lead to landing overrun.

According to the field research and expert opinion, the tasks that demand the most from pilots during approach and landing are listed below. From these tasks, the chain of dangerous events described in the development of the model was also obtained:

  • decide if the aircraft continues to approach and/or landing (go-around);

  • landing briefing;

  • landing checklist;

  • control of aircraft parameters;

  • execution of the drag procedure on final approach; and

  • execution of the braking procedure (landing gear and reverse).

It should be noted that this work does not intend to introduce the factors of each task and their probabilities in this example, but rather to present an accident model for the approach and landing phases, related to the tasks performed by the pilots, that can be visually understood. The TNO model includes the main tasks performed by the crew and the chain of dangerous events that can lead to a landing overrun.

From this model, it is possible to obtain the relations among the factors that can influence the pilots' performance and, therefore, to indicate how they can impact the success or failure of the tasks related to the approach and landing procedures. For each of these tasks, it is possible to develop more focused studies and to obtain the organizational, environmental, and human factors and the main abilities around each of them. One way to obtain the factors contributing to the negative state of each of these tasks was suggested in [28]. Once they are obtained, a way to develop the Bayesian network with these factors and to find the probability of each of the states, positive and negative, is given in [71].

The main advantage of transforming the FTA model into a BN is the ability to verify the sensitivity of each node given the accident and to obtain its impact. It is also possible, for example, to obtain the probability of an accident occurring given that an error occurred in some task. This type of approach is only possible in a BN, which is one of the advantages of using this method for risk analysis. Finally, the network data can be obtained by consulting specialists and/or from the literature.
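The sensitivity check described here can be sketched with hypothetical numbers on a single task-error node: compare the accident probability with and without evidence that the task failed.

```python
# Illustrative sensitivity sketch (all probabilities are made up):
# one task-error parent node feeding the accident node.
def p_accident(p_task_error, p_acc_given_err=0.30, p_acc_given_ok=0.001):
    """Marginal accident probability for a single task-error parent node."""
    return p_task_error * p_acc_given_err + (1 - p_task_error) * p_acc_given_ok

baseline = p_accident(0.05)   # prior belief about the task
with_err = p_accident(1.0)    # causal inference: evidence that the task error occurred
impact = with_err / baseline  # how strongly this node moves the top event
```

Repeating this comparison node by node ranks the tasks by their impact on the top event, which is the prioritization the text proposes for mitigation actions.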


4. Conclusions

Human factors are the most important source of uncertainty in any model, although many techniques and computational tools have arisen in recent decades to deal with the complexity of sociotechnical systems. To obtain a representative analysis of the real system, a systemic vision of the process is required. However, modeling the operational procedures of a system, or its main tasks, is not an easy step. It is therefore important first to know the system to be modeled and then to analyze the factors (and their relationships) that can contribute to an occurrence. For such information, a search of the literature and research with pilots and accident investigators become extremely important.

The proposed model was described and used to model the relationship between the main operational procedures performed by the flight crew and the pilots' skills, and to support the construction of a BN to quantitatively analyze the event of interest. Unlike other studies, the TNO model proposes a systematic and efficient way to organize the influencing factors through an FTA and, consequently, to obtain a probabilistic analysis through a BN. The use of the BN to find the most probable cause, with the objective of identifying the most important factors and prioritizing mitigation actions, is also an important contribution of this work. As far as we know, no other study has proposed a similar approach.

It should be noted that factors related to component failures in aircraft systems are not considered in the general model. This is because failures in aeronautical equipment are already traditionally studied and modeled, besides presenting a low probability of occurrence. Therefore, the emphasis was placed on the human actions of the pilots. Thus, our intention was to model the main tasks of the flight crew considering the factors that precede crew error. For the model to provide a representative analysis of the real system, a systemic view of the process is also needed; in this sense, this accident model fills that gap.

The results provide support for proposing mitigating actions and can contribute to the management of air transport operational safety. The best way to improve the latter is to address its most sensitive points. Thus, the factors highlighted in the analysis, once prioritized within the company, can promote the reduction of runway excursions during the landing procedure of medium and large aircraft.


Acknowledgments

The authors are especially grateful to the pilots and engineers who participated in the consultations conducted during this research. The authors also thank CAPES and FAPESP (Brazil) for the financial support given to this study.

References

  1. IATA, ICAO. Runway Excursion Risk Reduction Toolkit. 2nd ed. Montréal, Québec: IATA & ICAO; 2012
  2. Boeing. Statistical Summary of Commercial Jet Airplane Accidents: Worldwide Operations 1959-2015. Seattle, Washington: Boeing Commercial Airplanes; 2016
  3. Wiegmann DA, Shappell SA. A Human Error Approach to Aviation Accident Analysis: The Human Factors Analysis and Classification System. England: Ashgate Publ; 2012
  4. President's Airport Commission. The Airport and Its Neighbors. Washington, DC: Government Printing Office; 1952. 116 p. Available from: http://www.dot.state.mn.us/aero/planning/documents/airportanditsneighbors.pdf [Accessed: June 3, 2012]
  5. U.S. Environmental Protection Agency. EPA 550/9-77-353: Air Installations Compatible Use Zones (AICUZ) Program. Washington, DC: Federal Noise Program Reports Series; 1977;1:96
  6. Federal Aviation Administration. Location of Commercial Aircraft Accidents/Incidents Relative to Runways. Washington, DC: FAA; 1990. Report No. DOT/FAA/AOV90-1
  7. Ashford N, Wright PH. Airport Engineering. 3rd ed. New York: Wiley-Interscience Publ.; 1992
  8. Eddowes M, Hancox J, Macinnes A. Final Report on the Risk Analysis in Support of Aerodrome Design Rules: A Report Produced for the Norwegian Civil Aviation. United Kingdom: AEA Technology; 2001. Report No. AEAT/RAIR/RD02325/R/002
  9. CALTRANS. California Airport Land Use Planning Handbook. California: California Department of Transportation; 2002. 455 p. Available from: http://www.dot.ca.gov/hq/planning/aeronaut/documents/alucp/AirportLandUsePlanningHandbook.pdf [Accessed: May 6, 2012]
  10. Hall J et al. ACRP Report 3: Analysis of Aircraft Overruns and Undershoots for Runway Safety Areas. Washington, DC: Transportation Research Board, TRB; 2008
  11. Flight Safety Foundation. Runway Excursion. Alexandria, VA: ALAR; 2009. Available from: http://www.skybrary.aero/bookshelf/books/865.pdf [Accessed: March 28, 2015]
  12. Ayres M et al. ACRP Report 50: Improved Model for Risk Assessment of Runway Safety Areas. Washington, DC: Transportation Research Board; 2011
  13. European Aviation Safety Agency (EASA). Annual Safety Review. Germany: EASA. Available from: https://www.easa.europa.eu/sites/default/files/dfu/218639_EASA_ASR_MAIN_REPORT_2018.pdf [Accessed: November 29, 2018]
  14. International Air Transport Association (IATA). Safety Report. Montreal: IATA; 2014. Available from: http://www.iata.org/publications/Pages/safety_report.aspx [Accessed: January 12, 2015]
  15. Roelen ALC. Causal risk model of air transport - comparison of user needs and model capabilities [PhD thesis]. Delft University of Technology; 2008
  16. Heinrich HW, Petersen D, Roos N. Industrial Accident Prevention: Safety Management Approach. 5th ed. New York: McGraw-Hill; 1980
  17. Leveson N. Safeware: System Safety and Computers. 1st ed. New York: Addison-Wesley Professional Company, Inc.; 1995. 704 p. ISBN: 0-201-11972-2
  18. Rasmussen J. Risk management in a dynamic society: A modelling problem. Safety Science. 1997;27(2):183-213
  19. Hollnagel E. Barriers and Accident Prevention. Hampshire: Ashgate Publ; 2004
  20. Leveson N. A new accident model for engineering safer systems. Safety Science. 2004;42:237-270
  21. Skorupski J. About the need of a new look at safety as a goal and constraint in air traffic management. Procedia Engineering. 2017;187:117-123. DOI: 10.1016/j.proeng.2017.04.357
  22. Machol RE. Thirty years of modelling midair collisions. Interfaces. 1995;25:151-172
  23. Greenberg RA. Quantitative safety model of systems subject to low probability high consequence accidents [Thesis]. University of South Australia; 2007. 465 f
  24. Netjasov F, Janic M. A review of research on risk and safety modelling in civil aviation. Journal of Air Transport Management. 2008;14(4):213-220
  25. Skorupski J, Wiktorowski M. Chapter 129: The model of a pilot competency as a factor influencing the safety of air traffic. In: Nowakowski T, editor. Safety and Reliability: Methodology and Applications. London: Taylor & Francis Group; 2015. DOI: 10.1201/b17399-138
  26. Hughes BP et al. A review of models relevant to road safety. Accident Analysis and Prevention. 2015;74:250-270
  27. Skorupski J. Modelling of traffic incidents in transport. TransNav: The International Journal on Marine Navigation and Safety of Sea Transportation. 2012;6(3):357-365
  28. Bandeira MCGSP, Correia AR, Martins MR. General model analysis of aeronautical accidents involving human and organizational factors. Journal of Air Transport Management. 2018;69:137-146
  29. Reich P. Analysis of long range air traffic systems: Separation standards—I, II and III. Journal of the Institute of Navigation. 1966;19:88-96, 169-176, 331-338
  30. Hawkins FH. Human Factors in Flight. 2nd ed. Aldershot, England: Avebury Technical; 1993
  31. Helmreich RL, Merritt AC. Culture at Work in Aviation and Medicine: National, Organizational, and Professional Influences. Aldershot, England; Brookfield, VT: Ashgate; 1998
  32. Shappell S, Wiegmann D. The Human Factors Analysis and Classification System (HFACS). Washington, DC: Federal Aviation Administration; 2000. Report No. DOT/FAA/AM-00/7
  33. Greenberg R, Cook SC, Harris D. A civil aviation safety assessment model using a Bayesian belief network (BBN). Aeronautical Journal. 2005;109(1101):557-568. DOI: 10.1017/S0001924000000981
  34. Liou JJH, Yen L, Tzeng G-H. Building an effective safety management system for airlines. Journal of Air Transport Management. 2008;14(1):20-26
  35. Roelen ALC, Lin PH, Hale AR. Accident models and organizational factors in air transport: The need for multi-method models. Safety Science. 2011;49:5-10. DOI: 10.1016/j.ssci.2010.01.022
  36. Lower M, Magott J, Skorupski J. A system-theoretic accident model and process with human factors analysis and classification system taxonomy. Safety Science. 2018;110(Part A):393-410. DOI: 10.1016/j.ssci.2018.04.015
  37. Edwards E. Man and machine: Systems for safety. In: Proceedings of British Airline Pilots Associations Technical Symposium. London: British Airline Pilots Association; 1972. pp. 21-36
  38. Reason J. Human Error. Cambridge, UK: Cambridge University Press; 1990
  39. Leveson N. Engineering a Safer World: Systems Thinking Applied to Safety. London: MIT Press; 2012
  40. Martins MR, Maturana MC. Application of Bayesian belief networks to the human reliability analysis of an oil tanker operation focusing on collision accidents. Reliability Engineering and System Safety. 2013;110:89-109
  41. Swain AD, Guttmann HE. Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. Albuquerque, NM, USA: Sandia National Laboratories; 1983. Report No. NUREG/CR-1278-F, SAND80-0200
  42. TAM. Manual Geral de Operações (MGO) [General Operations Manual]. Rev. 05. São Paulo, SP: TAM Airlines; 2011. 752 p. Available from: https://pt.scribd.com/document/246034510/MGO-TAM-REV5-pdf [Accessed: June 19, 2016]
  43. Airbus. Flight Crew Training Manual: Reference TAM A319/A320/A321. [S.l.]. Blagnac Cedex, France: FLEET FCTM; 2013. 412 p
  44. Airbus. Flight Crew Operating Manual: Reference TAM A319/A320/A321. [S.l.]. Blagnac Cedex, France: FLEET FCOM; 2014. 618 p
  45. Boeing. B737 NG/MAX: Flight Crew Training Manual. Revision Number 14. Washington: Boeing; 2015. 404 p
  46. Lee WS et al. Fault tree analysis, methods, and application: A review. IEEE Transactions on Reliability. 1985;34(3):194-203
  47. Modarres M. What Every Engineer Should Know about Reliability and Risk Analysis. New York: Marcel Dekker, Inc.; 1993
  48. National Transportation Safety Board (NTSB). Aviation Accident Data Summary. Accident Number: DCA13FA131; 2013. Washington, DC; 2015. 12 p. Available from: https://app.ntsb.gov/pdfgenerator/ReportGeneratorFile.ashx?EventID=20130723X13256&AKey=1&RType=Final&IType=FA [Accessed: February 16, 2016]
  49. Flight Safety Foundation (FSF). Approach and Landing Accident Reduction (ALAR). Alexandria, Virginia: FSF ALAR Task Force; 1998. Available from: http://flightsafety.org/current-safetyinitiatives/approach-and-landing-accident-reduction-alar [Accessed: March 28, 2015]
  50. Airbus. Flight Operations Briefing Notes (No. FLT OPS – SOP – SEQ 01 – REV 04), Standard Operating Procedures (SOPs): Operating Philosophy. Toulouse, France: Airbus Customer Services; 2006
  51. International Federation of Airline Pilots' Associations (IFALPA). Runway Safety Manual. Montreal, Québec: IFALPA; 2009
  52. Federal Aviation Administration. Instrument Approach Handbook. Washington, DC: FAA; 2014. Available from: http://www.faa.gov/regulations_policies/handbooks_manuals/aviation/instrument_procedures_handbook/media/Chapter_4.pdf [Accessed: May 8, 2015]
  53. Neapolitan RE. Learning Bayesian Networks. New Jersey: Pearson Prentice Hall; 2004
  54. Pearl J. Probabilistic Reasoning in Intelligent Systems. San Mateo, California: Morgan Kaufmann; 1988
  55. Jensen FV, Nielsen TD. Bayesian Networks and Decision Graphs, Information Science and Statistics. New York, NY: Springer; 2007
  56. Cowell RG, Dawid P, Lauritzen S, Spiegelhalter D. Probabilistic Networks and Expert Systems: Exact Computational Methods for Bayesian Networks, Statistics for Engineering and Information Science Series. New York: Springer; 2007
  57. Schleder AM, Martins MR, Modarres M. The use of Bayesian networks in reliability analysis of the LNG regasification system on a FSRU under different scenarios. In: Twenty-Second International Offshore and Polar Engineering Conference. Rhodes, Greece: ISOPE; June 17-22, 2012. pp. 881-888
  58. Ale B, Van Gulijk C, Hanea A, Hanea D, Hudson P, Lin PH, et al. Towards BBN based risk modelling of process plants. Safety Science. 2014;69:48-56. DOI: 10.1016/j.ssci.2013.12.007
  59. Martins MR, Schleder AM, Droguett EL. A methodology for risk analysis based on hybrid Bayesian networks: Application to the regasification system of liquefied natural gas onboard a floating storage and regasification unit. Risk Analysis. 2014;34(12):2098-2120
  60. Martins MR et al. Quantitative risk analysis of loading and offloading liquefied natural gas (LNG) on a floating storage and regasification unit (FSRU). Journal of Loss Prevention in the Process Industries. 2016;43:629-653
  61. Sundaramurthi R, Smidts C. Human reliability modeling for the next generation system code. Annals of Nuclear Energy. 2013;52:137-156
  62. Martins MR, Maturana MC, Frutuoso PFF. Methodology for system reliability analysis during the conceptual phase of complex system design considering human factors. In: International Topical Meeting on Probabilistic Safety Assessment and Analysis. Sun Valley, ID: NAS/PSA; April 26-30, 2015. pp. 1-14
  63. Martins MR, Maturana MC. Human error contribution in collision and grounding of oil tankers. Risk Analysis. 2010;30(4):674-698
  64. Zhang G, Thai VV. Expert elicitation and Bayesian network modeling for shipping accidents: A literature review. Safety Science. 2016;87:53-62. DOI: 10.1016/j.ssci.2016.03.019
  65. Ale BJM, Bellamy LJ, Cooke RM, Goossens LHJ, Hale AR, Roelen ALC, et al. Towards a causal model for air transport safety—An ongoing research project. Safety Science. 2006;44:657-673. DOI: 10.1016/j.ssci.2006.02.002
  66. Luxhøj JT, Coit DW. Modeling low probability/high consequence events: an aviation safety risk model. In: Reliability and Maintainability Symposium. Newport Beach, CA, USA: IEEE; 2006. pp. 215-221
  67. Ale BJM et al. Further development of a causal model for air transport safety (CATS): Building the mathematical heart. Reliability Engineering and System Safety. 2009;94(9):1433-1441
  68. Mohaghegh Z, Kazemi R, Mosleh A. Incorporating organizational factors into probabilistic risk assessment (PRA) of complex socio-technical systems: A hybrid technique formalization. Reliability Engineering and System Safety. 2009;94:1000-1018. DOI: 10.1016/j.ress.2008.11.006
  69. Brooker P. Experts, Bayesian belief networks, rare events and aviation risk estimates. Safety Science. 2011;49:1142-1155. DOI: 10.1016/j.ssci.2011.03.006
  70. Bandeira MCGSP, Correia AR, Martins MR. Method for measuring factors that affect the performance of pilots. Transport. 2017;25(2):156-169
  71. Bandeira MCGSP, Correia AR, Martins MR. Landing accident model for medium and large sized commercial aircraft. In: 22nd Air Transport Research Society World Conference, Seoul, South Korea; 2018
  72. Stanton N, Landry S, Di Bucchianico G, Vallicelli A. Advances in Human Aspects of Transportation: Part III, Advances in Human Factors and Ergonomics. In: AHFE Conference; 2014
  73. Russell S, Norvig P. Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall/Pearson Education; 2010
  74. Bobbio A et al. Improving the analysis of dependable systems by mapping fault trees into Bayesian networks. Reliability Engineering and System Safety. 2001;71(3):249-260
  75. Hamada M et al. A fully Bayesian approach for combining multilevel failure information in fault tree quantification and optimal follow-on resource allocation. Reliability Engineering and System Safety. 2004;86(3):297-305
