Open access peer-reviewed chapter

Probabilistic Modeling Processes for Oil and Gas

Written By

Vsevolod Kershenbaum, Leonid Grigoriev, Petr Kanygin and Andrey Nistratov

Submitted: 08 October 2017 Reviewed: 06 February 2018 Published: 26 September 2018

DOI: 10.5772/intechopen.74963

From the Edited Volume

Probabilistic Modeling in System Engineering

Edited by Andrey Kostogryzov

Abstract

Different uncertainties are researched for providing safe and effective development of hydrocarbon deposits and rational operation of oil and gas systems (OGS). Original models and methods, applicable in education and practice for solving problems of system engineering, are proposed. These models allow us to analyze natural and technogenic threats to oil and gas systems on a probabilistic level for a given prognostic time. The transformation and adaptation of the models are demonstrated by examples connected with non-destructive testing. Measures of counteraction to threats for the typical manufacturing processes of gas preparation equipment at an enterprise are analyzed. The risks for pipelines pumping liquefied natural gas across South American territory are predicted. Results of probabilistic modeling of sea gas-and-oil-producing systems from the point of view of their vulnerability (including various scenarios of possible terrorist influences) are analyzed and interpreted.

Keywords

  • analysis
  • modeling
  • operation
  • probability
  • process
  • risk
  • system
  • threat

1. Introduction

The history of the development of the oil and gas industry all over the world, and in the Russian Federation, is impressive. Since the 1930s, large oil and gas fields have been discovered, and a huge number of oil refining and petrochemical plants have been constructed. In recent years, the role of the gas branch has essentially increased; pipeline transport, thanks to which the basic part of Russia's territory is provided with gas, oil and mineral oil, has actively developed; export of these products is carried out; and sea deposits are being developed. Hydrocarbon reservoirs, pipeline transport, oil refining and petrochemical plants, various storehouses of oil and gas, sea platforms and terminals and so on are examples of objects of modeling in oil and gas systems (OGS)—see Figure 1.

Figure 1.

Objects of modeling.

The technological processes of the oil and gas branches are various. As a matter of fact, they cover the whole spectrum of processes from hydrocarbon extraction to end-product production: geological and geophysical research; drilling; development of hydrocarbon reservoirs (on land, on shelves and at sea); pipeline transport and oil and gas storage; refining and chemistry. The end products of oil and gas manufacturing are used in the majority of branches of the modern economy. Unfortunately, competing claims to hydrocarbon deposits remain a cause of international conflicts.

The oil and gas branch has features that must be considered when creating control systems for technological processes and during construction. Technological processes are continuous, and the objects are complex and demand synergistic research at the management level. Objects of oil and gas manufacturing are technologically dangerous; therefore, the role of safety systems and ecological monitoring is significant. The initial information is characterized by a high level of uncertainty generated by natural factors.

The automated dispatching control (ADC) meets the requirements of continuous technological process control (ADC is a heterogeneous man–machine control system of the technological process, integrating the dispatcher with an information-operating system and providing automatic information gathering, transfer, processing and display [1, 2]).

Theoretical bases for creating heterogeneous control systems are at the formation stage. Effective ADC operation in general depends on the quality of modeling objects and managerial processes. Problems of modeling for oil and gas systems should be considered at two levels:

  • problems of technological process control, taking into account the features of oil and gas manufacture, and

  • problems of monitoring and prediction of the integrated metrics, providing safe and competitive development of the OGS enterprises.

In such a manner, the systemic uncertainty inherent in oil and gas technologies due to the specificity of the objects under study leads to the need for modeling oil and gas systems, the goal of which is ultimately to manage risks at all levels of the hierarchy and all stages of the life cycle [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].

The problems posed are quite sophisticated, owing to the complexity of the systems being studied, whose operation is clearly non-linear. At the same time, they are highly topical, taking into account the noted role of the OGS in the economy of the modern world.

The proposed probabilistic approaches, applicable in the system’s life cycle, help to answer the main question: “What rational measures should lead to expected effects without wasted expenses, when, by which controllable and uncontrollable conditions and costs?”

2. About the problems that are due to be solved by probabilistic modeling

Modeling demands an analysis of the specificity of OGS and an estimation of the existing uncertainties. Prominent features of objects of oil and gas manufacture, characterizing the uncertainties and the complexity of modeling, are presented in Table 1.

Object or process: distinctive factors (uncertainty) and applicable modeling tools

Geological structures (an isolated area of the Earth's shell, differing from the adjacent regions in its tectonic behavior, i.e., a specific combination of geological formations and their bedding and structural conditions): A symbiosis of textual and cartographical information is used. The most common tasks are correlation analysis, cluster analysis, association theory and interpolation theory, as well as spatial data modeling with the evaluation of variograms (a measure of spatial correlation) and other estimates.
Modeling of natural phenomena is one of the most progressive lines of modern science. It is based on digital models of geological data combined with spatial databases. Computer modeling tasks in practical geology are solved with the help of modeling software packages. Current tendencies of geostatistics are connected with the development of spatial analogues of Monte-Carlo methods, approaches based on multipoint statistics, hybrid models with artificial-intelligence algorithms, the use of additional information of various types, and applications in image processing and transfer, among others [11].
Oil (typified as an oil disperse system): In an oil disperse system, the behavior principles and physical-chemical properties in the molecular and disperse states can differ considerably, which is the reason for the non-linear response to changes in the character and scale of external inputs. Phase transitions then occur, and the system properties change qualitatively. Research in this field shows that oil systems are structured at the nanoscale level, which creates a basis for the development of new technologies in the oil and gas industry.
Thus, phase transitions with a change of the aggregate state are possible. This is an object of synergistic analysis; physical-chemical analysis models and fractal analysis are also applied.
Hydrocarbon reservoirs: They were formed over millions of years. Recently the number of hypotheses about the origin of hydrocarbon reservoirs has increased, and, generally speaking, self-organizing processes are typical for oil and gas reservoirs.
Hydrocarbon reservoir rock (a rock containing voids, i.e., pores, cavities and others, with the potential to store and filter fluids): Porosity and permeability are the key indexes for solving the estimation task. Deformations are typical for reservoir rocks. In certain cases it is necessary to take non-Newtonian fluids into account and to use rheological models.
Most dependencies have a non-linear character. The initial information has a statistical character, so the apparatus of mathematical statistics and probabilistic modeling is actively applied. The percolation task also has its special features.
Processes of the petrochemical industry and oil refining: Non-linear processes with the application of catalysts whose activity varies with time. The most widespread modeling and engineering evaluation systems are actively used for project design as well as for the calculations supporting management.
Management of the oil production process ("intelligent field", i.e., I-field): When solving the problem of hydrocarbon production from a reservoir, the task of adaptive management with Kalman filter application arises. In the classic case, the adaptive management task supposes correction of the object model with generation of the control input. For a hydrocarbon reservoir, the uncertainty increases as the object characteristics change during the reservoir development process.
Estimation and identification tasks are widely applied.
Management in emergency situations and accidents: Negative phenomena connected with phase transitions occur in the oil disperse system during the technological processes of oil refining, hydrocarbon reservoir development, well drilling and other processes of the oil and gas industry.
These phenomena may include sedimentation of asphaltenes, paraffins and salts, formation of gas hydrates and others.
Physical-chemical analysis of oil disperse systems is supposed to be the key point in the development of decision-support systems for emergency situations.
Together with experimental research, computer modeling at the molecular level is carried out. Based on the results, the type of catastrophe is evaluated and the order parameter is identified.
Finally, recommendations for management in emergency situations are developed.

Table 1.

Distinctive uncertainties of the objects and processes in the oil and gas industry.

The performed analysis has revealed prominent features of the uncertainties of separate objects of oil and gas manufacture and has shown that probabilistic modeling, models for estimation and identification and the Monte-Carlo method are widely and successfully applied for solving problems of technological process control. The nature of the uncertainty of processes and objects of oil and gas manufacture is various; it is in many respects caused by the long processes of hydrocarbon formation. Therefore, the emergence of the technology of evolutionary modeling, often named synergistic analysis (with the theory of non-linear systems and self-organizing processes, deterministic chaos, fractal analysis, etc.), has considerably expanded the possibilities of research into the natural uncertainty of oil and gas manufacture.

Evolutionary processes, as a basis of development, are actively used not only in system analysis in a control context but also for solving problems at the level of organizational and economic management. At this level, the nature of uncertainty is connected with multiple criteria. In Figure 2, the evolution of risk-oriented criteria is shown: from economic criteria to competitiveness.

Figure 2.

Evolution of risk-oriented criteria.

In different areas, heterogeneous threats to complex systems are inevitable, and uncertainties in the system's life cycle are usual. Different problems connected with evaluations, comparisons, selections, control, system analysis and optimization are solved by the probabilistic modeling of processes according to system engineering standards (general ones such as ISO/IEC/IEEE 15288, ISO 9001, IEC 60300, IEC 61508, CMMI, etc., and ones specific to the oil and gas industry such as ISO 10418, ISO 13702, ISO 14224, ISO 15544, ISO 15663, ISO 17776, etc.). The accumulated experience confirms the high importance of scientific system research based on probabilistic modeling. For example, in general cases, prediction and optimization are founded on modeling different processes. Any process is a repeated sequence of consuming time and resources to receive outcomes, in all application areas. From the probability point of view, the moments of the beginning and end of any activity are random events on the timeline. In practice, the majority of timed activities are repeated during the system's life cycle (estimations, comparisons, analysis, rationale, etc.) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. Some problems that are due to be and can be solved by the mathematical modeling of processes according to ISO/IEC/IEEE 15288 "Systems and software engineering. System life cycle processes" are shown in Figure 3. The application of models allows one to manage risks rationally, raise the quality and safety of oil and gas systems and, at the expense of them, be successful in the market—see in Figure 4 the example of formalized problems which are solved on the basis of probabilistic modeling in the life cycle [6].

Figure 3.

The problems that are due to be and can be solved by probabilistic modeling processes.

Figure 4.

Examples of formalized problems which are solved on the base of probabilistic modeling.

The summary of the analysis of existing approaches is presented in the next section.

The existing risk control concept tries to consider different uncertainties. However, when applied to various areas, the results of information gathering and processing are not used purposefully for modeling: because the models of risk prediction used in the majority of complex systems are specific, results and interpretations are not comparable. A universal objective scale of measurement has not been established yet. Moreover, the terms "acceptable quality" and "admissible risk" should be defined on a probability scale only in connection with corresponding methods and precedents (considering system analogues). For heterogeneous threats, an analytical rationale of balanced preventive measures of system integrity support, under limitations on admissible risks and resources, cannot be obtained in many cases. Probabilistic modeling aimed at pragmatic effects helps to prove probability levels of "acceptable quality" and "admissible risk" for different systems in a uniform interpretation, creates techniques to solve different quality problems and helps in risk optimization. It supports decision-making in quality and safety and/or helps to avoid wasted expenses in the system's life cycle—see the proposed purposeful way in Figure 5, based on dozens of probabilistic models and software tools [6]. Universal metrics for system processes are proposed: the probabilities of success or failure during a given period for an element, subsystem and system. A calculation of these metrics within the limits of the offered probability space, built on the basis of the theory of random processes, allows one to predict quality and risks on a uniform probability scale, quantitatively proving comprehensive levels of acceptable quality and admissible risks from precedent cases. The prediction of risks can widely use safety monitoring data and statistics. In practice, the application of the proposed models and methods allows a customer to formulate better justified requirements and specifications, a developer to implement them rationally without wasted expenses and a user to use the system's potential in the most effective way [1–12].

Figure 5.

The proposed way to support making-decisions in quality and safety.

In the general case, a probabilistic space (Ω, B, P) for the evaluation of system operation processes should be proposed, where Ω is a limited space of elementary events, B is the class of all subspaces of the space Ω satisfying the properties of a σ-algebra and P is a probability measure on the space of elementary events Ω. Because Ω = {ωk} is limited, it is enough to establish a reflection ωk → pk = P(ωk) such that pk ≥ 0 and Σk pk = 1.

Descriptions of some of the proposed probabilistic models and methods, together with their transformations, adaptations, applications and result interpretations, follow.

3. Model for estimating non-destructive testing

Problems of item content analysis arise everywhere for oil and gas systems throughout their life cycle. Pipes and pipelines, equipment (e.g., fountain armature, columned heads and welded tanks) and monolithic walls of buildings and constructions that must be checked for voids can be considered as such items—see Figure 6.

Figure 6.

Examples of item content analysis.

For solving some problems of item content analysis, the existing probabilistic model "information faultlessness after checking" may be used by renaming its input and output [6]. For example, for estimating non-destructive testing, the probability of soundness of the checked item (renamed) may be estimated instead of the probability of information faultlessness during the required term (according to the referenced model [6]). Soundness of the checked item means zero defects (or anomalies) after non-destructive testing during the given term.

What about the effectiveness of non-destructive testing methods for some technical items?

Example 1: Let the application of some instruments of non-destructive testing be planned to check 10,000 conditional items (the items may be meters of pipes, square meters of walls in storehouses and so on). The operator using the instruments forms a system for non-destructive testing. The speed of testing equals 5000 items a day. Taking into account the human factor, the frequency of first-type errors (when the absence of a defect [anomaly] is accepted as a defect [anomaly]) equals one error a week. The mean time between second-type errors for the system (when a real defect [anomaly] is not revealed) is one month. The non-destructive testing is performed continuously for 10 days. It is necessary to estimate the maximum density of defects (anomalies) for which the probability of soundness of the checked 10,000 conditional items is more than 0.90.

The results of probabilistic modeling have shown that the required density is about 0.02%, that is, 2 defects (anomalies) per 10,000 items. In addition, it is expedient to notice that, starting from a defect density of about 1%, the probability of soundness stabilizes at the level of 0.88. It does not fall lower because first-type and second-type errors occur seldom in example 1.
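The referenced model [6] is not reproduced here, but the dependence of soundness on defect density can be illustrated by a deliberately simplified sketch in which every item is defective with probability d and every present defect is missed with an assumed probability p_miss; both the formula and the value of p_miss are illustrative assumptions, not parameters of example 1.

```python
# A simplified sketch (not the referenced model [6]): each of N items is
# defective with probability d; each present defect is missed with an
# assumed probability P_MISS, so P(soundness) = (1 - d * P_MISS)**N.

N_ITEMS = 10_000      # conditional items checked, as in example 1
P_MISS = 0.05         # assumed per-defect miss probability (illustrative only)
P_REQUIRED = 0.90     # required probability of soundness

def p_soundness(density: float) -> float:
    """Probability that no defect remains undetected after the check."""
    return (1.0 - density * P_MISS) ** N_ITEMS

def max_density() -> float:
    """Largest defect density still satisfying P(soundness) >= P_REQUIRED."""
    # Solve (1 - d * P_MISS)**N_ITEMS = P_REQUIRED for d.
    return (1.0 - P_REQUIRED ** (1.0 / N_ITEMS)) / P_MISS

if __name__ == "__main__":
    d_max = max_density()
    print(f"max admissible density ~ {d_max:.3%}, P(soundness) = {p_soundness(d_max):.3f}")
```

With the assumed p_miss = 0.05, this crude bound happens to give a maximum density of about 0.02%, the same order as the modeling result above; that agreement depends entirely on the assumed miss probability.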

Example 2: Continuing example 1, it is necessary to justify the minimum speed of non-destructive testing of the checked volume for which the probability of soundness of the 10,000 conditional items will exceed 0.95 during continuous work within an 8-h working day.

The results of probabilistic modeling are reflected in Figure 7.

Figure 7.

The way for rationale speed of non-destructive testing.

The analysis shows that the found rational speed is about 1100 items per hour, and the proportion of defects remaining after the control in the checked volume of 10,000 items will be 0.0008% against the primary 0.02%. It can be interpreted as follows: for a checked volume of 100,000 items (i.e., 10 times the primary 10,000, when the quantity of defects is 20), the average residual quantity of defects will not exceed 1. It means that under the conditions of the second example, 19 of 20 defects will be revealed in time with a probability of 0.95 or more.
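This interpretation is simple arithmetic with the stated proportions:

```latex
100\,000 \times 0.02\% = 20 \ \text{defects before control}, \qquad
100\,000 \times 0.0008\% = 0.8 < 1 \ \text{expected residual defect},
```

so, on average, at least 19 of the 20 defects are revealed.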

4. Models for “black box” and for complex structures

The probabilistic approaches for modeling “black box” and complex structures operating in conditions of heterogeneous threats are proposed.

4.1. “Black box”

Two general technologies of providing protection from critical influences on a system are proposed: technology 1 is periodical diagnostics of system integrity (without continuous monitoring between diagnostics), and technology 2 adds continuous monitoring between the periodical diagnostics to technology 1—see Figure 8.

Figure 8.

Some accident events for technology 2 (left – “Correct operation”, right – “a loss of integrity” during Treq.).

Technology 1 is based on periodical diagnostics of system integrity, which is carried out to detect danger source penetration into a system from threats (destabilizing factors) or the consequences of negative influences. Lost system integrity can be detected only as a result of diagnostics, after which system recovery starts. Dangerous influence on a system occurs step by step: at first, a danger source penetrates into the system, and then, after its activation, it begins to influence the system. System integrity cannot be lost before a penetrated danger source is activated. Danger from threats (destabilizing factors) is considered to be realized only after a danger source has influenced the system.

Technology 2, unlike the previous one, implies that operators alternating with each other trace system integrity between diagnostics (an operator may be a person, a special device or their combination). In the case of detecting a danger source, the operator recovers system integrity. The ways of recovering integrity are analogous to those of technology 1. Faultless operator actions provide the neutralization of a danger source trying to penetrate into the system. When operators alternate, a complex diagnostic is held. Penetration of a danger source is possible only if an operator makes an error, and a dangerous influence occurs only if the danger is activated before the next diagnostic; otherwise, the source will be detected and neutralized during the next diagnostic.

It is supposed for technologies 1 and 2 that the used diagnostics allow the necessary system integrity recovery after revealing danger source penetration into the system or the consequences of influences. Assumption: for all time input characteristics, the probability distribution function (PDF) exists. Thus, the probability of correct system operation within the given prognostic period (i.e., the probability of success) may be computed as a result of using the models. For identical damage, the risk of losing integrity is the complement to 1 of the probability of correct system operation, R = 1 − P [3, 4].

The next variants are possible for technologies 1 and 2: variant 1, when the given prognostic period Treq is less than the established period between neighboring diagnostics (Treq < Tbetw. + Tdiag), and variant 2, when the given period Treq is more than or equal to the established period between neighboring diagnostics (Treq ≥ Tbetw. + Tdiag). Here, Tbetw. is the time between the end of one diagnostic and the beginning of the next one, and Tdiag is the diagnostic time.
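The closed-form expressions of [3, 4] are not given here, but the logic of technology 1 can be illustrated by a small Monte-Carlo sketch; the exponential distributions of source occurrence and activation times, and all numeric values in the usage example, are assumptions made only for illustration.

```python
import random

# A Monte-Carlo sketch of technology 1 as described above: danger sources
# penetrate the system, each activates after a random delay, and an
# unactivated source is removed at the next diagnostic. The exponential
# occurrence/activation times are assumptions, not the closed-form model
# of [3, 4]; the sketch only illustrates how P (and R = 1 - P) depend on
# the diagnostic period.

def p_correct_operation(t_req_h: float, t_betw_h: float, t_diag_h: float,
                        mean_occurrence_h: float, mean_activation_h: float,
                        trials: int = 50_000) -> float:
    """Probability that no penetrated source activates before its removal
    by a diagnostic within the prognostic period t_req_h."""
    period = t_betw_h + t_diag_h
    ok = 0
    for _ in range(trials):
        t, lost = 0.0, False
        while True:
            t += random.expovariate(1.0 / mean_occurrence_h)   # next penetration
            if t >= t_req_h:
                break
            activation = t + random.expovariate(1.0 / mean_activation_h)
            next_diag_end = (int(t // period) + 1) * period     # removal moment
            if activation < min(next_diag_end, t_req_h):
                lost = True                                     # integrity lost
                break
        ok += not lost
    return ok / trials

# Example: sources once a day, activation ~1 h, diagnostics every 8 h (0.5 h long):
# print(p_correct_operation(t_req_h=30 * 24, t_betw_h=8.0, t_diag_h=0.5,
#                           mean_occurrence_h=24.0, mean_activation_h=1.0))
```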

4.2. Integration of probabilistic models for complex structures

The main output of integration modeling is the probability of correct system operation or the risk of losing system integrity during the given period of time. If the probabilities are computed for all points Treq. from 0 to ∞, the result is a trajectory of the PDF depending on the characteristics of threats, periodic control, monitoring and recovery. Building this PDF is the real basis for predicting the metrics P and R for a given time Treq.. By analogy with reliability, it is important to know the mean time between neighboring losses of integrity, like the mean time between failures (MTBF) in reliability, but in application to quality, safety, etc.
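For a non-negative random time between losses of integrity with distribution function B(t), this mean time is obtained in the standard way:

```latex
\mathrm{MTBF} = \int_{0}^{\infty} \bigl(1 - B(t)\bigr)\, dt .
```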

For complex systems with parallel or series structure, existing models with known PDFs can be combined by the usual methods of probability theory. Let us consider the elementary structure of two independent parallel or series elements. Let the PDF of time between losses of integrity of the ith element be Bi(t), that is, Bi(t) = P(τi ≤ t); then:

1. The time between losses of integrity for a system combined from series-connected independent elements is equal to the minimum of the two times τi: failure of the first or the second element (i.e., the system goes into a state of lost integrity when either the first or the second element loses integrity). For this case the PDF of time between losses of system integrity is defined by the expression:

B(t) = P(min(τ1, τ2) ≤ t) = 1 − P(min(τ1, τ2) > t) = 1 − P(τ1 > t)·P(τ2 > t) = 1 − [1 − B1(t)]·[1 − B2(t)]. (1)

2. The time between losses of integrity for a system combined from parallel-connected independent elements (hot reservation) is equal to the maximum of the two times τi: failure of both the first and the second element (i.e., the system goes into a state of lost integrity when both the first and the second element have lost integrity). For this case the PDF of time between losses of system integrity is defined by the expression:

B(t) = P(max(τ1, τ2) ≤ t) = P(τ1 ≤ t)·P(τ2 ≤ t) = B1(t)·B2(t). (2)

Applying expressions (1) and (2) recurrently, it is possible to build the PDF of time between losses of integrity for any complex system with parallel and/or series structure.
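A minimal sketch of this recurrent application in code, with element PDFs represented as ordinary functions; the exponential element PDFs in the usage example are an assumption made only for illustration.

```python
import math
from typing import Callable

# Recurrent application of expressions (1) and (2): element PDFs
# B_i(t) = P(tau_i <= t) are ordinary functions, and series() / parallel()
# return the PDF of the combined structure.

PDF = Callable[[float], float]

def series(*elements: PDF) -> PDF:
    """Expression (1): integrity is lost when ANY element loses it."""
    def B(t: float) -> float:
        keep = 1.0
        for Bi in elements:
            keep *= 1.0 - Bi(t)
        return 1.0 - keep
    return B

def parallel(*elements: PDF) -> PDF:
    """Expression (2): integrity is lost only when ALL elements lose it."""
    def B(t: float) -> float:
        lose = 1.0
        for Bi in elements:
            lose *= Bi(t)
        return lose
    return B

def exponential(mean_h: float) -> PDF:
    """An assumed exponential PDF of time between losses of integrity."""
    return lambda t: 1.0 - math.exp(-t / mean_h)

# Two redundant elements (hot reservation) in series with a third element:
B_system = series(parallel(exponential(1000.0), exponential(1000.0)),
                  exponential(5000.0))
print(B_system(720.0))   # probability of losing system integrity within 720 h
```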

All these ideas for the analytical modeling of operation processes are supported by the software tools "Mathematical modeling of system life cycle processes" ("know-how" registered by Rospatent №2004610858), "Complex for evaluating quality of production processes" (registered by Rospatent №2010614145) and others [1–4].

5. Optimization

By using the models and software tools above, the problems of optimization for an element, subsystem and system can be solved through calculating the probabilities of success or failure during a given period on the timeline. This approach considers the threats, the conditions of counteraction and the given admissible risk established by the precedent principle. Thus, the final choice of integrated measures is left to the customer, in view of the specificity of the created or maintained system.

For example, the next general formal statements of problems for optimization can be used [6]:

1. For the stages of system concept, development, production and support: system parameters and technical and management measures, presented in terms of the time characteristics of threats, of control and/or monitoring of conditions and of comprehensible recovery of lost integrity, are the most rational for the given period if the minimum of expenses for system creation is reached under limitations on the admissible level of risk to lose integrity and/or the probability of an admissible level of quality, and on expenses for operation, under the given development, operation and maintenance conditions.

2. At the operation stage: system parameters and technical and management measures, presented in the same terms, are the most rational for the given period of operation if the minimum of the risk of system integrity loss is reached under limitations on the admissible level of risk and/or the probability of an admissible level of quality, and on expenses for operation, under the given operation and maintenance conditions.

The combination of these formal statements also can be used in the system’s life cycle.
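As an illustration of the second statement, the sketch below chooses the cheapest period between diagnostics whose predicted risk stays within an admissible level; the analytic risk expression (Poisson penetrations, exponential activation, penetration moment uniform within a diagnostic interval) and every numeric value are assumptions for illustration, not the authors' model.

```python
import math

# Sketch: among candidate periods between diagnostics, choose the cheapest
# one whose predicted risk of integrity loss over the prognostic period
# stays within an admissible level.

T_REQ_H = 24.0 * 365.0        # prognostic period: one year
LAMBDA = 1.0 / (30.0 * 24.0)  # danger sources penetrate about once a month (assumed)
BETA = 100.0                  # assumed mean activation time after penetration, hours
T_DIAG = 0.5                  # assumed duration of one diagnostic, hours
C_DIAG = 100.0                # assumed cost of one diagnostic, conventional units
ADMISSIBLE_RISK = 0.3         # assumed admissible risk for the prognostic period

def risk(t_betw: float) -> float:
    """Risk of losing integrity within T_REQ_H with diagnostics every t_betw hours."""
    period = t_betw + T_DIAG
    # P(a penetrated source activates before its removal at the end of the interval):
    q = 1.0 - (BETA / period) * (1.0 - math.exp(-period / BETA))
    return 1.0 - math.exp(-LAMBDA * T_REQ_H * q)

def cost(t_betw: float) -> float:
    """Yearly cost of diagnostics alone (recovery costs omitted for brevity)."""
    return C_DIAG * T_REQ_H / (t_betw + T_DIAG)

candidates = [k / 10.0 for k in range(1, 500)]               # 0.1 ... 49.9 hours
feasible = [t for t in candidates if risk(t) <= ADMISSIBLE_RISK]
best = min(feasible, key=cost) if feasible else None
if best is not None:
    print(f"t_betw = {best} h, risk = {risk(best):.3f}, cost = {cost(best):.0f} c.u.")
```

Since the cost of diagnostics decreases while the risk grows as diagnostics become rarer, the cheapest feasible candidate is simply the largest admissible period between diagnostics.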

The approach for using the developed models, methods and software tools to analyze and optimize system processes is illustrated in Figure 9.

Figure 9.

The approach to analyze and optimize system processes.

6. Examples for complex structures: quality prediction for manufacturing processes

A typical set of manufacturing processes of gas preparation equipment (GPE) at an enterprise includes:

  • processes connected with operation of entrance threads;

  • processes of low temperature gas separations;

  • the process of economical gas measurement;

  • processes of gas heating and reduction;

  • processes of candle and torch separation;

  • processes connected with storage and use of methanol;

  • processes connected with storage, supply and drainage dumps of the weathered condensate and diesel fuel;

  • managing processes in the engineering division;

  • managing processes in the manufacturing division;

  • managing processes in booster compressor station division;

  • managing processes in the administrative department.

Not to tire the attentive reader, we will not state the results of modeling for all processes: examples 3 and 4 give only the results for the processes connected with the operation of entrance threads and for the managing processes.

Example 3: It is required to predict the quality of the production processes and reliability of equipment connected with the operation of entrance threads.

Input data for modeling are formed as a result of the analysis of average statistical data and requirements for the production processes of the enterprise. The quality of each group of processes is estimated separately; then, the quality of production for the GPE as a whole is predicted. Let the average recovery time of each group of processes be equal to the duration of one shift, that is, 8 h. The predicted period is 1 month, 1 year and 5 years under observance of the set modes for the processes.

Note: For a pre-emergency condition, the input data can differ essentially; that will also cause a change in the modeling results.

For the solution, the models above are used. The results of modeling of the production processes connected with the operation of entrance threads are analyzed in Figure 10.

Figure 10.

Prediction of quality of production processes connected with operation of entrance threads.

Results of modeling: Owing to the timely recovery of technological and production processes as a result of periodic control, the mean time between failures (MTBF) affecting quality increases from 1361 to 20,431 h, that is, by 15 times. This is reached at the expense of timely reaction in process control. The integral probability of performing the processes connected with the operation of entrance threads with acceptable quality is 0.97 for a month of GPE operation, 0.70 for a year of GPE operation and 0.32 for 5 years of GPE operation. The last probability (0.32) means that one or more accidents or failures, requiring counter-emergency measures, may really occur during 5 years of GPE operation. The risk of this is about 0.68, that is, twice the probability of success.

And what about reliability? The maintenance and diagnostic measures are performed every half a year according to the recommendations of the equipment suppliers. How effective is this, for real operation conditions, at the level of predicted reliability?

The results of predicting the reliability of equipment connected with the operation of entrance threads are demonstrated in Figure 11. The expected integral MTBF is equal to 5770 h. It is 3.5 times less in comparison with the 20,431 h obtained owing to daily periodic control (see the earlier section).

Figure 11.

Predicted reliability of equipment connected with operation of entrance threads.

Summary: Accounting for the daily results of control and measurements is necessary. Otherwise, if one is guided only by the guarantee recommendations of the equipment suppliers, at least one accident or failure demanding counter-emergency measures of protection is possible annually, and over 5 years it is inevitable.

7. General estimation of predicted quality

Example 4: The next system question is very important: what is the benefit for the enterprise of "the prediction of complex quality" based on the probabilistic modeling of processes? Modeling allows one to compare the quality of various productions on a uniform scale, to establish levels of acceptable quality taking expenses into account, to identify "bottlenecks" in each of these processes and also to develop general and separate recommendations for process improvements. For example, the comparative results of modeling of the production processes are demonstrated in Table 2.

Processes: probability of providing acceptable quality during a year

1. Processes connected with operation of entrance threads: 0.70–0.90
2. Processes of low temperature gas separations: 0.62–0.87
3. Processes of economical gas measurement: 0.9999
4. Processes of gas heating and reduction: 0.94
5. Processes of candle and torch separation: 0.82
6. Processes connected with storage and use of methanol: 0.63
7. Processes connected with storage, supply and drainage dumps of the weathered condensate and diesel fuel: 0.60
8. Managing processes in the engineering division, manufacturing division, booster compressor station division and administrative department: 0.67

Table 2.

Comparative results of production processes modeling.

Thus, with other things being equal, a more complex structure of processes, as a rule, possesses more risks. It should be considered.

On the basis of the analysis of modeling results, numerous logical decisions should be made by enterprise management according to the criterion “quality-risks-cost.”

8. Examples of complex structures: modeling pipelines

Example 5: There is a system which consists of a 560-km pipeline for pumping liquefied natural gas across South American territory (the source of the modeling data is a technical report of one of the oil companies). The whole lay of the line is conventionally divided into three parts (subsystems) by service conditions: the first part goes through the jungle (200 km), the second through the mountains (300 km) and the third through the plains (60 km). The characteristics of the pipeline subsystems are presented in Table 3. It is assumed that the annual profit from the operation of the pipeline is 1,500,000 conventional units of account per year in the first 5 years and 2,500,000 per year afterwards. It is required to predict the risks, taking into account the profits and the estimated costs (in conventional units of account) for the construction and maintenance of the various sections of the pipeline, for 10 and 50 years of its operation.

Characteristics: part through the jungle (200 km) / part through the mountains (300 km) / part through the plains (60 km)

The frequency of potentially hazardous impacts per 100 km of the line (technical, natural, human or criminal, etc.): 15 times a year / 10 times a year / 50 times a year
The period between controls of the integrity of the area: 1 month / 1 month / 1 week
The mean time to failure of monitoring tools in the area (existing / prospective monitoring tools): 1 day / 1 year for each part
The resistance of the areas (the average time of preserving integrity) to dangerous influences, estimated statistically and by comparison with analogues: 228.1 days / 331.8 days / 1217 days
The average cost of construction and maintenance of the area, per 1 km: 1000 c.u. per year / 2000 c.u. per year / 200 c.u. per year
Average recovery time of pipeline integrity after occurrence of a fault: 10 days (all parts)

Table 3.

Characteristics of hazards, measures of control, monitoring and maintaining pipeline integrity.

The solution of the problem: The traditional approach to risk analysis is limited to obtaining the values of the frequency of potential hazard impacts on a 100-km lay of the line—see the first row of characteristics in the table. The proposed solution allows not only obtaining risks from frequencies but also shows how security will change as a result of management. Traditional approaches give no feel for the effectiveness of the measures taken for control, monitoring and maintaining integrity. Measures should not merely seem effective but should be really effective! It is necessary to understand their influence on the resulting security. A correct understanding of the possible impact on safety of the measures of control, monitoring and maintaining integrity allows rational management of their parameters. The proposed approach uses these and the other data of Table 3 as input data for subsequent mathematical modeling using the models of Section 4. The results of predictive modeling for 10 and 50 years showed the following (see Figure 12).

Figure 12.

Predicted risks taking into account monitoring possibilities.

As a result of applying the technologies that had been developed in 2008, the average achievable time of safe operation is approximately 3000–5000 h. The mean time to failure in the jungle is 5767–8745 h; in the mountains it is 8255–12,676 h; and on the plains it is 29,500–122,145 h. Note that the upper estimates correspond to the systematic maintenance of pipeline integrity (when all failures and critical areas with potential danger are identified) every month in the jungle and in the mountains and weekly on the plains. The states of the subsystems are tracked mainly on the days of control. The analysis of the calculation results shows that systematic monitoring allows one to increase the safety of operation of the pipeline in the jungle and in the mountains by 1.5 times and on the plains by 4 times, but over the whole 560-km stretch of the pipeline by 1.6 times! This is real pre-emptive work compared with the case of the absence of any control, when troubleshooting happens only after an accident that cannot be overlooked. It is assumed that operative repair restoring integrity follows immediately after failure detection.
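A back-of-the-envelope sketch of how the per-part characteristics of Table 3 can be combined as a series structure (expression (1)); the exponential times between integrity losses, with means equal to the stated "resistance" values, are an assumption, and control, monitoring and recovery are ignored, so the sketch characterizes the unprotected case rather than reproducing Figure 12.

```python
import math

# The three pipeline parts as series-connected elements (expression (1))
# with assumed exponential times between integrity losses whose means equal
# the "resistance" values of Table 3. No control, monitoring or recovery.

RESISTANCE_DAYS = {"jungle": 228.1, "mountains": 331.8, "plains": 1217.0}

def part_loss_pdf(mean_days: float, t_days: float) -> float:
    """B_i(t): probability that one part loses integrity by time t."""
    return 1.0 - math.exp(-t_days / mean_days)

def system_loss_pdf(t_days: float) -> float:
    """Expression (1): at least one part loses integrity by time t."""
    keep = 1.0
    for mean in RESISTANCE_DAYS.values():
        keep *= 1.0 - part_loss_pdf(mean, t_days)
    return 1.0 - keep

# Mean time between losses for the unprotected series structure, days:
mtbf_days = 1.0 / sum(1.0 / m for m in RESISTANCE_DAYS.values())
print(f"series MTBF ~ {mtbf_days:.0f} days; "
      f"risk over 30 days = {system_loss_pdf(30.0):.2f}")
```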

Promising technologies will implement continuous monitoring of the pipeline at any point. For example, it may be a scan of the air situation using the electronic locators of fourth- and fifth-generation fighters with a smart cover. Similarly, intelligent filling of the pipeline will signal dangers, with the relevant coordinates and a diagnosis. If the location and cause of a potential failure are known, restoration of integrity becomes a routine "matter of technique." Under these conditions the real mean time of safe operation of the pipeline will be about 165,000 h, which is achieved when the mean time to failure of monitoring tools is about a year. The mean time to failure in the jungle will be more than 280,000 h, in the mountains more than 400,000 h and on the plains more than 18 million h (as in the engines of space vehicles!). The analysis of the calculation results shows that a rational frequency of periodic control, continuous monitoring and prompt removal of detected faults increase security by 33–54% compared to the existing technology.

The results obtained show clearly the following:

  • for the existing technologies, the security risks for 10 years constitute 0.95–0.97 (which means that a number of accidents seem almost inevitable: in the jungle with a probability of 0.91–0.94, in the mountains with a probability of 0.88–0.92 and on the plains with a probability of 0.42–0.75); in 50 years, the risk exceeds 0.99 (dozens of accidents: in the jungle with a probability of 0.98–0.99, in the mountains with a probability of 0.97–0.98 and on the plains with a probability of 0.78–0.94);

  • for the promising technologies, the security risks in 10 years constitute 0.35 (i.e., over some 10 years accidents can practically be avoided: in the jungle an accident will be possible with a probability of 0.24, in the mountains with a probability of 0.18 and on the plains with a probability of 0.005); in 50 years the risk is 0.73 (2–4 crashes in 50 years: in the jungle with a probability of 0.61, in the mountains with a probability of 0.52 and on the plains with a probability of 0.02);

  • the cost in 10 years will be 8,012,000 c.u. and over 50 years 40,060,000 c.u.; moreover, per kilometer, the costs of the pipeline section in the mountains are twice the costs in the jungle and an order of magnitude more than those on the plains;

  • the approximate profit of the pipeline owner, less costs and without adjustment for inflation, is 11,988,000 c.u. in 10 years, which exceeds the costs by one and a half times, and 79,940,000 c.u. in 50 years, which is double the costs. Moreover, the expenditure will produce returns in less than a year. That means that when using promising technologies the quantity of accidents can be reduced substantially, and the remaining accidents will happen either in the jungle or in the mountains. It is quite a profitable and secure project. It must be admitted that the level of security obtained—risks of 0.35 in 10 years and 0.73 in 50 years—can be considered as a normative "acceptable" level.

Thus, the examples of forecasting the security of operation of the pipelines have illustrated the ability to manage risk proactively. The effectiveness lies not only in using the universal models but also in the justification of the necessary system requirements for new materials (pipes should be intelligent, with the ability of continuous monitoring and a mean time to failure of at least a year) and for technologies of restoring functional integrity, and in minimizing risks on the basis of the parameters of the processes of control, monitoring and restoration, even before promising technologies have appeared! It is therefore proposed to manage the risks for the pipelines of the future even before their creation and, based on this, to justify the technical requirements for the system and its components.

Summary for Example 5:

1. Rational control, continuous monitoring and prompt elimination of revealed accidents and failures allow one to increase safety tens of times compared to the lack of systematic control and monitoring!

2. With the use of advanced technology, accidents and failures on the plains can be virtually excluded, and in the jungle and the mountains their number can be reduced many times.

9. Examples of modeling sea gas-and-oil producing system (GOPS) processes

There are many standards used in the oil and gas industry (ISO 10418 “Basic surface safety systems”, ISO 13702 “Control & mitigation of fire & explosion”, ISO 14224 “Reliability/maintenance data”, ISO 15544 “Emergency response”, ISO 15663 “Life cycle costing”, ISO 17776 “Assessment of hazardous situations” etc.), but they focus on technical aspects and do not consider terrorist threats.

The principal difference of a GOPS consists in the fact that safety problems should be resolved at sea, because long distances from the shore and probable ice conditions in northern regions exclude any help from outside—see Figure 13.

Figure 13.

Some explanation of conditions for examples 6 and 7.

Oil and gas are usually produced on stationary steel and concrete platforms located up to 200 km from the shore at depths from several dozen to several hundred meters. There are nearly 5000 sea platforms dispersed all over the world, and dozens of thousands of oil wells are drilled from these platforms. Produced oil is delivered to the buyers by tankers or directly through pipelines.

Produced gas goes to liquefied natural gas terminals before transportation. After liquefaction its volume is reduced by 600 times, which makes its transportation profitable. Statistics show that during the time of sea field development, emergencies are distributed in the following way [10, 11]: drilling—32% (including 23% at survey and 9% at production drilling); gas and oil production—19%; ship collisions and towing of floating drilling rigs and blocks for platform construction—14%; storms—11%; delivery of floating drilling rigs to the point of drilling—6%; and other kinds of works—18%.

A policy concerning sea GOPS safety includes accident prevention, the drawing up of plans for the liquidation of failure consequences and the actions taken in case of emergencies. Special brigades are formed and trained to prevent and liquidate failure consequences. High-quality materials and pipes and the application of computer diagnostics for pipe integrity monitoring provide the safety of GOPS operation.

All safety measures undertaken nowadays provide system protection from inexperienced personnel (according to statistics, about 80% of all failures are connected with the human factor) or from natural causes and "cataclysms" of an unpremeditated character. However, the attitude to safety changes cardinally in the case of terrorist threats, because terrorist actions are malicious and aimed at damaging the system through its vulnerable "bottlenecks." As a result, the existing risks of system safety violation grow essentially.

Examples 6 and 7 are devoted to modeling the processes of possible terrorist influence and GOPS safety provision (including platforms, coastal technological complexes with terminals for floating storage and offloading, liquefied natural gas terminals, pipelines and tubing stations) and to obtaining quantitative evaluations of their vulnerability in various scenarios.

Example 6: This example concerns an estimation of the effectiveness of a safety monitoring system for a sea GOPS. Before starting the analysis of possible terrorist threats, let us consider the basic dangers that can arise on sea platforms in case of failures. They are explosions of fuel-air mixture clouds; generation and burning of fireballs; oil spills and burning; separation and spread of parts of technological equipment; and others. Each of these dangers can aggravate the consequences of failures, that is, lead to the "dominoes" effect. To control risks, the following measures are taken: application of safe technologies; measures preventing dangerous situations; application of systems providing early detection of emergencies; control over the operating parameters of the technological process, with signaling and notification about emergencies; measures directed at the mitigation of emergency consequences; and preparation of the platform staff to react immediately.

The analysis shows that the basic preventive mechanism of risk reduction is safety monitoring in various variations of its application. Let us estimate commonly used safety technologies, technology 1 (periodical diagnostics of system integrity without the continuous monitoring between diagnostics) and technology 2 (continuous monitoring between periodical diagnostics is added to technology 1)—see Section 4.

Let’s estimate efficiency of sea GOPS safety technologies used in the case of emergencies for dozens of years. Thus we take into account that the basis of safety systems consists of automatic facilities’ mean time where failures of which are estimated for several years.

To form inputs for probabilistic modeling, the arising of the basic dangers for sea platforms in the case of failures is considered. In general, one of the abovementioned technologies provides the safety of the GOPS components (platforms, coastal technological complexes including floating storage and offloading terminals, liquefied natural gas terminals, pipelines, tubing stations). The scenario of emergency development provides a frequency of danger source appearance equal to 1 time per 24 h, with a mean activation time of about an hour. The time between the termination of the previous and the beginning of the next diagnostics, taking into account the recovery of broken integrity, is 2 h. Let us suppose that monitoring is performed by automatic means tracking the integrity of system components. Such means may include systems of fire and gas detection, systems of water fire-fighting and foaming, circled fire mains, systems of platform irrigation, pressure relief systems, emergency shutdown systems, various locking devices and so on. Let the mean time between failures of these means be not less than 2 years.

It is necessary to estimate the safety of the sea platform operation in such scenarios within several hours, a day, several weeks and a month.

The integrated results of the calculations prove that with technology 1, the required safety is provided only for several working days, which is inadmissible in practice. If the most effective technology, technology 2, is realized, the probability that a dangerous influence does not occur within 24 h is above 0.99997; that is, the probability of an emergency is about 0.00003. At the same time, for the provision of the required safety within a month in conditions of daily failure danger, this risk increases up to 0.001, which also appears to be a practically admissible result.
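The monthly figure is consistent with the daily one under the simple assumption of independent days:

```latex
1 - (1 - 0.00003)^{30} \approx 0.0009 \approx 0.001 .
```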

Summary for example 6: The effectiveness of the existing safety systems of sea GOPSs appears to be sufficient or even high if the frequency of danger source appearance is about once a day. The high level of GOPS protection in emergencies is mainly provided by the application of approved automatic safety technologies.

Against the background of proven measures of counteraction to sources of emergency, the situation concerning the struggle against terrorist threats appears to be cardinally worse, because this problem is still at the initial stage. Other things being equal, let us estimate the expected protection of a sea GOPS from terrorist threats for different abilities of a security service operator to reveal suspicious actions and objects which can be a means of realizing terrorist purposes.

Example 7: Let the deliberately formalized conditions of a terrorist influence scenario be similar to the emergency dangers in example 6. Let us suppose that for providing protection of sea GOPS platforms from terrorist threats, either of the protecting technologies is used. Let the scenario of the potentially dangerous influence of terrorists provide a frequency of danger source appearance from the air, from the water surface or from under water equal to 1 time per 24 h, with the mean time of activation after penetration onto a platform equal to 1 h. The time between the termination of the previous and the beginning of the following diagnostics, taking into consideration the recovery of broken integrity, is 2 h. Let us assume the mean continuous time of potentially faultless operators' work in each shift to be 6 hours. It is required to evaluate the safety of the sea platform operation in such scenarios within several hours, days, weeks and a month.

The integrated calculation results prove that without any additional protection the system remains safe with a probability of 0.9 only within 2–3 h. This is explained by the comparative rarity of danger source appearance. If the first technology of counteraction to terrorists is applied (this technology exists on most platforms and implies visual tracking of the air situation during working hours), the result is practically equivalent to the case when there are no measures of counteraction to terrorists on the platform at all.

If the most effective second technology is used, the probability of preserving GOPS integrity within a day is more than 0.92; that is, the risk of latent introduction of a terrorist danger source into the system and of the overcoming of all technological protection barriers preventing the realization of terrorist threats in the conceived volume approximates 0.08. At the same time, for the provision of the required safety within a month in conditions of the daily danger of a sudden terrorist attack, this risk runs up to 0.93. The main cause of this is the insufficient preparedness of operators to recognize terrorist threats against the background of other technical threats. That is why it is necessary to increase the mean time between errors of a safety service operator to tens and hundreds of hours, which requires the creation of special "smart subsystems" to support operator functions (radar-tracking, optical, acoustic, electromagnetic means, etc.). Compare the results of examples 6 and 7.

Pragmatic interpretation of the example 7 results: If the characteristics of growing terrorist dangers are similar to the characteristics of emergency danger, the risks of the realization of terrorist threats in the conceived volume are incommensurably higher. Owing to the insufficient preparedness and technical equipment of operators for the timely and valid recognition of terrorist threats against the background of other technical threats, a variety of GOPSs are completely helpless in the face of growing terrorist dangers.

10. Instead of conclusion

The presented probabilistic approaches allow us to research different problems of providing safe and effective development of hydrocarbon deposits and rational operation of oil and gas systems. Their application in the system's life cycle helps to answer the main question: "What rational measures should lead to expected effects without wasted expenses, when, by which controllable and uncontrollable conditions and costs?" The efficiency of implementation is commensurable with the expenses for system creation.

Probabilistic modeling and comprehensive, systematic studies on the competitiveness of OGS have been carried out at the Gubkin Russian State University of Oil and Gas (National Research University) over a period of several years. These researches concern the most important economic branch, which largely determines the country's energy security and efficiency. Certainly, the integral view of OGS competitiveness seamlessly includes the most important components of quality, safety, energy efficiency, environmental compatibility, economic aspects and so on. In turn, these components are also complex and integral and affect a wide range of activities. Moreover, competitiveness as a complex integral metric characterizes the studied systems and objects (as living organisms) that have the property of changing over their lifetime. The proposed probabilistic models and methods are widely used in the practice of education and research.

References

  1. Kostogryzov AI, Stepanov PV. Innovative Management of Quality and Risks in Systems Life Cycle. Moscow: APC; 2008. 404 p
  2. Grigoriev LI, Kershenbaum VY, Kostogryzov AI. System Foundations of the Management of Competitiveness in Oil and Gas Complex. Moscow: National Institute of Oil and Gas; 2010. 374 p
  3. Kolowrocki K, Soszynska-Budny J. Reliability and Safety of Complex Technical Systems and Processes. Springer-Verlag London Limited; 2011. 405 p. DOI: 10.1007/978-0-85729-694-8
  4. Kostogryzov A et al. Mathematical models and applicable technologies to forecast, analyze and optimize quality and risks for complex systems. Proceedings of the 1st International Conference on Transportation Information and Safety (ICTIS), June 30–July 2, 2011, Wuhan, China. 2011. pp. 845-854
  5. Kostogryzov A, Nistratov A, Nistratov G. Some Applicable Methods to Analyze and Optimize System Processes in Quality Management. InTech; 2012. ISBN 979-953-307-778-8. http://www.intechopen.com/books/total-quality-management-and-six-sigma/some-applicable-methods-to-analyze-and-optimize-system-processes-in-quality-management
  6. Kostogryzov A, Grigoriev L, Nistratov G, Nistratov A, Krylov V. Prediction and optimization of system quality and risks on the base of modelling processes. American Journal of Operations Research. 2013;3:217-244. DOI: 10.4236/ajor.2013.31A021. http://www.scirp.org/journal/ajor
  7. Grigoriev L, Kostogryzov A, Tupysev A. Automated dispatch control: problems and details of modeling. Proceedings of the IFAC Conference on Manufacturing Modelling, Management and Control. Saint Petersburg, Russia; June 19-21, 2013. pp. 1157-1161
  8. Kostogryzov A, Nistratov A, Nistratov G. The innovative probability models and software technologies of risks prediction for systems operating in various fields. International Journal of Engineering and Innovative Technology (IJEIT). September 2013;3(3):146-155. http://www.ijeit.com/archive.php
  9. Demyanov VV, Savelyeva EA. Geostatistics: Theory and Practice. Under the editorship of R.V. Arutyunyan; Nuclear Safety Institute of the Russian Academy of Sciences. The Science; 2010. 327 p
  10. Leonid G, Chingiz G, Vsevolod K, Andrey K. The methodological approach, based on the risks analysis and optimization, to research variants for developing hydrocarbon deposits of Arctic regions. Journal of Polish Safety and Reliability Association. Summer Safety and Reliability Seminars. 2014;5(1-2):71-78. http://jpsra.am.gdynia.pl/archives/jpsra-2014-contents/
  11. Guseinov C, Tagiev R. The Safety Fundamentals to Design the Objects for Development to Commercial Level of the Production Capacity of a Hydro-Carbon Deposit on a Shelf of Arctic Seas. Manual book. Moscow University of Oil and Gas; 2001
