Open access peer-reviewed chapter

Applications of Artificial Intelligence Techniques in Optimizing Drilling

Written By

Mohammadreza Koopialipoor and Amin Noorbakhsh

Submitted: 30 November 2018 Reviewed: 24 February 2019 Published: 15 January 2020

DOI: 10.5772/intechopen.85398

From the Edited Volume

Emerging Trends in Mechatronics

Edited by Aydin Azizi


Abstract

Artificial intelligence has transformed industrial operations. One of its important applications is reducing the computational cost of optimization. A variety of algorithms have been presented and investigated, each relying on its own set of assumptions to solve problems. In this chapter, the concept of optimization is first explained in full. Then, an artificial bee colony (ABC) algorithm is applied to a case study in the drilling industry; this algorithm optimizes the problem of study in combination with ANN modeling. Finally, various models are developed and discussed in detail. The results of the algorithm show that, by better understanding the drilling data, drilling conditions can be improved.

Keywords

  • optimization
  • ROP
  • ABC algorithm
  • prediction
  • ANN

1. Introduction

Optimization is the process of setting the values of decision variables in such a way that the objective in question is optimized. The optimal solution is a set of decision variables that maximizes or minimizes the objective function while satisfying the constraints; in other words, it is obtained when the corresponding values of the decision variables yield the best value of the objective function while satisfying all the model constraints.

Apart from the gradient-based optimization methods, some new optimization methods have also been proposed that help solve complex problems. In the available classifications, these methods are recognized as “intelligent optimization,” “optimization and evolutionary computing,” or “intelligent search.” One of the advantages of these algorithms is that they can find the optimal point without any need to use objective function derivatives. Moreover, compared to the gradient-based methods, they are less likely to be trapped in local optima.

Optimization algorithms are classified into two types: exact algorithms and approximate algorithms. Exact algorithms are capable of precisely finding optimal solutions, but they are not applicable for complicated optimization problems, and their solution time increases exponentially in such problems. Approximate algorithms can find close-to-optimal solutions for difficult optimization problems within a short period of time [1].

There are two types of approximate algorithms: heuristics and metaheuristics. The two main shortcomings of heuristic algorithms are (1) a high likelihood of being trapped in local optima and (2) performance degradation in practical applications on complex problems. Metaheuristic algorithms were introduced to eliminate these problems. In fact, metaheuristic algorithms are approximate optimization algorithms that employ specific mechanisms to escape local optima and can be applied to an extensive range of optimization problems.


2. Methodology

2.1 Optimization model

The decision-making process consists of three steps: problem formulation, problem modeling, and problem optimization. A variety of optimization models are actually applied to formulate and solve decision-making problems (Figure 1). The most successful models used in this regard include mathematical programming and constraint programming models.

Figure 1.

Optimization models.

2.2 Optimization method

The optimization methods are presented in Figure 2. Depending on the complexity of the problem, exact or approximate methods are used to solve it. Exact methods provide optimal solutions and guarantee their optimality. Approximate methods yield good, near-optimal solutions, but they do not guarantee optimality.

Figure 2.

Optimization methods.


3. Theoretical foundations

3.1 Theoretical foundations of optimization

Any problem in the real world has the potential to be formulated as an optimization problem. Generally, any optimization problem with an explicit objective can be expressed as a nonlinearly constrained optimization problem, as presented in Eq. (1).

$$
\max \text{ or } \min\; f(\mathbf{x}), \qquad \mathbf{x} = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n
$$
$$
\text{subject to } \phi_j(\mathbf{x}) = 0, \quad j = 1, 2, \ldots, M;
$$
$$
\psi_k(\mathbf{x}) \le 0, \quad k = 1, 2, \ldots, N \tag{1}
$$

where $f(\mathbf{x})$, $\phi_j(\mathbf{x})$, and $\psi_k(\mathbf{x})$ are scalar functions of the column vector $\mathbf{x}$. The elements $x_i$ of the vector $\mathbf{x}$ are the design variables, or decision variables, and can be continuous, discrete, or a mix of the two. The vector $\mathbf{x}$ is often referred to as the decision vector, which varies in an n-dimensional space $\mathbb{R}^n$. The function $f(\mathbf{x})$ is called the objective function or the energy function; it is called the cost function in minimization problems and the fitness function in maximization problems. Moreover, $\phi_j(\mathbf{x})$ are constraints in terms of M equalities, and $\psi_k(\mathbf{x})$ are constraints in terms of N inequalities, so that, in general, there are a total of M + N constraints. The space spanned by the decision variables is known as the search space, and the space spanned by the objective function values is called the solution space. The optimization problem maps the search space onto the solution space.

3.1.1 Norms

For a vector $\mathbf{v}$, the p-norm is denoted by $\|\mathbf{v}\|_p$ and defined as Eq. (2).

$$
\|\mathbf{v}\|_p = \left( \sum_{i=1}^{n} |v_i|^p \right)^{1/p} \tag{2}
$$

where p is a positive integer. From this definition, one can see that a p-norm satisfies $\|\mathbf{v}\| \ge 0$ for all $\mathbf{v}$, with $\|\mathbf{v}\| = 0$ if and only if $\mathbf{v} = 0$; this expresses the nonnegativity of the p-norm. In addition, for any real number $\alpha$, the scaling condition $\|\alpha\mathbf{v}\| = |\alpha|\,\|\mathbf{v}\|$ holds. The three most commonly used norms are the 1-, 2-, and infinity norms, obtained when p equals 1, 2, and ∞, respectively.
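As a quick numerical check of Eq. (2), the snippet below (a sketch using NumPy; the vector is an arbitrary example) evaluates the three common norms directly from the definition and compares them with NumPy's built-in `norm`:

```python
import numpy as np

v = np.array([3.0, -4.0, 12.0])

# 1-norm: sum of absolute values
print(np.sum(np.abs(v)))        # 19.0
# 2-norm (Euclidean): square root of the sum of squares
print(np.sqrt(np.sum(v ** 2)))  # 13.0
# infinity norm: the limit of Eq. (2) as p grows, i.e., max |v_i|
print(np.max(np.abs(v)))        # 12.0

# NumPy's built-in norm agrees with the definition in Eq. (2)
for p in (1, 2, np.inf):
    print(p, np.linalg.norm(v, ord=p))
```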

3.1.2 Eigenvalues and eigenvectors

The eigenvalues and eigenvectors of a square matrix $A_{n \times n}$ are defined by Eq. (3).

$$
(A - \lambda I)\,\mathbf{u} = 0 \tag{3}
$$

where I is the identity matrix of the same size as A. All the nontrivial solutions are obtained from Eq. (4).

$$
\det(A - \lambda I) =
\begin{vmatrix}
a_{11} - \lambda & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} - \lambda & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} - \lambda
\end{vmatrix} = 0 \tag{4}
$$

which can be written as a polynomial in the form of Eq. (5).

$$
\lambda^n + \alpha_{n-1}\lambda^{n-1} + \cdots + \alpha_1\lambda + \alpha_0 = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_n) = 0 \tag{5}
$$

where $\lambda_i$ are the eigenvalues, which may be complex numbers. For each eigenvalue $\lambda_i$, there is a corresponding eigenvector $\mathbf{u}_i$ whose direction can be defined uniquely; its length, however, is not unique, since any nonzero multiple of $\mathbf{u}$ also satisfies Eq. (3) and can thus be considered an eigenvector.

3.1.3 Spectral radius of the matrix

The spectral radius of a square matrix is another important concept associated with the eigenvalues of matrices. Assuming that $\lambda_i$ are the eigenvalues of the square matrix A, the spectral radius $\rho(A)$ of the matrix is defined as Eq. (6).

$$
\rho(A) = \max_i |\lambda_i| \tag{6}
$$

which is equal to the maximum absolute value of all the eigenvalues. Geometrically speaking, if we draw all the eigenvalues of matrix A on the complex plane and then draw a circle that encloses all of them, the minimum radius of such a circle is the spectral radius. The spectral radius is useful in determining the stability or instability of iterative algorithms.
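The following short sketch (illustrative values; NumPy is assumed) verifies Eqs. (3) and (6) numerically: it computes the eigenpairs of a small matrix, checks the defining relation Au = λu, and evaluates the spectral radius:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Numerical solution of (A - lambda*I) u = 0, i.e., Eq. (3)
eigvals, eigvecs = np.linalg.eig(A)

# Each column of eigvecs is an eigenvector u_i for eigenvalue lambda_i;
# A @ u_i must equal lambda_i * u_i up to floating-point error.
for lam, u in zip(eigvals, eigvecs.T):
    print(lam, np.allclose(A @ u, lam * u))

# Spectral radius, Eq. (6): the maximum absolute value of the eigenvalues
print("spectral radius:", np.max(np.abs(eigvals)))
```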

3.1.4 Hessian matrix

The gradient vector of a multivariate function f (x) is defined according to Eq. (7),

$$
G_1(\mathbf{x}) \equiv \nabla f(\mathbf{x}) = \left( \partial f/\partial x_1,\; \partial f/\partial x_2,\; \ldots,\; \partial f/\partial x_n \right)^T \tag{7}
$$

where $\mathbf{x} = (x_1, x_2, \ldots, x_n)^T$ is a vector. For a linear function $f(\mathbf{x})$, the gradient is a constant vector $\mathbf{k}$, and the linear function can be written as Eq. (8).

$$
f(\mathbf{x}) = \mathbf{k}^T\mathbf{x} + b \tag{8}
$$

where b is a constant.

The second derivative of a general function f(x) is an n × n matrix called the Hessian matrix,

$$
G_2(\mathbf{x}) \equiv \nabla^2 f(\mathbf{x}) =
\begin{pmatrix}
\dfrac{\partial^2 f}{\partial x_1^2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n} \\
\vdots & \ddots & \vdots \\
\dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
\end{pmatrix} \tag{9}
$$
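Since gradient-free metaheuristics are contrasted with gradient-based methods throughout this chapter, it is worth seeing how Eqs. (7) and (9) can be approximated numerically. The sketch below uses central finite differences; the test function and step sizes are illustrative choices, not from the chapter:

```python
import numpy as np

def grad(f, x, h=1e-5):
    """Central-difference approximation of the gradient vector, Eq. (7)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian(f, x, h=1e-4):
    """Central-difference approximation of the Hessian matrix, Eq. (9)."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

# Example: f(x) = x0^2 + 3*x0*x1 + 2*x1^2
f = lambda x: x[0] ** 2 + 3 * x[0] * x[1] + 2 * x[1] ** 2
x0 = np.array([1.0, 2.0])
print(grad(f, x0))     # analytic gradient at (1, 2): [8, 11]
print(hessian(f, x0))  # analytic Hessian: [[2, 3], [3, 4]]
```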

3.1.5 Convexity

Nonlinear programming problems are usually classified according to the convexity of their defining functions. Geometrically speaking, an object is convex when, for any two points within the object, every point on the straight line segment connecting them also lies within the object (Figure 3). Mathematically, a set $S \subset \mathbb{R}^n$ in a real vector space is called a convex set when Eq. (10) holds.

Figure 3.

Convex object (a) and nonconvex object (b).

$$
t\mathbf{x} + (1 - t)\mathbf{y} \in S, \quad \forall\, \mathbf{x}, \mathbf{y} \in S,\; t \in [0, 1] \tag{10}
$$

A function $f(\mathbf{x})$ defined on the convex set Ω is called convex if and only if:

$$
f(\alpha\mathbf{x} + \beta\mathbf{y}) \le \alpha f(\mathbf{x}) + \beta f(\mathbf{y}), \quad \forall\, \mathbf{x}, \mathbf{y} \in \Omega,\; \alpha \ge 0,\ \beta \ge 0,\ \alpha + \beta = 1 \tag{11}
$$

An interesting feature of a convex function $f$ is that any point $\mathbf{x}^*$ at which the gradient vanishes, $\left.\nabla f\right|_{\mathbf{x} = \mathbf{x}^*} = 0$, is guaranteed to be a global (absolute) minimum of $f$.

3.1.6 Optimality criteria

Mathematical programming includes several concepts. Here, we will first introduce three related concepts: feasible solution, strong local maximum, and weak local maximum.

A point x that satisfies all the constraints of the problem is called a feasible solution. The set of all feasible points forms the feasible region.

A point $\mathbf{x}^*$ is a strong local maximum if $f(\mathbf{x})$ is defined in a δ-neighborhood $N_\delta(\mathbf{x}^*)$ and satisfies $f(\mathbf{x}^*) > f(\mathbf{u})$ for every $\mathbf{u} \in N_\delta(\mathbf{x}^*)$ with $\mathbf{u} \ne \mathbf{x}^*$. Allowing equality in the condition, that is, $f(\mathbf{x}^*) \ge f(\mathbf{u})$, defines $\mathbf{x}^*$ as a weak local maximum. A schematic view of strong and weak local maxima and minima is presented in Figure 4.

Figure 4.

Strong and weak local minima and maxima.

3.1.7 Computational complexity

The efficiency of an algorithm is usually measured by its algorithmic or computational complexity; in the literature this is sometimes loosely called Kolmogorov complexity, although that term more precisely refers to descriptive complexity. For a given problem of size n, the complexity is expressed in big-O notation, for example, $O(n^2)$ or $O(n \log n)$ [1]. For two functions f(x) and g(x), if we have

$$
\lim_{x \to x_0} \frac{f(x)}{g(x)} = K \;\Rightarrow\; f = O(g) \tag{12}
$$

where K is a finite, nonzero value, the big-O notation indicates that f is asymptotically equivalent to the order of g. If the limit is K = 1, f is said to be of the same order as g [1]. The small-o notation is applied when the limit tends to zero,

$$
\lim_{x \to x_0} \frac{f(x)}{g(x)} \to 0 \;\Rightarrow\; f = o(g) \tag{13}
$$

3.1.8 Nondeterministic polynomial (NP) problems

In mathematical programming, an easy or tractable problem is one that can be solved by a computer algorithm in a reasonable solution time that grows as a polynomial function of the problem size n. A problem is referred to as a P-problem, or polynomial-time problem, if the number of steps needed to solve it is bounded by a polynomial in n and at least one algorithm exists to solve it.

On the other hand, a hard or intractable problem is one whose solution time is an exponential function of n. A problem is called nondeterministic polynomial (NP) if a guessed solution can be verified in polynomial time; note, however, that there is no specific rule for making such a guess, so the estimated solutions cannot be guaranteed to be optimal or even near-optimal. In fact, no polynomial-time algorithm is known for solving NP-hard problems, and only approximate or heuristic solutions are applicable. Therefore, heuristic and metaheuristic methods can provide near-optimal/suboptimal solutions with acceptable accuracy.

A given problem is called NP-complete if it belongs to NP and every other NP problem can be reduced to it using a reduction algorithm that runs in polynomial time. The traveling salesman problem, which aims to find the shortest route or lowest traveling cost for visiting all n cities exactly once and then returning to the starting city, is an example of an NP-hard problem.

3.2 Theoretical foundations of metaheuristic optimization

Two opposite criteria should be taken into account in development of a metaheuristic algorithm: (1) exploration of the search space and (2) exploitation of the best solution (Figure 5).

Figure 5.

Metaheuristic algorithm design space.

Promising areas are identified by the good solutions obtained so far. In intensification (exploitation), the promising regions are searched thoroughly in the hope of finding better solutions. In diversification (exploration), attempts are made to ensure that all regions of the search space are visited.

Purely random algorithms represent the extreme of the exploration approach: they generate a new random solution in each iteration and thereby sample the search space as broadly as possible.

3.2.1 Representation

The implementation of any metaheuristic algorithm requires an encoding method; the way a candidate solution is expressed is referred to as its representation. Encoding plays a major role in the productivity and efficiency of any metaheuristic algorithm and is a necessary step in its design. Additionally, the efficiency of a representation depends on the search operators (neighborhood, recombination, etc.). In fact, when defining a representation, we first need to consider how the solution will be evaluated and how the search operators will work. A representation needs to have the following characteristics:

Completeness: It is one of the main characteristics of representation; in the sense that all the solutions of a given problem need to be represented.

Connectivity: It means that a search path must exist between any two solutions in the search space.

Efficiency: The representation should be easy for the search operators to manipulate.

Representations can be divided into two types in terms of their structure: linear and nonlinear. In this study, linear representation has been used. Some linear representations include the following:

Binary encoding: It is performed using binary alphabets.

Continuous encoding: In continuous optimization problems, encoding is performed based on real numbers.

Discrete encoding: It is used for discrete optimization problems such as the assignment problem.

Permutation encoding: It is used in problems where the objective is to find a permutation.

Random key: This type of encoding converts real numbers into a permutation (see the sketch after this list).

Diploid representation: In the diploid representation, two values are considered for each subset of the decision vector.
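As a concrete illustration of one of these schemes, the sketch below decodes a random-key vector into a permutation; the variable names and the seed are ours, and NumPy's `argsort` does the ranking:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Random-key encoding: a vector of real numbers in [0, 1) ...
keys = rng.random(5)
# ... is decoded into a permutation by ranking the keys
permutation = np.argsort(keys)

print(keys)
print(permutation)  # a permutation of 0..4 induced by the key ordering
```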

3.2.2 Objective function

The objective function assigns a real number to any solution in the search space; this number describes the quality, or fitness, of the solution. The objective function is an important element in the design of a metaheuristic algorithm, as it directs the search toward the best solution. If the objective function is wrongly defined, the search will be led to unacceptable solutions. In the present work, the objective function is the maximization of the drilling penetration rate.

3.2.3 Constraint

Constraint handling is another critical issue in the efficient design of metaheuristic algorithms. In fact, many continuous or discrete optimization problems are constrained. As mentioned earlier, constraints may be linear or nonlinear, and equalities or inequalities. Constraints are usually imposed on the decision variables or on the objective function. Some constraint handling strategies are presented in this section; they can be categorized as follows:

Reject strategy: In this approach, infeasible solutions are rejected, and only the feasible ones are taken into account.

Penalizing strategy: In this strategy, infeasible solutions obtained during the search process are preserved in the search space but are penalized. It is the most popular strategy for handling constraints: by adding a penalty term to the objective function, it transforms a constrained problem into an unconstrained one (see the sketch after this list).

Repairing strategy: In this strategy, infeasible solutions turn into feasible solutions.

Preserving strategy: In this strategy, specific operators are used to generate feasible solutions alone.
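A minimal sketch of the penalizing strategy follows, assuming a toy one-dimensional problem (minimize (x − 3)² subject to x ≤ 2, whose constrained optimum is x = 2) and a crude random search as the underlying metaheuristic; the penalty weight is an illustrative choice:

```python
import numpy as np

def objective(x):
    return (x - 3.0) ** 2          # function to minimize

def violation(x):
    return max(0.0, x - 2.0)       # inequality constraint x <= 2

def penalized(x, weight=1e3):
    # Penalizing strategy: infeasible points stay in the search space
    # but pay a cost that grows with the amount of violation.
    return objective(x) + weight * violation(x) ** 2

rng = np.random.default_rng(0)
best_x, best_val = None, np.inf
for _ in range(10_000):
    x = rng.uniform(-5.0, 5.0)
    val = penalized(x)
    if val < best_val:
        best_x, best_val = x, val

print(best_x)  # close to 2.0, the constrained optimum
```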

3.2.4 Search strategy

Search strategy is of particular importance in metaheuristic algorithms. This strategy carries out the search process without using the derivative of the problem. Some of the leading search models are listed below.

Golden-section search: This is a technique for finding the extremum (minimum or maximum) of a unimodal function by successively narrowing the range of values inside which the extremum is known to exist (see the sketch after this list).

Random search: Random search is a numerical optimization method that does not require the gradient of the problem and hence can be used for noncontinuous or nondifferentiable functions.

Nelder-Mead method: The Nelder-Mead method, also known as the downhill simplex method, is usually used for nonlinear optimization. It is a numerical method whose known drawback is that it can converge to nonstationary points.
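A compact implementation of the golden-section search mentioned above is sketched below (the test function is an arbitrary unimodal example):

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    # Repeatedly narrow the interval [a, b] known to bracket the extremum.
    inv_phi = (math.sqrt(5) - 1) / 2          # ~0.618, inverse golden ratio
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                       # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Example: the minimum of (x - 1.5)^2 + 1 on [0, 4] is at x = 1.5
print(golden_section_search(lambda x: (x - 1.5) ** 2 + 1, 0.0, 4.0))
```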

3.2.5 Classification of metaheuristic algorithms

The criteria used for classification of metaheuristic algorithms are as follows:

Nature-inspired vs. non-nature-inspired: Many metaheuristic algorithms are inspired by natural processes, ranging from the biological behavior of bees, the social behavior of bird flocking, and the physical behavior of materials in simulated annealing to human sociopolitical behavior in the imperialist competitive algorithm. Evolutionary algorithms and artificial immune systems also belong to these nature-inspired algorithms.

Memory usage vs. memoryless methods: Some metaheuristic algorithms are memoryless; these algorithms do not store data dynamically during the search. Simulated annealing lies in this category. Other metaheuristic algorithms use information gathered during the search process; the short-term and long-term memory used in the tabu search algorithm are of this type.

Deterministic or stochastic: Deterministic metaheuristic algorithms solve optimization problems through deterministic decision-making (such as local search and tabu search). In stochastic metaheuristic algorithms, several stochastic rules are applied during the search. In deterministic algorithms, the same initial solution always leads to the same final solution.

Population-based vs. single-point search algorithms: Single-point algorithms (such as simulated annealing) direct and transform a single solution throughout the search process, while population-based algorithms (such as particle swarm optimization) evolve a whole population of solutions. Single-point search algorithms take an exploitative approach and have the power to concentrate the search on a local space. Population-based algorithms follow an exploratory trajectory and allow a more diversified exploration of the search space.

Iterative or greedy approach: In iterative algorithms, the search starts with an initial set of solutions (population), and the solutions vary in each iteration. In greedy algorithms, the search begins with a null solution, and a decision variable is determined at each step until the final solution is obtained. Most metaheuristic algorithms follow an iterative approach.


4. Review of literature

In this section, firstly, a brief explanation of some of the mostly used metaheuristic algorithms is provided. Next, previous works dealing with prediction and optimization of penetration rate performed by various authors are introduced.

4.1 Literature on metaheuristic optimization

The optimization literature changed dramatically with the advent of metaheuristic algorithms in the 1960s. Alan Turing may have been the first to use heuristic algorithms: during the Second World War, Turing and Gordon Welchman designed the Bombe machine, which finally cracked the German Enigma machine in 1940. In his 1948 report Intelligent Machinery, Turing outlined pioneering ideas in the fields of intelligent machinery, machine learning, neural networks, and evolutionary algorithms.

4.1.1 Genetic algorithm

The genetic algorithm, developed by John Holland and his collaborators during the 1960s and 1970s, is a biological evolutionary model inspired by Charles Darwin's natural selection and survival of the fittest. Holland was the first to use crossover, recombination, mutation, and selection in comparative studies and artificial systems [2]. Figures 6 and 7 illustrate the application of the crossover and mutation operators.

Figure 6.

The schematic view of crossover at a random point [2].

Figure 7.

The schematic view of mutation at a random point [2].

4.1.2 Simulated annealing algorithm

Kirkpatrick et al. [3] developed the simulated annealing algorithm to solve optimization problems, drawing on the annealing of metals: when steel is cooled slowly, it develops into a crystallized structure with minimum energy and larger crystalline sizes, and the defects of the steel structure are decreased (Figure 8) [3].

Figure 8.

Simulated annealing search technique.

The search technique used in this algorithm is a move-based search: it starts from an initial guess at a high temperature, and the system cools down as the temperature gradually decreases. A new move or solution is accepted if it is better; otherwise, it is accepted with a certain probability, so that the system can escape the trap of local optima [3].

4.1.3 Tabu search algorithm

Tabu search was introduced by Glover [4]. It is a memory-based search strategy that uses the search history as an integral element. Two important points should be taken into account in this search: (1) how to use memory efficiently and (2) how to integrate the algorithm with other algorithms to develop a superior one. Tabu search is a centralized local search algorithm that uses memory to avoid potential cycling among local solutions and thus increase search efficiency.

While the algorithm runs, recent moves (the memory history) are recorded in a tabu list, and new solutions must avoid those on the list. The tabu list is one of the most important concepts in the tabu search method: it records recent search moves so that any new move avoids the previously visited ones. This also saves time, because previous moves are not repeated [4].

4.1.4 Ant colony optimization

When ants find a food source, they use pheromones to mark the food source and the trails to and from it. As more ants cross the same path, that path turns into a preferred path (Figure 9), so several preferred paths can emerge during the process. Using this behavioral property of ants, scientists have developed a number of robust ant colony optimization methods; Dorigo pioneered this field in 1992 [5].

Figure 9.

Ant preferred trail formation process.

4.1.5 Particle swarm optimization

Particle swarm optimization was later developed by Kennedy and Eberhart [6]. This method is inspired by the collective behavior of birds, fish, and even humans, which is referred to as swarm intelligence. Particles swarm around the search space, starting from an initial random guess. The swarm communicates the current best and global best solutions and is updated based on the quality of the solutions. The movement of each particle includes two main components: a stochastic component and a deterministic component. A particle is attracted toward the current global best while also having a tendency to move randomly. When a particle finds a location better than those found previously, it records it as the new best location. Figure 10 shows a schematic view of the motion of particles [7].

Figure 10.

Schematic representation of particle motion in the particle swarm method.
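A minimal sketch of the particle update described above is given below; the inertia and acceleration coefficients are common textbook choices, not values from this chapter, and the sphere function is only a test objective:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)]                # global best

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # deterministic pull toward the bests plus stochastic factors r1, r2
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

sphere = lambda p: float(np.sum(p ** 2))
print(pso_minimize(sphere, np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```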

Figure 11.

The effect of neighbors and herd center on the movement of krill [16].

4.1.6 Harmony search

Harmony search was first developed by Geem et al. [8]. It is a metaheuristic algorithm inspired by music and is based on the observation that the aim of music is to search for a perfect state of harmony; this harmony in music is analogous to finding the optimum in an optimization process. When a musician plays a piece of music, there are three possible choices:

• Harmony memory: play an exact famous piece from memory.

• Pitch adjusting: play something similar to a famous piece.

• Randomization: play a random or new note [8].

4.1.7 Honeybee algorithm

Honeybee algorithm is another type of optimization algorithm. This algorithm is inspired by the explorative behavior of honeybees, and many variants of this algorithm have already been formulated: honeybee algorithm, virtual bee algorithm, artificial bee colony, and honeybee mating algorithm.

The literature suggests that the honeybee algorithm was first formulated by Sunil Nakrani and Craig Tovey (2004) at Oxford University to allocate computers among different clients and web-hosting servers [9].

4.1.8 Big Bang-Big Crunch

Big Bang-Big Crunch was first presented by Erol and Eksin [10]. This approach relies on theories of the evolution of the universe, namely the Big Bang-Big Crunch evolution theory. In the Big Bang phase, energy dissipation causes a state of disorder or chaos, and randomization is known as the principal feature of this stage. In the Big Crunch stage, however, the randomly distributed particles are drawn into an order [10].

4.1.9 Firefly algorithm

The Firefly algorithm was developed by Yang [11] at Cambridge University based on idealization of the flashing characteristics of fireflies. In order to develop the algorithm, the following three idealized rules are used:

All fireflies are unisex, such that a firefly will be attracted to other fireflies, regardless of their gender.

Attractiveness is proportional to brightness; hence, for any two flashing fireflies, the less bright one will move toward the brighter one.

The brightness of a firefly can be determined by the landscape of the objective function [11].

4.1.10 Imperialist competitive algorithm

The imperialist competitive algorithm was developed by Atashpaz-Gargari and Lucas in 2007. Drawing on mathematical modeling of the sociopolitical evolution process, this algorithm provides an approach to solving mathematical optimization problems. During the imperialist competition, weak empires gradually lose their power and are finally eliminated. The competition ideally ends when only one empire is left in the world, which is when the imperialist competitive algorithm has reached the optimal point of the objective function and stops [12].

4.1.11 Cuckoo search

Cuckoo search is an optimization algorithm developed by Yang and Deb in 2009. It is inspired by the obligate brood parasitism of some cuckoo species, which lay their eggs in the nests of other host birds. The following idealized rules are used for simplicity:

Each cuckoo lays one egg at a time and deposits it in a randomly selected nest.

The best nests with high-quality eggs will carry over to the next generation.

The number of host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability Pa ∈ (0, 1). In this case, the host bird either dumps the egg or abandons the nest and builds a new one somewhere else [13].

4.1.12 Bat algorithm

The bat algorithm is a metaheuristic optimization algorithm developed by Yang [14]. This algorithm is based on the echolocation behavior of microbats with varying pulse rates of emission and loudness. Echolocation is a biological sound tracking system that is used by bats and some other animals, such as dolphins. By idealization of some of the echolocation features, one can develop various bat-inspired algorithms:

All bats use echolocation to sense distance, and they also “know” the difference between food/prey and background barriers in some magical way.

Bats fly randomly with velocity vi at position xi with a fixed frequency fmin, varying wavelength λ and loudness A0 to search for prey. They can automatically adjust the wavelength (or frequency) of their emitted pulses and adjust the rate of pulse emission r ∈ [0, 1], depending on the proximity of their target.

Although loudness may vary in many ways, it is assumed that loudness variations range from a large (positive) A0 to a minimum constant value Amin [14].

4.1.13 Charged system search

Charged system search was presented by Kaveh and Talatahari [15] for the optimization of mathematical models. Each search agent is referred to as a charged particle, which behaves like a charged sphere with a known radius and a charge proportional to the quality of the solution it produces. The particles are thus able to exert forces on one another and cause other particles to move. In addition, exploiting a particle's previous velocity, as a record of its past performance, can be effective in changing the particle's position. Newtonian mechanics is used to determine these changes precisely, and the rules adopted provide a balance between the exploitation and exploration power of the algorithm [15].

4.1.14 Krill herd algorithm

The krill herd algorithm was proposed by Gandomi and Alavi [16] for the optimization of mathematical models. It is classified as a swarm intelligence algorithm and is inspired by the herding behavior of krill swarms during food finding. In the krill herd algorithm, the minimum distances of each krill individual from the food and from the highest density of the herd are considered the objective functions for krill movement. The position of an individual krill varies with time depending on three actions: movement induced by other krill individuals, foraging activity, and random diffusion (Figure 11).

4.1.15 Dolphin echolocation

Dolphin echolocation was first proposed by Kaveh and Farhoudi as a new optimization method. Scientists believe that dolphins are ranked second (after humans) in terms of smartness and intelligence. This optimization method was developed according to echolocation ability of dolphins [17].

4.2 Literature on drilling operations

Drilling operations account for significant costs during the development of oil and gas fields. Therefore, drilling optimization can decrease the costs of a project and hence increase the profit earned from oil and gas production. In most studies, rate of penetration (ROP) has been considered as the objective function of the optimization process. ROP depends on many factors, including well depth, formation characteristics, mud properties, and the rotational speed of the drill string, among others. Several studies have been conducted to gain a profound insight into the parameters affecting ROP. Maurer [18] introduced an equation for ROP that accounted for the rock cratering mechanisms of roller-cone bits. Galle and Woods [19] proposed a mathematical model for estimating ROP, with formation type, weight on bit, rotational speed of the bit, and bit tooth wear as input parameters. Mechem and Fullerton [20] proposed a model with input variables of formation drillability, well depth, weight on bit, bit rotational speed, mud pressure, and drilling hydraulics. Bourgoyne and Young [21] used multiple regression analysis to develop an analytical model and investigated the effects of depth, strength, and compaction of the formation, bit diameter, weight on bit, rotational speed of the bit, bit wear, and the hydraulic interactions associated with drilling; they also introduced a technique for selecting optimum values of weight on bit, rotational speed, and bit hydraulics, and for calculating formation pressure, through multiple regression analysis of drilling data. Tansev [22] developed a new method of ROP and bit life optimization based on the interaction of raw data, regression, and an optimization method, using the parameters of bit rotational speed, weight on bit, and hydraulic horsepower. Al-Betairi et al. [23] used multiple regression analysis to optimize ROP as a function of controllable and uncontrollable variables; they also studied the correlation coefficients and the multicollinearity sensitivity of the drilling parameters. Maidla and Ohara [24] introduced computer software for the optimum selection of roller-cone bit type, bit rotational speed, weight on bit, and bit wear to minimize drilling costs. Hemphill and Clark [25] studied the effect of mud chemistry on ROP through tests conducted with different types of PDC bits and drilling muds. Fear [26] used geological and mud-logging data and bit properties to develop a correlation for estimating ROP. Ritto et al. [27] introduced a new approach for optimizing ROP as a function of the rotational speed at the top and the initial reaction force at the bit, subject to the vibration, stress, and fatigue limits of the dynamical system. Alum and Egbon [28] concluded from a series of studies that pressure loss in the annulus is the only parameter that significantly affects ROP, and they proposed an analytical model for estimating ROP based on the model introduced by Bourgoyne and Young. Yi et al. [29] utilized the shuffled frog leaping algorithm to optimize ROP as a function of bit rotational velocity, weight on bit, and flow rate. Hankins et al. [30] optimized the drilling process of already drilled wells, with weight on bit, rotational velocity, bit properties, and hydraulics as variables, to minimize drilling costs. Shishavan et al. [31] studied a preliminary managed-pressure drilling case to minimize the associated risk and decrease drilling costs.
Wang and Salehi [32] used artificial intelligence to predict optimum mud hydraulics during drilling operations and performed a sensitivity analysis using forward regression. A variety of artificial intelligence studies have recently been conducted in civil and petroleum engineering [33, 34, 35, 36].

In the following sections, a new approach is presented for the prediction and optimization of ROP based on an artificial neural network (ANN). To the best of the authors' knowledge, ANN has not been widely applied to ROP optimization in previous studies. The variables used in this study were well depth (D), weight on bit (WOB), bit rotational velocity (N), the ratio of yield point to plastic viscosity (Yp/PV), and the ratio of 10 min gel strength to 10 s gel strength (10MGS/10SGS). Using the ANN technique, several models were developed for the prediction of ROP, and the best one was selected according to their performance. Then, an artificial bee colony (ABC) algorithm was used to optimize ROP based on the selected ANN predictive model, and the drilling parameters were evaluated to determine their effects on ROP.


5. Methodology of the problem of the case study

The present work aims to apply neural networks in combination with the artificial bee colony (ABC) algorithm to a real case of penetration rate prediction and optimization. The basic definitions regarding the problem of study are provided in the next subsections. Then, the case used in our work is explained. Finally, the ABC algorithm used in the optimization process is described.

5.1 Hydrocarbon reservoir

Hydrocarbon is the general term used for any substance composed of hydrogen and carbon. From clothing to energy, there are many areas in which hydrocarbons serve as the main material. Hydrocarbons are usually extracted from reservoirs located deep in the formations of the earth's crust. Underground hydrocarbon reservoirs, also known as oil and gas reservoirs, have been exploited for more than a century and a half, and there have been several developments in the technologies associated with the oil and gas industry [37, 38].

The term hydrocarbon reservoir is used for a large volume of rock containing hydrocarbon, in either oil or gas form, usually found deep in the earth. This type of reservoir is far different from what most people imagine: a hydrocarbon reservoir is not a tank or anything like it. In fact, it is a rock with numerous pores, which make it capable of storing fluid. There are two types of hydrocarbon reservoirs: conventional and unconventional [39].

A conventional reservoir consists of porous and permeable rock bounded by an impermeable rock, usually called the cap rock. Due to the high pressure in the deep layers, the fluid in the reservoir rock tends to move out of the rock toward shallower depths, which usually have lower pressures. The role of the cap rock is to seal the reservoir and prevent the hydrocarbon from migrating toward those low-pressure depths.

Conventional reservoirs were the only type of hydrocarbon reservoir exploited until recent years. As conventional reserves became rare and depleted, the oil and gas industry started to study the feasibility of production from unconventional reservoirs, and thanks to recent developments in the related technologies, hydrocarbon production from unconventional reservoirs has now started in different parts of the world. The major difference between conventional and unconventional reservoirs is that in unconventional reservoirs there is no traditional arrangement of reservoir and cap rock. The reservoir rock has high porosity, but because of its low permeability, the fluid cannot move out of it and is trapped inside the rock. Since the example in the present work deals with a conventional reservoir, unconventional reservoirs are not discussed further.

In order to produce oil and gas from a reservoir, the first step is to find a location in which hydrocarbon has accumulated in a volume large enough to be exploited economically. This exploration step is typically done using seismic techniques. In the next step, the location with a high probability of holding hydrocarbons is drilled. The drilled well is called an exploration well, and if it reaches a relatively large amount of hydrocarbon, more wells are drilled after a field development plan has been prepared. Production from the reservoir continues until the production rate falls below an economic criterion, which is usually defined in terms of net present value.

Due to the high pressure of the reservoir rock, the hydrocarbon tends to move toward regions of lower pressure. In order to exploit the entrapped hydrocarbon and provide a flow path, one or more wells are needed. The well is drilled deep into the rocks and, after passing through the cap rock, reaches the reservoir rock. Then, owing to the pressure difference between the rock and the surface, the hydrocarbons move from the reservoir to the surface through the drilled well. Sometimes the pressure difference is not large enough for the fluid to reach the surface; in these cases, techniques called artificial lift methods are used to supply the energy needed to deliver the fluid to the surface. After extraction, the hydrocarbon is delivered to treatment facilities, and the next steps are designed according to the producing company's plan.

5.2 Drilling operations

As mentioned above, the exploitation of oil and gas reservoirs typically consists of three types of operations: exploration, drilling, and production. The drilling phase involves costly operations that consume a large portion of the capital expenditure of field development. Therefore, optimizing the operations associated with drilling can reduce the investment significantly, increasing the net present value of the project [40].

In the early years of the oil and gas industry, wells were drilled using percussion cable tools. These techniques became inefficient as the demand for drilling deeper, and hence more pressurized, formations increased. In the early twentieth century, the rotary drilling technique was introduced to the oil and gas industry, paving the way for drilling faster and deeper wells.

Rotary drilling simply describes the process in which a sharp bit penetrates the rock under its weight and rotational movement [41]. A rotary drilling system comprises prime movers, hoisting equipment, rotary equipment, and circulating equipment, all of which are mounted on a rig. The prime mover, usually a diesel engine, provides the power required by the whole rig. The hoisting system is responsible for raising and lowering the drill string into and out of the hole. The rotary equipment supports the rotation of the drill bit by transforming electrical power into rotational movement. In order to transport the cuttings to the surface and also to cool the bit, the circulating equipment provides a mud flow that is directed into the drill string down to the bit and returns to the surface carrying the debris accumulated at the bottom of the hole.

One of the important factors in the drilling process is the rate of penetration, usually measured in meters per hour or feet per hour. This parameter shows how fast the drilling has proceeded and, thus, how much cost has been saved. A survey of previous studies identified a series of parameters as having a significant effect on the rate of penetration during drilling operations: the rotation speed of the bit, the weight on the bit, the standpipe pressure, the mud circulation rate, the yield point and plastic viscosity of the mud, and the mud gel strength. Each parameter is briefly described below.

Bit rotation speed: In a drilling process, the bit is rotated using a rotary table or a top drive system. The rotation of the bit is usually measured in revolutions per minute (rpm).

Weight on the bit: In order to provide the downward force required for penetrating the rock, several drill collars are installed above the bit. This parameter is generally called weight on bit (WOB) and is measured in thousands of pounds (Klb).

Standpipe pressure: Standpipe pressure (SPP) refers to the total pressure loss due to fluid friction. In detail, SPP is the summation of pressure losses in drill string, annulus, bottom hole assembly, and across the bit. The unit for measuring the SPP is pounds per square inch (psi).

Mud flow rate: In order to lubricate and cool the bit during drilling, a mixture of additives in water or oil (called water-based and oil-based drilling mud, respectively) is pumped through the drill pipe down to the bit. The drilling mud also cleans the bottomhole by transporting the cuttings up to the surface, and it assists the penetration rate as it passes through the bit nozzles and strikes the rock like a jet. Mud flow rate is often expressed in gallons per minute (gpm).

Mud yield point: The yield point, usually expressed in lbf/100 ft2, is an indicator of the resistance of a fluid to movement. It is a parameter of the Bingham plastic model and equals the shear stress at zero shear rate. As the attractive forces among the colloidal particles increase, the mud needs more force to move; hence, the yield point is higher.

Mud plastic viscosity: The plastic viscosity of the mud is determined by the slope of the shear stress vs. shear rate plot. A higher plastic viscosity indicates a more viscous fluid and vice versa. The unit of measurement of plastic viscosity is the centipoise.

Mud gel strength: Gel strength is the shear stress measured at a low shear rate after the drilling mud has been static for a certain period of time, which is 10 s and 10 min in the API standard. It indicates the ability of the drilling mud to suspend drilled solids and weighting material when circulation is stopped. In petroleum engineering applications, it is measured in lbf/100 ft2.

5.3 Case study

In the present study, a data set obtained from a drilling process in a gas field located in the south of Iran was used. The depth of the well was 4235 m, and it was drilled with one run of a roller-cone bit and three runs of PDC bits. The IADC code of the roller-cone bit was 435M, and the PDC bits had codes M332, M433, and M322. The roller-cone bit was used for about 20% and the PDC bits for 80% of the drilled depth. In detail, the roller-cone bit was used for the depth interval of 1016–1647 m, PDC (M332) for 1647–2330 m, PDC (M433) for 2330–3665 m, and finally, the depth between 3665 and 4235 m was drilled with PDC (M322).

The data set consists of 3180 samples taken at every 1 m of penetration from 1016 to 4235 m. The recorded variables included well depth (D), rotation speed of the bit (N), weight on bit (WOB), standpipe pressure (SPP), fluid rate (Q), mud weight (MW), the ratio of yield point to plastic viscosity (Yp/PV), and the ratio of 10 min gel strength to 10 s gel strength (10MGS/10SGS). A statistical summary of the data points is given in Table 1.

Parameter (unit) | Minimum value | Maximum value | Mean value
Well depth (m) | 1016 | 4235 | 2636
Rotation speed of bit (rpm) | 91.38 | 192.00 | 150.72
Weight on bit (Klb) | 1.02 | 43.26 | 21.59
Standpipe pressure (psi) | 898.98 | 4085.82 | 2502.61
Fluid rate (gpm) | 726.92 | 1054.75 | 865.17
Ratio of yield point to plastic viscosity | 0.96 | 2.09 | 1.49
Ratio of 10 min gel strength to 10 s gel strength | 1.13 | 1.50 | 1.27

Table 1.

Statistical summary of input data.

5.4 General description of the artificial bee colony

This algorithm was developed by Karaboga [42] and mimics the behavior of bees searching for the nectar of flowers. In a hive, there are three different types of bees: scouts, employed bees, and onlookers. The scout bees perform a random search of the surrounding environment to find flowers that secrete nectar. After finding the flowers, they keep the location in their memory, return to the hive, and share information about their findings through a process called the waggle dance. Next, the employed bees start visiting the flowers, based on the information obtained from the scouts, in order to exploit their nectar; the number of employed bees is equal to the number of food sources. The third group, the onlookers, remain in the hive waiting for the return of the employed bees in order to exchange information and select the best source based on the dances (the fitness of the candidates). In addition, the employed bee of an abandoned food site becomes a scout bee.

Considering an objective function $f(\mathbf{x})$, the probability that a food source is chosen by an onlooker can be expressed as [42]:

$$
P_i = \frac{F(\mathbf{x}_i)}{\sum_{j=1}^{S} F(\mathbf{x}_j)} \tag{14}
$$

where S indicates the number of food sources and $F(\mathbf{x})$ represents the amount of nectar at location $\mathbf{x}$. The intake efficiency is defined as $F/\tau$, in which $\tau$ represents the time spent at the food source. If a food source is tried for a predefined number of iterations with no improvement, the employed bee dedicated to that location becomes a scout and starts searching for new food sources in a random manner.

ABC algorithm has been used in different engineering problems including well placement optimization of petroleum reservoirs [43], optimization of water discharge in dams [44], data classification [45], and machine scheduling [46]. More description on the ABC algorithm can be found in other references [47, 48, 49, 50]. A typical flowchart of ABC algorithm is shown in Figure 12.

Figure 12.

Typical flowchart of ABC algorithm.
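To make the three bee roles concrete, the following is a minimal, illustrative sketch of the ABC loop (for maximizing a fitness F; the population size, iteration count, and scout `limit` are generic choices rather than the settings used later in this chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_maximize(F, lo, hi, n_sources=20, iters=200, limit=30):
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_sources, dim))   # food sources
    fit = np.apply_along_axis(F, 1, x)
    trials = np.zeros(n_sources, dtype=int)          # stagnation counters

    def try_neighbor(i):
        # Perturb one random dimension relative to a random partner source
        # (possibly itself, in this simplified sketch).
        k, j = rng.integers(n_sources), rng.integers(dim)
        cand = x[i].copy()
        cand[j] += rng.uniform(-1, 1) * (x[i, j] - x[k, j])
        cand = np.clip(cand, lo, hi)
        f_cand = F(cand)
        if f_cand > fit[i]:                          # greedy selection
            x[i], fit[i], trials[i] = cand, f_cand, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                   # employed bee phase
            try_neighbor(i)
        # Onlooker phase: sources are chosen with probability following
        # Eq. (14), shifted so the probabilities stay nonnegative.
        p = fit - fit.min() + 1e-12
        p = p / p.sum()
        for i in rng.choice(n_sources, size=n_sources, p=p):
            try_neighbor(i)
        # Scout phase: abandon sources that stagnated for too long.
        worn = trials > limit
        if worn.any():
            x[worn] = rng.uniform(lo, hi, size=(int(worn.sum()), dim))
            fit[worn] = np.apply_along_axis(F, 1, x[worn])
            trials[worn] = 0

    best = int(np.argmax(fit))
    return x[best], fit[best]

# Example: maximize -(x - 1)^2 - (y + 2)^2, whose optimum is at (1, -2)
F = lambda p: -(p[0] - 1) ** 2 - (p[1] + 2) ** 2
print(abc_maximize(F, np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```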


6. Result and discussion

6.1 Prediction

In the present research, an ANN model was developed to predict ROP as a function of the effective parameters. Neural networks are widely used in various engineering fields [51, 52, 53, 54, 55, 56, 57, 58, 59, 60]. In order to train the network, three training functions were used: Levenberg-Marquardt (LM), scaled conjugate gradient (SCG), and one-step secant (OSS). The number of hidden layers in the network was set to one since, according to Hornik et al. [61], one hidden layer is capable of approximating any nonlinear function. The number of neurons in the hidden layer was another parameter to be set. Several equations have been proposed by different authors to determine the optimum number of neurons in a hidden layer; they are presented in Table 2, where Ni and No indicate the number of input and output variables, respectively.

Relationship | Reference
2 × Ni + 1 | [62]
(Ni + No)/2 | [63]
(2 + No × Ni + 0.5 × No × (No² + Ni) − 3)/(Ni + No) | [64]
2 × Ni/3 | [65]
√(Ni × No) | [66]
2 × Ni | [67, 68]

Table 2.

The equations for determining the optimum number of neurons in a hidden layer.

Using the values obtained from the equations in Table 2, several ANN models were developed with 2–16 neurons. The models were then compared in terms of R2 and RMSE, and the best model was selected [69, 70, 56, 71]. The comparison was done using the method proposed by Zorlu et al. [72]. In this method, the R2 and RMSE of each developed model are calculated. Next, each network is assigned an integer rating according to its R2 and RMSE values, such that a better result receives a higher rating. For example, if there are 8 models, the model with the best (highest) R2 receives a rating of 8, and the model with the worst R2 receives a rating of 1. The procedure is repeated for the RMSE comparison. The ratings assigned to each model are then summed to obtain a total score, and the model with the highest total score is selected as the best model for the problem of study.
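A small sketch of this scoring scheme follows (the arrays hold invented illustrative values, and ties are not specially handled):

```python
import numpy as np

# R2 and RMSE for four candidate models (illustrative values only)
r2   = np.array([0.839, 0.899, 0.902, 0.882])
rmse = np.array([0.1040, 0.0821, 0.0850, 0.0897])

# Higher R2 is better: the best model receives the highest integer rating.
r2_rating = np.argsort(np.argsort(r2)) + 1
# Lower RMSE is better: rank on -rmse so the smallest RMSE wins.
rmse_rating = np.argsort(np.argsort(-rmse)) + 1

total = r2_rating + rmse_rating
print(total, "best model:", int(np.argmax(total)) + 1)
```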

In the present chapter, three types of learning functions were used for training the network, the results of which are presented in Tables 3–5. According to the tables, the LM, SCG, and OSS functions ranked first, second, and third, respectively. In order to design an accurate model, the best model of each function was compared with the others; the results of the comparison are shown in Figures 13 and 14. As can be seen, the best model of the LM function yielded the best performance. Thus, this function was selected for designing the ANN used for the prediction and optimization of ROP.

Model no. | Neuron no. | Train R2 | Train RMSE | Test R2 | Test RMSE | Train R2 rating | Train RMSE rating | Test R2 rating | Test RMSE rating | Total rank
1 | 2 | 0.839 | 0.1040 | 0.816 | 0.1076 | 1 | 1 | 1 | 1 | 4
2 | 4 | 0.899 | 0.0821 | 0.885 | 0.0893 | 5 | 6 | 4 | 4 | 19
3 | 6 | 0.902 | 0.0850 | 0.897 | 0.0818 | 6 | 4 | 8 | 8 | 26
4 | 8 | 0.882 | 0.0897 | 0.884 | 0.0886 | 2 | 2 | 3 | 5 | 12
5 | 10 | 0.893 | 0.0868 | 0.887 | 0.0910 | 4 | 3 | 5 | 2 | 14
6 | 12 | 0.892 | 0.0827 | 0.875 | 0.0907 | 3 | 5 | 2 | 3 | 13
7 | 14 | 0.908 | 0.0800 | 0.892 | 0.0885 | 7 | 7 | 6 | 6 | 26
8 | 16 | 0.912 | 0.0779 | 0.893 | 0.0863 | 8 | 8 | 7 | 7 | 30

Table 3.

The results of the developed ANN models based on LM function.

Model no. | Neuron no. | Train R2 | Train RMSE | Test R2 | Test RMSE | Train R2 rating | Train RMSE rating | Test R2 rating | Test RMSE rating | Total rank
1 | 2 | 0.798 | 0.1159 | 0.824 | 0.1002 | 1 | 1 | 3 | 4 | 9
2 | 4 | 0.820 | 0.1092 | 0.815 | 0.1083 | 4 | 4 | 2 | 2 | 12
3 | 6 | 0.809 | 0.1127 | 0.839 | 0.0949 | 2 | 2 | 6 | 8 | 18
4 | 8 | 0.841 | 0.1035 | 0.831 | 0.0993 | 6 | 6 | 4 | 5 | 21
5 | 10 | 0.827 | 0.1076 | 0.846 | 0.0982 | 5 | 5 | 7 | 7 | 24
6 | 12 | 0.814 | 0.1093 | 0.810 | 0.1093 | 3 | 3 | 1 | 1 | 8
7 | 14 | 0.853 | 0.0984 | 0.837 | 0.1065 | 8 | 8 | 5 | 3 | 24
8 | 16 | 0.849 | 0.1006 | 0.860 | 0.0985 | 7 | 7 | 8 | 6 | 28

Table 4.

The results of the developed ANN models based on SCG function.

Model no. | Neuron no. | Train R2 | Train RMSE | Test R2 | Test RMSE | Train R2 rating | Train RMSE rating | Test R2 rating | Test RMSE rating | Total rank
1 | 2 | 0.815 | 0.1128 | 0.807 | 0.1033 | 2 | 2 | 4 | 5 | 13
2 | 4 | 0.811 | 0.1089 | 0.781 | 0.1254 | 1 | 4 | 1 | 1 | 7
3 | 6 | 0.829 | 0.1072 | 0.791 | 0.1086 | 5 | 6 | 2 | 3 | 16
4 | 8 | 0.816 | 0.1113 | 0.843 | 0.0976 | 3 | 3 | 8 | 7 | 21
5 | 10 | 0.837 | 0.1128 | 0.792 | 0.1057 | 7 | 2 | 3 | 4 | 16
6 | 12 | 0.822 | 0.1085 | 0.828 | 0.0971 | 4 | 5 | 5 | 8 | 22
7 | 14 | 0.849 | 0.0996 | 0.836 | 0.1098 | 8 | 8 | 6 | 2 | 24
8 | 16 | 0.832 | 0.1055 | 0.840 | 0.1006 | 6 | 7 | 7 | 6 | 26

Table 5.

The results of the developed ANN models based on OSS function.

Figure 13.

The results of R2 for LM, SCG, and OSS functions.

Figure 14.

The results of RMSE for LM, SCG, and OSS functions.

6.2 Optimization

In the previous section, an ANN was developed for prediction of ROP using the input data. As mentioned, selecting the most accurate predictive model can significantly affect the performance of optimization. In this section, the performance of the optimization algorithm is evaluated. Then, the ANN model obtained in the previous section is incorporated in the optimization algorithm to optimize the effective parameters for maximizing the penetration rate.

6.3 Evaluation of optimization algorithm

In this section, the best ANN model obtained in the previous section was selected for the optimization of ROP using the ABC algorithm. In order to evaluate the performance of ABC, it was first tested on the minimization of a standard benchmark, the two-variable Goldstein-Price function of Eq. (15):

$$
F_1(\mathbf{x}) = \left[1 + (x_1 + x_2 + 1)^2\,(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\right] \times \left[30 + (2x_1 - 3x_2)^2\,(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\right] \tag{15}
$$

The range of variation of both x1 and x2 is (−2, 2), and the minimum value of this function is 3, attained at the point (0, −1).

This function is plotted in Figure 15. The ABC algorithm was used to find the minimum point of the abovementioned function, and the values x1 = −0.33559 and x2 = −0.52311 were obtained for Eq. (15). The performance of ABC in finding the minimum point is illustrated in Figure 16.

Figure 15.

Function of Eq. (15) plotted in Cartesian coordinates.

Figure 16.

Evaluation of the ABC algorithm on Eq. (15).

6.4 Optimization of ROP in petroleum wells

In this section, the ANN predictive model was used to optimize the parameters affecting ROP. Since the well depth increases continuously during drilling, it was not considered a decision variable; instead, the ROP parameters were optimized at selected depths, since optimizing the parameters at every meter of penetration is impractical.

The ABC algorithm was used to optimize the parameters affecting ROP. After a series of sensitivity analyses, it was concluded that an efficient population size and number of iterations are 40 and 500, respectively. The three depths at which optimization was applied were 2000, 2500, and 3000 m. The results of the optimization at the selected depths are provided in Tables 6–8.

Parameter | Unit | Initial value | Optimum value
WOB | Klb | 23.8 | 17.4
N | rpm | 181 | 149
SPP | psi | 2181.4 | 2783.6
Q | bbl/day | 901.67 | 848
Yp/PV | – | 1.545 | 1.34
10MGS/10SGS | – | 1.33 | 1.16
ROP | m/h | 16.77 | 21.66

Table 6.

Comparison of real and optimized values for depth of 2000 m.

Parameter | Unit | Initial value | Optimum value
WOB | Klb | 15.4 | 21.6
N | rpm | 157 | 162
SPP | psi | 2531.5 | 2481.3
Q | bbl/day | 898.45 | 790
Yp/PV | – | 2.09 | 1.76
10MGS/10SGS | – | 1.2 | 1.09
ROP | m/h | 18.52 | 22.85

Table 7.

Comparison of real and optimized values for depth of 2500 m.

Parameter | Unit | Initial value | Optimum value
WOB | Klb | 21.9 | 25.5
N | rpm | 142 | 153
SPP | psi | 2854.7 | 2927.5
Q | bbl/day | 851.7 | 816
Yp/PV | – | 1.428 | 1.59
10MGS/10SGS | – | 1.25 | 1.11
ROP | m/h | 13.94 | 17.30

Table 8.

Comparison of real and optimized values for depth of 3000 m.

As can be seen, at each selected depth, the value of ROP was increased by about 20-30%. Therefore, by combining artificial intelligence and optimization, suitable operating patterns for ROP in an oil well can be created to increase penetration and reduce costs.
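Schematically, the ANN-ABC coupling used above can be pictured as follows. This sketch reuses the `abc_maximize` function from the sketch in Section 5.4; `predict_rop` is a hypothetical placeholder standing in for the trained ANN (not the authors' actual model), and the bounds loosely follow Table 1 with well depth held fixed:

```python
import numpy as np

# Hypothetical stand-in for the trained ANN predictor at a fixed depth:
# inputs are (WOB, N, SPP, Q, Yp/PV, 10MGS/10SGS), output is ROP.
def predict_rop(params):
    wob, n, spp, q, yp_pv, gel = params
    # smooth dummy response so the sketch runs end to end
    return 20.0 - 0.01 * (wob - 22.0) ** 2 - 0.001 * (n - 150.0) ** 2

# Decision-variable bounds, loosely following Table 1 (well depth is not
# a decision variable, so it does not appear here).
lo = np.array([1.02, 91.38, 898.98, 726.92, 0.96, 1.13])
hi = np.array([43.26, 192.00, 4085.82, 1054.75, 2.09, 1.50])

# Maximize the predicted ROP with the abc_maximize sketch of Section 5.4.
best_params, best_rop = abc_maximize(predict_rop, lo, hi)
print(best_params, best_rop)
```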

6.5 Conclusion and summary

In this chapter, the basics of optimization were first explained. Then, a neural network combined with the ABC algorithm was applied to the prediction of the rate of penetration in a gas well, using data collected from a gas field located in the south of Iran. Seven parameters were selected as inputs to develop a predictive ANN model. For this purpose, three learning functions were compared, and the LM function was selected as the best function for designing the predictive model. Next, the ABC algorithm was employed to optimize the parameters affecting ROP in order to maximize the penetration rate. Three scenarios were considered for the well depth in the optimization process; the best models for the depths of 2000, 2500, and 3000 m were obtained, and the results showed an improvement of 20-30% in the penetration rate.

According to the test results, the proposed model is a powerful tool for the prediction and optimization of the rate of penetration during the drilling process. Since the drilling process involves numerous effective parameters, it is almost infeasible to take every parameter into account explicitly. Therefore, the use of an ANN is very helpful for this complex problem: it makes it possible to predict and optimize the penetration rate in a short time and without heavy computational costs.

References

1. Azizi A. Introducing a novel hybrid artificial intelligence algorithm to optimize network of industrial applications in modern manufacturing. Complexity. 2017:18. https://doi.org/10.1155/2017/8728209
2. Holland JH. Genetic algorithms. Scientific American. 1992;267:66-73
3. Kirkpatrick S, Gelatt CD, Vecchi MP. Optimization by simulated annealing. Science. 1983;220:671-680
4. Glover F. Tabu search—Part I. ORSA Journal on Computing. 1989;1:190-206
5. Dorigo M, Birattari M. Ant colony optimization. In: Encyclopedia of Machine Learning. Springer; 2011. pp. 36-39
6. Kennedy J, Eberhart R. A new optimizer using particle swarm theory. In: Proceedings of the IEEE Sixth International Symposium on Micro Machine and Human Science. IEEE; 1995. pp. 39-43
7. Kennedy J. Particle swarm optimization. In: Encyclopedia of Machine Learning. Springer; 2011. pp. 760-766
8. Geem ZW, Kim JH, Loganathan GV. A new heuristic optimization algorithm: Harmony search. SIMULATION. 2001;76:60-68
9. Yang X-S. Metaheuristic optimization. Scholarpedia. 2011;6:11472
10. Erol OK, Eksin I. A new optimization method: Big bang–big crunch. Advances in Engineering Software. 2006;37:106-111
11. Yang X-S. Firefly algorithms for multimodal optimization. In: International Symposium on Stochastic Algorithms. Springer; 2009. pp. 169-178
12. Atashpaz-Gargari E, Lucas C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In: 2007 IEEE Congress on Evolutionary Computation (CEC 2007). IEEE; 2007. pp. 4661-4667
13. Yang X-S, Deb S. Cuckoo search: Recent advances and applications. Neural Computing and Applications. 2014;24:169-174
14. Yang X-S. A new metaheuristic bat-inspired algorithm. In: Nature Inspired Cooperative Strategies for Optimization (NICSO 2010). Springer; 2010. pp. 65-74
15. Kaveh A, Talatahari S. A novel heuristic optimization method: Charged system search. Acta Mechanica. 2010;213:267-289
16. Gandomi AH, Alavi AH. Krill herd: A new bio-inspired optimization algorithm. Communications in Nonlinear Science and Numerical Simulation. 2012;17:4831-4845
17. Kaveh A, Farhoudi N. A new optimization method: Dolphin echolocation. Advances in Engineering Software. 2013;59:53-70
18. Maurer WC. The "perfect-cleaning" theory of rotary drilling. Journal of Petroleum Technology. 1962;14:1-270
19. Galle EM, Woods HB. Best constant weight and rotary speed for rotary rock bits. In: Drilling and Production Practice. American Petroleum Institute; 1963
20. Mechem OE, Fullerton HB Jr. Computers invade the rig floor. Oil and Gas Journal. 1965;1:14
21. Bourgoyne AT Jr, Young FS Jr. A multiple regression approach to optimal drilling and abnormal pressure detection. Society of Petroleum Engineers Journal. 1974;14:371-384
22. Tansev E. A heuristic approach to drilling optimization. In: Fall Meeting of the Society of Petroleum Engineers of AIME. Society of Petroleum Engineers; 1975
23. Al-Betairi EA, Moussa MM, Al-Otaibi S. Multiple regression approach to optimize drilling operations in the Arabian Gulf area. SPE Drilling Engineering. 1988;3:83-88
24. Maidla EE, Ohara S. Field verification of drilling models and computerized selection of drill bit, WOB, and drillstring rotation. SPE Drilling Engineering. 1991;6:189-195
25. Hemphill T, Clark RK. The effects of PDC bit selection and mud chemistry on drilling rates in shale. SPE Drilling and Completion. 1994;9:176-184
26. Fear MJ. How to improve rate of penetration in field operations. SPE Drilling and Completion. 1999;14:42-49
27. Ritto TG, Soize C, Sampaio R. Robust optimization of the rate of penetration of a drill-string using a stochastic nonlinear dynamical model. Computational Mechanics. 2010;45:415-427
28. Alum MA, Egbon F. Semi-analytical models on the effect of drilling fluid properties on rate of penetration (ROP). In: Nigeria Annual International Conference and Exhibition. Society of Petroleum Engineers; 2011
29. Yi P, Kumar A, Samuel R. Realtime rate of penetration optimization using the shuffled frog leaping algorithm. Journal of Energy Resources Technology. 2015;137:32902
30. Hankins D, Salehi S, Karbalaei Saleh F. An integrated approach for drilling optimization using advanced drilling optimizer. Journal of Petroleum Engineering. 2015;2015:12. http://dx.doi.org/10.1155/2015/281276
31. Asgharzadeh Shishavan R, Hubbell C, Perez H, et al. Combined rate of penetration and pressure regulation for drilling optimization by use of high-speed telemetry. SPE Drilling and Completion. 2015;30:17-26
32. Wang Y, Salehi S. Application of real-time field data to optimize drilling hydraulics using neural network approach. Journal of Energy Resources Technology. 2015;137:62903
33. Koopialipoor M, Armaghani DJ, Hedayat A, et al. Applying various hybrid intelligent systems to evaluate and predict slope stability under static and dynamic conditions. Soft Computing. 2018;34:1-17
34. Koopialipoor M, Armaghani DJ, Haghighi M, Ghaleini EN. A neuro-genetic predictive model to approximate overbreak induced by drilling and blasting operation in tunnels. Bulletin of Engineering Geology and the Environment. 2017;33:1-10
35. Hasanipanah M, Armaghani DJ, Amnieh HB, et al. A risk-based technique to analyze flyrock results through rock engineering system. Geotechnical and Geological Engineering. 2018;34:1-14
36. Koopialipoor M, Nikouei SS, Marto A, et al. Predicting tunnel boring machine performance through a new model based on the group method of data handling. Bulletin of Engineering Geology and the Environment. 2018;34:1-15
37. Bradley HB. Petroleum Engineering Handbook. Society of Petroleum Engineers; 1987
38. Pirson SJ. Oil Reservoir Engineering. RE Krieger Publishing Company; 1977
39. Ahmed T. Reservoir Engineering Handbook. Elsevier; 2006
40. Bourgoyne AT, Millheim KK, Chenevert ME, Young FS. Applied Drilling Engineering. Society of Petroleum Engineers; 1986
41. Mitchell RF, Miska SZ. Fundamentals of Drilling Engineering. Society of Petroleum Engineers; 2017
42. Karaboga D. An Idea Based on Honey Bee Swarm for Numerical Optimization. Technical Report TR06. Erciyes University, Engineering Faculty, Computer Engineering Department; 2005
43. Nozohour-leilabady B, Fazelabdolabadi B. On the application of artificial bee colony (ABC) algorithm for optimization of well placements in fractured reservoirs; efficiency comparison with the particle swarm optimization (PSO) methodology. Petroleum. 2016;2:79-89
44. Ahmad A, Razali SFM, Mohamed ZS, El-Shafie A. The application of artificial bee colony and gravitational search algorithm in reservoir optimization. Water Resources Management. 2016;30:2497-2516
45. Zhang C, Ouyang D, Ning J. An artificial bee colony approach for clustering. Expert Systems with Applications. 2010;37:4761-4767
46. Rodriguez FJ, García-Martínez C, Blum C, Lozano M. An artificial bee colony algorithm for the unrelated parallel machines scheduling problem. In: International Conference on Parallel Problem Solving from Nature. Springer; 2012. pp. 143-152
47. Koopialipoor M, Ghaleini EN, Haghighi M, et al. Overbreak prediction and optimization in tunnel using neural network and bee colony techniques. Engineering Computations. 2018;34:1-12
48. Gordan B, Koopialipoor M, Clementking A, et al. Estimating and optimizing safety factors of retaining wall through neural network and bee colony techniques. Engineering Computations. 2018;34:1-10
49. Koopialipoor M, Fallah A, Armaghani DJ, et al. Three hybrid intelligent models in estimating flyrock distance resulting from blasting. Engineering Computations. 2018;34:1-14
50. Ghaleini EN, Koopialipoor M, Momenzadeh M, et al. A combination of artificial bee colony and neural network for approximating the safety factor of retaining walls. Engineering Computations. 2018;35:1-12
51. Azizi A, Yazdi PG, Humairi AA. Design and fabrication of intelligent material handling system in modern manufacturing with industry 4.0 approaches. International Robotics & Automation Journal. 2018;4:186-195
52. Azizi A, Entessari F, Osgouie KG, Rashnoodi AR. Introducing neural networks as a computational intelligent technique. In: Applied Mechanics and Materials. Trans Tech Publications; 2014. pp. 369-374
53. Osgouie KG, Azizi A. Optimizing fuzzy logic controller for diabetes type I by genetic algorithm. In: The 2nd International Conference on Computer and Automation Engineering (ICCAE). IEEE; 2010. pp. 4-8
54. Azizi A. Hybrid artificial intelligence optimization technique. In: Applications of Artificial Intelligence Techniques in Industry 4.0. Springer; 2019. pp. 27-47
55. Azizi A, Seifipour N. Modeling of dermal wound healing-remodeling phase by neural networks. In: International Association of Computer Science and Information Technology-Spring Conference (IACSITSC'09). IEEE; 2009. pp. 447-450
56. Koopialipoor M, Murlidhar BR, Hedayat A, et al. The use of new intelligent techniques in designing retaining walls. Engineering Computations. 2019;35:1-12
57. Zhao Y, Noorbakhsh A, Koopialipoor M, et al. A new methodology for optimization and prediction of rate of penetration during drilling operations. Engineering Computations. 2019;35:1-9
58. Liao X, Khandelwal M, Yang H, et al. Effects of a proper feature selection on prediction and optimization of drilling rate using intelligent techniques. Engineering Computations. 2019;35:1-12
59. Azizi A, Vatankhah Barenji A, Hashmipour M. Optimizing radio frequency identification network planning through ring probabilistic logic neurons. Advances in Mechanical Engineering. 2016;8:1687814016663476
60. Azizi A. RFID network planning. In: Applications of Artificial Intelligence Techniques in Industry 4.0. Springer; 2019. pp. 19-25
61. Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators. Neural Networks. 1989;2:359-366
62. Hecht-Nielsen R. Kolmogorov's mapping neural network existence theorem. In: Proceedings of the International Joint Conference on Neural Networks. IEEE Press; 1989. pp. 11-14
63. Ripley BD. Statistical aspects of neural networks. In: Networks and Chaos: Statistical and Probabilistic Aspects. Chapman & Hall; 1993. pp. 40-123
64. Paola JD. Neural Network Classification of Multispectral Imagery. Master's thesis. University of Arizona, USA; 1994
65. Wang C. A Theory of Generalization in Learning Machines with Neural Network Applications. University of Pennsylvania, Philadelphia, PA, USA; 1994
66. Masters T. Practical Neural Network Recipes in C++. Morgan Kaufmann; 1993
67. Kanellopoulos I, Wilkinson GG. Strategies and best practice for neural network image classification. International Journal of Remote Sensing. 1997;18:711-725
68. Kaastra I, Boyd M. Designing a neural network for forecasting financial and economic time series. Neurocomputing. 1996;10:215-236
69. Ashkzari A, Azizi A. Introducing genetic algorithm as an intelligent optimization technique. In: Applied Mechanics and Materials. Trans Tech Publications; 2014. pp. 793-797
70. Azizi A. Applications of Artificial Intelligence Techniques in Industry 4.0. Springer; 2018
71. Koopialipoor M, Fahimifar A, Ghaleini EN, et al. Development of a new hybrid ANN for solving a geotechnical problem related to tunnel boring machine performance. Engineering Computations. 2019;35:1-13
72. Zorlu K, Gokceoglu C, Ocakoglu F, et al. Prediction of uniaxial compressive strength of sandstones using petrography-based models. Engineering Geology. 2008;96:141-158
