Open access peer-reviewed chapter

Multi Strategy Search with Crow Search Algorithm

Written By

Rafet Durgut and Mehmet Emin Aydin

Reviewed: 25 January 2022 Published: 10 March 2022

DOI: 10.5772/intechopen.102862

From the Edited Volume

Optimisation Algorithms and Swarm Intelligence

Edited by Nodari Vakhania and Mehmet Emin Aydin


Abstract

Crow Search Algorithm (CSA) is one of the recently proposed swarm intelligence algorithms, developed by taking inspiration from the social behaviour of crow flocks. One of the drawbacks of the original CSA is that it relies on randomly selecting a neighbour within a single search strategy, which lowers the convergence rate and, because the same strategy is applied across all iterations, pushes the search to stick in local optima. The multi-strategy search for CSA (CSA-MSS) has been proposed to enrich the search facilities and provide diversity to overcome these drawbacks. The multi-strategy search utilises a pool of six different types of search operators. A multi-strategy approach with a selection mechanism has not been proposed for CSA before and is implemented here for the first time. A comparative performance analysis of the proposed algorithm has been conducted over 23 benchmark problems. The results demonstrate that the proposed approach outperforms well-known state-of-the-art methods.

Keywords

  • swarm intelligence
  • crow search algorithm
  • global optimisation
  • strategy selection
  • operator selection schemes

1. Introduction

Over the last two decades, many nature-inspired metaheuristic optimisation algorithms (MOAs) have been proposed to solve abstract and real-world problems, including combinatorial, binary and real-valued engineering problems [1]. MOAs are generally iterative processes developed by imitating natural phenomena and animal behaviours to obtain optimal solutions for given problems [2]. Although these algorithms provide satisfactory solutions to various optimisation problems, they do not always guarantee an optimal way of solving them at different scales [3, 4]. In this respect, MOAs have recently received increasing interest from scholars in related fields (e.g. engineering, business, logistics, etc.) as a way to design algorithms that solve general problems using a certain pattern of behaviour inspired by a given society, for example a colony of insects or another animal society [5]. Swarm intelligence algorithms tend to simulate the social behaviours of the imitated societies; some renowned swarm intelligence algorithms are particle swarm optimisation (PSO), inspired by bird flocking and fish schooling [6, 7]; artificial bee colony (ABC), which simulates the collective food-searching behaviour of honey bees [8, 9]; the cuckoo search algorithm, based on the brood parasitism of some cuckoo species [10, 11]; the group search optimiser (GSO), based on animal searching behaviour [12, 13]; and the firefly algorithm (FA), based on the flashing light patterns of tropical fireflies [14, 15].

Crow Search Algorithm (CSA) is another swarm intelligence metaheuristic framework, inspired by the feeding behaviours of crow flocks [1]. Similar to many other MOAs, CSA needs a balance between exploration and exploitation activities throughout the search process in order to approximate globally optimal solutions [16]. In this respect, exploitation refers to the use of gained experience to deepen the search within the locality of the current solution, while exploration implies diversification, e.g. jumping to another region of the search space, in order to reach the optimal solution. The majority of real-world problems are multi-modal, possessing numerous local optima that are hard to avoid, so a purely exploitative search has a high probability of getting stuck in a local optimum. In contrast, exploration enriches the search with higher diversity, letting it move to other feasible regions of the search space, which may pay off since a new region may contain the global best. In addition, well-balanced exploration lets the algorithm avoid sub-optimal solutions. It is known that a search strategy that balances exploration and exploitation finds near-optimal solutions close to the global optimum [17] in a shorter time, whereas CSA, as originally designed, is known for its strong exploitation capability [18].

In order to balance the exploration and exploitation capabilities of MOAs, a number of approaches have been implemented; among these, three categories can be highlighted: (i) dynamic parameterisation on the fly [19], (ii) implementing different perturbation search strategies [20] and (iii) using multiple strategies in the search [21] instead of a single one. For the last approach, a strategy pool consisting of different strategies is constructed, and the algorithm chooses the appropriate strategy according to the circumstances and dynamics of the search process. In this article, an implementation of CSA with multiple strategies is introduced to solve functional optimisation problems using a pool of six different search strategies. The main contribution of the study is the use of a multi-strategy search policy with CSA and the suggestion of a selection scheme that responds best to the search dynamics when solving functional optimisation problems.


2. Related works

Studies related to this work can be viewed in two categories: modified versions of CSA and adaptation approaches. Zamani et al. [22] proposed a new version of CSA, called Conscious Neighbourhood-based CSA (CCSA), with three new search strategies to improve the local and global search capabilities of CSA. Sonuc [23] proposed BinCSA, a pure binary version of CSA, in which a logic gate is used as the search strategy. Majhi et al. [24] developed a new version of CSA, called OBL-CSA-MO, which attempts to escape from local optima by using an opposition-based learning strategy and a mutation operator.

CSA has two control parameters: the awareness probability (AP) and the flight length (FL). AP is the probability of generating a random solution and controls the balance between intensification and diversification, while FL controls the step size of the search. Rizk-Allah et al. [25] proposed a new version of CSA integrated with chaos theory for solving fractional optimisation problems. Coelho et al. [26] developed a modified CSA in which a Gaussian distribution function is used to control the two algorithmic parameters. Necira et al. [27] proposed an enhanced version of CSA, called dynamic crow search algorithm (DCSA), with two modifications: the awareness probability is linearly adjusted throughout the iterations, and FL is drawn from a Pareto probability density function. There are several studies aiming to improve the efficiency and robustness of CSA; for more details and further modifications of CSA, the reader is referred to the surveys [28, 29]. To the best knowledge of the authors, CSA has not been studied with multiple search strategies to avoid the local optima problem, and this article presents the first ever study in this respect.


3. Materials and methods

3.1 Crow search algorithm

Crows are known to be among the smartest bird species thanks to their natural talents [30]; they can remember the locations where they or other bird species store food [31], and are often even called 'thieves'. With such a sound memory, they can even recall faces and warn the flock through complex communication channels in dangerous situations [32]. CSA was developed as a metaheuristic optimisation algorithm inspired by such smart behaviours [1], where each crow represents a valid solution within the search space and the crow's memory represents the best solution found so far for the given problem. Let $N$ be the number of crows in the flock and the search space be $D$-dimensional. Then the position of the $i$th crow at iteration $j$ is $x^{i,j} = \{x_d^{i,j}\}$, where $d \in \{1,\dots,D\}$, $i \in \{1,\dots,N\}$ and $j \in \{1,\dots,T\}$.

The position held in the memory of each crow is denoted $m^{i,j}$, where the information stored in memory is the best position found up to that iteration, and $T$ is the maximum number of iterations. There are two options once a crow moves towards the food position it has in mind: (i) the followed crow $k$ is unaware of being followed, so the follower moves towards the memorised position $m^{k,j}$; (ii) the followed crow is aware of the curious follower, and the follower is diverted to a random position. The choice between these two options is modelled as follows:

$$
x^{i,j+1} =
\begin{cases}
x^{i,j} + r_i \times FL \times \left(m^{k,j} - x^{i,j}\right), & r_j > AP \\
\text{a random position}, & \text{otherwise}
\end{cases}
\tag{1}
$$

where $r_i$ and $r_j$ are two uniformly distributed random variables within the range $[0,1]$, $FL$ is the flight length, $AP$ is the awareness probability, and $k$ is a randomly chosen integer between 1 and $N$. If the newly obtained position, i.e. the candidate solution, is better than the one held in the memory, the memorised position is updated. The exploration and exploitation capabilities are managed with $FL$ and $AP$ in this approach, where $FL$ drives the refinement of existing solutions while $AP$ drives the discovery of new ones. If a high value is set for AP, exploration dominates, which helps the search escape local minima; if a low value is set, the search leans towards exploitation and convergence may be delayed. Therefore, the AP value needs to be adjusted well [1].

The general overview of CSA is given as pseudocode in Algorithm 1. Firstly, the algorithmic parameters are set and the position of each crow is randomly generated in the $D$-dimensional search space; this first population is adopted as the initial best set of solutions in the flock memory. Then the main loop of the algorithm starts. Each crow in the flock decides whom to follow within the flock. Next, a random number is drawn and compared with AP, as a threshold, to identify whether the crow moves towards the food or to a new random position. If the new position is valid/feasible and better than the solution in the memory, the crow updates its own memory. The method continues until the specified termination criteria are met. A minimal code sketch of this loop follows Algorithm 1.

Algorithm 1. The pseudo code of the CSA.

1: Initialise parameters (D, T, AP, FL)

2: Randomly initialise the crows' positions

3: Memorise the positions

4: while termination criterion is not met do

5:    for i = 1:N do

6:        Selection Phase

7:        Determine AP

8:        if rand() >= AP then

9:           Calculate xi,j+1 using Eq. (1)

10:       else

11:          Generate Random Position

12:       end if

13:    end for

14:    Bound Control

15:    Evaluate

16:    Memorise

17: end while
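To make the loop concrete, the following is a minimal Python sketch of Algorithm 1 for a minimisation problem. The structure follows the pseudocode; the function and variable names, the default bounds and the example objective are illustrative assumptions rather than part of the chapter, although the defaults AP = 0.1, FL = 2 and N = 30 match the settings reported in Section 4.

```python
import numpy as np

def csa(f, D=30, N=30, T=1000, AP=0.1, FL=2.0, lb=-100.0, ub=100.0, seed=0):
    """A minimal sketch of the original CSA (Algorithm 1) for minimisation."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, size=(N, D))   # crow positions
    m = x.copy()                           # flock memory (best position per crow)
    fm = np.array([f(xi) for xi in m])     # memorised objective values
    for _ in range(T):
        for i in range(N):
            k = int(rng.integers(N))       # crow i picks a random crow k to follow
            if rng.random() >= AP:         # crow k unaware: guided move, Eq. (1)
                cand = x[i] + rng.random() * FL * (m[k] - x[i])
            else:                          # crow k aware: fly to a random position
                cand = rng.uniform(lb, ub, size=D)
            x[i] = np.clip(cand, lb, ub)   # bound control
            fx = f(x[i])
            if fx < fm[i]:                 # memorise improvements
                m[i], fm[i] = x[i].copy(), fx
    best = int(np.argmin(fm))
    return m[best], fm[best]

# Example: minimise the Sphere function (F1) in 10 dimensions
xbest, fbest = csa(lambda v: float(np.sum(v ** 2)), D=10, T=200)
```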

3.2 Search strategies

Search strategies are used to help algorithms switch from one solution to another within the neighbourhood. However, the use of a single search strategy with any metaheuristic algorithm imposes serious limitations on the balance between exploration and exploitation capabilities. In order to deliver a balanced search, we use a pool composed of six different search strategies. Five of these are adapted from the literature, inspired by the update rules used in differential evolution (DE) and artificial bee colony (ABC) algorithms, with some modifications applied [5, 33].

Eq. (2) is the original update rule used by CSA to generate a candidate solution, while Eq. (3) extends it with a global best-guided search strategy (GCSA) [34] in order to improve the exploitation capability. Eq. (4) is a global best-guided search strategy embedded with the flock memory (BCSA). Eq. (5) is added to the strategy pool to enhance the exploration ability by bringing the positions of two different randomly chosen crows into play (CBCSA). Eq. (6) is developed with inspiration from DE/rand/1 to offer higher exploration capability (RCSA). Eq. (7) is also an implementation of the DE/rand/1 rule, using memory solutions instead (RMCSA). In all strategies, $k$ denotes a randomly selected neighbour solution from the flock.

$$x^{i,j+1} = x^{i,j} + r_i \times FL \times \left(m^{i,j} - x^{i,j}\right) \tag{2}$$

$$x^{i,j+1} = x^{i,j} + r_i \times FL \times \left(m^{k,j} - x^{i,j}\right) + \left(gbest - x^{i,j}\right) \tag{3}$$

$$x^{i,j+1} = gbest + r_i \times FL \times \left(m^{k_1,j} - m^{k_2,j}\right) \tag{4}$$

$$x^{i,j+1} = x^{i,j} + r_i \times \left(m^{i,j} - x^{i,j}\right) + \left(x^{k_1,j} - x^{k_2,j}\right) \tag{5}$$

$$x^{i,j+1} = x^{k_1,j} + r_i \times \left(x^{k_2,j} - x^{k_3,j}\right) \tag{6}$$

$$x^{i,j+1} = m^{k_1,j} + r_i \times \left(m^{k_2,j} - m^{k_3,j}\right) \tag{7}$$

where $i = 1,\dots,N$ and $j = 1,\dots,T$ are the indexes over crows and iterations, respectively, while $k_1$, $k_2$, $k_3$ are three random integers required to satisfy $k_1 \neq k_2 \neq k_3$.
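For illustration, the six update rules can be written side by side as a small strategy pool. This is a sketch assuming the positions x, the memories m and the global best gbest are NumPy arrays; the grouping of the difference terms in Eqs. (3) and (5) follows the reconstruction given above.

```python
import numpy as np

def candidates(x, m, gbest, i, FL, rng):
    """One candidate solution per strategy, Eqs. (2)-(7); x and m are N x D arrays."""
    N = x.shape[0]
    r = rng.random()
    k = int(rng.integers(N))                           # random neighbour for Eq. (3)
    k1, k2, k3 = rng.choice(N, size=3, replace=False)  # k1 != k2 != k3
    return [
        x[i] + r * FL * (m[i] - x[i]),                    # Eq. (2) CSA
        x[i] + r * FL * (m[k] - x[i]) + (gbest - x[i]),   # Eq. (3) GCSA
        gbest + r * FL * (m[k1] - m[k2]),                 # Eq. (4) BCSA
        x[i] + r * (m[i] - x[i]) + (x[k1] - x[k2]),       # Eq. (5) CBCSA
        x[k1] + r * (x[k2] - x[k3]),                      # Eq. (6) RCSA
        m[k1] + r * (m[k2] - m[k3]),                      # Eq. (7) RMCSA
    ]
```

In CSA-MSS only one of these generators is applied per move; returning the whole list here simply makes the six formulas easy to compare.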

3.3 CSA with multi-strategy search (CSA-MSS)

As explained in the previous section, search based on a single strategy imposes limitations that push the search towards local optima. This well-known problem is tackled in various ways in the literature; memetic algorithms [35], modified swarm intelligence algorithms [36] and hybrid solutions [37] are among the studies seeking resolutions for this issue. Recently, adaptive operator selection schemes [8, 38] have looked promising for overcoming this issue in an easier way. The approach proposed in this study, CSA-MSS, is based on a multi-strategy search that adaptively uses multiple search strategies in order to overcome the limitations of single-strategy search. It consists of two main components: credit assignment and strategy selection. In order to evaluate the performance of the applied strategy, a reward value, $r_{i,j}$, is calculated as follows:

$$r_{i,j} = \frac{f(x) - f(x')}{f(x_{gbest})} \tag{8}$$

where $f(x)$ is the objective value of the current solution, and $f(x')$ is the objective value of the candidate solution generated by search strategy $i$. To keep the reward value balanced over the course of the iterations, the improvement in the objective function is normalised by the objective value of the global best solution. In Eq. (9), credit values are assigned to each strategy.

$$c_{i,j+1} = (1 - \alpha)\, c_{i,j} + \alpha\, \frac{r_{i,j}}{s_{i,j+1}} \tag{9}$$

where $\alpha$ is the learning rate, and $c_{i,j}$ and $s_{i,j}$ are the credit value and the number of successful improvements so far for strategy $i$ at iteration $j$, respectively; the ratio $r_{i,j}/s_{i,j+1}$ is thus the average reward at iteration $j$.
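These two components can be sketched in a few lines of Python. This is a minimal illustration assuming a minimisation problem with positive objective values, so that the normalisation of Eq. (8) is well defined; the sliding-window extreme reward used in the experiments (Section 4) is omitted for brevity.

```python
def reward(f_cur, f_cand, f_gbest):
    """Eq. (8): improvement of the candidate over the current solution,
    normalised by the global-best objective value (assumes f > 0)."""
    return (f_cur - f_cand) / f_gbest

def update_credit(c, r, s, alpha=0.1):
    """Eq. (9): blend the previous credit with the average reward r / s,
    where s counts the strategy's successful improvements so far."""
    avg = r / s if s > 0 else 0.0
    return (1.0 - alpha) * c + alpha * avg
```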

The adaptive pursuit approach is used as the strategy selection scheme, i.e. for selecting the search strategy to operate, due to its proven good performance in the literature [8]. In this selection scheme, each strategy has a probability value, which is calculated with Eq. (10).

$$
p_{i,j+1} =
\begin{cases}
p_{i,j} + \beta \left(p_{max} - p_{i,j}\right), & \text{if } i = i_j^* \\
p_{i,j} + \beta \left(p_{min} - p_{i,j}\right), & \text{otherwise}
\end{cases}
\tag{10}
$$

where $\beta$ is the learning rate in $[0,1]$, $p_{min}$ and $p_{max}$ are the minimum and maximum selection probabilities, and $i_j^*$ is the best strategy at iteration $j$. In this approach, the best strategy wins the maximum selection probability. The general overview of CSA-MSS is given in Algorithm 2. The proposed approach starts with selecting a strategy to operate. If there is any strategy within the pool that has not been rewarded yet, a random selection is conducted to give these strategies a chance to be opted for. The selected strategy is operated on the current solution to generate a candidate solution, i.e. a neighbouring solution. A reward value is calculated for the opted strategy and recorded if it is successful in the running iteration. The credit values are updated from the recorded reward values at the end of each iteration. Both average and maximum reward values are calculated through a sliding window over the last $W$ iterations. A sketch of the selection mechanism is given after Algorithm 2.

Algorithm 2. The general overview of CSA-MSS.

1: Initialise parameters (D, Tmax, M, AP, FL)

2: Randomly initialise the crows' positions

3: Memorise the positions

4: while termination criterion is not met do

5:     for i = 1:N do

6:        Selection Phase

7:        Determine AP

8:           if rand() >= AP then

9:              if any strategy has not been rewarded yet then

10:                 Select the strategy randomly

11:              else

12:                 Assign probabilities using Eq. (10)

13:                 Select the strategy using roulette-wheel selection

14:              end if

15:              Generate candidate solution

16:           else

17:              Generate Random Position

18:        end if

19:        Bound Control

20:        Evaluate

21:        Calculate reward using Eq. (8)

22:        if reward > 0 then

23:            Add reward and increase s for selected strategy

24:        end if

25:     end for

26:     Memorise

27:     Credit assignment using Eq. (9)

28: end while
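The probability update of Eq. (10) and the roulette-wheel step in line 13 of Algorithm 2 can be sketched as below. The chapter does not state how $p_{max}$ is set; this sketch uses the common adaptive pursuit convention $p_{max} = 1 - (K-1)\,p_{min}$, which keeps the probabilities summing to one, so that value is an assumption.

```python
import numpy as np

def adaptive_pursuit(p, credits, beta=0.1, p_min=0.05):
    """Eq. (10): pull the best-credited strategy towards p_max, the rest towards p_min."""
    K = len(p)
    p_max = 1.0 - (K - 1) * p_min     # assumed convention; not stated in the chapter
    best = int(np.argmax(credits))
    for i in range(K):
        target = p_max if i == best else p_min
        p[i] = p[i] + beta * (target - p[i])
    return p

def roulette_select(p, rng):
    """Line 13 of Algorithm 2: roulette-wheel selection over strategy probabilities."""
    probs = np.asarray(p, dtype=float)
    return int(rng.choice(len(probs), p=probs / probs.sum()))
```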

3.4 Benchmark functions

In order to compare and analyse the performance of the approaches under consideration alongside CSA, a number of well-known numeric benchmark functions have been used, collected from the literature [39, 40] as in Table 1. The characteristics of each function are shown in the column labelled C, where U, M, S and N stand for unimodal, multimodal, separable and non-separable, respectively (e.g. US denotes a unimodal separable function). If a function has only one local optimum, which is also its global optimum, it is called a unimodal function [41]; these functions can be used to demonstrate the exploitation ability of algorithms. If a function has more than one local optimum, it is called a multimodal function, for which exploration is also needed to travel among the regions, i.e. modes, that contain the local optima. If the relationship between any two inputs of a function is loose, i.e. each variable can easily be treated standalone, the function is considered separable, and non-separable otherwise [42]. Three of these functions are sketched in code after Table 1 to illustrate the distinction.

| No | Name | Search space | C | Function |
| --- | --- | --- | --- | --- |
| F1 | Sphere | $[-100,100]^D$ | US | $f_1(x)=\sum_{i=1}^{n} x_i^2$ |
| F2 | Elliptic | $[-100,100]^D$ | UN | $f_2(x)=\sum_{i=1}^{n} x_i^2 \left(10^6\right)^{(i-1)/(n-1)}$ |
| F3 | Sum Squares | $[-10,10]^D$ | US | $f_3(x)=\sum_{i=1}^{n} i x_i^2$ |
| F4 | Sum Power | $[-10,10]^D$ | MS | $f_4(x)=\sum_{i=1}^{n} \lvert x_i \rvert^{i+1}$ |
| F5 | Schwefel 2.22 | $[-10,10]^D$ | UN | $f_5(x)=\sum_{i=1}^{n} \lvert x_i \rvert + \prod_{i=1}^{n} \lvert x_i \rvert$ |
| F6 | Schwefel 2.21 | $[-100,100]^D$ | UN | $f_6(x)=\max_{1 \le i \le n} \lvert x_i \rvert$ |
| F7 | Step | $[-100,100]^D$ | US | $f_7(x)=\sum_{i=1}^{n} \left(\lfloor x_i+0.5 \rfloor\right)^2$ |
| F8 | Quartic | $[-1.28,1.28]^D$ | US | $f_8(x)=\sum_{i=1}^{n} i x_i^4$ |
| F9 | QuarticWN | $[-1.28,1.28]^D$ | US | $f_9(x)=\sum_{i=1}^{n} i x_i^4 + \text{random}[0,1)$ |
| F10 | Rosenbrock | $[-10,10]^D$ | UN | $f_{10}(x)=\sum_{i=1}^{n-1} \left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$ |
| F11 | Rastrigin | $[-5.12,5.12]^D$ | MS | $f_{11}(x)=\sum_{i=1}^{n} \left[x_i^2-10\cos(2\pi x_i)+10\right]$ |
| F12 | Non-Continuous Rastrigin | $[-5.12,5.12]^D$ | MS | $f_{12}(x)=\sum_{i=1}^{n} \left[y_i^2-10\cos(2\pi y_i)+10\right]$, with $y_i=x_i$ if $\lvert x_i \rvert<\frac{1}{2}$ and $y_i=\frac{\text{round}(2x_i)}{2}$ if $\lvert x_i \rvert\ge\frac{1}{2}$ |
| F13 | Griewank | $[-600,600]^D$ | MN | $f_{13}(x)=\frac{1}{4000}\sum_{i=1}^{n} x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ |
| F14 | Schwefel 2.26 | $[-500,500]^D$ | UN | $f_{14}(x)=418.98\,n-\sum_{i=1}^{n} x_i\sin\left(\sqrt{\lvert x_i \rvert}\right)$ |
| F15 | Ackley | $[-32,32]^D$ | MN | $f_{15}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ |
| F16 | Penalized1 | $[-50,50]^D$ | MN | $f_{16}(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n} u(x_i,10,100,4)$, with $y_i=1+\frac{1}{4}(x_i+1)$ and $u(x_i,a,k,m)=k(x_i-a)^m$ if $x_i>a$; $0$ if $-a\le x_i\le a$; $k(-x_i-a)^m$ if $x_i<-a$ |
| F17 | Penalized2 | $[-50,50]^D$ | MN | $f_{17}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n} u(x_i,5,100,4)$ |
| F18 | Alpine | $[-10,10]^D$ | MS | $f_{18}(x)=\sum_{i=1}^{n}\left\lvert x_i\sin(x_i)+0.1x_i\right\rvert$ |
| F19 | Levy | $[-10,10]^D$ | MN | $f_{19}(x)=\sin^2(3\pi x_1)+\sum_{i=1}^{n-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+\lvert x_n-1\rvert\left[1+\sin^2(3\pi x_n)\right]$ |
| F20 | Weierstrass | $[-0.5,0.5]^D$ | MN | $f_{20}(x)=\sum_{i=1}^{n}\sum_{k=0}^{k_{max}} a^k\cos\left(2\pi b^k(x_i+0.5)\right)-n\sum_{k=0}^{k_{max}} a^k\cos\left(2\pi b^k\cdot 0.5\right)$, $a=0.5$, $b=3$, $k_{max}=20$ |
| F21 | Schaffer | $[-100,100]^D$ | MN | $f_{21}(x)=0.5+\frac{\sin^2\left(\sqrt{\sum_{i=1}^{n} x_i^2}\right)-0.5}{\left(1+0.001\sum_{i=1}^{n} x_i^2\right)^2}$ |
| F22 | Himmelblau | $[-5,5]^D$ | MS | $f_{22}(x)=\frac{1}{n}\sum_{i=1}^{n}\left(x_i^4-16x_i^2+5x_i\right)$ |
| F23 | Michalewicz | $[0,\pi]^D$ | MS | $f_{23}(x)=-\sum_{i=1}^{n}\sin(x_i)\sin^{20}\left(\frac{i x_i^2}{\pi}\right)$ |

Table 1.

Benchmark functions.
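As a brief illustration of the categories in Table 1, the following sketch implements three of the functions in Python: Sphere (F1) is unimodal and separable, Rastrigin (F11) is multimodal and separable, and Rosenbrock (F10) is non-separable because consecutive variables are coupled. The formulas come directly from the table; the function names are illustrative.

```python
import numpy as np

def sphere(x):       # F1: unimodal, separable (US)
    return float(np.sum(x ** 2))

def rastrigin(x):    # F11: multimodal, separable (MS); a regular grid of local optima
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def rosenbrock(x):   # F10: unimodal, non-separable (UN); x[i] and x[i+1] are coupled
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2))
```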


4. Experimental results

This section presents the experimental study testing the performance of the proposed algorithm in comparison to a number of state-of-the-art algorithms. To compare the performance of the proposed approach fairly, each algorithm was executed for 30 independent runs, each seeded randomly. The algorithmic parameters of the proposed approach and the other variants, AP, FL and N, are set to 0.1, 2 and 30, respectively. The maximum number of function evaluations is set to 1.5E+05. Firstly, the proposed approach is compared with the original CSA in Table 2. After that, CSA-MSS is compared with all single strategies in Tables 3–6. The parameters of the strategy selection scheme, i.e. adaptive pursuit, are set to α = 0.1 and β = 0.1, and the reward value is calculated with the extreme approach, i.e. the maximum, over the last W = 5 iterations. All parameter settings were determined through parameter tuning experiments.

| F No | CSA 30D Mean (Std) | CSA-MSS 30D Mean (Std) | Sign | CSA 60D Mean (Std) | CSA-MSS 60D Mean (Std) | Sign |
| --- | --- | --- | --- | --- | --- | --- |
| F1 | 4.77E-05 (2.81E-05) | 1.79E-74 (6.79E-74) | + | 7.87E+04 (1.99E+04) | 7.63E-17 (3.10E-16) | + |
| F2 | 3.21E+00 (5.95E+00) | 3.18E-66 (1.70E-65) | + | 2.71E+09 (4.69E+08) | 1.27E-12 (5.75E-12) | + |
| F3 | 6.60E-06 (4.00E-06) | 1.78E-69 (9.59E-69) | + | 2.50E+04 (5.06E+03) | 9.11E-18 (4.79E-17) | + |
| F4 | 6.71E-14 (1.21E-13) | 1.62E-78 (8.73E-78) | + | 5.81E+40 (1.37E+41) | 3.72E-31 (2.00E-30) | + |
| F5 | 1.32E-03 (2.78E-04) | 6.17E-34 (2.98E-33) | + | 3.17E+18 (8.63E+18) | 1.74E-08 (9.13E-08) | + |
| F6 | 5.72E-02 (1.95E-02) | 4.37E-01 (1.89E-01) | + | 8.35E+00 (2.73E-01) | 2.94E+00 (4.17E-01) | + |
| F7 | 0.00E+00 (0.00E+00) | 3.30E+00 (2.15E+00) | + | 7.75E+04 (2.16E+04) | 3.15E+01 (1.94E+01) | + |
| F8 | 5.18E-14 (8.21E-14) | 7.10E-62 (3.82E-61) | + | 2.39E+02 (7.05E+01) | 2.70E-31 (9.43E-31) | + |
| F9 | 1.91E-02 (5.43E-03) | 9.85E-03 (4.71E-03) | + | 2.16E+02 (7.95E+01) | 5.40E-02 (1.41E-02) | + |
| F10 | 2.74E+01 (9.95E+00) | 8.48E+00 (6.43E+00) | + | 2.97E+06 (8.81E+05) | 9.13E+01 (3.25E+01) | + |
| F11 | 3.73E+01 (1.37E+01) | 5.01E+01 (1.76E+01) | + | 7.86E+02 (3.09E+01) | 1.16E+02 (1.62E+01) | + |
| F12 | 5.47E+01 (2.29E+01) | 5.37E+01 (1.67E+01) |  | 7.49E+02 (2.83E+01) | 1.18E+02 (2.70E+01) | + |
| F13 | 6.30E-03 (7.08E-03) | 3.93E-02 (4.93E-02) | + | 6.57E+02 (2.23E+02) | 8.54E-02 (1.16E-01) | + |
| F14 | 8.37E+03 (3.37E+02) | 4.38E+03 (7.61E+02) | + | 1.92E+04 (3.24E+02) | 1.00E+04 (1.07E+03) | + |
| F15 | 1.40E-03 (3.93E-04) | 3.24E+00 (1.28E+00) | + | 1.98E+01 (9.61E-01) | 6.89E+00 (1.48E+00) | + |
| F16 | 3.78E-07 (2.89E-07) | 2.08E-01 (4.50E-01) |  | 4.63E+08 (1.41E+08) | 3.76E-01 (6.87E-01) | + |
| F17 | 4.60E-06 (3.78E-06) | 4.84E-03 (2.61E-02) | + | 8.62E+08 (3.08E+08) | 3.33E-02 (1.62E-01) | + |
| F18 | 1.19E-03 (1.03E-03) | 2.17E-09 (1.17E-08) | + | 1.08E+02 (8.18E+00) | 3.14E-05 (1.69E-04) | + |
| F19 | 3.19E-06 (2.84E-06) | 4.47E-01 (1.26E+00) |  | 1.16E+03 (3.39E+02) | 7.76E-01 (1.84E+00) | + |
| F20 | 7.41E-02 (1.82E-02) | 7.75E+00 (2.28E+00) | + | 8.90E+01 (5.61E+00) | 2.81E+01 (3.89E+00) | + |
| F21 | 5.73E-02 (2.02E-02) | 8.01E-02 (1.45E-02) | + | 3.97E-01 (8.65E-03) | 1.66E-01 (2.17E-02) | + |
| F22 | -6.89E+01 (2.17E+00) | -6.88E+01 (2.88E+00) |  | -3.65E+01 (1.79E+00) | -6.72E+01 (1.55E+00) | + |
| F23 | -1.56E+01 (3.43E+00) | -2.58E+01 (1.20E+00) | + | -1.47E+01 (8.09E-01) | -5.04E+01 (2.53E+00) | + |

Table 2.

The comparison of CSA and CSA-MSS on benchmark functions. Values are given as Mean (Std); a + in the Sign column indicates a statistically significant difference.

| F No | CSA | GCSA | BCSA | CBCSA | RCSA | RMCSA | CSA-MSS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | 4.77E-05 (2.81E-05) | 1.75E-19 (1.45E-19) | 3.63E+04 (1.22E+04) | 5.90E+03 (5.21E+02) | 3.58E+04 (5.30E+03) | 1.55E-39 (6.04E-39) | 1.79E-74 (6.79E-74) |
| F2 | 3.21E+00 (5.95E+00) | 4.48E-09 (1.25E-08) | 6.03E+08 (1.45E+08) | 7.49E+07 (1.23E+07) | 5.83E+08 (9.55E+07) | 1.36E-34 (5.01E-34) | 3.18E-66 (1.70E-65) |
| F3 | 6.60E-06 (4.00E-06) | 4.06E-20 (3.41E-20) | 5.02E+03 (1.18E+03) | 7.96E+02 (1.06E+02) | 5.23E+03 (7.18E+02) | 5.20E-39 (2.21E-38) | 1.78E-69 (9.59E-69) |
| F4 | 6.71E-14 (1.21E-13) | 4.82E-17 (5.72E-17) | 1.32E+17 (2.45E+17) | 1.80E+08 (1.78E+08) | 5.81E+16 (1.13E+17) | 8.72E+01 (4.25E+02) | 1.62E-78 (8.73E-78) |
| F5 | 1.32E-03 (2.78E-04) | 4.68E-11 (2.34E-11) | 1.51E+06 (2.29E+06) | 3.75E+01 (3.23E+00) | 9.98E+05 (1.83E+06) | 8.04E-27 (3.13E-26) | 6.17E-34 (2.98E-33) |
| F6 | 5.72E-02 (1.95E-02) | 2.10E-02 (2.29E-02) | 7.11E+00 (2.23E-01) | 3.23E+00 (1.92E-01) | 7.06E+00 (1.88E-01) | 2.30E+00 (4.71E-01) | 4.37E-01 (1.89E-01) |
| F7 | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) | 4.03E+04 (4.31E+03) | 5.79E+03 (7.46E+02) | 3.58E+04 (4.35E+03) | 1.34E+01 (3.25E+01) | 3.30E+00 (2.15E+00) |
| F8 | 5.18E-14 (8.21E-14) | 7.12E-39 (1.65E-38) | 4.39E+01 (1.09E+01) | 1.24E+00 (2.28E-01) | 3.48E+01 (1.02E+01) | 4.80E-38 (2.55E-37) | 7.10E-62 (3.82E-61) |
| F9 | 1.91E-02 (5.43E-03) | 1.63E-02 (5.13E-03) | 4.69E+01 (1.18E+01) | 1.56E+00 (2.75E-01) | 3.55E+01 (7.44E+00) | 1.85E-02 (1.88E-02) | 9.85E-03 (4.71E-03) |
| F10 | 2.74E+01 (9.95E+00) | 2.85E+01 (2.17E+01) | 1.05E+06 (4.52E+05) | 3.30E+04 (7.35E+03) | 9.65E+05 (2.20E+05) | 2.65E+01 (2.01E+01) | 8.48E+00 (6.43E+00) |
| F11 | 3.73E+01 (1.37E+01) | 4.75E+01 (1.08E+01) | 3.41E+02 (1.14E+01) | 2.22E+02 (1.19E+01) | 3.29E+02 (1.64E+01) | 3.58E+01 (7.61E+00) | 5.01E+01 (1.76E+01) |
| F12 | 5.47E+01 (2.29E+01) | 2.97E+01 (8.55E+00) | 3.04E+02 (1.50E+01) | 1.89E+02 (9.60E+00) | 2.92E+02 (2.00E+01) | 5.19E+01 (1.41E+01) | 5.37E+01 (1.67E+01) |
| F13 | 6.30E-03 (7.08E-03) | 1.73E-02 (3.45E-02) | 3.62E+02 (3.72E+01) | 5.42E+01 (7.96E+00) | 3.29E+02 (3.57E+01) | 1.33E-02 (1.70E-02) | 3.93E-02 (4.93E-02) |
| F14 | 8.37E+03 (3.37E+02) | 5.51E+03 (6.15E+02) | 8.40E+03 (2.89E+02) | 8.31E+03 (2.03E+02) | 8.39E+03 (2.91E+02) | 4.51E+03 (1.52E+03) | 4.38E+03 (7.61E+02) |
| F15 | 1.40E-03 (3.93E-04) | 3.87E-02 (2.08E-01) | 1.97E+01 (5.59E-01) | 1.36E+01 (4.54E-01) | 1.95E+01 (2.02E-01) | 9.61E-01 (7.36E-01) | 3.24E+00 (1.28E+00) |
| F16 | 3.78E-07 (2.89E-07) | 2.81E-01 (1.33E+00) | 1.82E+08 (3.58E+07) | 1.81E+05 (8.24E+04) | 1.26E+08 (3.23E+07) | 3.90E-01 (8.09E-01) | 2.08E-01 (4.50E-01) |
| F17 | 4.60E-06 (3.78E-06) | 3.74E-19 (5.71E-19) | 3.60E+08 (1.13E+08) | 2.90E+06 (1.02E+06) | 2.88E+08 (8.23E+07) | 2.90E-01 (9.00E-01) | 4.84E-03 (2.61E-02) |
| F18 | 1.19E-03 (1.03E-03) | 2.49E-04 (7.46E-04) | 4.53E+01 (2.61E+00) | 2.32E+01 (1.19E+00) | 4.33E+01 (2.31E+00) | 1.67E-15 (1.36E-15) | 2.17E-09 (1.17E-08) |
| F19 | 3.19E-06 (2.84E-06) | 2.65E-13 (3.97E-13) | 5.58E+02 (7.14E+01) | 8.59E+01 (1.02E+01) | 5.08E+02 (6.01E+01) | 9.84E-02 (1.63E-01) | 4.47E-01 (1.26E+00) |
| F20 | 7.41E-02 (1.82E-02) | 1.14E+01 (3.19E+00) | 4.16E+01 (4.07E+00) | 2.72E+01 (7.16E-01) | 4.12E+01 (1.63E+00) | 8.72E-01 (7.99E-01) | 7.75E+00 (2.28E+00) |
| F21 | 5.73E-02 (2.02E-02) | 1.14E-01 (2.52E-02) | 2.76E-01 (3.35E-03) | 1.10E-01 (1.99E-02) | 2.63E-01 (1.88E-02) | 6.54E-02 (2.82E-02) | 8.01E-02 (1.45E-02) |
| F22 | -6.89E+01 (2.17E+00) | -6.76E+01 (2.75E+00) | -4.54E+01 (4.79E+00) | -5.29E+01 (1.49E+00) | -4.46E+01 (1.62E+00) | -7.12E+01 (2.17E+00) | -6.88E+01 (2.88E+00) |
| F23 | -1.56E+01 (3.43E+00) | -2.56E+01 (1.24E+00) | -9.43E+00 (6.27E-01) | -1.16E+01 (4.97E-01) | -9.32E+00 (3.83E-01) | -2.61E+01 (1.08E+00) | -2.58E+01 (1.20E+00) |

Table 3.

The comparison of the search strategies on 30D benchmark functions. Values are given as Mean (Std).

| F No | CSA | GCSA | BCSA | CBCSA | RCSA | RMCSA | CSA-MSS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | 4+ | 3+ | 7 | 5 | 6 | 2+ | 1 |
| F2 | 4 | 3+ | 7 | 5 | 6 | 2+ | 1 |
| F3 | 4+ | 3+ | 6 | 5 | 7 | 2+ | 1 |
| F4 | 3+ | 2+ | 7 | 5 | 6 | 4 | 1 |
| F5 | 4+ | 3+ | 7 | 5 | 6 | 2+ | 1 |
| F6 | 2+ | 1+ | 7 | 5 | 6 | 4 | 3 |
| F7 | 1+ | 1+ | 7 | 5 | 6 | 4 | 3 |
| F8 | 4+ | 2+ | 7 | 5 | 6 | 3+ | 1 |
| F9 | 4+ | 2+ | 7 | 5 | 6 | 3+ | 1 |
| F10 | 3 | 4 | 7 | 5 | 6 | 2 | 1 |
| F11 | 2 | 3 | 7 | 5 | 6 | 1 | 4 |
| F12 | 4 | 1 | 7 | 5 | 6 | 2 | 3 |
| F13 | 1+ | 3+ | 7 | 5 | 6 | 2+ | 4 |
| F14 | 5 | 3 | 7 | 4 | 6 | 2 | 1 |
| F15 | 1+ | 2 | 7 | 5 | 6 | 3 | 4 |
| F16 | 1+ | 3 | 7 | 5 | 6 | 4 | 2 |
| F17 | 2+ | 1+ | 7 | 5 | 6 | 4 | 3 |
| F18 | 4+ | 3+ | 7 | 5 | 6 | 1+ | 2 |
| F19 | 2+ | 1+ | 7 | 5 | 6 | 3 | 4 |
| F20 | 1+ | 4 | 7 | 5 | 6 | 2 | 3 |
| F21 | 1+ | 5+ | 7+ | 4+ | 6+ | 2+ | 3 |
| F22 | 2 | 4 | 6 | 5 | 7 | 1 | 3 |
| F23 | 4 | 3 | 6 | 5 | 7 | 1 | 2 |
| Mean | 2.73913 | 2.608696 | 6.869565 | 4.913043 | 6.130435 | 2.434783 | 2.26087 |

Table 4.

The statistical details of the comparison of the search strategies on 30D benchmark functions. Cells give the rank of each method; + marks a statistically significant difference.

| F No | CSA | GCSA | BCSA | CBCSA | RCSA | RMCSA | CSA-MSS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | 7.87E+04 (1.99E+04) | 8.51E-13 (4.83E-13) | 1.13E+05 (4.89E+03) | 2.90E+04 (2.32E+03) | 1.14E+05 (5.66E+03) | 5.27E-09 (2.05E-08) | 7.63E-17 (3.10E-16) |
| F2 | 2.71E+09 (4.69E+08) | 2.66E+00 (3.27E+00) | 2.88E+09 (4.26E+08) | 6.08E+08 (1.02E+08) | 2.83E+09 (3.31E+08) | 2.83E+08 (7.80E+08) | 1.27E-12 (5.75E-12) |
| F3 | 2.50E+04 (5.06E+03) | 6.64E-13 (3.17E-13) | 3.20E+04 (1.42E+03) | 8.04E+03 (5.22E+02) | 3.19E+04 (2.11E+03) | 1.08E-08 (4.21E-08) | 9.11E-18 (4.79E-17) |
| F4 | 5.81E+40 (1.37E+41) | 3.26E-14 (2.84E-14) | 2.11E+41 (9.55E+41) | 6.83E+27 (1.42E+28) | 7.94E+40 (1.43E+41) | 2.38E+40 (6.95E+40) | 3.72E-31 (2.00E-30) |
| F5 | 3.17E+18 (8.63E+18) | 2.37E-07 (1.04E-07) | 6.61E+19 (1.69E+20) | 4.04E+02 (4.16E+02) | 5.83E+19 (1.16E+20) | 1.62E+17 (7.84E+17) | 1.74E-08 (9.13E-08) |
| F6 | 8.35E+00 (2.73E-01) | 1.49E+00 (2.84E-01) | 8.41E+00 (2.04E-01) | 5.37E+00 (1.71E-01) | 8.42E+00 (2.01E-01) | 4.96E+00 (1.73E+00) | 2.94E+00 (4.17E-01) |
| F7 | 7.75E+04 (2.16E+04) | 2.00E-01 (5.42E-01) | 1.15E+05 (5.48E+03) | 3.03E+04 (1.29E+03) | 1.15E+05 (4.54E+03) | 1.88E+02 (1.64E+02) | 3.15E+01 (1.94E+01) |
| F8 | 2.39E+02 (7.05E+01) | 1.24E-26 (9.93E-27) | 3.46E+02 (3.04E+01) | 3.03E+01 (3.88E+00) | 3.41E+02 (2.53E+01) | 8.70E-12 (4.44E-11) | 2.70E-31 (9.43E-31) |
| F9 | 2.16E+02 (7.95E+01) | 7.06E-02 (1.91E-02) | 3.53E+02 (3.04E+01) | 2.86E+01 (4.47E+00) | 3.45E+02 (3.60E+01) | 1.75E-01 (5.41E-02) | 5.40E-02 (1.41E-02) |
| F10 | 2.97E+06 (8.81E+05) | 8.99E+01 (4.53E+01) | 4.76E+06 (5.80E+05) | 4.18E+05 (5.10E+04) | 4.74E+06 (4.95E+05) | 1.52E+02 (4.80E+01) | 9.13E+01 (3.25E+01) |
| F11 | 7.86E+02 (3.09E+01) | 1.14E+02 (2.44E+01) | 8.17E+02 (1.68E+01) | 5.72E+02 (1.96E+01) | 8.07E+02 (2.57E+01) | 1.05E+02 (6.56E+01) | 1.16E+02 (1.62E+01) |
| F12 | 7.49E+02 (2.83E+01) | 1.04E+02 (2.23E+01) | 7.54E+02 (2.62E+01) | 5.20E+02 (2.25E+01) | 7.55E+02 (3.01E+01) | 3.09E+02 (2.95E+02) | 1.18E+02 (2.70E+01) |
| F13 | 6.57E+02 (2.23E+02) | 6.16E-03 (7.24E-03) | 1.03E+03 (5.50E+01) | 2.69E+02 (1.64E+01) | 1.04E+03 (4.44E+01) | 1.72E-02 (3.56E-02) | 8.54E-02 (1.16E-01) |
| F14 | 1.92E+04 (3.24E+02) | 1.43E+04 (1.87E+03) | 1.91E+04 (5.35E+02) | 1.92E+04 (4.43E+02) | 1.92E+04 (4.70E+02) | 1.93E+04 (4.05E+02) | 1.00E+04 (1.07E+03) |
| F15 | 1.98E+01 (9.61E-01) | 7.41E+00 (2.29E+00) | 2.04E+01 (6.71E-02) | 1.69E+01 (2.91E-01) | 2.04E+01 (9.49E-02) | 4.52E+00 (1.44E+00) | 6.89E+00 (1.48E+00) |
| F16 | 4.63E+08 (1.41E+08) | 6.56E-01 (7.70E-01) | 7.97E+08 (8.99E+07) | 2.08E+07 (6.58E+06) | 7.88E+08 (1.03E+08) | 2.00E+04 (1.03E+05) | 3.76E-01 (6.87E-01) |
| F17 | 8.62E+08 (3.08E+08) | 7.58E+00 (1.25E+01) | 1.59E+09 (1.37E+08) | 8.28E+07 (1.47E+07) | 1.58E+09 (1.87E+08) | 5.30E+03 (2.59E+04) | 3.33E-02 (1.62E-01) |
| F18 | 1.08E+02 (8.18E+00) | 1.09E-02 (1.20E-02) | 1.11E+02 (4.63E+00) | 6.81E+01 (2.56E+00) | 1.13E+02 (4.00E+00) | 4.14E+01 (5.45E+01) | 3.14E-05 (1.69E-04) |
| F19 | 1.16E+03 (3.39E+02) | 3.66E-03 (1.97E-02) | 1.67E+03 (8.70E+01) | 4.58E+02 (2.73E+01) | 1.68E+03 (7.68E+01) | 4.64E-01 (1.22E+00) | 7.76E-01 (1.84E+00) |
| F20 | 8.90E+01 (5.61E+00) | 3.77E+01 (4.39E+00) | 9.61E+01 (1.37E+00) | 6.81E+01 (1.64E+00) | 9.56E+01 (1.64E+00) | 1.02E+01 (3.15E+00) | 2.81E+01 (3.89E+00) |
| F21 | 3.97E-01 (8.65E-03) | 2.25E-01 (2.60E-02) | 4.07E-01 (7.33E-03) | 2.57E-01 (1.87E-02) | 4.04E-01 (7.64E-03) | 1.71E-01 (2.17E-02) | 1.66E-01 (2.17E-02) |
| F22 | -3.65E+01 (1.79E+00) | -6.65E+01 (1.54E+00) | -3.57E+01 (1.35E+00) | -4.34E+01 (1.07E+00) | -3.60E+01 (1.66E+00) | -6.49E+01 (1.08E+01) | -6.72E+01 (1.55E+00) |
| F23 | -1.47E+01 (8.09E-01) | -4.73E+01 (1.75E+00) | -1.49E+01 (7.20E-01) | -1.80E+01 (7.08E-01) | -1.46E+01 (5.62E-01) | -1.48E+01 (7.80E-01) | -5.04E+01 (2.53E+00) |

Table 5.

The comparison of the search strategies on 60D benchmark functions. Values are given as Mean (Std).

| F No | CSA | GCSA | BCSA | CBCSA | RCSA | RMCSA | CSA-MSS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | 5+ | 2+ | 6+ | 4+ | 7+ | 3+ | 1 |
| F2 | 5+ | 2+ | 7+ | 4+ | 6+ | 3+ | 1 |
| F3 | 5+ | 2+ | 7+ | 4+ | 6+ | 3+ | 1 |
| F4 | 5+ | 2+ | 7+ | 3+ | 6+ | 4+ | 1 |
| F5 | 5+ | 2+ | 7+ | 3+ | 6+ | 4+ | 1 |
| F6 | 5+ | 1+ | 6+ | 4+ | 7+ | 3+ | 2 |
| F7 | 5+ | 1+ | 7+ | 4+ | 6+ | 3+ | 2 |
| F8 | 5+ | 2+ | 7+ | 4+ | 6+ | 3+ | 1 |
| F9 | 5+ | 2+ | 7+ | 4+ | 6+ | 3+ | 1 |
| F10 | 5+ | 1+ | 7+ | 4+ | 6+ | 3+ | 2 |
| F11 | 5+ | 2+ | 7+ | 4+ | 6+ | 1+ | 3 |
| F12 | 5+ | 1+ | 6+ | 4+ | 7+ | 3+ | 2 |
| F13 | 5+ | 1+ | 6+ | 4+ | 7+ | 2+ | 3 |
| F14 | 6+ | 2+ | 3+ | 4+ | 5+ | 7+ | 1 |
| F15 | 5+ | 3+ | 7+ | 4+ | 6+ | 1+ | 2 |
| F16 | 5+ | 2+ | 7+ | 4+ | 6+ | 3+ | 1 |
| F17 | 5+ | 2+ | 7+ | 4+ | 6+ | 3+ | 1 |
| F18 | 5+ | 2+ | 6+ | 4+ | 7+ | 3+ | 1 |
| F19 | 5+ | 1+ | 6+ | 4+ | 7+ | 2+ | 3 |
| F20 | 5+ | 3+ | 7+ | 4+ | 6+ | 1+ | 2 |
| F21 | 5+ | 3+ | 7+ | 4+ | 6+ | 2+ | 1 |
| F22 | 5+ | 2+ | 7+ | 4+ | 6+ | 3+ | 1 |
| F23 | 6+ | 2+ | 4+ | 3+ | 7+ | 5+ | 1 |
| Mean | 5.086957 | 1.869565 | 6.434783 | 3.869565 | 6.26087 | 2.956522 | 1.521739 |

Table 6.

The statistical details of the comparison of the search strategies on 60D benchmark functions. Cells give the rank of each method; + marks a statistically significant difference.

Table 2 shows the results of the CSA and CSA-MSS algorithms on the 30- and 60-dimensional benchmark functions. For the 30D problems, CSA remains slightly competitive with CSA-MSS. The columns headed Sign report the Wilcoxon signed-rank test result: a + means that the difference is statistically significant, and a blank means it is not; the results for four out of the 23 functions are not statistically significant. In eight out of the remaining 19 functions, CSA produces better solutions, while in 11 out of 19 CSA-MSS performs better. Looking at the characteristics of the functions on which CSA is best, most of them are multi-modal, as CSA performs well on these types of problems. On the other side, CSA-MSS improves on the success of CSA on uni-modal functions. For the 60D functions, CSA-MSS outperforms CSA in both the mean and the standard deviation of the best solutions, and its success is statistically significant for all benchmark functions.

The comparative experimental results of the different search strategies on the 30D benchmark functions are presented in Tables 3 and 4. CSA-MSS produces the minimum mean values for the F1, F2, F3, F4, F5, F8, F9, F10 and F14 functions, while CSA achieves slightly better results for the F7, F13, F15, F16, F20 and F21 functions. For the F6, F7, F12, F17 and F19 functions, GCSA produces better solutions than the others. RMCSA finds the minimum mean values for the F11, F18, F22 and F23 functions, while CBCSA and RCSA perform worse than the others, presumably due to their poorer exploitation capabilities. The results for both dimensions were combined to conduct a statistical confidence test of the success, which demonstrated that the success of CSA-MSS over the others is statistically significant, with the exception that the success of RMCSA on the F11 and F18 functions is not statistically different from CSA-MSS. It is noted that the mean rank values of CSA-MSS remain slightly better than those of the other methods.

The comparison of the experimental results of the different search strategies on the 60D benchmark functions is provided in Tables 5 and 6. CSA-MSS has the minimum mean values except for the F6, F7, F10, F11, F12, F13, F15, F19 and F20 functions. Nevertheless, for all functions considered in the test, CSA-MSS produces better solutions than CSA. It is also observed that GCSA produces better solutions than CSA-MSS for the F6, F7, F10, F12, F13 and F19 functions, and RMCSA produces better solutions than CSA-MSS for the F11, F15 and F20 functions. CSA's performance deteriorates as the problem dimension increases. The statistical significance of the success of CSA-MSS is confirmed except for the F21 and F22 functions. In the comparison of mean rank values, CSA-MSS outperforms the other methods.

Figure 1 depicts the convergence of the compared methods over the iterations. As seen, CSA-MSS has a higher convergence speed than the others for the F1, F2, F3 and F4 functions. For the 60D problems, Figure 2 shows the convergence history of all the methods.

Figure 1.

The convergence graphs of the compared methods on 30D benchmark functions. (a) F1 Function. (b) F2 Function. (c) F4 Function. (d) F8 Function.

Figure 2.

The convergence graphs of the compared methods on 60D benchmark functions. (a) F16 Function. (b) F21 Function. (c) F17 Function. (d) F3 Function.


5. Conclusions

In this article, a multi-strategy search with the crow search algorithm is proposed for solving global optimisation problems. In the proposed approach, adaptive pursuit is used to select search strategies according to the search dynamics. The search strategies are borrowed from other state-of-the-art metaheuristics and implemented within the crow search algorithm. Twenty-three benchmark functions with different characteristics have been used to compare the strategies' performance. The proposed approach is demonstrated to improve the performance of the original crow search algorithm, especially on the higher-dimensional problems. Future work would include dynamic and adaptive parameter settings, especially for the awareness probability and the flight length, and embedding machine learning approaches into the algorithm to develop high-performance adaptive strategy selection schemes and achieve transfer learning.


Acknowledgments

This study has been conducted under the 2219 research support scheme, namely the "Post-doctorate research programme", of TUBITAK of the Republic of Turkey.


Conflict of interest

The authors declare no conflict of interest.


Notes/thanks

The first author would like to express his thanks to TUBITAK for supporting his visit to UWE Bristol, UK, in the period between October 2021 and October 2022.

References

  1. Askarzadeh A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Computers & Structures. 2016;169:1-12
  2. Said GAE-NA, Mahmoud AM, El-Horbaty E-SM. A comparative study of meta-heuristic algorithms for solving quadratic assignment problem. arXiv preprint arXiv:1407.4863. 2014
  3. Helmi AM et al. Efficient and sustainable reconfiguration of distribution networks via metaheuristic optimization. In: IEEE Transactions on Automation Science and Engineering. IEEE; 2021
  4. Kalita K, Ghadai RK, Chakraborty S. A comparative study on the metaheuristic-based optimization of skew composite laminates. Engineering with Computers. 2021:1-18. (In press)
  5. Baatar N, Zhang D, Koh C-S. An improved differential evolution algorithm adopting λ-best mutation strategy for global optimization of electromagnetic devices. IEEE Transactions on Magnetics. 2013;49(5):2097-2100
  6. Eberhart R, Kennedy J. A new optimizer using particle swarm theory. In: MHS'95. Proceedings of the Sixth International Symposium on Micro Machine and Human Science. Nagoya, Japan: IEEE; 1995. pp. 39-43
  7. Wang F, Zhang H, Zhou A. A particle swarm optimization algorithm for mixed-variable optimization problems. Swarm and Evolutionary Computation. 2021;60:100808
  8. Durgut R, Aydin ME. Adaptive binary artificial bee colony algorithm. Applied Soft Computing. 2021;101:107054
  9. Karaboga D, Basturk B. On the performance of artificial bee colony (ABC) algorithm. Applied Soft Computing. 2008;8(1):687-697
  10. García J, Maureira C. A KNN quantum cuckoo search algorithm applied to the multidimensional knapsack problem. Applied Soft Computing. 2021;102:107077
  11. Yang X-S, Deb S. Cuckoo search via Lévy flights. In: 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC). IEEE; 2009. pp. 210-214
  12. Naik A, Satapathy SC. A comparative study of social group optimization with a few recent optimization algorithms. Complex & Intelligent Systems. 2021;7(1):249-295
  13. He S, Wu QH, Saunders JR. Group search optimizer: An optimization algorithm inspired by animal searching behavior. IEEE Transactions on Evolutionary Computation. 2009;13(5):973-990
  14. Yang X-S. Firefly algorithm, Lévy flights and global optimization. In: Research and Development in Intelligent Systems XXVI. Springer; 2010. pp. 209-218
  15. Yang X-S, Slowik A. Firefly algorithm. In: Swarm Intelligence Algorithms. CRC Press; 2020. pp. 163-174
  16. Morales-Castañeda B et al. A better balance in metaheuristic algorithms: Does it exist? Swarm and Evolutionary Computation. 2020;54:100671
  17. Hughes M, Goerigk M, Dokka T. Particle swarm metaheuristics for robust optimisation with implementation uncertainty. Computers & Operations Research. 2020;122:104998
  18. Majhi SK, Sahoo M, Pradhan R. A space transformational crow search algorithm for optimization problems. Evolutionary Intelligence. 2020;13(3):345-364
  19. Ozsari S et al. Adaptation of metaheuristic algorithms to improve training performance of an ESZSL model. Turkish Journal of Electrical Engineering & Computer Sciences. 2021;29:3
  20. Xiao S et al. Artificial bee colony algorithm based on adaptive neighborhood search and Gaussian perturbation. Applied Soft Computing. 2021;100:106955
  21. Hu P et al. Multi-strategy serial cuckoo search algorithm for global optimization. Knowledge-Based Systems. 2021;214:106729
  22. Zamani H, Nadimi-Shahraki MH, Gandomi AH. CCSA: Conscious neighborhood-based crow search algorithm for solving global optimization problems. Applied Soft Computing. 2019;85:105583
  23. Sonuc E. Binary crow search algorithm for the uncapacitated facility location problem. Neural Computing and Applications. 2021;33(21):1-17
  24. Majhi SK, Sahoo M, Pradhan R. Oppositional crow search algorithm with mutation operator for global optimization and application in designing FOPID controller. Evolving Systems. 2019;12(2):1-26
  25. Rizk-Allah RM, Hassanien AE, Bhattacharyya S. Chaotic crow search algorithm for fractional optimization problems. Applied Soft Computing. 2018;71:1161-1175
  26. dos Santos Coelho L et al. Electromagnetic optimization based on Gaussian crow search approach. In: 2018 International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM). IEEE; 2018. pp. 1107-1112
  27. Necira A et al. Dynamic crow search algorithm based on adaptive parameters for large-scale global optimization. Evolutionary Intelligence. 2021:1-17. (In press)
  28. Hussien AG et al. Crow search algorithm: Theory, recent advances, and applications. IEEE Access. 2020;8:173548-173565
  29. Meraihi Y et al. A comprehensive survey of crow search algorithm and its applications. Artificial Intelligence Review. 2020;54(4):1-48
  30. Prior H, Schwarz A, Güntürkün O. Mirror-induced behavior in the magpie (Pica pica): Evidence of self-recognition. PLoS Biology. 2008;6(8):e202
  31. Black H. Social skills to crow about. Scientific American Mind. 2013;24(4):12
  32. Abdelaziz AY, Fathy A. A novel approach based on crow search algorithm for optimal selection of conductor size in radial distribution networks. Engineering Science and Technology, an International Journal. 2017;20(2):391-402
  33. Kiran MS et al. Artificial bee colony algorithm with variable search strategy for continuous optimization. Information Sciences. 2015;300:140-157
  34. Peng H, Deng C, Zhijian W. Best neighbor-guided artificial bee colony algorithm for continuous optimization problems. Soft Computing. 2019;23(18):8723-8740
  35. Liang J, Xue Y. An adaptive GP-based memetic algorithm for symbolic regression. Applied Intelligence. 2020;50(11):3961-3975
  36. Aydin ME. Coordinating metaheuristic agents with swarm intelligence. Journal of Intelligent Manufacturing. 2012;23(4):991-999
  37. Abualigah L et al. A parallel hybrid krill herd algorithm for feature selection. International Journal of Machine Learning and Cybernetics. 2021;12(3):783-806
  38. Durgut R, Yavuz İ, Aydin M. Küme Birleşimli Sırt Çantası Probleminin Adaptif Yapay Arı Kolonisi Algoritması ile Çözümü [Solution of the Set Union Knapsack Problem with an Adaptive Artificial Bee Colony Algorithm]. Journal of Intelligent Systems: Theory and Applications. 2021:43-54. DOI: 10.38016/jista.854584
  39. Surjanovic S, Bingham D. Virtual Library of Simulation Experiments: Test Functions and Datasets. Burnaby, BC, Canada: Simon Fraser University; 2013 [Accessed: 13 May 2015]
  40. Zhu G, Kwong S. Gbest-guided artificial bee colony algorithm for numerical function optimization. Applied Mathematics and Computation. 2010;217(7):3166-3173
  41. Trelea IC. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Information Processing Letters. 2003;85(6):317-325
  42. Babaoglu I. Artificial bee colony algorithm with distribution-based update rule. Applied Soft Computing. 2015;34:851-861
