Abstract
The workflow application is a common type of grid application. The objective of workflow scheduling is to complete all tasks within the shortest time, i.e., with minimal makespan. A job scheduler with an efficient scheduling algorithm is required to solve workflow scheduling based on grid information. Scheduling problems are NP-complete and have been solved effectively by metaheuristic algorithms. To attain effective solutions to workflow applications, an algorithm named the stochastic greedy PSO (SGPSO) is proposed, in which a new velocity update rule based on stochastic greedy is suggested. Restated, stochastic greedy search guidance is provided to the particles. Meanwhile, a stochastic greedy probability (SGP) parameter is designed to control whether the search behavior of the particles is exploitation or exploration, improving search efficiency. The advantages of the proposed scheme are retaining exploration capability during the search, reducing complexity and computation time, and ease of implementation. Retaining exploration capability during the search prevents particles from getting trapped in local optimums. Additionally, the diversity of the proposed SGPSO is defined, verified, and analyzed. The experimental results demonstrate that the proposed SGPSO can effectively solve workflow-class problems encountered in the grid environment.
Keywords
- scheduling
- optimization
- stochastic greedy
- workflow
- particle swarm optimization
1. Introduction
Grid computing is applied mainly to utilize heterogeneous computational resources to execute various applications. The computing ability of a grid is comparable to that of a supercomputer; the grid computing environment consists of heterogeneous computing devices throughout the world connected by low-latency, high-bandwidth networks [1]. The main application of grids is the sharing of computing resources. Scholars have already used grids in real cases requiring vast computation and immense storage space [1, 2, 3]. The basic scenario of a grid computing application is shown in Figure 1. The job scheduler uses grid information supplied by the information server, which collects it from grid sensors. When a workflow application comprising numerous tasks with partial-order constraints is uploaded to the grid, the job scheduler of the grid platform allocates the tasks to the computing resources on the grid. If the tasks and the resources are well scheduled, the time needed to complete all the tasks of the workflow application can be reduced; otherwise, the time will be extended. Restated, the makespan of a workflow application on the grid is highly impacted by the quality of the task-resource arrangement. Many workflow application scheduling algorithms have been presented to make the resource manager more efficient when matching tasks and resources so that grid performance can be upgraded effectively.
Many studies have developed scheduling optimization methods intended to reduce the makespan of jobs (all tasks) on the grid [4, 5, 6, 7, 8, 9]. When precedence restrictions exist between tasks (i.e., dependent tasks), the applied algorithm must satisfy these sequential relationships when scheduling optimization is conducted. Besides solving the task-resource matching problem, the execution sequence of independent tasks allocated to the same resource also has a rather significant effect on reducing the makespans of jobs in workflow scheduling. In other words, workflow scheduling has to solve two subproblems simultaneously: task-resource matching and the execution priorities of independent tasks. Most scheduling problems are NP-complete, and many heuristic and metaheuristic algorithms have been proposed to solve NP problems, such as ant colony optimization (ACO) [3], the genetic algorithm (GA) [6], simulated annealing (SA) [5], and particle swarm optimization (PSO) [10]. Among them, PSO carries the advantages of easy implementation, fewer required parameters, and faster convergence speed; therefore, PSO is often used to solve scheduling problems in fields beyond grid or cloud computing, such as course timetabling problems [11], flowshop problems [12], and vehicle routing problems (VRP) [13]. PSO has also been applied to grid scheduling problems; Tao et al. [14] and Chen and Wang [15] have both adopted PSO to solve grid scheduling problems effectively.
Two subproblems have to be dealt with in workflow scheduling: the task-resource mapping and the execution priorities of tasks without precedence constraints on the same computing resource. For these two subproblems, two PSO algorithms were designed to find the corresponding solutions: a discrete-type PSO to solve the task-resource mapping subproblem and a constriction-type PSO to solve the task priority subproblem. In PSO, the position of each particle represents a solution to the problem to be solved, and each particle moves with reference to the global experience and its individual experience, resulting in a new solution. In this study, a new velocity update rule, developed from the state transition rule used in ant colony optimization (ACO) [16], is proposed to replace the velocity update rule of the constriction-type PSO, in which movement is based on both experiences. This new PSO is named the stochastic greedy PSO (SGPSO) herein. In ACO, an ant preferentially moves by referencing the trail with the highest pheromone level; otherwise, it references the other trails, determined using the roulette wheel rule. In this study, the global experience of the particle swarm is regarded as the ants' path with the highest pheromone level. Thus, a new velocity update rule was introduced to allow the particles, with some probability, to explore on their own in the solution search process and so prevent them from getting trapped in local optimums. Restated, a particle moves in accordance with either the global experience or its individual experience in this work. Moreover, problem cases of both small and large scales were designed and tested to verify the performance of the proposed SGPSO. In the end, the workflow scheduling problems in [15] were tested and comparatively analyzed. Furthermore, the diversity of the proposed SGPSO is also defined and verified.
The remaining parts of this paper are arranged as follows: Section 2 presents the workflow scheduling problems, Section 3 describes the new velocity update rule applied in the particle swarm optimization algorithm and the concept behind its adoption from ACO, in Section 4 the experimental results and comparative analysis are provided, and the conclusions are presented in Section 5.
2. Description of workflow scheduling problems
In the grid environment, distributed computing is conducted with resources that are scattered among different places around the world and connected through networks. The composition of the grid environment is shown in Figure 2(a). The scattered computational resources are linked through the Internet, and each resource has its own computing ability and external bandwidth.
The goals of workflow scheduling optimization are to appropriately match tasks to resources and to suitably assign execution priorities to tasks without precedence restrictions on the same computing resource, thereby reducing the makespan of the application execution. The cost of an execution sequence path includes the processing time on the resources along the path and the data transmission time. In Figure 3, for example, there are four execution sequence paths: (p1) task 0→task 1→task 4→task 7→task 8, (p2) task 0→task 1→task 5→task 7→task 8, (p3) task 0→task 2→task 6→task 7→task 8, and (p4) task 0→task 3→task 6→task 7→task 8. All the tasks in the four execution sequences must be executed to complete the job on the grid. The makespan is subject to the time of the longest execution sequence path; that is, the makespan equals the maximum of the completion times of these paths.
A data transmission time is incurred only when two dependent tasks are allocated to different resources. If task i and task j are, respectively, allocated to be executed on different resources, the transmission time between them depends on the volume of transferred data and on the external bandwidths of the two resources.
Therefore, the makespan is defined as the fitness function denoting the quality of a workflow schedule: the smaller the fitness (makespan), the better the schedule.
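As a concrete illustration, the fitness (makespan) evaluation can be sketched as below. The function and variable names are illustrative, the assumption that a transfer is limited by the slower of the two external bandwidths is ours, and resource contention between independent tasks on the same resource is omitted for brevity (only the DAG precedence and transfer delays are modeled).

```python
from collections import defaultdict

def makespan(succ, comm, workload, assign, mips, bw):
    """succ: task -> list of successor tasks; comm[(i, j)]: data volume i->j;
    workload[i]: instruction count of task i; assign[i]: resource of task i;
    mips[r]: processing ability of resource r; bw[r]: external bandwidth of r."""
    indeg = defaultdict(int)            # in-degree for topological processing
    for i in succ:
        for j in succ[i]:
            indeg[j] += 1
    finish = {}                          # ready time, then finish time, per task
    ready = [t for t in succ if indeg[t] == 0]
    while ready:
        t = ready.pop()
        r = assign[t]
        start = finish.get(t, 0.0)       # all predecessors done, data arrived
        finish[t] = start + workload[t] / mips[r]
        for j in succ[t]:
            # transfer time only when tasks run on different resources
            tt = 0.0 if assign[j] == r else comm[(t, j)] / min(bw[r], bw[assign[j]])
            finish[j] = max(finish.get(j, 0.0), finish[t] + tt)
            indeg[j] -= 1
            if indeg[j] == 0:
                ready.append(j)
    return max(finish.values())          # time of the longest path = makespan
```

Evaluating this function for a candidate schedule gives the fitness value to be minimized.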
3. The proposed method
Many nature-inspired metaheuristic algorithms, which imitate the behaviors of biological creatures, have been proposed to find optimal solutions to workflow scheduling problems. Some extensively applied ones include ACO, GA, bee colony optimization (BCO), and the PSO adopted in this study. Among them, PSO requires fewer parameters and is easier to implement; therefore, it has been well applied to solve diverse optimization problems.
As shown in Figure 3, if task 2 and task 6 are allocated to be computed by different resources, there will be a transmission time, and the makespan will be extended. On the contrary, if they are arranged to be executed by the same resource, there will be no transmission time, and the makespan will be shortened. Furthermore, if task 1 and task 2 are arranged to be executed by the same resource, executing task 1 before task 2 or vice versa also affects the makespan of the workflow application on the grid. Hence, minimizing the makespan in workflow scheduling involves two issues: task-resource matching and task execution priority determination. Figure 4 illustrates an example with two heterogeneous resources.
To deal with these issues, the constriction PSO is used to solve the task execution priority issue and the discrete PSO to cope with the task-resource matching issue.
3.1. Used PSOs
PSO was first introduced by Kennedy and Eberhart in 1995 [17]. After observing bird flocking, fish foraging, and other social behaviors of animals, they discovered that each member moves to its next position according to information (experience) shared among the members of the swarm. Restated, when particles move, they refer to individual experience (the particle's own best position found so far, pbest) and swarm experience (the global best position found so far, gbest).
Each particle of a swarm has its own velocity and position; at every iteration, the velocity is updated with reference to both experiences, and the position is then moved accordingly.
In this work, a constriction PSO is applied to solve the task execution priority problem. Similar in role to the inertia weight, a constriction factor χ is applied in the velocity update rule to control the convergence behavior of the swarm; the commonly suggested value is χ = 0.72984 [18].
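A minimal sketch of the constriction-type velocity and position update, assuming the common form with χ = 0.72984 and c1 = c2 = 2 as used later in the experiments:

```python
import random

CHI, C1, C2 = 0.72984, 2.0, 2.0  # constriction factor and acceleration constants

def constriction_update(x, v, pbest, gbest):
    """One constriction-PSO update for a single particle, dimension by dimension."""
    new_v, new_x = [], []
    for d in range(len(x)):
        vd = CHI * (v[d]
                    + C1 * random.random() * (pbest[d] - x[d])   # individual pull
                    + C2 * random.random() * (gbest[d] - x[d]))  # global pull
        new_v.append(vd)
        new_x.append(x[d] + vd)  # move the particle by its new velocity
    return new_x, new_v
```

When a particle already sits at both pbest and gbest, the update reduces to scaling the velocity by χ, which is what drives convergence.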
The discrete PSO (DPSO) was proposed by Kennedy and Eberhart in 1997 [19]. Unlike in conventional PSO, the position components of particles in the discrete PSO are binary. In other words, each position component takes a value of 0 or 1.
The sigmoid function is applied to convert the velocity value into a probability between 0 and 1. However, to prevent this probability from saturating at 0 or 1, the velocity is clamped to a limited range [−Vmax, Vmax].
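A sketch of the sigmoid-based position update used in binary PSO; Vmax = 4 is an illustrative clamp value (not taken from the chapter):

```python
import math
import random

V_MAX = 4.0  # clamp keeps sigmoid(v) in roughly [0.018, 0.982], never exactly 0 or 1

def binary_position(v):
    """Map one (clamped) velocity component to a binary position component."""
    v = max(-V_MAX, min(V_MAX, v))          # limit velocity to [-Vmax, Vmax]
    prob = 1.0 / (1.0 + math.exp(-v))       # sigmoid: probability that the bit is 1
    return 1 if random.random() < prob else 0
```

A large positive velocity makes the bit 1 with high (but not certain) probability, so the component can still flip and the search retains some exploration.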
3.2. Stochastic greedy in ant colony optimization (ACO)
Stochastic greedy is greedy by chance; it has been well applied in constructing the paths of ants in ant colony optimization. Ant colony optimization, initially introduced by Dorigo et al. in 1996 [20], imitates the foraging behavior of ants and was first applied to solve the traveling salesman problem [16, 21]. In ACO, ants deposit pheromone on their foraging paths; the deposited pheromone is the stigmergy through which ants communicate. The amount of pheromone on a particular foraging path increases with the number of ants traveling along it. An ant's foraging path corresponds to a feasible solution to the studied scheduling problem; an ant establishes a path by using a transition rule to choose the nodes to visit, i.e., each movement is determined by the stochastic greedy rule displayed in Eq. (7):
$$
j =
\begin{cases}
\arg\max_{u \in J_k(i)} \left\{ \tau(i,u)\cdot[\eta(i,u)]^{\beta} \right\}, & \text{if } q \le q_0 \quad \text{(exploitation)} \\
S, & \text{otherwise} \quad \text{(biased exploration)}
\end{cases}
\tag{7}
$$

where τ(i, u) is the pheromone deposited on edge (i, u); η(i, u) is the heuristic value of moving from node i to node u; β determines the relative importance of the heuristic; J_k(i) is the set of nodes ant k may still visit from node i; q is a random number uniformly distributed in [0, 1]; q_0 (0 ≤ q_0 ≤ 1) is a parameter balancing exploitation against exploration; and S is a node selected by the roulette wheel rule with selection probability proportional to τ(i, u)·[η(i, u)]^β.
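The stochastic greedy transition rule can be sketched as follows; the parameter values q0 = 0.9 and β = 2 are illustrative defaults, not values from the chapter:

```python
import random

def next_node(current, unvisited, tau, eta, q0=0.9, beta=2.0):
    """ACS-style stochastic greedy choice: with probability q0 pick the arc
    maximizing tau * eta**beta (greedy); otherwise sample an arc by roulette
    wheel on the same weights (biased exploration)."""
    weights = {j: tau[(current, j)] * eta[(current, j)] ** beta for j in unvisited}
    if random.random() <= q0:
        return max(weights, key=weights.get)   # exploitation: best-looking arc
    total = sum(weights.values())              # roulette wheel over the weights
    r, acc = random.random() * total, 0.0
    for j, w in weights.items():
        acc += w
        if acc >= r:
            return j
    return j  # numerical safety: fall back to the last candidate
```

With q0 close to 1 the ant is almost always greedy; lowering q0 shifts the balance toward roulette-wheel exploration.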
3.3. Stochastic greedy PSO (SGPSO)
This section will introduce the proposed PSO using a new velocity update rule in this study and the design philosophy behind it. In PSO, the particles always move and search for optimums in the solution space in accordance with the best individual experience and the global best experience. In this work, the global best experience in PSO is similar to the path with the highest pheromone level in ACO. Restated, gbest plays the role of the highest-pheromone trail: with probability SGP a particle moves with reference to gbest; otherwise, it moves with reference to its own pbest, as formulated in Eq. (8),
where SGP ∈ [0, 1] is the designed stochastic greedy probability: when a uniform random number q ≤ SGP, the particle's velocity is updated with reference to the global experience gbest (exploitation); otherwise, it is updated with reference to its individual experience pbest (exploration). A large SGP thus biases the swarm toward exploitation, while a small SGP biases it toward exploration.
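Following the verbal description above (a sketch of the rule's behavior, not a reproduction of the exact Eq. (8)), the SGPSO update might look like:

```python
import random

def sgpso_update(x, v, pbest, gbest, sgp=0.1, chi=0.72984, c=2.0):
    """Stochastic greedy velocity update: with probability sgp follow the
    global best (exploitation), otherwise the particle's own best
    (exploration). Only one guide is used per update, simplifying the rule."""
    guide = gbest if random.random() <= sgp else pbest
    new_v = [chi * (v[d] + c * random.random() * (guide[d] - x[d]))
             for d in range(len(x))]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```

Compared with the standard rule, each update evaluates a single attraction term, which is what reduces the per-iteration computation mentioned in the abstract.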
3.4. The diversity of SGPSO
The diversity of the swarm while moving in the solution space impacts the solution quality. High diversity of the particles in the initial stage is desired so that as much of the solution space as possible is explored to find good seeds for the search. Conversely, in the latter stage, the particles ought to conduct a fine search for better solutions, i.e., low diversity of the population should be provided. To analyze the diversity of the SGPSO, the diversity of the swarm is measured at every iteration as defined in Eq. (9), computed from the positions of all particles in the swarm at that iteration.
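One common diversity measure, used here only as an illustrative stand-in for the chapter's Eq. (9), is the mean Euclidean distance of the particles to the swarm centroid:

```python
import math

def diversity(swarm):
    """Mean Euclidean distance of each particle to the swarm centroid.
    swarm: list of particle positions, each a list of coordinates."""
    n, dim = len(swarm), len(swarm[0])
    centroid = [sum(p[d] for p in swarm) / n for d in range(dim)]
    return sum(math.sqrt(sum((p[d] - centroid[d]) ** 2 for d in range(dim)))
               for p in swarm) / n
```

Tracking this value per iteration yields diversity-evolution curves of the kind shown in Figure 9: high values mean the swarm is spread out (exploring), values near zero mean it has collapsed onto one region (exploiting).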
4. Experimental results and discussion
Since no specific library provides grid task scheduling problems for workflow applications, workflow scheduling problems involving larger numbers of tasks were also designed in this study to test whether the proposed method performs well on large-scale problems. Intrinsically, the workflow scheduling problem can be regarded as a derivative of the multimode resource-constrained project scheduling problem (MRCPSP). Therefore, the task precedence constraints of the designed workflow scheduling problems are generated based on the problem cases (J10, J12, J14, J16, J18, J20, and J30) of the MRCPSP in the PSPLIB library; 50 different instances were generated for each problem case. The workload of an activity is a randomly produced number with 1000 as the base unit. The data transmitted between predecessors and successors is created using random numbers with 10 as the base unit. The processing ability and the external bandwidth of the computing resources in the grid were generated as in the problem designed by [15], as listed in Table 1. Table 2 illustrates a workflow application example of a J14 instance, including workload, number of successors, precedence (successors), and communication cost. In Table 2, tasks 0 and 15 are pseudo tasks representing the start and the end. The corresponding DAG of the workflow application instance is displayed in Figure 6. The settings of the parameters in the constriction and discrete PSOs are χ = 0.72984, ω = 1, and c1 = c2 = 2.
The performance index Avg.Dev denotes the average percentage deviation of the makespans obtained over all runs from the best makespan found for each instance; the smaller the Avg.Dev, the better the scheduling performance.
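Under the assumed definition above (our reading of Avg.Dev, since the original equation is not reproduced), the index for one instance could be computed as:

```python
def avg_dev(makespans):
    """Average percentage deviation of each run's makespan from the best
    makespan found for the instance over all runs."""
    best = min(makespans)
    return sum((m - best) / best * 100.0 for m in makespans) / len(makespans)
```

Averaging this value over the 50 instances of a problem case gives one cell of Table 3.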
Resources | MIPS | Bandwidth |
---|---|---|
R1 | 450 | 8 |
R2 | 1000 | 2 |
R3 | 650 | 10 |
R4 | 1500 | 8 |
R5 | 800 | 10 |
R6 | 4000 | 2 |
R7 | 2000 | 15 |
R8 | 1250 | 6 |
R9 | 250 | 20 |
R10 | 750 | 5 |
Task No. | Workload | No. of successors | Successors | Communication cost |
---|---|---|---|---|
0 | 0 | 3 | 1 2 3 | 0 0 0 |
1 | 35,000 | 2 | 4 5 | 14 12 |
2 | 7000 | 2 | 5 9 | 5 3 |
3 | 5000 | 2 | 9 13 | 7 2 |
4 | 28,000 | 3 | 6 7 11 | 11 17 6 |
5 | 20,000 | 3 | 8 11 14 | 13 11 13 |
6 | 18,000 | 3 | 9 10 14 | 11 16 13 |
7 | 4000 | 2 | 8 14 | 7 13 |
8 | 27,000 | 2 | 10 12 | 3 3 |
9 | 4000 | 1 | 12 | 17 |
10 | 19,000 | 1 | 13 | 3 |
11 | 32,000 | 2 | 12 13 | 4 4 |
12 | 31,000 | 1 | 15 | 13 |
13 | 16,000 | 1 | 15 | 11 |
14 | 22,000 | 1 | 15 | 3 |
15 | 0 | 0 | — | — |
The performance evaluation on J10 to J30 with different SGP values is displayed in Table 3. When the SGP is 0 or 1, the averages of all problem cases' Avg.Dev are 12.43 and 12.05%, respectively; both are higher than the Avg.Dev results obtained with the other SGP values. This is because only global experience is referred to when SGP = 1 (exploitation only): convergence while searching the vast solution space is premature, so the search easily gets trapped in local optimums and cannot reach the global optimum. When SGP = 0 (exploration only), only individual experience is referred to, and convergence is slow while searching a vast solution space, i.e., obtaining the optimal solution demands many more iterations.
SGP | J10 | J12 | J14 | J16 | J18 | J20 | J30 | Avg. |
---|---|---|---|---|---|---|---|---|
0 | 3.86 | 6.61 | 7.87 | 15.55 | 13.11 | 16.28 | 23.70 | 12.43 |
0.1 | 3.88 | 6.58 | 7.32 | 15.43 | 12.14 | 14.31 | 17.27 | 10.99 |
0.2 | 4.44 | 6.87 | 7.67 | 15.95 | 12.39 | 14.03 | 15.93 | 11.04 |
0.3 | 4.18 | 6.74 | 8.11 | 15.93 | 12.46 | 14.62 | 15.13 | 11.02 |
0.4 | 4.27 | 7.27 | 8.63 | 16.49 | 12.54 | 15.25 | 14.92 | 11.34 |
0.5 | 4.55 | 7.26 | 8.36 | 16.52 | 12.49 | 14.73 | 15.22 | 11.30 |
0.6 | 4.69 | 7.82 | 8.68 | 16.74 | 12.72 | 14.88 | 15.62 | 11.59 |
0.7 | 4.60 | 7.39 | 8.55 | 17.35 | 13.21 | 14.92 | 15.41 | 11.63 |
0.8 | 4.93 | 7.50 | 8.68 | 17.40 | 13.13 | 15.40 | 14.61 | 11.66 |
0.9 | 4.97 | 7.77 | 8.94 | 16.78 | 13.38 | 15.63 | 15.53 | 11.86 |
1 | 4.79 | 7.66 | 9.06 | 17.36 | 13.63 | 16.13 | 15.72 | 12.05 |
Meanwhile, the SGP corresponding to the minimum Avg.Dev obtained for each problem case is shown in Table 4. The SGP of the minimum Avg.Dev for J10 is zero because the best solution in a small solution space can be found by exploration search alone. The SGPs of the minimum Avg.Dev for J12 to J18 and for J20 are 0.1 and 0.2, respectively; that is, more exploration search is adequate for small- and medium-scale problems. Since J30 is much more complex than J20, the solution space is vast; thus, the SGP of the minimum Avg.Dev for J30 rises to 0.8, meaning that stronger exploitation guidance is required to converge within the given iterations.
 | J10 | J12 | J14 | J16 | J18 | J20 | J30 |
---|---|---|---|---|---|---|---|
SGP | 0.0 | 0.1 | 0.1 | 0.1 | 0.1 | 0.2 | 0.8 |
Avg.Dev | 3.86 | 6.58 | 7.32 | 15.43 | 12.14 | 14.03 | 14.61 |
Moreover, the comparison between the conventional constriction PSO and the proposed SGPSO (with SGP = 0.1) is listed in Table 5; the SGPSO yields a lower average Avg.Dev than the conventional PSO (10.99% versus 12.20%).
 | J10 | J12 | J14 | J16 | J18 | J20 | J30 | Avg. |
---|---|---|---|---|---|---|---|---|
SGPSO | 3.88 | 6.58 | 7.32 | 15.43 | 12.14 | 14.31 | 17.27 | 10.99 |
PSO | 4.52 | 7.42 | 8.72 | 20.55 | 13.12 | 15.12 | 15.92 | 12.20 |
A resulting schedule of the corresponding DAG of the workflow application scheduling instance with 14 tasks (Figure 6) is displayed in Figure 7. The fitness evolution of the J14 instance with different SGPs is displayed in Figure 8.
Additionally, to further examine the search behavior of the proposed scheme, the diversity evolution of the proposed SGPSO is checked. Figure 9 displays the diversity evolution of a J14 instance with SGP = 0, SGP = 0.1, SGP = 0.8, and SGP = 1; the diversity checked in this study is defined as in Eq. (9).
In Figure 9, when SGP = 0 (no global experience referenced), the diversity remains high until the end of the run; the particles keep exploring until the end and hence suffer slow convergence. Conversely, for SGP = 1 the diversity quickly drops to zero: the particles conduct exploitation search only, leading to premature convergence and trapping in local optima. When SGP = 0.8, the behavior is similar to that of SGP = 1 but retains some exploration ability; hence, SGP = 0.8 shows higher diversity than SGP = 1. With SGP = 0.1, the diversity is high in the early stage and gradually decreases from the middle stage to the end. Global search in the early stage thus gradually shrinks to a local search area in the later stage, providing an ideal exploration-to-exploitation search process for finding the optimal schedule.
Finally, this work tested the workflow application problems examined in [15] to verify the proposed method. The comparison is made mainly with the largest-scale case in [15] which involves 15 tasks (represented here as J15).
The settings of the parameters in the constriction and discrete PSOs are χ = 0.72984, ω = 1, and c1 = c2 = 2. Different SGP values, SGP = {0, 0.1, 0.2, …, 0.9, 1}, were tested. The minimum average fitness of the simulation results at iteration = 300 is shown in Figure 10. Figure 10 shows that the minimal fitness (44.5) is obtained when SGP = 0.1 and SGP = 0.8; however, the minimum average fitness (45.88) is obtained when SGP = 0.1. These results are consistent with the experiments above. The comparison between this work and [15] is listed in Table 6.
According to Table 6, with a small SGP value, the task scheduling outcomes for J15 become better as the number of iterations increases (100 iterations → 300 iterations). This is because small SGP values favor the use of individual experience; in other words, the particles conduct exploration search in the solution space. Consequently, obtaining the optimums in a vast solution space consumes much time, and convergence is delayed.
5. Conclusions
The workflow application is the most common type of application on the grid, and the quality of workflow scheduling heavily affects the performance of workflow execution. In this work, two PSOs were used to solve the task-resource matching and task execution priority subproblems of workflow scheduling. A new and simplified velocity update rule, extended from the ACO state transition rule, was designed in the constriction PSO for solving the task execution priority subproblem. Restated, the search control is based on the suggested SGP parameter inspired by ACO's transition rule. The resulting constriction-PSO-based algorithm, named the stochastic greedy PSO (SGPSO), provides both exploration and exploitation abilities during the search. The main purpose is to strengthen the exploration capacity of the PSO in the solution search process while providing certain exploitation capability, thereby avoiding getting trapped in local optimums.
According to the experimental results in Table 3, a high SGP provides strong global experience guidance and causes premature convergence, so the search easily gets trapped in local optimums; with only exploitation applied (SGP = 1.0), an Avg.Dev of 12.05% is yielded. When SGP = 0, the algorithm conducts self-search only, i.e., only exploration is enabled, which causes slow convergence and an Avg.Dev of 12.43%. Better solutions are found when enough exploration and certain exploitation capabilities are both provided; with SGP = 0.1, the lowest Avg.Dev of 10.99% is obtained.
Using the SGP to control whether the search behavior is exploitation or exploration simplifies the algorithm and also reduces the execution time.
Meanwhile, as indicated in Figure 9, high diversity in the early stage and low diversity in the later stage are preferred when searching the solution space. Therefore, the proposed SGPSO with a low SGP value is suggested for solving workflow-class scheduling problems in the grid.
Unlike [15], this work does not use heuristics to find initial solutions. Therefore, there is no need to consider which heuristic should be designed to increase performance, and the algorithm is thus easier to implement. In [15], the best result was obtained with the constriction factor χ = 0.5, found after thorough testing. In this work, however, the best result is yielded with the commonly suggested value χ = 0.72984; hence, the laborious work of finding the best constriction factor value is eliminated.
The experimental results show that the proposed method can effectively solve grid task scheduling problems and boost grid performance. In reality, there are many problems similar to grid task scheduling problems, such as the multimode resource-constrained project scheduling problems (MRCPSPs). In the future, the method proposed in this study will be applied to find solutions to MRCPSP-type problems.
Acknowledgments
This work was partly funded and supported by the Ministry of Science and Technology, Taiwan, under contract MOST 104-2221-E-167-011.
References
- 1.
Beynon M, Sussman A, Catalyurek U, Kurc T, Saltz J. Performance optimization for data intensive grid applications. In: Proceedings of the Third Annual International Workshop on Active Middleware Services, San Francisco, CA, USA, Aug 2001, AMS’01. Los Alamitos, USA: IEEE Computer Society Press - 2.
Goux JP, Kulkarni S, Linderoth J, Yoder M. An enabling framework for master-worker applications on the computational grid. In: Proceedings of The Ninth International Symposium on High-Performance Distributed Computing, Pittsburgh, USA, Aug 1–4, 2000. Los Alamitos, USA: IEEE Computer Society Press - 3.
Xhafa F, Paniagua C, Barolli L, Caballé S. A parallel grid-based implementation for real time processing of event log data in collaborative applications. International Journal of Web and Grid Services. 2010; 6 (2):124-140 - 4.
Chang RS, Lin CY, Lin CF. An adaptive scoring job scheduling algorithm for grid computing. Information Sciences. 2012; 207 :79-89 - 5.
Fidanova, S. Simulated annealing for grid scheduling problem. In: Proceedings of the IEEE John Vincent Atanasoff 2006 International Symposium on Modern Computing (JVA’06), Sofia, Bulgaria, Oct 3–6, 2006. Los Alamitos, USA: IEEE Computer Society Press - 6.
Gao Y, Rong H, Huang JZ. Adaptive grid job scheduling with genetic algorithms. Future Generation Computer Systems. 2005; 21 :151-161 - 7.
Liu H, Abraham A, Hassanien AE. Scheduling jobs on computational grids using a fuzzy particle swarm optimization algorithm. Future Generation Computer Systems. 2010; 26 (8):1336-1343 - 8.
Lorpunmanee S, Sap MN, Abdullah AH, Chai C. An Ant Colony Optimization for dynamic job scheduling in grid environment. Engineering and Technology. 2007; 1 (5):314-321 - 9.
Salimi R, Motameni H, Omranpour H. Task scheduling using NSGA II with fuzzy adaptive operators for computational grids. Journal of Parallel and Distributed Computing. 2014; 74 (5):2333-2350 - 10.
Chen RM, Shen YM, Wang CT. Ant Colony Optimization inspired swarm optimization for grid task scheduling. In: Proceedings of the 2016 International Symposium on Computer, Consumer and Control (IS3C 2016), Xi'an, China, July 4–6, 2016. Los Alamitos, USA: IEEE Computer Society Press - 11.
Chen RM, Shih HF. Solving university course timetabling problems using constriction particle swarm optimization with local search. Algorithms. 2013; 6 :227-244 - 12.
Li D, Deng N. Solving permutation flow shop scheduling problem with a cooperative multi-swarm PSO algorithm. Journal of Information & Computational Science. 2012; 9 (4):977-987 - 13.
Chen RM, Shen YM. Novel encoding and routing balance insertion based particle swarm optimization with application to optimal CVRP depot location determination. Mathematical Problems in Engineering. 2015; 2015 :11 - 14.
Tao Q, Chang HY, Yi Y, Gu CQ, Li WJ. A rotary chaotic PSO algorithm for trustworthy scheduling of a grid work flow. Computers & Operations Research. 2010; 38 (5):824-836 - 15.
Chen RM, Wang CM. Project scheduling heuristics-based standard PSO for task-resource assignment in heterogeneous grid. Abstract and Applied Analysis. 2011; 2011 :20 - 16.
Dorigo M, Gambardella LM. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation. 1997; 1 (1):53-66 - 17.
Kennedy J, Eberhart RC. Particle swarm optimization. In: Proceedings of the 4th IEEE International Conference on Neural Networks, Perth, Australia, Nov 27–Dec 1, 1995. Piscataway, NJ: IEEE Service Center; 1995 - 18.
Bratton D, Kennedy J. Defining a standard for particle swarm optimization. In: Proceedings of the IEEE Swarm Intelligence Symposium (SIS’07), Honolulu, USA, Apr 1–5, 2007. Los Alamitos, USA: IEEE Computer Society Press - 19.
Kennedy J, Eberhart RC. A discrete binary version of the particle swarm algorithm. In: Proceedings of The IEEE International Conference on Systems, Man, and Cybernetics, Orlando, USA, Oct 12–15, 1997. Vol. 5. Los Alamitos, USA: IEEE Computer Society Press - 20.
Dorigo M, Maniezzo V, Colorni A. The ant system: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics. 1996; 26 (1):1-13 - 21.
Hlaing ZCSU, Khine MA. Solving traveling salesman problem by using improved Ant Colony Optimization algorithm. International Journal of Information and Education Technology. 2011; 1 (5):404-409 - 22.
Chen D, Wang J, Zou F, Hou W, Zhao C. An improved group search optimizer with operation of quantum-behaved swarm and its application. Applied Soft Computing. 2012; 12 (2):712-725