A Stochastically Perturbed Particle Swarm Optimization for Identical Parallel Machine Scheduling Problems

Bio-inspired computational algorithms remain active research topics in the artificial intelligence community. Biology is a rich source of inspiration for the design of intelligent artifacts that are capable of efficient and autonomous operation in unknown and changing environments. It is difficult to resist the fascination of creating artifacts that display elements of lifelike intelligence, and such artifacts need techniques for control, optimization, prediction, security, design, and so on. Bio-Inspired Computational Algorithms and Their Applications is a compendium that addresses this need. It integrates contrasting techniques, including genetic algorithms, artificial immune systems, particle swarm optimization, and hybrid models, to solve many real-world problems. The works presented in the book give insights into innovative improvements in algorithm performance, potential applications to various practical tasks, and combinations of different techniques. The book serves as a reference for researchers, practitioners, and students in both the artificial intelligence and engineering communities, forming a foundation for the development of the field.


Problem description
The problem of identical parallel machine scheduling is about creating schedules for a set J = {J_1, J_2, J_3, ..., J_n} of n independent jobs to be processed on a set M = {M_1, M_2, M_3, ..., M_m} of m identical machines. Each job must be carried out on exactly one of the machines, where the time required for processing job i on a machine is denoted by p_i. The subset of jobs assigned to machine M_i in a schedule is denoted by S_{M_i}. Once a job begins processing, it must be completed without interruption. Furthermore, each machine can process only one job at a time, and there is no precedence relation between the jobs. The aim is to find an assignment of the n jobs to the machines in M that minimizes the maximum completion time, in other words the makespan. The problem is denoted as P||Cmax, where P represents identical parallel machines, the jobs are unconstrained, and the objective is to obtain the minimum-length schedule. An integer programming formulation of the problem that minimizes the makespan is as follows [5]:

min y
subject to:
  Σ_{j=1}^{m} x_ij = 1,        i = 1, ..., n
  Σ_{i=1}^{n} p_i x_ij ≤ y,    j = 1, ..., m
  x_ij ∈ {0, 1},               i = 1, ..., n;  j = 1, ..., m

where the optimal value of y is Cmax and x_ij = 1 when job i is assigned to machine j, otherwise x_ij = 0.

Table 1. An example of a 9-job × 4-machine PMS problem (the processing time p_i of each job).

For all three algorithms, the process of finding the makespan value of a particle can be illustrated by an example. Assume a permutation vector Π = {1 8 3 4 5 6 7 2 9}. Considering 4 parallel machines and 9 jobs, whose processing times are given in Table 1, the makespan value of this vector is depicted in Figure 1. According to the schedule, each element of the vector is iteratively assigned to the earliest available machine. The first four elements of the permutation vector (1, 8, 3, 4) are assigned to the four machines respectively. The remaining jobs are assigned one by one to the first machine that becomes available. For instance, job 5 goes to the second machine (M_2), since it is the first machine released.
If more than one machine is available at that time, ties are broken arbitrarily. The makespan value of the given sequence is Cmax(Π) = 14, as can easily be seen in Figure 1.
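The greedy assignment just described can be sketched as follows. Note that the processing times used in the usage line are made up for illustration; they are not the actual values of Table 1.

```python
import heapq

def makespan(perm, p, m):
    """Assign each job, in the order given by the permutation, to the
    machine that becomes available first; return the resulting Cmax."""
    machines = [0] * m                       # completion time of each machine
    heapq.heapify(machines)
    for job in perm:
        t = heapq.heappop(machines)          # earliest-available machine
        heapq.heappush(machines, t + p[job])
    return max(machines)

# Illustrative 9-job, 4-machine instance (0-indexed jobs, made-up times):
p = [6, 5, 5, 4, 4, 4, 3, 3, 2]
print(makespan(list(range(9)), p, 4))
```

The heap makes each "first machine released" lookup O(log m), so evaluating one particle costs O(n log m).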
The lower bound for P||Cmax is calculated as follows [22]:

LB(Cmax) = max { max_i p_i , ⌈(1/m) Σ_i p_i⌉ }

The first term holds because no job may be split across machines, and the second because no schedule can finish before the perfectly balanced workload. If Cmax(Π) = LB(Cmax), the current solution Π is optimal. Therefore, the lower bound is used as one of the termination criteria throughout this chapter. The lower bound of the example presented in Table 1 is computed in the same way.
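Under that definition the bound is a one-liner (a sketch; the sample instance is illustrative, not Table 1):

```python
import math

def lower_bound(p, m):
    """LB(Cmax): the schedule can finish no earlier than the perfectly
    balanced workload ceil(sum(p)/m), nor earlier than the longest job."""
    return max(max(p), math.ceil(sum(p) / m))

print(lower_bound([6, 5, 5, 4, 4, 4, 3, 3, 2], 4))   # made-up instance
```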

Classic Particle Swarm Optimization
In PSO, each single solution, called a particle, is considered an individual; the group becomes a swarm (population), and the search space is the area to explore. Each particle has a fitness value calculated by a fitness function, and a velocity with which it flies towards the optimum. All particles fly across the problem space following the particle that is nearest to the optimum. PSO starts with an initial population of solutions, which is updated iteration by iteration. The principles that govern the PSO algorithm can be stated as follows:
• The i-th particle starts with a random position X_i = (x_i1, x_i2, ..., x_in) and a random velocity V_i = (v_i1, v_i2, ..., v_in).
• Each particle knows its position and the value of the objective function for that position. The best position found so far by the i-th particle is denoted P_i = (p_i1, p_i2, ..., p_in), and the best position found by the whole swarm is denoted G = (g_1, g_2, ..., g_n).
The PSO algorithm is governed by the following main equations:

v_ij(t+1) = w · v_ij(t) + c_1 r_1 (p_ij − x_ij(t)) + c_2 r_2 (g_j − x_ij(t))
x_ij(t+1) = x_ij(t) + v_ij(t+1)

where t represents the iteration number and w is the inertia weight, a coefficient that controls the impact of the previous velocity on the current velocity. c_1 and c_2 are called learning factors, and r_1 and r_2 are uniformly distributed random variables in [0,1].
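As a concrete illustration, the two update equations can be applied to a simple continuous objective. This is a minimal sketch; the sphere function, the parameter values, and the inertia decay are illustrative choices, not taken from the chapter.

```python
import random

def pso_sphere(n_particles=20, dim=5, iters=200, w=0.9, c1=2.0, c2=2.0, seed=1):
    """Classic continuous PSO minimizing the sphere function sum(x^2)."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)            # objective to minimize
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                          # personal bests
    G = min(P, key=f)[:]                           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][j] = (w * V[i][j]
                           + c1 * r1 * (P[i][j] - X[i][j])
                           + c2 * r2 * (G[j] - X[i][j]))
                X[i][j] += V[i][j]
            if f(X[i]) < f(P[i]):                  # update personal best
                P[i] = X[i][:]
                if f(P[i]) < f(G):                 # update global best
                    G = P[i][:]
        w = max(0.4, w * 0.99)                     # decaying inertia weight
    return f(G)
```

Because the global best can only improve, running more iterations with the same seed never worsens the returned objective value.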
The original PSO algorithm optimizes problems in which the elements of the solution space are continuous real numbers. The major obstacle to successfully applying PSO to combinatorial problems reported in the literature is precisely this continuous nature. To remedy this drawback, Tasgetiren et al. [17] presented the smallest position value (SPV) rule. Another approach to tackling combinatorial problems with PSO is that of Pan et al. [21], who derived a similar PSO equation to update the particle's velocity and position vectors using one-cut and two-cut genetic crossover operators.
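The SPV rule itself amounts to an argsort: the dimension with the smallest position value becomes the first job of the permutation, and so on. A minimal sketch (the sample position vector is illustrative):

```python
def spv(position):
    """Smallest position value rule: rank the dimensions of a continuous
    position vector to obtain a job permutation (0-indexed jobs)."""
    return sorted(range(len(position)), key=lambda j: position[j])

print(spv([1.8, -0.99, 3.01, 0.72]))   # job with the smallest value goes first
```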

The proposed Stochastically Perturbed Particle Swarm Optimization algorithm
In this chapter, a stochastically perturbed particle swarm optimization algorithm (SPPSO) is proposed for PMS problems. The initial population is generated randomly. Initially, each individual, with its position and fitness value, is assigned to its personal best (i.e., the best value that individual has found so far). The best individual in the whole swarm, with its position and fitness value, is likewise assigned to the global best (i.e., the best particle in the whole swarm). Then, the position of each particle is updated based on the personal best and the global best. These operations in SPPSO are similar to the classical PSO algorithm. However, the search strategy of SPPSO is different: each particle in the swarm moves as described below.
At each iteration, the position vector of each particle, its personal best, and the global best are considered. First, a random number from U(0,1) is generated and compared with the inertia weight to decide whether to apply the Insert function (η) to the particle.
The Insert function (η) removes a randomly chosen job from its place and reinserts it in front of (or occasionally behind) another randomly chosen job. For instance, for the PMS problem, suppose a sequence {3, 5, 6, 7, 8, 9, 1, 2, 4}. To apply the Insert function, two random numbers are needed: one determines the job that changes place, and the other the position to which it moves. Say those numbers are 3 and 5, so the third job moves to the fifth position; in other words, job 6 is removed from {3, 5, 6, 7, 8, 9, 1, 2, 4} and reinserted right after job 8. The new sequence is {3, 5, 7, 8, 6, 9, 1, 2, 4}.
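A sketch of this Insert move, using 1-indexed positions to match the example above (the function name is ours):

```python
import random

def insert_move(seq, a=None, b=None, rng=random):
    """Remove the job at position a (1-indexed) and reinsert it so that it
    ends up at position b; positions are drawn at random when not given."""
    n = len(seq)
    a = a if a is not None else rng.randrange(1, n + 1)
    b = b if b is not None else rng.randrange(1, n + 1)
    out = list(seq)
    out.insert(b - 1, out.pop(a - 1))
    return out

print(insert_move([3, 5, 6, 7, 8, 9, 1, 2, 4], 3, 5))   # the example above
```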
If the random number is less than the inertia weight, the particle is manipulated with this Insert function, and the resulting solution, say s_1, is obtained. Meanwhile, the inertia weight is discounted by a constant factor at each iteration in order to tighten the acceptability of the manipulated particle for the next generation, that is, to diminish the impact of the randomly perturbed solutions on the swarm's evolution.
The next step is to generate another random number from U(0,1) and compare it with the cognitive parameter c_1 to decide whether to apply the Insert function to the personal best of the particle under consideration. If the random number is less than c_1, the personal best is manipulated and the resulting solution is stored as s_2. Likewise, a third random number from U(0,1) is generated to decide whether to manipulate the global best with the Insert function. If this random number is less than the social parameter c_2, Insert is applied to the global best to obtain a new solution s_3. Unlike the inertia weight, the values of c_1 and c_2 are not changed iteratively but are fixed at 0.5, so the probability of applying the Insert function to the personal and global bests remains the same. The replacement solution is then selected from among s_1, s_2, and s_3 based on their fitness values. This solution may not always be better than the current solution, which keeps the swarm diverse. Convergence is traced by updating the personal best of each new particle and the global best. As can be seen, the proposed scheme retains all the major characteristics of the classical PSO equations. The following pseudo-code describes the steps of the SPPSO algorithm in detail.
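The per-iteration logic described above can be sketched as follows. This is a minimal, illustrative implementation under the stated parameter settings (c_1 = c_2 = 0.5, inertia weight decayed by β and floored at 0.40, lower-bound termination); all names and the sample instance are ours, not the authors' code.

```python
import heapq, math, random

def makespan(perm, p, m):
    machines = [0] * m
    heapq.heapify(machines)
    for job in perm:
        heapq.heappush(machines, heapq.heappop(machines) + p[job])
    return max(machines)

def insert_move(seq, rng):
    # move a randomly chosen job to a randomly chosen position
    n = len(seq)
    out = list(seq)
    out.insert(rng.randrange(n), out.pop(rng.randrange(n)))
    return out

def sppso(p, m, max_iters=500, w=0.9, w_min=0.4, beta=0.999,
          c1=0.5, c2=0.5, seed=0):
    rng = random.Random(seed)
    n = len(p)
    lb = max(max(p), math.ceil(sum(p) / m))          # termination criterion
    X = [rng.sample(range(n), n) for _ in range(n)]  # swarm size = n
    P = [x[:] for x in X]                            # personal bests
    G = min(P, key=lambda s: makespan(s, p, m))[:]   # global best
    for _ in range(max_iters):
        for i in range(n):
            cands = []
            if rng.random() < w:                     # perturb the particle -> s1
                cands.append(insert_move(X[i], rng))
            if rng.random() < c1:                    # perturb personal best -> s2
                cands.append(insert_move(P[i], rng))
            if rng.random() < c2:                    # perturb global best -> s3
                cands.append(insert_move(G, rng))
            if cands:                                # fittest of s1, s2, s3
                X[i] = min(cands, key=lambda s: makespan(s, p, m))
            if makespan(X[i], p, m) < makespan(P[i], p, m):
                P[i] = X[i][:]
                if makespan(P[i], p, m) < makespan(G, p, m):
                    G = P[i][:]
        w = max(w_min, w * beta)                     # discount inertia weight
        if makespan(G, p, m) == lb:                  # provably optimal
            break
    return G, makespan(G, p, m)

best, cmax = sppso([6, 5, 5, 4, 4, 4, 3, 3, 2], 4)   # made-up instance
```

Note that the replacement step accepts the fittest of s_1, s_2, s_3 even when it is worse than the current particle, exactly as described in the text, while the personal and global bests only ever improve.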
As the pseudo-code shows, the algorithm has all the major characteristics of classical PSO, but its search strategy differs in that the new solution is selected from among s_1, s_2, and s_3 based on their fitness values. The selected particle may be worse than the current solution, which keeps the swarm diverse. Convergence is driven by updating the personal best of each new particle and the global best.

Computational results
In this section, a comparative study of the effectiveness of the proposed SPPSO algorithm is carried out. SPPSO was tested against two other recently introduced PSO algorithms: the PSOspv algorithm of Tasgetiren et al. [17] and the DPSO algorithm of Pan et al. [21]. Two experimental frameworks, E1 and E2, are considered, differing in the discrete uniform distribution used to generate job processing times: the processing time of each job is drawn from U[1,100] for E1 and from U[100,800] for E2. The SPPSO, PSOspv, and DPSO algorithms were all coded in C and run on a PC with a 2.6 GHz CPU and 512 MB of memory. The population size used by all algorithms is the number of jobs (n).
For SPPSO and DPSO, the social and cognitive parameters were taken as c_1 = c_2 = 0.5; the initial inertia weight was set to 0.9 and never decreased below 0.40, with the decrement factor β fixed at 0.999. For the PSOspv algorithm, the social and cognitive parameters were fixed at c_1 = c_2 = 2; the initial inertia weight was likewise set to 0.9 and never decreased below 0.40, with the decrement factor β selected as 0.999. The algorithms were run for 20000/n iterations. All three algorithms were applied without embedding any kind of local search.
The results for experiment E1, in which processing times are generated using U(1,100), are summarized in Table 2. In this experiment, the minimum, average, and maximum values of the ratios are quite similar for SPPSO and PSOspv. On the other hand, both SPPSO and PSOspv performed better than DPSO.

The results for experiment E2, in which processing times are generated using U(100,800), are summarized in Table 3. Here, too, there is no significant difference between SPPSO and PSOspv, although in terms of the maximum ratio SPPSO performed slightly better than PSOspv. In addition, both PSOspv and SPPSO outperform DPSO on all three ratios in this experiment.

Conclusion
In this chapter, a stochastically perturbed particle swarm optimization algorithm (SPPSO) was proposed for identical parallel machine scheduling (PMS) problems. SPPSO has all the major characteristics of classical PSO, but its search strategy is different. The algorithm was applied to the PMS problem and compared with two recent PSO algorithms. The algorithms were kept standard and not extended with any local search. SPPSO produced better results than DPSO and PSOspv in terms of the number of optimum solutions obtained. In terms of average relative percent deviation, there is no significant difference between SPPSO and PSOspv, although both are better than DPSO.
It should also be noted that, since PSOspv represents each particle by three key vectors (position X_i, velocity V_i, and permutation Π_i), it consumes more memory than SPPSO. In addition, since DPSO uses one-cut and two-cut crossover operators in every iteration, implementing DPSO for combinatorial optimization problems is rather cumbersome. As future work, the proposed algorithm can be applied to other combinatorial optimization problems such as flow shop and job shop scheduling.

References
[1] Garey MR, Johnson DS (1979) Computers and intractability: a guide to the theory of NP-completeness. Freeman, San Francisco, California