
A Comparative Study on Meta Heuristic Algorithms for Solving Multilevel Lot-Sizing Problems

By Ikou Kaku, Yiyong Xiao and Yi Han

Submitted: May 13th 2012. Reviewed: November 27th 2012. Published: January 30th 2013.

DOI: 10.5772/55279


1. Introduction

Material requirements planning (MRP) is a long-established field but still plays an important role in coordinating replenishment decisions for the materials/components of complex product structures. As the key part of an MRP system, the multilevel lot-sizing (MLLS) problem concerns how to determine the lot sizes for producing/procuring multiple items at different levels with quantitative interdependences, so as to minimize the total costs of production/procurement setup and inventory holding over the planning horizon. The problem is of great practical importance for the efficient operation of modern manufacturing and assembly processes and has been widely studied both in practice and in academic research over the past half century. Optimal solution algorithms exist for the problem; however, only small instances can be solved in reasonable computational time because the problem is NP-hard (Steinberg and Napier, 1980). Early dynamic programming formulations used a network representation of the problem with a series structure (Zangwill, 1968, 1969) or an assembly structure (Crowston and Wagner, 1973). Other optimal approaches involve branch-and-bound algorithms (Afentakis et al., 1984; Afentakis and Gavish, 1986) that convert the classical formulation of the general structure into a simple but expanded assembly structure. As the MLLS problem is so common in practice and plays a fundamental role in MRP systems, many heuristic approaches have also been developed, consisting first of the sequential application of single-level lot-sizing models to each component of the product structure (Yelle, 1979; Veral and LaForge, 1985), and later of the application of multilevel lot-sizing models. The multilevel models quantify item interdependencies and thus perform better than the single-level based models (Blackburn and Millen, 1985; Coleman and McKnew, 1991).

Recently, meta-heuristic algorithms have been proposed to solve the MLLS problem with a low computational load. Examples include hybrid genetic algorithms (Dellaert and Jeunet, 2000a; Dellaert et al., 2000b), simulated annealing (Tang, 2004; Raza and Akgunduz, 2008), particle swarm optimization (Han et al., 2009, 2012a, 2012b), a soft optimization approach based on segmentation (Kaku and Xu, 2006; Kaku et al., 2010), ant colony optimization (Pitakaso et al., 2007; Almeder, 2010), and variable neighborhood search based approaches (Xiao et al., 2011a, 2011b, 2012), all developed to solve large-scale MLLS problems. The relative simplicity of these meta-heuristic algorithms in solving MLLS problems, together with their computational efficiency, makes them appealing tools for industry; however, they are unable to guarantee an optimal solution. Hence, meta-heuristic algorithms that offer a reasonable trade-off between optimality and computational feasibility are highly desirable, and it is natural to ask which algorithm is most appropriate for solving MLLS problems.

In this chapter, we first review the meta-heuristic algorithms for solving MLLS problems, focusing on the implementation techniques they use and the effectiveness of those techniques. For simplicity, the MLLS problems considered are limited to time-varying cost structures with no restrictive assumption on the product structure. Even so, the solution space of the MLLS problem is not simply convex; it becomes very complex, with multiple local minima, when the cost structure is time-varying and the product structure becomes general. By comparing the implementation methods used in different algorithms, we can identify some essential properties of the search for better solutions to MLLS problems. Using these properties, we can then characterize the algorithms and indicate a direction in which more efficient algorithms can be developed.

Second, using these properties, we present as an example a succinct approach, iterated variable neighborhood descent (IVND), a variant of variable neighborhood search (VNS), to solve the MLLS problem efficiently. To examine the performance of the new algorithm, different kinds of product structures were considered, including component commonality and multiple end-products, and 176 benchmark problems of different scales (small, medium and large) were used in our computational experiments. The performance of the IVND algorithm was compared with that of three well-known algorithms from the literature: the hybrid genetic algorithm of Dellaert and Jeunet (2000a), the MMAS algorithm of Pitakaso et al. (2007), and the parallel genetic algorithm of Homberger (2008), all of which tackled the same benchmark problems. The results show that the IVND algorithm is very competitive, since on average it finds better solutions in less computing time than the other three.

The rest of this chapter is organized as follows. Section 2 describes the MLLS problem. Section 3 first gives an overview of related meta-heuristic algorithms and then discusses several implementation techniques used in those algorithms. Section 4 explains the initialization method and the six implementation techniques used in the IVND algorithm, as well as the scheme of the proposed algorithm. In Section 5, computational experiments are carried out on three sets of benchmark problems (176 in total) to test the efficiency and effectiveness of the new algorithm and to compare it with existing algorithms. Finally, Section 6 summarizes the chapter.

2. The MLLS problems

The MLLS problem is considered to be a discrete-time, multilevel production/inventory system with an assembly structure and one finished item. We assume that external demand for the finished item is known up to the planning horizon, backlog is not allowed for any items, and the lead time for all production items is zero. Suppose that there are M items and the planning horizon is divided into N periods. Our purpose is to find the lot sizes of all items so as to minimize the sum of setup and inventory-holding costs, while ensuring that external demands for the end item are met over the N-period planning horizon.

To formulate this problem as an integer optimization problem, we use the same notation as Dellaert and Jeunet (2000a):

i : index of items, i = 1, 2, …, M

t : index of periods, t = 1, 2, …, N

H_i : unit inventory-holding cost for item i

S_i : setup cost for item i

I_{i,t} : inventory level of item i at the end of period t

x_{i,t} : binary decision variable indicating whether a setup occurs for item i in period t

D_{i,t} : requirements for item i in period t

P_{i,t} : production quantity for item i in period t

C_{i,j} : quantity of item i required to produce one unit of item j

Γ(i) : set of immediate successors of item i

M : a sufficiently large number (the big-M of Equation 4)

The objective function is the sum of setup and inventory-holding costs for all items over the entire planning horizon, denoted by TC (total cost). Then

$$TC=\sum_{i=1}^{M}\sum_{t=1}^{N}\left(H_i I_{i,t}+S_i x_{i,t}\right) \tag{1}$$

The MLLS problem is to minimize TC under the following constraints:

$$I_{i,t}=I_{i,t-1}+P_{i,t}-D_{i,t} \tag{2}$$
$$D_{i,t}=\sum_{j\in\Gamma(i)}C_{i,j}P_{j,t},\quad \forall i \mid \Gamma(i)\neq\varphi \tag{3}$$
$$P_{i,t}-Mx_{i,t}\leq 0 \tag{4}$$
$$I_{i,t}\geq 0,\quad P_{i,t}\geq 0,\quad x_{i,t}\in\{0,1\},\quad \forall i,t \tag{5}$$

where Equation (2) expresses the flow conservation constraint for item i. Note that if item i is an end item (characterized by Γ(i) = φ), its demand is exogenously given, whereas if it is a component (such that Γ(i) ≠ φ), its demand is defined by the production of its successors (the items belonging to Γ(i)), as stated by Equation (3). Equation (3) guarantees that the amount P_{j,t} of item j available in period t results from the exact combination of its predecessors. Equation (4) guarantees that a setup cost is incurred whenever production takes place. Equation (5) states that backlog is not allowed, that production is either positive or zero, and that the decision variables are 0-1 variables.

Because x_{i,t} ∈ {0,1} is a binary decision variable for item i in period t, the binary matrices X = {x_{i,t}}_{M×N} represent the solution space of the MLLS problem. Searching for an optimal solution of the MLLS problem is equivalent to finding a binary matrix that produces a minimum sum of the setup and inventory-holding costs. Basically, there exists an optimal solution satisfying

$$x_{i,t}\,I_{i,t-1}=0 \tag{6}$$

Equation (6) indicates that any optimal lot must cover an integer number of periods of future demand. We set the first column of X to 1 to ensure that the initial production is feasible, because backlog is not allowed for any item and the lead times are zero. Since an inner-corner property holds for assembly structures (see Tang, 2004), we also need x_{i,t} ≤ x_{k,t} whenever item i creates internal demand for item k; thus a constraint is required to guarantee that the matrix is feasible.
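To make the formulation concrete, the following minimal sketch decodes a setup matrix X into lot sizes and evaluates Equation (1). It is only an illustration under simplifying assumptions: a topologically ordered bill of materials (parents before components), one-to-one production ratios, zero lead times, a feasible X whose first column is 1, and hypothetical cost and demand data.

```python
import numpy as np

def total_cost(x, ext_demand, components, H, S):
    """Evaluate Equation (1) for a binary setup matrix x (M items x N periods).

    Each period's requirement is assigned to the most recent setup period
    (any optimal lot covers whole future periods, cf. Equation 6), demand
    is propagated down the BOM as in Equation (3), and inventory follows
    the flow balance of Equation (2).
    """
    M, N = x.shape
    D = np.zeros((M, N))
    D[0, :] = ext_demand                 # item 0 is the end item
    TC = 0.0
    for i in range(M):
        P = np.zeros(N)                  # production quantities P_{i,t}
        last = 0                         # x[i, 0] = 1 is required (no backlog)
        for t in range(N):
            if x[i, t] == 1:
                last = t
            P[last] += D[i, t]           # lot at `last` covers period t
        for j in components.get(i, []):  # internal demand, C_{j,i} = 1
            D[j, :] += P
        inv = np.cumsum(P - D[i, :])     # inventory I_{i,t}, Equation (2)
        TC += H[i] * inv.sum() + S[i] * x[i, :].sum()
    return TC

# Toy 3-item series structure (item 0 uses item 1, which uses item 2).
x = np.array([[1, 0, 1, 0],
              [1, 0, 1, 0],
              [1, 0, 0, 0]])
print(total_cost(x, [10, 20, 15, 5], {0: [1], 1: [2]},
                 H=[1.0, 0.8, 0.5], S=[50, 40, 30]))   # -> 255.0
```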

3. Meta-heuristic algorithms used to solve MLLS problems

The term "meta-heuristic algorithm" is widely used to refer to simple, hybrid and population-based stochastic local search methods (Hoos and Stützle, 2005). They transfer the principle of evolution through mutation, recombination and selection of the fittest, which leads to the development of solutions that are better adapted for survival in a given environment, to solving computationally hard problems. However, these algorithms often seem to lack the capability of sufficient search intensification, that is, the ability to reach high-quality candidate solutions efficiently when a good starting position is given. Hence, in many cases, the performance of such algorithms on combinatorial problems can be significantly improved by adding implementation techniques that guide an underlying problem-specific heuristic. In this chapter, our interest is in the mechanisms of the implementation techniques used to solve a particular combinatorial optimization problem, namely the MLLS problem. Hence we first review several existing meta-heuristic algorithms for solving MLLS problems.

3.1. Soft optimization approach

The soft optimization approach (SOA) for solving MLLS problems is based on a general sampling approach (Kaku and Xu, 2006; Kaku et al., 2010). Its main merit is that it does not require any structural information about the objective function, so it can be used to treat optimization problems with complicated structures. However, it was shown that a random sampling method (for example, simple uniform sampling) cannot produce a good solution. Several experiments were conducted to find the characteristics of an optimal solution; as a result, applying structural information about MLLS solutions to the sampling method can produce a better result than simple uniform sampling. A heuristic algorithm that segments the solution space by the percentage of 1s was developed, and the resulting performance improvement in solving the MLLS problem was confirmed. It should be pointed out that the segmentation-based SOA retains the essential character of random sampling, merely restricting the search range; the newly adopted solution(s) retain no information from the old solution(s). Therefore, solution performance can only be improved by increasing the number of samples or by narrowing the range of a segment.

3.2. Genetic algorithm

The genetic algorithm (GA) was the first meta-heuristic developed for solving MLLS problems (Dellaert and Jeunet, 2000a; Dellaert et al., 2000b; Han et al., 2012a, 2012b); in fact, it opened the way to solving MLLS problems with meta-heuristic algorithms. Several important contributions were achieved. First, a very general GA approach was developed and then improved with several specific genetic operators, combined by a roulette rule, to handle the two-dimensional chromosomes. Second, comparative studies based on several benchmark data sets collected from the literature showed that better solutions could be obtained than with existing heuristic algorithms; these benchmarks later provided a platform for developing meta-heuristic algorithms and evaluating their performance. Because the complexity of the MLLS problem is well matched by the flexibility and implementability of GA, the GA appears powerful and effective for solving the MLLS problem. However, although several operators (single-bit mutation, cumulative mutation, inversion, period crossover and product crossover) were combined in the GA, which of them are actually effective in the search for better solutions was never made clear. That is the question we try to define and answer in this chapter.

3.3. Simulated annealing

Simulated annealing (SA) has also been developed to solve the MLLS problem (Tang, 2004; Raza and Akgunduz, 2008). It starts from a random initial solution and moves to neighbouring states of the incumbent solution through a cooling process, in which a new solution is accepted or rejected with a probability specified by the Metropolis criterion. The parameters used in the algorithm were investigated using analysis of variance. SA was reported to be appropriate for solving the MLLS problem, but this was verified only on very small test problems. In our understanding, SA, unlike other meta-heuristic algorithms such as GA, is rather a local successive search from an initial solution. Almost all information of the old solution is retained, which may mean a long search time if a better solution lies far from the original one. Several implementation methods can be used to improve the effectiveness of SA (see Hoos and Stützle, 2005); a common one is to shorten the cooling time by combining SA with other local search methods.

3.4. Particle swarm optimization

Particle swarm optimization (PSO) is another meta-heuristic algorithm that has been applied to the MLLS problem (Han et al., 2009, 2011). It is a suitable and effective tool for continuous optimization problems. Recently, the standard particle swarm optimization algorithm was converted into a discrete version, by redefining all of its mathematical operators, to solve MLLS problems (Han et al., 2009). It starts its search procedure with a swarm of particles. Each particle in the swarm keeps track of its coordinates in the problem space, which are associated with the best solution (fitness) it has achieved so far; this value is called pBest. Another "best" value, tracked by the global version of particle swarm optimization, is the overall best value, and its location, obtained so far by any particle in the population; this is called gBest. By gathering these optimal factors into the current solutions, the swarm converges toward better solutions. Comparative experiments against the GA of Dellaert and Jeunet (2000a) and Dellaert et al. (2000b) on the same benchmarks reported better performance. Considering the essential mechanism of PSO, these optimal factors (pBest and gBest) clearly retain information from the particles' past and indicate where better solutions may lie. However, it has not been clearly shown whether these optimal factors retain the required information when PSO is converted into a discrete form.

3.5. Ant colony optimization

A special ant colony optimization (ACO) combined with linear programming has been developed recently for solving the MLLS problem (Pitakaso et al., 2007; Almeder, 2010). The basic idea of ant colony optimization is that a population of agents (artificial ants) repeatedly constructs solutions to a given instance of a combinatorial optimization problem. Ant colony optimization is used to select the principal production decisions, i.e., in which periods production of an item should be scheduled in the MLLS problem. It starts from the top items and works down to the raw materials according to the ordering given by the bill of materials. An ant's decision for production in a certain period is based on the pheromone information as well as on the heuristic information indicating whether there is an external (primary) or internal (secondary) demand. The pheromone information represents the impact of a certain production decision on the objective values of previously generated solutions, i.e., the pheromone value is high if a certain production decision has led to good solutions in previous iterations. After the production decisions have been selected, a standard LP solver is used to solve the remaining linear problem. After all ants of an iteration have constructed a solution, the pheromone information is updated using the iteration-best and global-best solutions. This approach has been reported to work well for small and medium-size MLLS problems; for large instances it leads to high-quality results but cannot beat highly specialized algorithms.

3.6. Variable neighbourhood search

Variable neighborhood search (VNS) has also been used to solve the MLLS problem (Xiao et al., 2011a, 2011b, 2012). In contrast to most earlier local search heuristics, where only one neighborhood is used, the main idea of this search strategy is a systematic change among multiple predefined and ordered neighborhoods within a local search. By introducing a set of distance-leveled neighborhoods and a correlated exploration order, the variable neighborhood search algorithm can search very efficiently in nearby neighborhoods, where better solutions are more likely to be found.

Other meta-heuristic algorithms may also have been proposed for solving MLLS problems, but the algorithms reviewed above can be considered to cover most branches of meta-heuristics, so the knowledge obtained in this chapter should be widely applicable. All of the meta-heuristic algorithms used to solve MLLS problems follow the general scheme described in Fig. 1.

Figure 1.

General algorithm for solving the MLLS problem

In Fig. 1, at the beginning of the search process, an initial solution (or solutions) is generated by the function init. A simple init method is random sampling, but several heuristic concepts for the MLLS problem are often employed to initialize the solution, because they help obtain better performance. Moreover, init methods are usually classified into two categories, producing either a single solution or multiple solutions (usually called a population). The step function embodies the originality of each algorithm through the implementation techniques it uses. By comparing the total costs of the old and new solutions, it usually accepts the cheaper one as the next incumbent solution. Finally, the function terminate, a user-defined criterion such as a number of iterations or a convergence rate, is used to stop the program.
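As a reference point for the discussion below, a minimal single-solution version of this scheme might look as follows. The functions init, step and cost are placeholders for the algorithm-specific choices just described, and the terminate rule is reduced to a simple iteration budget; this is a hypothetical sketch, not the code of any particular algorithm.

```python
def metaheuristic(init, step, cost, max_iters=10_000):
    """Generic scheme of Fig. 1: init -> repeated step -> terminate."""
    incumbent = init()
    incumbent_cost = cost(incumbent)
    for _ in range(max_iters):                 # terminate: iteration budget
        candidate = step(incumbent)            # algorithm-specific move(s)
        candidate_cost = cost(candidate)
        if candidate_cost <= incumbent_cost:   # accept the cheaper solution
            incumbent, incumbent_cost = candidate, candidate_cost
    return incumbent, incumbent_cost
```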

All of the implementation techniques used in the step functions of the meta-heuristic algorithms for solving MLLS problems can be summarized as follows (a code sketch of some of these operators follows the list).

  1. Single-bit mutation

Just one position (i, t) is randomly selected and its value is flipped (from 0 to 1 or the reverse).

  2. Cumulative mutation

When a single-bit mutation is performed on a given item, it may trigger mutations of the value(s) of its predecessors.

  3. Inversion

One position (i, t) is randomly selected and its value is compared to that of its neighbor at position (i, t+1). If the two values differ (1 and 0, or 0 and 1), they are exchanged. Note that the last period t = N must not be selected for the inversion operation.

  4. Period crossover

A point t in the period range (1, N) is randomly selected for two parents, which produce offspring by horizontally exchanging the parts of their solutions behind period t.

  5. Product crossover

An item i is randomly selected for two parents, which produce offspring by vertically exchanging the parts of their solutions below item i.
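As an illustration, minimal sketches of techniques 1, 3 and 4 on the binary matrix X might look as follows. These are simplified versions under our own assumptions: they operate on plain numpy arrays and ignore the feasibility repairs (fixed first-column setups, the inner-corner constraint) that a full implementation would have to add.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_bit_mutation(x):
    """Technique 1: flip one randomly chosen position (i, t)."""
    y = x.copy()
    i, t = rng.integers(y.shape[0]), rng.integers(y.shape[1])
    y[i, t] ^= 1
    return y

def inversion(x):
    """Technique 3: swap (i, t) with (i, t+1) when their values differ.
    The last period is excluded, as noted in the text."""
    y = x.copy()
    i = rng.integers(y.shape[0])
    t = rng.integers(y.shape[1] - 1)          # t = N is never selected
    if y[i, t] != y[i, t + 1]:
        y[i, t], y[i, t + 1] = y[i, t + 1], y[i, t]
    return y

def period_crossover(a, b):
    """Technique 4: exchange the columns behind a random period t."""
    t = rng.integers(1, a.shape[1])
    return (np.hstack([a[:, :t], b[:, t:]]),
            np.hstack([b[:, :t], a[:, t:]]))
```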

Table 1 summarizes which implementation techniques and init functions are used in the existing meta-heuristic algorithms. From Table 1 it can be observed that all of the algorithms use the single-bit mutation method in their step function, but their ways of making the mutations differ. SOA creates new solution(s) without using any previous information, so it can be regarded as not really applying implementation techniques at all. Conversely, SA applies single-bit and cumulative mutation to the incumbent solution and therefore retains almost all previous information; moreover, a random uphill move is used in SA to escape from local optima, so the global optimum may be reached, but a very long computing time is needed to obtain it. PSO, on the other hand, uses single-bit mutation to create new candidates in its step function but starts with multiple initial solutions. According to the concept of the original PSO, information from previously obtained solutions is incorporated into the construction of new solutions in order to increase the probability of generating better ones; however, whether this desirable property is preserved in the discrete implementation has yet to be proven, and establishing it is a challenge for developing a new PSO algorithm for the MLLS problem. Finally, because the techniques of inversion and crossover (commonly used in other combinatorial optimization problems such as the travelling salesman problem) are used only in GA, they cannot be compared across algorithms.

Algorithm | Single-bit mutation | Cumulative mutation | Inversion | Period crossover | Product crossover | Initial solution(s)
SOA | ✓ | | | | | many
GA | ✓ | ✓ | ✓ | ✓ | ✓ | multi
SA | ✓ | ✓ | | | | single
PSO | ✓ | | | | | multi
ACO | ✓ | | | | | single
VNS | ✓ | | | | | single

Table 1.

The implementation techniques used in the various algorithms

Note that the implementation techniques change the state of the incumbent solution(s) so as to improve the total cost function, so how much each technique contributes to solution improvement matters greatly for the computing efficiency of MLLS algorithms. Our interest here is in the implementation techniques used in the above algorithms, which were reported to work well in their respective settings. We define several criteria for evaluating the performance and effectiveness of these techniques. First, we define distance-based neighborhood structures of the incumbent solution: the neighborhoods of the incumbent solution are sets of feasible solutions associated with different distances from the incumbent, where the distance is the number of positions at which two solutions differ.

Definition 1 (Distance from the incumbent solution). For the set of feasible solutions of an MLLS problem, i.e., X = {x_{i,t}}, a solution x′ belongs to the kth-distance neighborhood of the incumbent solution x, i.e., N_k(x), if and only if it satisfies x′ ∈ N_k(x) ⟺ ρ(x, x′) = k, where k is a positive integer. The distance between any two elements of X, e.g., x and x′, is measured by

$$\rho(x,x')=|x\setminus x'|=|x'\setminus x|,\quad \forall x,x'\in X \tag{7}$$

where $|\cdot\setminus\cdot|$ denotes the number of differing positions between two solutions, i.e., $|x\setminus x'|=\sum_{i=1}^{M}\sum_{t=1}^{N}|x_{i,t}-x'_{i,t}|$.

For example, consider an MLLS problem with 3 items and 3 periods and three feasible solutions x, x′ and x″ as follows:

$$x=\begin{vmatrix}1&0&0\\1&0&0\\1&1&1\end{vmatrix},\quad x'=\begin{vmatrix}1&0&0\\1&0&1\\1&1&1\end{vmatrix},\quad x''=\begin{vmatrix}1&0&0\\1&0&1\\1&0&1\end{vmatrix}$$

According to Definition 1, their distances are ρ(x, x′) = 1, ρ(x′, x″) = 1 and ρ(x, x″) = 2.
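Because the solutions are binary matrices, ρ is simply the Hamming distance; a short check with hypothetical numpy arrays reproduces the example above.

```python
import numpy as np

def distance(a, b):
    """Hamming distance of Definition 1: the number of differing positions."""
    return int(np.abs(a - b).sum())

x1 = np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]])
x2 = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 1]])
x3 = np.array([[1, 0, 0], [1, 0, 1], [1, 0, 1]])
print(distance(x1, x2), distance(x2, x3), distance(x1, x3))  # 1 1 2
```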

Therefore, a solution at the kth distance from the incumbent, i.e., in N_k(x), can be created by changing the values of k different positions of the incumbent solution. Fewer changes, e.g., within N_1(x), avoid doing too much damage to the possibly good properties of the incumbent solution, in which some positions may already be in their optimal states, while multiple changes, e.g., within N_k(x) for k > 1, lead to a new solution that is very different from the incumbent, which may destroy the goodness of the original solution; on the other hand, multiple changes may overcome local optima. Hence the following notion can be used, together with distance, to evaluate the performance and effectiveness of an implementation technique: we define a range measurement expressing how widely the changed positions are spread over items and periods relative to the incumbent solution. We can then evaluate the implementation techniques used in MLLS algorithms by measuring both the distance and the range whenever a solution is changed.

Definition 2 (Changing range of the incumbent solution). Any element of the kth-distance neighborhood N_k(x) of the incumbent solution x has a range within which the changed items and periods of the incumbent solution are confined.

Therefore, if the change made to an incumbent solution by an implementation technique falls within a small range, it resembles a local search and hence may have little influence on the total cost function, whereas a change within a large range is more like a global search, which increases the probability of producing worse solutions and lowers search efficiency. Comparing how mutations are constructed in the algorithms above, we find that only short distances are used in GA, SA and PSO, each with different implementation techniques (cumulative mutation being a very special case of long distance). For example, all of these algorithms use mutation to change the solution, but only GA uses the probability of obtaining a good solution to adapt the share of each operator, while SA uses a probability of accepting a worse solution to create an uphill ability for escaping the traps of local optima, and PSO uses two additional factors (pBest and gBest) to point in a direction where the optimal solution may lie. SOA and ACO use a basic random-sampling principle to select the decisions at all item/period positions. The difference is that SOA retains no information from previous solutions, so large numbers of solutions must be initialized and long distances (about half of items × periods) always occur, whereas ACO uses a level-by-level approach to perform single-bit mutations in a solution, so the goodness of previous solutions may be retained even at long distances. Finally, VNS uses distances directly to search for good solutions, so its distance is controllable.

Next, regarding the other three implementation techniques, i.e., inversion, item crossover and period crossover, which are used only in GA, we find that inversion is just a special mutation that changes the solution at a distance of two points, whereas crossover(s) may change the original solutions over a longer distance, so that only part of the goodness of the previous solution is retained. In the GA, implementation techniques with long and short distances are combined, so it should be considered more effective for solving the MLLS problem. Even so, the implementation techniques used in the above algorithms (including GA) seem not fully effective at finding good solutions: even when a good solution exists near the original solution, there is no guarantee that it will be found, since candidate solutions are generated randomly. It can be argued that not only distance but also range should be used in the step function to find better solutions. Two questions then need answering. Can distance and range evaluate the effectiveness of implementation techniques used for the MLLS problem? And how should the implementation techniques be applied?

To examine the above considerations, simulation experiments were executed using the general algorithm for solving MLLS problems shown in Fig. 1. Initial solution(s) are produced with a randomized cumulative Wagner-Whitin (RCWW) algorithm. The step function uses a GA, because all of the implementation techniques are included in that algorithm; we can thus control which implementation technique is used and then evaluate its efficiency. The terminate functions used differ between problem instances. In the experiments we first show the relationship among distance, range and search effectiveness; we then investigate the effectiveness of the various implementation techniques. For calculating the distance, we set k ← 1 and repeatedly try to find a better solution at random from N_k(x), accepting it whenever one is found, until no better solution can be found (indicated by 100 consecutive tries without improvement); we then search for better solutions at the next distance (k ← k + 1) until k > kmax. The calculation is repeated for 10 runs with different ranges and the averages are used. In the GA step, each of the five operators (single-bit mutation, cumulative mutation, inversion, period crossover and product crossover) is applied with a fixed probability of 20% to produce the new generation. Initial populations are generated and maintained with a population of 50 solutions. In each generation, 50 new solutions are randomly generated and added to the population, the genetic operations with the five operators are performed, and the related total costs are calculated. In the selection, the top 50 solutions with the lowest total costs are retained as seeds for the next generation and the remaining solutions are removed from the population. We use a stopping rule of a maximum of 50 generations in the terminate function. Successful results are counted in terms of distance and range (the maximum deviation of item/period positions among the changed points); if only one point is changed, the range is zero. Finally, the simulation is executed using the same benchmarks as Dellaert and Jeunet (2000a).

Remark 1: Distance, range and search effectiveness

Tables 2 and 3 show an example of better solutions found at different distances within different ranges. A better solution is defined as a new solution that is better than the incumbent; we count the number of better solutions and calculate the ratio for each distance. It can be observed from Table 2 that different distances lead to different numbers of better solutions found. Most of them (94.82%) are found in the nearest neighbourhood, at one point of distance, and the ratio decreases as the distance increases, which indicates a lower probability of finding better solutions among those that are far from the incumbent. Nevertheless, the incumbent solution was indeed improved by candidate solutions from longer distances. The same tendency can be observed in Table 3, where a limited range was used, but the implications are quite different. First, limiting the search range leads to a more efficient search for better solutions, reflected in the fact that the total number of better solutions found is about 4% higher than in the non-limited search; it also leads to a solution with a better total cost (compare with Table 2). Second, although the numbers of better solutions at distance 1 (changing just one position of the incumbent solution) are almost the same, the number of better solutions at distance 2 is about three times that of the non-limited search. In other words, the non-limited search spends longer at distance 1 to less effect. This observation suggests implementing a faster search at distance 1 and faster switching between distances; these are the advantages of a limited search range.

Distance | Better solutions | Ratio
1 | 3333 | 94.82%
2 | 135 | 3.84%
3 | 28 | 0.80%
4 | 14 | 0.40%
5 | 5 | 0.14%
Total (total cost = 826.23) | 3515 | 100.00%

Table 2.

A non-limited search over distances in small MLLS problems (parameters: kmax = 5, Δi = ±5, Δt = ±12)

Distance | Better solutions | Ratio
1 | 3232 | 88.57%
2 | 332 | 9.10%
3 | 55 | 1.51%
4 | 25 | 0.69%
5 | 5 | 0.14%
Total (total cost = 824.37) | 3649 | 100.00%

Table 3.

A limited search over distances in small MLLS problems (parameters: kmax = 5, Δi = ±1, Δt = ±3)

This means that although the implementation techniques used in algorithms such as SA, PSO and VNS look different, the results they produce may be quite similar. In addition, GA uses further implementation techniques such as crossover(s), which lead to longer distances and can fundamentally improve search performance. Moreover, distance and range provide very flexible control when producing candidate solutions. This yields an important observation: a special range incorporating heuristic information (for instance, period heuristics are effective whereas level heuristics are not) can improve the performance of an implementation technique, so such ranges should be introduced as new implementation methods in meta-heuristic algorithms.

Remark 2: Effectiveness of the various implementation techniques

Here the five implementation techniques, i.e., single-bit mutation, cumulative mutation, inversion, product crossover and period crossover, were tested within the GA. These techniques are used in the step function of our GA. The RCWW algorithm is used as the init method to produce a population of 50 solutions as the first generation. Then each of the five implementation techniques is selected with equal probability, namely 20%, to produce a new generation, and each generation keeps only a population of 50 solutions after selection by the total cost function. The algorithm stops after 50 generations have evolved. A solution is said to be a better solution if its objective cost is lower than those of its parent(s); it is counted in terms of its changing distance and changing range from its parent. The simulation experiment can therefore show the relationships among distance, range and search effectiveness (represented by the number of better solutions found) for each implementation technique. We plot these relationships in three-dimensional figures whose dimensions are the distance, the range and the number of better solutions found; the distributions in the figures show how the criteria of distance and range segment the search effectiveness.

First, single-bit mutation and inversion can be considered very similar, since the candidate solutions are always at distance 1 and range ±0 for single-bit mutation and at distance 2 and range ±1 for inversion; accordingly, the better solutions found are always at distance 1 and range ±0 for the former and at distance 2 and range ±1 for the latter, as verified in Fig. 2 and Fig. 3. Comparing Fig. 2 and Fig. 3, we also find that the search effectiveness of inversion is slightly better than that of single-bit mutation, because more better solutions are found by inversion.

Figure 2.

The result of single-bit mutation

Figure 3.

The result of inversion

Figure 4.

The result of cumulative mutation

Figure 5.

The result of period crossover

Figure 6.

The result of item crossover

Figure 7.

The mixed result of the five implementation techniques

Second, the cumulative mutation can be expected to cause longer distances and larger ranges (than single-bit mutation) in the incumbent solutions, and the experimental results shown in Fig. 4 confirm this observation. It can also be observed that only the changes with small distance and range are particularly effective in improving the cost function.

Third, the implementation techniques of item crossover and period crossover are more complex than cumulative mutation and inversion, and the offspring may consequently lie at a very large range and a very long distance from their parents. Still, our experimental results in Fig. 5 and Fig. 6 show that only offspring with small distance and range are particularly effective at improving the fitness function. Moreover, compared with the simple implementation techniques (single-bit mutation, inversion and cumulative mutation), the crossover techniques achieve significant improvements in search effectiveness.

Finally, Fig. 7 shows the mixed distribution of all five implementation techniques. The simulation results show that a significant overall effectiveness in searching for better solutions can be obtained by using the mixed implementation techniques. Once again, most of the gains in search effectiveness occur at smaller distances and ranges. This phenomenon can be exploited to construct implementation techniques for GA algorithms that are more effective than pure free crossover.

However, candidate solutions with larger range and longer distance still improve the fitness function with some probability, so they should not be ignored by any meta-heuristic algorithm that aims at optimality. Nevertheless, more effective and more efficient implementation techniques can be developed by matching operators to the ratio of better solutions found at different ranges and distances; for example, crossover operations that produce offspring at too long a distance are ineffective and should be avoided as far as possible.

4. An example: An IVND algorithm for solving MLLS problems

The VNS/VND algorithm, initiated by Mladenovic and Hansen (1997), Hansen and Mladenovic (2001) and Hansen et al. (2001, 2008), is a top-level methodology for solving combinatorial optimization problems. Because its principle is simple and easily understood and implemented, it has been successfully applied to several optimization problems in different fields. The success of VNS is largely due to its enjoying many of the 11 desirable properties of meta-heuristics summarized by Hansen et al. (2008), such as simplicity, user-friendliness, efficiency and effectiveness. Since the MLLS problem shares common characteristics, e.g., binary decision variables, with problems successfully solved by VNS/VND-based algorithms, it is promising to develop an efficient algorithm of this type for the problem (Xiao et al., 2011a, 2011b, 2012). Here an iterated variable neighborhood descent (IVND) algorithm, a variant of VNS, is proposed as an example to show the utility and the performance improvement offered by the considerations described above.

4.1. Initial method

We use a randomized cumulative Wagner-Whitin (RCWW) based approach to initialize the solution for our proposed IVND algorithm. The RCWW method was introduced by Dellaert and Jeunet (2000a); its main idea rests on the fact that lot-sizing a product in one period triggers demands for its components in earlier periods, with lead-time corrections (or in the same period in the case of zero lead times). Therefore, the real setup cost of an item is in fact greater than its own setup cost and should be modified when the well-known sequential WW algorithm is used to generate an initial solution. The time-varying modified setup cost is an improved concept introduced by Dellaert et al. (2000b, 2003) and used by Pitakaso et al. (2007); it dispenses with a constant modified setup cost for the whole planning horizon, suggesting instead that the cost may vary from one time period to the next, and was reported to yield better initial solutions.

In the IVND algorithm, we use the sequential WW algorithm based on randomly initialized constant modified setup costs to generate initial solutions. For each item i, the modified setup cost S′_i can be calculated recursively by

$$S_i'=S_i+r\left[\sum_{j\in\Gamma_i^{-1}}\left(S_j+\frac{S_j'}{|\Gamma_j^{-1}|}\right)\right] \tag{8}$$

where S_i is the original setup cost of item i, r is a random value uniformly distributed on [0, 1], Γ_i^{-1} is the set of immediate predecessors (components) of product i, and |Γ_i^{-1}| is its cardinality.

In addition to the modified setup cost, we also use a modified unit inventory-holding cost to construct initial solutions. This is simply based on the consideration that a positive inventory balance of one product in one period causes not only its own inventory-holding cost but also additional inventory-holding costs for its predecessors, because not all of the demands for predecessors are met by timely production; some of them may also be satisfied from inventory. Therefore, and similarly, the modified unit inventory-holding cost of product i, i.e., H′_i, can be calculated recursively by

$$H_i'=H_i+q\left[\sum_{j\in\Gamma_i^{-1}}\left(H_j+\frac{H_j'}{|\Gamma_j^{-1}|}\right)\right] \tag{9}$$

where Hi is the original unit inventory holding cost of product i and q is a random value uniformly distributed between [0,1].
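A small sketch of this recursive computation is given below, under our reading of Equation (8). The helper and its leaf-handling convention are our own assumptions rather than the authors' code: for a raw material j, which has no predecessors of its own, the fractional term is dropped and only its plain setup cost S_j is rolled up. The same recursion applies to the modified holding costs of Equation (9), with H and q in place of S and r.

```python
import random

def modified_costs(S, preds, r=None):
    """Recursively compute modified setup costs S'_i per Equation (8).

    S     : dict item -> original setup cost S_i
    preds : dict item -> list of immediate predecessors (components)
    r     : fixed random factor in [0, 1]; drawn per item if None
    """
    memo = {}
    def sprime(i):
        if i not in memo:
            rr = r if r is not None else random.random()
            total = 0.0
            for j in preds.get(i, []):
                nj = len(preds.get(j, []))        # |Gamma_j^{-1}|
                total += S[j] + (sprime(j) / nj if nj else 0.0)
            memo[i] = S[i] + rr * total
        return memo[i]
    return {i: sprime(i) for i in S}

# Toy series BOM: item 0 uses item 1, which uses item 2 (a raw material).
print(modified_costs({0: 50, 1: 40, 2: 30}, {0: [1], 1: [2]}, r=0.5))
# -> {0: 97.5, 1: 55.0, 2: 30.0}
```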

4.2. Implemental techniques

Six implementation techniques are used in the IVND algorithm; integrated together, they deliver good performance in the computational experiments.

  1. Limit the changing range of the incumbent solution to one item. That is, when exploring a neighborhood farther than N1(x) and multiple mutations are made to the incumbent solution, they must all occur on the same item, in different periods.

  2. Limit all mutations to between the first and the last period with positive demand. This technique prevents the search algorithm from making invalid mutations. Note that the setup in the first period with positive demand must remain fixed at 1.

  3. No demand, no setup. A mutation must not try to arrange a setup for a product in a period without positive demand; this is an obviously non-optimal operation and is banned throughout the search process.

  4. Trigger recursive mutations. A mutation cancelling a setup for a product in a period triggers recursive mutations on all of its ancestors. When a setup-cancelling mutation occurs, e.g., the value of bit x_{i,t} changes from 1 to 0, it withdraws the demands for the immediate predecessors in earlier periods (offset by the lead time). As a consequence, the demands of some of these predecessors may drop to zero, so that their setups (if any) in those periods should be cancelled at the same time; predecessors that retain non-zero demands, owing to product commonality, remain unchanged. The recursive mutations are triggered only by the cancellation of a setup; they do not occur when a setup is arranged.

  5. Shift a setup rather than cancel it. When the setup for product i at period t needs to be cancelled, try to shift the setup to the first period with positive demand after t, rather than simply cancelling it. For example, when x_{i,t} is to be cancelled, find the first period t* after t for product i that satisfies D_{i,t*} > 0 and arrange a setup by setting x_{i,t*} ← 1 if x_{i,t*} is 0. Notably, the arrangement of this setup may also trigger recursive mutations.

  6. Recalculate inventory-holding and setup costs only for the affected products and periods. Unlike other evolutionary algorithms such as GA and PSO, where a group of incumbent solutions has to be maintained, the IVND algorithm keeps only one incumbent solution. In fact, when a mutation occurs, most of the solution state, including the setup matrix Y_{i,t}, the lot-sizing matrix X_{i,t} and the demand matrix D_{i,t}, remains unchanged. Thus only the affected products and the affected periods need their setup and inventory-holding costs recalculated after a mutation operation. By taking advantage of this, the computing efficiency of the IVND algorithm can be improved significantly, since recalculation of the objective function, the most time-consuming part of the IVND algorithm, is largely avoided.

All six of the above implementation techniques are used in our proposed IVND algorithm to mutate the incumbent solution into its neighborhood. The steps of applying these techniques in a neighborhood search, e.g., over neighborhood Nk(x), can be explained by Fig. 8.

Figure 8.

The implementation of implemental techniques

Although a new solution drawn from Nk(x) may end up more than k units of distance away from the incumbent solution x once these six implementation techniques have been applied, it is still considered a member of Nk(x). The implementation techniques are merely additional actions applied to the new solution to steer it toward better solutions. Moreover, benefiting from these actions, a higher efficiency of the VNS algorithm can be anticipated, which is confirmed by the experiments of Section 5. A simplified sketch of how the first three techniques confine a neighborhood move follows.
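The sketch below shows one way such a constrained move could be drawn. It covers only the changing-range restrictions (techniques 1 to 3); the recursive-mutation and setup-shifting repairs (techniques 4 and 5) and the incremental cost update (technique 6) are omitted. The demand matrix D and the move interface are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def neighborhood_move(x, D, k):
    """Draw a candidate from N_k(x) under techniques 1-3: all k flips on
    one randomly chosen item (technique 1), only in periods with positive
    demand (technique 3), never touching the first such period, whose
    setup stays fixed (technique 2)."""
    y = x.copy()
    i = rng.integers(y.shape[0])           # one item per move
    valid = np.flatnonzero(D[i] > 0)       # periods with positive demand
    valid = valid[1:]                      # keep the first setup fixed
    if len(valid) >= k:
        for t in rng.choice(valid, size=k, replace=False):
            y[i, t] ^= 1                   # flip k distinct periods
    return y
```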

4.3. The IVND algorithm

The IVND algorithm is a variant of the basic VND algorithm. It starts by initializing a solution as the incumbent solution and then launches a VND search. The VND search repeatedly tries to find a better solution in the nearby neighborhood of the incumbent and moves to any better solution found; if no better solution can be found in the current neighborhood, it goes on to explore a farther neighborhood, until the farthest neighborhood has been reached. Once the VND process stops (characterized by the farthest neighborhood having been explored), another initial solution is randomly generated and the VND search restarts. This simple iterated search process loops until the stop condition is met. The stop condition can be a user-specified computing time or a maximum span between two restarts without improvement of the best solution found. In the experiments of the next section we use the latter, i.e., a fixed number of consecutive restarts without improvement, as the stop condition. The basic scheme of the proposed IVND algorithm is illustrated in Fig. 9.

Figure 9.

The IVND algorithm for the MLLS problem

There are three parameters in the IVND algorithm, i.e., P, N and Kmax, which determine the trade-off between search efficiency and the quality of the final solution. The first parameter, P, is a positive number that serves as the stop condition, indicating the maximum span between two restarts without improvement of the best solution found. The second parameter, N, is the maximum number of explorations between two improvements within a neighborhood: if a better solution cannot be found after N explorations of the neighborhood Nk(x), that neighborhood is deemed explored and the algorithm moves to a farther neighborhood by k ← k + 1. The third parameter, Kmax, is the traditional VND parameter indicating the farthest neighborhood the algorithm will explore.
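Under these parameter roles, the overall loop might be sketched as follows, where init and move(x, k) stand for the RCWW initialization and a draw from Nk(x) described above. This is a schematic reading of Fig. 9, not the authors' implementation.

```python
def ivnd(init, move, cost, P=50, N=200, Kmax=5):
    """Skeleton of the IVND scheme: restart VND until P consecutive
    restarts bring no improvement of the best solution found."""
    best, best_cost = None, float("inf")
    idle_restarts = 0
    while idle_restarts < P:              # stop: P restarts without gain
        x = init()
        cx = cost(x)
        k = 1
        while k <= Kmax:                  # VND over ordered neighborhoods
            for _ in range(N):            # at most N tries in N_k(x)
                y = move(x, k)
                cy = cost(y)
                if cy < cx:
                    x, cx, k = y, cy, 0   # move and restart from N_1(x)
                    break
            k += 1                        # N_k(x) deemed explored
        if cx < best_cost:
            best, best_cost = x, cx
            idle_restarts = 0
        else:
            idle_restarts += 1
    return best, best_cost
```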

5. Computational experiments and the results

5.1. Problem instances

Three sets of MLLS problem instances of different scales (small, medium and large) are used to test the performance of the proposed IVND algorithm. The first set consists of 96 small-sized MLLS problems involving a 5-item assembly structure over a 12-period planning horizon, developed by Coleman and McKnew (1991) on the basis of work by Veral and LaForge (1985) and Benton and Srivastava (1985), and also used by Dellaert and Jeunet (2000a). In the 96 small-sized problems, four typical product structures with a one-to-one production ratio are considered, and the lead times of all items are zero. For each product structure, four cost combinations are considered, assigning each individual item different setup costs and different unit holding costs. Six independent demand patterns, with variations reflecting low, medium and high demand, are considered over a 12-period planning horizon. These combinations therefore produce 4 × 4 × 6 = 96 problems for testing. The optimal solutions of the 96 benchmark problems are known in advance, so they can serve as a baseline for testing the optimality of the new algorithm.

The second set consists of 40 medium-sized MLLS problems involving 40/50-item product structures over a 12/24-period planning horizon, based on the product structures published by Afentakis et al. (1984), Afentakis and Gavish (1986) and Dellaert and Jeunet (2000a). In the 40 medium-sized problems, four product structures with a one-to-one production ratio are constructed. Two of them are 50-item assembly structures with 5 and 9 levels, respectively; the other two are 40-item general structures with 8 and 6 levels, respectively. All lead times are set to zero, and two planning horizons are used: 12 and 24 periods. For each product structure and planning horizon, five test problems were generated, so that a total of 4 × 2 × 5 = 40 problems could be used for testing.

The third set covers the 40 problem instances with a problem size of 500 items and 36/52 periods synthesized by Dellaert and Jeunet (2000a). There are 20 different product structures with a one-to-one production ratio and different commonality indices. The first 5 instances are pure assembly structures with one end-product; instances 6 to 20 are general structures with five end-products and commonality indices ranging from 1.5 to 2.4. The first 20 instances all have a 36-period planning horizon; the second 20 instances share the product structures of the first 20 but have a 52-period planning horizon. The demands differ for each instance and occur only for end-products.

Since the hybrid GA developed by Dellaert and Jeunet (2000a) was the first meta-heuristic algorithm for solving the MLLS problem, it has often been selected as the benchmark algorithm for comparison with newly proposed algorithms. We therefore compared the performance of our IVND algorithm with the hybrid GA on all instances of the three different scales. We also compared our IVND algorithm with the MMAS algorithm developed by Pitakaso et al. (2007) and the parallel GA developed by Homberger (2008), since both used the same three sets of instances as this chapter.

5.2. Test environment

The IVND algorithm under examination was coded in VC++ 6.0 and run on a notebook computer equipped with a 1.6 GHz CPU operating under Windows XP. We fixed the parameter Kmax at 5 for all experiments and let the parameters P and N vary to suit the problem size. We write IVND (P, N) for the IVND algorithm with parameters P and N; e.g., IVND (P=50, N=200) indicates P = 50 and N = 200, with Kmax = 5 by default. The effect of each individual parameter on the quality of the solution is tested in Section 5.6.

5.3. Small-sized MLLS problems

We ran IVND (P=50, N=200) on the 96 small-sized MLLS problems 10 times and obtained 960 results, of which 956 were optimal, an optimality of 99.58%, as indicated by the column "Best solutions found (%)". The column "Average time (s)" indicates the average computing time in seconds of one run per problem. The average result of the 10 runs and the minimum result of the 10 runs are both shown in Table 4 and compared with the HGA algorithm of Dellaert and Jeunet (2000a), the MMAS algorithm of Pitakaso et al. (2007) and the PGA algorithm of Homberger (2008). It can be observed from Table 4 that IVND (P=50, N=200) needs 0.7 seconds to find 100% optimal solutions of the 96 benchmark problems. Although PGAC and GA3000 can also find 100% optimal solutions, they take longer computing times and PGAC also benefits from a hardware advantage (30 processors performing a parallel calculation). After that, we adjusted the parameter P from 50 to 200 and ran IVND (P=200, N=200) on the 96 problems 10 times again. Surprisingly, we obtained 960 optimal solutions (100% optimality) with a computing time of 0.27 seconds.

Method | Avg. cost | Best solutions found (%) | Mean dev. if best not found | Average time (s) | CPU type | Processors | Source
HGA50 | 810.74 | 96.88 | 0.26 | 5.14s | -- | 1 | Dellaert et al. (2000)
MMAS | 810.79 | 92.71 | 0.26 | <1s | P4 1.5G | 1 | Pitakaso et al. (2007)
GA100 | 811.98 | 75.00 | 0.68 | 5s | P4 2.4G | 1 | Homberger (2008)
GA3000 | 810.67 | 100.00 | 0.00 | 5s | P4 2.4G | 1 | Homberger (2008)
PGAI | 810.81 | 94.79 | 0.28 | 5s | P4 2.4G | 30 | Homberger (2008)
PGAC | 810.67 | 100.00 | 0.00 | 5s | P4 2.4G | 30 | Homberger (2008)
IVND (P=50, N=200), avg. of 10 | 810.69 | 99.58 | 0.02 | 0.07s | NB 1.6G | 1 | IVND
IVND (P=50, N=200), min. of 10 | 810.67 | 100.00 | 0.00 | 0.7s | NB 1.6G | 1 | IVND
IVND (P=200, N=200), avg. of 10 | 810.67 | 100.00 | 0.00 | 0.27s | NB 1.6G | 1 | IVND
OPT. | 810.67 | 100.00 | 0.00 | -- | -- | -- | --

Table 4.

Comparing IVND with existing algorithms on 96 small-sized problems

5.4. Medium-sized MLLS problems

Second, we ran the IVND (P=100, N=600) algorithm on the 40 medium-sized MLLS benchmark problems, repeating 10 runs. We summarize the 400 results and compare them with the existing algorithms in Table 5. More detailed results for the 40 problems are listed in Table 6, where the previous best known solutions summarized by Homberger (2008) are also listed for comparison. After that, we ran the IVND (P=100, N=600) algorithm several more times and updated the best solutions for these 40 medium-sized problems; they are listed in the column "New best solution" of Table 6.

Method | Avg. cost | Optimality on previous best solutions (%) | Comp. time (s) | CPU type | Processors | Source
HGA250* | 263,931.8 | 17.50 | <60s | -- | 1 | Dellaert et al. (2000)
MMAS | 263,796.3 | 22.50 | <20s | P4 1.5G | 1 | Pitakaso et al. (2007)
GA100 | 271,268.2 | 0.00 | 60s | P4 2.4G | 1 | Homberger (2008)
GA3000 | 266,019.8 | 15.00 | 60s | P4 2.4G | 1 | Homberger (2008)
PGAI | 267,881.4 | 0.00 | 60s | P4 2.4G | 30 | Homberger (2008)
PGAC | 263,359.6 | 65.00 | 60s | P4 2.4G | 30 | Homberger (2008)
IVND (P=100, N=600), avg. of 10 | 263,528.5 | 30.00 | 2.67 | NB 1.6G | 1 | IVND
IVND (P=100, N=600), min. of 10 | 263,398.8 | 60.00 | 26.7 | NB 1.6G | 1 | IVND
Prev. best solution | 263,199.8 | -- | -- | -- | -- | --
New best solution | 260,678.3 | -- | -- | -- | -- | --

Table 5.

Comparing IVND with existing algorithms on 40 medium-sized problems

It can be observed from Table 5 that PGAC and IVND (P=100, N=600) (min. of 10) are among the best algorithms and very close to each other. Although the optimality of PGAC (65%) is better than that of IVND (min. of 10) (60%) against the baseline of the previous best known solutions, it would drop by at least 17% against the new best solutions, since 12 of the 40 problems had their best known solutions updated by the IVND algorithm (see the column "New best solution" in Table 6) and 7 of those 12 problems' previous best known solutions had been obtained by PGAC. Furthermore, taking into consideration the hardware advantage of the PGAC algorithm (multiple processors and a higher CPU speed), we can say that the IVND algorithm performs at least as well as the PGAC algorithm on medium-sized problems, if not better.

Instance | S | D | I | P | IVND avg. of 10 | IVND min. of 10 | Prev. best solution | New best solution | New method
0 | 1 | 1 | 50 | 12 | 196,084 | 196,084 | 194,571 | 194,571 | B&B
1 | 1 | 2 | 50 | 12 | 165,682 | 165,632 | 165,110 | 165,110 | B&B
2 | 1 | 3 | 50 | 12 | 201,226 | 201,226 | 201,226 | 201,226 | B&B
3 | 1 | 4 | 50 | 12 | 188,010 | 188,010 | 187,790 | 187,790 | B&B
4 | 1 | 5 | 50 | 12 | 161,561 | 161,561 | 161,304 | 161,304 | B&B
5 | 2 | 1 | 50 | 12 | 179,761 | 179,761 | 179,762 | 179,761 | B&B
6 | 2 | 2 | 50 | 12 | 155,938 | 155,938 | 155,938 | 155,938 | B&B
7 | 2 | 3 | 50 | 12 | 183,219 | 183,219 | 183,219 | 183,219 | B&B
8 | 2 | 4 | 50 | 12 | 136,474 | 136,462 | 136,462 | 136,462 | B&B
9 | 2 | 5 | 50 | 12 | 186,645 | 186,597 | 186,597 | 186,597 | B&B
10 | 3 | 1 | 40 | 12 | 148,004 | 148,004 | 148,004 | 148,004 | PGAC
11 | 3 | 2 | 40 | 12 | 197,727 | 197,695 | 197,695 | 197,695 | PGAC
12 | 3 | 3 | 40 | 12 | 160,693 | 160,693 | 160,693 | 160,693 | MMAS
13 | 3 | 4 | 40 | 12 | 184,358 | 184,358 | 184,358 | 184,358 | PGAC
14 | 3 | 5 | 40 | 12 | 161,457 | 161,457 | 161,457 | 161,457 | PGAC
15 | 4 | 1 | 40 | 12 | 185,507 | 185,170 | 185,170 | 185,161 | PGAC→IVND
16 | 4 | 2 | 40 | 12 | 185,542 | 185,542 | 185,542 | 185,542 | PGAC
17 | 4 | 3 | 40 | 12 | 192,794 | 192,794 | 192,157 | 192,157 | MMAS
18 | 4 | 4 | 40 | 12 | 136,884 | 136,757 | 136,764 | 136,757 | PGAC→IVND
19 | 4 | 5 | 40 | 12 | 166,180 | 166,122 | 166,041 | 166,041 | PGAC
20 | 1 | 6 | 50 | 24 | 344,173 | 343,855 | 343,207 | 343,207 | PGAC
21 | 1 | 7 | 50 | 24 | 293,692 | 293,373 | 292,908 | 292,908 | HGA
22 | 1 | 8 | 50 | 24 | 356,224 | 355,823 | 355,111 | 355,111 | HGA
23 | 1 | 9 | 50 | 24 | 325,701 | 325,278 | 325,304 | 325,278 | PGAC
24 | 1 | 10 | 50 | 24 | 386,322 | 386,059 | 386,082 | 385,954 | HGA→IVND
25 | 2 | 6 | 50 | 24 | 341,087 | 341,033 | 340,686 | 340,686 | HGA
26 | 2 | 7 | 50 | 24 | 378,876 | 378,845 | 378,845 | 378,845 | HGA
27 | 2 | 8 | 50 | 24 | 346,615 | 346,371 | 346,563 | 346,358 | HGA→IVND
28 | 2 | 9 | 50 | 24 | 413,120 | 412,511 | 411,997 | 411,997 | HGA
29 | 2 | 10 | 50 | 24 | 390,385 | 390,233 | 390,410 | 390,233 | PGAC→IVND
30 | 3 | 6 | 40 | 24 | 344,970 | 344,970 | 344,970 | 344,970 | HGA
31 | 3 | 7 | 40 | 24 | 352,641 | 352,634 | 352,634 | 352,634 | PGAC
32 | 3 | 8 | 40 | 24 | 356,626 | 356,456 | 356,427 | 356,323 | PGAC→IVND
33 | 3 | 9 | 40 | 24 | 411,565 | 411,438 | 411,509 | 411,438 | MMAS→IVND
34 | 3 | 10 | 40 | 24 | 401,732 | 401,732 | 401,732 | 401,732 | HGA
35 | 4 | 6 | 40 | 24 | 289,935 | 289,846 | 289,883 | 289,846 | PGAC→IVND
36 | 4 | 7 | 40 | 24 | 339,548 | 339,299 | 337,913 | 337,913 | MMAS
37 | 4 | 8 | 40 | 24 | 320,920 | 320,426 | 319,905 | 319,905 | PGAC
38 | 4 | 9 | 40 | 24 | 367,531 | 367,326 | 366,872 | 366,848 | PGAC→IVND
39 | 4 | 10 | 40 | 24 | 305,729 | 305,363 | 305,172 | 305,053 | PGAC→IVND
Average | | | | | 263,529 | 263,399 | 263,199.8 | 260,677.1 |
Avg. computing time | | | | | 2.67s | 26.7s | | |

Table 6.

Results of 40 medium-sized problems and the new best solutions

Note. Boldface type denotes previous best solution; underline type denotes better solution; Boldface&underline denotes the new best solution.


5.5. Large-sized MLLS problems

Next, we ran the IVND (P=50, N=1000) algorithm on the 40 large-sized MLLS benchmark problems, repeating 10 runs. We summarize the 400 results and compare them with the existing algorithms in Table 7, and show the detailed results for the 40 problems in Table 8.

Method | Avg. cost | Optimality on prev. best solutions (%) | Time (m) | CPU type | Processors | Source
HGA1000* | 40,817,600 | 10.00 | -- | -- | 1 | Dellaert et al. (2000)
MMAS | 40,371,702 | 47.50 | -- | P4 1.5G | 1 | Pitakaso et al. (2007)
GA100 | 41,483,590 | 0.00 | 60 | P4 2.4G | 1 | Homberger (2008)
GA3000 | -- | -- | 60 | P4 2.4G | 1 | Homberger (2008)
PGAI | 41,002,743 | 0.00 | 60 | P4 2.4G | 30 | Homberger (2008)
PGAC | 39,809,739 | 52.50 | 60 | P4 2.4G | 30 | Homberger (2008)
IVND (P=50, N=1000), avg. of 10 | 40,051,638 | 65.00 | 4.44 | NB 1.6G | 1 | IVND
IVND (P=50, N=1000), min. of 10 | 39,869,210 | 70.00 | 44.4 | NB 1.6G | 1 | IVND
Prev. best solution | 39,792,241 | -- | -- | -- | -- | --
New best solution | 39,689,769 | -- | -- | -- | -- | --

Table 7.

Comparing IVND with existing algorithms on 40 large-sized problems

It can be observed from Tables 7 and 8 that the IVND algorithm shows the best optimality among all the existing algorithms, since for 70% of these 40 problems a new best solution was found by the IVND algorithm. The average computing time per problem used by the IVND algorithm was relatively low. However, four problems, i.e., problems 10, 15, 30 and 35, needed much longer computing times, because the interdependencies among items are relatively high for these four problems. The column "Inter D." in Table 8 is the maximum number of affected items when the lot size of an end-product is changed.

Instance | I | P | Inter D. | IVND1000 Avg. of 10 | IVND1000 Min. of 10 | Time (m) | Prev. best known | New best known | New source
0 | 500 | 36 | 500 | 597,940 | 597,560 | 6.3 | 595,792 | 595,792 | PGAC
1 | 500 | 36 | 500 | 817,615 | 816,507 | 6.8 | 816,058 | 816,058 | HGA
2 | 500 | 36 | 500 | 929,097 | 927,860 | 6.4 | 911,036 | 911,036 | PGAC
3 | 500 | 36 | 500 | 945,317 | 944,626 | 6.2 | 942,650 | 942,650 | MMAS
4 | 500 | 36 | 500 | 1,146,946 | 1,145,980 | 6.5 | 1,149,005 | 1,145,980 | MMAS→IVND
5 | 500 | 36 | 11036 | 7,725,323 | 7,689,434 | 71.9 | 7,812,794 | 7,689,434 | PGAC→IVND
6 | 500 | 36 | 3547 | 3,928,108 | 3,923,336 | 22.5 | 4,063,248 | 3,923,336 | MMAS→IVND
7 | 500 | 36 | 1034 | 2,724,472 | 2,713,496 | 17.2 | 2,704,332 | 2,703,004 | HGA→IVND
8 | 500 | 36 | 559 | 1,898,263 | 1,886,812 | 10.3 | 1,943,809 | 1,865,141 | PGAC→IVND
9 | 500 | 36 | 341 | 1,511,600 | 1,505,392 | 6.0 | 1,560,030 | 1,502,371 | MMAS→IVND
10 | 500 | 36 | 193607 | 59,911,520 | 59,842,858 | 179.9 | 59,866,085 | 59,842,858 | PGAC→IVND
11 | 500 | 36 | 22973 | 13,498,853 | 13,441,692 | 58.6 | 13,511,901 | 13,441,692 | PGAC→IVND
12 | 500 | 36 | 3247 | 4,751,464 | 4,731,818 | 34.4 | 4,828,331 | 4,731,818 | PGAC→IVND
13 | 500 | 36 | 914 | 2,951,232 | 2,937,914 | 18.6 | 2,910,203 | 2,910,203 | HGA
14 | 500 | 36 | 708 | 1,759,976 | 1,750,611 | 8.9 | 1,791,700 | 1,740,397 | MMAS→IVND
15 | 500 | 36 | 1099608 | 472,625,159 | 472,088,128 | 106.1 | 471,325,517 | 471,325,517 | PGAC
16 | 500 | 36 | 24234 | 18,719,243 | 18,703,573 | 80.5 | 18,750,600 | 18,703,573 | MMAS→IVND
17 | 500 | 36 | 7312 | 7,321,985 | 7,292,340 | 33.7 | 7,602,730 | 7,292,340 | MMAS→IVND
18 | 500 | 36 | 1158 | 3,592,086 | 3,550,994 | 23.4 | 3,616,968 | 3,550,994 | PGAC→IVND
19 | 500 | 36 | 982 | 2,326,390 | 2,293,131 | 13.8 | 2,358,460 | 2,291,093 | MMAS→IVND
20 | 500 | 52 | 500 | 1,189,599 | 1,188,210 | 22.4 | 1,187,090 | 1,187,090 | MMAS
21 | 500 | 52 | 500 | 1,343,567 | 1,341,412 | 14.9 | 1,341,584 | 1,341,412 | HGA→IVND
22 | 500 | 52 | 500 | 1,403,822 | 1,402,818 | 8.7 | 1,400,480 | 1,384,263 | MMAS→IVND
23 | 500 | 52 | 500 | 1,386,667 | 1,384,263 | 9.1 | 1,382,150 | 1,382,150 | MMAS
24 | 500 | 52 | 500 | 1,660,879 | 1,658,156 | 8.8 | 1,660,860 | 1,658,156 | MMAS→IVND
25 | 500 | 52 | 11036 | 12,845,438 | 12,777,577 | 117.7 | 13,234,362 | 12,776,833 | PGAC→IVND
26 | 500 | 52 | 3547 | 7,292,728 | 7,246,237 | 27.7 | 7,625,325 | 7,246,237 | PGAC→IVND
27 | 500 | 52 | 1034 | 4,253,400 | 4,231,896 | 21.9 | 4,320,868 | 4,199,064 | PGAC→IVND
28 | 500 | 52 | 559 | 2,905,006 | 2,889,328 | 10.7 | 2,996,500 | 2,864,526 | MMAS→IVND
29 | 500 | 52 | 341 | 2,198,534 | 2,186,429 | 7.8 | 2,277,630 | 2,186,429 | MMAS→IVND
30 | 500 | 52 | 193607 | 103,535,103 | 103,237,091 | 297.4 | 102,457,238 | 102,457,238 | PGAC
31 | 500 | 52 | 22973 | 18,160,129 | 18,104,424 | 49.4 | 18,519,760 | 18,097,215 | PGAC→IVND
32 | 500 | 52 | 3247 | 6,932,353 | 6,905,070 | 44.3 | 7,361,610 | 6,905,070 | MMAS→IVND
33 | 500 | 52 | 914 | 4,109,712 | 4,095,109 | 41.1 | 4,256,361 | 4,080,792 | PGAC→IVND
34 | 500 | 52 | 708 | 2,602,841 | 2,573,491 | 20.5 | 2,672,210 | 2,568,339 | MMAS→IVND
35 | 500 | 52 | 1099608 | 768,067,039 | 762,331,081 | 121.4 | 756,980,807 | 756,980,807 | PGAC
36 | 500 | 52 | 24234 | 33,393,240 | 33,377,419 | 137.7 | 33,524,300 | 33,356,750 | MMAS→IVND
37 | 500 | 52 | 7312 | 10,506,439 | 10,491,324 | 52.1 | 10,745,900 | 10,491,324 | MMAS→IVND
38 | 500 | 52 | 1158 | 5,189,651 | 5,168,547 | 29.5 | 5,198,011 | 5,120,701 | PGAC→IVND
39 | 500 | 52 | 982 | 3,406,764 | 3,394,470 | 14.3 | 3,485,360 | 3,381,090 | MMAS→IVND
Average | | | | 40,051,638 | 39,869,210 | 44.4 | 39,792,241 | 39,689,769 | --
Avg. computing time | | | | 4.44m | 44.4m | | | |

Table 8.

Results of 40 large-sized problems and the new best solutions

Note. Boldface type denotes the previous best solution; underlining denotes a better solution; boldface with underlining denotes the new best solution.


5.6. The effectiveness of individual parameters of IVND

Finally, we used the 40 medium-sized problems to test the relationship between the effectiveness and the computing load of the three parameters of the IVND algorithm, i.e., P, N, and Kmax (medium-sized problems were chosen simply to save computing time). We performed three experiments, each varying one parameter while fixing the other two. First, we fixed N=200 and Kmax=5 and increased P from 5 to 200, repeating 10 runs for each value. Second, we fixed P=50 and Kmax=5 and increased N from 10 to 600. Third, P and N were fixed at 50 and 200, respectively, and Kmax was increased from 1 to 10. The average costs obtained in the three experiments are shown against the varied parameters in Table 9; a sketch of this one-factor-at-a-time harness follows the table. A general trend can be observed: increasing P, N, or Kmax leads to better solutions being found, but at the price of more computing time. However, each parameter contributes less and less to solution quality once it is large enough. There obviously exists a best effectiveness-cost combination of these parameters for the IVND algorithm, and finding it is worthwhile future work; in our experiments we simply set the parameters manually.

P | N | Kmax | Avg. cost of 10 runs | Comp. time of 10 runs (s)
5 | 200 | 5 | 264,111 | 30
10 | 200 | 5 | 263,914 | 57
20 | 200 | 5 | 263,816 | 104
30 | 200 | 5 | 263,744 | 144
50 | 200 | 5 | 263,712 | 233
70 | 200 | 5 | 263,671 | 308
100 | 200 | 5 | 263,640 | 433
130 | 200 | 5 | 263,614 | 553
160 | 200 | 5 | 263,620 | 674
200 | 200 | 5 | 263,579 | 832
50 | 10 | 5 | 266,552 | 40
50 | 50 | 5 | 264,395 | 87
50 | 100 | 5 | 263,920 | 135
50 | 150 | 5 | 263,775 | 186
50 | 200 | 5 | 263,702 | 220
50 | 250 | 5 | 263,672 | 274
50 | 300 | 5 | 263,634 | 309
50 | 400 | 5 | 263,613 | 376
50 | 500 | 5 | 263,603 | 463
50 | 600 | 5 | 263,585 | 559
50 | 200 | 1 | 263,868 | 74
50 | 200 | 2 | 263,775 | 100
50 | 200 | 3 | 263,715 | 136
50 | 200 | 4 | 263,713 | 178
50 | 200 | 5 | 263,709 | 225
50 | 200 | 6 | 263,704 | 276
50 | 200 | 7 | 263,693 | 341
50 | 200 | 8 | 263,684 | 420
50 | 200 | 9 | 263,688 | 527
50 | 200 | 10 | 263,678 | 567

Table 9.

Experimental results of different parameters for medium-sized problem
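The one-factor-at-a-time design behind Table 9 can be organized in a few lines of code. The following Python sketch is purely illustrative: run_ivnd stands in for an IVND implementation that is not reproduced here, and problems for the 40 medium-sized instances; both names are hypothetical.

    # Hypothetical harness for the three parameter sweeps of Table 9;
    # run_ivnd(problem, P, N, Kmax) is assumed to return one run's total cost.
    import statistics

    def sweep(run_ivnd, problems, P=50, N=200, Kmax=5, runs=10):
        costs = [run_ivnd(prob, P=P, N=N, Kmax=Kmax)
                 for prob in problems for _ in range(runs)]
        return statistics.mean(costs)

    # Vary one parameter while fixing the other two, as in the text above:
    # for P in (5, 10, 20, 30, 50, 70, 100, 130, 160, 200):
    #     print(P, sweep(run_ivnd, problems, P=P))
    # for N in (10, 50, 100, 150, 200, 250, 300, 400, 500, 600):
    #     print(N, sweep(run_ivnd, problems, N=N))
    # for Kmax in range(1, 11):
    #     print(Kmax, sweep(run_ivnd, problems, Kmax=Kmax))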

6. Summary

Meta-heuristics are widely used in many fields, and different meta-heuristic algorithms have been developed for different problems, especially combinatorial optimization problems. In this chapter we discussed a special case, the MLLS problem. First, the general definition of the MLLS problem was described; we showed its solution structure and explained its NP-completeness. Second, we reviewed the meta-heuristic algorithms that have been used to solve the MLLS problem and pointed out their merits and demerits. Third, based on this review, we investigated the implemental techniques used in these meta-heuristic algorithms, and two criteria, distance and range, were defined for the first time to evaluate the effectiveness of those techniques. We briefly discussed the mechanisms and characteristics of the techniques in terms of these two criteria, and provided simulation experiments to verify the two criteria and to explain their performance and utility. This is one of our contributions. Fourth, we presented a succinct and easily implemented IVND algorithm with six implemental techniques for solving the MLLS problem. The IVND algorithm was evaluated on 176 benchmark problems of different scales (small, medium, and large) from the literature. The results on the 96 small-sized benchmark problems showed the good optimality of the IVND algorithm: it found 100% of the optimal solutions in 10 repeated runs with very little computing time (less than 1 s per problem). Experiments on the other two sets of benchmark problems (40 medium-sized and 40 large-sized problems) showed its efficiency and effectiveness in solving MLLS problems with complex product structures. For the medium-sized problems, the IVND algorithm reached the previously known best solutions of 40% of the 40 problems within 10 repeated runs and found new best solutions for another 20%. With more repeated runs, our IVND algorithm in fact updated 30% (12 problems) of the best known solutions, and its computing efficiency was also very acceptable: the longest computing time for a problem was less than one minute. For the 40 large-sized problems, the IVND algorithm delivered even more encouraging results on solution quality. Comparing the best solutions achieved by the new method with those established by previous methods, including HGA, MMAS, and PGA, shows that the IVND algorithm with the six implemental techniques is, to date, among the best methods for solving the MLLS problem with product-structure complexity considered, not only because it is easy to understand and implement in practice, but more importantly because it provides quite good solutions in very acceptable time.

Acknowledgments

This work is supported by the Japan Society for the Promotion of Science (JSPS) under the grant No. 24510192.

Notes

  • Commonality index is the average number of successors of all items in a product structure.
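Given this definition, the commonality index can be computed from the same kind of adjacency-list representation of a product structure; the following minimal sketch is our illustration (the names and the toy structure are hypothetical):

    # Minimal sketch: commonality index = average number of successors
    # (items that directly use an item) over all items in the structure.
    def commonality_index(successors):
        """successors[i] = list of items that directly use item i."""
        return sum(len(s) for s in successors.values()) / len(successors)

    # Toy structure: item 3 is used by both item 1 and item 2.
    successors = {0: [], 1: [0], 2: [0], 3: [1, 2], 4: [2]}
    print(commonality_index(successors))  # -> 1.0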

© 2013 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
