Open access peer-reviewed chapter

A Multilevel Genetic Algorithm for the Maximum Satisfaction Problem

Written By

Noureddine Bouhmala

Submitted: 01 November 2017 Reviewed: 03 May 2018 Published: 27 June 2018

DOI: 10.5772/intechopen.78299

From the Edited Volume

Artificial Intelligence - Emerging Trends and Applications

Edited by Marco Antonio Aceves-Fernandez


Abstract

Genetic algorithms (GAs), which belong to the class of evolutionary algorithms, are regarded as highly successful when applied to a broad range of discrete as well as continuous optimization problems. This chapter introduces a hybrid approach combining a genetic algorithm with the multilevel paradigm for solving the maximum constraint satisfaction problem (Max-CSP). The multilevel paradigm refers to the process of dividing a large and complex problem into smaller ones, which are hopefully much easier to solve, and then working backward toward the solution of the original problem, using the solution reached at a child level as the starting solution for its parent level. The promising performance of the proposed approach is demonstrated through comparisons on conventional random benchmark problems.

Keywords

  • maximum constraint satisfaction problem
  • genetic algorithms
  • multilevel paradigm

1. Introduction

Many problems in the field of artificial intelligence can be modeled as constraint satisfaction problems (CSP). A CSP is a triple (X, D, C), where X = {x_1, x_2, …, x_n} is a finite set of variables and D = {D(x_1), D(x_2), …, D(x_n)} is a finite set of domains; thus, each variable x ∈ X has a corresponding discrete domain D(x) from which it can be instantiated. C = {C_1, C_2, …, C_k} is a finite set of constraints. Each k-ary constraint restricts a k-tuple of variables (x_1, x_2, …, x_k) and specifies a subset of D_1 × ⋯ × D_k, each element of which is a combination of values that the variables cannot take simultaneously. A solution to a CSP assigns to every variable a value from its domain such that all the constraints are satisfied. The maximum constraint satisfaction problem (Max-CSP) aims at finding an assignment that maximizes the number of satisfied constraints; Max-CSP can thus be regarded as the generalization of CSP. In this chapter, attention is focused on binary CSPs, where all constraints are binary, that is, they are defined over the Cartesian product of the domains of two variables. However, any non-binary CSP can theoretically be converted to a binary CSP [1]. Local search algorithms for solving CSPs typically apply the so-called 1-exchange neighborhood, under which two solutions are direct neighbors if, and only if, they differ in the value assigned to at most one variable. Examples include the minimum conflict heuristic MCH [2], the breakout method for escaping from local minima [3], and various enhancements of MCH (e.g., a randomized iterative improvement of MCH called WMCH [4], MCH with tabu search [5], and evolutionary algorithms [6]). Weighting-based algorithms are techniques that introduce weights on variables or constraints in order to avoid local minima. Methods belonging to this category include GENET [7], guided local search [8], the exponentiated subgradient algorithm [9], discrete Lagrangian search [10], scaling and probabilistic smoothing [11], evolutionary algorithms combined with stepwise adaptation of weights [12], and methods based on dynamically adapting weights on variables [13] or on both variables and constraints [14]. Methods based on large neighborhood search have recently attracted several researchers for solving the CSP [15]. The central idea is to reduce the size of the local search space by relying on a continual relaxation (removing elements from the solution) and re-optimization (re-inserting the removed elements). Finally, the work in [16] introduces a variable depth metaheuristic combining a greedy local search with a self-adaptive weighting strategy on the constraint weights.
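To make these definitions concrete, the following minimal sketch (illustrative Python; the variables, domains, and no-goods are invented for this example and are not taken from the chapter) stores a binary CSP as sets of forbidden value pairs and counts the violated constraints that Max-CSP seeks to minimize.

```python
import random

# A small binary CSP: three variables with domain {0, 1} and, for each
# constrained pair of variables, the set of forbidden value pairs (no-goods).
nogoods = {
    (0, 1): {(0, 0), (1, 1)},   # x0 and x1 may not both be 0 or both be 1
    (1, 2): {(0, 1)},           # the pair (x1 = 0, x2 = 1) is forbidden
}

def violated(assignment, nogoods):
    """Number of constraints whose assigned value pair is a no-good."""
    return sum(1 for (i, j), bad in nogoods.items()
               if (assignment[i], assignment[j]) in bad)

# Max-CSP asks for an assignment that minimizes this count, i.e., maximizes
# the number of satisfied constraints.
assignment = [random.randrange(2) for _ in range(3)]
print(assignment, "violates", violated(assignment, nogoods), "constraint(s)")
```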

2. Algorithm

2.1. Multilevel context

The multilevel paradigm is a simple technique, which at its core involves recursive coarsening to produce smaller and smaller problems that are easier to solve than the original one. Multilevel techniques, which originated after 1960, are among the most efficient techniques used for solving large algebraic systems arising from the discretization of partial differential equations. In recent years, it has been recognized that an effective way of enhancing metaheuristics is to use them in the multilevel context. The pseudo-code of the multilevel genetic algorithm is shown in Algorithm 1, and Figure 1 illustrates the multilevel paradigm for six variables and two coarsening levels. The multilevel paradigm consists of four phases: coarsening, initial solution, uncoarsening, and refinement. The coarsening phase merges the variables of the problem to form clusters. The clusters are used in a recursive manner to construct a hierarchy of problems, each representing the original problem but with fewer degrees of freedom. The coarsest level is then used to compute an initial solution. The solution found at the coarsest level is uncoarsened (extended to give an initial solution for the parent level) and then improved using a chosen optimization algorithm. A common feature of multilevel algorithms is that any solution in any of the coarsened problems is a legitimate solution to the original one. Optimization algorithms using the multilevel paradigm draw their strength from coupling the refinement process across different levels.

Algorithm 1.

The multilevel genetic algorithm.
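Algorithm 1 is reproduced in the published chapter as a figure; purely as an illustration of the coarsen/solve/project flow just described (a sketch, not the author's exact pseudo-code), the generic multilevel loop can be written as follows, with `coarsen`, `initial_solution`, `refine_with_ga`, and `project` standing in for the operations detailed in Section 2.2.

```python
def multilevel_solve(problem, num_levels, coarsen, initial_solution,
                     refine_with_ga, project):
    """Generic multilevel flow: coarsen recursively, solve the coarsest problem,
    then repeatedly project the solution to the next finer level and refine it."""
    # Coarsening phase: problem_0 (original) -> problem_1 -> ... -> problem_m.
    levels = [problem]
    for _ in range(num_levels):
        levels.append(coarsen(levels[-1]))

    # Initial solution computed at the coarsest level.
    solution = initial_solution(levels[-1])

    # Uncoarsening and refinement, from the coarsest level back to the original:
    # the solution of each child level is the starting point of its parent level.
    for k in range(len(levels) - 1, -1, -1):
        solution = refine_with_ga(levels[k], solution)
        if k > 0:
            solution = project(solution, levels[k], levels[k - 1])
    return solution
```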

Figure 1.

The different steps of the multilevel paradigm.

2.2. Multilevel genetic algorithm (GA)

GAs [17] are stochastic methods for global search and optimization and belong to the group of nature-inspired metaheuristics that form the fast-growing interdisciplinary field of natural computing, in which a range of techniques and methods are studied for dealing with large, complex, and dynamic problems under various sources of uncertainty. GAs simultaneously examine and manipulate a set of possible solutions. A gene is a part of a chromosome (solution) and is the smallest unit of genetic information. Every gene can assume different values, called alleles. All genes of an organism form a genome, which determines the appearance of the organism, called its phenotype. The chromosomes are encoded using a chosen representation, and each can be thought of as a point in the search space of candidate solutions. Each individual is assigned a score (fitness) value that allows its quality to be assessed. The members of the initial population may be generated randomly or by sophisticated mechanisms that produce an initial population of high-quality chromosomes. The reproduction operator selects chromosomes from the population (randomly or based on fitness) to be parents and places them in a mating pool. Parent individuals are drawn from the mating pool and combined so that information is exchanged and passed to offspring, depending on the probability of the crossover operator. The new population is then subjected to mutation and enters an intermediate population. The mutation operator introduces diversity into the population and is generally applied with a low probability to avoid disrupting the results of crossover. Finally, a selection scheme is used to update the population, giving rise to a new generation. The set of individuals, called the population, evolves from generation to generation through repeated applications of an evaluation procedure and the genetic operators. Over many generations, the population becomes increasingly uniform until it ultimately converges to optimal or near-optimal solutions. The different steps of the multilevel weighted genetic algorithm are described as follows:

  • construction of levels: let G_0 = (V_0, E_0) be an undirected graph with vertex set V_0 and edge set E_0. The vertices denote the variables, and each edge (x_i, x_j) ∈ E_0 represents a constraint joining the variables x_i and x_j. Given the initial graph G_0, the graph is repeatedly transformed into smaller and smaller graphs G_1, G_2, …, G_m such that |V_0| > |V_1| > … > |V_m|. To coarsen a graph from G_j to G_{j+1}, a number of different techniques may be used. In this chapter, when combining variables into clusters, the variables are visited in a random order. If a variable x_i has not been matched yet, the algorithm randomly selects one of its unmatched neighboring variables x_j, and a new cluster consisting of these two variables is created; its neighbors are the combined neighbors of the merged variables x_i and x_j. Variables that remain unmatched are simply copied to the next level (a code sketch of this matching step, together with the weighted fitness, crossover, selection, and weight-update mechanisms, is given after this list).

  • initial assignment: the process of constructing a hierarchy of graphs ceases as soon as the size of the coarsest graph reaches some desired threshold. A random initial population is generated at the lowest level G_k = (V_k, E_k). The chromosomes, which are assignments of values to the variables, are encoded as strings whose length is the number of variables; at the lowest level, the length of the chromosome is equal to the number of clusters. The initial solution is constructed by assigning to all variables in a cluster the same random value v_i. In this work, it is assumed that all variables have the same domain (i.e., the same set of values); otherwise, a separate random value would have to be assigned to each variable in the cluster. All the individuals of the initial population are evaluated and assigned the fitness expressed in Eq. (1), which counts the (weighted) number of constraint violations, where ⟨x_i, s_i⟩, ⟨x_j, s_j⟩ denotes the constraint between the variables x_i and x_j in which x_i is assigned the value s_i from D(x_i) and x_j is assigned the value s_j from D(x_j).

$$\mathrm{Fitness} = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \mathrm{Violation}_{W_{i,j}}\left(\langle x_i, s_i \rangle, \langle x_j, s_j \rangle\right) \tag{1}$$
  • initial weights: the next step of the algorithm assigns a fixed amount of weight, equal to 1, to every constraint. The distribution of weights over constraints aims at forcing hard constraints, which accumulate large weights, to be satisfied, thereby preventing the algorithm from getting stuck at a local optimum at a later stage.

  • optimization: having computed an initial solution at the coarsest graph, GA starts the search process from the coarsest level G_k = (V_k, E_k) and continues to move toward finer levels. The motivation behind this strategy is that the order in which the levels are traversed offers a better mechanism for performing diversification and intensification. The coarsest level allows GA to view any cluster of variables as a single entity, guiding the search toward faraway regions of the solution space while restricting it to those configurations in which the variables grouped within a cluster are assigned the same value. As the switch from one level to another implies a decrease in the size of the neighborhood, the search is intensified around the solutions obtained at previous levels in order to reach better ones.

  • parent selection: during the optimization, new solutions are created by combining pairs of individuals in the population and then applying a crossover operator to each chosen pair. Combining pairs of individuals can be viewed as a matching process. In the version of GA used in this work, the individuals are visited in random order, and an unmatched individual i_k is matched randomly with another unmatched individual i_l.

  • genetic operators: the task of the crossover operator is to reach regions of the search space with higher average quality. The two-point crossover operator is applied to each matched pair of individuals: it selects two points at random within a chromosome and then interchanges the segments of the two parent chromosomes between these points to generate two new offspring.

  • survivor selection: the selection acts on individuals in the current population and, based on each individual's quality (fitness), determines the next population. In the roulette method, the selection is stochastic and biased toward the best individuals. The first step is to calculate the cumulative fitness of the whole population as the sum of the fitness values of all individuals. After that, the probability of selection is calculated for each individual i as P_selection(i) = f_i / Σ_{j=1}^{N} f_j, where f_i is the fitness of individual i and N is the population size.

  • updating weights: the weight of each currently violated constraint is then increased by one, whereas newly satisfied constraints have their weights decreased by one, before the start of the new generation.

  • termination condition: GA is considered to have converged if the best individual remains unchanged during five consecutive generations.

  • projection: once GA has reached the convergence criterion with respect to a child-level graph G_k = (V_k, E_k), the assignment reached on that level must be projected onto its parent graph G_{k-1} = (V_{k-1}, E_{k-1}). The projection algorithm is simple: if a cluster belonging to G_k = (V_k, E_k) is assigned the value v_i, the pair of clusters that it represents in G_{k-1} = (V_{k-1}, E_{k-1}) is also assigned the value v_i.
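To make the individual steps concrete, the sketch below shows one possible realization in Python of the random-matching coarsening, the weighted fitness of Eq. (1), the two-point crossover, the roulette selection probabilities, and the weight-update rule. It is an illustrative sketch under stated assumptions (the chapter's implementation is in C): `nogoods` maps each constrained variable pair to its set of forbidden value pairs, `weights` maps the same pairs to their current weights, and the minimum weight of 1 in `update_weights` is an added assumption, not taken from the chapter.

```python
import random

def coarsen(variables, neighbors, rng=random):
    """One coarsening step: visit variables in random order and merge each
    unmatched variable with a randomly chosen unmatched neighbor; variables
    without an unmatched neighbor are copied to the next level as singletons."""
    matched, clusters = set(), []
    for v in rng.sample(list(variables), len(variables)):
        if v in matched:
            continue
        candidates = [u for u in neighbors[v] if u not in matched and u != v]
        if candidates:
            u = rng.choice(candidates)
            clusters.append((v, u))
            matched.update((v, u))
        else:
            clusters.append((v,))
            matched.add(v)
    return clusters

def weighted_fitness(assignment, nogoods, weights):
    """Eq. (1): sum of the weights of violated constraints.  With all weights
    equal to 1 this is simply the number of violated constraints."""
    return sum(weights[pair]
               for pair, bad in nogoods.items()
               if (assignment[pair[0]], assignment[pair[1]]) in bad)

def two_point_crossover(parent1, parent2, rng=random):
    """Pick two random cut points and swap the middle segments."""
    a, b = sorted(rng.sample(range(1, len(parent1)), 2))
    return (parent1[:a] + parent2[a:b] + parent1[b:],
            parent2[:a] + parent1[a:b] + parent2[b:])

def roulette_probabilities(fitnesses):
    """Selection probability of individual i: f_i / sum_j f_j (as in the text)."""
    total = float(sum(fitnesses))
    return [f / total for f in fitnesses]

def update_weights(assignment, nogoods, weights):
    """Raise the weight of every currently violated constraint by one and
    lower the weight of satisfied constraints by one (kept at a minimum of 1,
    an assumption of this sketch)."""
    for pair, bad in nogoods.items():
        if (assignment[pair[0]], assignment[pair[1]]) in bad:
            weights[pair] += 1
        else:
            weights[pair] = max(1, weights[pair] - 1)
```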

3. Experimental results

3.1. Experimental setup

The benchmark instances were generated using model A [18] as follows: each instance is defined by the 4-tuple (n, m, p_d, p_t), where n is the number of variables; m is the size of each variable's domain; p_d, the constraint density, is the proportion of pairs of variables that have a constraint between them; and p_t, the constraint tightness, is the probability that a pair of values is inconsistent. From the n(n-1)/2 possible constraints, each one is independently chosen to be added to the constraint graph with probability p_d. Given a constraint, each value pair becomes a no-good with probability p_t. A model A instance will therefore on average have p_d × n(n-1)/2 constraints, each of which has on average p_t × m² inconsistent pairs of values. For each pair of density and tightness values, we generate one soluble instance (i.e., at least one solution exists). Because of the stochastic nature of GA, we let each algorithm perform 100 independent runs, each run with a different random seed. Many NP-complete or NP-hard problems show a phase transition point that marks the spot where we go from problems that are under-constrained, and so relatively easy to solve, to problems that are over-constrained, and so relatively easy to prove insoluble. Problems that are on average hardest to solve occur between these two types of relatively easy problems. The values of p_d and p_t are chosen in such a way that the instances generated lie within the phase transition. In order to predict the phase transition region, a formula for the constrainedness [19] of binary CSPs was defined by:

$$\kappa = \frac{n-1}{2}\, p_d \, \log_m\!\left(\frac{1}{1-p_t}\right). \tag{2}$$
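As an illustration of how such instances and their constrainedness can be produced, the following sketch (illustrative Python with hypothetical names; not the generator used for the published benchmarks, and the filtering for soluble instances is omitted) draws a model A instance and evaluates Eq. (2). The example parameters correspond to one of the instance families appearing in the tables below.

```python
import itertools
import math
import random

def model_a_instance(n, m, p_d, p_t, seed=None):
    """Random binary CSP (model A): each of the n(n-1)/2 variable pairs is
    constrained with probability p_d; within a constraint, each of the m*m
    value pairs is a no-good with probability p_t."""
    rng = random.Random(seed)
    nogoods = {}
    for i, j in itertools.combinations(range(n), 2):
        if rng.random() < p_d:
            bad = {(a, b) for a in range(m) for b in range(m)
                   if rng.random() < p_t}
            nogoods[(i, j)] = bad
    return nogoods

def constrainedness(n, m, p_d, p_t):
    """Eq. (2): kappa = ((n - 1) / 2) * p_d * log_m(1 / (1 - p_t))."""
    return (n - 1) / 2 * p_d * math.log(1.0 / (1.0 - p_t), m)

# Example: parameters near the phase transition (kappa close to 1).
print(round(constrainedness(n=25, m=20, p_d=0.26, p_t=0.61), 2))
```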

The tests were carried out on a DELL machine with 800 MHz CPU and 2 GB of memory. The code was written in C and compiled with the GNU C compiler version 4.6. The following parameters have been fixed experimentally and are listed below:

  • Population size = 50

  • Stopping criterion for the coarsening phase: the reduction process stops as soon as the number of levels reaches 3. At this level, MLV-WGA generates an initial population.

  • Convergence during the optimization phase: if there is no observable improvement of the fitness function of the best individual during five consecutive generations, MLV-WGA is assumed to have reached convergence and moves to a higher level.
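Collected in one place, these fixed settings might be expressed as follows (a sketch with hypothetical field names; the chapter's implementation is in C and does not necessarily organize its parameters this way).

```python
from dataclasses import dataclass

@dataclass
class MLVWGAConfig:
    population_size: int = 50     # individuals per generation
    coarsening_levels: int = 3    # stop coarsening after three levels
    convergence_window: int = 5   # generations without improvement of the best
                                  # individual before moving to the next level
```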

3.2. Results

The plots in Figures 2 and 3 compare WGA with its multilevel variant MLV-WGA. The improvement in quality imparted by the multilevel context is immediately clear. Both WGA and MLV-WGA exhibit a so-called plateau region, which spans a portion of the search where the crossover and mutation operators leave the best solution or the mean solution unchanged. However, this region is shorter for MLV-WGA than for WGA. The multilevel context uses the projected solution obtained at level G_{m+1} = (V_{m+1}, E_{m+1}) as the initial solution for level G_m = (V_m, E_m) for further refinement. Even though the solution at G_{m+1} is at a local minimum, the projected solution may not be at a local optimum with respect to G_m. The projected assignment is already a good solution, leading WGA to converge more quickly, within a few generations, to a better solution. Tables 1-3 show a comparison of the two algorithms. For each algorithm, the best (Min) and the worst (Max) results are given, while Mean represents the average solution. MLV-WGA outperforms WGA in 53 cases out of 96, gives similar results in 20 cases, and is beaten in 23 cases. The performance of the two algorithms differs significantly: the difference in total performance lies between 25 and 70% in favor of MLV-WGA. Comparing the worst performances of both algorithms, MLV-WGA gave worse results in 15 cases, both algorithms gave similar results in 8 cases, and MLV-WGA performed better than WGA in 73 cases. Looking at the average results, MLV-WGA does between 16 and 41% better than WGA in 84 cases, while the differences are very marginal in the remaining cases where WGA beats MLV-WGA.

Figure 2.

MLV-GA vs. GA: evolution of the mean unsatisfied constraints as a function of time. Csp-N30-DS40-C125-cd026ct063.

Figure 3.

MLV-GA vs. GA: evolution of the mean unsatisfied constraints as a function of time. Csp-N35-DS20-C562-cd094-ct017.

| Instance | MLV-WGA Min | MLV-WGA Max | MLV-WGA Mean | MLV-WGA REav | WGA Min | WGA Max | WGA Mean | WGA REav |
| N25-DS20-C36-cd-014-ct083 | 3 | 7 | 4.58 | 0.128 | 3 | 8 | 5.41 | 0.151 |
| N25-DS20-C44-cd012-ct087 | 6 | 10 | 8.04 | 0.183 | 8 | 14 | 9.91 | 0.226 |
| N25-DS20-C54-cd018-ct075 | 3 | 7 | 5.37 | 0.100 | 4 | 9 | 6.91 | 0.128 |
| N25-DS20-C78-cd026-ct061 | 2 | 8 | 4.33 | 0.056 | 2 | 10 | 5.79 | 0.073 |
| N25-DS20-C225-cd078-ct027 | 3 | 8 | 4.16 | 0.019 | 3 | 9 | 5.66 | 0.026 |
| N25-DS20-C229-cd072-ct029 | 4 | 9 | 6.04 | 0.014 | 4 | 11 | 8.16 | 0.036 |
| N25-DS20-C242-cd086-ct025 | 1 | 6 | 3.5 | 0.015 | 3 | 10 | 5.70 | 0.024 |
| N25-DS20-C269-cd086-ct025 | 4 | 10 | 5.66 | 0.022 | 4 | 10 | 7.54 | 0.029 |
| N25-DS20-C279-cd094-ct023 | 2 | 7 | 4.75 | 0.018 | 4 | 9 | 6.75 | 0.025 |
| N25-DS40-C53-cd016-ct085 | 6 | 11 | 8.91 | 0.169 | 8 | 13 | 10.70 | 0.202 |
| N25-DS40-C70-cd026-ct069 | 2 | 6 | 4.25 | 0.061 | 3 | 8 | 5.75 | 0.083 |
| N25-DS40-C72-cd022-ct075 | 6 | 12 | 9 | 0.125 | 6 | 15 | 10.45 | 0.146 |
| N25-DS40-C102-cd032-ct061 | 5 | 12 | 8.12 | 0.080 | 7 | 14 | 10.33 | 0.102 |
| N25-DS40-C103-cd034-ct059 | 5 | 9 | 6.83 | 0.067 | 4 | 12 | 8.79 | 0.086 |
| N25-DS40-C237-cd082-ct031 | 3 | 8 | 5.66 | 0.024 | 5 | 10 | 7.87 | 0.034 |
| N25-DS40-C253-cd088-ct029 | 3 | 7 | 4.95 | 0.020 | 5 | 12 | 8.04 | 0.032 |
| N25-DS40-C264-cd088-ct029 | 5 | 10 | 6.91 | 0.027 | 6 | 16 | 8.91 | 0.034 |
| N25-DS40-C281-cd096-ct027 | 3 | 9 | 5.62 | 0.020 | 4 | 12 | 8.54 | 0.031 |
| N25-DS40-C290-cd096-ct027 | 4 | 10 | 7.08 | 0.025 | 6 | 14 | 9 | 0.032 |

Table 1.

MLV-WGA vs. WGA: number of variables: 25.

REav denotes the relative error in percent. The value in bold shows the algorithm with the lowest RE.

| Instance | MLV-WGA Min | MLV-WGA Max | MLV-WGA Mean | MLV-WGA REav | WGA Min | WGA Max | WGA Mean | WGA REav |
| N30-DS20-C41-cd012-ct083 | 2 | 6 | 3.70 | 0.026 | 3 | 7 | 5.08 | 0.124 |
| N30-DS20-C71-cd018-ct069 | 1 | 7 | 3.37 | 0.048 | 3 | 10 | 5.66 | 0.080 |
| N30-DS20-C85-cd020-ct065 | 3 | 9 | 6 | 0.071 | 5 | 12 | 8.37 | 0.099 |
| N30-DS20-C119-cd028-ct053 | 3 | 10 | 5.70 | 0.048 | 6 | 12 | 8.83 | 0.075 |
| N30-DS20-C334-cd074-ct025 | 6 | 13 | 8.16 | 0.025 | 6 | 14 | 9.87 | 0.030 |
| N30-DS20-C387-cd090-ct021 | 3 | 9 | 6.66 | 0.018 | 5 | 13 | 8.70 | 0.033 |
| N30-DS20-C389-cd090-ct021 | 2 | 9 | 6.08 | 0.016 | 4 | 14 | 8.95 | 0.024 |
| N30-DS20-C392-cd090-ct021 | 3 | 10 | 7.08 | 0.019 | 5 | 15 | 9.16 | 0.024 |
| N30-DS20-C399-cd090-ct021 | 5 | 13 | 7.70 | 0.020 | 6 | 14 | 9.79 | 0.025 |
| N30-DS40-C85-cd020-ct073 | 5 | 11 | 7.75 | 0.092 | 7 | 14 | 10.87 | 0.152 |
| N30-DS40-C96-cd020-ct073 | 8 | 12 | 16 | 0.167 | 11 | 19 | 14.58 | 0.015 |
| N30-DS40-C121-cd026-ct063 | 8 | 14 | 10.5 | 0.087 | 9 | 19 | 14.33 | 0.152 |
| N30-DS40-C125-cd026-ct063 | 8 | 18 | 12.20 | 0.098 | 10 | 19 | 15.58 | 0.125 |
| N30-DS40-C173-cd044-ct045 | 4 | 10 | 6.41 | 0.038 | 6 | 14 | 9.20 | 0.054 |
| N30-DS40-C312-cd070-ct031 | 7 | 14 | 10.5 | 0.033 | 7 | 19 | 13.33 | 0.025 |
| N30-DS40-C328-cd076-ct029 | 6 | 13 | 10.37 | 0.032 | 10 | 18 | 13.45 | 0.042 |
| N30-DS40-C333-cd076-ct029 | 7 | 13 | 10.25 | 0.031 | 9 | 18 | 12.62 | 0.038 |
| N30-DS40-C389-cd090-ct025 | 6 | 13 | 9.33 | 0.024 | 9 | 17 | 12.20 | 0.032 |
| N30-DS40-C390-cd090-ct025 | 6 | 14 | 9.29 | 0.024 | 10 | 17 | 13 | 0.031 |

Table 2.

MLV-WGA vs. WGA: number of variables: 30.

REav denotes the relative error in percent. The value in bold shows the algorithm with the lowest RE.

| Instance | MLV-WGA Min | MLV-WGA Max | MLV-WGA Mean | MLV-WGA REav | WGA Min | WGA Max | WGA Mean | WGA REav |
| N40-DS20-C78-cd010-ct079 | 6 | 12 | 8.91 | 0.115 | 5 | 13 | 9.04 | 0.116 |
| N40-DS20-C80-cd010-ct079 | 7 | 13 | 9.62 | 0.121 | 7 | 13 | 10.04 | 0.153 |
| N40-DS20-C82-cd012-ct073 | 4 | 9 | 6.25 | 0.073 | 4 | 11 | 6.95 | 0.085 |
| N40-DS20-C95-cd014-ct067 | 2 | 8 | 4.45 | 0.047 | 2 | 7 | 4.12 | 0.044 |
| N40-DS20-C653-cd084-ct017 | 2 | 14 | 9.37 | 0.015 | 6 | 16 | 10.62 | 0.018 |
| N40-DS20-C660-cd084-ct017 | 6 | 14 | 9.12 | 0.014 | 7 | 6 | 9.75 | 0.015 |
| N40-DS20-C751-cd096-ct015 | 6 | 13 | 9.91 | 0.014 | 5 | 13 | 9.83 | 0.014 |
| N40-DS20-C752-cd096-ct015 | 5 | 17 | 9.29 | 0.013 | 3 | 13 | 9.20 | 0.013 |
| N40-DS20-C756-cd096-ct015 | 6 | 15 | 9.95 | 0.014 | 5 | 16 | 8.75 | 0.012 |
| N40-DS40-C106-cd014-ct075 | 7 | 14 | 11.08 | 0.105 | 7 | 16 | 11.5 | 0.109 |
| N40-DS40-C115-cd014-ct075 | 12 | 20 | 15.5 | 0.135 | 11 | 20 | 15.5 | 0.135 |
| N40-DS40-C181-cd024-ct055 | 6 | 17 | 12.04 | 0.067 | 7 | 17 | 11.75 | 0.065 |
| N40-DS40-C196-cd024-ct055 | 11 | 12 | 16.58 | 0.085 | 12 | 20 | 15.54 | 0.080 |
| N40-DS40-C226-cd030-ct047 | 7 | 14 | 10.91 | 0.051 | 7 | 16 | 11.16 | 0.050 |
| N40-DS40-C647-cd082-ct021 | 11 | 23 | 15.66 | 0.025 | 11 | 20 | 15.20 | 0.024 |
| N40-DS40-C658-cd082-ct021 | 11 | 22 | 16.33 | 0.025 | 13 | 21 | 16.70 | 0.026 |
| N40-DS40-C703-cd092-ct019 | 9 | 21 | 13.41 | 0.020 | 8 | 20 | 13.58 | 0.020 |
| N40-DS40-C711-cd092-ct019 | 12 | 23 | 15.75 | 0.023 | 8 | 20 | 14.87 | 0.021 |
| N40-DS40-C719-cd092-ct019 | 8 | 21 | 16.54 | 0.024 | 10 | 20 | 15.16 | 0.022 |

Table 3.

MLV-WGA vs. WGA: number of variables 40.

REav denotes the relative error in percent. The value in bold shows the algorithm with the lowest RE.

4. Conclusion

In this work, a multilevel weighted genetic algorithm is introduced for Max-CSP. The results have shown that, in most cases, the multilevel genetic algorithm returns a better solution than the standard genetic algorithm for an equivalent run-time. The multilevel paradigm offers a better strategy for performing diversification and intensification. This is achieved by allowing GA to view a cluster of variables as a single entity, thereby keeping the search guided and restricted to only those assignments in the solution space in which the variables grouped within a cluster are assigned the same value. As the size of the clusters gets smaller from one level to another, the size of the neighborhood becomes adaptive and allows the exploration of different regions in the search space while intensifying the search by exploiting the solutions from previous levels in order to reach better solutions.

References

  1. Dechter R, Pearl J. Tree clustering for constraint networks. Artificial Intelligence. 1989;38:353-366
  2. Minton S, Johnson M, Philips A, Laird P. Minimizing conflicts: A heuristic repair method for constraint satisfaction and scheduling problems. Artificial Intelligence. 1992;58:161-205
  3. Morris P. The breakout method for escaping from local minima. In: Proceedings of the Eleventh National Conference on Artificial Intelligence (AAAI'93). 1993. pp. 40-45
  4. Wallace R, Freuder E. Heuristic methods for over-constrained constraint satisfaction problems. In: Over-Constrained Systems. LNCS. Vol. 1106. Berlin, Germany: Springer Verlag; 1995. pp. 207-216
  5. Galinier P, Hao J. Tabu search for maximal constraint satisfaction problems. In: Principles and Practice of Constraint Programming (CP 1997). LNCS. Vol. 1330. Berlin, Germany: Springer Verlag; 1997. pp. 196-208
  6. Zhou Y, Zhou G, Zhang J. A hybrid glowworm swarm optimization algorithm for constrained engineering design problems. Applied Mathematics and Information Sciences. 2013;7(1):379-388
  7. Davenport A, Tsang E, Wang C, Zhu K. GENET: A connectionist architecture for solving constraint satisfaction problems by iterative improvement. In: Proceedings of the Twelfth National Conference on Artificial Intelligence. 1994
  8. Voudouris C, Tsang E. Guided local search. In: Handbook of Metaheuristics. International Series in Operations Research and Management Science. 2003;57:185-218
  9. Schuurmans D, Southey F, Holte R. The exponentiated subgradient algorithm for heuristic Boolean programming. In: 17th International Joint Conference on Artificial Intelligence. San Francisco, CA, USA: Morgan Kaufmann Publishers; 2001. pp. 334-341
  10. Shang Y, Wah B. A discrete Lagrangian-based global-search method for solving satisfiability problems. Journal of Global Optimization. 1998;12(1):61-99
  11. Hutter F, Tompkins D, Hoos H. Scaling and probabilistic smoothing: Efficient dynamic local search for SAT. In: Principles and Practice of Constraint Programming (CP 2002). LNCS. Vol. 2470. Berlin, Germany: Springer Verlag; 2002. pp. 233-248
  12. Amante D, Marin A. Adaptive penalty weights when solving congress timetabling. Advances in Artificial Intelligence. Lecture Notes in Computer Science. 2004;3315:144-153
  13. Pullan W, Mascia F, Brunato M. Cooperating local search for the maximum clique problem. Journal of Heuristics. 2011;17:181-199
  14. Fang S, Chu Y, Qiao K, Feng X, Xu K. Combining edge weight and vertex weight for minimum vertex cover problem. In: Frontiers in Algorithmics (FAW 2014). 2014. pp. 71-81
  15. Lee H, Cha S, Yu Y, Jo G. Large neighborhood search using constraint satisfaction techniques in vehicle routing problem. In: Gao Y, Japkowicz N, editors. Advances in Artificial Intelligence. Lecture Notes in Computer Science. Vol. 5549. Berlin, Heidelberg: Springer; 2010. pp. 229-232
  16. Bouhmala N. A variable depth search algorithm for binary constraint satisfaction problems. Mathematical Problems in Engineering. 2015;2015: Article ID 637809, 10 pages. DOI: 10.1155/2015/637809
  17. Holland J. Adaptation in Natural and Artificial Systems. Ann Arbor: The University of Michigan Press; 1975
  18. Xu W. Satisfiability transition and experiments on a random constraint satisfaction problem model. International Journal of Hybrid Information Technology. 2014;7(2):191-202
  19. Gent IP, MacIntyre E, Prosser P, Walsh T. The constrainedness of search. In: Proceedings of AAAI-96. 1996. pp. 246-252
