Open access peer-reviewed chapter

A Multilevel Genetic Algorithm for the Maximum Satisfaction Problem

By Noureddine Bouhmala

Submitted: November 1st 2017. Reviewed: May 3rd 2018. Published: June 27th 2018.

DOI: 10.5772/intechopen.78299


Abstract

Genetic algorithms (GA), which belong to the class of evolutionary algorithms, are regarded as highly successful when applied to a broad range of discrete as well as continuous optimization problems. This chapter introduces a hybrid approach combining a genetic algorithm with the multilevel paradigm for solving the maximum constraint satisfaction problem (Max-CSP). The multilevel paradigm refers to the process of dividing large and complex problems into smaller ones, which are hopefully much easier to solve, and then working backward toward the solution of the original problem, using the solution reached at a child level as the starting solution for the parent level. The promising performance of the proposed approach is demonstrated by comparisons on conventional random benchmark problems.

Keywords

  • maximum constraint satisfaction problem
  • genetic algorithms
  • multilevel paradigm

1. Introduction

Many problems in the field of artificial intelligence can be modeled as constraint satisfaction problems (CSP). A CSP is a tuple (X, D, C) where X = {x1, x2, …, xn} is a finite set of variables and D = {D(x1), D(x2), …, D(xn)} is a finite set of domains; each variable x ∈ X has a corresponding discrete domain D(x) from which it can be instantiated. C = {C1, C2, …, Ck} is a finite set of constraints. Each k-ary constraint restricts a k-tuple of variables (x1, x2, …, xk) and specifies a subset of D1 × … × Dk, each element of which is a combination of values that the variables cannot take simultaneously. A solution to a CSP assigns to each variable a value from its domain such that all the constraints are satisfied. The maximum constraint satisfaction problem (Max-CSP) aims instead at finding an assignment that maximizes the number of satisfied constraints; Max-CSP can thus be regarded as a generalization of CSP. In this chapter, attention is focused on binary CSPs, where all constraints are binary, that is, they are based on the cartesian product of the domains of two variables. However, any non-binary CSP can theoretically be converted to a binary CSP [1]. Many algorithms for solving CSPs apply the so-called 1-exchange neighborhood, under which two solutions are direct neighbors if, and only if, they differ in the value assigned to at most one variable. Examples include the minimum conflict heuristic MCH [2], the break method for escaping from local minima [3], and various enhancements of MCH (e.g., randomized iterative improvement of MCH called WMCH [4], MCH with tabu search [5], and evolutionary algorithms [6]). Weighting-based algorithms are techniques that introduce weights on variables or constraints in order to avoid local minima.
Methods belonging to this category include genet [7], guided local search [8], the exponentiated subgradient [9], discrete Lagrangian search [10], scaling and probabilistic smoothing [11], evolutionary algorithms combined with stepwise adaptation of weights [12], methods based on dynamically adapting weights on variables [13], or on both variables and constraints [14]. Methods based on large neighborhood search have recently attracted several researchers working on the CSP [15]. The central idea is to reduce the size of the local search space by relying on continual relaxation (removing elements from the solution) and re-optimization (re-inserting the removed elements). Finally, the work in [16] introduces a variable depth metaheuristic combining a greedy local search with a self-adaptive weighting strategy on the constraint weights.
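To make the definitions above concrete, the following sketch (in Python, with an invented toy instance; the chapter's own experiments use a C implementation) encodes a binary Max-CSP by listing each constraint's forbidden value pairs (no-goods) and counts how many constraints a given assignment satisfies:

```python
def count_satisfied(assignment, constraints):
    """Count the binary constraints satisfied by `assignment`;
    `constraints` maps a variable pair to its forbidden value
    pairs (no-goods)."""
    return sum((assignment[i], assignment[j]) not in nogoods
               for (i, j), nogoods in constraints.items())

# Hypothetical toy instance: three variables with domain {0, 1}.
constraints = {
    (0, 1): {(0, 0), (1, 1)},   # x0 and x1 must differ
    (1, 2): {(0, 1)},           # x1 = 0 with x2 = 1 is forbidden
}
print(count_satisfied({0: 0, 1: 1, 2: 1}, constraints))  # -> 2
```

A Max-CSP solver would search for the assignment maximizing this count.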

2. Algorithm

2.1. Multilevel context

The multilevel paradigm is a simple technique, which at its core involves recursive coarsening to produce smaller and smaller problems that are easier to solve than the original one. Multilevel techniques were developed in the period after 1960 and are among the most efficient techniques for solving large algebraic systems arising from the discretization of partial differential equations. In recent years, it has been recognized that an effective way of enhancing metaheuristics is to use them in the multilevel context. The pseudo-code of the multilevel genetic algorithm is shown in Algorithm 1, and Figure 1 illustrates the paradigm for six variables and two coarsening levels. The multilevel paradigm consists of four phases: coarsening, initial solution, uncoarsening, and refinement. The coarsening phase merges the variables of the problem to form clusters. The clusters are used in a recursive manner to construct a hierarchy of problems, each representing the original problem but with fewer degrees of freedom. The coarsest level can then be used to compute an initial solution. The solution found at the coarsest level is uncoarsened (extended to give an initial solution for the parent level) and then improved using a chosen optimization algorithm. A common feature that characterizes multilevel algorithms is that any solution at any of the coarsened levels is a legitimate solution to the original problem. Optimization algorithms using the multilevel paradigm draw their strength from coupling the refinement process across different levels.

Algorithm 1.

The multilevel genetic algorithm.

Figure 1.

The different steps of the multilevel paradigm.
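The four phases can be sketched on a toy binary Max-CSP as follows. This is a simplified illustration, not the chapter's algorithm: groups are merged pairwise without consulting the constraint graph, a simple flip-based local search stands in for the GA refinement, and all names and data are invented.

```python
import random

# Toy binary Max-CSP: 6 variables, domain {0, 1}; each constraint
# maps a variable pair to its forbidden value pairs (no-goods).
CONSTRAINTS = {
    (0, 1): {(0, 0)}, (1, 2): {(1, 1)}, (2, 3): {(0, 0)},
    (3, 4): {(1, 1)}, (4, 5): {(0, 0)}, (0, 5): {(1, 1)},
}

def violations(assign):
    """Number of violated constraints under `assign`."""
    return sum((assign[i], assign[j]) in bad
               for (i, j), bad in CONSTRAINTS.items())

def coarsen(groups, rng):
    """Coarsening phase: merge groups pairwise to form the next,
    coarser level (unmatched groups are copied up)."""
    order = list(range(len(groups)))
    rng.shuffle(order)
    merged, coarser = set(), []
    for a in order:
        if a in merged:
            continue
        merged.add(a)
        partner = next((b for b in order if b not in merged), None)
        if partner is None:
            coarser.append(groups[a])
        else:
            merged.add(partner)
            coarser.append(groups[a] + groups[partner])
    return coarser

def refine(groups, assign, rng, steps=200):
    """Refinement phase (stand-in for the GA): flip one whole group
    at a time; keep any non-worsening flip."""
    for _ in range(steps):
        trial = dict(assign)
        for v in rng.choice(groups):
            trial[v] = 1 - trial[v]
        if violations(trial) <= violations(assign):
            assign = trial
    return assign

def multilevel_solve(rng):
    levels = [[[v] for v in range(6)]]       # finest level: singletons
    while len(levels[-1]) > 2:               # coarsening phase
        levels.append(coarsen(levels[-1], rng))
    assign = {v: rng.randint(0, 1) for v in range(6)}  # initial solution
    for groups in reversed(levels):          # uncoarsen + refine upward
        assign = refine(groups, assign, rng)
    return assign

best = multilevel_solve(random.Random(3))
print(violations(best))
```

Note how any assignment produced at a coarse level (all variables of a group sharing one value) is already a legal assignment of the original problem, mirroring the feature discussed above.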

2.2. Multilevel genetic algorithm (GA)

GAs [17] are stochastic methods for global search and optimization and belong to the group of nature-inspired metaheuristics that make up the fast-growing interdisciplinary field of natural computing, in which a range of techniques and methods are studied for dealing with large, complex, and dynamic problems with various sources of potential uncertainty. GAs simultaneously examine and manipulate a set of possible solutions. A gene is a part of a chromosome (solution) and is the smallest unit of genetic information. Every gene is able to assume different values, called alleles. All genes of an organism form a genome, which determines the appearance of the organism, called the phenotype. The chromosomes are encoded using a chosen representation, and each can be thought of as a point in the search space of candidate solutions. Each individual is assigned a score (fitness) value that allows its quality to be assessed. The members of the initial population may be generated randomly or by sophisticated mechanisms that produce an initial population of high-quality chromosomes. The reproduction operator selects (randomly or based on the individuals' fitness) chromosomes from the population to be parents and enters them in a mating pool. Parent individuals are drawn from the mating pool and combined so that information is exchanged and passed to offspring, depending on the probability of the crossover operator. The new population is then subjected to mutation and enters into an intermediate population. The mutation operator acts as an element of diversity in the population and is generally applied with a low probability to avoid disrupting crossover results. Finally, a selection scheme is used to update the population, giving rise to a new generation. The individuals of this set of solutions, called the population, evolve from generation to generation through repeated application of an evaluation procedure based on genetic operators.
Over many generations, the population becomes increasingly uniform until it ultimately converges to optimal or near-optimal solutions. The different steps of the multilevel weighted genetic algorithm are described as follows:

  • construction of levels: let G0 = (V0, E0) be an undirected graph of vertices V0 and edges E0. The set V0 denotes the variables, and each edge (xi, xj) ∈ E0 implies a constraint joining the variables xi and xj. Given the initial graph G0, the graph is repeatedly transformed into smaller and smaller graphs G1, G2, …, Gm such that |V0| > |V1| > … > |Vm|. To coarsen a graph from Gj to Gj+1, a number of different techniques may be used. In this chapter, when combining a set of variables into clusters, the variables are visited in a random order. If a variable xi has not been matched yet, the algorithm randomly selects one of its unmatched neighboring variables xj, and a new cluster consisting of these two variables is created; its neighbors are the combined neighbors of the merged variables xi and xj. Variables that remain unmatched are simply copied to the next level.
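The random matching used for one coarsening pass can be sketched as follows (a hypothetical Python rendering; the adjacency structure and all names are assumptions made for illustration):

```python
import random

def coarsen_level(adjacency, rng):
    """One coarsening pass as described above: visit variables in
    random order and match each with a random unmatched neighbour;
    variables left unmatched are copied up as singleton clusters."""
    order = list(adjacency)
    rng.shuffle(order)
    matched, clusters = set(), []
    for v in order:
        if v in matched:
            continue
        free = [u for u in adjacency[v] if u not in matched and u != v]
        if free:
            u = rng.choice(free)
            matched.update((v, u))
            clusters.append((v, u))   # cluster inherits both neighbourhoods
        else:
            matched.add(v)
            clusters.append((v,))     # unmatched: copied to next level
    return clusters

# Hypothetical constraint graph on six variables.
adjacency = {0: {1, 2}, 1: {0, 3}, 2: {0, 4}, 3: {1, 5}, 4: {2}, 5: {3}}
print(coarsen_level(adjacency, random.Random(7)))
```

Each returned cluster becomes a single vertex of the next-coarser graph.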

  • initial assignment: the process of constructing a hierarchy of graphs ceases as soon as the size of the coarsest graph reaches some desired threshold. A random initial population is generated at the lowest level Gk = (Vk, Ek). The chromosomes, which are assignments of values to the variables, are encoded as strings of bits whose length is the number of variables; at the lowest level, the length of the chromosome equals the number of clusters. The initial solution is constructed by assigning to all variables in a cluster the same random value vi. In this work, it is assumed that all variables have the same domain (i.e., the same set of values); otherwise, a different random value would have to be assigned to each variable in the cluster. All the individuals of the initial population are evaluated and assigned a fitness, expressed in Eq. (1), which counts the weighted number of constraint violations, where <xi←si, xj←sj> denotes the constraint between the variables xi and xj when xi is assigned the value si from D(xi) and xj is assigned the value sj from D(xj).

Fitness = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} W_{i,j} · Violation(<x_i←s_i, x_j←s_j>)    (1)
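Eq. (1) can be read as a weighted count of violated constraints. A small hypothetical sketch, storing each constraint's no-goods and weight in dictionaries keyed by variable pairs (all data invented):

```python
def fitness(assignment, constraints, weights):
    """Eq. (1): sum the weight W[i, j] of every constrained pair
    (i, j) whose assigned value pair is a no-good."""
    total = 0
    for (i, j), nogoods in constraints.items():
        if (assignment[i], assignment[j]) in nogoods:
            total += weights[(i, j)]
    return total

# Hypothetical instance: three constraints, unit initial weights.
constraints = {(0, 1): {(1, 1)}, (1, 2): {(0, 0)}, (0, 2): {(1, 0)}}
weights = {pair: 1 for pair in constraints}
print(fitness({0: 1, 1: 1, 2: 0}, constraints, weights))  # -> 2
```

With all weights equal to 1 this is simply the number of violated constraints, so minimizing it maximizes the number of satisfied ones.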
  • initial weights: the next step of the algorithm assigns a fixed weight equal to 1 to every constraint. The subsequent redistribution of weights over the constraints aims at forcing hard constraints, which carry large weights, to be satisfied, thereby preventing the algorithm at a later stage from getting stuck in a local optimum.

  • optimization: having computed an initial solution at the coarsest graph, GA starts the search process from the coarsest level Gk = (Vk, Ek) and continues to move toward finer levels. The motivation behind this strategy is that the order in which the levels are traversed offers a better mechanism for performing diversification and intensification. The coarsest level allows GA to view any cluster of variables as a single entity, guiding the search toward faraway regions of the solution space while restricting it to only those configurations in which the variables grouped within a cluster are assigned the same value. As the switch from one level to another implies a decrease in the size of the neighborhood, the search is intensified around solutions from previous levels in order to reach better ones.

  • parent selection: during the optimization, new solutions are created by combining pairs of individuals in the population and then applying a crossover operator to each chosen pair. Combining pairs of individuals can be viewed as a matching process. In the version of GA used in this work, the individuals are visited in random order, and an unmatched individual i_k is matched randomly with an unmatched individual i_l.

  • genetic operators: the task of the crossover operator is to reach regions of the search space with higher average quality. The two-point crossover operator is applied to each matched pair of individuals: it selects two random points within a chromosome and then interchanges the two parent chromosomes between these points to generate two new offspring.
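A sketch of two-point crossover as described above (the list encoding and names are assumptions; the chapter's chromosomes are value strings, here simplified to bits):

```python
import random

def two_point_crossover(parent1, parent2, rng):
    """Pick two random cut points and swap the middle segments."""
    a, b = sorted(rng.sample(range(1, len(parent1)), 2))
    child1 = parent1[:a] + parent2[a:b] + parent1[b:]
    child2 = parent2[:a] + parent1[a:b] + parent2[b:]
    return child1, child2

rng = random.Random(0)
c1, c2 = two_point_crossover([0] * 8, [1] * 8, rng)
print(c1, c2)  # each child keeps its parent's genes outside the cut segment
```

Using all-0 and all-1 parents makes the swapped segment visible at a glance.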

  • survivor selection: the selection acts on the individuals in the current population and, based on each individual's quality (fitness), determines the next population. In the roulette method, the selection is stochastic and biased toward the best individuals. The first step is to calculate the cumulative fitness of the whole population as the sum of the fitness of all individuals. After that, the probability of selection is calculated for each individual as P_selection(i) = f_i / Σ_{j=1}^{N} f_j, where f_i is the fitness of individual i and N is the population size.
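The roulette-wheel rule can be sketched as follows. Note that the formula as stated favors individuals with large fitness values; since the fitness in Eq. (1) counts violations (lower is better), an implementation for Max-CSP would invert or rescale the scores first. This sketch simply implements the stated formula with invented scores:

```python
import random

def roulette_select(fitnesses, rng):
    """Spin the wheel once: index i is returned with probability
    f_i / sum(f)."""
    total = sum(fitnesses)
    spin = rng.uniform(0.0, total)
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if spin <= acc:
            return i
    return len(fitnesses) - 1   # guard against floating-point rounding

rng = random.Random(1)
picks = [roulette_select([1.0, 3.0, 6.0], rng) for _ in range(10000)]
print([picks.count(i) / 10000 for i in range(3)])  # roughly [0.1, 0.3, 0.6]
```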

  • updating weights: the weight of each currently violated constraint is increased by one, whereas each newly satisfied constraint has its weight decreased by one before the start of the new generation.
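A hypothetical sketch of this update; the floor of 1 is an added safeguard, not stated in the chapter, and the data structures are the same invented ones used above:

```python
def update_weights(weights, assignment, constraints):
    """Add one to each violated constraint's weight and subtract one
    from each satisfied constraint's weight (floored at 1, an added
    safeguard not stated in the chapter)."""
    for (i, j), nogoods in constraints.items():
        if (assignment[i], assignment[j]) in nogoods:
            weights[(i, j)] += 1
        else:
            weights[(i, j)] = max(1, weights[(i, j)] - 1)
    return weights

constraints = {(0, 1): {(1, 1)}, (1, 2): {(0, 0)}}
weights = {(0, 1): 1, (1, 2): 1}
print(update_weights(weights, {0: 1, 1: 1, 2: 0}, constraints))
# -> {(0, 1): 2, (1, 2): 1}
```

Constraints that stay violated thus accumulate weight and dominate the fitness in later generations.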

  • termination condition: the convergence of GA is supposed to be reached if the best individual remains unchanged during five consecutive generations.

  • projection: once GA has reached the convergence criterion with respect to a child-level graph Gk = (Vk, Ek), the assignment reached on that level must be projected onto its parent graph Gk−1 = (Vk−1, Ek−1). The projection algorithm is simple: if a cluster belonging to Gk = (Vk, Ek) is assigned the value vi, the merged pair of clusters that it represents in Gk−1 = (Vk−1, Ek−1) is also assigned the value vi.
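The projection step can be sketched as a one-pass copy (the cluster bookkeeping and all names are assumptions made for illustration):

```python
def project(coarse_assignment, clusters):
    """Extend a coarse-level assignment to the parent level: every
    variable merged into a cluster inherits the cluster's value."""
    fine = {}
    for cluster_id, value in coarse_assignment.items():
        for v in clusters[cluster_id]:
            fine[v] = value
    return fine

# Hypothetical bookkeeping: cluster id -> the variables it merged.
clusters = {0: (0, 3), 1: (1, 4), 2: (2,)}
coarse = {0: 'a', 1: 'b', 2: 'a'}
print(project(coarse, clusters))  # -> {0: 'a', 3: 'a', 1: 'b', 4: 'b', 2: 'a'}
```

The projected assignment then serves as the initial population seed for refinement at the parent level.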

3. Experimental results

3.1. Experimental setup

The benchmark instances were generated using model A [18] as follows: each instance is defined by the 4-tuple (n, m, p_d, p_t), where n is the number of variables; m is the size of each variable's domain; p_d, the constraint density, is the proportion of pairs of variables that have a constraint between them; and p_t, the constraint tightness, is the probability that a pair of values is inconsistent. Each of the n(n−1)/2 possible constraints is independently chosen to be added to the constraint graph with probability p_d. Given a constraint, each value pair becomes a no-good with probability p_t. An instance generated under model A will on average have p_d × n(n−1)/2 constraints, each of which has on average p_t × m² inconsistent pairs of values. For each density-tightness pair, we generate one soluble instance (i.e., at least one solution exists). Because of the stochastic nature of GA, we let each algorithm do 100 independent runs, each run with a different random seed. Many NP-complete or NP-hard problems show a phase transition point that marks the spot where we go from problems that are under-constrained, and so relatively easy to solve, to problems that are over-constrained, and so relatively easy to prove insoluble. Problems that are on average hardest to solve occur between these two types of relatively easy problem. The values of p_d and p_t are chosen in such a way that the instances generated are within the phase transition. In order to predict the phase transition region, a formula for the constrainedness [19] of binary CSPs was defined by:

κ = ((n − 1)/2) · p_d · log_m(1/(1 − p_t)).    (2)
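Eq. (2) can be checked numerically. The sketch below (function name invented) plugs in the density and tightness of two benchmark instances from Tables 1 and 2, for which κ comes out near 1, consistent with sampling at the phase transition:

```python
import math

def constrainedness(n, m, p_d, p_t):
    """Constrainedness kappa of a random binary CSP (Eq. (2));
    kappa near 1 indicates the phase-transition region."""
    return (n - 1) / 2 * p_d * math.log(1 / (1 - p_t), m)

# Density/tightness taken from two benchmark instances below.
print(constrainedness(25, 20, 0.14, 0.83))  # close to 1
print(constrainedness(30, 20, 0.90, 0.21))  # close to 1
```

Under-constrained instances give κ well below 1, over-constrained ones well above 1.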

The tests were carried out on a DELL machine with an 800 MHz CPU and 2 GB of memory. The code was written in C and compiled with the GNU C compiler version 4.6. The following parameters were fixed experimentally and are listed below:

  • Population size = 50

  • Stopping criteria for the coarsening phase: the reduction process stops as soon as the number of levels reaches 3. At this level, MLV-WGA generates an initial population.

  • Convergence during the optimization phase: if there is no observable improvement of the fitness function of the best individual during five consecutive generations, MLV-WGA is assumed to have reached convergence and moves to a higher level.

3.2. Results

The plots in Figures 2 and 3 compare WGA with its multilevel variant MLV-WGA. The improvement in quality imparted by the multilevel context is immediately clear. Both WGA and MLV-WGA exhibit what is called a plateau region: a region of the search where crossover and mutation operators leave the best solution or the mean solution unchanged. However, this region is shorter with MLV-WGA than with WGA. The multilevel context uses the projected solution obtained at G_{m+1} = (V_{m+1}, E_{m+1}) as the initial solution for G_m = (V_m, E_m) for further refinement. Even though the solution at G_{m+1} is at a local minimum, the projected solution may not be at a local optimum with respect to G_m. The projected assignment is already a good solution, leading WGA to converge within few generations to a better one. Tables 1–3 show a comparison of the two algorithms. For each algorithm, the best (Min) and the worst (Max) results are given, while Mean represents the average solution. MLV-WGA outperforms WGA in 53 cases out of 96, gives similar results in 20 cases, and is beaten in 23 cases. The performance of the two algorithms differs significantly: the difference in total performance is between 25 and 70% in the advantage of MLV-WGA. Comparing the worst performances of both algorithms, MLV-WGA gives worse results in 15 cases, both algorithms give similar results in 8 cases, and MLV-WGA performs better than WGA in 73 cases. Looking at the average results, MLV-WGA does between 16 and 41% better than WGA in 84 cases, while the differences are very marginal in the remaining cases where WGA beats MLV-WGA.

Figure 2.

MLV-GA vs. GA: evolution of the mean unsatisfied constraints as a function of time. Csp-N30-DS40-C125-cd026-ct063.

Figure 3.

MLV-GA vs. GA: evolution of the mean unsatisfied constraints as a function of time. Csp-N35-DS20-C562-cd094-ct017.

| Instance | MLV-WGA Min | Max | Mean | REav | WGA Min | Max | Mean | REav |
|---|---|---|---|---|---|---|---|---|
| N25-DS20-C36-cd014-ct083 | 3 | 7 | 4.58 | 0.128 | 3 | 8 | 5.41 | 0.151 |
| N25-DS20-C44-cd012-ct087 | 6 | 10 | 8.04 | 0.183 | 8 | 14 | 9.91 | 0.226 |
| N25-DS20-C54-cd018-ct075 | 3 | 7 | 5.37 | 0.100 | 4 | 9 | 6.91 | 0.128 |
| N25-DS20-C78-cd026-ct061 | 2 | 8 | 4.33 | 0.056 | 2 | 10 | 5.79 | 0.073 |
| N25-DS20-C225-cd078-ct027 | 3 | 8 | 4.16 | 0.019 | 3 | 9 | 5.66 | 0.026 |
| N25-DS20-C229-cd072-ct029 | 4 | 9 | 6.04 | 0.014 | 4 | 11 | 8.16 | 0.036 |
| N25-DS20-C242-cd086-ct025 | 1 | 6 | 3.5 | 0.015 | 3 | 10 | 5.70 | 0.024 |
| N25-DS20-C269-cd086-ct025 | 4 | 10 | 5.66 | 0.022 | 4 | 10 | 7.54 | 0.029 |
| N25-DS20-C279-cd094-ct023 | 2 | 7 | 4.75 | 0.018 | 4 | 9 | 6.75 | 0.025 |
| N25-DS40-C53-cd016-ct085 | 6 | 11 | 8.91 | 0.169 | 8 | 13 | 10.70 | 0.202 |
| N25-DS40-C70-cd026-ct069 | 2 | 6 | 4.25 | 0.061 | 3 | 8 | 5.75 | 0.083 |
| N25-DS40-C72-cd022-ct075 | 6 | 12 | 9 | 0.125 | 6 | 15 | 10.45 | 0.146 |
| N25-DS40-C102-cd032-ct061 | 5 | 12 | 8.12 | 0.080 | 7 | 14 | 10.33 | 0.102 |
| N25-DS40-C103-cd034-ct059 | 5 | 9 | 6.83 | 0.067 | 4 | 12 | 8.79 | 0.086 |
| N25-DS40-C237-cd082-ct031 | 3 | 8 | 5.66 | 0.024 | 5 | 10 | 7.87 | 0.034 |
| N25-DS40-C253-cd088-ct029 | 3 | 7 | 4.95 | 0.020 | 5 | 12 | 8.04 | 0.032 |
| N25-DS40-C264-cd088-ct029 | 5 | 10 | 6.91 | 0.027 | 6 | 16 | 8.91 | 0.034 |
| N25-DS40-C281-cd096-ct027 | 3 | 9 | 5.62 | 0.020 | 4 | 12 | 8.54 | 0.031 |
| N25-DS40-C290-cd096-ct027 | 4 | 10 | 7.08 | 0.025 | 6 | 14 | 9 | 0.032 |

Table 1.

MLV-WGA vs. WGA: number of variables: 25.

REav denotes the relative error in percent. The value in bold shows the algorithm with the lowest RE.

| Instance | MLV-WGA Min | Max | Mean | REav | WGA Min | Max | Mean | REav |
|---|---|---|---|---|---|---|---|---|
| N30-DS20-C41-cd012-ct083 | 2 | 6 | 3.70 | 0.026 | 3 | 7 | 5.08 | 0.124 |
| N30-DS20-C71-cd018-ct069 | 1 | 7 | 3.37 | 0.048 | 3 | 10 | 5.66 | 0.080 |
| N30-DS20-C85-cd020-ct065 | 3 | 9 | 6 | 0.071 | 5 | 12 | 8.37 | 0.099 |
| N30-DS20-C119-cd028-ct053 | 3 | 10 | 5.70 | 0.048 | 6 | 12 | 8.83 | 0.075 |
| N30-DS20-C334-cd074-ct025 | 6 | 13 | 8.16 | 0.025 | 6 | 14 | 9.87 | 0.030 |
| N30-DS20-C387-cd090-ct021 | 3 | 9 | 6.66 | 0.018 | 5 | 13 | 8.70 | 0.033 |
| N30-DS20-C389-cd090-ct021 | 2 | 9 | 6.08 | 0.016 | 4 | 14 | 8.95 | 0.024 |
| N30-DS20-C392-cd090-ct021 | 3 | 10 | 7.08 | 0.019 | 5 | 15 | 9.16 | 0.024 |
| N30-DS20-C399-cd090-ct021 | 5 | 13 | 7.70 | 0.020 | 6 | 14 | 9.79 | 0.025 |
| N30-DS40-C85-cd020-ct073 | 5 | 11 | 7.75 | 0.092 | 7 | 14 | 10.87 | 0.152 |
| N30-DS40-C96-cd020-ct073 | 8 | 12 | 16 | 0.167 | 11 | 19 | 14.58 | 0.015 |
| N30-DS40-C121-cd026-ct063 | 8 | 14 | 10.5 | 0.087 | 9 | 19 | 14.33 | 0.152 |
| N30-DS40-C125-cd026-ct063 | 8 | 18 | 12.20 | 0.098 | 10 | 19 | 15.58 | 0.125 |
| N30-DS40-C173-cd044-ct045 | 4 | 10 | 6.41 | 0.038 | 6 | 14 | 9.20 | 0.054 |
| N30-DS40-C312-cd070-ct031 | 7 | 14 | 10.5 | 0.033 | 7 | 19 | 13.33 | 0.025 |
| N30-DS40-C328-cd076-ct029 | 6 | 13 | 10.37 | 0.032 | 10 | 18 | 13.45 | 0.042 |
| N30-DS40-C333-cd076-ct029 | 7 | 13 | 10.25 | 0.031 | 9 | 18 | 12.62 | 0.038 |
| N30-DS40-C389-cd090-ct025 | 6 | 13 | 9.33 | 0.024 | 9 | 17 | 12.20 | 0.032 |
| N30-DS40-C390-cd090-ct025 | 6 | 14 | 9.29 | 0.024 | 10 | 17 | 13 | 0.031 |

Table 2.

MLV-WGA vs. WGA: number of variables: 30.

REav denotes the relative error in percent. The value in bold shows the algorithm with the lowest RE.

| Instance | MLV-WGA Min | Max | Mean | REav | WGA Min | Max | Mean | REav |
|---|---|---|---|---|---|---|---|---|
| N40-DS20-C78-cd010-ct079 | 6 | 12 | 8.91 | 0.115 | 5 | 13 | 9.04 | 0.116 |
| N40-DS20-C80-cd010-ct079 | 7 | 13 | 9.62 | 0.121 | 7 | 13 | 10.04 | 0.153 |
| N40-DS20-C82-cd012-ct073 | 4 | 9 | 6.25 | 0.073 | 4 | 11 | 6.95 | 0.085 |
| N40-DS20-C95-cd014-ct067 | 2 | 8 | 4.45 | 0.047 | 2 | 7 | 4.12 | 0.044 |
| N40-DS20-C653-cd084-ct017 | 2 | 14 | 9.37 | 0.015 | 6 | 16 | 10.62 | 0.018 |
| N40-DS20-C660-cd084-ct017 | 6 | 14 | 9.12 | 0.014 | 7 | 6 | 9.75 | 0.015 |
| N40-DS20-C751-cd096-ct015 | 6 | 13 | 9.91 | 0.014 | 5 | 13 | 9.83 | 0.014 |
| N40-DS20-C752-cd096-ct015 | 5 | 17 | 9.29 | 0.013 | 3 | 13 | 9.20 | 0.013 |
| N40-DS20-C756-cd096-ct015 | 6 | 15 | 9.95 | 0.014 | 5 | 16 | 8.75 | 0.012 |
| N40-DS40-C106-cd014-ct075 | 7 | 14 | 11.08 | 0.105 | 7 | 16 | 11.5 | 0.109 |
| N40-DS40-C115-cd014-ct075 | 12 | 20 | 15.5 | 0.135 | 11 | 20 | 15.5 | 0.135 |
| N40-DS40-C181-cd024-ct055 | 6 | 17 | 12.04 | 0.067 | 7 | 17 | 11.75 | 0.065 |
| N40-DS40-C196-cd024-ct055 | 11 | 12 | 16.58 | 0.085 | 12 | 20 | 15.54 | 0.080 |
| N40-DS40-C226-cd030-ct047 | 7 | 14 | 10.91 | 0.051 | 7 | 16 | 11.16 | 0.050 |
| N40-DS40-C647-cd082-ct021 | 11 | 23 | 15.66 | 0.025 | 11 | 20 | 15.20 | 0.024 |
| N40-DS40-C658-cd082-ct021 | 11 | 22 | 16.33 | 0.025 | 13 | 21 | 16.70 | 0.026 |
| N40-DS40-C703-cd092-ct019 | 9 | 21 | 13.41 | 0.020 | 8 | 20 | 13.58 | 0.020 |
| N40-DS40-C711-cd092-ct019 | 12 | 23 | 15.75 | 0.023 | 8 | 20 | 14.87 | 0.021 |
| N40-DS40-C719-cd092-ct019 | 8 | 21 | 16.54 | 0.024 | 10 | 20 | 15.16 | 0.022 |

Table 3.

MLV-WGA vs. WGA: number of variables: 40.

REav denotes the relative error in percent. The value in bold shows the algorithm with the lowest RE.

4. Conclusion

In this work, a multilevel weighted genetic algorithm is introduced for Max-CSP. The results have shown that, in most cases, the multilevel genetic algorithm returns a better solution than the standard genetic algorithm for an equivalent run time. The multilevel paradigm offers a better strategy for performing diversification and intensification. This is achieved by allowing GA to view a cluster of variables as a single entity, thereby guiding the search and restricting it to only those assignments in the solution space in which the variables grouped within a cluster are assigned the same value. As the size of the clusters gets smaller from one level to the next, the size of the neighborhood becomes adaptive, allowing the exploration of different regions in the search space while intensifying the search by exploiting the solutions from previous levels in order to reach better ones.

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Noureddine Bouhmala (June 27th, 2018). A Multilevel Genetic Algorithm for the Maximum Satisfaction Problem. In: Marco Antonio Aceves-Fernandez (Ed.), Artificial Intelligence - Emerging Trends and Applications. IntechOpen. DOI: 10.5772/intechopen.78299
