
Multi‐Objective Hyper‐Heuristics

By Mashael Suliaman Maashi

Submitted: November 16th 2016. Reviewed: April 13th 2017. Published: August 30th 2017.

DOI: 10.5772/intechopen.69222


Abstract

A multi-objective hyper-heuristic is a search method or learning mechanism that operates over a fixed set of low-level heuristics to solve multi-objective optimization problems by controlling and combining the strengths of those heuristics. Although numerous papers on hyper-heuristics have been published and several studies are still underway, most research has focused on single-objective optimization; work on hyper-heuristics for multi-objective optimization remains limited. This chapter draws attention to this area of research to help researchers and PhD students understand and reuse these methods. It also provides the basic concepts of multi-objective optimization and hyper-heuristics to facilitate a better understanding of the related research areas, in addition to exploring hyper-heuristic methodologies that address multi-objective optimization. Some design issues related to the development of a hyper-heuristic framework for multi-objective optimization are discussed. The chapter concludes with a case study of multi-objective selection hyper-heuristics and their application to a real-world problem.

Keywords

  • hyper‐heuristics
  • multi‐objective optimization
  • meta‐heuristics
  • evolutionary algorithms
  • computational search problems

1. Introduction

Many real-world problems are complex. Owing to the (often) NP-hard nature of such problems, researchers and practitioners frequently resort to problem-tailored heuristics in order to obtain a reasonable solution within a reasonable amount of time. Hyper-heuristics are methodologies that operate on a search space of heuristics rather than directly searching the solution space for solving hard computational problems. One of the main aims of hyper-heuristics is to raise the level of generality of search methodologies and to automatically adapt the algorithm by combining the strength of each heuristic and making up for the weaknesses of others. References to multi-objective hyper-heuristics are scarce as most research in this area has been limited to single-objective optimization. The purpose of this chapter is to provide an introduction to hyper-heuristic methodologies that are designed for multi-objective optimization (HHMOs). The chapter might help researchers and PhD students interested in this research area to understand and use these methods. Hyper-heuristics for multi-objective optimization is a relatively new area of research in operational research (OR) and evolutionary computation [1, 2]. Few studies have dealt with hyper-heuristics for multi-objective problems. In order to offer a better understanding of the subject, the basic concepts of multi-objective optimization and hyper-heuristics are provided in Section 2. This chapter also provides a brief survey of hyper-heuristic methodologies that address multi-objective optimization in Section 3. Some design issues related to the development of a hyper-heuristic framework for multi-objective optimization are discussed in Section 4. Additionally, a case study of multi-objective selection hyper-heuristics and its application to a real-world problem is presented in Section 5. Finally, promising research areas for future application are provided in Section 6.

2. The basic concepts and underlying issues

2.1. Multi‐objective optimization

Multi‐objective problems (MOPs): MOPs comprise several objectives (two or more), which need to be minimized or maximized depending on the problem. A general definition of an MOP [3] is

An MOP minimizes $F(x) = (f_1(x), \ldots, f_k(x))$ subject to $g_i(x) \le 0$, $i = 1, \ldots, m$, $x \in \Omega$. The solution of the MOP minimizes the components of the vector $F(x)$, where $x$ is an $n$-dimensional decision variable vector $x = (x_1, \ldots, x_n)$ from some universe $\Omega$. An MOP includes $n$ decision variables, $m$ constraints, and $k$ objectives. The MOP's evaluation function $F : \Omega \rightarrow \Lambda$ maps decision variable vectors $x = (x_1, \ldots, x_n)$ to objective vectors $y = (a_1, \ldots, a_k)$.
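To make the notation concrete, the following minimal Python sketch encodes a small hypothetical MOP with $n = 2$ decision variables, $k = 2$ conflicting objectives, and $m = 1$ constraint; the particular functions are invented for illustration only and are not taken from any reference.

```python
from typing import List, Tuple

def evaluate(x: List[float]) -> Tuple[List[float], bool]:
    """Map a decision vector x (n = 2) to an objective vector (k = 2)
    and report feasibility for a single constraint (m = 1)."""
    f1 = x[0] ** 2 + x[1] ** 2          # first objective to minimize
    f2 = (x[0] - 2.0) ** 2 + x[1] ** 2  # second, conflicting objective
    g1 = x[0] + x[1] - 3.0              # constraint g1(x) <= 0
    return [f1, f2], g1 <= 0.0

objectives, feasible = evaluate([1.0, 0.5])
print(objectives, feasible)  # [1.25, 1.25] True
```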

Multi‐objective optimization techniques are divided into three classes [3, 4]:

  • A priori approach (decision making and then a search):

    In this class, the objective preferences or weights are set by the decision maker prior to the search process. Examples of this are aggregation-based approaches such as the weighted sum approach (a worked formulation is sketched after this list).

  • A posteriori approach (a search and then decision making):

    The search is conducted to find solutions for the objective functions. Following this, a decision process selects the most appropriate solutions (often involving a trade-off). Multi-objective evolutionary algorithm (MOEA) techniques, whether non-Pareto-based or Pareto-based, are examples of this class.

  • Interactive or progressive approach (search and decision making simultaneously):

    In this class, the preferences of the decision maker(s) are made and adjusted during the search process.
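As an illustration of the a priori class above, the weighted sum approach collapses the $k$ objectives into a single scalar objective using weights fixed by the decision maker before the search. A common generic formulation (a sketch, not tied to any specific reference) is:

```latex
\min_{x \in \Omega} \; F_{\mathrm{ws}}(x) \;=\; \sum_{i=1}^{k} w_i \, f_i(x),
\qquad w_i \ge 0, \qquad \sum_{i=1}^{k} w_i = 1 .
```

A single run with a fixed weight vector returns one compromise solution; different weight vectors must be tried to approximate other regions of the trade-off surface.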

Pareto dominance: The idea behind the dominance concept is to establish a preference between MOP solutions when no information regarding objective preferences is provided by the decision maker. This preference is used to compare the dominance between any two solutions [5]. A more formal definition of Pareto dominance (for the minimization case) is as follows [4]:

A vector $u = (u_1, \ldots, u_k)$ is said to dominate another vector $v = (v_1, \ldots, v_k)$ (denoted by $u \preceq v$) according to $k$ objectives if and only if $u$ is partially less than $v$, that is, $\forall i \in \{1, \ldots, k\}: u_i \le v_i$ and $\exists i \in \{1, \ldots, k\}: u_i < v_i$.

The set of all non-dominated solutions is known as the Pareto optimal set. The image of the Pareto optimal set in the objective space is known as the Pareto optimal front. The quality of the obtained Pareto optimal set can be assessed, following Refs. [6, 7], by the extent of the Pareto optimal set and by the distance to and distribution of the Pareto optimal front.
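Following the definition above, the dominance test and the extraction of the non-dominated set can be expressed in a few lines. The Python sketch below assumes that all objectives are to be minimized and uses a small invented objective space for illustration.

```python
from typing import List

def dominates(u: List[float], v: List[float]) -> bool:
    """Return True if objective vector u Pareto-dominates v (minimization):
    u is no worse than v in every objective and strictly better in at least one."""
    return all(ui <= vi for ui, vi in zip(u, v)) and any(ui < vi for ui, vi in zip(u, v))

def non_dominated(front: List[List[float]]) -> List[List[float]]:
    """Keep only the vectors that no other member of the set dominates."""
    return [u for u in front if not any(dominates(v, u) for v in front if v is not u)]

points = [[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]]
print(non_dominated(points))  # [[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]]; [3.0, 3.0] is dominated by [2.0, 2.0]
```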

2.2. Hyper‐heuristics

The main aim of the hyper-heuristic methodology is to raise the level of generality of search techniques by producing general search algorithms that are applicable to a wide range of problems in different domains [2, 8, 9]. In a hyper-heuristic approach, different heuristics (or heuristic components) can be selected, generated, or combined to solve a given optimization problem in an efficient way. Since each heuristic has its own strengths and weaknesses, one of the aims of hyper-heuristics is to automatically adapt the algorithm by combining the strength of each heuristic and making up for the weaknesses of others. This process requires the incorporation of a learning mechanism into the algorithm to adaptively direct the search at each decision point for a particular state of the problem or stage of the search. The concept of hyper-heuristics clearly has strong ties to operational research (OR) in terms of finding optimal or near-optimal solutions to computational search problems. It is also firmly linked to artificial intelligence (AI) in terms of machine-learning methodologies [8].

2.2.1. Generic hyper‐heuristic framework

In its simplest form, a hyper-heuristic is a search methodology that encompasses a high-level strategy (which could be a meta-heuristic) controlling the search over a set of heuristics (low-level heuristics) rather than controlling a search over a direct representation of the solutions. Usually, in a hyper-heuristic framework, there is a clear separation, the domain barrier, between the high-level strategy and the set of low-level heuristics [10]. The purpose of the domain barrier is to give the hyper-heuristic a higher level of abstraction. This also increases the level of generality of hyper-heuristics by enabling their application to a new problem without the need for a framework change. Only limited, problem-independent information is passed from the low level to the high level, such as the cost/fitness measured by an evaluation function, which indicates the quality of a solution. The high-level strategy can be a (meta-)heuristic or a learning mechanism. The task of the high-level strategy is to guide the search intelligently and adapt according to the success/failure of the low-level heuristics, or combinations of heuristic components, during the search process.
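A minimal sketch of this structure is given below, assuming a perturbative single-solution setting: the high-level strategy sees only fitness values across the domain barrier, keeps a simple score per low-level heuristic, and uses an improving-or-equal acceptance rule. The heuristic names, the scoring rule, and the toy domain are placeholders rather than components of any published framework.

```python
import random
from typing import Callable, Dict, List

def selection_hyper_heuristic(
    initial: List[float],
    low_level_heuristics: Dict[str, Callable[[List[float]], List[float]]],
    fitness: Callable[[List[float]], float],   # only this value crosses the domain barrier
    iterations: int = 100,
) -> List[float]:
    """High-level strategy: pick a low-level heuristic, apply it, and update a
    simple score from the observed fitness change (illustrative greedy rule)."""
    scores = {name: 1.0 for name in low_level_heuristics}
    solution, best_cost = initial, fitness(initial)
    for _ in range(iterations):
        # selection: favor heuristics with higher accumulated scores
        name = random.choices(list(scores), weights=scores.values())[0]
        candidate = low_level_heuristics[name](solution)
        cost = fitness(candidate)
        if cost <= best_cost:                  # move acceptance: improving or equal
            solution, best_cost = candidate, cost
            scores[name] += 1.0                # reward the successful heuristic
        else:
            scores[name] = max(0.1, scores[name] - 0.5)  # penalize, but keep it selectable
    return solution

# toy domain: minimize a sum of squares with two perturbative heuristics
heuristics = {
    "small_step": lambda x: [xi + random.uniform(-0.1, 0.1) for xi in x],
    "big_step": lambda x: [xi + random.uniform(-1.0, 1.0) for xi in x],
}
print(selection_hyper_heuristic([5.0, -3.0], heuristics, lambda x: sum(xi * xi for xi in x)))
```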

2.2.2. Hyper‐heuristics classification

Generally, there are two recognized methodologies of hyper‐heuristics: selection and generation hyper‐heuristics. For both hyper‐heuristic methodologies, there are two types of heuristics: (i) constructive heuristics, which process a partial solution(s) and build a complete solution(s) and (ii) perturbative heuristics, which operate on complete solution(s).

In the context of hyper‐heuristics for multi‐objective optimization, we could classify them into three categories:

  • Multi-objective selection hyper-heuristic manages a number of heuristics during an iterative process for a given time. At each iteration, one heuristic, ideally the best-performing one, is chosen at a decision point and applied. This type comprises two main stages: heuristic selection and a move-acceptance strategy.

  • Multi‐objective combination/hybridization hyper‐heuristic combines a number of heuristics or (components of heuristics) that operate simultaneously and adaptively to create new solutions.

  • Multi-objective generation hyper-heuristic: To the best of the author's knowledge, there are no studies addressing multi-objective generation hyper-heuristics in the literature.

3. Brief survey of hyper‐heuristics for multi‐objective optimization

Hyper-heuristics for multi-objective optimization problems are a relatively new area of research in evolutionary computation and operational research [2, 8]. To date, few studies have been identified that deal with hyper-heuristics for multi-objective problems (see Table 1).

Component name | Application domain/test problems | Reference(s)
Tabu search | Space allocation, timetabling | [8]
 | Traveling salesman problems | [35]
Markov chain, evolution strategy | Real-world water distribution networks design/DTLZ, WFG | [12]
NSGAII | Irregular 2D cutting stock | [37]
 | Strip packing and cutting stock | [39]
NSGAII, quasi-Newton algorithm | Stacked neural network | [40]
Number of operations from NSGAII, SPEA2, and IBEA | A number of continuous multi-objective test problems | [19]
Number of selection, crossover, and mutation operations of evolutionary algorithms | Software module clustering | [23]
Hypervolume | Dynamic-mapped island-based model/WFG | [36]
Particle swarm optimization, adaptive metropolis algorithm, differential evolution | Water resource problems/a number of continuous multi-objective test problems | [41]
Memory strategy, genetic and differential operators | Dynamic optimization problems/a number of continuous multi-objective test problems | [46]
Genetic algorithm, simulated annealing, particle swarm optimization | Engineering system design problems/a number of classical multi-objective test problems | [16]
Simulated annealing | Shelf-space allocation | [34]
NSGAII, SPEA2, MOGA, choice function, great deluge algorithm, and late acceptance | WFG/the vehicle crashworthiness design problem | [24–26]
NSGAII, SPEA2, IBEA, choice function, great deluge algorithm | WFG/wind farm layout optimization | [32]
Markov model | DTLZ | [33]

Table 1.

Heuristic components and application domains of hyper‐heuristics for multi‐objective optimization.

Regarding multi-objective selection hyper-heuristics, a multi-objective hyper-heuristic based on tabu search (TS), TSRoulette Wheel, was presented in Ref. [11]. In this approach, an appropriate heuristic is selected at each iteration using tabu search as a high-level search strategy. The experiments showed acceptable solutions when the approach was applied to space allocation and timetabling problems. In Ref. [12], an online selection hyper-heuristic, the Markov-chain-based hyper-heuristic (MCHH), is investigated. The Markov chain guides the selection of heuristics and applies online reinforcement learning to adapt the transition weights between heuristics. In MCHH, hybrid meta-heuristics and evolution strategies were incorporated and applied to the DTLZ test problems [13] and compared to a (1+1) evolution strategy meta-heuristic, a random hyper-heuristic, and TSRoulette Wheel [11]. The comparison shows the efficacy of the proposed approach in terms of Pareto convergence and its learning ability to select good heuristic combinations, although further work is needed in terms of diversity-preserving mechanisms. The MCHH was also applied to the Walking Fish Group (WFG) test problems [14], and although the results show the ability of the approach, future work is needed to improve the search strategies [12]. MCHH has likewise been applied to real-world water distribution networks and produced competitive results [15]. A multi-objective hyper-heuristic optimization scheme for engineering system designs is presented in Ref. [16]; simulated annealing (SA) [17], a genetic algorithm, and particle swarm optimization [18] are used as low-level heuristics. A multi-indicator hyper-heuristic for multi-objective optimization is proposed in Ref. [19]; the approach is based on multiple ranking indicators taken from NSGAII [20], SPEA2 [22], and IBEA [21]. In Ref. [23], a multi-objective hyper-heuristic genetic algorithm (MHypGA) for the multi-objective software module clustering problem is presented. In MHypGA, different strategies of selection, crossover, and mutation operations of genetic algorithms are incorporated as low-level heuristics. In Refs. [24–26], online-learning multi-objective hyper-heuristics are presented. These multi-objective hyper-heuristics are based on a choice function and are combined with different move-acceptance strategies, including all-moves (AM), the great deluge algorithm (GDA) [27], and late acceptance (LA) [28]. The multi-objective hyper-heuristic controls and combines the strengths of three well-known multi-objective evolutionary algorithms (NSGAII [20], SPEA2 [22], and MOGA [30]). The performance of the proposed multi-objective choice-function-based hyper-heuristics is evaluated on the Walking Fish Group (WFG) test suite [14] and on the vehicle crashworthiness design problem [31]. The results on both the benchmark and the real-world problems demonstrate the capability and potential of the multi-objective hyper-heuristic approaches in solving continuous multi-objective optimization problems. More details about these approaches are provided as a case study in Section 5. In Ref. [32], a multi-objective selection hyper-heuristic for multi-objective wind farm layout optimization is proposed; the experimental results show that the proposed approach is successfully applied to this optimization problem. In Ref. [33], a multi-objective selection sequence-based hyper-heuristic (MOSSHH) is proposed. The MOSSHH algorithm employs a hidden Markov model: the sequence of heuristics applied is determined by transition probabilities and ɛ-dominance. The proposed approach has been applied to the DTLZ test problems [13], and the results showed its capability to solve them through the learning process. A multiple neighborhood hyper-heuristic for the two-dimensional (2D) shelf-space allocation problem is proposed in Ref. [34]; the proposed hyper-heuristic is based on a simulated annealing algorithm. A two-stage multi-objective hyper-heuristic approach is presented in Ref. [35]. The first stage targets producing an efficient Pareto front, and the second stage focuses on solving a given problem in a flexible way so as to drive a subset of the population to the desired Pareto front. The approach was assessed on multi-objective traveling salesman problems using 11 low-level heuristics. Comparison with other methods from the scientific literature revealed that the proposed approach produces high-quality results, although further work is still needed to advance the approach. In Ref. [36], a hypervolume-based hyper-heuristic for a dynamic-mapped multi-objective island-based model is proposed. This method shows its superiority when compared to a contribution-based hyper-heuristic and other standard parallel models on the WFG test problems [14]. A new hyper-heuristic based on the multi-objective evolutionary algorithm NSGAII [20] is proposed in Ref. [37]. The main idea of this method is the production of the final Pareto optimal set through a learning process that evolves combinations of condition-action rules based on NSGAII. The proposed method was tested on many instances of irregular 2D-cutting stock benchmark problems and produced promising results. A hyper-heuristic-based codification is proposed in Refs. [38, 39] for solving strip packing and cutting stock problems in order to maximize the total profit and minimize the total number of cuts; the experimental results show that the proposed hyper-heuristic outperforms single heuristics. In Ref. [40], a multi-objective hyper-heuristic for the design and optimization of a stacked neural network is proposed. The proposed approach is based on NSGAII [20] combined with a local search algorithm (the quasi-Newton algorithm).

Regarding multi-objective hybridization hyper-heuristics, an adaptive multi-method (multi-point) search called AMALGAM is proposed in Ref. [41]. It employs multiple search algorithms, NSGAII [20], PSO [18], AMS [42], and DE [43], simultaneously, using the concepts of multi-method search and adaptive offspring creation. AMALGAM has been applied to a number of continuous multi-objective test problems, and it has been shown to be superior to other methods. It has also been applied to solve a number of water resource problems and has yielded very good solutions [44, 45]. A multi-strategy ensemble multi-objective evolutionary algorithm called MS-MOEA, for dynamic optimization, is proposed in Ref. [46]. The approach combines differential operators and different strategies, including a memory strategy and genetic operators, to adaptively create offspring and increase the convergence speed. The results show that MS-MOEA obtains solutions of good quality.

It is worth mentioning that multi-objective hybridization hyper-heuristics are similar to multi-objective selection hyper-heuristics in terms of the incorporation of different algorithms. However, they differ in concept. A multi-objective selection hyper-heuristic is based on two successive stages: a selection mechanism and a move-acceptance strategy. By contrast, a multi-objective hybridization hyper-heuristic is based on an adaptive offspring-creation strategy. In multi-objective selection hyper-heuristics, a sequence of heuristics/meta-heuristics is executed during the search, that is, one heuristic/meta-heuristic is selected and applied at each stage (iteration/decision point) of the search, and the high-level strategy evaluates the performance of the set of heuristics/meta-heuristics in order to improve the population of solutions. By contrast, in multi-objective hybridization hyper-heuristics, multiple heuristics/meta-heuristics run concurrently: each produces a different population of offspring, and then all produced offspring are evaluated to evolve a new population by an adaptive offspring-creation strategy.
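The concurrent, adaptive-offspring idea can be sketched as follows. This is a simplified illustration in the spirit of adaptive multi-method search (such as AMALGAM), not the published algorithm: the operators, the survivor-selection routine, and the credit rule that sets each method's offspring quota are all placeholders.

```python
import random
from typing import Callable, Dict, List

Individual = List[float]

def adaptive_multi_method_step(
    population: List[Individual],
    operators: Dict[str, Callable[[List[Individual], int], List[Individual]]],
    survivors: Callable[[List[Individual]], List[Individual]],
    credit: Dict[str, float],
    pop_size: int,
) -> List[Individual]:
    """One generation: every operator produces offspring concurrently, in numbers
    proportional to the credit it earned previously; all offspring are pooled and
    the environmental selection picks the next population."""
    total = sum(credit.values())
    offspring, produced = [], {}
    for name, op in operators.items():
        n = max(1, round(pop_size * credit[name] / total))  # offspring quota for this method
        children = op(population, n)
        produced[name] = children
        offspring.extend(children)
    new_population = survivors(population + offspring)[:pop_size]
    # credit update: fraction of each operator's children that survived (placeholder rule)
    for name, children in produced.items():
        kept = sum(1 for c in children if c in new_population)
        credit[name] = max(0.1, kept / max(1, len(children)))
    return new_population

# toy usage: two mutation operators on 2-D vectors, survivors ranked by a scalar proxy
ops = {
    "mutate_small": lambda pop, n: [[x + random.gauss(0, 0.1) for x in random.choice(pop)] for _ in range(n)],
    "mutate_big": lambda pop, n: [[x + random.gauss(0, 1.0) for x in random.choice(pop)] for _ in range(n)],
}
rank = lambda pop: sorted(pop, key=lambda ind: sum(v * v for v in ind))
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
cred = {"mutate_small": 1.0, "mutate_big": 1.0}
for _ in range(20):
    pop = adaptive_multi_method_step(pop, ops, rank, cred, pop_size=10)
print(cred, min(sum(v * v for v in ind) for ind in pop))
```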

4. Multi‐objective hyper‐heuristics design issues

The idea of hybridizing a number of algorithms (heuristics) into a hyper-heuristic framework is straightforward and meaningful. However, many design issues related to the development of hyper-heuristics for multi-objective optimization require careful attention if such a framework is to be applicable and effective.

The main components of the hyper-heuristic framework are the low-level heuristics, the selection method, the learning mechanism, and the move-acceptance method. The choice of these components is critical: each plays a significant role in improving the quality of both the search and the eventual solution. The components of the hyper-heuristic in the context of multi-objective optimization are discussed as follows.

4.1. Low‐level heuristics

The choice of appropriate low-level heuristics is not an easy task. Many questions arise here: what heuristics (algorithms) are suitable for dealing with multi-objective optimization problems? Are a priori approaches or a posteriori approaches more suitable? Are non-Pareto-based or Pareto-based approaches more applicable? The author agrees with many researchers [30, 38, 47–51] that evolutionary algorithms and population-based methods, such as decomposition-based approaches (MOEA/Ds, e.g., [52, 53]) and indicator-based approaches (e.g., [54, 55]), are more suitable for dealing with multi-objective optimization problems because of their population-based nature, which means that they can find Pareto optimal sets (trade-off solutions) in a single run, thus allowing a decision maker to select a suitable compromise solution (with respect to the space of the solutions). In the context of multi-objective hyper-heuristics, the decision maker could be a selection method that decides which low-level heuristic is best to select at each decision point (with respect to the space of the heuristics).

4.2. Selection methods

As a selection hyper-heuristic relies on an iterative process, the main questions arising here are: what is an effective way to choose an appropriate heuristic at each decision point, and how should this heuristic be chosen, that is, which criteria should be considered when choosing a heuristic? In single-objective cases, such a criterion is easy to determine by measuring the quality of the solution, such as the objective/cost value and time. However, this is more complex when tackling a multi-objective problem, where the quality of the solution is not easy to assess. There are many different criteria that should be considered, such as the number of non-dominated individuals and the distance between the non-dominated front and the Pareto optimal front (POF). To keep the framework simple, a higher level of abstraction of the hyper-heuristic should be considered when designing it: it is not necessary to provide any problem-specific information, such as the number of objectives or the nature of the solution space, to the high-level strategy. More attention should be given to the performance of the low-level heuristics. This boosts the intensification element, since a heuristic with the best performance will be chosen more frequently to exploit the search area. The aim is to achieve a balance between intensification and diversification when choosing a heuristic. Selection methods based on randomization support only diversification, by exploring unvisited areas of the search space. Reinforcement learning (RL) [56] supports intensification as a selection method by rewarding and punishing each heuristic based on its performance during the search using a scoring mechanism. An example of a good selection method is the choice function, which can provide a balance between intensification and diversification.
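In its classical single-objective form, the choice function scores each heuristic by combining its recent individual performance (f1), its performance when invoked immediately after the previously applied heuristic (f2), and the elapsed time since it was last called (f3); the first two terms reward intensification and the third rewards diversification. The following Python sketch illustrates that idea with illustrative weights and bookkeeping; it is not the exact formulation used in any particular study.

```python
import time
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class ChoiceFunction:
    """Illustrative choice function: score = alpha*f1 + beta*f2 + delta*f3."""
    alpha: float = 1.0
    beta: float = 1.0
    delta: float = 0.5
    f1: Dict[str, float] = field(default_factory=dict)              # recent individual improvement
    f2: Dict[Tuple[str, str], float] = field(default_factory=dict)  # improvement following the previous heuristic
    last_called: Dict[str, float] = field(default_factory=dict)
    previous: str = ""

    def register(self, name: str) -> None:
        self.f1.setdefault(name, 0.0)
        self.last_called.setdefault(name, time.time())

    def score(self, name: str) -> float:
        f3 = time.time() - self.last_called[name]                   # idle time -> diversification
        f2 = self.f2.get((self.previous, name), 0.0)
        return self.alpha * self.f1[name] + self.beta * f2 + self.delta * f3

    def update(self, name: str, improvement: float) -> None:
        """Feed back the observed improvement after applying `name`."""
        self.f1[name] = 0.5 * self.f1[name] + improvement           # decayed memory
        if self.previous:
            key = (self.previous, name)
            self.f2[key] = 0.5 * self.f2.get(key, 0.0) + improvement
        self.last_called[name] = time.time()
        self.previous = name

cf = ChoiceFunction()
for h in ("NSGAII", "SPEA2", "MOGA"):
    cf.register(h)
cf.update("NSGAII", improvement=0.8)
print(max(("NSGAII", "SPEA2", "MOGA"), key=cf.score))  # heuristic chosen at the next decision point
```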

4.3. Learning and feedback mechanism

Not all hyper-heuristic approaches incorporate a learning mechanism. However, the learning mechanism is strongly linked to the selection method. For example, a random hyper-heuristic is classified as a no-learning approach [1], because random selection does not provide any kind of learning. The learning mechanism guides the selection method toward the best heuristic to choose at each decision point, where the best heuristic refers to the heuristic that produces solutions of good quality according to some criteria based on performance measurements. It is worth noting that measuring the quality of the solution for multi-objective problems requires assessing different aspects of the non-dominated set in the objective space. In the scientific literature, many performance metrics have been proposed to measure different aspects of the quality and quantity of the resulting non-dominated set [4, 29, 57].
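As an illustration, two of the simpler indicators used in the literature are the ratio of non-dominated individuals (RNI) in a population and the generational distance (GD) to a reference front. The Python sketch below uses one common variant of each definition (indicator definitions differ slightly across papers), assumes minimization, and reuses a dominance test like the one sketched in Section 2.

```python
import math
from typing import List

def dominates(u: List[float], v: List[float]) -> bool:
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def ratio_non_dominated(pop: List[List[float]]) -> float:
    """RNI: fraction of the population that no other member dominates."""
    nd = [u for u in pop if not any(dominates(v, u) for v in pop if v is not u)]
    return len(nd) / len(pop)

def generational_distance(front: List[List[float]], reference: List[List[float]]) -> float:
    """GD (one common variant): average Euclidean distance from each obtained
    point to its nearest reference point; lower is better."""
    dists = [min(math.dist(p, r) for r in reference) for p in front]
    return sum(dists) / len(dists)

obtained = [[1.0, 4.1], [2.1, 2.0], [4.0, 1.2]]
reference = [[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]]
print(ratio_non_dominated(obtained), generational_distance(obtained, reference))
```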

4.4. Move‐acceptance method

The selection hyper-heuristic framework comprises two main stages: selection and move-acceptance methods. A move-acceptance criterion can be deterministic or non-deterministic. A deterministic move-acceptance criterion produces the same result given the same configuration (e.g., the same current and proposed new solutions). Non-deterministic move-acceptance criteria may generate a different result even when the same solutions are used for the decision at the same given time. This could be because the move-acceptance criterion depends on time or has a stochastic component in making the accept/reject decision. Examples of deterministic move-acceptance criteria are all-moves, only-improving, and improving-and-equal. For non-deterministic move-acceptance criteria, the candidate solution is always accepted if it improves the solution quality, while worsening solutions can be accepted based on an acceptance function, as in GDA [27], LA [28], Monte Carlo [58], and SA [17]. The move-acceptance criterion has to be compatible with the selection method. In the scientific literature, many combinations of selection method and move-acceptance criterion have been successfully applied to single-objective optimization [59]. It could be worthwhile employing them in the context of hyper-heuristics for multi-objective optimization.
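As an illustration of a time-dependent, non-deterministic acceptance rule, the sketch below follows the spirit of the great deluge algorithm for a scalar quality that is to be maximized (in a multi-objective setting this scalar could come from a hypervolume-style measure, as in the case study of Section 5). The linear level schedule and the parameter values are assumptions for illustration only.

```python
def make_great_deluge(initial_level: float, final_level: float, total_steps: int):
    """Return an accept(quality, step) predicate for a maximized scalar quality.
    A candidate is accepted if it beats the current water level, which rises
    linearly from initial_level toward final_level over the run."""
    def accept(quality: float, step: int) -> bool:
        fraction = min(1.0, step / total_steps)
        level = initial_level + (final_level - initial_level) * fraction
        return quality >= level
    return accept

accept = make_great_deluge(initial_level=0.2, final_level=0.9, total_steps=100)
print(accept(0.5, step=10))   # True: the level is still low early in the search
print(accept(0.5, step=90))   # False: the rising level now rejects this quality
```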

5. Case study: multi‐objective choice‐function‐based hyper‐heuristics

5.1. The proposed multi‐objective hyper‐heuristics methodologies

This study [26] highlights the limited amount of scientific work conducted at the intersection of hyper-heuristics and multi-objective optimization, investigates the design of a hyper-heuristic framework for multi-objective optimization, and develops hyper-heuristic approaches for multi-objective optimization (HHMOs) to solve continuous multi-objective problems. Hyper-heuristic frameworks generally impose a domain barrier that separates the hyper-heuristic from the domain implementation and the low-level heuristics to provide a higher level of abstraction. The domain barrier does not allow any problem-specific information to be passed to the hyper-heuristic itself during the search process. The multi-objective choice-function-based hyper-heuristic framework is designed in this same modular manner (see Figure 1).

Figure 1.

Proposed framework of the choice-function-based hyper-heuristic for multi-objective optimization problems.

One of the advantages of the multi-objective choice-function-based hyper-heuristic framework is its simplicity. It is also highly flexible, and its components are reusable. Moreover, it is built on an interface that allows other researchers to write their own multi-objective hyper-heuristic components easily. Even the low-level heuristics can be easily changed if required, and if new, better-performing components are found in the future, they can be incorporated. Based on the multi-objective selection hyper-heuristic framework, three online-learning selection choice-function-based hyper-heuristics are proposed: HHMO_CF_AM, HHMO_CF_GDA, and HHMO_CF_LA [24, 25]. The multi-objective choice-function-based hyper-heuristics control and combine the strengths of three well-known multi-objective evolutionary algorithms (NSGAII, SPEA2, and MOGA), which are utilized as the low-level heuristics. The choice-function selection heuristic acts as a high-level strategy that adaptively ranks the performance of those low-level heuristics according to feedback received during the search process, determining which one to call at each decision point. Four performance measurements (algorithm effort (AE), ratio of non-dominated individuals (RNI) [5], size of space covered (SSC) [60], and uniform distribution of a non-dominated population (UD) [61]) are integrated into a ranking scheme that acts as a feedback-learning mechanism to provide knowledge of the problem domain to the high-level strategy. The multi-objective choice-function-based hyper-heuristic is combined with different move-acceptance strategies, including all-moves (AM) as a deterministic move acceptance and GDA [27] and LA [28] as non-deterministic move acceptance. GDA and LA require a change in the value of a single objective at each step; hence, a well-known hypervolume-based metric, referred to as the D metric, is employed to make them applicable to multi-objective optimization problems. The D metric is integrated into the non-deterministic move-acceptance criteria in order to convert the multi-objective optimization into a single-objective optimization without having to define weights for the various objectives. For more details about the HHMO algorithms, see Refs. [24, 25].
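The ranking idea can be illustrated as follows: each low-level heuristic is ranked separately under every performance measurement, and the ranks are aggregated into a single score that the high-level strategy can use. The Python sketch below is a simplified illustration of this kind of scheme rather than the exact scheme of Refs. [24, 25]; the metric values, the assumed optimization direction of each metric, and the rank-sum aggregation are placeholders.

```python
from typing import Dict

def aggregate_ranks(measurements: Dict[str, Dict[str, float]],
                    higher_is_better: Dict[str, bool]) -> Dict[str, float]:
    """Rank each heuristic under every metric (1 = best) and sum the ranks;
    a lower aggregated rank means better overall recent performance."""
    heuristics = next(iter(measurements.values())).keys()
    totals = {h: 0.0 for h in heuristics}
    for metric, values in measurements.items():
        ordered = sorted(values, key=values.get, reverse=higher_is_better[metric])
        for rank, h in enumerate(ordered, start=1):
            totals[h] += rank
    return totals

# hypothetical measurements for one decision point
measurements = {
    "AE":  {"NSGAII": 95.0, "SPEA2": 120.0, "MOGA": 150.0},   # algorithm effort (assumed lower is better here)
    "RNI": {"NSGAII": 1.0,  "SPEA2": 0.9,   "MOGA": 0.8},     # ratio of non-dominated individuals
    "SSC": {"NSGAII": 0.71, "SPEA2": 0.69,  "MOGA": 0.60},    # size of space covered
    "UD":  {"NSGAII": 0.55, "SPEA2": 0.61,  "MOGA": 0.50},    # uniform distribution
}
higher_is_better = {"AE": False, "RNI": True, "SSC": True, "UD": True}
print(aggregate_ranks(measurements, higher_is_better))  # {'NSGAII': 5.0, 'SPEA2': 7.0, 'MOGA': 12.0}
```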

5.2. Problem description

The multi‐objective vehicle crashworthiness design problem has only five decision variables and no constraints [31]. The output of the problem provides a wider choice for engineers to make their final design decision based on the Pareto solution space. The decision variables of the problem represent the thickness of five reinforced members around the front as they could have a significant effect on the crash safety. The mass of the vehicle is tackled as the first design objective, while the integration of collision acceleration between t1= 0.05 s and t2= 0.07 s in the full‐frontal crash is considered as the second objective function. The toe‐board intrusion in the 40% offset‐frontal crash is tackled as the third objective as it is the most severe mechanical injury. The three objectives are formulated as follows:

$\mathrm{Mass} = 1640.2823 + 2.3573285\,t_1 + 2.3220035\,t_2 + 4.5688768\,t_3 + 7.7213633\,t_4 + 4.4559504\,t_5$ (1)

$A_{\mathrm{in}} = 6.5856 + 1.15\,t_1 - 1.0427\,t_2 + 0.9738\,t_3 + 0.8364\,t_4 - 0.3695\,t_1 t_4 + 0.0861\,t_1 t_5 + 0.3628\,t_2 t_4 - 0.1106\,t_1^2 - 0.3437\,t_3^2 + 0.1764\,t_4^2$ (2)

$\mathrm{Intrusion} = -0.0551 + 0.0181\,t_1 + 0.1024\,t_2 + 0.0421\,t_3 - 0.0073\,t_1 t_2 + 0.024\,t_2 t_3 - 0.0118\,t_2 t_4 - 0.0204\,t_3 t_4 - 0.008\,t_3 t_5 - 0.0241\,t_2^2 + 0.0109\,t_4^2$ (3)

Thus, the multi-objective design of vehicle crashworthiness problem in the decision variable space is formulated as

$\min F(x) = [\mathrm{Mass}, A_{\mathrm{in}}, \mathrm{Intrusion}]$ s.t. $1\ \mathrm{mm} \le x \le 3\ \mathrm{mm}$, where $x = (t_1, t_2, t_3, t_4, t_5)^{T}$ (4)
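For reference, the objectives above can be evaluated directly. The Python sketch below transcribes Eqs. (1)-(3) as reconstructed here (the signs follow the reconstruction above), together with the box constraint of Eq. (4), so candidate designs can be checked numerically.

```python
from typing import Sequence, Tuple

def crashworthiness_objectives(t: Sequence[float]) -> Tuple[float, float, float]:
    """Evaluate the three crashworthiness objectives (Eqs. (1)-(3)) for a design
    t = (t1, ..., t5), the thicknesses (in mm) of the five reinforced members."""
    t1, t2, t3, t4, t5 = t
    mass = (1640.2823 + 2.3573285 * t1 + 2.3220035 * t2 + 4.5688768 * t3
            + 7.7213633 * t4 + 4.4559504 * t5)
    a_in = (6.5856 + 1.15 * t1 - 1.0427 * t2 + 0.9738 * t3 + 0.8364 * t4
            - 0.3695 * t1 * t4 + 0.0861 * t1 * t5 + 0.3628 * t2 * t4
            - 0.1106 * t1 ** 2 - 0.3437 * t3 ** 2 + 0.1764 * t4 ** 2)
    intrusion = (-0.0551 + 0.0181 * t1 + 0.1024 * t2 + 0.0421 * t3
                 - 0.0073 * t1 * t2 + 0.024 * t2 * t3 - 0.0118 * t2 * t4
                 - 0.0204 * t3 * t4 - 0.008 * t3 * t5 - 0.0241 * t2 ** 2
                 + 0.0109 * t4 ** 2)
    return mass, a_in, intrusion

def feasible(t: Sequence[float]) -> bool:
    """Box constraint from Eq. (4): every thickness must lie in [1 mm, 3 mm]."""
    return all(1.0 <= ti <= 3.0 for ti in t)

design = (2.0, 2.0, 2.0, 2.0, 2.0)
print(feasible(design), crashworthiness_objectives(design))
```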

5.3. Performance evaluation criteria and experimental settings

A set of experiments is conducted on the multi-objective vehicle crashworthiness design problem, as a real-world problem, to evaluate the performance of the multi-objective choice-function-based hyper-heuristics: HHMO_CF_AM, HHMO_CF_GDA, and HHMO_CF_LA. The performance of the three multi-objective hyper-heuristics is compared to that of the well-known multi-objective evolutionary algorithm NSGAII [20]. Five performance metrics are used to measure the quality of the approximation sets from different aspects: (i) RNI [5], (ii) SSC [60], (iii) UD [61], (iv) generational distance (GD) [62], and (v) inverted generational distance (IGD) [63]. In addition, the t-test is used as a statistical test for the average performance comparison of the selection hyper-heuristics. Thirty independent runs are performed for each comparison method using the same parameter settings as those provided in Ref. [31] with a population size equal to 30. All multi-objective hyper-heuristic methodologies run for a total of 75 iterations (stages). In each iteration, a low-level heuristic is selected and applied to execute 50 generations. Thus, all methods are terminated after 3750 generations. Other experimental settings are provided in Refs. [24, 25].

5.4. Results

The mean performance comparison of AM, GDA, LA, and NSGAII based on the performance metrics (RNI, SSC, UD, GD, and IGD) for solving the vehicle crashworthiness problems is provided in Tables 2 and 3.

Method | RNI (AVG / MIN / MAX / STD) | SSC (AVG / MIN / MAX / STD) | UD (AVG / MIN / MAX / STD)
NSGAII | 1.00 / 1.00 / 1.00 / 0.00 | 7.936E+07 / 4.168E+07 / 9.587E+07 / 1.595E+07 | 0.592 / 0.532 / 0.670 / 0.045
AM | 1.00 / 1.00 / 1.00 / 0.00 | 7.381E+07 / 5.315E+07 / 9.577E+07 / 1.463E+07 | 0.585 / 0.516 / 0.707 / 0.050
GDA | 1.00 / 1.00 / 1.00 / 0.00 | 8.289E+07 / 6.294E+07 / 9.580E+07 / 1.954E+07 | 0.613 / 0.555 / 0.692 / 0.034
LA | 1.00 / 1.00 / 1.00 / 0.00 | 7.538E+07 / 4.512E+07 / 9.550E+07 / 1.474E+07 | 0.582 / 0.302 / 0.641 / 0.062

Table 2.

The performance of NSGAII and the selection choice-function-based hyper-heuristics using different move-acceptance strategies on the vehicle crashworthiness problems with respect to the metrics: the ratio of non-dominated individuals (RNI), the hypervolume (SSC), and the uniform distribution (UD).

Method | GD (AVG / MIN / MAX / STD) | IGD (AVG / MIN / MAX / STD)
NSGAII | 2.48E-03 / 1.46E-03 / 4.21E-03 / 9.10E-04 | 4.156E-03 / 1.543E-03 / 1.289E-02 / 3.859E-03
AM | 2.71E-03 / 1.59E-03 / 4.06E-03 / 7.90E-04 | 4.376E-03 / 1.738E-03 / 1.288E-02 / 4.168E-03
GDA | 2.11E-03 / 1.10E-03 / 4.28E-03 / 7.10E-04 | 3.552E-03 / 1.661E-03 / 1.230E-02 / 3.075E-03
LA | 3.32E-03 / 1.70E-03 / 6.76E-03 / 1.33E-03 | 3.604E-03 / 1.525E-03 / 1.238E-02 / 2.582E-03

Table 3.

The performance of NSGAII and the selection choice-function-based hyper-heuristics using different move-acceptance strategies on the vehicle crashworthiness problems with respect to the metrics: the generational distance (GD) and the inverted generational distance (IGD).

For each performance metric, the average, minimum, maximum, and standard deviation values are computed. For all metrics, a higher value indicates better performance, except in GD and IGD, where a lower value indicates better performance. The statistical t‐test results of NSGAII and the three multi‐objective choice‐function‐based hyper‐heuristics (AM, GDA, and LA) are given in Table 4.

Methods | RNI, SSC, UD, GD, IGD
NSGAII:AM | n/a, +, ±, ±, ±
NSGAII:GDA | n/a, +, ±
NSGAII:LA | n/a, +, +, +
AM:GDA | n/a, ±
AM:LA | n/a, ±, +
GDA:LA | n/a, +, +, +, ±

Table 4.

The t-test results of NSGAII and the three multi-objective choice-function-based hyper-heuristic methodologies on the multi-objective vehicle crashworthiness design problems with respect to the metrics: the ratio of non-dominated individuals (RNI), the hypervolume (SSC), the uniform distribution (UD), the generational distance (GD), and the inverted generational distance (IGD).

The distribution of the simulation data of the 30 independent runs for the comparison methods with respect to these performance metrics is visualized as box plots, shown in Figure 2. The results indicate that all methods perform similarly to each other with respect to the RNI metric. GDA exhibits the best performance in the SSC, GD, and IGD metrics, and it converges better toward the POF than the other methods. GDA also exhibits the best performance in the UD metric, distributing solutions more uniformly than the other methods.

Figure 2.

Box plots of the multi-objective choice-function-based hyper-heuristic methodologies and NSGAII on the multi-objective vehicle crashworthiness design problems.

The 50% attainment surface for each method, from the 30 fronts after 3750 generations, is computed and illustrated in Figure 3. GDA appears to achieve good convergence. It can be clearly observed that GDA converges to the best POF with a well-spread Pareto front as compared to the other approaches. By contrast, AM generates the poorest solutions, while NSGAII and LA have similar convergence. In general, hyper-heuristics for real-world multi-objective problems benefit from the use of a learning heuristic selection method as well as GDA. The results demonstrate the effectiveness of our selection hyper-heuristics, particularly when combined with the great deluge algorithm as a move-acceptance criterion; HHMO_CF_GDA turns out to be the best choice for solving this problem. Although the other multi-objective hyper-heuristics still produce solutions of acceptable quality in some cases, they could not perform as well as NSGAII. In summary, the results on the real-world problem demonstrate the capability and potential of the multi-objective hyper-heuristic approaches in solving continuous multi-objective optimization problems.

Figure 3.

The 50% attainment surfaces for NSGAII and the three multi‐objective choice‐function‐based hyper‐heuristics (AM, GDA, and LA) after 3750 generations on the multi‐objective design of vehicle crashworthiness problem.

6. Some promising research areas

Multi-objective hyper-heuristics offer interesting potential research directions in multi-objective optimization. Some of these promising research areas are recommended as follows:

  • Many heuristic selection methods can be adapted from previous research in single-objective optimization and used for multi-objective optimization within a multi-objective selection hyper-heuristic. This is not a trivial process, as it requires elaboration of the existing methods and an assessment of their usefulness in a multi-objective setting. Furthermore, other acceptance criteria such as simulated annealing (SA) and tabu search (TS) [58] could be employed as move-acceptance components within the selection hyper-heuristic framework for multi-objective optimization. As those criteria involve many parameters, this methodology would require initial experiments to tune the parameters for multi-objective settings, such as defining a cooling schedule and an initial temperature for SA, and an aspiration criterion and tabu tenure for TS.

  • There are many possible choices of low-level heuristics and, therefore, great scope for research in this area. It would be interesting to employ state-of-the-art multi-objective optimizers and other population-based methods that obtain promising results as low-level heuristics within the multi-objective combination/selection hyper-heuristic framework.

  • It would be interesting to test the level of generality of existing multi‐objective hyper‐heuristic frameworks further on some other problems and domains, including the continuous real‐valued constrained, combinatorial, discrete, and dynamic problems.

  • Since no studies that address multi-objective generation hyper-heuristics are found in the scientific literature, it would be interesting to propose a multi-objective generation hyper-heuristic framework. This indicates great scope for research. Many generation hyper-heuristic methodologies have been successfully applied to single-objective optimization; it would also be interesting to modify them to deal with multi-objective optimization.
