Open access

Grasp and Path Relinking to Solve the Problem of Selecting Efficient Work Teams

Written By

Fernando Sandoya and Ricardo Aceves

Submitted: 18 August 2012 Published: 30 January 2013

DOI: 10.5772/53700

From the Edited Volume

Recent Advances on Meta-Heuristics and Their Application to Real Scenarios

Edited by Javier Del Ser


1. Introduction

The selection of objects, activities, people, projects, resources, etc. is an activity that human beings carry out frequently, guided by some objective and by one or more criteria: economic, spatial, emotional, political, and so on. For example, in daily life people must select which means of transportation and which route to use to reach a given destination, according to the price, the duration of the trip, etc. In these cases one must select the best subset of elements from a large set of possibilities, the best in some sense; in many cases it is desirable that the selected elements not resemble one another, but rather have different characteristics, so that they represent the diversity existing in the original collection. At this level people make such decisions intuitively, but common sense is generally not a good advisor for problems that require optimized decision-making, and simple procedures that apparently offer effective solutions can lead to bad decisions; this can be avoided by applying mathematical models that guarantee effective solutions. In other human activities the selection of such a subset has economic implications that call for choosing a more diverse subset, a crucial and difficult decision that requires a properly guided optimization process.

In the Operations Research literature, the maximum diversity problem (MDP) can be formulated in the following manner: if V = {1, 2, …, n} is the original set and M ⊆ V is the selected subset, the optimization objective is:

\[ \max \; f_1(M) = \operatorname{div}(M) \tag{1} \]

In equation (1) the objective function div(M) represents the measure of the diversity of the selected subset. There are several models for achieving this goal, as well as a number of practical applications, as reported in [1, 2, 3, 4, 5]; in particular, we target the Max-Mean dispersion model, in which the average distance between the selected elements is maximized. In this way the search is not only for maximum diversity but also for an equitable selected set, and the number of selected elements is itself a decision variable, as mentioned in [6].

Traditionally the MDP has allowed the resolution of concrete problems of great interest, for example: the location of mutually competitive logistic facilities (see [3] for an illustration), the composition of panels of judges [7], the location of dangerous facilities [1], the design of new drugs [8], and the formulation of immigration and admission policies [9].

In the past, much of the public interest in diversity centered around themes such as justice and representation. Lately, however, there has been a growing interest in exploiting the benefits of diversity. Recently, in [6], a potential application to the selection of efficient work teams was mentioned. In practice, there are many examples in which diversity in a group enhances the group's ability to solve problems and thus leads to more efficient teams, firms, and schools. For this reason researchers have begun to identify how to take advantage of diversity in human organizations, starting with the role played by diversity in groups of people. For example, in [10] Page et al. introduce a general framework modeling how diverse groups solve problems. In that setting, expert problem solvers possess different ways of representing the problem and their own algorithms for finding solutions. This approach can be used to establish a result about the composition of an efficient team within a company: when selecting a problem-solving team from a population of intelligent agents, a team of randomly selected agents outperforms a team composed of the individually best agents. This result is based on the intuition that, as the initial pool of problem solvers grows, the highest-ability agents tend to arrive at similar conclusions and get stuck in the same local optima, so that their greater individual ability is more than offset by their lack of diversity.

This chapter is organized in the following manner. Section 2 studies concepts related to diversity and how it can be measured. Section 3 introduces the classic Maximum Diversity Problem, with its different variants, and the new Max-Mean problem, with which we attempt to address the objective described by equation (1); the mathematical programming formulations of these problems are also reviewed and their properties explored. In Section 4 an algorithm based on GRASP with Path Relinking is developed, in which the local search is based on the Variable Neighborhood Search methodology, and Section 5 documents an extensive computational experimentation. Section 6 presents an application to the selection of efficient work teams, and Section 7 gives the conclusions.


2. Distances, similarities, and diversity

2.1. Definitions

Similarity is understood as a resemblance between people or things. Although it is common to regard diversity as the opposite of similarity, the two terms operate at different levels: similarity is a local function defined for each pair of elements, whereas diversity is a characteristic associated with a set of elements, calculated as a function of the dissimilarities between all possible pairs, where dissimilarity is the exact opposite of similarity.

To be more specific, measuring the diversity of M, div(M), first requires a clear definition of the connection, distance, or dissimilarity between each pair i, j ∈ M. The estimation of this distance depends on the concrete problem being analyzed; in particular, in complex systems such as social groups, a fundamental operation is the assessment of the similarity between each pair of individuals. Many similarity measures have been proposed in the literature; in many cases similarity is assessed as a distance in a space with adequate characteristics, generally a metric space, as for example with the Euclidean distance. In most applications each element is assumed to be representable by a collection of attributes, and defining x_ik as the value of attribute k of element i, the Euclidean distance, for example, is:

\[ d_{ij} = \sqrt{\sum_{k} \left( x_{ik} - x_{jk} \right)^{2}} \]

Under this model d satisfies the axioms of a metric, although the empirical observation of attractions and differences between individuals forces us to abandon these axioms, since they impose an unnecessarily rigid system with properties that cannot adequately accommodate the framework of this investigation: the measurement of similarities between people.

In the literature one can find different similarity measures that can be applied to groups of people. For example, in [11] it is stated that the cosine similarity is a popular similarity measure. On the other hand, in [10] a dissimilarity measure is established to treat the relation between the diversity and the problem-solving productivity of groups of people. These measures are developed in Section 2.2. In [6] a similar measure is used to solve a real case.

2.2. Similarity measurements

Given two individuals i, j with attribute vectors x_i = (x_i1, x_i2, …, x_ip) and x_j = (x_j1, x_j2, …, x_jp), the cosine similarity measure is defined as:

\[ d_{ij} = \frac{\sum_{k=1}^{p} x_{ik}\, x_{jk}}{\sqrt{\sum_{k=1}^{p} x_{ik}^{2}}\;\sqrt{\sum_{k=1}^{p} x_{jk}^{2}}} \tag{2} \]

On the other hand, in [10] the authors study how the diversity present in a group can increase its problem-solving efficiency; in particular, in their investigation the authors use the following dissimilarity measure:

\[ d_{ij} = \frac{\sum_{l=1}^{p} \delta\!\left( x_{il}, x_{jl} \right)}{p} \tag{3} \]

Where:

\[ \delta\!\left( x_{il}, x_{jl} \right) = \begin{cases} -1 & \text{if } x_{il} = x_{jl} \\ \left| x_{il} - x_{jl} \right| & \text{if } x_{il} \neq x_{jl} \end{cases} \]

This measure takes negative values (when the attributes coincide, i.e. in the case of similarity) and positive values (in the case of dissimilarity). In general terms, we refer to d_ij as the dissimilarity or the distance between i and j.
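To make these two measures concrete, the following minimal Python sketch computes the cosine similarity of equation (2) and the attribute-wise dissimilarity of equation (3); the attribute vectors used to exercise it are illustrative assumptions, not data from this chapter.

```python
import math

def cosine_similarity(xi, xj):
    """Equation (2): cosine of the angle between the attribute vectors of i and j."""
    dot = sum(a * b for a, b in zip(xi, xj))
    norm_i = math.sqrt(sum(a * a for a in xi))
    norm_j = math.sqrt(sum(b * b for b in xj))
    return dot / (norm_i * norm_j)

def mean_dissimilarity(xi, xj):
    """Equation (3): average of the attribute-wise delta, which is -1 when the
    attributes coincide and |difference| when they differ."""
    def delta(a, b):
        return -1.0 if a == b else abs(a - b)
    return sum(delta(a, b) for a, b in zip(xi, xj)) / len(xi)

# illustrative attribute vectors for two individuals (p = 4 attributes)
xi = [1, 0, 3, 2]
xj = [1, 2, 3, 5]
print(cosine_similarity(xi, xj))   # close to 1 -> very similar directions
print(mean_dissimilarity(xi, xj))  # mixes -1 (equal attributes) with positive differences
```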

2.3. Equity, diversity, and dispersion

The growing interest in the treatment of diversity has also originated in the effort to study the management of fairness, that is, all the practices and processes used in organizations to guarantee a just and equitable treatment of individuals and institutions. Generally speaking, fair treatment is treatment that has or exhibits equity, the terms just, objective, and impartial being synonyms. Many authors, like French in [12], argue that equity has to do with justice, for example in the distribution of resources or of facilities and public service infrastructure, and in the same manner the achievement of equity in diversity has been identified as a problem of selection and distribution. In summary, one can say that equity represents a concern for justice, understood as a complex pattern of decisions, actions, and results in which each element participates as a member of a given subset.

The other subproblem that must be resolved is how to measure diversity. Given a set V = {1, 2, …, n}, a dissimilarity measure d_ij defined between every pair of elements of V, and a subset M ⊆ V, different measures of diversity have been established.

2.4. The measure of dispersion of the sum

This measure computes the diversity of a subset as the sum of the dissimilarities between all pairs of its elements; that is, the diversity of a subset M is calculated with equation (4):

\[ \operatorname{div}(M) = \sum_{\substack{i<j \\ i,j \in M}} d_{ij} \tag{4} \]

2.5. The measurement of dispersion of the minimum distance

In this case the diversity of a given subset is established as the minimum of the dissimilarities between the pairs of elements of the set; that is, as in equation (5):

\[ \operatorname{div}(M) = \min_{\substack{i<j \\ i,j \in M}} d_{ij} \tag{5} \]

This type of measure can be useful in contexts where very close elements are undesirable, so that a large minimum distance is important.

2.6. The measurement of the average dispersion

For a subset M, the average diversity is calculated by the expression in equation (6):

\[ \operatorname{div}(M) = \frac{\sum_{\substack{i<j \\ i,j \in M}} d_{ij}}{|M|} \tag{6} \]

Notice that this diversity measure is intimately related to the sum dispersion measure, which constitutes the numerator of equation (6). Some recent references measure diversity in this manner; for example, in [13], in the context of Case-Based Reasoning (CBR) systems, the authors define the diversity of a subset of cases as the average dissimilarity between all pairs of cases considered. Likewise, in [6] the diversity of a subset is defined by equation (6) within the context of equitable dispersion models.
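As a concrete reference, the sketch below evaluates the three dispersion measures (4)-(6) for a subset M, given a precomputed dissimilarity matrix d; the small matrix used to exercise it is an illustrative assumption.

```python
from itertools import combinations

def div_sum(M, d):
    """Equation (4): sum of the dissimilarities over all pairs in M."""
    return sum(d[i][j] for i, j in combinations(sorted(M), 2))

def div_min(M, d):
    """Equation (5): smallest pairwise dissimilarity in M."""
    return min(d[i][j] for i, j in combinations(sorted(M), 2))

def div_mean(M, d):
    """Equation (6): average dissimilarity, i.e. div_sum(M) / |M|."""
    return div_sum(M, d) / len(M)

# illustrative symmetric dissimilarity matrix for n = 4 elements
d = [[ 0.0, 1.0, 0.4, -0.2],
     [ 1.0, 0.0, 0.7,  0.3],
     [ 0.4, 0.7, 0.0,  0.9],
     [-0.2, 0.3, 0.9,  0.0]]
M = {0, 1, 2}
print(div_sum(M, d), div_min(M, d), div_mean(M, d))
```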


3. The maximum diversity problem

Once it is determined how to resolve the subproblem of estimating the diversity of a set, the next step is to establish the optimization problem of finding the subset with maximum diversity. This problem is known in the literature as the Maximum Diversity Problem.

The most studied model is probably the problem that maximizes the sum of the distances or dissimilarities between the selected elements, that is, the sum diversity measure established in equation (4). In the literature this problem is also known under other names, such as the Max-Sum problem [14], the Maximum Dispersion problem [15], the Maximum Edge Weight Clique problem [16], the Maximum Edge-Weighted Subgraph problem [18], or the Dense k-Subgraph problem [19].

Recently another model has been proposed in the context of equitable dispersion models [20]; this model is called the Maximum Mean Dispersion Problem (Max-Mean), the optimization problem that consists in maximizing equation (6). One of its principal characteristics, which distinguishes it from the rest of the diversity models, is that the number of selected elements is also a decision variable.

3.1. Formulations & mathematical programming models

Given a set V = {1, 2, …, n} and the dissimilarity relation d_ij, the problem is to select a subset M ⊆ V, of cardinality m < n, of maximum diversity:

\[ \max_{M \subseteq V} \; f_1(M) = \operatorname{div}(M) \tag{7} \]

The manner in which diversity is measured in equation (7) gives rise to the formulations of the different maximum diversity problems.

3.2. The Max-Sum problem

The Max-Sum problem consists in selecting the subset with maximum diversity, measured according to equation (4):

\[ \max_{\substack{M \subseteq V \\ |M| = m}} \; \sum_{\substack{i<j \\ i,j \in M}} d_{ij} \]

Introducing the binary variables x_i, 1 ≤ i ≤ n, where x_i = 1 if element i is selected and 0 otherwise:

Therefore, this problem can be formulated as a problem of quadratic binary programming:

\[ \max \; \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} d_{ij}\, x_i x_j \tag{8} \]
\[ \text{s.t.} \quad \sum_{i=1}^{n} x_i = m \tag{9} \]
\[ x_i \in \{0,1\}, \; 1 \le i \le n \tag{10} \]

3.3. The Max-Mean problem

This problem can be described as:

\[ \max_{\substack{M \subseteq V \\ |M| \ge 2}} \; \frac{\sum_{\substack{i<j \\ i,j \in M}} d_{ij}}{|M|} \]

Generically speaking, this problem deals with the maximization of the average diversity. A mathematical programming formulation with binary variables is then:

\[ \max \; \frac{\sum_{i=1}^{n-1} \sum_{j=i+1}^{n} d_{ij}\, x_i x_j}{\sum_{i=1}^{n} x_i} \tag{11} \]
\[ \text{s.t.} \quad \sum_{i=1}^{n} x_i \ge 2 \tag{12} \]
\[ x_i \in \{0,1\}, \; 1 \le i \le n \tag{13} \]

In this problem the objective function (11) is the average of the sum of the distances between the selected elements, and constraint (12) indicates that at least two elements must be selected. As presented in [20], this is a fractional binary optimization problem, but it can be linearized by introducing auxiliary variables, so that the problem is formulated by equations (14) to (19):

\[ \max \; \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} d_{ij}\, z_{ij} \tag{14} \]
\[ \text{s.t.} \quad y - z_i \le 1 - x_i; \;\; z_i \le y; \;\; z_i \le x_i; \;\; z_i \ge 0; \quad 1 \le i \le n \tag{15} \]
\[ y - z_{ij} \le 2 - x_i - x_j; \;\; z_{ij} \le y; \;\; z_{ij} \le x_i; \;\; z_{ij} \le x_j; \;\; z_{ij} \ge 0; \quad 1 \le i < j \le n \tag{16} \]
\[ \sum_{i=1}^{n} x_i \ge 2 \tag{17} \]
\[ \sum_{i=1}^{n} z_i = 1 \tag{18} \]
\[ x_i \in \{0,1\}, \; 1 \le i \le n \tag{19} \]
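The following sketch shows how the linearized formulation (14)-(19) can be written with a generic MIP modeller. It uses the open-source PuLP library and its default CBC solver purely for illustration (the experiments reported below use Cplex 12.4.0), and the 4-element distance matrix is an assumed toy instance.

```python
from itertools import combinations
import pulp

# toy symmetric distances d_ij for i < j (assumption for illustration only)
d = {(0, 1): 1.0, (0, 2): 0.4, (0, 3): -0.2,
     (1, 2): 0.7, (1, 3): 0.3, (2, 3): 0.9}
n = 4

prob = pulp.LpProblem("max_mean_linearized", pulp.LpMaximize)
x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(n)}
y = pulp.LpVariable("y", lowBound=0)                                 # y plays the role of 1 / sum_i x_i
z = {i: pulp.LpVariable(f"z_{i}", lowBound=0) for i in range(n)}     # z_i linearizes y * x_i
zz = {(i, j): pulp.LpVariable(f"zz_{i}_{j}", lowBound=0)
      for i, j in combinations(range(n), 2)}                         # z_ij linearizes y * x_i * x_j

prob += pulp.lpSum(d[i, j] * zz[i, j] for i, j in zz)                # objective (14)
for i in range(n):                                                   # constraints (15)
    prob += y - z[i] <= 1 - x[i]
    prob += z[i] <= y
    prob += z[i] <= x[i]
for i, j in zz:                                                      # constraints (16)
    prob += y - zz[i, j] <= 2 - x[i] - x[j]
    prob += zz[i, j] <= y
    prob += zz[i, j] <= x[i]
    prob += zz[i, j] <= x[j]
prob += pulp.lpSum(x.values()) >= 2                                  # constraint (17)
prob += pulp.lpSum(z.values()) == 1                                  # constraint (18) forces y = 1 / |M|

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([i for i in range(n) if x[i].value() > 0.5], pulp.value(prob.objective))
```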

Notice that the Max-Mean problem cannot be solved by applying a solution method for any of the other problems, unless it is applied repeatedly for all the possible values of m = |M|, m = 2, 3, …, n. Surprisingly, as seen in Section 4, finding the solution of the Max-Mean problem with exact methods by solving n−1 Max-Sum problems requires much less time than solving the formulation (14)-(19) directly.


3.4. Computational complexity

It is known that the Max-Sum problem is strongly NP-hard, as demonstrated in [9]. Recently, it has also been demonstrated in [20] that the Max-Mean problem is strongly NP-hard if the dissimilarity measures can take both positive and negative values. Property 3 is proved here; it indicates that if d_ij satisfies the properties of a metric, then the diversity div(M) for any M ⊂ V is always less than div(M ∪ {k}) for any k ∉ M. Therefore a solution with m < n elements cannot be optimal for the Max-Mean problem, and in this case the optimum is to select all the elements.

Property 1 [9]:

The Max-Sum Problem is Strongly NP-hard.

Property 2 [6]:

If the dissimilarity coefficients d_ij are not restricted in sign, then the Max-Mean problem is strongly NP-hard.

Property 3:

The Max-Mean problem has the trivial solution M = V if the dissimilarity measure is a metric.

Proof:

The Max-Mean problem consists in selecting a subset M such that div(M) is maximized. We show that for instances in which the dissimilarities are non-negative, symmetric, and satisfy the triangular inequality, the solution of the Max-Mean problem is to select all the elements, that is, M = V.

For all i, j ∈ M and k ∉ M the triangular inequality establishes that d_ij ≤ d_ik + d_jk.

Adding over all the possible pairs of elements in M:

\[ \sum_{\substack{i,j \in M \\ i<j}} d_{ij} \;\le\; \sum_{\substack{i,j \in M \\ i<j}} \left( d_{ik} + d_{jk} \right) \]

But the right-hand side of the last expression is equivalent to |M|−1 times Σ_{i∈M} d_ik, since each element of M appears in exactly |M|−1 of the pairs.

Writing m = |M|, we then have:

\[ \sum_{\substack{i,j \in M \\ i<j}} d_{ij} \;\le\; \sum_{\substack{i,j \in M \\ i<j}} \left( d_{ik} + d_{jk} \right) = (m-1) \sum_{i \in M} d_{ik} < m \sum_{i \in M} d_{ik} \]

Dividing by m one has:

\[ \frac{1}{m} \sum_{\substack{i,j \in M \\ i<j}} d_{ij} < \sum_{i \in M} d_{ik} \]

Adding the term \( \sum_{i,j \in M,\, i<j} d_{ij} \) to both sides of the last inequality:

\[ \frac{m+1}{m} \sum_{\substack{i,j \in M \\ i<j}} d_{ij} < \sum_{\substack{i,j \in M \\ i<j}} d_{ij} + \sum_{i \in M} d_{ik} = \sum_{\substack{i,j \in M \cup \{k\} \\ i<j}} d_{ij} \]

Finally, dividing by m+1:

\[ \operatorname{div}(M) = \frac{1}{m} \sum_{\substack{i,j \in M \\ i<j}} d_{ij} < \frac{1}{m+1} \sum_{\substack{i,j \in M \cup \{k\} \\ i<j}} d_{ij} = \operatorname{div}\!\left( M \cup \{k\} \right) \]


4. An efficient method to solve the Max-Mean problem

4.1. Exact solution for the MIP formulation

It is evident that an optimal solution to the Max-Mean problem can be obtained in an indirect manner by solving the Max-Sum model for all possible values of m, that is, for m = 2, 3, …, n, and then dividing each resulting objective value by the corresponding value of m. The best of these n−1 values is then the optimum of the Max-Mean model. Therefore, if Z*_Max-Sum(m) is the optimal objective value of the Max-Sum problem with m selected elements, and Z*_Max-Mean is the optimal value of the Max-Mean problem, then:

\[ Z^{*}_{\text{Max-Mean}} = \max_{m \in \{2, \ldots, n\}} \frac{Z^{*}_{\text{Max-Sum}}(m)}{m} \]
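For very small instances this indirect strategy can be illustrated by brute force, solving each Max-Sum problem by enumeration rather than with a MIP solver as done in the experiments below; the 5×5 matrix is an assumed toy instance.

```python
from itertools import combinations

def max_sum_exact(d, n, m):
    """Z*_Max-Sum(m): best sum of pairwise distances over subsets of size m (by enumeration)."""
    return max(sum(d[i][j] for i, j in combinations(M, 2))
               for M in combinations(range(n), m))

def max_mean_exact(d, n):
    """Z*_Max-Mean = max over m in {2,...,n} of Z*_Max-Sum(m) / m."""
    return max(max_sum_exact(d, n, m) / m for m in range(2, n + 1))

# illustrative 5x5 symmetric distance matrix with positive and negative entries
d = [[ 0.0,  0.8, -0.3,  0.5,  0.1],
     [ 0.8,  0.0,  0.6, -0.4,  0.7],
     [-0.3,  0.6,  0.0,  0.9,  0.2],
     [ 0.5, -0.4,  0.9,  0.0, -0.1],
     [ 0.1,  0.7,  0.2, -0.1,  0.0]]
print(max_mean_exact(d, 5))
```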

This research considers two new types of test instances:

  • Type I: This set contains 60 matrices of sizes n = 20, 25, 30, 35, 150, and 500, with coefficients generated as random numbers from a uniform distribution on [−1, 1].

  • Type II: There are also 60 symmetric matrices, with n = 20, 25, 30, 35, 150, and 500, but with coefficients generated as random numbers from a uniform distribution on [−1, −0.5] ∪ [0.5, 1].

These test instances are available on the web site of the OPTSICOM project [21].

Figure 1 shows the result of solving the Max-Mean problem in this indirect way for the test instances of type I and type II of size n = 30, solving exactly, for each example, 29 Max-Sum problems, each with the quadratic binary formulation (8)-(10). In this investigation the Max-Sum problems are solved by the dynamic search method of Cplex 12.4.0, a professional solver for mixed integer linear programming problems. Progress in computer technology and in the design of efficient MIP algorithms and their implementation in Cplex 12.4.0, together with mathematical advances, leads in some cases to satisfactory solution times. Unfortunately, the MIP formulation described above cannot be solved in reasonable times for medium or large problems.

Figure 1 also shows that the Max-Mean value of the Max-Sum solution increases as m increases from 2 up to a certain value, and then decreases over the rest of the range. We have observed the same pattern (approximately a concave function) in all the examples tested with positive and negative distances randomly generated. We will exploit this pattern to design an efficient GRASP algorithm.

Figure 1.

Evolution of the optimal values of the Max-Sum problem divided by the value of m

Table 1 shows, for each method and for each problem size, the average value of the objective function in the optimal solution (Value), the average number of elements selected in the optimal solution (m), and the average time in seconds (CPU); ND means that the value is not available because the solution was not reached within 5 hours. Cplex 12.4.0 only permitted solving small problems in moderate times. In particular, the linear formulation (14)-(19) could only be solved for test instances with n < 30, and for n = 30 the solution could not be obtained in a 5-hour run. The experiments with Cplex corroborate the difficulties that commercial branch-and-bound codes encounter when approaching the Max-Sum and Max-Mean problems in this manner.

n    Measure    Type I: Max-Mean   Type I: Max-Sum (n−1) times   Type II: Max-Mean   Type II: Max-Sum (n−1) times
20   CPU (s)    50.334             14.662                        66.714              19.164
     Value      1.443              1.443                         1.898               1.898
     m          7.400              7.400                         7.500               7.500
25   CPU (s)    694.606            41.826                        1995.100            59.581
     Value      1.732              1.732                         2.207               2.207
     m          9.800              9.800                         9.600               9.600
30   CPU (s)    > 5 hours          102.303                       > 5 hours           182.176
     Value      ND                 1.875                         ND                  2.383
     m          ND                 10.700                        ND                  10.800

Table 1.

Max-Mean Problem Solutions obtained with Cplex 12.4.0

Surprisingly, the Max-Sum model applied n−1 times allows solving instances of a greater size in less time: the solution for n = 30 could be obtained in 102.30 seconds on average, and for n = 35 in 719.51 seconds, in the type I problems; the type II problems require more time. However, for instances of size n = 50 the optimal solution could not be obtained within 5 hours with this strategy.

It can be concluded that if one wishes to solve the Max-Mean problem exactly, it is preferable to use the strategy of solving the Max-Sum model n−1 times, since it consistently required much less time in all the experiments. This may be due to the fact that the continuous relaxation of the Max-Sum problem provides better bounds than the continuous relaxation of the Max-Mean formulation.

Given that the maximum diversity problems are NP-hard, it is clear that a heuristic must be designed to solve medium and large problems. In [6] an algorithm is developed, based on GRASP, that exploits the characteristics of the Max-Mean problem and is hybridized with other successful intensification techniques, namely Path Relinking (PR) and Variable Neighborhood Search (VNS). This algorithm has proved to be an efficient solution method for medium and large problems.

4.2. Solving the Max-Mean problem

In this section we describe the heuristic developed in [6] to solve the Max-Mean problem. This heuristic consists of a GRASP construction phase and a local search phase based on the Variable Neighborhood Search methodology; it is subsequently improved with the incorporation of a post-processing phase based on Path Relinking.

4.3. GRASP construction phase

From the results shown in Figure 1, we can design a new constructive method in which we add elements to the partial solution under construction as long as the Max-Mean value improves, and when this value starts to decrease, we stop the construction. In this way, the method selects by itself the value of m, which seems adequate to this problem.

Instead of a typical GRASP construction for diversity, in which each candidate element is first evaluated by a greedy function to construct the Restricted Candidate List (RCL) and then an element is selected at random from the RCL, we use an alternative design, in line with what has been proposed in recent studies [22], in which applying the randomization first and then the greediness can obtain improved outcomes. In particular, in our constructive method for the Max-Mean problem, we first randomly choose candidates and then evaluate each candidate according to the greedy function, selecting the best one, which yields better results.

More specifically, given a partial solution M_k with k selected elements, the candidate list CL is formed by the n−k unselected elements. The restricted candidate list, RCL, contains a fraction α (0 < α < 1) of the elements of CL selected at random, where α is a parameter that should be chosen adequately, generally through computational experiments.

Then, for each element i ∈ RCL, the method computes its contribution, eval(i), if it were added to M_k to obtain M_k ∪ {i}:

\[ \operatorname{eval}(i) = \operatorname{div}\!\left( M_k \cup \{i\} \right) - \operatorname{div}\!\left( M_k \right) \]

where div is the mean diversity defined in equation (6).

Afterwards, the method selects the best candidate i* in the RCL if it improves the current partial solution, that is, if eval(i*) > 0, and adds it to the partial solution, M_{k+1} = M_k ∪ {i*}; otherwise, if eval(i*) ≤ 0, the method stops.

Figure 2 shows the pseudo-code of the construction phase of the method, which we call the GRASP heuristic.

Figure 2.

GRASP construction phase
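A minimal Python sketch of this construction phase is given below. The seeding of the partial solution with the most distant pair, the value of α, and the random source are assumptions made for illustration; the chapter only specifies the randomization-first, greediness-second selection and the stopping rule, and div_mean is the mean diversity of equation (6).

```python
import random
from itertools import combinations

def div_mean(M, d):
    """Mean diversity of equation (6)."""
    if len(M) < 2:
        return 0.0
    return sum(d[i][j] for i, j in combinations(sorted(M), 2)) / len(M)

def grasp_construction(d, n, alpha=0.3, rng=random):
    # assumed seeding: start from the pair of elements with the largest distance
    i, j = max(combinations(range(n), 2), key=lambda p: d[p[0]][p[1]])
    M = {i, j}
    while True:
        CL = [k for k in range(n) if k not in M]          # candidate list
        if not CL:
            return M
        # randomization first: sample a fraction alpha of the candidate list
        RCL = rng.sample(CL, max(1, int(alpha * len(CL))))
        # greediness second: pick the RCL candidate with the best contribution eval(i)
        best = max(RCL, key=lambda k: div_mean(M | {k}, d) - div_mean(M, d))
        if div_mean(M | {best}, d) - div_mean(M, d) > 0:
            M.add(best)                                   # the Max-Mean value still improves
        else:
            return M                                      # eval(i*) <= 0: stop the construction
```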

4.4. Local search in GRASP

The GRASP construction usually does not reach a local optimum, and it is customary in GRASP to apply a local search method to the constructed solution. As shown in [6], previous local search methods for diversity problems limit themselves to exchanging a selected element with an unselected one, keeping the number m of selected elements constant. Since we do not have this size constraint in the Max-Mean model and we admit solutions with any value of m, we can consider an extended neighborhood based on the Variable Neighborhood Descent (VND) methodology.

We consider the combination of three neighborhoods in our local search procedure:

  • N1: Remove an element from the current solution, thus reducing the number of selected elements by one unit.

  • N2: Exchange a selected element with an unselected one, keeping constant the number of selected elements.

  • N3: Add an unselected element to the set of selected elements, thus increasing its size by one unit.

The order of exploration of the neighborhoods is chosen so as to try, whenever possible, to diminish the number of selected elements while also increasing diversity, which happens when a better solution is obtained in N1. If this is not possible, one conserves the cardinality of the selected set while still increasing diversity, which is what happens when exploring neighborhood N2. Finally, by exploring N3, one is willing to increase the cardinality of the selected set if doing so increases diversity.

More specifically, given a solution M_m, the local search first tries to find a solution in N1 that improves it. If it succeeds and finds M'_{m−1} with div(M'_{m−1}) > div(M_m), then we apply the move and consider M'_{m−1} as the current solution. Otherwise, the method resorts to N2 and searches for the first exchange that improves M_m. If it succeeds and finds M'_m with div(M'_m) > div(M_m), then we apply the move and consider M'_m as the current solution. In either case, regardless of whether the improved solution was found in N1 or in N2, in the next iteration the method starts by scanning N1 to improve the current solution. If neither N1 nor N2 contains a solution better than the current one, we finally resort to N3. If the method succeeds, finding M'_{m+1} with div(M'_{m+1}) > div(M_m), then we apply the move and consider M'_{m+1} as the current solution (and come back to N1 in the next iteration). Otherwise, since none of the neighborhoods contains a solution better than the current one, the method stops.

To accelerate the search in these neighborhoods, the exploration is not carried out in a sequential manner over the elements of a given neighborhood; instead, the potential contribution of each element to the current solution is evaluated in the following way. Given a solution M_m, one calculates the contribution of each selected element i, as well as the potential contribution of each unselected element i, as:

\[ ds(i, M_m) = \sum_{j \in M_m} d_{ij} \]

Thus, when exploring N1 the selected elements are scanned in the order given by ds, where the element with the smallest value is tested first. Similarly, when exploring N2 the selected elements are scanned in the same order, but the unselected elements in the reverse order, that is, the unselected elements with the largest potential contribution to the current solution are considered first.

Finally, when exploring N3, the unselected elements considered for addition to the current solution are explored in the same manner as in N2, with the element with the largest contribution considered first. Figure 3 outlines the pseudo-code of this phase.

Figure 3.

Local search in GRASP
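The following sketch outlines this VND local search, reusing div_mean from the construction sketch above; the tie-breaking details and the safeguard that keeps at least two selected elements are assumptions made to keep the example self-consistent.

```python
def ds(i, M, d):
    """Contribution (or potential contribution) of element i with respect to the solution M."""
    return sum(d[i][j] for j in M if j != i)

def vnd_local_search(M, d, n):
    M = set(M)
    improved = True
    while improved:
        improved = False
        current = div_mean(M, d)
        # N1: try to remove an element, scanning the smallest contributions first
        for i in sorted(M, key=lambda i: ds(i, M, d)):
            if len(M) > 2 and div_mean(M - {i}, d) > current:
                M.remove(i); improved = True; break
        if improved:
            continue
        # N2: try to exchange a selected element (weakest first) with an unselected one (strongest first)
        inside = sorted(M, key=lambda i: ds(i, M, d))
        outside = sorted((k for k in range(n) if k not in M),
                         key=lambda k: ds(k, M, d), reverse=True)
        for i in inside:
            for k in outside:
                if div_mean((M - {i}) | {k}, d) > current:
                    M = (M - {i}) | {k}; improved = True; break
            if improved:
                break
        if improved:
            continue
        # N3: try to add an unselected element, largest potential contribution first
        for k in outside:
            if div_mean(M | {k}, d) > current:
                M.add(k); improved = True; break
    return M
```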

4.5. GRASP with path relinking

The Path Relinking algorithm was described for the first time in the framework of the tabu search method; it operates on an Elite Set (ES) of solutions constructed with the application of a previous method. Here we apply GRASP to build ES, considering both quality and diversity. Initially ES is empty, and we apply GRASP for b = |ES| iterations and populate it with the solutions obtained (ordering the solutions in ES from the best x1 to the worst xb). Then, in the following GRASP iterations, we test whether the generated solution x' qualifies to enter ES. Specifically, if x' is better than x1, it enters the set. Moreover, if it is better than xb and it is sufficiently different from the other solutions in the set (d(x', ES) ≥ dth), it also enters ES. To keep the size of ES constant and equal to b, when we add a solution to this set we remove another one. To maintain both quality and diversity, we remove the solution in ES closest to x' among those worse than it in value.

Given two solutions x, y, interpreted as binary vectors with n variables, where variable x_i takes the value 1 if element i is selected and 0 otherwise, the distance d(x, y) can be computed as d(x, y) = Σ_{i=1}^{n} |x_i − y_i|, and the distance between a solution x' and the set ES, d(x', ES), can be computed as the sum of the distances between x' and all the elements in ES.
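A small sketch of this elite set update rule, assuming solutions are kept as Python sets of selected indices and that the distance threshold dth and the size b are parameters fixed beforehand.

```python
def distance(x, y):
    """Distance of the text: sum_i |x_i - y_i|, i.e. the size of the symmetric
    difference when solutions are kept as sets of selected indices."""
    return len(x ^ y)

def distance_to_set(x, ES):
    return sum(distance(x, sol) for sol, _ in ES)

def try_update_elite_set(x, value, ES, b, dth):
    """ES is a list of (solution, value) pairs kept sorted from best to worst, |ES| <= b."""
    if len(ES) < b:                                   # still populating the elite set
        ES.append((x, value))
        ES.sort(key=lambda e: -e[1])
        return True
    best_val, worst_val = ES[0][1], ES[-1][1]
    qualifies = value > best_val or (value > worst_val and distance_to_set(x, ES) >= dth)
    if not qualifies:
        return False
    # keep quality and diversity: drop the solution closest to x among those worse than it
    worse = [e for e in ES if e[1] < value]
    closest = min(worse, key=lambda e: distance(x, e[0]))
    ES.remove(closest)
    ES.append((x, value))
    ES.sort(key=lambda e: -e[1])
    return True
```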

The path relinking procedure PR(x, y) starts from the first solution x, called the initiating solution, and gradually transforms it into the final one, y, called the guiding solution. At each iteration we consider removing an element of x not present in y, or adding an element of y not present in x. The method selects the best one among these candidates, creating the first intermediate solution x(1). Then we consider removing an element of x(1) not present in y, or adding an element of y not present in x(1); the best of these candidates is the second intermediate solution x(2). In this way we generate a path of intermediate solutions until we reach y. The output of the PR algorithm is the best solution, different from x and y, found in the path. We submit this best solution to the improvement method. Figure 4 shows a pseudo-code of the entire GRASP with Path Relinking algorithm, in which we can see that we apply both PR(x, y) and PR(y, x) to all the pairs x, y in the elite set ES.

Figure 4.

GRASP with Path Relinking
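The sketch below follows the PR(x, y) description above: at each step it either drops an element of the current solution not present in the guiding solution or adds an element of the guiding solution not yet present, always taking the best move according to div_mean (from the earlier sketches), and returns the best intermediate solution found. The safeguard of keeping at least two selected elements is an assumption consistent with constraint (12).

```python
def path_relinking(x, y, d):
    """Relink initiating solution x towards guiding solution y (both are sets of indices)."""
    current = set(x)
    guiding = set(y)
    best_sol, best_val = None, float("-inf")
    while current != guiding:
        candidates = []
        for i in current - guiding:               # drop an element of current not in the guide
            if len(current) > 2:
                candidates.append(current - {i})
        for j in guiding - current:               # or add an element of the guide not yet present
            candidates.append(current | {j})
        current = max(candidates, key=lambda M: div_mean(M, d))   # best intermediate move
        if current != set(x) and current != guiding and div_mean(current, d) > best_val:
            best_sol, best_val = set(current), div_mean(current, d)
    return best_sol    # may be None if the path has no intermediate solutions
```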

4.6. Comparison with existing methods

For comparison, we also propose new adaptations of existing methods for several maximum diversity models.

Prokopyev et al. in [20] introduced several models to deal with the equitable dispersion problem and the maximum diversity problem. The authors proposed a GRASP with local search for the Max-MinSum variant, in which for each selected element (in M) they compute the sum of the distances to the other selected elements (also in M) and then take the minimum of these values. The objective of the Max-MinSum model is to maximize this minimum sum of distances. We adapt this method, originally proposed for the Max-MinSum, to the Max-Mean model; we call the adapted method GRASP1.

Also, Duarte and Martí in [26] proposed different heuristics for the Max-Sum model. In particular the authors adapted the GRASP methodology to maximize the sum of the distances among the selected elements. We also adapt this algorithm to solve the Max-Mean Model, and we call the entire method (constructive phase + local search) GRASP2.

Details of the adaptation of these algorithms can be found in [6].

In the final experiment we target the 20 largest instances in our data set (n = 500). Table 3 shows the average results on each type of instance of GRASP1, GRASP2 and our two methods, GRASP and GRASP with Path Relinking, described in this section. The results in Table 3 are in line with the results obtained in the previous experiments. They confirm that GRASP consistently obtains better results than GRASP1 and GRASP2. As shown in the last column of Table 3, Path Relinking is able to improve the results of GRASP in all the instances.


5. Numeric experiments with test instances

This section contains the results of a large number of numerical experiments carried out to evaluate and calibrate the GRASP algorithm, which was implemented in Mathematica V.7 (a computational software program for scientific, engineering, and technical computing developed by Wolfram Research). The experiments were run on an Intel Core 2 laptop, 1.4 GHz with 2 GB of RAM. The parameters of the algorithms were calibrated through extensive computational experiments.

5.1. GRASP heuristic performance on small problems

In this section the performance of the GRASP heuristic is compared with the exact optimum reported for small problems. The results are shown in Table 2.

Small instances of size n = 30 were used, the largest that can be solved exactly with Cplex 12.4.0 in reasonable times. Since the optimum is known, a measure of the precision of the methods is the relative percentage difference with respect to the optimum (GAP). Table 2 shows the average of the objective function (Value), the average number of elements selected (m), the number of times the optimum was reached (# optimal times), the relative difference with respect to the optimum (GAP), and the average time in seconds (CPU Time).

                           GRASP constructive   Cplex 12.4.0
Type I    Value            1.87351              1.874955
          m                10.8                 10.7
          # optimal times  9                    10
          GAP              0.084%               0%
          CPU Time (s)     0.35444              102.303
Type II   Value            2.377163             2.383
          m                10.3                 10.8
          # optimal times  6                    10
          GAP              0.397%               0%
          CPU Time (s)     0.3444               182.176

Table 2.

Performance of the constructive phase in small problems

Applying only the constructive phase of GRASP, the exact optimum is reached 90% of the time for the type I test instances and 80% of the time for the type II instances, in a very short time (less than a second); moreover, in the instances in which the optimum is not found, the GAP is very small.

5.2. Solution to large problems

Since it is no longer possible to compare against the optimal solution of these problems, instead of the GAP we report the percentage deviation with respect to the best solution found in the experiments, represented in the tables as Deviation and equal to:

\[ \text{Deviation} = \frac{\text{best solution} - \text{current solution}}{\text{best solution}} \times 100\% \]

                     GRASP      GRASP+PR   GRASP1     GRASP2
Type I    Value      7.71370    7.7977     6.6796     7.0163
          m          139.4      145.2      154.4      157.6
          # Best     0          10         0          0
          Deviation  1.07%      0.00%      14.31%     10.01%
          CPU (sec.) 717.3      688.1      1414.5     950.9
Type II   Value      10.2957    10.437     88.98      92.68
          m          143.2      144.4      186.1      170.4
          # Best     0          10         0          0
          Deviation  1.53%      0.00%      14.74%     11.18%
          CPU (sec.) 662.422    679.641    804.8      708.3

Table 3.

Comparison of the obtained results with GRASP+PR in large instances

Table 3 shows that the Path Relinking phase improved the results of the GRASP heuristic, GRASP1 (based on [20]) and GRASP2 (based on [26]) in all of the test instances of size n = 500 and for the two types of examples considered.

5.3. Profile of the Variable Neighborhood Search (VNS) neighborhoods used by GRASP

Our local search in the GRASP heuristic uses three types of neighborhoods, generated according to the VNS methodology; these neighborhoods are denoted N1 (remove an element from the solution), N2 (exchange a selected element with an unselected one), and N3 (add an unselected element to the solution). An interesting study is therefore to measure the contribution of each type of neighborhood to the quality of the final solution.

Figure 5 depicts a bar chart with the average number of times, over the 20 instances of size n = 150 used in our preliminary experimentation, that each neighborhood is able to improve the current solution. We can see that, although N2 improves the solutions in a larger number of cases, N1 and N3 are also able to improve them and therefore contribute to the final solution.

Figure 5.

Average number of improvements of each GRASP neighborhood

Interestingly, if one computes the average contribution to the improvement of the objective function provided by the exploration of each type of neighborhood, one observes that neighborhoods N1 and N3 provide the greatest contribution on average compared with the visits to neighborhood N2, as shown in Figure 6.

5.4. Solution of large problems using GRASP with Path Relinking

In this section the experiments carried out with the 20 test instances of size n = 500 are described. Table 3 summarizes the results obtained on the large instances when applying the proposed algorithms; the values correspond to the averages achieved over the test instances of this size.

Figure 6.

Contribution to improving the objective function value for each neighborhood

Figure 7.

Search profile of GRASP and GRASP+PR

5.5. Search profile

Finally, to complete the comparison of the efficiency of the algorithms designed, we plotted the search profile of the algorithms, that is, how each heuristic improves the value of the objective function over the execution time. Figure 7 shows a magnified detail of this profile in the neighborhood of the best values found. The figure clearly shows that GRASP achieves good solutions quickly. In the execution of GRASP+PR, the path relinking phase starts once the elite set ES has been populated, which occurs after approximately 450 seconds on average. From that point, applying the path relinking procedure to each pair of solutions in the elite set, the evolution of the best solution found shows that this phase quickly obtains the best solutions, surpassing GRASP (without PR), which after a certain moment no longer improves its solutions at the same rate as GRASP+PR and is therefore overtaken. Similar profiles are observed for the Type II instances.


6. A case of application for the Max-Mean problem

6.1. Teams that are more diverse are more efficient for problem solving than those less diverse

In the daily activities of organizations, companies, schools, sports teams, etc., evidence shows that diversity plays an important role in the ability of groups of people to solve problems. Lately, investigations in the literature have shown formally that this empirical phenomenon is true, providing a theoretical justification for it, for example in [10]. A consequence is that, under certain circumstances, groups of people formed in a diverse manner can surpass the productivity of groups formed by the individually most capable problem solvers; that is, in a certain sense diversity trumps ability.

From a practical point of view, this result implies that, for example, a company that wants to form a team should not simply look for the individuals with the highest qualifications; probably the most efficient selection would be to choose a diverse group. Ideally, work groups would be formed by people who are both highly qualified and diverse; however, these two objectives tend to oppose one another, since the diversity of a team formed by the most qualified people tends to be smaller, as shown in [24].

The underlying idea is that we have a population of people capable of carrying out a given task; these people have different levels of ability or productivity for it, and work teams must be selected from this population. Consider two possible groups: in the first, only individuals with high qualifications are chosen, and in the second, "diverse" individuals are chosen in some sense. It turns out that the members of the first group end up arriving at the same solutions, making the work more difficult and confusing for each other, while in the second group the diversity creates more perspectives, and thus more opportunities to avoid getting stuck in the search for a solution, generating in some way the right environment to increase the individual productivity of each member, and therefore of the whole group. From a formal point of view, in the first group, under certain hypotheses, the highly qualified people tend to converge to similar points of view and ways of solving problems, so the set of local optima that the group can reach is reduced, whereas the second group, with diverse members, gives rise to a wider set of local optima and thus has more opportunities to improve.

6.2. Diversity in identity and functional diversity, perspectives and heuristics

"Diversity in identity", or simply "diversity", of a population refers to differences in demographic characteristics, culture, ethnicity, academic formation, and work experience. On the other hand, "functional diversity" refers to differences in how people approach and treat problem solving. An important fact is that these two types of diversity are correlated: a strong correlation between them has been identified experimentally, as shown in [25]. Given this connection, it can be deduced that groups diverse in identity are functionally diverse.

In the literature, the approach a person employs to solve a problem is a representation or encoding of the problem in the person's internal language, and it is known as a "perspective". Formally, a perspective P is a mapping of the set of solutions of a problem into the internal language of the person solving the problem.

On the other hand, the way in which people attempt to solve a problem, that is, how they search for solutions, is known as a "heuristic". Formally, a heuristic is a mapping H from the encoding of the solutions in the internal language of the person solving the problem into the solution set. In this way, given a particular solution, the subset generated by the mapping H is the set of other solutions that the person considers. The problem-solving ability of a person is thus represented by the perspective-heuristic pair (P, H). Two people can differ in one of these components or in both; that is, they can have different perspectives, different heuristics, or both. A solution is a local optimum for a person if and only if, when the person encodes the problem and applies the heuristic, none of the other solutions that the person considers is better; such a person will have only a few local optima, and a group of similar people can get stuck at one of these solutions.

6.3. How to select the most productive work team

From an intuitive point of view, the conclusion that groups diverse in identity can surpass non-diverse (homogeneous) groups, due to their greater functional diversity, rests on the well-received claim that if the agents inside the groups have equal individual problem-solving ability, a functionally diverse group outperforms a homogeneous group. In [24] it was shown that functionally diverse groups tend to outperform the best individual agents when the agents in the group have the same ability. This still leaves open an important question: can a functionally diverse group whose members have less individual ability perform better than the group of individually most able people? In [10] this question is finally answered affirmatively, with a mathematical proof of this fact. Nevertheless, certain questions still arise naturally: how many members should this group have so that the average diversity within the group is maximal, and can one detect which group is the most functionally diverse?

Consider now the actual situation in which an institution wishes to hire people to solve a problem. To make a good selection, the institution usually gives a test to the applicants, around 500, to estimate their individual problem-solving abilities. Suppose that all the applicants are individually capable of solving the problem, having the necessary formation and experience, but with different levels of ability. The question is whether the institution should hire:

  i. The person with the highest score obtained on the test;

  ii. The 10 people with the highest scores;

  iii. 10 people selected randomly from the group of applicants;

  iv. The 10 people most diverse in identity from the group of applicants;

  v. The group of people most diverse on average from the group of applicants.

Ignoring possible communication problems within the groups, the existing literature suggests that (ii) is better than (i) [25], since several people will be searching a wider space and therefore have more opportunities to obtain better solutions, instead of the single best-scored person, who may remain stuck in one of the local optima. Recently, in [10], it has been demonstrated formally that (iii) is better than (ii).

In this manner, the institution errs if it assumes that the group of people with the highest scores, that is, the individually best prepared, will form the best work team and therefore hires (ii), since it has been demonstrated that, under certain hypotheses, (iii) is a better decision, as seen in [10]. The authors established that a team of randomly selected people has more functional diversity and, under certain conditions, surpasses the performance of (ii): under the set of conditions identified by the authors, the functional diversity of the group of individually most capable people necessarily becomes small, so that in the end the advantage of greater individual ability is more than offset by the greater diversity of the randomly selected group.

Figure 8 shows a scheme of the problem of selecting a team, and the options considered.

Figure 8.

Whom should the institution hire?

Notice that in their proof the authors do not even use the team with maximum diversity, but a randomly selected group, and are still able to demonstrate that it is better, thanks to the greater diversity inherent in the random group compared with the group of the individually most able. Here we prove, in the corollary of Theorem 1, that selecting the group with the greatest average diversity, that is, hiring the group formed according to (iv), is more productive than hiring the randomly formed group (iii), and, by transitivity, better than the group formed by the best scores (ii), and in turn better than simply choosing the best-scored person (i).

On the other hand, the literature says little or nothing at all about (v), since classically the diversity problems have considered the number of chosen elements as a given value; yet in practical applications it is not clear how to choose the number of elements to be selected, and the best option is to let the optimization process itself determine this value. Thus, the focus of our analysis is centered on the trade-off between the individual abilities of each person in the group, their functional diversity (captured through the diversity in identity), and the ideal group size.

A conclusion of all this is that diversity in organizations should be encouraged, which implies new policies, organizational forms, and styles of administration. In the context of solving a problem, the value of a person depends on their ability to improve the collective decision, since the contribution of this person depends to a great extent on the perspectives and heuristics of the other people that make up the team. The diversity of a person's approach to solving the problem with respect to the other people is an important predictor of their value and, in the end, can be more relevant than their individual ability to solve the problem on their own. Thus, to estimate the potential contribution of a person to teamwork, it is more important to emphasize measuring how differently this person thinks than to estimate the magnitude of the person's ability through aptitude or intelligence tests.

One must nevertheless be aware of some aspects that have not been considered and that can influence the performance of a team of people. For example, groups with diversity in identity can often have more conflicts, more communication problems, less mutual respect, and less trust among their members than homogeneous groups, which can diminish the performance of diverse groups. In (16) it is mentioned that people with similar perspectives but diverse heuristics can communicate with one another without any problem, while people with diverse perspectives can have trouble understanding the solutions identified by the other members of the group; in this sense, the best option for organizations would be to find people with similar perspectives while guaranteeing a diversity of heuristics. In this manner, organizations can better exploit the benefits of diversity while minimizing the costs of the lack of communication.

6.4. Basic hypotheses and the relationship between ability and diversity

In this section we state Theorem 1, proved in [10], which explains the logic behind the fact that a team of people chosen at random from a pool of applicants capable of solving the problem is better than the team formed by the individually most capable people; from it an immediate result is established: the team of people with maximum diversity surpasses the team formed by the people with the greatest problem-solving abilities.

To establish the theoretical result, consider the population from which the team will be selected, that is, the applicants, represented by Φ, which must satisfy the following assumptions:

  • The applicants are trained to solve the problem: given an initial solution, the applicants can find a better solution, even if it is only slightly better;

  • The problem is difficult: none of the applicants can always find the optimal solution;

  • The applicants are diverse: for any potential solution that is not the optimum, at least one applicant can find a better solution;

  • The best applicant is unique.

If we consider a team of applicants chosen randomly from Φ according to some distribution, the theorem establishes that, with probability 1, there exist sample sizes N1 and N, N1 < N, such that the collective performance of the team of N1 applicants chosen at random surpasses the collective performance of the N1 best applicants.

To formulate Theorem 1 more precisely, let X be the solution set of the problem and V: X → [0, 1] a function that gives the value of each solution, supposing also that V has a unique maximum x* and that V(x*) = 1. Each applicant φ begins from an initial solution x and uses a search rule to try to find the maximum, but does not always find it; generally the applicant gets stuck in a local optimum. Let φ(x) denote the local optimum reached when applicant φ starts the search at x. Thus φ(X) represents the set of local optima of applicant φ.

Each applicant is characterized by the pair (φ, ν), where ν is a probability distribution over the initial solutions, and the applicant's performance is estimated as the expected value of the search when trying to solve the problem, represented by E[V; φ, ν]; that is,

\[ E[V; \varphi, \nu] = \sum_{x \in X} V\!\left( \varphi(x) \right) \nu(x) \]
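As a toy illustration of the pair (φ, ν) and of E[V; φ, ν] (an assumed example in the spirit of [10], not taken from it), the sketch below builds hill-climbing agents on a small circular solution space, where each agent's heuristic is the set of step sizes it tries, and evaluates the expected performance with ν uniform over the starting points.

```python
import itertools
import random

n = 20
random.seed(1)
V = [random.random() for _ in range(n)]           # value of each of the n candidate solutions
best = max(range(n), key=lambda s: V[s])
V = [v / V[best] for v in V]                      # normalize so that V(x*) = 1

def make_agent(steps):
    """An agent's heuristic is the set of step sizes it checks from the current solution."""
    def phi(x):
        current = x
        improved = True
        while improved:
            improved = False
            for step in steps:
                candidate = (current + step) % n
                if V[candidate] > V[current]:
                    current, improved = candidate, True
                    break
        return current                            # the local optimum reached from x
    return phi

def expected_performance(phi):
    """E[V; phi, nu] with nu uniform over the n starting points."""
    return sum(V[phi(x)] for x in range(n)) / n

agents = {s: make_agent(list(s)) for s in itertools.combinations(range(1, 8), 3)}
perf = {s: expected_performance(phi) for s, phi in agents.items()}
print(max(perf.items(), key=lambda kv: kv[1]))    # the individually best agent and its score
```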

The hypotheses that the applicants φ must satisfy, and under which the theorem is proved, are the following:

HYPOTHESIS 1 (Consistency):

  1. ∀x ∈ X: V(φ(x)) ≥ V(x)

  2. ∀x ∈ X: φ(φ(x)) = φ(x)

HYPOTHESIS 2 (Difficulty):

∀φ ∈ Φ, ∃x ∈ X: φ(x) ≠ x*

HYPOTHESIS 3 (Diversity):

∀x ∈ X∖{x*}, ∃φ ∈ Φ: φ(x) ≠ x

HYPOTHESIS 4 (Uniqueness):

argmax { E[V; φ, ν] : φ ∈ Φ } is unique

Hypothesis 1 indicates that, given an initial solution, people always try to find better solutions, never move to a worse one, and stop at a local optimum. Hypothesis 2 implies that no one, individually, can always reach the optimum from any starting point. Hypothesis 3 establishes in a simple manner the essence of diversity: when a person is stuck at a local optimum, there is always someone who can find a better solution thanks to a different approach. Hypothesis 4 establishes that within the set of applicants there is a unique best performer. With these hypotheses, Theorem 1 is proved in [10].

THEOREM 1: Let Φ be a set of people satisfying hypotheses 1-4, and let μ be a probability distribution over Φ. Then, with probability 1, there exist positive integers N and N1, N > N1, such that the performance of the set of N1 people selected at random surpasses the performance of the set of the N1 individually most capable people, both taken from a group of N people independently drawn according to μ.

The theorem shows that a randomly selected group works better than a group formed by the best individuals. An immediate extension of this result is presented in the following corollary, proved here, which establishes more directly the relation between diversity and ability.

COROLLARY: If Φ is a set of people satisfying the hypotheses of Theorem 1, then, with probability 1, there exist positive integers N and N1, N > N1, such that the performance of the group of N1 people that maximizes div(M), M ⊆ Φ, |M| = N1, exceeds the overall performance of the N1 individually most capable people, taken from a group of N people independently chosen according to μ.

Proof:

The proof is immediate, since Theorem 1 relies on the fact that the set of randomly selected people is more diverse than the set of people with the greatest individual abilities. Therefore, selecting the most diverse group of people allows it to surpass the performance of the randomly selected group, due to the greater diversity of the former, and by Theorem 1 the latter group surpasses the performance of the group formed by the individually most able people. The result stated in the corollary then follows by transitivity.

6.5. Resolution of a case study

Finally, we apply the method to a real instance. In particular, we apply it to obtain a diverse assembly of professors from a set of n = 586 at the ESPOL University in Guayaquil (Ecuador). For each professor we record 7 attributes (tenure position, gender, academic degree, research level, background, salary level, and department), and the similarity measure between each pair of them is computed with the modified difference measure described in equation (3). The solution obtained with our GRASP+PR method in 127.1 seconds has 90 professors and a similarity value of 1.11. Table 4 shows the detailed results for each one of the 10 trials.

TRIAL    GRASP+PR                           GRASP
         CPU time    Value      m           CPU time    Value      m
1        127.094     1.11542    101         116.631     1.09397    90
2        125.081     1.11393    100         113.7384    1.09487    96
3        120.028     1.10484    96          111.7161    1.07699    86
4        115.622     1.10311    92          113.4575    1.09058    88
5        114.616     1.10251    94          119.4957    1.09844    101
6        139.309     1.10811    96          126.6417    1.08316    95
7        123.162     1.12293    100         128.5092    1.03239    86
8        134.082     1.12600    100         119.378     1.05797    92
9        125.688     1.12033    101         109.6741    1.07982    97
10       134.566     1.11090    97          101.3805    1.05701    107
MEAN     125.9248    1.11281    97.7        116.06222   1.07652    93.8

Table 4.

Results of the 10 successive runs


7. Conclusions

The main result of this chapter provides conditions under which a diverse group of people will outperform a group of the best. Our result provides insight into the trade-off between diversity and ability. An ideal work team would contain high-ability problem solvers who are diverse.

According to our approach, the problem of designing the most efficient work team is equivalent to the maximum diversity problem, which is computationally difficult. In particular, we study the solution of the Max-Mean model, which arises in the context of equitable dispersion problems. It has served us well as a test case for a few new search strategies that we propose. In particular, we tested a GRASP constructive algorithm based on a non-standard combination of greediness and randomization, a local search strategy based on the variable neighborhood descent methodology, which includes three different neighborhoods, and a path relinking post-processing.

We performed extensive computational experiments to first study the effect of changes in critical search elements and then to compare the efficiency of our proposal with previous solution procedures.

The principles of the proposed equity measure can be applied to solve the problem of selecting efficient work teams. Therefore, more research is necessary in this area, especially on the subproblem of measuring diversity. The results of a comparative study carried out against other algorithms favor the procedure we proposed, which is also able to solve large instances. The focus of our future research will be on the development of multi-objective optimization that attempts to balance efficiency, or ability, and diversity, namely a study of the selection of the best and most diverse, which gives decision makers a flexible and interactive way to make the trade-off between ability and diversity.

References

  1. E. Erkut and S. Neuman, "Analytic models for locating undesirable facilities," European Journal of Operational Research, vol. 40, pp. 275-291, 1989.
  2. F. Glover, C. Kuo, and K. S. Dhir, "A discrete optimization model for preserving biological diversity," Appl. Math. Modelling, vol. 19, pp. 696-701, 1995.
  3. F. Glover, C. C. Kuo, and K. Dhir, "Heuristic algorithms for the maximum diversity problem," Journal of Information and Optimization Sciences, vol. 19, no. 1, pp. 109-132, 1998.
  4. K. Katayama and H. Narihisa, "An evolutionary approach for the maximum diversity problem," Studies in Fuzziness and Soft Computing, vol. 166, pp. 31-47, 2005.
  5. E. Erkut, "The discrete p-dispersion problem," European Journal of Operational Research, vol. 46, pp. 48-60, 1990.
  6. R. Martí and F. Sandoya, "GRASP and path relinking for the equitable dispersion problem," Computers and Operations Research, http://dx.doi.org/10.1016/j.cor.2012.04.005, 2012.
  7. M. Lozano, D. Molina, and C. García-Martínez, "Iterated greedy for the maximum diversity problem," European Journal of Operational Research, 2011.
  8. T. Meinl, Maximum-Score Diversity Selection, 1st ed.: Südwestdeutscher Verlag, 2010.
  9. M. Kuo, F. Glover, and K. Dhir, "Analyzing and modeling the maximum diversity problem by zero-one programming," Decision Sciences, no. 24, pp. 1171-1185, 1993.
  10. S. Page and L. Hong, "Groups of diverse problem solvers can outperform groups of high-ability problem solvers," PNAS, vol. 101, no. 46, pp. 16385-16389, 2004.
  11. T. Q. Lee and Y. T. Park, "A similarity measure for collaborative filtering with implicit feedback," ICIC 2007, pp. 385-397, 2007.
  12. E. French, "The importance of strategic change in achieving equity in diversity," Strategic Change, vol. 14, pp. 35-44, 2005.
  13. B. Smyth and P. McClave, "Similarity vs. diversity," Lecture Notes in Computer Science, vol. 2080, pp. 347-361, 2001.
  14. J. Ghosh, "Computational aspects of the maximum diversity problem," Operations Research Letters, no. 19, pp. 175-181, 1996.
  15. J. Wang, Y. Zhou, J. Yin, and Y. Zhang, "Competitive Hopfield network combined with estimation of distribution for maximum diversity problems," IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 4, pp. 1048-1065, 2009.
  16. B. Alidaee, F. Glover, G. Kochenberger, and H. Wang, "Solving the maximum edge weight clique problem via unconstrained quadratic programming," European Journal of Operational Research, no. 181, pp. 592-597, 2007.
  17. E. Macambira, "An application of tabu search heuristic for the maximum edge-weighted subgraph problem," Annals of Operations Research, vol. 117, pp. 175-190, 2002.
  18. U. Feige, G. Kortsarz, and D. Peleg, "The dense k-subgraph problem," Algorithmica, vol. 29, no. 3, pp. 410-421, 2001.
  19. O. Prokopyev, N. Kong, and D. Martínez-Torres, "The equitable dispersion problem," European Journal of Operational Research, no. 197, pp. 59-67, 2009.
  20. OPTSICOM Project. (2012, May). [Online]. http://www.optsicom.es/
  21. M. Resende and R. Werneck, "A hybrid heuristic for the p-median problem," Journal of Heuristics, vol. 10, no. 1, pp. 59-88, 2004.
  22. P. Hansen and N. Mladenovic, "Variable neighborhood search," in Handbook of Metaheuristics, F. Glover and G. Kochenberger, Eds., 2003, pp. 145-184.
  23. S. Page, The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. New Jersey: Princeton University Press, 2007.
  24. J. Polzer, L. Milton, and W. Swann, "Capitalizing on diversity: Interpersonal congruence in small work groups," Administrative Science Quarterly, vol. 47, no. 2, pp. 296-324, 2002.
  25. A. Duarte and R. Martí, "Tabu search and GRASP for the maximum diversity problem," European Journal of Operational Research, vol. 178, pp. 71-84, 2007.

