Open access peer-reviewed chapter

Multi-Strategy MAX-MIN Ant System for Solving Quota Traveling Salesman Problem with Passengers, Incomplete Ride and Collection Time

By Bruno C.H. Silva, Islame F.C. Fernandes, Marco C. Goldbarg and Elizabeth F.G. Goldbarg

Submitted: June 1st 2020Reviewed: September 2nd 2020Published: October 5th 2020

DOI: 10.5772/intechopen.93860



This study proposes a novel adaptation of the MAX-MIN Ant System algorithm for the Quota Traveling Salesman Problem with Passengers, Incomplete Ride, and Collection Time. Solving this problem requires different types of decisions: satisfaction of the minimum quota, acceptance of ride requests, and minimization of travel costs from the salesman's viewpoint. The proposed algorithmic components regard vehicle capacity, travel time, passenger limitations, and a penalty for delivering a passenger outside the required destination. The ant-based algorithm incorporates different sources of heuristic information for the ants and memory-based principles. Computational results are reported, showing the effectiveness of this ant-based algorithm.


  • Traveling Salesman
  • integer programming
  • transportation
  • shared mobility
  • Ant Colony Optimization

1. Introduction

The lives of ordinary consumers have changed almost beyond recognition in the past 20 years. First, with the introduction of high-speed internet access; but, more recently, with the arrival of mobile computing devices such as smartphones and tablets. According to data from the 2017 Gallup World Survey [1], 93% of adults in high-income economies have their own cell phones, compared with 79% in developing economies. In India, 69% of adults have a cell phone, as do 85% in Brazil and 93% in China [1]. Smartphones and the internet have created a novel digital ecosystem where the adoption of new paradigms is increasingly fast, and each innovation that appears and presents itself to the market can disrupt an entire segment.

In the transportation segment, a central theme is how the digital revolution has created opportunities to consider new models of delivering services under the paradigm of Mobility as a Service (MaaS) [2]. There is growing interest in MaaS due to the notion of a sharing economy. Millennials own fewer vehicles than previous generations [3]. As evidenced by the ascension of on-demand mobility platforms, they are quickly adopting car sharing as a mainstream transportation solution. Investments in new travel patterns have become a priority to enable the transformation of opportunities in the urban mobility segment into new revenue streams.

This study deals with a novel optimization model that can improve the services provided by on-demand mobility platforms, called the Quota Traveling Salesman Problem with Passengers, Incomplete Ride, and Collection Time (QTSP-PIC). In this problem, the salesman is the vehicle driver and can reduce travel costs by sharing expenses with passengers. He must respect the budget limitations and the maximum travel time of every passenger. Each passenger can be transported directly to the desired destination or to an alternate destination. Lira et al. [4] suggest that pro-environmental or money-saving concerns can induce users of a ride-sharing service to agree to fulfill their needs at an alternate destination.

The QTSP-PIC can model a wide variety of real-world applications. Cases related to sales and tourism are the most pertinent ones. The salesman must choose which cities to visit to reach a minimum sales quota, and in which order to visit them to fulfill travel requests. In the tourism case, the salesman is a tourist who chooses the best tourist attractions to visit during a vacation trip and can use a ride-sharing system to reduce travel expenses. In both cases, the driver negotiates discounts with passengers transported to a destination similar to the desired one.

The QTSP-PIC was introduced by Silva et al. [5]. They presented a mathematical formulation and heuristics based on Ant Colony Optimization (ACO) [6]. To support the ant algorithms, they proposed a Ride-Matching Heuristic (RMH) and a local search with multiple neighborhood operators, called Multi-neighborhood Local Search (MnLS). They tested the performance of the ant algorithms on 144 instances with up to 500 vertices. One of these algorithms, the Multi-Strategy Ant Colony System (MS-ACS), provided the best results. They concluded that their most promising algorithm could be improved with learning techniques to choose the source of information regarding the instance type and the search space.

In this study, a MAX-MIN Ant System (MMAS) adaptation to the QTSP-PIC, called Multi-Strategy MAX-MIN Ant System (MS-MMAS), is discussed. MMAS improves the design of Ant System [6], the first ACO algorithm, in three important aspects: only the best ants are allowed to add pheromone during the pheromone trail update; a mechanism limits the strength of the pheromone trails; and local search algorithms are incorporated to improve the best solutions. Several recent studies demonstrated the effectiveness of MMAS on problems correlated to the QTSP-PIC [7, 8, 9, 10]. However, none of them explored the Multi-Strategy (MS) concept.

In traditional ant algorithms applied to the Traveling Salesman Problem (TSP), ants use the arcs' costs as heuristic information [6]. The heuristic information adopted is called visibility. When solving the QTSP-PIC, different types of decisions must be considered: the accomplishment of the minimum quota, management of the ride requests, and minimization of travel costs. The MS idea is to use different visibility mechanisms for the ants to improve diversification. Every ant decides which strategy to use at random with uniform distribution. The MS proposed in this study extends the original implementation proposed in [5]. MS-MMAS also incorporates RMH and MnLS and uses a memory-based technique proposed in [11] to avoid redundant work. In MS-MMAS, a hash table stores every solution constructed and used as an initial solution to the local search algorithm. When the algorithm constructs a new solution, it starts the local search phase only if the new solution is not in the hash table.
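The hash-table mechanism described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation; the solution encoding (a list of vertices) and the toy local search are assumptions for demonstration only.

```python
def search_with_memory(solutions, local_search):
    """Apply local_search only to solutions not seen before.

    `seen` plays the role of the hash table: each constructed solution
    is hashed, and the (expensive) local search runs only on its first
    occurrence, avoiding redundant work.
    """
    seen = set()
    improved = []
    for sol in solutions:
        key = hash(tuple(sol))  # hashable encoding of the solution
        if key not in seen:
            seen.add(key)
            sol = local_search(sol)
        improved.append(sol)
    return improved

# Toy usage: the "local search" here simply sorts the route fragment,
# and `calls` records which solutions actually triggered it.
calls = []
def toy_local_search(sol):
    calls.append(sol)
    return sorted(sol)

result = search_with_memory([[3, 1, 2], [3, 1, 2], [2, 3, 1]], toy_local_search)
```

The repeated solution `[3, 1, 2]` is skipped on its second occurrence, so the local search runs only twice.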

The benchmark for the tests consisted of 144 QTSP-PIC instances. It was proposed by Silva et al. [5]. Numerical results confirmed the effectiveness of the MS-MMAS by comparing it to other ACO variants proposed in [5].

The main contributions of this chapter are summarized in the following.

  • The extension of the MS concept proposed in [5] with a roulette mechanism that orients the ants to choose their heuristic information based on the best quality solutions achieved;

  • Improvement of the MMAS design with a memory-based technique proposed in [11];

  • Presentation of a novel MMAS variant that combines the improved MS concept and memory-based principles and assessment of its performance;

  • Experiments on a set of QTSP-PIC instances ranging from 10 to 500 cities and from 30 to 10,000 travel requests. The results showed that the proposed MMAS variant is competitive with the other three ACO variants presented in [5] for the QTSP-PIC.

The remainder of this chapter is organized as follows. Section 2 presents the QTSP-PIC and its formulation. Section 3 presents the Ant Colony Optimization metaheuristic and the implementation design of the MS-MMAS. Section 4 presents experimental results. The performance of the proposed ant-based algorithm is discussed in Section 5. Conclusions and future research directions are outlined in Section 6.


2. Problem definition

The TSP can be formulated on a complete weighted directed graph G = (N, A), where N is the set of vertices and A = {(i, j) : i, j ∈ N} is the set of arcs. C = [c_ij] is the arc-weight matrix such that c_ij is the cost of arc (i, j). The objective is to determine the shortest Hamiltonian cycle in G. Due to its applicability, many TSP variants deal with specific constraints [12]. Awerbuch et al. [13] presented several quota-driven variants. One of them, called the Quota Traveling Salesman Problem (QTSP), is the basis for the problem investigated in this study. In the QTSP, there is a bonus associated with each vertex of G. The salesman has to collect a minimum quota of bonuses in the visited vertices. Thus the salesman needs to figure out which cities to visit to achieve the minimum quota. The goal is to find a minimum-cost tour such that the sum of the bonuses collected in the visited vertices is at least the minimum quota.

The QTSP-PIC is a QTSP variant in which the salesman is the driver of a vehicle and can reduce travel costs by sharing expenses with passengers. There is a travel request associated with each person demanding a ride, consisting of a pickup and a drop-off point, a budget limit, a limit on the travel duration, and penalties associated with alternative drop-off points. There is a penalty associated with each point different from the destination demanded by each person. The salesman can accept or decline travel requests. This model combines elements of ride-sharing systems [14] with alternative destinations [4], and the selective pickup and delivery problem [15].

Let G = (N, A) be a connected graph, where N is the set of vertices and A = {(i, j) : i, j ∈ N} is the set of arcs. Parameter q_i denotes the quota associated with vertex i ∈ N, and g_i the time required to collect the quota. c_ij and t_ij denote, respectively, the cost and time required to traverse arc (i, j) ∈ A. Let L be the set of passengers. List l_i ⊆ L denotes the subset of passengers who depart from i ∈ N. Let org_l and dst_l ∈ N be the pickup and drop-off points requested by passenger l. The salesman departs from city s = 1, visits exactly once each city of a subset N′ ⊆ N, and returns to s. The quota collected by the salesman must be at least K units. Along the trip, the salesman may choose which travel requests to fulfill. The travel costs are shared with vehicle occupants. The number of vehicle occupants, or passengers, cannot exceed R. Each passenger l ∈ L imposes a budget limit w_l and a maximum trip duration b_l. Let h_lj be the penalty for delivering passenger l ∈ L at city j ∈ N, j ≠ dst_l. The value of h_lj is added to the final cost of the tour if passenger l is delivered at city j. If j = dst_l, then h_lj = 0. The objective of the QTSP-PIC is to find a Hamiltonian cycle Ψ = (N′, A′), A′ ⊆ A, such that the ride-sharing cost and eventual penalties are minimized, and the quota constraint is satisfied.

The QTSP is NP-hard [13]. It is a particular case of the QTSP-PIC, in which the list of persons demanding a ride is empty and the time spent to collect the bonus in each vertex is zero. Thus, QTSP-PIC also belongs to the NP-hard class.

Silva et al. [5] presented an integer non-linear programming model for the QTSP-PIC. They defined a solution as S = (N′, Q, L′, H), where N′ is a list of vertices that represents a cycle, Ψ, such that the minimum quota restriction, K, is satisfied; Q is a binary list in which the i-th element is 1 if the salesman collects the bonus of city i, i ∈ N′; L′ is a binary list in which the l-th element is 1 if the salesman accepts the l-th travel request; and H is a list of integers in which the l-th element is the index of the city where the l-th passenger leaves the car. If L′_l = 0, then H_l = 0. The cost of solution S, denoted by S.cost, is calculated by Eq. (1).
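Eq. (1) does not survive in this reproduction. As an illustration only, a plausible reading of the cost structure described above, with each arc's cost split equally among the vehicle occupants plus the driver, and delivery penalties added, can be sketched in Python. The encoding and the equal-split rule are assumptions, not the authors' exact formula.

```python
def solution_cost(route, arc_cost, occupants, penalties):
    """Sketch of a ride-sharing tour cost (assumed structure).

    route     : list of vertices with route[0] == route[-1] == s (a cycle)
    arc_cost  : dict mapping (i, j) -> c_ij
    occupants : list, occupants[k] = passengers aboard on the k-th arc;
                each arc cost is split among 1 + occupants[k] people
                (the driver always pays a share)
    penalties : list of h_lj values for passengers dropped off away
                from their requested destination
    """
    cost = 0.0
    for k in range(len(route) - 1):
        c = arc_cost[(route[k], route[k + 1])]
        cost += c / (1 + occupants[k])  # driver's share of arc k
    return cost + sum(penalties)

# Toy instance: cycle 1 -> 2 -> 3 -> 1, one passenger on the first arc,
# one out-of-destination penalty of 2.0.
cost = solution_cost(
    route=[1, 2, 3, 1],
    arc_cost={(1, 2): 10.0, (2, 3): 6.0, (3, 1): 4.0},
    occupants=[1, 0, 0],
    penalties=[2.0],
)
```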


3. Ant Colony Optimization

In Ant Colony Optimization, artificial ants build and share information about the quality of the solutions achieved with a communication scheme similar to what occurs with some real ant species. Deneubourg et al. [16] investigated the behavior of Argentine ants in experiments where there were two bridges between the nest and a food source. They observed that the ants initially walked on the two bridges at random, depositing pheromone on the paths. Over time, due to random fluctuations, the pheromone concentration on one bridge became higher than on the other. Thus, more ants were attracted to that route. Finally, the whole colony ended up converging to the same route. The behavior of artificial ants preserves four notions of the natural behavior of ants:

  • Pheromone deposit on the traveled trail;

  • Predilection for trails with higher pheromone concentration;

  • Higher concentration of pheromone on shorter trails;

  • Communication between ants through the pheromone deposit.

Pheromone is a chemical structure of communication [17]. According to Dorigo et al. [18], pheromone enables the process of stigmergy and self-organization in which simple agents perform complex and objective-oriented behaviors. Stigmergy is a particular form of indirect communication used by social insects to coordinate their activities [18].

Considering the context of ant algorithms applied to the TSP, when moving through the graph G, artificial ants tend to follow paths with higher pheromone deposit rates. Since ants deposit pheromone along the paths they follow, the pheromone rate on a path increases as more ants choose it. This cooperation mechanism induces artificial ants to find good solutions, as it works as a shared memory that is continuously updated and can be consulted by every ant in the colony [19].

The baseline of Ant Colony Optimization is the Ant System (AS) algorithm [6]. In the TSP application, N is the set of vertices to visit. Ants construct solutions iteratively. At every iteration, an ant chooses the next vertex based on heuristic information, η, and pheromone trails, τ. Initially, all pheromone trails have the same amount of pheromone, τ0, computed by Eq. (2), where n is the number of vertices in N and Cost(D) is the value of the TSP tour built by a greedy heuristic.
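Eq. (2) did not survive in this reproduction. A common initialization consistent with the variables named above, and with standard ACO practice, would be the following; note that this is a reconstruction, not necessarily the authors' exact formula:

```latex
\tau_0 = \frac{1}{n \cdot Cost(D)}
```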


The k-th ant iteratively adds new vertices to the solution. The algorithm uses Eq. (3) to compute the probability of the k-th ant moving from vertex i to j at the t-th iteration, where η_ij = 1/c_ij is the heuristic factor, τ_ij(t) is the pheromone on arc (i, j) at the t-th iteration, and Λ_k is the list of vertices not yet visited by the k-th ant. Coefficients α and β weight the influence of the pheromone and heuristic information, respectively. They are user-defined parameters. If α = 0, the probability computed by Eq. (3) depends only on the heuristic information, so the ant algorithm behaves like a greedy method. If β = 0, ants tend to select paths with higher pheromone levels, which may lead the algorithm to early stagnation. Balancing the values of α and β is therefore critical to guarantee a suitable search strategy [5].
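Eq. (3) is likewise missing here; the standard Ant System transition rule, which matches every term defined in this paragraph, reads:

```latex
p_{ij}^{k}(t) =
\begin{cases}
\dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}]^{\beta}}
      {\sum_{l \in \Lambda_k} [\tau_{il}(t)]^{\alpha}\,[\eta_{il}]^{\beta}},
  & \text{if } j \in \Lambda_k,\\[2ex]
0, & \text{otherwise.}
\end{cases}
```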


Eqs. (4) and (5) show the formulas used to update pheromone trails, where ρ is the evaporation coefficient, Cost(W_k) is the cost of route W_k built by the k-th ant, and Δτ_ij^k is the pheromone deposited on arc (i, j) by the k-th ant, computed by expression (6).
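Eqs. (4)-(6) are also absent from this copy. In the classical Ant System, evaporation and deposit combine as follows (written here as a single update for brevity; this is the textbook form rather than a verbatim reproduction):

```latex
\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \sum_{k=1}^{m} \Delta\tau_{ij}^{k}(t),
\qquad
\Delta\tau_{ij}^{k}(t) =
\begin{cases}
\dfrac{1}{Cost(W_k)}, & \text{if arc } (i,j) \in W_k,\\[1ex]
0, & \text{otherwise.}
\end{cases}
```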


The ant algorithms proposed after AS improved its implementation design with elitist pheromone update strategies and local search algorithms to improve solutions [6]. Two well-known variants of AS are the Ant Colony System (ACS) [20] and the MAX-MIN Ant System [21]. Silva et al. [5] presented AS and ACS adaptations for the QTSP-PIC. Section 3.1 presents the MAX-MIN Ant System algorithm.

3.1 Multi-Strategy MAX-MIN Ant System

MMAS uses Eq. (3) to compute the probability of an ant moving from vertex i to j. Besides, it incorporates improvements to avoid search stagnation and a pheromone update rule that limits pheromone concentration rates. Eq. (7) presents the pheromone update rule. Limits τmax and τmin prevent stagnation of pheromone values.
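Eq. (7) is missing from this reproduction; the standard MMAS trail-limited update, consistent with the surrounding description, is:

```latex
\tau_{ij}(t+1) =
\Big[ (1-\rho)\,\tau_{ij}(t) + \Delta\tau_{ij}^{best} \Big]_{\tau_{min}}^{\tau_{max}},
\qquad
\Delta\tau_{ij}^{best} = \frac{1}{Cost(W_{best})},
```

where the bracket operator clamps the result to the interval [τmin, τmax].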


There are three possibilities for the best route (W_best) considered in the algorithm: the best route of the current iteration, the best route found so far, and the best route since the last time the pheromone trails were reinitialized. In the original MMAS design [21], these routes were chosen alternately. The initial value of the pheromone trails was τmax. If the algorithm reached stagnation, i.e., the best current route remained the same for several iterations, the pheromone values were reinitialized to τmax. Assigning τmax to pheromone trails produces a small variability among pheromone levels at the start of the search [21].

The implementation of the MMAS for the QTSP-PIC extends the original proposal [21] with the following adaptations:

  • Ants start at vertex s;

  • Ants include vertices in the route until the minimum quota is reached;

  • Solution S_k, built by the k-th ant, is computed by assigning passengers to route W_k with the RMH algorithm [5];

  • Use of the MS concept.

In the MMAS, ants use arc costs to compute the heuristic information. In the MS-MMAS, ants use four sources for this task, listed in the following.

  • Cost oriented: uses c_ij as heuristic information, such that η_ij = 1/c_ij;

  • Time oriented: uses t_ij as heuristic information, such that η_ij = 1/t_ij. This heuristic information guides ants to vertices that lead to travel time savings;

  • Quota oriented: q_j is used as heuristic information, such that η_ij = q_j/c_ij. This heuristic information guides ants to vertices that lead to the maximization of the quota collected;

  • Passenger oriented: the heuristic information is η_ij = |l_j|/c_ij. This strategy orients ants to maximize the number of travel requests fulfilled.
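The four visibility strategies can be written down directly. The dictionaries below are illustrative stand-ins for the instance data (c_ij, t_ij, q_j, and the number of ride requests departing from j); the function itself is a sketch, not the authors' code.

```python
def visibility(strategy, i, j, c, t, q, reqs):
    """Heuristic information eta_ij for each MS strategy.

    c, t : dicts mapping (i, j) -> arc cost / arc travel time
    q    : dict mapping j -> quota (bonus) at vertex j
    reqs : dict mapping j -> number of travel requests departing from j
    """
    if strategy == "cost":
        return 1.0 / c[(i, j)]
    if strategy == "time":
        return 1.0 / t[(i, j)]
    if strategy == "quota":
        return q[j] / c[(i, j)]
    if strategy == "passenger":
        return reqs[j] / c[(i, j)]
    raise ValueError(strategy)

# Toy data for a single arc (1, 2).
c = {(1, 2): 4.0}; t = {(1, 2): 2.0}; q = {2: 8.0}; reqs = {2: 3}
```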

In the MS concept proposed in [5], every ant decides which strategy to use at random with uniform distribution. A roulette wheel selection improves this concept: the proportion of the wheel assigned to each source of heuristic information is directly related to the quality of the solutions achieved with it. Thus, ants learn, at each iteration, which heuristic information performs best. In the final iterations, ants tend to use the heuristic information that proved to be most promising.
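A minimal roulette wheel over the four strategies might look like the sketch below. The way quality scores are accumulated is an assumption, since the chapter does not give the exact weighting formula; the scores shown are hypothetical.

```python
import random

def roulette_choice(scores, rng):
    """Pick a heuristic-information source with probability
    proportional to its accumulated quality score."""
    total = sum(scores.values())
    r = rng.uniform(0.0, total)
    acc = 0.0
    for strategy, score in scores.items():
        acc += score
        if r <= acc:
            return strategy
    return strategy  # guard against floating-point edge cases

# Hypothetical scores: "cost" has produced the best solutions so far,
# so it occupies the largest slice of the wheel.
scores = {"cost": 5.0, "time": 1.0, "quota": 3.0, "passenger": 1.0}
rng = random.Random(42)
counts = {s: 0 for s in scores}
for _ in range(10_000):
    counts[roulette_choice(scores, rng)] += 1
```

Over many draws, the strategy with the highest score is chosen most often, while every strategy keeps a nonzero chance, preserving diversification.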

Algorithm 1 presents the pseudo-code of the MS-MMAS. It has the following parameters: maximum number of iterations (maxIter), number of ants (m ∈ Z>0), pheromone coefficient (α ∈ R>0), heuristic coefficient (β ∈ R>0), evaporation factor (ρ ∈ (0, 1)), and pheromone limits (τmax, τmin ∈ R>0). It also uses the following variables:

  • N: set of vertices;

  • ξ: index of the heuristic information source;

  • W_k: route built by the k-th ant;

  • S_k: solution produced after applying the RMH heuristic [5] to route W_k;

  • W_i: the best route built in the i-th iteration;

  • S_i: the best solution produced in the i-th iteration;

  • W: the best route found so far;

  • S: the best solution found so far;

  • W_best: route used as input to the pheromone updating procedure;

  • Π: hash table that stores every solution S_i used as an initial solution to the local search algorithm.

Algorithm 1: MS-MMAS(maxIter, m, α, β, ρ, τmax, τmin)

1. Π ← ∅

2. Initialize pheromone trails

3. For k = 1 to m

4.   W_k[2] ← random_city(N \ {s})

5. For i = 1 to maxIter

6.   For k = 1 to m

7.     ξ ← choose_heuristic_information()

8.     W_k ← build_route(α, β, ξ)

9.     S_k ← assign_passengers(W_k)

10.    Update(W_i, S_i)

11.  If S_i ∉ Π

12.    S_i ← MnLS(S_i)

13.    Store(Π, S_i)

14.  Update(W, S)

15.  W_best ← alternate(maxIter, i, W_i, W)

16.  Pheromone_update(W_best, ρ, τmax, τmin)

17. Return S

The algorithm sets τmax as the initial value of the pheromone trails (step 2). Since ants begin at vertex s, the second vertex is selected randomly with uniform distribution (steps 3 and 4). The k-th ant decides which heuristic information, ξ, to use (step 7) and builds a route (step 8). The algorithm uses the RMH heuristic to assign passengers to W_k, completing a solution (step 9). The algorithm updates W_i and S_i (step 10). The MnLS algorithm is applied to S_i (step 12) if S_i ∉ Π. After the local search, the algorithm stores S_i in the hash table Π. At the next iteration, the current S_i is the starting solution of the local search only if it is not in Π. This procedure prevents redundant work. The algorithm updates the best route and the best solution found so far, W and S (step 14). Similar to the original design of MMAS, W_i is assigned to W_best during the first 25% of the iterations or if i ranges in [50%, 75%] of maxIter. W is assigned to W_best if i ranges in [25%, 50%] of maxIter or if it is greater than or equal to 75% of maxIter (step 15). This procedure improves diversification by shifting the emphasis over the search space. W_best is used to update the pheromones (step 16). Finally, the algorithm returns S.
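The alternation rule of step 15 can be made concrete as follows. The handling of the exact boundary points at 25%, 50%, and 75% is an assumption, since the text only gives the ranges.

```python
def alternate(max_iter, i, w_iter, w_global):
    """Choose W_best per the schedule in step 15: the iteration-best
    route during the first 25% and the [50%, 75%) window of the run,
    and the global-best route otherwise."""
    frac = i / max_iter
    if frac < 0.25 or 0.50 <= frac < 0.75:
        return w_iter
    return w_global
```

The two global-best windows pull the search toward the best solution found so far, while the two iteration-best windows keep exploration alive.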


4. Experiments and results

This section presents the methodology for the experiments and their results. Section 4.1 presents the methodology. Section 4.2 presents the parameters used in the MS-MMAS algorithm. Section 4.3 presents the results.

4.1 Methodology

The experiments were executed on a computer with an Intel Core i5 2.6 GHz processor, Windows 10 64-bit, and 6 GB of RAM. The algorithms were implemented in C++ and compiled with GNU g++ version 4.8.4. The benchmark set proposed in [5] was used to test the effectiveness of the MS-MMAS. The sizes of those instances range from 10 to 500 vertices. Small instances have up to 40 vertices, medium up to 100, and large more than 100 vertices. The instances are available for download at

The best results, average results, and average processing times (in seconds) over 20 independent executions of the MS-MMAS are reported. Experiments are conducted to report the distance between the best-known solutions and the best results provided by the MS-MMAS. The variability with which the MS-MMAS achieved the best-known solutions stated in the benchmark set is also calculated. With these experiments, it is possible to conclude whether the MS-MMAS algorithm was able to find the best-known solution of each instance and with what variability this happens.

The Friedman test [23] with the Nemenyi post-hoc procedure [24] is applied, at a 0.05 significance level, to conclude about significant differences among the results of the MS-MMAS and the other three ACO variants proposed in [5]. The instances were grouped according to their sizes (number of vertices) for the Friedman test. There are eight groups of symmetric (asymmetric) instances, each of them containing nine instances, called g<n>, where <n> stands for the size.
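For reference, the Friedman statistic over k algorithms and n instance groups follows a standard textbook formula; the plain-Python sketch below illustrates it (ties are broken arbitrarily here) and is not the authors' exact testing pipeline, which also includes the Nemenyi post-hoc procedure.

```python
def friedman_statistic(results):
    """Friedman statistic for k algorithms over n instance groups.

    results[i][j] = score of algorithm j on instance group i
    (lower is better).  Implements
        chi2_F = 12n / (k(k+1)) * sum_j(Rbar_j^2) - 3n(k+1),
    where Rbar_j is the mean rank of algorithm j.
    """
    n, k = len(results), len(results[0])
    mean_ranks = [0.0] * k
    for row in results:
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            mean_ranks[j] += rank / n
    return (12.0 * n / (k * (k + 1))) * sum(r * r for r in mean_ranks) \
        - 3.0 * n * (k + 1)

# Three algorithms on four instance groups; the first always ranks best.
stat = friedman_statistic([[1, 2, 3], [1, 3, 2], [1, 2, 3], [1, 3, 2]])
```

A large statistic (compared against the chi-square distribution with k-1 degrees of freedom) leads to rejecting the null hypothesis that all algorithms perform alike.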

4.2 Parameter tuning

The IRACE software [22] was used to tune the parameters of the MS-MMAS algorithm. 20 symmetric and 20 asymmetric instances, selected at random, were submitted to adjust the parameters. IRACE uses the maxExperiments and maxTime parameters as stopping criteria. These parameters were set as follows: maxExperiments = 10³; and maxTime = .

For the asymmetric instance set, the parameters were defined as follows: maxIter=31; m=51; α=3.08; β=10.31; ρ=0.52; τmax=0.8; and τmin=0.2. For the symmetric instance set, the parameters were: maxIter=29; m=57; α=2.92; β=9.53; ρ=0.67; τmax=0.7; and τmin=0.2.

4.3 Results

In this section, the results of the MS-MMAS are compared to those produced by the other three ACO variants proposed in [5]: AS, ACS, and MS-ACS.

Table 1 presents the comparison between the ant algorithms. The best results obtained by MS-MMAS were compared with those achieved by each ant algorithm proposed in [5]. The results are in the X × Y format, where X and Y stand for the number of instances in which ant algorithm X found the best solution and the number of instances in which ant algorithm Y found the best solution, respectively.

MS-MMAS vs:      AS        ACS       MS-ACS
Asymmetric    68 × 0    68 × 1    45 × 17
Symmetric     66 × 1    68 × 2    48 × 14

Table 1.

Comparison between the ant algorithms.

Table 1 shows that the MS-MMAS was the algorithm that reported the best solution for most instances. This algorithm performed better than the other ACO variants due to its enhanced pheromone update procedures. The MS implementation with roulette wheel selection proved to be effective at finding the best heuristic information used by the ants during the run. Table 1 also shows that the MS-MMAS provides results with better quality than the MS-ACS in most symmetric cases. The MS-ACS was superior to the MS-MMAS in seventeen asymmetric and fourteen symmetric instances. It was observed that the pseudo-random action choice rule of MS-ACS [20], which allows for a greedier solution construction, proved to be a good algorithmic strategy for solving large instances.

Tables 2 and 3 show the ranks of the ant algorithms based on the Friedman test [23] with the Nemenyi [24] post-hoc test. The first column of these tables presents the subsets of instances grouped according to their sizes. The other columns present the p-values of the Friedman test and the ranks from the Nemenyi post-hoc test. In the post-hoc test, the ranks range from a to c. Rank c indicates that the algorithm achieved the worst performance in comparison to the others; rank a indicates the opposite. If the performances of two or more algorithms are similar, the test assigns them the same rank. In this experiment, the significance level was set to 0.05.


Table 2.

Results of Friedman’s test and Nemenyi post-hoc test over asymmetric instances set.


Table 3.

Results of Friedman’s test and Nemenyi post-hoc test over symmetric instances set.

The p-values presented in Tables 2 and 3 show that the performance of the ant algorithms was not similar, i.e., the null hypothesis [24] is rejected in all cases. These tables also show that MS-MMAS ranks higher than AS and ACS for all subsets. The ranks of MS-ACS and MS-MMAS were the same in most cases. This implies that only the performances of these two algorithms were similar, i.e., the relative distance between the results achieved by them is small.

To analyze the variability of the results provided by each ant algorithm compared to the best results so far for the benchmark set, three metrics were adopted. The first metric, ν, is the percentage of times an ant algorithm found the best-known solution over 20 independent executions. The second metric, Φ, is the relative distance between the cost of the best-known solution, χ, and the best solution, χmin, of each ant algorithm. The third metric, Ω, is the relative distance between χ and the average solution, χa, of each ant algorithm. Φ is calculated using Eq. (9), and Ω using Eq. (10). The average values of ν, Φ, and Ω are reported in Tables 4 and 5.
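Eqs. (9) and (10) are not reproduced above. From the verbal definitions, the natural relative-distance forms (often expressed as percentages) would be the following; this is a reconstruction consistent with the text, not a verbatim copy:

```latex
\Phi = \frac{\chi_{min} - \chi}{\chi},
\qquad
\Omega = \frac{\chi_{a} - \chi}{\chi}.
```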


Table 4.

Variability of the ants algorithms for asymmetric instances.


Table 5.

Variability of the ants algorithms for symmetric instances.


It can be observed from Tables 4 and 5 that the MS-MMAS is the best algorithm concerning the ν and Φ metrics, while the MS-ACS is the best concerning the Ω metric. Tables 6 and 7 show the detailed MS-MMAS results behind Tables 4 and 5. The results of the other ACO variants can be seen in [5].

Tables 8 and 9 present the average processing time (in seconds) spent by each heuristic. Instances are grouped by the number of vertices. From these tables, it can be concluded that the MS-MMAS was the ant algorithm that demanded the most processing time. Tables 6 and 7 present detailed results concerning the average time required by the MS-MMAS. Data regarding the time consumption of the other ACO variants can be seen in [5].


Table 6.

Results of the MS-MMAS executions for small instances.


Table 7.

Results of the MS-MMAS executions for medium and large instances.


Table 8.

Average time spent by the ant algorithms for the set of asymmetric instances.


Table 9.

Average time spent by the ant algorithms for the set of symmetric instances.


5. Discussion

The purpose of this study was to adapt the MMAS to the QTSP-PIC and compare its performance with the ACO variants proposed in [5]. As expected, MS-MMAS proved to be competitive with the other ACO variants proposed to solve the QTSP-PIC. Similarities and differences observed in the results are discussed in Section 5.1. The limitations of the study are discussed in Section 5.2.

5.1 Comparison between ACO algorithms

The ACO algorithms proposed by Silva et al. [5] proved to be viable methods for solving the QTSP-PIC. Yet, the performance of the AS and ACS algorithms, when compared to MS-MMAS, was rather poor on the benchmark set studied. MS-MMAS improved the results achieved by AS in 134 instances. Compared to ACS, MS-MMAS performed better in 136 instances. The results achieved by MS-MMAS improved those produced by MS-ACS in 93 instances. It is interesting to note that the MS-ACS algorithm performed slightly better than the MS-MMAS on the large instances. The Friedman test and Nemenyi post-hoc test assigned the same rank to these two ACO algorithms for most instance groups, which means that the difference between the results achieved by the MS-ACS and MS-MMAS was small.

These observations are also supported by the variability results of each ACO algorithm. Metric ν showed that MS-MMAS was the algorithm that achieved the best-known solutions of the benchmark set in most cases. Metric Φ showed that the MS-MMAS performed slightly better overall on the benchmark set than the MS-ACS. A possible explanation is that the MS-ACS variant might converge to a local minimum faster than the MS-MMAS.

Results presented in this study showed that the MS-MMAS algorithm is better suited than the other three ACO variants proposed in [5] to solve the QTSP-PIC. This suggests a positive impact of the implementation design proposed in this study and a contribution to the MAX-MIN Ant System state of the art.

5.2 Limitations

Due to limited time, parallel computing techniques could not be tested to improve the performance of the MS-MMAS. A previous study by Skinderowicz [10] investigated the potential effectiveness of a GPU-based parallel MAX-MIN Ant System in solving the TSP. In that study, the most promising MMAS variant was able to generate over 1 million candidate solutions per second when solving a large instance of the TSP benchmark set. Other techniques that could improve the MS-MMAS design could not be tested due to the lack of time:

  • A rank-based pheromone updating rule [25];

  • The application of the pseudo-random action choice rule proposed in [20];

  • Hybridization with other meta-heuristics [26, 27, 28].


6. Conclusions

This work dealt with a recently proposed variant of the Traveling Salesman Problem, named the Quota Traveling Salesman Problem with Passengers, Incomplete Ride, and Collection Time. In this problem, the salesman uses a flexible ride-sharing system to minimize travel costs while visiting some vertices to satisfy a pre-established quota. He must respect the budget limitations and the maximum travel time of every passenger. Each passenger can be transported directly to the desired destination or to an alternate destination. The alternative destination idea suggests that, when sharing a ride, pro-environmental or money-saving concerns can induce people to agree to fulfill their needs at a similar destination. Operational constraints regarding vehicle capacity and travel time were also considered.

The Multi-Strategy MAX-MIN Ant System, a variant from the Ant Colony Optimization (ACO) family of algorithms, was presented. This algorithm uses the MS concept improved with roulette wheel selection and memory-based principles to avoid redundant executions of the local search algorithm. The results of MS-MMAS were compared with those produced by the ACO algorithms presented in [5]. To support MS-MMAS, the ride-matching heuristic and the local search heuristic based on multiple neighborhood operators proposed in [5] were reused.

The computational experiments reported in this study comprised one hundred forty-four instances. The experimental results show that the proposed ant algorithm variant was able to update the best-known solutions for this benchmark set according to the statistical results. The comparison with three other ACO variants proposed in [5] showed that MS-MMAS improved the best results of MS-ACS for ninety-three instances and was significantly superior to AS and ACS.

The presented work may be extended in multiple directions. First, it would be interesting to investigate whether the application of the pseudo-random action choice rule [20] could improve the MS-MMAS results. Another promising idea is the use of a pheromone update rule based on ant ranking [25]. Extending the MS-MMAS implementation design with parallel computing techniques [10] and hybridizing it with other meta-heuristics [26, 27, 28] are other interesting opportunities for future research.



Abbreviations

MaaS: Mobility as a Service
QTSP-PIC: Quota Traveling Salesman Problem with Passengers, Incomplete Ride and Collection Time
ACO: Ant Colony Optimization
RMH: Ride-Matching Heuristic
MnLS: Multi-neighborhood Local Search
MS-ACS: Multi-Strategy Ant Colony System
MS-MMAS: Multi-Strategy MAX-MIN Ant System
TSP: Traveling Salesman Problem
QTSP: Quota Traveling Salesman Problem
AS: Ant System
ACS: Ant Colony System

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to cite and reference


Bruno C.H. Silva, Islame F.C. Fernandes, Marco C. Goldbarg and Elizabeth F.G. Goldbarg (October 5th 2020). Multi-Strategy MAX-MIN Ant System for Solving Quota Traveling Salesman Problem with Passengers, Incomplete Ride and Collection Time. In: Operations Management - Emerging Trend in the Digital Era (Antonella Petrillo, Fabio De Felice, Germano Lambert-Torres and Erik Bonaldi, eds.), IntechOpen. DOI: 10.5772/intechopen.93860.
