Open access peer-reviewed chapter

Multi-Strategy MAX-MIN Ant System for Solving Quota Traveling Salesman Problem with Passengers, Incomplete Ride and Collection Time

Written By

Bruno C.H. Silva, Islame F.C. Fernandes, Marco C. Goldbarg and Elizabeth F.G. Goldbarg

Submitted: 01 June 2020 Reviewed: 02 September 2020 Published: 05 October 2020

DOI: 10.5772/intechopen.93860

From the Edited Volume

Operations Management - Emerging Trend in the Digital Era

Edited by Antonella Petrillo, Fabio De Felice, Germano Lambert-Torres and Erik Bonaldi


Abstract

This study proposes a novel adaptation of the MAX-MIN Ant System algorithm for the Quota Traveling Salesman Problem with Passengers, Incomplete Ride, and Collection Time. Different types of decisions are required to solve this problem: satisfaction of the minimum quota, acceptance of ride requests, and minimization of travel costs from the viewpoint of the salesman. The proposed algorithmic components regard vehicle capacity, travel time, passenger limitations, and a penalty for delivering a passenger outside the required destination. The ant-based algorithm incorporates different sources of heuristic information for the ants and memory-based principles. Computational results are reported, showing the effectiveness of this ant-based algorithm.

Keywords

  • Traveling Salesman
  • integer programming
  • transportation
  • shared mobility
  • Ant Colony Optimization

1. Introduction

The lives of ordinary consumers have changed almost beyond recognition in the past 20 years. First, with the introduction of high-speed internet access; more recently, with the arrival of mobile computing devices such as smartphones and tablets. According to data from the 2017 Gallup World Survey [1], 93% of adults in high-income economies own a cell phone, compared with 79% in developing economies. In India, 69% of adults own a cell phone, as do 85% in Brazil and 93% in China [1]. Smartphones and the internet have created a novel digital ecosystem where the adoption of new paradigms is increasingly fast, and each innovation that reaches the market can disrupt an entire segment.

In the transportation segment, a central theme is how the digital revolution has created opportunities to deliver services under the paradigm of Mobility as a Service (MaaS) [2]. There is growing interest in MaaS due to the notion of a sharing economy. Millennials own fewer vehicles than previous generations [3]. As evidenced by the ascension of on-demand mobility platforms, they are quickly adopting car sharing as a mainstream transportation solution. Investments in new travel patterns have become a priority to turn the opportunities of the urban mobility segment into new revenue streams.

This study deals with a novel optimization model that can improve the services provided by on-demand mobility platforms, called the Quota Traveling Salesman Problem with Passengers, Incomplete Ride, and Collection Time (QTSP-PIC). In this problem, the salesman is the vehicle driver and can reduce travel costs by sharing expenses with passengers. He must respect the budget limitations and the maximum travel time of every passenger. Each passenger can be transported directly to the desired destination or to an alternate destination. Lira et al. [4] suggest that pro-environmental or money-saving concerns can induce users of a ride-sharing service to agree to fulfill their needs at an alternate destination.

The QTSP-PIC can model a wide variety of real-world applications, of which cases related to sales and tourism are the most pertinent. In the sales case, the salesman must choose which cities to visit to reach a minimum sales quota and in which order to visit them while fulfilling travel requests. In the tourism case, the salesman is a tourist who chooses the best attractions to visit during a vacation trip and can use a ride-sharing system to reduce travel expenses. In both cases, the driver negotiates discounts with passengers transported to a destination similar to the desired one.

The QTSP-PIC was introduced by Silva et al. [5], who presented a mathematical formulation and heuristics based on Ant Colony Optimization (ACO) [6]. To support the ant algorithms, they proposed a Ride-Matching Heuristic (RMH) and a local search with multiple neighborhood operators, called Multi-neighborhood Local Search (MnLS). They tested the performances of the ant algorithms on 144 instances with up to 500 vertices. One of these algorithms, the Multi-Strategy Ant Colony System (MS-ACS), provided the best results. They concluded that their most promising algorithm could be improved with learning techniques to choose the source of heuristic information according to the instance type and the search space.

In this study, a MAX-MIN Ant System (MMAS) adaptation to the QTSP-PIC, called Multi-Strategy MAX-MIN Ant System (MS-MMAS), is discussed. MMAS improves the design of Ant System [6], the first ACO algorithm, in three important aspects: only the best ants are allowed to add pheromone during the pheromone trail update; a mechanism limits the strength of the pheromone trails; and local search algorithms are incorporated to improve the best solutions. Several recent studies showed the effectiveness of the MMAS on problems related to the QTSP-PIC [7, 8, 9, 10]. However, none of them explored the Multi-Strategy (MS) concept.

In the traditional ant algorithms applied to the Traveling Salesman Problem (TSP), ants use the arcs' costs as heuristic information [6]; the heuristic information adopted is called visibility. When solving the QTSP-PIC, different types of decisions must be considered: the accomplishment of the minimum quota, the management of the ride requests, and the minimization of travel costs. The MS idea is to use different mechanisms of visibility for the ants to improve diversification, with every ant deciding which strategy to use at random with uniform distribution. The MS proposed in this study extends the original implementation proposed in [5]. MS-MMAS also incorporates RMH and MnLS and uses a memory-based technique proposed in [11] to avoid redundant work: a hash table stores every solution already used as an initial solution to the local search, and a newly constructed solution starts the local search phase only if it is not in the hash table.

The benchmark for the tests consisted of 144 QTSP-PIC instances. It was proposed by Silva et al. [5]. Numerical results confirmed the effectiveness of the MS-MMAS by comparing it to other ACO variants proposed in [5].

The main contributions of this chapter are summarized in the following.

  • The extension of the MS concept proposed in [5] with a roulette mechanism that orients the ants to choose their heuristic information based on the best quality solutions achieved;

  • Improvement of the MMAS design with a memory-based technique proposed in [11];

  • Presentation of a novel MMAS variant that combines the improved MS concept and memory-based principles and assessment of its performance;

  • Experiments on a set of QTSP-PIC instances ranging from 10 to 500 cities and from 30 to 10,000 travel requests. The results showed that the proposed MMAS variant is competitive with the other three ACO variants presented in [5] for the QTSP-PIC.

The remainder of this chapter is organized as follows. Section 2 presents the QTSP-PIC and its formulation. Section 3 presents the Ant Colony Optimization metaheuristic and the implementation design of the MS-MMAS. Section 4 presents experimental results. The performance of the proposed ant-based algorithm is discussed in Section 5. Conclusions and future research directions are outlined in Section 6.


2. Problem definition

The TSP can be formulated on a complete weighted directed graph G = (N, A), where N is the set of vertices and A = {(i, j) : i, j ∈ N, i ≠ j} is the set of arcs. C = [c_ij] is the arc-weight matrix, where c_ij is the cost of arc (i, j). The objective is to determine the shortest Hamiltonian cycle in G. Due to its applicability, many TSP variants deal with specific constraints [12]. Awerbuch et al. [13] presented several quota-driven variants. One of them, called the Quota Traveling Salesman Problem (QTSP), is the basis for the problem investigated in this study. In the QTSP, there is a bonus associated with each vertex of G, and the salesman has to collect a minimum quota of bonuses in the visited vertices. Thus, the salesman needs to decide which cities to visit to achieve the minimum quota. The goal is to find a minimum-cost tour such that the sum of the bonuses collected in the visited vertices is at least the minimum quota.

The QTSP-PIC is a QTSP variant in which the salesman is the driver of a vehicle and can reduce travel costs by sharing expenses with passengers. A travel request is associated with each person demanding a ride, consisting of a pickup point, a drop-off point, a budget limit, a limit for the travel duration, and penalties associated with alternative drop-off points, i.e., points different from the destination demanded by that person. The salesman can accept or decline travel requests. This model combines elements of ride-sharing systems [14] with alternative destinations [4] and the selective pickup and delivery problem [15].

Let G = (N, A) be a connected graph, where N is the set of vertices and A = {(i, j) : i, j ∈ N, i ≠ j} is the set of arcs. Parameter q_i denotes the quota associated with vertex i ∈ N and g_i the time required to collect that quota. c_ij and t_ij denote, respectively, the cost and the time required to traverse arc (i, j) ∈ A. Let L be the set of passengers. List l_i ⊆ L denotes the subset of passengers who depart from i ∈ N. Let org_l and dst_l ∈ N be the pickup and drop-off points requested by passenger l. The salesman departs from city s = 1, visits exactly once each city of a subset N′ ⊆ N, and returns to s. The quota collected by the salesman must be at least K units. Along the trip, the salesman may choose which travel requests to fulfill. The travel costs are shared among the vehicle occupants, and the number of passengers cannot exceed the vehicle capacity R. Each passenger l ∈ L imposes a budget limit w_l and a maximum trip duration b_l. Let h_lj be the penalty for delivering passenger l ∈ L at city j ∈ N, j ≠ dst_l. The value h_lj is added to the final cost of the tour if passenger l is delivered at city j; if j = dst_l, then h_lj = 0. The objective of the QTSP-PIC is to find a Hamiltonian cycle Ψ = (N′, A′) over the selected vertices such that the ride-sharing cost plus eventual penalties is minimized and the quota constraint is satisfied.
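To make the notation above concrete, the sketch below (in C++, the language in which the algorithms of this chapter were implemented) shows one possible in-memory representation of a QTSP-PIC instance. All type and field names are illustrative assumptions, not the data structures of [5].

```cpp
#include <vector>

// Illustrative representation of one travel request (passenger l).
struct Passenger {
    int org;                      // pickup vertex org_l
    int dst;                      // requested drop-off vertex dst_l
    double budget;                // budget limit w_l
    double maxTime;               // maximum trip duration b_l
    std::vector<double> penalty;  // penalty[j] = h_lj for dropping off at vertex j
};

// Illustrative representation of a QTSP-PIC instance (symbols of Section 2).
struct Instance {
    int n = 0;                              // number of vertices
    int s = 0;                              // origin city of the salesman
    double K = 0.0;                         // minimum quota
    int R = 0;                              // vehicle capacity (number of passengers)
    std::vector<double> q;                  // quota q_i collected at vertex i
    std::vector<double> g;                  // collection time g_i at vertex i
    std::vector<std::vector<double>> c;     // arc costs c_ij
    std::vector<std::vector<double>> t;     // arc travel times t_ij
    std::vector<Passenger> passengers;      // set L of travel requests
};
```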

The QTSP is NP-hard [13]. It is a particular case of the QTSP-PIC, in which the list of persons demanding a ride is empty and the time spent to collect the bonus in each vertex is zero. Thus, QTSP-PIC also belongs to the NP-hard class.

Silva et al. [5] presented an integer non-linear programming model for the QTSP-PIC. They defined a solution as S = (N′, Q, L′, H′), where N′ is a list of vertices that represents a cycle Ψ satisfying the minimum quota restriction K; Q is a binary list in which the i-th element is 1 if the salesman collects the bonus of city i ∈ N′; L′ is a binary list in which the l-th element is 1 if the salesman accepts the l-th travel request; and H′ is a list of integers in which the l-th element is the index of the city where the l-th passenger leaves the car. If L′_l = 0, then H′_l = 0. The cost of solution S, denoted by S.cost, is calculated by Eq. (1).

$$S.cost = \sum_{(i,j) \in A'} \frac{c_{ij}}{1 + \sum_{l \in L'} v_{ij}^{l}} + \sum_{l \in L'} h_{l H'_l} \qquad (1)$$

where v_{ij}^{l} equals 1 if passenger l is on board when the vehicle traverses arc (i, j), and 0 otherwise.
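The following sketch illustrates the solution representation S = (N′, Q, L′, H′) and the cost of Eq. (1), reusing the Instance structure sketched above. To keep the example self-contained, the legs on which each accepted passenger is on board are stored explicitly (boardLeg/leaveLeg, illustrative fields derived from the pickup and drop-off positions); this is a simplified sketch, not the authors' formulation.

```cpp
#include <cstddef>
#include <vector>

// Illustrative solution representation, mirroring S = (N', Q, L', H').
struct Solution {
    std::vector<int>  route;     // N': visited vertices, route.front() == s (cycle implied)
    std::vector<char> collects;  // Q:  collects[p] == 1 if the bonus of route[p] is taken
    std::vector<char> accepted;  // L': accepted[l] == 1 if the l-th request is fulfilled
    std::vector<int>  dropoff;   // H': drop-off vertex of passenger l (if accepted)
    std::vector<int>  boardLeg;  // leg index where passenger l boards (illustrative)
    std::vector<int>  leaveLeg;  // leg index where passenger l leaves (illustrative)
};

// Eq. (1): every arc cost is split among the driver and the passengers on board,
// and the penalties h_{l,H'_l} of passengers left at alternative destinations are added.
double solutionCost(const Solution& sol, const Instance& inst) {
    double total = 0.0;
    const std::size_t m = sol.route.size();
    for (std::size_t leg = 0; leg < m; ++leg) {
        int i = sol.route[leg];
        int j = sol.route[(leg + 1) % m];            // last leg returns to s
        int occupants = 0;
        for (std::size_t l = 0; l < sol.accepted.size(); ++l)
            if (sol.accepted[l] && sol.boardLeg[l] <= static_cast<int>(leg)
                                && static_cast<int>(leg) < sol.leaveLeg[l])
                ++occupants;                          // passenger l shares arc (i, j)
        total += inst.c[i][j] / (1.0 + occupants);
    }
    for (std::size_t l = 0; l < sol.accepted.size(); ++l)
        if (sol.accepted[l])
            total += inst.passengers[l].penalty[sol.dropoff[l]];  // h_{l,H'_l}
    return total;
}
```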

3. Ant Colony Optimization

In Ant Colony Optimization, artificial ants build solutions and share information about their quality through a communication scheme similar to the one observed in some real ant species. Deneubourg et al. [16] investigated the behavior of Argentine ants in experiments where two bridges connected the nest to a food source. They observed that the ants initially walked on the two bridges at random, depositing pheromone on the paths. Over time, due to random fluctuations, the pheromone concentration on one bridge became higher than on the other, so more ants were attracted to that route. Finally, the whole colony ended up converging to the same route. The behavior of artificial ants preserves four notions of the natural behavior of ants:

  • Pheromone deposit on the traveled trail;

  • Predilection for trails with higher pheromone concentration;

  • Higher concentration of pheromone on shorter trails;

  • Communication between ants through the pheromone deposit.

Pheromone is a chemical structure of communication [17]. According to Dorigo et al. [18], pheromone enables the process of stigmergy and self-organization in which simple agents perform complex and objective-oriented behaviors. Stigmergy is a particular form of indirect communication used by social insects to coordinate their activities [18].

In the context of ant algorithms applied to the TSP, artificial ants moving through graph G tend to follow paths with higher pheromone deposit rates. Since ants deposit pheromone along the paths they follow, the pheromone level of a path increases as more ants choose it. This cooperation mechanism induces artificial ants to find good solutions, as it works as a shared memory that is continuously updated and can be consulted by every ant in the colony [19].

The baseline of Ant Colony Optimization is the Ant System (AS) algorithm [6]. In the TSP application, N is the set of vertices to visit. Ants construct solutions iteratively: at every iteration, an ant chooses the next vertex based on heuristic information, η, and pheromone trails, τ. Initially, all pheromone trails have the same amount of pheromone, τ_0, computed by Eq. (2), where n is the number of vertices in N and Cost(D) is the value of a TSP tour built by a greedy heuristic.

$$\tau_{ij}(0) = \left(n \times Cost(D)\right)^{-1} \qquad (2)$$

The k-th ant iteratively adds new vertices to the solution. The algorithm uses Eq. (3) to compute the probability that the k-th ant moves from vertex i to vertex j at the t-th iteration, where η_ij = 1/c_ij is the heuristic factor, τ_ij(t) is the pheromone on arc (i, j) at the t-th iteration, and Λ_k is the list of vertices not yet visited by the k-th ant. Coefficients α and β weight the influence of the pheromone and of the heuristic information, respectively; they are user-defined parameters. If α = 0, the probability computed by Eq. (3) depends only on the heuristic information, so the ant algorithm behaves like a greedy method. If β = 0, ants tend to select paths with higher pheromone levels, which may lead the algorithm to early stagnation. Hence, balancing the values of α and β is critical to guarantee a suitable search strategy [5].

$$\Upsilon_{ij}^{k}(t) = \frac{\left[\tau_{ij}(t)\right]^{\alpha}\left[\eta_{ij}\right]^{\beta}}{\sum_{w \in \Lambda_k}\left[\tau_{iw}(t)\right]^{\alpha}\left[\eta_{iw}\right]^{\beta}}, \quad j \in \Lambda_k \qquad (3)$$
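A sketch of the transition rule of Eq. (3): the next vertex is drawn from the candidate list Λ_k with probability proportional to τ^α · η^β. The pheromone and visibility matrices are assumed to be computed elsewhere; names are illustrative.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Eq. (3): choose the next vertex j in Lambda_k with probability
// proportional to tau[i][j]^alpha * eta[i][j]^beta.
int nextVertex(int i,
               const std::vector<int>& candidates,           // Lambda_k
               const std::vector<std::vector<double>>& tau,  // pheromone trails
               const std::vector<std::vector<double>>& eta,  // heuristic information
               double alpha, double beta, std::mt19937& rng) {
    std::vector<double> weight(candidates.size());
    for (std::size_t idx = 0; idx < candidates.size(); ++idx) {
        int j = candidates[idx];
        weight[idx] = std::pow(tau[i][j], alpha) * std::pow(eta[i][j], beta);
    }
    // discrete_distribution normalizes the weights, i.e., the denominator of Eq. (3).
    std::discrete_distribution<int> pick(weight.begin(), weight.end());
    return candidates[pick(rng)];
}
```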

Eqs. (4) and (5) show the formulas used to update the pheromone trails, where ρ is the evaporation coefficient, Cost(W_k) is the cost of the route W_k built by the k-th ant, and Δτ_ij^k is the amount of pheromone deposited on arc (i, j) by the k-th ant, computed by Eq. (6).

$$\tau_{ij} = (1-\rho)\times\tau_{ij} + \rho\times\Delta\tau_{ij}, \quad \rho \in (0,1] \qquad (4)$$

$$\Delta\tau_{ij} = \sum_{k=1}^{m}\Delta\tau_{ij}^{k} \qquad (5)$$

$$\Delta\tau_{ij}^{k} = \begin{cases} \dfrac{1}{Cost(W_k)}, & \text{if arc } (i,j) \in W_k \\ 0, & \text{otherwise} \end{cases} \qquad (6)$$
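The next sketch puts Eq. (2) and Eqs. (4)–(6) together: trails are initialized with τ_0 and, after every iteration, each ant deposits an amount inversely proportional to the cost of its route. Variable names are illustrative.

```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;
using Route  = std::vector<int>;

// Eq. (2): tau_0 = (n * Cost(D))^{-1}, with Cost(D) the cost of a greedy tour.
Matrix initPheromone(int n, double greedyCost) {
    return Matrix(n, std::vector<double>(n, 1.0 / (n * greedyCost)));
}

// Eqs. (4)-(6): evaporate and let every ant deposit 1/Cost(W_k) on the arcs of W_k.
void antSystemUpdate(Matrix& tau, const std::vector<Route>& routes,
                     const std::vector<double>& routeCosts, double rho) {
    const int n = static_cast<int>(tau.size());
    Matrix delta(n, std::vector<double>(n, 0.0));          // accumulator of Eq. (5)
    for (std::size_t k = 0; k < routes.size(); ++k)
        for (std::size_t p = 0; p < routes[k].size(); ++p) {
            int i = routes[k][p];
            int j = routes[k][(p + 1) % routes[k].size()]; // wrap-around arc back to start
            delta[i][j] += 1.0 / routeCosts[k];            // Eq. (6)
        }
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            tau[i][j] = (1.0 - rho) * tau[i][j] + rho * delta[i][j];  // Eq. (4)
}
```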

The ant algorithms proposed after AS improved its design with elitist pheromone update strategies and local search procedures [6]. Two well-known variants of AS are the Ant Colony System (ACS) [20] and the MAX-MIN Ant System [21]. Silva et al. [5] presented AS and ACS adaptations for the QTSP-PIC. Section 3.1 presents the MAX-MIN Ant System algorithm and its multi-strategy adaptation proposed in this study.

3.1 Multi-strategy MAX-MIN ant system

MMAS uses Eq. (3) to compute the probability that an ant moves from vertex i to vertex j. In addition, it incorporates improvements to avoid search stagnation and a pheromone update rule that limits pheromone concentration rates. Eq. (7) presents the pheromone update rule. The limits τmax and τmin prevent stagnation of the pheromone values.

$$\tau_{ij} = \max\left\{\tau_{min},\ \min\left\{\tau_{max},\ (1-\rho)\times\tau_{ij} + \rho\times\Delta\tau_{ij}^{best}\right\}\right\}, \quad \rho \in (0,1] \qquad (7)$$

$$\Delta\tau_{ij}^{best} = \begin{cases} \dfrac{1}{Cost(W_{best})}, & \text{if arc } (i,j) \in W_{best} \\ 0, & \text{otherwise} \end{cases} \qquad (8)$$
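A sketch of the MAX-MIN update of Eqs. (7) and (8): only the chosen best route deposits pheromone, and every trail is clamped to [τmin, τmax]. It reuses the Matrix and Route aliases of the previous sketch.

```cpp
#include <algorithm>
#include <vector>

// Eqs. (7)-(8): evaporate, deposit only along W_best, then clamp to [tauMin, tauMax].
void maxMinUpdate(Matrix& tau, const Route& best, double bestCost,
                  double rho, double tauMin, double tauMax) {
    const int n = static_cast<int>(tau.size());
    Matrix delta(n, std::vector<double>(n, 0.0));
    for (std::size_t p = 0; p < best.size(); ++p)
        delta[best[p]][best[(p + 1) % best.size()]] = 1.0 / bestCost;      // Eq. (8)
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            tau[i][j] = std::clamp((1.0 - rho) * tau[i][j] + rho * delta[i][j],
                                   tauMin, tauMax);                         // Eq. (7)
}
```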

There are three possibilities for the best route (Wbest) considered in the algorithm: the best route of the current iteration, the best route found so far, and the best route since the last time the pheromone trails were reinitialized. In the original MMAS design [21], these routes are chosen alternately. The initial value of the pheromone trails is τmax. If the algorithm reaches stagnation, i.e., the best current route remains the same for several iterations, the pheromone values are reinitialized to τmax. Assigning τmax to the pheromone trails produces a small variability among pheromone levels at the start of the search [21].

The implementation of the MMAS for the QTSP-PIC extends the original proposal [21] with the following adaptations:

  • Ants start at vertex s;

  • Ants include vertices in the route until the minimum quota is reached;

  • Solution Sk, built by the k-th ant, is computed by assigning passengers to route Wk with the RMH algorithm [5];

  • Use of the MS concept.

In the MMAS, ants use arc costs to compute the heuristic information. In the MS-MMAS, ants choose among the four sources listed in the following (a sketch of these visibility computations is given after the list).

  • Cost oriented: uses c_ij as heuristic information, such that η_ij = 1/c_ij;

  • Time oriented: uses t_ij as heuristic information, such that η_ij = 1/t_ij. This heuristic information guides ants to vertices that lead to travel time savings;

  • Quota oriented: q_j is used as heuristic information, such that η_ij = q_j/c_ij. This heuristic information guides ants to vertices that lead to the maximization of the quota collected;

  • Passenger oriented: the heuristic information is η_ij = |L_j|/c_ij, where |L_j| is the number of passengers departing from j. This strategy orients ants to maximize the number of travel requests fulfilled.
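A minimal sketch of the four visibility sources, assuming the Instance structure sketched in Section 2 and a per-vertex count of pending travel requests (pendingRequests, an illustrative name standing for |L_j|):

```cpp
#include <vector>

enum class Strategy { Cost, Time, Quota, Passenger };

// Visibility eta_ij of moving from vertex i to vertex j under the chosen strategy.
double visibility(Strategy xi, int i, int j, const Instance& inst,
                  const std::vector<int>& pendingRequests) {
    switch (xi) {
        case Strategy::Cost:      return 1.0 / inst.c[i][j];                 // eta = 1/c_ij
        case Strategy::Time:      return 1.0 / inst.t[i][j];                 // eta = 1/t_ij
        case Strategy::Quota:     return inst.q[j] / inst.c[i][j];           // eta = q_j/c_ij
        case Strategy::Passenger: return pendingRequests[j] / inst.c[i][j];  // eta = |L_j|/c_ij
    }
    return 0.0;  // unreachable; silences compiler warnings
}
```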

In the MS concept proposed in [5], every ant chooses its strategy at random with uniform distribution. In this study, a roulette wheel selection improves that concept: the proportion of the wheel assigned to each source of heuristic information is directly related to the quality of the solutions it has produced. Thus, at each iteration, the ants learn which heuristic information is the most promising, and in the final iterations they tend to select it.
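The roulette-wheel mechanism described above can be sketched as follows: each source of heuristic information accumulates a score tied to the quality of the solutions it produced, and every ant samples its strategy with probability proportional to that score. The scoring rule shown (adding the inverse of the solution cost) is an assumption made for illustration; the chapter only states that the wheel proportions follow the quality of the solutions achieved.

```cpp
#include <random>
#include <vector>

class StrategyRoulette {
public:
    // Equal initial scores, so the first iterations behave like uniform sampling.
    explicit StrategyRoulette(int nStrategies) : score_(nStrategies, 1.0) {}

    // Sample a strategy index with probability proportional to its accumulated score.
    int choose(std::mt19937& rng) const {
        std::discrete_distribution<int> pick(score_.begin(), score_.end());
        return pick(rng);
    }

    // Reward the strategy that produced a solution of the given cost (illustrative rule).
    void reward(int strategy, double cost) { score_[strategy] += 1.0 / cost; }

private:
    std::vector<double> score_;
};
```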

Algorithm 1 presents the pseudo-code of the MS-MMAS. It has the following parameters: maximum number of iterations (maxIter), number of ants (m ∈ Z>0), pheromone coefficient (α ∈ R>0), heuristic coefficient (β ∈ R>0), evaporation factor (ρ ∈ (0,1]), and pheromone limits (τmax, τmin ∈ R>0). It also uses the following data and variables:

  • N: set of vertices;

  • ξ: index of the heuristic information source;

  • Wk: route built by the k-th ant;

  • Sk: solution produced after applying the RMH heuristic [5] to route Wk;

  • Wi: the best route built in the i-th iteration;

  • Si: the best solution produced in the i-th iteration;

  • W*: the best route found so far;

  • S*: the best solution found so far;

  • Wbest: route used as input to the pheromone updating procedure;

  • Π: hash table that stores every solution Si already used as an initial solution to the local search algorithm.

Algorithm 1: MS-MMAS(maxIter, m, α, β, ρ, τmax, τmin)

1. Π ← ∅
2. Initialize pheromone trails with τmax
3. For k = 1 to m
4.   Wk[2] ← random_city(N \ {s})
5. For i = 1 to maxIter
6.   For k = 1 to m
7.     ξ ← choose_heuristic_information()
8.     Wk ← build_route(α, β, ξ)
9.     Sk ← assign_passengers(Wk)
10.    Update(Wi, Si)
11.  If Si ∉ Π
12.    Si ← MnLS(Si)
13.    Store(Π, Si)
14.  Update(W*, S*)
15.  Wbest ← alternate(maxIter, i, Wi, W*)
16.  Pheromone_update(Wbest, ρ, τmax, τmin)
17. Return S*

The algorithm sets τmax as the initial value of the pheromone trails (step 2). Since ants begin at vertex s, the second vertex of each route is selected at random with uniform distribution (steps 3 and 4). The k-th ant decides which heuristic information ξ to use (step 7) and builds a route (step 8). The algorithm uses the RMH heuristic to assign passengers to Wk, completing a solution (step 9). The algorithm then updates Wi and Si (step 10). If Si ∉ Π (step 11), the MnLS algorithm is applied to Si (step 12) and the resulting solution is stored in the hash table Π (step 13). In later iterations, a solution is used as the starting point of the local search only if it is not in Π, which prevents redundant work. The algorithm updates the best route and the best solution found so far, W* and S* (step 14). Similar to the original design of MMAS, Wi is assigned to Wbest during the first 25% of the iterations or when i lies in [50%, 75%] of maxIter; W* is assigned to Wbest when i lies in [25%, 50%] of maxIter or when i is greater than or equal to 75% of maxIter (step 15). This procedure improves diversification by shifting the emphasis over the search space. Wbest is used to update the pheromones (step 16). Finally, the algorithm returns S*.
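Two pieces of Algorithm 1 lend themselves to short sketches: the hash table Π that skips the local search when a solution has already been refined (steps 11-13), and the alternate() schedule of step 15 that switches the emphasis between the iteration-best route Wi and the global-best route W* at 25%, 50%, and 75% of maxIter. Hashing a solution by its visited-vertex sequence is an illustrative choice; any canonical key would do.

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Illustrative canonical key of a solution: the sequence of visited vertices.
std::string solutionKey(const std::vector<int>& route) {
    std::string key;
    for (int v : route) { key += std::to_string(v); key += ','; }
    return key;
}

// Steps 11-13: run MnLS only for solutions never used as a local-search start before.
// insert(...).second is true only when the key was not already stored in Pi.
bool shouldRunLocalSearch(std::unordered_set<std::string>& Pi,
                          const std::vector<int>& route) {
    return Pi.insert(solutionKey(route)).second;
}

// Step 15: shift the emphasis between the iteration-best route (Wi) and the
// global-best route (W*) along the run, with breakpoints at 25%, 50% and 75%.
const std::vector<int>& alternate(int maxIter, int iter,
                                  const std::vector<int>& iterationBest,
                                  const std::vector<int>& globalBest) {
    double f = static_cast<double>(iter) / maxIter;
    bool useIterationBest = (f < 0.25) || (f >= 0.50 && f < 0.75);
    return useIterationBest ? iterationBest : globalBest;
}
```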


4. Experiments and results

This section presents the methodology and the results of the experiments. Section 4.1 describes the methodology, Section 4.2 presents the parameters used in the MS-MMAS algorithm, and Section 4.3 presents the results.

4.1 Methodology

The experiments were executed on a computer with an Intel Core i5 2.6 GHz processor, Windows 10 64-bit, and 6 GB of RAM. The algorithms were implemented in C++ and compiled with GNU g++ version 4.8.4. The benchmark set proposed in [5] was used to test the effectiveness of the MS-MMAS. The sizes of those instances range from 10 to 500 vertices: small instances have up to 40 vertices, medium instances up to 100, and large instances more than 100 vertices. The instances are available for download at https://github.com/brunocastrohs/QTSP-PIC.

The best results, average results, and average processing times (in seconds) over 20 independent executions of the MS-MMAS are reported. The experiments report the distance between the best-known solutions and the best results provided by the MS-MMAS, as well as the frequency with which the MS-MMAS reached the best-known solutions of the benchmark set. With these experiments, it is possible to conclude whether the MS-MMAS algorithm was able to find the best-known solution of each instance and how often this happened.

The Friedman test [23] with the Nemenyi post-hoc procedure [24] is applied, with a significance level of 0.05, to identify significant differences among the results of the MS-MMAS and the other three ACO variants proposed in [5]. For the Friedman test, the instances were grouped according to their sizes (number of vertices). There are eight groups of symmetric (asymmetric) instances, each containing nine instances, named g<n>, where <n> stands for the size.

4.2 Parameter tuning

The irace package [22] was used to tune the parameters of the MS-MMAS algorithm. Twenty symmetric and twenty asymmetric instances, selected at random, were used to adjust the parameters. The irace package uses the maxExperiments and maxTime parameters as stopping criteria; maxExperiments was set to 10^3.

For the asymmetric instance set, the parameters were defined as follows: maxIter=31; m=51; α=3.08; β=10.31; ρ=0.52; τmax=0.8; and τmin=0.2. For the symmetric instance set, the parameters were: maxIter=29; m=57; α=2.92; β=9.53; ρ=0.67; τmax=0.7; and τmin=0.2.

4.3 Results

In this section, the results of the MS-MMAS are compared to those produced by the other three ACO variants proposed in [5]: AS, ACS, and MS-ACS.

Table 1 presents the comparison between the ant algorithms. The best results obtained by the MS-MMAS were compared with those achieved by each ant algorithm proposed in [5]. The results are reported in the X x Y format, where X is the number of instances for which the MS-MMAS found the better solution and Y is the number of instances for which the algorithm in the corresponding column did.

Comparison | AS (Asym.) | ACS (Asym.) | MS-ACS (Asym.) | AS (Sym.) | ACS (Sym.) | MS-ACS (Sym.)
MS-MMAS | 68 x 0 | 68 x 1 | 45 x 17 | 66 x 1 | 68 x 2 | 48 x 14

Table 1.

Comparison between the ant algorithms.

Table 1 shows that the MS-MMAS was the algorithm that reported the best solution for most instances. This algorithm performed better than the other ACO variants due to its enhanced pheromone update procedures. The MS implementation with roulette wheel selection proved to be effective at finding the best heuristic information used by the ants during the run. Table 1 also shows that the MS-MMAS provides results of better quality than the MS-ACS in most symmetric cases. The MS-ACS was superior to the MS-MMAS in seventeen asymmetric and fourteen symmetric instances. It was observed that the pseudo-random proportional action choice rule of the MS-ACS [20], which allows a greedier solution construction, proved to be a good algorithmic strategy for solving large instances.

Tables 2 and 3 show the ranks of the ant algorithms based on the Friedman test [23] with the Nemenyi post-hoc test [24]. The first column of these tables presents the subsets of instances grouped according to their sizes. The other columns present the p-values of the Friedman test and the ranks from the Nemenyi post-hoc test. In the post-hoc test, the ranks range from a to c: the c rank indicates that the algorithm achieved the worst performance in comparison to the others, and the a rank indicates the opposite. If the performances of two or more algorithms are similar, the test assigns the same rank to them. In this experiment, the significance level was set to 0.05.

Asymmetric
Subset | p-value | AS | ACS | MS-ACS | MS-MMAS
g10 | 0.003159 | b | b | a | a
g20 | 0.000040 | b | c | a | a
g30 | 0.000017 | c | c | a | a
g40 | 0.000024 | b | c | a | a
g50 | 0.000048 | c | c | b | a
g100 | 0.000045 | b | c | a | a
g200 | 0.000031 | b | c | a | a
g500 | 0.000037 | c | b | a | a

Table 2.

Results of Friedman’s test and Nemenyi post-hoc test over asymmetric instances set.

Symmetric
Subset | p-value | AS | ACS | MS-ACS | MS-MMAS
g10 | 0.003543 | b | b | a | a
g20 | 0.000205 | b | c | a | a
g30 | 0.000045 | c | c | b | a
g40 | 0.000059 | b | c | a | a
g50 | 0.000035 | b | b | a | a
g100 | 0.000024 | b | b | a | a
g200 | 0.000045 | c | b | a | a
g500 | 0.000098 | c | b | a | a

Table 3.

Results of Friedman’s test and Nemenyi post-hoc test over symmetric instances set.

The p-values presented in Tables 2 and 3 show that the performances of the ant algorithms were not similar, i.e., the null hypothesis [24] is rejected in all cases. These tables show that the MS-MMAS ranks higher than AS and ACS for all subsets. The ranks of the MS-ACS and the MS-MMAS were the same in most cases, which implies that only these two algorithms had similar performances, i.e., the relative distance between their results is short.

To analyze the variability of the results provided by each ant algorithm with respect to the best results known so far for the benchmark set, three metrics were adopted. The first metric, ν, is the percentage of the 20 independent executions in which an ant algorithm found the best-known solution. The second metric, Φ, is the relative distance between the cost of the best-known solution, χ, and the best solution, χ_min, of each ant algorithm. The third metric, Ω, is the relative distance between χ and the average solution, χ_a, of each ant algorithm. Φ is calculated by Eq. (9) and Ω by Eq. (10). The average values of ν, Φ, and Ω are reported in Tables 4 and 5.

Asymmetric
Metric | AS | ACS | MS-ACS | MS-MMAS
ν | 4.30% | 2.56% | 6.8% | 11.84%
Φ | 0.2075333 | 0.2773624 | 0.0541799 | 0.0054741
Ω | 0.2835401 | 0.4023659 | 0.1754952 | 0.5854892

Table 4.

Variability of the ants algorithms for asymmetric instances.

Symmetric
Metric | AS | ACS | MS-ACS | MS-MMAS
ν | 2.01% | 0.69% | 8.75% | 9.05%
Φ | 0.2169756 | 0.2285948 | 0.0547890 | 0.0113889
Ω | 0.3017957 | 0.3620682 | 0.1656022 | 0.6432748

Table 5.

Variability of the ants algorithms for symmetric instances.

$$\Phi = \frac{\chi_{min}}{\chi} - 1 \qquad (9)$$

$$\Omega = \frac{\chi_{a}}{\chi} - 1 \qquad (10)$$
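A small sketch of how the three variability metrics can be computed for one instance: ν is the fraction of the 20 runs that reached the best-known value, while Φ and Ω follow Eqs. (9) and (10). The exact-equality test against the best-known cost is an assumption; a tolerance could be used instead.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

struct Variability { double nu, phi, omega; };

// costs: objective values of the independent runs on one instance;
// bestKnown: best-known value (chi) reported for that instance.
Variability computeMetrics(const std::vector<double>& costs, double bestKnown) {
    double best = *std::min_element(costs.begin(), costs.end());                    // chi_min
    double avg  = std::accumulate(costs.begin(), costs.end(), 0.0) / costs.size();  // chi_a
    double hits = static_cast<double>(
        std::count_if(costs.begin(), costs.end(),
                      [&](double c) { return c <= bestKnown; }));
    return { hits / costs.size(),       // nu
             best / bestKnown - 1.0,    // Eq. (9)
             avg / bestKnown - 1.0 };   // Eq. (10)
}
```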

It can be observed from Tables 4 and 5 that the MS-MMAS is the best algorithm concerning the ν and Φ metrics, while the MS-ACS is the best algorithm concerning the Ω metric. Tables 6 and 7 detail the per-instance results of the MS-MMAS summarized in Tables 4 and 5. The results of the other ACO variants can be seen in [5].

Tables 8 and 9 present the average processing time (in seconds) spent by each algorithm, with instances grouped by the number of vertices. From these tables, it can be concluded that the MS-MMAS was the ant algorithm that demanded the most processing time. Tables 6 and 7 also present detailed results concerning the average time required by the MS-MMAS. Data regarding the time consumption of the other ACO variants can be seen in [5].

Instance | Best (Asym.) | Average (Asym.) | Time, s (Asym.) | Percentage (Asym.) | Best (Sym.) | Average (Sym.) | Time, s (Sym.) | Percentage (Sym.)
A-10-3 | 478.42 | 863.13 | 0.15 | 100% | 545.92 | 996.34 | 0.15 | 25%
A-10-4 | 523.57 | 1069.33 | 0.14 | 80% | 460.00 | 838.84 | 0.18 | 20%
A-10-5 | 482.60 | 690.08 | 0.14 | 5% | 371.93 | 658.97 | 0.19 | 10%
A-20-3 | 519.67 | 936.47 | 0.35 | 5% | 679.75 | 1363.59 | 0.32 | 20%
A-20-4 | 458.10 | 1145.88 | 0.38 | 0% | 346.30 | 661.82 | 0.43 | 35%
A-20-5 | 398.75 | 669.82 | 0.35 | 10% | 351.50 | 1006.24 | 0.48 | 5%
A-30-3 | 618.33 | 1180.70 | 0.48 | 10% | 574.33 | 1469.68 | 0.50 | 5%
A-30-4 | 401.20 | 805.32 | 1.28 | 5% | 654.80 | 1202.15 | 0.40 | 5%
A-30-5 | 475.83 | 1033.91 | 0.58 | 10% | 464.05 | 911.56 | 0.66 | 5%
A-40-3 | 692.00 | 1060.67 | 2.83 | 5% | 718.25 | 1399.04 | 0.72 | 10%
A-40-4 | 658.95 | 1088.41 | 2.52 | 5% | 570.98 | 961.13 | 2.95 | 5%
A-40-5 | 460.90 | 900.51 | 3.11 | 5% | 441.22 | 836.52 | 2.89 | 5%
B-10-3 | 729.50 | 925.35 | 0.13 | 5% | 834.67 | 1485.47 | 0.20 | 20%
B-10-4 | 306.90 | 421.35 | 0.13 | 15% | 493.58 | 757.45 | 0.16 | 10%
B-10-5 | 434.75 | 835.01 | 0.18 | 55% | 726.35 | 1160.97 | 0.23 | 5%
B-20-3 | 805.42 | 1251.39 | 0.28 | 10% | 950.00 | 1666.22 | 0.45 | 5%
B-20-4 | 848.62 | 1366.69 | 0.35 | 5% | 822.82 | 1386.67 | 0.49 | 5%
B-20-5 | 895.17 | 1275.78 | 0.28 | 70% | 660.22 | 1215.12 | 0.35 | 5%
B-30-3 | 747.75 | 1316.96 | 1.31 | 5% | 718.67 | 1358.11 | 0.93 | 5%
B-30-4 | 723.27 | 1301.57 | 1.51 | 5% | 650.35 | 1272.86 | 0.69 | 5%
B-30-5 | 700.75 | 1205.96 | 1.39 | 5% | 504.68 | 1091.11 | 0.89 | 5%
B-40-3 | 964.42 | 1574.00 | 2.02 | 0% | 889.83 | 1682.76 | 1.97 | 5%
B-40-4 | 1195.62 | 2134.73 | 1.20 | 5% | 743.82 | 1508.16 | 2.09 | 0%
B-40-5 | 819.28 | 1537.71 | 1.11 | 10% | 749.82 | 1351.64 | 0.85 | 5%
C-10-3 | 359.25 | 604.70 | 0.17 | 55% | 597.83 | 697.51 | 0.05 | 0%
C-10-4 | 307.10 | 514.66 | 0.20 | 5% | 408.45 | 514.65 | 0.15 | 85%
C-10-5 | 566.58 | 783.35 | 0.17 | 10% | 409.60 | 846.51 | 0.23 | 10%
C-20-3 | 650.25 | 978.84 | 0.46 | 10% | 629.92 | 1063.99 | 0.36 | 0%
C-20-4 | 563.78 | 938.50 | 0.72 | 5% | 441.65 | 1019.96 | 1.06 | 10%
C-20-5 | 739.22 | 1056.77 | 1.02 | 0% | 711.87 | 1095.84 | 0.63 | 5%
C-30-3 | 837.58 | 1198.09 | 1.05 | 5% | 830.17 | 1221.65 | 0.61 | 10%
C-30-4 | 754.10 | 1144.99 | 2.47 | 5% | 745.92 | 1096.55 | 1.16 | 5%
C-30-5 | 560.18 | 998.28 | 2.29 | 10% | 490.80 | 931.81 | 2.21 | 5%
C-40-3 | 1008.00 | 1541.54 | 2.59 | 5% | 607.00 | 946.57 | 3.45 | 10%
C-40-4 | 695.30 | 1172.76 | 2.06 | 10% | 699.80 | 1136.17 | 2.25 | 0%
C-40-5 | 623.33 | 1097.22 | 3.24 | 10% | 475.67 | 898.80 | 7.82 | 0%

Table 6.

Results of the MS-MMAS executions for small instances.

Instance | Best (Asym.) | Average (Asym.) | Time, s (Asym.) | Percentage (Asym.) | Best (Sym.) | Average (Sym.) | Time, s (Sym.) | Percentage (Sym.)
A-50-3 | 1058.33 | 2128.68 | 2.31 | 5% | 1000.33 | 1943.13 | 2.91 | 5%
A-50-4 | 774.93 | 1473.57 | 4.48 | 5% | 783.40 | 1345.19 | 3.53 | 5%
A-50-5 | 673.42 | 1314.54 | 4.16 | 5% | 583.08 | 1008.33 | 2.59 | 0%
A-100-3 | 1431.42 | 2046.96 | 53.44 | 5% | 1514.08 | 2292.08 | 15.66 | 20%
A-100-4 | 1456.47 | 2705.07 | 23.12 | 5% | 1165.90 | 1595.91 | 37.11 | 5%
A-100-5 | 1106.17 | 1778.38 | 40.57 | 10% | 980.28 | 1366.09 | 58.20 | 5%
A-200-3 | 2806.75 | 3610.22 | 424.60 | 0% | 2793.33 | 3272.00 | 257.42 | 0%
A-200-4 | 2388.88 | 3196.15 | 381.07 | 0% | 2199.45 | 2807.06 | 79.32 | 10%
A-200-5 | 1753.00 | 2286.08 | 1380.26 | 10% | 2086.82 | 3237.85 | 55.37 | 10%
A-500-3 | 6878.42 | 7165.39 | 1641.92 | 33% | 6331.75 | 7679.65 | 732.94 | 10%
A-500-4 | 5572.42 | 5637.26 | 4939.84 | 0% | 5030.48 | 5080.66 | 913.82 | 0%
A-500-5 | 4389.92 | 4539.44 | 16990.77 | 50% | 4610.95 | 4696.91 | 73.97 | 33%
B-50-3 | 1338.42 | 2356.24 | 5.82 | 10% | 966.42 | 1770.88 | 4.60 | 5%
B-50-4 | 951.87 | 1757.61 | 5.00 | 5% | 772.67 | 1490.01 | 1.47 | 5%
B-50-5 | 1083.18 | 1943.22 | 3.91 | 5% | 692.42 | 1342.78 | 4.74 | 5%
B-100-3 | 1781.00 | 3115.78 | 30.96 | 5% | 1803.33 | 3258.90 | 15.11 | 5%
B-100-4 | 1409.65 | 2467.02 | 61.76 | 5% | 1648.58 | 3360.93 | 11.00 | 5%
B-100-5 | 1361.20 | 2734.24 | 39.89 | 5% | 1018.37 | 1536.34 | 99.02 | 5%
B-200-3 | 3302.83 | 4840.94 | 317.37 | 0% | 3016.67 | 4267.17 | 56.62 | 0%
B-200-4 | 2536.80 | 3477.13 | 681.75 | 0% | 2326.97 | 3139.57 | 81.34 | 10%
B-200-5 | 2127.88 | 2814.24 | 906.74 | 0% | 1893.67 | 2506.64 | 102.33 | 10%
B-500-3 | 6994.84 | 7203.61 | 3857.77 | 0% | 6433.92 | 6475.67 | 126.51 | 0%
B-500-4 | 5419.87 | 5730.97 | 24183.87 | 100% | 5191.77 | 5276.98 | 267.72 | 0%
B-500-5 | 4546.28 | 4643.03 | 21118.98 | 0% | 4379.43 | 4494.11 | 164.08 | 33%
C-50-3 | 1201.92 | 1801.68 | 3.12 | 5% | 829.75 | 1482.86 | 2.82 | 5%
C-50-4 | 937.25 | 1651.11 | 7.96 | 5% | 901.40 | 1605.24 | 6.16 | 10%
C-50-5 | 609.60 | 1127.99 | 16.58 | 5% | 766.48 | 1366.58 | 7.43 | 5%
C-100-3 | 1496.58 | 2001.76 | 47.62 | 0% | 1364.00 | 1819.03 | 14.21 | 10%
C-100-4 | 1352.85 | 2458.82 | 32.32 | 5% | 1099.00 | 1445.29 | 23.98 | 0%
C-100-5 | 1022.70 | 1466.42 | 127.11 | 0% | 991.12 | 1472.23 | 35.21 | 5%
C-200-3 | 2629.00 | 3197.13 | 478.90 | 10% | 2510.50 | 3171.66 | 65.07 | 10%
C-200-4 | 2184.85 | 2648.38 | 710.70 | 10% | 2141.40 | 2741.11 | 52.59 | 0%
C-200-5 | 1881.03 | 2296.53 | 486.62 | 0% | 1713.17 | 2127.84 | 126.85 | 10%
C-500-3 | 6528.08 | 6618.16 | 2484.52 | 0% | 6023.42 | 6087.14 | 138.12 | 33%
C-500-4 | 5139.54 | 5298.21 | 8188.46 | 0% | 4942.28 | 4958.64 | 65.19 | 33%
C-500-5 | 4286.45 | 4278.49 | 6061.78 | 0% | 4167.67 | 4178.10 | 302.79 | 0%

Table 7.

Results of the MS-MMAS executions for medium and large instances.

n | AS | ACS | MS-ACS | MS-MMAS
10 | 0.05 | 0.06 | 0.12 | 0.15
20 | 0.10 | 0.13 | 0.26 | 0.46
30 | 0.20 | 0.30 | 1.92 | 1.37
40 | 0.34 | 0.48 | 2.29 | 2.29
50 | 0.41 | 2.20 | 5.77 | 5.92
100 | 6.68 | 28.85 | 32.69 | 50.75
200 | 31.81 | 270.94 | 409.72 | 640.89
500 | 41.72 | 3477.13 | 3545.20 | 9940.87

Table 8.

Average time spent by the ant algorithms for the set of asymmetric instances.

n | AS | ACS | MS-ACS | MS-MMAS
10 | 0.05 | 0.06 | 0.12 | 0.17
20 | 0.10 | 0.13 | 0.27 | 0.51
30 | 0.18 | 0.24 | 0.49 | 0.89
40 | 0.28 | 0.38 | 0.73 | 2.78
50 | 0.40 | 0.56 | 5.41 | 4.02
100 | 8.13 | 13.51 | 29.17 | 34.93
200 | 22.94 | 51.92 | 112.58 | 97.43
500 | 40.53 | 75.72 | 127.83 | 309.46

Table 9.

Average time spent by the ant algorithms for the set of symmetric instances.


5. Discussion

The purpose of this study was to adapt the MMAS to the QTSP-PIC and compare its performance with the ACO variants proposed in [5]. As expected, the MS-MMAS proved to be competitive with the other ACO variants proposed to solve the QTSP-PIC. Similarities and differences observed in the results are discussed in Section 5.1. The limitations of the study are discussed in Section 5.2.

5.1 Comparison between ACO algorithms

The ACO algorithms proposed by Silva et al. [5] proved to be viable methods for solving the QTSP-PIC. Yet, the performance of the AS and ACS algorithms, when compared to the MS-MMAS, was rather poor on the benchmark set studied. The MS-MMAS improved the results achieved by AS in 134 instances. Compared to ACS, the MS-MMAS performed better in 136 instances. The results achieved by the MS-MMAS improved those produced by the MS-ACS in 93 instances. It is interesting to note that the MS-ACS algorithm performed slightly better on the large instances than the MS-MMAS. The Friedman test and the Nemenyi post-hoc test assigned the same rank to these two ACO algorithms for most instance groups, which means that the difference between the results achieved by the MS-ACS and the MS-MMAS was small.

These observations are also supported by the variability results of each ACO algorithm. Metric ν showed that the MS-MMAS was the algorithm that achieved the best-known solutions of the benchmark set in most cases. Metric Φ showed that the MS-MMAS performed slightly better overall on the benchmark set than the MS-ACS. A possible explanation is that the MS-ACS might converge to a local minimum faster than the MS-MMAS.

The results presented in this study showed that the MS-MMAS algorithm is better suited than the other three ACO variants proposed in [5] to solve the QTSP-PIC. This suggests a positive impact of the implementation design proposed in this study and a contribution to the MAX-MIN Ant System state of the art.

5.2 Limitations

Due to limited time, parallel computing techniques could not be tested to improve the performance of the MS-MMAS. A previous study by Skinderowicz [10] investigated the effectiveness of a GPU-based parallel MAX-MIN Ant System for the TSP; in that study, the most promising MMAS variant was able to generate over 1 million candidate solutions per second when solving a large instance of the TSP benchmark set. Other techniques that could improve the MS-MMAS design but could not be tested due to the lack of time are:

  • A rank-based pheromone updating rule [25];

  • The application of the pseudo-random proportional action choice rule proposed in [20];

  • Hybridization with other meta-heuristics [26, 27, 28].


6. Conclusions

This work dealt with a recently proposed variant of the Traveling Salesman Problem named the Quota Traveling Salesman Problem with Passengers, Incomplete Ride, and Collection Time. In this problem, the salesman uses a flexible ride-sharing system to minimize travel costs while visiting a subset of vertices to satisfy a pre-established quota. He must respect the budget limitations and the maximum travel time of every passenger. Each passenger can be transported directly to the desired destination or to an alternate destination. The alternative destination idea suggests that, when sharing a ride, pro-environmental or money-saving concerns can induce persons to agree to fulfill their needs at a similar destination. Operational constraints regarding vehicle capacity and travel time were also considered.

The Multi-Strategy MAX-MIN Ant System, a variant from the Ant Colony Optimization (ACO) family of algorithms, was presented. This algorithm uses the MS concept improved with roulette wheel selection and memory-based principles to avoid redundant executions of the local search algorithm. The results of MS-MMAS were compared with those produced by the ACO algorithms presented in [5]. To support MS-MMAS, the ride-matching heuristic and the local search heuristic based on multiple neighborhood operators proposed by [5] were reused.

The computational experiments reported in this study comprised one hundred forty-four instances. The experimental results show that the proposed ant algorithm variant was able to improve best-known solutions for this benchmark set according to the statistical analysis. The comparison with the three other ACO variants proposed in [5] showed that the MS-MMAS improved the best results of the MS-ACS for ninety-three instances and that the MS-MMAS is significantly superior to AS and ACS.

The presented work may be extended in multiple directions. First, it would be interesting to investigate whether the application of the pseudo-random proportional action choice rule [20] could improve the MS-MMAS results. Another promising idea is the use of a pheromone update rule based on ant ranking [25]. Extending the MS-MMAS implementation design with parallel computing techniques [10] and hybridizing it with other meta-heuristics [26, 27, 28] are other interesting opportunities for future research.


Abbreviations

MaaS: Mobility as a Service
QTSP-PIC: Quota Traveling Salesman Problem with Passengers, Incomplete Ride and Collection Time
ACO: Ant Colony Optimization
RMH: Ride-Matching Heuristic
MnLS: Multi-neighborhood Local Search
MS-ACS: Multi-Strategy Ant Colony System
MMAS: MAX-MIN Ant System
MS-MMAS: Multi-Strategy MAX-MIN Ant System
MS: Multi-Strategy
TSP: Traveling Salesman Problem
QTSP: Quota Traveling Salesman Problem
AS: Ant System
ACS: Ant Colony System

References

  1. 1. Mobile Tech Spurs Financial Inclusion in Developing Nations. shorturl.at/bjyOZ [Accessed in: 2020-08-08]
  2. 2. Hensher D., Ho C., Mulley C., Nelson J., Smith G., Wong Y. (2020). Understanding Mobility as a Service (MaaS): Past, Present and Future. https://doi.org/10.1016/C2019-0-00508-0 [Accessed in: 2020-08-08]
  3. 3. Sin C. Ho, W.Y. Szeto, Yong-Hong Kuo, Janny M.Y. Leung, Matthew Petering, Terence W.H. Tou, A survey of dial-a-ride problems: Literature review and recent developments, Transportation Research Part B: Methodological, Volume 111, 2018, Pages 395–421, ISSN 0191–2615.https://doi.org/10.1016/j.trb.2018.02.001 [Accessed in: 2020-08-02]
  4. 4. de Lira, V. M., Perego, R., Renso, C., Rinzivillo, S., Times, V. C. (2018). Boosting ride sharing with alternative destinations. IEEE Transactions on Intelligent Transportation Systems, 19(7), 2290–2300. https://doi.org/10.1109/TITS.2018.2836395 [Accessed in: 2020-08-02]
  5. 5. Bruno C.H. Silva, Islame F.C. Fernandes, Marco C. Goldbarg, Elizabeth F.G. Goldbarg. Quota travelling salesman problem with passengers, incomplete ride and collection time optimization by ant-based algorithms. Computers & Operations Research, Volume 120, 2020, 104950, ISSN 0305–0548. https://doi.org/10.1016/j.cor.2020.104950. [Accessed in: 2020-08-02]
  6. 6. M. Dorigo, M. Birattari and T. Stutzle, “Ant colony optimization,” in IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, Nov. 2006, https://doi.org/10.1109/MCI.2006.329691. [Accessed in: 2020-08-04]
  7. 7. J. Pedro Schmitt, F. Baldo and R. Stubs Parpinelli. A MAX-MIN Ant System with Short-Term Memory Applied to the Dynamic and Asymmetric Traveling Salesman Problem. 2018 7th Brazilian Conference on Intelligent Systems (BRACIS), Sao Paulo, 2018, pp. 1–6, https://doi.org/10.1109/BRACIS.2018.00009. [Accessed in: 2020-08-04]
  8. 8. J. Pedro Schmitt, R. Stubs Parpinelli and F. Baldo. (2019). Analysis of Max-Min Ant System with Local Search Applied to the Asymmetric and Dynamic Travelling Salesman Problem with Moving Vehicle. In International Symposium on Experimental Algorithms (pp. 202–218). Springer, Cham. https://doi.org/10.1007/978-3-030-34029-2_14 [Accessed in: 2020-08-04]
  9. 9. Yang, K., You, X., Liu, S. et al. A novel ant colony optimization based on game for traveling salesman problem. Appl Intell (2020). https://doi.org/10.1007/s10489-020-01799-w [Accessed in: 2020-08-04]
  10. 10. R. Skinderowicz. Implementing a GPU-based parallel MAX–MIN Ant System, Future Generation Computer Systems,Volume 106, 2020, Pages 277–295, ISSN 0167-739X. https://doi.org/10.1016/j.future.2020.01.011 [Accessed in: 2020-08-04]
  11. 11. S.L. Martins, P.M. Pardalos, M.G.C. Resende, and C.C. Ribeiro. Greedy randomized adaptive search procedures for the steiner problem in graphs. In P.M. Pardalos, S. Rajasekaran, and J. Rolim, editors, Randomization methods in algorithmic design, volume 43 of DIMACS Series on Discrete Mathematics and Theoretical Computer Science, pages 133–145. American Mathematical Society, 1999. https://bookstore.ams.org/dimacs-43. [Accessed in: 2020-08-04]
  12. 12. K. Ilavarasi and K. S. Joseph. Variants of travelling salesman problem: A survey. International Conference on Information Communication and Embedded Systems (ICICES2014), Chennai, 2014, pp. 1–7. https://doi.org/10.1109/ICICES.2014.7033850. [Accessed in: 2020-08-04]
  13. 13. Awerbuch, B., Azar, Y., Blum, A., Vempala, S. New approximation guarantees for minimum-weight k-trees and prize-collecting salesmen. SIAM Journal on computing, 28(1), 254–262, 1998. https://epubs.siam.org/doi/abs/10.1137/S009753979528826X. [Accessed in: 2020-08-04]
  14. 14. Niels Agatz, Alan Erera, Martin Savelsbergh, Xing Wang. Optimization for dynamic ride-sharing: A review, European Journal of Operational Research, Volume 223, Issue 2, 2012, Pages 295–303. https://doi.org/10.1016/j.ejor.2012.05.028 [Accessed in: 2020-08-04]
  15. 15. Schonberger, J., Kopfer, H., Mattfeld, D. C. A combined approach to solve the pickup and delivery selection problem. In Operations Research Proceedings 2003 (pp. 150–155). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-55537-4_24 [Accessed in: 2020-08-04]
  16. 16. Deneubourg, J. L., Aron, S., Goss, S., Pasteels, J. M. The self-organizing exploratory pattern of the argentine ant. Journal of insect behavior, 3(2), 159–168, 1990. https://doi.org/10.1007/BF01417909 [Accessed in: 2020-08-04]
  17. 17. Hart, Adam; Jackson, Duncan E. U-turns on ant pheromone trails. Current Biology, v. 16, n. 2, p. R42-R43, 2006. https://doi.org/10.1016/j.cub.2006.01.015 [Accessed in: 2020-08-04]
  18. 18. Marco Dorigo, Eric Bonabeau, Guy Theraulaz, Ant algorithms and stigmergy, Future Generation Computer Systems, Volume 16, Issue 8, 2000, Pages 851–871, ISSN 0167-739X. https://doi.org/10.1016/S0167-739X(00)00042-X [Accessed in: 2020-08-04]
  19. 19. Skinderowicz R. Ant Colony System with Selective Pheromone Memory for TSP. Lecture Notes in Computer Science, vol 7654. Springer, Berlin, Heidelberg. 2012. https://doi.org/10.1007/978-3-642-34707-8_49 [Accessed in: 2020-08-04]
  20. 20. M. Dorigo and L. M. Gambardella. Ant colony system: a cooperative learning approach to the traveling salesman problem. in IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, April 1997. https://doi.org/10.1109/4235.585892 [Accessed in: 2020-08-04]
  21. 21. Thomas Stutzle, Holger H. Hoos. MAX–MIN Ant System. Future Generation Computer Systems. Volume 16, Issue 8, 2000, Pages 889–914, ISSN 0167-739X. https://doi.org/10.1016/S0167-739X(00)00043-1 [Accessed in: 2020-08-04]
  22. 22. Manuel Lopez-Ibanez, Jeremie Dubois-Lacoste, Leslie Perez Caceres, Mauro Birattari, Thomas Stutzle. The irace package: Iterated racing for automatic algorithm configuration. Operations Research Perspectives, Volume 3, 2016, Pages 43–58, ISSN 2214–7160. https://doi.org/10.1016/j.orp.2016.09.002 [Accessed in: 2020-08-04]
  23. 23. O’Gorman, Thomas. A comparison of the F-test, Friedman’s test, and several aligned rank tests for the analysis of randomized complete blocks. Journal of Agriculture Biology and Environmental Statistics. 6. 367–378, 2001. https://doi.org/10.1198/108571101317096578 [Accessed in: 2020-08-04]
  24. 24. Alan C. Elliott, Linda S. Hynan. A SAS macro implementation of a multiple comparison post hoc test for a Kruskal–Wallis analysis, Computer Methods and Programs in Biomedicine, Volume 102, Issue 1, 2011, Pages 75–80, ISSN 0169–2607. https://doi.org/10.1016/j.cmpb.2010.11.002 [Accessed in: 2020-08-04]
  25. 25. Q. Song, Q. Zhao, S. Wang, Q. Liu and X. Chen. Dynamic Path Planning for Unmanned Vehicles Based on Fuzzy Logic and Improved Ant Colony Optimization. IEEE Access, vol. 8, pp. 62107–62115, 2020. https://doi.org/10.1109/ACCESS.2020.2984695 [Accessed in: 2020-08-10]
  26. 26. M. Xia. An ant colony algorithm Hybridized with Iterated Local Search for the QAP. 2009 Asia-Pacific Conference on Computational Intelligence and Industrial Applications (PACIIA), Wuhan, pp. 80–83, 2009. https://doi.org/10.1109/PACIIA.2009.5406542 [Accessed in: 2020-08-27]
  27. 27. Li, X., Tian, P., Leung, S. An ant colony optimization metaheuristic hybridized with tabu search for open vehicle routing problems. J Oper Res Soc 60, 1012–1025 (2009). https://doi.org/10.1057/palgrave.jors.2602644 [Accessed in: 2020-08-27]
  28. 28. P.S. Shelokar, Patrick Siarry, V.K. Jayaraman, B.D. Kulkarni. Particle swarm and ant colony algorithms hybridized for improved continuous optimization. Applied Mathematics and Computation, Volume 188, Issue 1, P. 129–142, 2007. https://doi.org/10.1016/j.amc.2006.09.098 [Accessed in: 2020-08-27]
