In this chapter, we describe efficient heuristics for scheduling jobs with release and delivery times so as to minimize the maximum job completion time. These heuristics are essentially based on Jackson’s extended heuristic, one of the most commonly used algorithms in scheduling theory. We present basic structural properties of the solutions delivered by Jackson’s heuristic and then illustrate how one can exploit them to build efficient heuristics.
- combinatorial optimization
- heuristic algorithm
- scheduling theory
- time complexity
- approximation algorithm
There are two distinct classes of combinatorial optimization problems: the class of polynomially solvable ones and the class of intractable (NP‐hard) ones.
Thus, approximation algorithms are most frequently used for the solution of the intractable problems.
Simple greedy heuristics that use priority dispatching rules for taking scheduling decisions can easily be constructed and adapted for scheduling problems. An obvious advantage of such heuristics is their speed, and an obvious disadvantage is poor solution quality. Generating a better solution requires more computational and algorithmic effort.
This chapter deals with one of the most widely used greedy heuristics in scheduling theory. The generic heuristic for scheduling jobs with release and delivery times on a single machine to minimize the maximum job completion time is named after Jackson (the heuristic was originally proposed for the version without release times and was then extended to the problem with release times by Schrage). Jackson’s heuristic (J‐heuristic, for short) iteratively, at each scheduling time (given by a job release or completion time), schedules, among the jobs released by that time, one with the largest delivery time. This 2‐approximation heuristic is important in its own right, and it also provides a firm basis for more complex heuristic algorithms that solve optimally various scheduling problems that cannot be solved by a greedy method.
In this chapter, we give a brief overview of heuristic algorithms that are essentially based on J‐heuristic. Then, we go into the analysis of the schedules created by J‐heuristic (J‐schedule), showing their beneficial structural properties. They are helpful for a closer study of the related problems, which, in turn, may lead to better solution methods. We illustrate how the deduced structural properties can be beneficially used by building an adaptive heuristic algorithm for our generic scheduling problem with a more flexible worst‐case performance ratio than that of J‐heuristic.
The next section consists of four subsections, in which we describe our basic scheduling problem, Jackson’s heuristic, other related heuristics, and real‐life applications of the scheduling problem. In Section 3, we study the basic structural properties of the schedules constructed by Jackson’s heuristic. In Section 4, we derive a flexible worst‐case performance estimation for the heuristic, and in Section 5, we construct a new adaptive heuristic based on Jackson’s heuristic. Section 6 concludes the chapter with final remarks.
2.1. Problem formulation
Our generic single‐machine scheduling problem can be described as follows. We have a set of jobs and a single machine that can handle at most one job at a time. Each job is characterized by its release time (the moment from which it becomes available for processing), its processing time on the machine, and its delivery time, i.e., the additional time the job needs after its completion on the machine to be fully delivered; the objective is to minimize the maximum full job completion time.
The problem is known to be strongly NP‐hard.
The problem has an equivalent formulation in which delivery times are replaced by due dates and the objective is the minimization of the maximum job lateness.
We may note that, besides job lateness, there are other due‐date‐oriented objective criteria. A common one is the number of late jobs.
Given an instance of the delivery‐time version, one can obtain an equivalent instance of the due‐date version as follows. Take a suitably large constant (no less than the maximum job delivery time) and define the due date of every job as that constant minus the job’s delivery time. Vice versa, given an instance of the due‐date version, we may create an equivalent instance of the delivery‐time version by introducing job delivery times equal to a suitably large constant minus the job due dates (any constant larger than the maximum job due date would work). The equivalence of these instances is easily seen (if the makespan in the delivery‐time version is minimized, the maximum job lateness in the due‐date version is minimized, and vice versa; see Bratley et al. for more details). Because of the equivalence, both of the above formulations might be used interchangeably.
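In code, the transformation in either direction is a one‐line change of parameters. The following is a minimal sketch; the list‐based representation and the function names are ours, introduced only for illustration:

```python
def tails_to_due_dates(tails):
    """Replace delivery times ("tails") q_j by due dates d_j = K - q_j,
    where K is any constant no less than the maximum delivery time."""
    K = max(tails)
    return [K - q for q in tails]

def due_dates_to_tails(due_dates):
    """Inverse direction: delivery times q_j = D - d_j, where D is any
    constant larger than the maximum due date."""
    D = max(due_dates) + 1
    return [D - d for d in due_dates]
```

Note that the most urgent job under one formulation (largest tail) is exactly the most urgent one under the other (smallest due date), which is why the two formulations can be used interchangeably.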
2.2. Description of J‐heuristic
Now, we describe Jackson’s greedy heuristic (J‐heuristic) in detail. The heuristic works over scheduling times: at every scheduling time, the next job is scheduled on the machine. Initially, the earliest scheduling time is set to the minimum job release time. Iteratively, among all jobs released by a given scheduling time, a job with the maximum delivery time is scheduled on the machine (ties might be broken by selecting any longest available job). Once a job completes on the machine, the next scheduling time is set to the maximum between the completion time of that job and the minimum release time of a yet unscheduled job.
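The iterative rule above can be sketched in a few lines. The following illustrative implementation assumes a `(release, processing, delivery)` tuple layout (our assumption, not the chapter’s notation) and uses a heap over the released jobs:

```python
import heapq

def jackson_schedule(jobs):
    """J-heuristic sketch: `jobs` is a list of (release, processing, delivery)
    triples.  Returns the makespan (maximum full completion time, i.e.,
    completion time plus delivery time) and the job order."""
    n = len(jobs)
    order = sorted(range(n), key=lambda j: jobs[j][0])   # jobs by release time
    ready = []                        # max-heap on delivery time (negated)
    t = jobs[order[0]][0]             # earliest scheduling time = min release
    i, makespan, sequence = 0, 0, []
    while i < n or ready:
        while i < n and jobs[order[i]][0] <= t:          # collect released jobs
            heapq.heappush(ready, (-jobs[order[i]][2], order[i]))
            i += 1
        if not ready:                 # unavoidable gap: jump to next release
            t = jobs[order[i]][0]
            continue
        _, j = heapq.heappop(ready)   # released job with the largest tail
        t += jobs[j][1]               # completion time of job j
        makespan = max(makespan, t + jobs[j][2])
        sequence.append(j)
    return makespan, sequence
```

For example, on the two jobs `(0, 2, 5)` and `(1, 1, 10)`, the heuristic starts the first (only released) job at time 0, so the urgent second job waits and the makespan is 13, while an optimal schedule achieves 12.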
Since the heuristic always schedules an earliest released job whenever the machine would otherwise become idle, it creates no gap that can be avoided. The time complexity of the heuristic is O(n log n) since, at each of the n scheduling times, the search for a maximal element in an ordered list is carried out.
The heuristic is easily extendable to multiprocessor and preemptive scheduling problems with release and delivery (due) times. In the identical parallel‐processor case, a ready job with the largest tail (or smallest due date) is repeatedly determined and is scheduled on the processor with the minimal completion time, ties being broken by selecting the processor with the minimal index. For the sake of conciseness, we below refer to that processor as the active processor.
find a released job with the largest delivery time and schedule it at the current scheduling time on the active processor;
update the current active processor and set the next scheduling time to the maximum between the completion time of that processor and the minimal release time of a yet unscheduled job.
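The two steps above can be sketched as follows; a minimal illustration assuming, as before, a `(release, processing, delivery)` tuple layout. The active processor is kept in a min‐heap keyed by completion time (ties broken by processor index, matching the rule above):

```python
import heapq

def jackson_parallel(jobs, m):
    """Multiprocessor J-heuristic sketch: the ready job with the largest
    delivery time goes on the active processor, i.e., the one with the
    minimal completion time (ties by index).  Returns the makespan."""
    by_release = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    ready, i, makespan = [], 0, 0
    procs = [(0, p) for p in range(m)]      # (completion_time, index) min-heap
    heapq.heapify(procs)
    while i < len(jobs) or ready:
        t, p = heapq.heappop(procs)         # active processor
        if not ready and i < len(jobs):     # idle: jump to the next release
            t = max(t, jobs[by_release[i]][0])
        while i < len(jobs) and jobs[by_release[i]][0] <= t:
            j = by_release[i]
            heapq.heappush(ready, (-jobs[j][2], j))   # released jobs by tail
            i += 1
        _, j = heapq.heappop(ready)         # largest-tail ready job
        finish = t + jobs[j][1]
        makespan = max(makespan, finish + jobs[j][2])
        heapq.heappush(procs, (finish, p))
    return makespan
```

With `m = 1` this reduces to the single‐machine heuristic.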
We illustrate a 3‐processor J‐schedule in Figure 1 for eight jobs with the parameters specified in the following table:
As can be seen in Figure 1, J‐heuristic creates idle‐time intervals (gaps) on all three processors, constructing an optimal schedule with makespan 54. Note that job 7, scheduled on processor 2, realizes the maximum objective value 54 (we call such a job the overflow job, as defined a bit later): the completion time of processor 2 is 24 (that of job 7), whereas the full completion time of job 7 is 54.
For the preemptive version of J‐heuristic, every currently executed job is interrupted at the earliest time moment when a more urgent job gets released. It is well known, and also easily seen, that the preemptive version of J‐heuristic delivers an optimal solution for the corresponding preemptive scheduling problem.
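The preemptive rule only changes the single‐machine sketch in one place: a job runs until it either finishes or the next release event occurs, at which point the most urgent released job (possibly the interrupted one) is picked again. A minimal sketch under the same assumed tuple layout:

```python
import heapq

def preemptive_jackson(jobs):
    """Preemptive J-heuristic sketch: the running job is interrupted as soon
    as a more urgent (larger delivery time) job is released.  This rule
    yields an optimal preemptive schedule.  jobs = [(r, p, q), ...]."""
    n = len(jobs)
    order = sorted(range(n), key=lambda j: jobs[j][0])
    rem = [p for _, p, _ in jobs]            # remaining processing times
    ready, i, mk = [], 0, 0
    t = jobs[order[0]][0]
    while i < n or ready:
        while i < n and jobs[order[i]][0] <= t:
            j = order[i]
            heapq.heappush(ready, (-jobs[j][2], j))
            i += 1
        if not ready:                        # gap: jump to the next release
            t = jobs[order[i]][0]
            continue
        q, j = heapq.heappop(ready)          # most urgent released job
        nxt = jobs[order[i]][0] if i < n else float('inf')
        run = min(rem[j], nxt - t)           # run until done or next release
        t += run
        rem[j] -= run
        if rem[j] == 0:
            mk = max(mk, t + jobs[j][2])     # full completion time
        else:
            heapq.heappush(ready, (q, j))    # preempted; back to the queue
    return mk
```

On the two jobs `(0, 2, 5)` and `(1, 1, 10)`, the first job is preempted at time 1, and the preemptive makespan 12 matches the non‐preemptive optimum, illustrating why this value serves as a lower bound.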
2.3. Overview of related heuristics
As mentioned earlier, J‐heuristic turned out to be highly flexible in the sense that its different modifications have been used for the solution of different scheduling problems. Potts has proposed an extension of the heuristic: his algorithm applies J‐heuristic repeatedly, n times, and obtains an improved approximation ratio of 3/2 at the cost of an increase in the time complexity by a factor of n. Hall and Shmoys, also building on J‐heuristic, have developed a 4/3‐approximation polynomial‐time algorithm, with the same time complexity, for the version of our problem with precedence relations. Garey et al. have turned the heuristic into another, more sophisticated heuristic for the feasibility version of this problem with equal‐length jobs (in the feasibility version, job due dates are replaced by deadlines, and a schedule in which all jobs complete by their deadlines is looked for). This result was extended to the version of the problem with two possible job processing times in an algorithm described in . For another relevant criterion, an algorithm that minimizes the number of late jobs with release times on a single machine when job preemptions are allowed was proposed in . Without preemptions, an algorithm for the case when all jobs have equal length was proposed in .
The multiprocessor version of J‐heuristic has been used as a basis for the solution of multiprocessor scheduling problems. For example, for the feasibility version with identical machines and equal‐length jobs, polynomial‐time algorithms were proposed in Simons and in Simons and Warmuth. Using J‐heuristic as a schedule generator, an algorithm for the minimization version of the latter problem, whose running time depends on the maximum job delivery time and an accuracy parameter, was proposed in . With the objective of minimizing the number of late jobs on a group of identical processors, a non‐preemptive algorithm for equal‐length jobs was proposed in .
J‐heuristic can also be efficiently used for the solution of shop scheduling problems. Using J‐heuristic as a schedule generator, McMahon and Florian and Carlier have proposed efficient enumerative algorithms for the single‐machine problem. Grabowski et al. use the heuristic to obtain an initial solution in another enumerative algorithm for the same problem.
The problem naturally arises in job‐shop scheduling as an auxiliary problem for the derivation of strong lower bounds. By ignoring the potential yet unresolved conflicts on all the machines except a selected machine, the corresponding disjunctive graph defines an auxiliary single‐machine instance on that machine, where every task to be performed on it is characterized by an early starting time (defined by the early completion times of its predecessor tasks) that is set as its release time, and by a tail, or delivery time (determined by the processing times of its successor tasks). In multiprocessor job‐shop scheduling problems, a single machine is replaced by a group of parallel machines, and the corresponding multiprocessor version of the problem is derived. For the purpose of a lower bound, the preemptive versions of the above problems with release and delivery times might be considered, and preemptive J‐heuristic can then be applied. For relevant studies on the classical job‐shop scheduling problem, see, for example, Carlier, Carlier and Pinson, Brinkkotter and Brucker, and the more recent works of Gharbi and Labidi and Della Croce and T’kindt; for the multiprocessor job‐shop scheduling problem with identical machines, see Carlier and Pinson. This approach can also be extended to the case when the parallel machines are unrelated.
2.4. Some other applications
Besides the above‐mentioned applications in multiprocessor, shop, and batch scheduling problems, our problem has numerous immediate real‐life applications: various production chains, CPU time sharing in operating systems (jobs being the processes to be executed by the processor), transmission range distribution in wireless sensor networks (where jobs are mobile devices with corresponding ranges that can be modeled as release and due dates), and important transportation problems such as the traveling salesman and vehicle routing problems with time windows. The reader may wish to have a look at reference  for an extensive overview of the variations of the vehicle routing problem, and we also refer to [11, 41] for more recent related work.
Our scheduling problem can be used for the solution of the latter transportation problems. Let us first describe these problems briefly. There are a number of customers or cities and one special location called the depot. The distances between any pair of locations are known. The goods are to be distributed from the depot to the customers using one or more (identical) vehicles. There are certain restrictions on how this distribution should be done that define the set of all feasible solutions to the problem. A general notion is a tour, carried out by a vehicle, that starts at the depot, visits some of the customers, and returns to the depot. All customers must be served, i.e., every customer is to be included in exactly one tour. There may be additional restrictions such as vehicle capacity constraints and customer requests. Another basic constraint, which relates these transportation problems to our scheduling problem, is that every customer can be served only within a certain time interval, whereas there is also a valid time interval given for the depot.
A common objective is to minimize the total service/travel time of all the vehicles. Whereas in the basic setting it is straightforward to find a feasible solution, with time windows this task is not obvious; in fact, there may exist no feasible solution. If one exists, then one aims to minimize the number of used vehicles and then construct the corresponding number of tours with the objective of minimizing the total service time.
Associating with every customer and the depot a unique job, and with the corresponding time window the release and due dates of that job, we arrive at a corresponding scheduling problem. Let us consider the feasibility version of this problem (in which a solution with no positive lateness is looked for). Note that if there is no feasible solution to that feasibility version, then there exists no feasible solution to the corresponding vehicle routing problem with a single vehicle. Then, we may consider the feasibility problem with two identical machines, and so on with more identical machines, until a feasible solution is found. We may instead use a binary search when an upper limit on the maximum number of vehicles is known (otherwise, we set this limit to a sufficiently large magnitude). In case a feasible solution exists, once the minimum number of machines is found, the corresponding tours minimizing the total travel time might be constructed.
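The machine‐count search just described can be sketched as a standard binary search. Here `is_feasible` is a hypothetical black‐box feasibility test (e.g., a solver for the m‐machine feasibility version), assumed monotone in the number of machines (feasibility for m implies feasibility for m + 1):

```python
def min_machines(is_feasible, m_max):
    """Binary search for the smallest m in [1, m_max] with is_feasible(m)
    True; returns None when even m_max machines (vehicles) do not suffice."""
    if not is_feasible(m_max):
        return None                    # no feasible solution at all
    lo, hi = 1, m_max
    while lo < hi:
        mid = (lo + hi) // 2
        if is_feasible(mid):
            hi = mid                   # feasible: the answer is mid or smaller
        else:
            lo = mid + 1               # infeasible: need more machines
    return lo
```

This replaces the linear scan over m = 1, 2, … by a logarithmic number of feasibility tests.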
3. The structure of J‐schedules
The previous section’s brief survey clearly indicates the importance of our scheduling problem, as well as the power and flexibility of J‐heuristic. When the direct application of J‐heuristic is concerned and the solution quality is important, the worst‐case bound of two may not be acceptable. Besides, J‐heuristic may fail to solve the feasibility version of our problem even though a feasible solution with no positive maximum lateness exists. To this end, two relevant points deserve mentioning. On the one hand, the practical behavior of the heuristic might be essentially better than this worst‐case estimation. On the other hand, by carrying out a structural analysis of J‐schedules, it is possible to obtain a better, more flexible worst‐case bound, as we show in the next section. In this section, we introduce some basic concepts that will help us in this analysis.
Let us denote by the initial J‐schedule the schedule obtained by the application of J‐heuristic to the originally given problem instance (as we will see later, this heuristic can also be beneficially applied to some other, derived problem instances). This schedule, and, in general, any J‐schedule, may contain a gap, i.e., an idle‐time interval on the machine.
Let us call a block a maximal consecutive part of a J‐schedule consisting of jobs successively scheduled without any gap in between.
Now, we give some necessary concepts from  that will help us to expose useful structural properties of the J‐schedules.
Given a J‐schedule, consider a job that realizes the maximum job lateness in it, and let, further, B be the block that contains that job. Among all the jobs in B with this property, the latest scheduled one is called an overflow job. The kernel associated with an overflow job is the maximal sequence of jobs scheduled in B ending with that overflow job such that no job in the sequence has a delivery time smaller than that of the overflow job.
It follows that every kernel is contained in some block, and the number of kernels equals the number of overflow jobs. Furthermore, since any kernel belongs to a single block, it may contain no gap.
If the schedule is not optimal, there must exist a job, less urgent than the kernel jobs, scheduled before all jobs of the kernel, that delays them (see Lemma 1 a bit later). By rescheduling such a job to a later time moment, the jobs of the kernel can be restarted earlier. We need some extra definitions to describe this operation formally.
Suppose one job precedes another job in a J‐schedule. We will say that the former job pushes the latter one if removing the former job would allow the latter one to be restarted earlier.
Since the earliest scheduled job of a kernel does not start at its release time (see Lemma 1 below), it is immediately preceded and pushed by a job with a smaller delivery time. In general, we may have more than one such job scheduled before the kernel in its block. We call such a job an emerging job, and the latest scheduled one, which immediately precedes the kernel, the live emerging job.
Now, we can give some optimality conditions for J‐schedules. The proofs of Lemmas 1 and 2 can be found in references [33, 39], respectively. Lemma 4 is obtained as a consequence of Lemmas 1 and 2, though the worst‐case bound of two was also known earlier.
From Lemma 1, we obtain the following corollary:
Observe that the conditions of Lemma 1 and Corollary 1 are satisfied for our first problem instance of Figure 1. We also illustrate the above‐introduced definitions on a (1‐processor) problem instance with 11 jobs: job 1 has release time 0, processing time 100, and delivery time 0. All the rest of the jobs are released at time moment 10, have the equal processing time 1, and the delivery time 100. These data completely define our problem instance.
The initial J‐schedule consists of a single block, in which the jobs are included in the increasing order of their indices. The earliest scheduled job 1 is the live emerging job, which is followed by jobs 2–11 scheduled in this order. It is easy to see that the latter jobs form the kernel. Indeed, all the 11 jobs belong to the same block, job 1 pushes the following jobs, and its delivery time is less than that of these pushed jobs. Hence, job 1 is the live emerging job. The overflow job is job 11, since it realizes the maximum full completion time (the makespan) of the schedule, which is 110 + 100 = 210. Therefore, jobs 2–11 form the kernel.
Note that the condition in Lemma 1 is not satisfied for this schedule, as its kernel starts at time 100, which is more than the minimum release time 10 of the kernel jobs. Furthermore, the condition of Corollary 1 is also not satisfied, and the schedule is not optimal. The optimal schedule has makespan 120; in it, the live emerging job 1 is rescheduled behind all kernel jobs.
From here on, we use for the makespan (maximum full job completion time) of J‐schedule and (, respectively) for the optimum makespan (lateness, respectively).
For our problem instance and the corresponding J‐schedule, the above bound is almost reached. Indeed, the heuristic makespan is 210, whereas the optimum is 120 (a ratio of 1.75, close to the worst‐case bound of 2).
Note that Lemma 2 implicitly defines a lower bound of derived from the solution of the non‐preemptive J‐heuristic, which can further be strengthened using the following concept. Let the delay for kernel , be ((, respectively) stands again for the live emerging (overflow, respectively) job for kernel ). Then, the next lemma follows from the observation that is another (more accurate than ) estimation for the delay of the earliest scheduled job of kernel .
Proof. If there exists no live emerging job for , then is optimal by Corollary 1. Suppose job exists; clearly, (as has to be scheduled in and there is at least one more (kernel) job in it). Then, by Lemma 2,
4. Refining J‐heuristic’s worst‐case estimation
From Lemma 2 of the previous section, we may see that the quality of the solution delivered by J‐heuristic is related to the maximum job processing time in a given problem instance. If a long job turns out to be the live emerging job, then the corresponding forced delay of the successively scheduled kernel jobs clearly affects the solution quality. We may express this magnitude as a fraction of the optimal objective value and derive a more accurate approximation ratio. It might be possible to deduce this kind of relationship a priori with good accuracy. Take, for instance, a large‐scale production process where the processing time of an individual job is small compared to the estimated total production time.
If this kind of prediction is not possible, we can use the bound from Lemma 3 by a single application of J‐heuristic and represent as its fraction (instead of representing it as a fraction of an unknown optimal objective value). Then, we can give an explicit expression of the heuristic’s approximation ratio in terms of that fraction. As we will see, J‐heuristic will always deliver a solution within a factor of of the optimum objective value. Alternatively, we may use a lower bound on the optimal objective value from Lemma 3 (as may not be known). Let be such that , i.e., . Since is a lower bound on (), we let , and thus we have that , i.e., is a valid assignment. Then, note that for any problem instance, can be obtained in time .
Proof. By Lemma 2,
In the previous section’s example, we had a very long live emerging job that essentially contributed to the makespan of the J‐schedule. The resultant J‐schedule gave an almost worst‐possible performance ratio from Lemma 4, because the processing time of the emerging job was comparable to the optimal makespan. We now illustrate the advantage of the estimation from Theorem 1. Consider a slightly modified instance in which the emerging job remains moderately long (a more typical average scenario): the emerging job 1 now has processing time 10, release time 0, and delivery time 0. The processing time of the rest of the jobs is again 1; these jobs are released at time 5 and have the same delivery times as in the first instance. The J‐schedule has makespan 120. The lower bound on the optimum makespan defined by Lemma 2 is hence 120 − 10 = 110.
The approximation ratio provided by J‐heuristic for this problem instance can be obtained from Theorem 1. Based on Theorem 1, we use the lower bound 115 on the optimum and obtain a valid parameter value 115/10 = 11.5 and the resultant approximation ratio 1 + 1/11.5. Observe that, for our second (average) problem instance, J‐heuristic gave an almost optimal solution.
5. An adaptive 3/2‐approximation heuristic
In Section 2, we mentioned two heuristic algorithms from references [19, 27] solving our generic problem with the approximation ratios 3/2 and 4/3, respectively. In the previous section, we described a more flexible approximation ratio that is obtained by a single application of J‐heuristic.
In this section, we propose a heuristic that gives the approximation ratio 3/2 for a large class of problem instances, which we will specify a bit later. This algorithm, unlike the above‐mentioned algorithms, carries out a constant number of calls to J‐heuristic (thus yielding the same time complexity as J‐heuristic). Recall that the initial J‐schedule is obtained by a call of J‐heuristic for the originally given problem instance. By slightly modifying this instance, we may create alternative J‐schedules with some desired property. With the intention of improving the initial J‐schedule, and more generally any J‐schedule, the jobs of its kernel can be restarted earlier.
To this end, we activate the live emerging job for the kernel: we force that job to be scheduled after all jobs of the kernel and apply J‐heuristic to the correspondingly modified instance.
We call the resultant J‐schedule a complementary schedule.
Our heuristic first creates the initial J‐schedule, determines its kernel, and verifies whether there exists a live emerging job; if there is none, the schedule is optimal (Corollary 1). Otherwise, it creates one or more complementary schedules. The first of these is obtained by activating the live emerging job for the kernel of the initial schedule. If that job remains an emerging job in this schedule, then the second complementary schedule, obtained from the first one by activating the job for its kernel, is created. This operation is repeatedly applied as long as the newly arisen overflow job, that is, the overflow job in the latest created complementary schedule, is released within the execution interval of the activated job in the initial schedule. The algorithm halts when either the activated job is not an emerging job in the newly created complementary schedule or the overflow job in that schedule is released behind the execution interval of the activated job in the initial schedule. Then, the heuristic determines the best objective value among the constructed J‐schedules and halts.
Proof. In an optimal schedule, either (1) the live emerging job remains scheduled before the overflow job of the initial schedule (and hence before all jobs of its kernel) or (2) it is scheduled after the overflow job (and hence after the kernel).
Let be the set of emerging jobs in schedule not including the live emerging job . In case (1), either is already optimal or otherwise , and some job(s) from set are scheduled after kernel in an optimal schedule (so that job and the jobs in are rescheduled, respectively, earlier). Let be the total processing time of jobs in . Since job stays before kernel , (this can be seen similarly as Lemma 2). Let (real) be such that . Since schedule contains jobs of set and job , . We have
Hence, if (i.e., ), then .
Suppose now . Then, and using again Lemma 2
It remains to consider case (2), when job is scheduled after (all jobs from) kernel in schedule . We claim that schedule is again “long enough,” i.e., . Indeed, consider the J‐schedule . If is not optimal, then there exists an emerging job in . Similarly as above, in schedule , either (2.1) job remains before or (2.2) is scheduled after .
In case (2.2), must be an emerging job in . If the overflow job in kernel is released after time moment , then as job is scheduled after the jobs in in schedule . Otherwise, suppose the overflow job in schedule is released within the time interval (note that it cannot be released at time 0, as otherwise it would have originally been included ahead of job by J‐heuristic). Without loss of generality and for the purpose of this proof, assume is an emerging job in schedule , as otherwise the latter schedule already gives the desired approximation, similarly as in case (1). For the same reason, either schedule gives the desired approximation or otherwise job remains an emerging job (now, ), and the heuristic creates the next complementary schedule . We repeatedly apply the same reasoning to the subsequently created complementary schedules as long as the overflow job in the latest created such schedule is released within the time interval . Once the latter condition is not satisfied, job will be started at a time moment larger than in the corresponding complementary schedule. Hence, its length will be at least . Moreover, an optimal schedule must be at least as long, unless one of the earlier created complementary schedules is optimal. Hence, one of the generated complementary schedules gives the desired approximation.
In case (2.1), if there is no emerging job in , then we are done. Otherwise, let be the set of emerging jobs in not including job . Then similarly as for case (1), there are two sub‐cases and , in each of which a desired approximation is reached. The theorem is proved.
As to the time complexity of the modified heuristic, note that, in the worst case, the overflow job in every created complementary schedule is released no later than at time . Then, the algorithm may create up to complementary schedules, and its time complexity will be the same as that of the earlier‐mentioned algorithms. However, it is very unlikely that, in a practical instance, an “unlimited” number of jobs are released before time (that would be a highly restricted instance). On average, we would normally expect a constant number of such jobs, which, more restrictively, must be overflow jobs in the created complementary schedules (not belonging to the initial kernel). In this case, our heuristic will obviously run in the same time as J‐heuristic. We have proved the following result:
We have described efficient heuristic methods for the solution of a strongly NP‐hard scheduling problem with release and delivery times.
We believe that J‐schedules possess further useful yet undiscovered properties that may lead to new insights into the structure of the related problems with release and delivery times. This kind of study was reported in recently published proceedings [8, 9] for the case of a single processor and two allowable job release and delivery times. It still remains open whether the basic properties described in these works can be generalized to a constant number of job release and delivery times and to the multiprocessor case. At the same time, other yet unstudied properties may exist even for a single processor and two allowable job release and delivery times. The importance of such a study is emphasized by the fact that the basic single‐machine scheduling problem is strongly NP‐hard.