## Abstract

In this chapter, we describe efficient heuristics for scheduling jobs with release and delivery times with the objective of minimizing the maximum job completion time. These heuristics are essentially based on Jackson's extended heuristic, which is commonly used in scheduling theory. We present basic structural properties of the solutions delivered by Jackson's heuristic and then illustrate how one can exploit them to build efficient heuristics.

### Keywords

- combinatorial optimization
- heuristic algorithm
- scheduling theory
- time complexity
- approximation algorithm

## 1. Introduction

*Combinatorial optimization* problems emerged in the late 1940s due to rapid industrial growth and the newly arisen demand for efficient solution methods. Modeled in mathematical language, a combinatorial optimization problem has a finite set of so‐called *feasible solutions*; this set is determined by a set of restrictions that naturally arise in practice. Usually, there is an objective function whose domain is the latter set. One aims to determine a feasible solution that gives an extremal (minimal or maximal) value to the objective function, the so‐called *optimal solution*. Since the number of feasible solutions is finite, theoretically, finding an optimal solution is trivial: just enumerate all the feasible solutions, calculate the objective value of each of them, and select any one with the optimal objective value. The main issue here is that a brute‐force enumeration of all feasible solutions might be impossible in practice.

There are two distinct classes of combinatorial optimization problems: the class *P* and the class of *NP*‐hard problems. For a problem from class *P*, there exists an exact algorithm whose running time is polynomial in the *size* of the problem. No such algorithm is known for an *NP*‐hard problem. The number of feasible solutions of an *NP*‐hard optimization problem grows exponentially with the size of the input (which, informally, is the amount of computer memory necessary to represent the problem data/parameters). Furthermore, all *NP*‐hard problems have a similar *computational* (*time*) *complexity*, in the sense that if an efficient polynomial‐time algorithm were found for any of them, it would yield a polynomial‐time algorithm for every other *NP*‐hard problem. On the positive side, the *NP*‐hard problems considered here belong to the *class NP*, which guarantees that any feasible solution can be constructed and verified in polynomial time. It is believed to be very unlikely that an *NP*‐hard problem can be solved in polynomial time (whereas an exact polynomial‐time algorithm with a reasonable real‐time behavior exists for a problem in class *P*).

Thus, approximation algorithms are most frequently used for the solution of *NP*‐hard problems. A *greedy* algorithm creates a single feasible solution by iteratively taking the next "most promising" decision until a complete solution is built. These decisions are normally taken in low‐degree polynomial or constant time. Since the total number of iterations equals the number of objects in a given problem instance, the overall time complexity of a greedy algorithm is low. Likewise, *heuristic* algorithms reduce the search space, creating one or more feasible solutions in polynomial time. Greedy and heuristic algorithms are the simplest approximation algorithms, and it is easy to construct such an algorithm for both polynomially solvable and *NP*‐hard problems. A greedy algorithm may deliver an optimal solution to a problem from class *P*, but this is unlikely for an *NP*‐hard problem, since it reduces the potential search space by taking a unique decision for the extension of the current partial solution at each iteration. The quality of an approximation algorithm is commonly measured by its *performance ratio*: the ratio of the objective value of the worst solution that the algorithm may deliver to the optimal objective value (a real number greater than 1).

*Scheduling problems* are important combinatorial optimization problems. A given set of requests called *jobs* are to be performed (scheduled) on a finite set of resources called *machines* (or *processors*). The objective is to determine the processing order of the jobs on the machines so as to minimize or maximize a given objective function. Scheduling problems have a wide range of applications, from production processes to computer system optimization.

Simple greedy heuristics that use some priority dispatching rules for taking the decisions can be easily constructed and adapted for scheduling problems. An obvious advantage of such heuristics is their rapidness, and an obvious disadvantage is a poor solution quality. The generation of a better solution needs more computational and algorithmic effort. A *global search* in the feasible solution space guarantees an optimal solution, but it can take inadmissible computational time. A *local* (*neighborhood*) *search* takes reasonable computational time, and the solution it gives is locally best (i.e., best among all considered neighbor solutions). Simulated annealing, tabu search, genetic algorithms, and beam search are examples of local search algorithms (see, for example, [16, 22, 23, 25]). These algorithms reduce the search space, and at the same time, their search is less restricted than that of simple heuristic (dispatching) algorithms, giving, in general, better quality solutions than simple greedy algorithms. Global search methods include (exact) implicit enumerative algorithms and also approximation algorithms with embedded heuristic rules and strategies (see, for example, [1, 20, 29, 38, 4]). Normally, global search algorithms provide solutions of better quality than local search algorithms, but they also take more computer time.

This chapter deals with one of the most widely used greedy heuristics in scheduling theory. The generic heuristic for scheduling jobs with release and delivery times on a single machine to minimize the maximum job completion time is named after Jackson [21] (the heuristic was originally proposed for the version without release times and was then extended to the problem with release times by Schrage [30]). Jackson's heuristic (J‐heuristic, for short) works iteratively: at each scheduling time, that is, at each time moment when the machine is idle and at least one unscheduled job is released, it schedules a released job with the largest delivery time.

In this chapter, we give a brief overview of heuristic algorithms that are essentially based on J‐heuristic. Then, we analyze the schedules created by J‐heuristic (J‐schedules), showing their beneficial structural properties. They are helpful for a closer study of the related problems, which, in turn, may lead to better solution methods. We illustrate how the deduced structural properties can be beneficially used by building an adaptive heuristic algorithm for our generic scheduling problem with a more flexible worst‐case performance ratio than that of J‐heuristic.

Section 2, which consists of four subsections, first describes our basic scheduling problem, Jackson's heuristic, other related heuristics, and real‐life applications of the scheduling problem. In Section 3, we study the basic structural properties of the schedules constructed by Jackson's heuristic. In Section 4, we derive a flexible worst‐case performance estimation for the heuristic, and in Section 5, we construct a new adaptive heuristic based on Jackson's heuristic. Section 6 concludes the chapter with final remarks.

## 2. Preliminaries

### 2.1. Problem formulation

Our generic single‐machine scheduling problem can be described as follows. There are n jobs to be processed on a single machine, which can handle at most one job at a time. Each job j has a *release time* r_j, before which it cannot be started, a processing time p_j, and a *delivery time* q_j: once job j completes on the machine at some time c_j, it reaches its *full completion* at time c_j + q_j (the jobs are delivered by an independent unit, and this takes no machine time). We wish to minimize the maximum job full completion time.

The problem is known to be strongly *NP*‐hard (Garey and Johnson [13]). According to the conventional three‐field notation introduced by Graham et al. [18], the above problem is abbreviated as 1|r_j, q_j|C_max.

The problem has an equivalent formulation in which, instead of delivery times, jobs are assigned *due dates* d_j, and the maximum job *lateness*, i.e., the maximum over all jobs of the difference c_j − d_j, is to be minimized; in the three‐field notation, this version is written 1|r_j|L_max.

We may note that, besides job lateness, there are other due‐date‐oriented objective criteria. A common one is *the number of late jobs*, where a job is late if it completes behind its due date; here, the number of late jobs is to be minimized. In the *feasibility* version of the problem, one looks for a schedule with no late job. Obviously, if in an optimal solution of the minimization version the maximum job lateness is no more than 0, then it is a feasible solution of the feasibility version as well; otherwise (the maximum job lateness is positive), there exists no feasible solution to the feasibility version. Vice versa, an algorithm for the feasibility problem can be used to solve the minimization version: we iteratively increase the due dates of all jobs until we find a feasible schedule with the modified due dates. Note that the min‐max job lateness obtained in this way depends on the maximum job processing time.
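The reduction just described, from the minimization version to repeated feasibility tests, can be sketched as follows. The function names, the `(r_j, p_j, d_j)` tuple encoding, and the EDD‐based check (used here only as a toy stand‐in for a real feasibility solver, since the exact nonpreemptive feasibility problem is *NP*‐hard) are our own assumptions, not the chapter's.

```python
def minimize_lateness(jobs, feasible, limit=100):
    """Reduce min-max lateness to repeated feasibility tests (a sketch):
    shift every due date by a common offset L until `feasible` succeeds;
    the smallest such L is then the minimum maximum lateness.
    jobs: list of (r_j, p_j, d_j); `feasible` is an assumed black box."""
    for L in range(limit):
        shifted = [(r, p, d + L) for (r, p, d) in jobs]
        if feasible(shifted):
            return L
    return None

def edd_feasible(jobs):
    """Toy feasibility check: schedule in earliest-due-date order and
    verify that no job completes after its due date."""
    t = 0
    for r, p, d in sorted(jobs, key=lambda job: job[2]):
        t = max(t, r) + p        # wait for release, then process
        if t > d:
            return False
    return True

# With due dates shifted by 1, both jobs meet their (shifted) due dates.
print(minimize_lateness([(0, 2, 1), (0, 3, 4)], edd_feasible))  # → 1
```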

Given an instance of the version with delivery times, an equivalent instance of the version with due dates is obtained by setting every job's due date to a common large constant minus its delivery time (and vice versa); a schedule is optimal for one instance if and only if it is optimal for the other.
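A minimal sketch of this standard transformation between delivery times and due dates, assuming the usual notation d_j = K − q_j for a constant K at least as large as the maximum delivery time (function names are our own):

```python
def delivery_to_due(q, K=None):
    """d_j = K - q_j turns delivery times into due dates; any constant
    K >= max q_j works, and optimal schedules are preserved."""
    K = max(q) if K is None else K
    return [K - qj for qj in q], K

def due_to_delivery(d, K=None):
    """The reverse transformation: q_j = K - d_j."""
    K = max(d) if K is None else K
    return [K - dj for dj in d], K

q = [3, 0, 5]
d, K = delivery_to_due(q)        # K = 5, d = [2, 5, 0]
q_back, _ = due_to_delivery(d, K)
print(q_back)                    # → [3, 0, 5]
```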

### 2.2. Description of J‐heuristic

Now, we describe Jackson's greedy heuristic (J‐heuristic) in detail. It works as follows: whenever the machine becomes idle at some time moment, among the released and not yet scheduled jobs, it schedules one with the largest delivery time (ties may be broken arbitrarily); if no released unscheduled job exists, it jumps to the nearest release time of an unscheduled job.

Since the heuristic schedules some released job every time the machine becomes idle, it creates no gap that can be avoided. The time complexity of the heuristic is O(n log n): the released jobs are kept in a priority queue ordered by their delivery times, and each job is inserted into and extracted from this queue once.
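As a concrete illustration of the rule just described, a compact implementation might look as follows. The function name and the `(r_j, p_j, q_j)` tuple encoding are our own conventions; the priority queue keyed by delivery time realizes the O(n log n) bound.

```python
import heapq

def jackson_schedule(jobs):
    """Jackson's (extended) heuristic for one machine.
    jobs: list of (r_j, p_j, q_j). Returns (makespan, scheduling order)."""
    by_release = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    ready = []                    # max-heap on delivery time: (-q_j, j)
    t = makespan = i = 0
    order = []
    while i < len(jobs) or ready:
        # if no released job is waiting, jump to the next release time
        if not ready and t < jobs[by_release[i]][0]:
            t = jobs[by_release[i]][0]
        # release every job available by the current time t
        while i < len(jobs) and jobs[by_release[i]][0] <= t:
            j = by_release[i]
            heapq.heappush(ready, (-jobs[j][2], j))
            i += 1
        _, j = heapq.heappop(ready)       # largest delivery time first
        t += jobs[j][1]                   # job j completes at time t
        makespan = max(makespan, t + jobs[j][2])   # full completion time
        order.append(j)
    return makespan, order

print(jackson_schedule([(0, 2, 5), (1, 1, 7), (3, 2, 0)]))  # → (10, [0, 1, 2])
```

On this toy instance the heuristic yields makespan 10, whereas starting with the second job would give 9, illustrating that the J‐schedule need not be optimal.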

The heuristic is easily extendable to multiprocessor and preemptive scheduling problems with release and delivery (due) times. For the multiprocessor version, at every scheduling step the next job is assigned to the processor that becomes idle the earliest; we call that processor the *active* one.

**Multiprocessor J‐heuristic**

**while** there are unscheduled jobs **do**

**begin**

find the job with the largest delivery time among the unscheduled jobs released by the current scheduling time, and schedule it on the current active processor;

update the current active processor and set the next scheduling time

**end**
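A direct transcription of the pseudocode above into executable form might look as follows (a sketch; the function name and the `(r_j, p_j, q_j)` tuple encoding are our own). The active processor is maintained in a min‐heap keyed by idle time.

```python
import heapq

def multiprocessor_jackson(jobs, m):
    """Multiprocessor J-heuristic sketch: jobs = [(r_j, p_j, q_j)],
    m identical processors. Returns (makespan, job -> processor map)."""
    by_release = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    procs = [(0, k) for k in range(m)]     # min-heap: (idle time, processor)
    heapq.heapify(procs)
    ready, i, makespan = [], 0, 0
    assignment = {}
    while i < len(jobs) or ready:
        t, k = heapq.heappop(procs)        # the active processor
        if not ready and t < jobs[by_release[i]][0]:
            t = jobs[by_release[i]][0]     # gap: wait for the next release
        while i < len(jobs) and jobs[by_release[i]][0] <= t:
            j = by_release[i]
            heapq.heappush(ready, (-jobs[j][2], j))   # max-heap on q_j
            i += 1
        _, j = heapq.heappop(ready)
        t += jobs[j][1]
        makespan = max(makespan, t + jobs[j][2])
        assignment[j] = k
        heapq.heappush(procs, (t, k))
    return makespan, assignment

jobs = [(0, 2, 5), (1, 1, 7), (3, 2, 0)]
print(multiprocessor_jackson(jobs, 2)[0])  # → 9
```

With m = 1 the procedure reduces to the single‐machine heuristic (makespan 10 on this instance).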

We illustrate a 3‐processor J‐schedule in Figure 1 for eight jobs with the parameters specified in the following table:

As can be seen in Figure 1, J‐heuristic creates idle time intervals (gaps) on all three processors while constructing an optimal schedule with makespan 54. Note that job 7 realizes the maximum objective value 54 when scheduled on processor 2 (we call such a job the *overflow job*, as defined a bit later); the completion time of job 7 on processor 2 is 24, whereas its full completion time is 54.

For the preemptive version of J‐heuristic, the currently executed job is interrupted at the earliest time moment at which a more urgent job (one with a larger delivery time) gets released. It is well known, and also easily seen, that the preemptive version of J‐heuristic delivers an optimal solution to the corresponding preemptive scheduling problem.

### 2.3. Overview of related heuristics

As mentioned earlier, J‐heuristic turned out to be highly flexible in the sense that its different modifications have been used for the solution of different scheduling problems. Potts [27] has proposed an extension of the heuristic: his algorithm repeatedly applies J‐heuristic O(n) times and achieves an improved approximation ratio of 3/2.

Multiprocessor versions of J‐heuristic have been used as a basis for the solution of multiprocessor scheduling problems, for example, for the feasibility version of the problem with job release times and due dates on identical processors.

J‐heuristic can also be efficiently used for the solution of shop scheduling problems. Using J‐heuristic as a schedule generator, McMahon and Florian [24] and Carlier [5] have proposed efficient enumerative algorithms for the basic single‐machine problem, which is commonly used as a relaxation in shop scheduling.

The problem

J‐heuristic can also be useful for parallelizing the computations in scheduling job shop [26] and also for the parallel batch scheduling problems with release times [10].

### 2.4. Some other applications

Besides the above‐mentioned applications in multiprocessor, shop, and batch scheduling problems, our problem has numerous immediate real‐life applications in various production chains, CPU time sharing in operating systems (jobs being the processes to be executed by the processor), wireless sensor network transmission range distribution (where jobs are mobile devices with their corresponding ranges that can be modeled as release and due dates), and important transportation problems such as the traveling salesman and vehicle routing problems with time windows. The reader may wish to have a look at reference [28] for an extensive overview of the variations of the vehicle routing problem, and we also refer to [11, 41] for more recent related work.

Our scheduling problem can be used for the solution of the latter transportation problems. Let us first describe these problems briefly. There are, say, m vehicles that must serve a set of customers; each tour starts and ends at a depot, and, in the version with time windows, every customer must be visited within its specified time window.

A common objective is to minimize the total service/travel time of all the vehicles. Whereas in the basic setting it is straightforward to find a feasible solution, with time windows this task is not obvious; in fact, there may exist no feasible solution. If one exists, then one aims to minimize the number of vehicles used and then construct the corresponding number of tours with the objective of minimizing the total service time.

Associating with every customer and with the depot a unique job, and with the corresponding time window the release and due dates of that job, we arrive at a corresponding scheduling problem: an instance of the single‐machine problem with release times and due dates.

## 3. The structure of J‐schedules

The previous section's brief survey clearly indicates the importance of our scheduling problem, as well as the power and flexibility of J‐heuristic. Whenever the direct application of J‐heuristic for the solution of the problem is concerned and the solution quality is important, the worst‐case bound of two may not be acceptable. Besides, J‐heuristic may fail to solve the feasibility version of our problem even though a feasible solution with no positive maximum lateness may exist. To this end, there are two relevant points that deserve mentioning. On the one hand, the practical behavior of the heuristic might be essentially better than this worst‐case estimation [40]. On the other hand, by carrying out a structural analysis of J‐schedules, it is possible to obtain a better, more flexible worst‐case bound, as we show in the next section. In this section, we introduce some basic concepts that will help us in this analysis.

Let us call a *gap* a maximal consecutive time interval in which the machine is idle. For convenience, we shall assume that a 0‐length gap occurs at the beginning of the schedule.

Let us call a *block* a maximal consecutive part of a J‐schedule consisting of successively scheduled jobs without any gap in between (preceded and succeeded by a gap).

Now, we give some necessary concepts from [33] that will help us to expose useful structural properties of the J‐schedules.

Given a J‐schedule, we call a job that realizes the maximum full completion time (the maximum lateness) in it the *overflow job* of that schedule.

A *kernel* in a J‐schedule is a maximal sequence of successively scheduled jobs ending with an overflow job, such that no job in this sequence has a delivery time smaller than that of the overflow job.

It follows that every kernel is contained in some block of the J‐schedule.
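Under the definitions above, the overflow job and its kernel can be located mechanically. The following sketch assumes our own representation of a J‐schedule as `(job, start, completion)` triples and of jobs as `(r_j, p_j, q_j)` tuples:

```python
def overflow_and_kernel(schedule, jobs):
    """Find the overflow job and its kernel in a J-schedule.
    schedule: list of (j, start, completion) in scheduling order;
    jobs[j] = (r_j, p_j, q_j). The kernel is the maximal consecutive
    segment ending with the overflow job in which no job has a smaller
    delivery time than the overflow job and no gap occurs inside."""
    o_idx = max(range(len(schedule)),
                key=lambda i: schedule[i][2] + jobs[schedule[i][0]][2])
    q_o = jobs[schedule[o_idx][0]][2]
    start = o_idx
    while start > 0:
        j_prev, _, c_prev = schedule[start - 1]
        # stop at a job with smaller delivery time, or at a gap
        if jobs[j_prev][2] < q_o or c_prev < schedule[start][1]:
            break
        start -= 1
    return schedule[o_idx][0], [j for j, _, _ in schedule[start:o_idx + 1]]

# Jobs (r, p, q); the J-schedule runs them back to back as 0, 1, 2.
jobs = [(0, 3, 1), (1, 2, 6), (2, 2, 6)]
sched = [(0, 0, 3), (1, 3, 5), (2, 5, 7)]
print(overflow_and_kernel(sched, jobs))  # → (2, [1, 2])
```

Here job 2 (full completion 13) is the overflow job, and jobs 1 and 2 form the kernel, while job 0 (with the smaller delivery time 1) is excluded, i.e., job 0 plays the role of an emerging job.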

If schedule

Suppose a job scheduled before a kernel delays the start of the kernel jobs; then we say that this job *pushes* the kernel in the schedule.

Since the earliest scheduled job of a kernel can be delayed only by an earlier scheduled job with a smaller delivery time, we call any job scheduled before the kernel whose delivery time is smaller than that of the kernel jobs an *emerging job* for that kernel; the latest scheduled emerging job, the one that pushes the kernel, is the *live* emerging job.

Now, we can give some optimality conditions for J‐schedules. The proofs of Lemmas 1 and 2 can be found in references [33, 39], respectively. Lemma 4 is obtained as a consequence of Lemmas 1 and 2, though the worst‐case bound of two was also known earlier.

**Lemma 1**. *The maximum job lateness (the makespan) of a kernel cannot be reduced if the earliest scheduled job in that kernel starts at its release time. Hence, if a J‐schedule contains a kernel with this property, it is optimal.*

From Lemma 1, we obtain the following corollary:

**Corollary 1**. *If a J‐schedule contains a kernel with no live emerging job, then it is optimal.*

Observe that the conditions of Lemma 1 and Corollary 1 are satisfied for our first problem instance of Figure 1. We also illustrate the above‐introduced definitions on a (1‐processor) problem instance.

The initial J‐schedule for this instance is constructed first.

Note that the condition in Lemma 1 is not satisfied for this schedule, and it is not optimal.

From here on, we use

**Lemma 2.** *The makespan of a J‐schedule exceeds the optimal makespan by less than the processing time of the live emerging job for its kernel.*

For our problem instance and the corresponding schedule

Note that Lemma 2 implicitly defines a lower bound on the optimum: the makespan of the J‐schedule minus the processing time of the live emerging job.

**Lemma 3**. *The makespan of the J‐schedule minus the processing time of the live emerging job (the analogously reduced job lateness, respectively) is a lower bound on the optimal job makespan (lateness, respectively).*

**Lemma 4**. *J‐heuristic gives a 2‐approximate solution, i.e., its makespan is less than twice the optimal makespan.*

Proof. If there exists no live emerging job, then the J‐schedule is optimal by Corollary 1. Otherwise, by Lemma 2, its makespan exceeds the optimum by less than the processing time of the live emerging job; since no job processing time can exceed the optimal makespan, the bound follows.

□
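As a small sanity check of the bound in Lemma 4, the following sketch compares the makespan of a J‐schedule against the true optimum found by brute force on a toy instance (the function names, tuple encoding, and instance are our own):

```python
from itertools import permutations

def order_makespan(order, jobs):
    """Maximum full completion time of a fixed processing order."""
    t = best = 0
    for j in order:
        r, p, q = jobs[j]
        t = max(t, r) + p            # wait for release, then process
        best = max(best, t + q)      # full completion time
    return best

def jackson_makespan(jobs):
    """J-heuristic: whenever idle, run the released job with largest q_j."""
    unscheduled = set(range(len(jobs)))
    t = best = 0
    while unscheduled:
        released = [j for j in unscheduled if jobs[j][0] <= t]
        if not released:
            t = min(jobs[j][0] for j in unscheduled)  # jump over the gap
            continue
        j = max(released, key=lambda j: jobs[j][2])
        t += jobs[j][1]
        best = max(best, t + jobs[j][2])
        unscheduled.remove(j)
    return best

jobs = [(0, 5, 0), (1, 1, 6), (2, 1, 6)]   # (r_j, p_j, q_j)
opt = min(order_makespan(p, jobs) for p in permutations(range(len(jobs))))
print(jackson_makespan(jobs), opt)          # → 13 9, a ratio below 2
assert jackson_makespan(jobs) < 2 * opt
```

Here the long first job is the live emerging job pushing the kernel formed by jobs 1 and 2, and the gap between 13 and 9 stays below its processing time 5, as Lemma 2 predicts.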

## 4. Refining J‐heuristic’s worst‐case estimation

From Lemma 2 of the previous section, we may see that the quality of the solution delivered by J‐heuristic is closely related to the maximum job processing time: the shorter the live emerging job, the better the approximation.

If this kind of prediction is not possible, we can obtain the bound from Lemma 3 by a single application of J‐heuristic and represent the worst‐case performance in terms of this bound.

**Theorem 1**. *The makespan of the J‐schedule is less than* 1 + p/L *times the optimal makespan, where* p *is the processing time of the live emerging job and* L *is any lower bound on the optimal makespan.*

Proof. By Lemma 2, the makespan of the J‐schedule is less than the optimal makespan plus p; dividing by the optimal makespan, which is at least L, yields the claimed ratio.

□

In the previous section's example, we had a very long live emerging job that essentially contributed to the makespan of the schedule.

The approximation ratio provided by J‐heuristic for this problem instance can be obtained from Theorem 1, using the lower bound 115 on the optimal makespan.

## 5. An adaptive 3/2‐approximation heuristic

In Section 2, we have mentioned earlier heuristics, based on J‐heuristic, with a worst‐case performance ratio of 3/2.

In this section, we propose an adaptive heuristic, also based on J‐heuristic, that attains the same performance ratio while exploiting the structure of an average problem instance.

To this end, we *activate* an emerging job; *that is*, we force this job to be rescheduled after the jobs of the kernel that it pushes.

We call the resultant J‐schedule a schedule *complementary* to the original one.

Our heuristic first constructs the initial J‐schedule and verifies whether there exists a live emerging job for its kernel. If there is none, the schedule is optimal by Corollary 1; otherwise, the live emerging job is activated and a complementary schedule is constructed, and the step is repeated for every newly arisen kernel. Among all the schedules created in this way, one with the minimum makespan is returned.
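One way to realize the activation step, under our reading of the scheme above (the function and its `activate`/`kernel` parameters are hypothetical helpers, not the chapter's notation), is to rerun the greedy rule while holding the live emerging job back until all kernel jobs are scheduled:

```python
def jackson(jobs, activate=None, kernel=()):
    """Greedy J-rule; if `activate` is given, that job is held back until
    every kernel job is scheduled (a sketch of the 'activation' step).
    jobs: list of (r_j, p_j, q_j). Returns the makespan."""
    unscheduled, done = set(range(len(jobs))), set()
    t = best = 0
    while unscheduled:
        released = [j for j in unscheduled
                    if jobs[j][0] <= t
                    and (j != activate or set(kernel) <= done)]
        if not released:
            # jump to the next release time among currently eligible jobs
            t = min(jobs[j][0] for j in unscheduled
                    if j != activate or set(kernel) <= done)
            continue
        j = max(released, key=lambda j: jobs[j][2])  # largest delivery time
        t += jobs[j][1]
        best = max(best, t + jobs[j][2])
        unscheduled.remove(j)
        done.add(j)
    return best

jobs = [(0, 5, 0), (1, 1, 6), (2, 1, 6)]        # (r_j, p_j, q_j)
print(jackson(jobs))                             # initial J-schedule: 13
print(jackson(jobs, activate=0, kernel=(1, 2)))  # job 0 activated: 9
```

On this instance, activating the live emerging job 0 (rescheduling it behind the kernel jobs 1 and 2) reduces the makespan from 13 to 9, and the best of the two created schedules is returned.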

**Theorem 2**. *The modified heuristic has a performance ratio less than* 3/2*.*

Proof. In an optimal schedule

Let

Hence, if

Suppose now

It remains to consider case (2), in which either (2.1) job

In case (2.2),

In case (2.1), if there is no emerging job in the complementary schedule, then we are done. Otherwise, let

As to the time complexity of the modified heuristic, note that, in the worst case, the overflow job in every created complementary schedule is released no later than a certain fixed time moment, and an "unlimited" number of jobs may be released before that time.

**Theorem 3**. *The modified heuristic runs in time* *for any problem instance of* *in which the total number of overflow jobs arising in all the created complementary schedules and released before time* *is no more than a constant* *(more roughly, for any instance in which the total number of jobs released before time* *is bounded by* *).*

## 6. Conclusion

We have described efficient heuristic methods for the solution of a strongly *NP*‐hard scheduling problem that, as we have discussed, has a number of important real‐life applications. We have argued that an analysis of the basic structural properties of the schedules created by J‐heuristic is beneficial for the construction of efficient heuristic methods with guaranteed worst‐case performance ratios. As we have seen, not only does J‐heuristic construct 2‐approximate solutions in low‐degree polynomial time, it is also flexible enough to serve as a basis for other, more efficient heuristics. The useful properties of J‐schedules were employed in the flexible worst‐case performance bound of Section 4 and in the heuristic algorithm with an improved performance ratio proposed in Section 5. The latter heuristic is adaptive in the sense that it takes advantage of the structure of an average problem instance and runs faster on such instances.

We believe that J‐schedules possess further useful yet undiscovered properties that may lead to the disclosure of yet unknown insights into the structure of the related problems with release and delivery times. This kind of study was reported in recently published proceedings [8, 9] for the case of a single processor and two allowable job release and delivery times. It still remains open whether the basic properties described in these works can be generalized to a constant number of job release and delivery times and to the multiprocessor case. At the same time, some other yet unstudied properties may exist even for a single processor and two allowable job release and delivery times. The importance of such a study is emphasized by the fact that the basic single‐machine scheduling problem is strongly *NP*‐hard and that the version with only two allowable job release and delivery times remains *NP*‐hard [8].