In this chapter, we describe efficient heuristics for scheduling jobs with release and delivery times with the objective to minimize the maximum job completion time. These heuristics are essentially based on Jackson's extended heuristic, one of the most commonly used heuristics in scheduling theory. We present basic structural properties of the solutions delivered by Jackson's heuristic and then illustrate how one can exploit them to build efficient heuristics.
- combinatorial optimization
- heuristic algorithm
- scheduling theory
- time complexity
- approximation algorithm
Combinatorial optimization problems emerged in the late 1940s due to the rapid growth of industry and newly arisen demands for efficient solution methods. Modeled in mathematical language, a combinatorial optimization problem has a finite set of the so-called feasible solutions; this set is determined by a set of restrictions that naturally arise in practice. Usually, there is an objective function whose domain is the latter set. One aims to determine a feasible solution that gives an extremal (minimal or maximal) value to the objective function, the so-called optimal solution. Since the number of feasible solutions is finite, finding an optimal solution is theoretically trivial: just enumerate all the feasible solutions, calculate for each of them the value of the objective function, and select any one with the optimal objective value. The main issue here is that a brute-force enumeration of all feasible solutions might be impossible in practice.
There are two distinct classes of combinatorial optimization problems: the class P of polynomially solvable problems and the class of NP-hard problems. For a problem in class P, there exists an efficient algorithm (polynomial in the size of the problem), but no such algorithm is known for an NP-hard problem. The number of feasible solutions of an NP-hard optimization problem grows exponentially with the size of the input (which, informally, is the amount of computer memory necessary to represent the problem data/parameters). Furthermore, all NP-hard problems have a similar computational (time) complexity, in the sense that if an efficient polynomial-time algorithm were found for any one of them, it would yield a polynomial-time algorithm for every other NP-hard problem. On the positive side, the NP-hard problems considered here belong to the class NP, which guarantees that any feasible solution can be constructed and verified in polynomial time. It is believed to be very unlikely that an NP-hard problem can be solved in polynomial time (whereas an exact polynomial-time algorithm with a reasonable real-time behavior exists for a problem in class P). Hence, it is natural to think of an approximation solution method.
Thus, approximation algorithms are most frequently used for the solution of NP-hard problems. Any NP-hard problem has a nice characteristic: a feasible solution can be created and verified in polynomial time, as for problems in class P. A greedy algorithm creates in this way a single feasible solution by iteratively taking the next "most promising" decision until a complete solution is built. These decisions are normally taken in low-degree polynomial or constant time. Since the total number of iterations equals the number of objects in a given problem instance, the overall time complexity of a greedy algorithm is low. Likewise, heuristic algorithms reduce the search space, creating one or more feasible solutions in polynomial time. Greedy and heuristic algorithms are the simplest approximation algorithms, and it is easy to construct such an algorithm for both polynomial and NP-hard problems. It may deliver an optimal solution to a problem from class P, but it is highly unlikely that an always-optimal heuristic algorithm exists for an NP-hard problem. A greedy algorithm reduces the potential search space by taking a unique decision for the extension of the current partial solution in each iteration. The simplest heuristic algorithms are greedy, though there are more sophisticated heuristic algorithms that use different search strategies. In general, an approximation algorithm may guarantee some worst-case behavior measured by its performance ratio: the ratio of the objective value of the worst solution that the algorithm may deliver to the optimal objective value (a real number greater than 1).
Scheduling problems are important combinatorial optimization problems. A given set of requests called jobs are to be performed (scheduled) on a finite set of resources called machines (or processors). The objective is to determine the processing order of the jobs on the machines so as to minimize or maximize a given objective function. Scheduling problems have a wide range of applications, from production processes to computer system optimization.
Simple greedy heuristics that use some priority dispatching rule for taking the decisions can be easily constructed and adapted for scheduling problems. An obvious advantage of such heuristics is their speed, and an obvious disadvantage is poor solution quality. The generation of a better solution needs more computational and algorithmic effort. A global search in the feasible solution space guarantees an optimal solution, but it can take inadmissible computational time. A local (neighborhood) search takes reasonable computational time, and the solution which it gives is locally best (i.e., best among all considered neighbor solutions). Simulated annealing, tabu search, genetic algorithms, and beam search are examples of local search algorithms (for example, [16, 22, 23, 25]). These algorithms reduce the search space, and at the same time, their search is less restricted than that of simple heuristic (dispatching) algorithms, giving, in general, better quality solutions than simple greedy algorithms. Global search methods include (exact) implicit enumerative algorithms and also approximation algorithms with embedded heuristic rules and strategies (for example, [1, 20, 29, 38, 4]). Normally, global search algorithms provide solutions of better quality than local search algorithms, but they also take more computer time.
This chapter deals with one of the most widely used greedy heuristics in scheduling theory. The generic heuristic for scheduling jobs with release and delivery times on a single machine to minimize the maximum job completion time is named after Jackson (the heuristic was originally proposed for the version without release times and was then extended to the problem with release times by Schrage). Jackson's heuristic (J-heuristic, for short) iteratively, at each scheduling time $t$ (given by a job release or completion time), schedules, among the jobs released by time $t$, one with the largest delivery time. This 2-approximation heuristic is important in its own right, and it also provides a firm basis for more complex heuristic algorithms that solve optimally various scheduling problems that cannot be solved by a greedy method.
In this chapter, we give a brief overview of heuristic algorithms that are essentially based on J-heuristic. Then, we go into the analysis of the schedules created by J-heuristic (J-schedules), showing their beneficial structural properties. These properties are helpful for a closer study of the related problems, which, in turn, may lead to better solution methods. We illustrate how the deduced structural properties can be beneficially used by building an adaptive heuristic algorithm for our generic scheduling problem with a more flexible worst-case performance ratio than that of J-heuristic.
The next section consists of four subsections, in which we describe our basic scheduling problem, Jackson's heuristic, other related heuristics, and real-life applications of the scheduling problem. In Section 3, we study the basic structural properties of the schedules constructed by Jackson's heuristic. In Section 4, we derive a flexible worst-case performance estimation for the heuristic, and in Section 5, we construct a new adaptive heuristic based on Jackson's heuristic. Section 6 concludes the chapter with final remarks.
2.1. Problem formulation
Our generic single-machine scheduling problem can be described as follows. We have $n$ jobs from a set $J$ and a single machine. Job $j \in J$ is characterized by its release time $r_j$, the time moment when it becomes available. Once a released job $j$ is assigned to the machine, it takes $p_j$ time units of uninterrupted processing on that machine. Here, we have the basic restriction that the machine can process no more than one job at any time moment. Once job $j$ completes its processing on the machine, it still needs a (constant) delivery time $q_j$ for its full completion (the jobs are delivered by an independent unit, and this takes no machine time). We wish to minimize the maximum job full completion time.
The problem is known to be strongly NP-hard (Garey and Johnson). According to the conventional three-field notation introduced by Graham et al., the above problem is abbreviated as $1|r_j, q_j|C_{\max}$: the first field indicates the single-machine environment, the second field specifies the job parameters, and the third field specifies the objective criterion.
The problem has an equivalent formulation $1|r_j|L_{\max}$, in which delivery times are interchanged with due dates and the maximum job lateness $L_{\max}$, that is, the maximum difference $L_j = C_j - d_j$ between a job completion time and its due date, is minimized (the due date $d_j$ of job $j$ is the desirable time for the completion of job $j$).
We may note that, besides job lateness, there are other due-date-oriented objective criteria. A common one is the number of late jobs, where job $j$ is late if it completes after its due date; here, the number of late jobs is to be minimized. In the feasibility version of the problem, one looks for a schedule with no late job. Obviously, if in an optimal solution of the minimization version the maximum job lateness is no more than 0, then it is a feasible solution of the feasibility version as well; otherwise (the maximum job lateness is positive), there exists no feasible solution to the feasibility version. Vice versa, an algorithm for the feasibility problem can be used to solve the minimization version: we iteratively increase the due dates of all jobs until we find a feasible schedule with the modified due dates. Note that the range of lateness values that has to be examined in this way depends on the maximum job processing time $p_{\max}$, so we may need to apply a feasibility algorithm $O(p_{\max})$ times; by using binary search, the cost can be reduced to $O(\log p_{\max})$ applications, as sketched below.
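The following sketch illustrates this reduction. It is a minimal illustration under our own conventions, assuming integer, nonnegative job data given as (release, processing, due date) triples; the function names and the black-box `feasible` oracle for the feasibility version are assumptions, not from the chapter:

```python
def minimize_max_lateness(jobs, feasible):
    """Binary search over a uniform due-date shift: shifting all due
    dates up by x admits a schedule with no late job exactly when the
    optimal maximum lateness is at most x. `feasible` is an assumed
    black-box oracle for the feasibility version; jobs are (r, p, d)
    triples with integer, nonnegative data."""
    # a safe search range: with shift hi every job is trivially on time,
    # since no completion time exceeds max release + total processing
    hi = max(r for r, _, _ in jobs) + sum(p for _, p, _ in jobs)
    lo = -max(d for _, _, d in jobs)      # lateness can be negative
    while lo < hi:                        # O(log(hi - lo)) oracle calls
        mid = (lo + hi) // 2
        if feasible([(r, p, d + mid) for r, p, d in jobs]):
            hi = mid
        else:
            lo = mid + 1
    return lo                             # the minimum maximum lateness
```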
Given an instance of $1|r_j, q_j|C_{\max}$, one can obtain an equivalent instance of $1|r_j|L_{\max}$ as follows. Take a suitably large constant $K$ (no less than the maximum job delivery time) and define the due date of every job $j$ as $d_j = K - q_j$. Vice versa, given an instance of $1|r_j|L_{\max}$, we may create an equivalent instance of $1|r_j, q_j|C_{\max}$ by introducing job delivery times $q_j = D - d_j$, taking a suitably large constant $D$ (any number larger than the maximum job due date would work). The equivalence of these instances is easily seen (if the makespan in the $1|r_j, q_j|C_{\max}$ version is minimized, the maximum job lateness in $1|r_j|L_{\max}$ is minimized, and vice versa; see Bratley et al. for more details). Because of this equivalence, both formulations might be used interchangeably.
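In code, the two transformations amount to one line each; a minimal sketch under the same triple convention as above (function names are ours):

```python
def delivery_to_due(jobs):
    """Map 1|r_j,q_j|C_max to 1|r_j|L_max: d_j = K - q_j for a constant
    K no less than the maximum delivery time; jobs: (r, p, q) triples."""
    K = max(q for _, _, q in jobs)
    return [(r, p, K - q) for r, p, q in jobs]

def due_to_delivery(jobs):
    """The reverse map: q_j = D - d_j for a constant D larger than the
    maximum due date; jobs: (r, p, d) triples."""
    D = max(d for _, _, d in jobs) + 1
    return [(r, p, D - d) for r, p, d in jobs]
```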
2.2. Description of J‐heuristic
Now, we describe Jackson's greedy heuristic (J-heuristic) in detail. It works on scheduling times: at every scheduling time, the next job is scheduled on the machine. Initially, the earliest scheduling time is set to the minimum job release time. Iteratively, among all jobs released by the given scheduling time, a job with the maximum delivery time is scheduled on the machine (ties might be broken by selecting a longest available job). Once a job completes on the machine, the next scheduling time is set to the maximum of the completion time of that job and the minimum release time of a yet unscheduled job.
Since the heuristic always schedules an earliest released job every time the machine becomes idle, it creates no gap that can be avoided. The time complexity of the heuristic is $O(n \log n)$, as at each of the $n$ scheduling times, the search for a maximal element in an ordered list is carried out.
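A minimal implementation sketch of the heuristic follows; the function name and the (r, p, q) triple format are our own illustration, not from the chapter. A heap keyed on delivery times yields the $O(n \log n)$ bound:

```python
import heapq

def jackson_schedule(jobs):
    """Jackson's extended heuristic for 1|r_j, q_j|C_max.
    jobs: list of (r, p, q) triples (release, processing, delivery).
    Returns (makespan, order): the maximum full completion time and the
    job sequence on the machine."""
    n = len(jobs)
    by_release = sorted(range(n), key=lambda j: jobs[j][0])
    heap = []                         # max-heap on delivery time: (-q, j)
    i, t = 0, jobs[by_release[0]][0]  # earliest scheduling time
    order, makespan = [], 0
    while len(order) < n:
        while i < n and jobs[by_release[i]][0] <= t:
            j = by_release[i]         # release all jobs available by t
            heapq.heappush(heap, (-jobs[j][2], j))
            i += 1
        if not heap:                  # gap: jump to the next release time
            t = jobs[by_release[i]][0]
            continue
        _, j = heapq.heappop(heap)    # a released job with the largest q_j
        order.append(j)
        t += jobs[j][1]               # job j occupies [t - p_j, t)
        makespan = max(makespan, t + jobs[j][2])
    return makespan, order
```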
The heuristic is easily extendable to multiprocessor and preemptive scheduling problems with release and delivery (due) times. For the identical parallel-processor case, a ready job with the largest tail (or smallest due date) is repeatedly determined and is scheduled on the processor with the minimal completion time, ties being broken by selecting the processor with the minimal index. For the sake of conciseness, we below refer to that processor as the active one. Iteratively, the following two steps are carried out:
- find, among the jobs released by the current scheduling time $t$, a job with the largest delivery time, schedule it at time $t$ on the corresponding active processor, and remove it from the set of unscheduled jobs;
- update the current active processor and set $t$ to the maximum of the completion time of that processor and the minimal job release time in the set of unscheduled jobs.
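Below is a sketch of this multiprocessor extension under the same conventions as the single-machine sketch above; a second heap tracks processor completion times (again, the names and data layout are our assumptions):

```python
import heapq

def jackson_parallel(jobs, m):
    """Multiprocessor J-heuristic sketch: schedule, among the released
    jobs, one with the largest delivery time on the active processor
    (minimal completion time, ties broken by the minimal index).
    jobs: (r, p, q) triples; m: number of identical processors."""
    n = len(jobs)
    by_release = sorted(range(n), key=lambda j: jobs[j][0])
    ready = []                           # max-heap: (-q, job index)
    procs = [(0, k) for k in range(m)]   # min-heap: (completion, index)
    heapq.heapify(procs)
    assignment, makespan, i = [None] * n, 0, 0
    for _ in range(n):
        c, k = heapq.heappop(procs)      # the active processor
        t = c
        if not ready:                    # no released job: advance t
            t = max(t, jobs[by_release[i]][0])
        while i < n and jobs[by_release[i]][0] <= t:
            j = by_release[i]
            heapq.heappush(ready, (-jobs[j][2], j))
            i += 1
        _, j = heapq.heappop(ready)      # largest delivery time
        assignment[j] = (k, t)           # job j starts at time t on k
        heapq.heappush(procs, (t + jobs[j][1], k))
        makespan = max(makespan, t + jobs[j][1] + jobs[j][2])
    return makespan, assignment
```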
We illustrate a 3-processor J-schedule in Figure 1 for eight jobs with the parameters specified in the accompanying table.
As can be seen in Figure 1, J-heuristic creates idle-time intervals (gaps) on all three processors, constructing an optimal schedule with makespan 54. Note that job 7 realizes the maximum objective value 54, being scheduled on processor 2 (we call such a job the overflow job, as we define a bit later): the completion time of job 7 on processor 2 is 24, whereas the full completion time of job 7 is 54.
For the preemptive version of J-heuristic, the currently executed job is interrupted at the earliest time moment when a more urgent job gets released. It is well known, and also easily seen, that the preemptive version of J-heuristic delivers an optimal solution for the corresponding preemptive scheduling problem.
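An event-driven sketch of the preemptive version, under the same (r, p, q) conventions (the function name is ours): the running job is re-examined at every release time, so a preemption happens exactly when a more urgent job arrives.

```python
import heapq

def jackson_preemptive(jobs):
    """Preemptive J-heuristic sketch (optimal for the preemptive
    problem): the running job is preempted whenever a newly released
    job is more urgent. jobs: (r, p, q) triples."""
    n = len(jobs)
    pending = sorted(range(n), key=lambda j: jobs[j][0])
    ready = []                    # max-heap: (-q, job, remaining time)
    i, t, makespan = 0, jobs[pending[0]][0], 0
    while i < n or ready:
        if not ready:             # gap: jump to the next release time
            t = max(t, jobs[pending[i]][0])
        while i < n and jobs[pending[i]][0] <= t:
            j = pending[i]
            heapq.heappush(ready, (-jobs[j][2], j, jobs[j][1]))
            i += 1
        nq, j, rem = heapq.heappop(ready)
        next_r = jobs[pending[i]][0] if i < n else float("inf")
        run = min(rem, next_r - t)   # run until completion or next release
        t += run
        if run < rem:                # preempted: requeue the remainder
            heapq.heappush(ready, (nq, j, rem - run))
        else:
            makespan = max(makespan, t + jobs[j][2])
    return makespan
```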
2.3. Overview of related heuristics
As mentioned earlier, J-heuristic turned out to be highly flexible in the sense that its different modifications have been used for the solution of different scheduling problems. Potts has proposed an extension of the heuristic. His algorithm repeatedly applies J-heuristic $n$ times and obtains an improved approximation ratio of 3/2 at the cost of an increase of the time complexity by a factor of $n$. Hall and Shmoys, also building on J-heuristic, have developed a 4/3-approximation polynomial-time algorithm with the same time complexity for the version of our problem with precedence relations. Garey et al. have modified the heuristic into another, more sophisticated heuristic for the feasibility version of this problem with equal-length jobs (in the feasibility version, job due dates are replaced by deadlines, and a schedule in which all jobs complete by their deadlines is looked for). This result was extended to the version of the problem with two possible job processing times in an algorithm described in . For another relevant criterion, an algorithm that minimizes the number of late jobs with release times on a single machine when job preemptions are allowed was proposed in . Without preemptions, an algorithm for the case when all jobs have equal length was proposed in .
The multiprocessor version of J-heuristic has been used as a basis for the solution of multiprocessor scheduling problems. For example, for the feasibility version with identical machines and equal-length jobs, polynomial-time algorithms were proposed by Simons and by Simons and Warmuth, respectively. Using J-heuristic as a schedule generator, an algorithm for the minimization version of the latter problem was proposed in ; its time complexity depends on the maximum job delivery time $q_{\max}$ and on an additional parameter. With the objective to minimize the number of late jobs on a group of identical processors, a non-preemptive algorithm for equal-length jobs was proposed in .
J-heuristic can also be efficiently used for the solution of shop scheduling problems. Using J-heuristic as a schedule generator, McMahon and Florian and Carlier have proposed efficient enumerative algorithms for $1|r_j, q_j|C_{\max}$. Grabowski et al. use the heuristic for obtaining an initial solution in another enumerative algorithm for the same problem.
The problem $1|r_j, q_j|C_{\max}$ naturally arises in job-shop scheduling as an auxiliary problem for the derivation of strong lower bounds. By ignoring the potential yet unresolved conflicts on all the machines except a selected machine $M$, the corresponding disjunctive graph defines an auxiliary instance of problem $1|r_j, q_j|C_{\max}$ on machine $M$, where every task to be performed on that machine is characterized by an early starting time (defined by the early completion times of its predecessor tasks), which is set as its release time, and by a tail or delivery time (determined by the processing times of its successor tasks). In multiprocessor job-shop scheduling problems, a single machine is replaced by a group of parallel machines, and the corresponding multiprocessor version of the problem is derived. For the purpose of a lower bound, the preemptive version of the above problems with release and delivery times might be considered, and the preemptive J-heuristic can then be applied. For relevant studies on the classical job-shop scheduling problem, see, for example, Carlier, Carlier and Pinson, Brinkkotter and Brucker, and the more recent works of Gharbi and Labidi and Della Croce and T'kindt; for the multiprocessor job-shop scheduling problem with identical machines, see Carlier and Pinson. This approach can also be extended to the case when the parallel machines are unrelated .
2.4. Some other applications
Besides the above-mentioned applications in multiprocessor, shop, and batch scheduling problems, our problem has numerous immediate real-life applications: various production chains, CPU time sharing in operating systems (jobs being the processes to be executed by the processor), transmission range distribution in wireless sensor networks (where jobs are mobile devices with corresponding ranges that can be modeled as release and due dates), and important transportation problems such as the traveling salesman and vehicle routing problems with time windows. The reader may wish to have a look at reference  for an extensive overview of the variations of the vehicle routing problem, and we also refer to [11, 41] for more recent related work.
Our scheduling problem can be used for the solution of the latter transportation problems. Let us first describe these problems briefly. There are, say, $n$ customers or cities and one special location called the depot. The distances between any pair of locations are known. Goods are to be distributed from the depot to the customers using one or more (identical) vehicles. There are certain restrictions on how this distribution should be done, and these define the set of feasible solutions to the problem. A general notion is a tour carried out by a vehicle, which initiates at the depot, visits some of the customers, and returns to the depot. All customers must be served, i.e., every customer is to be included in exactly one tour. There may be additional restrictions, such as vehicle capacity constraints and customer requests. Another basic constraint, which relates these transportation problems to our scheduling problem, is that every customer can be served only within a certain time interval, and a valid time interval is also given for the depot.
A common objective is to minimize the total service/travel time of all the vehicles. Whereas in the basic setting it is straightforward to find a feasible solution, with time windows this task is not obvious; in fact, there may exist no feasible solution. If one exists, then one aims to minimize the number of used vehicles and then to construct the corresponding number of tours with the objective to minimize the total service time.
Associating with every customer and with the depot a unique job, and with the corresponding time window the release and due dates of that job, we arrive at a corresponding scheduling problem, an instance of $1|r_j|L_{\max}$. Let us consider the feasibility version of this problem (in which a solution with no positive lateness is looked for). Note that if there is no feasible solution to that feasibility version, then there exists no feasible solution to the corresponding vehicle routing problem with a single vehicle. Then, we may consider the feasibility problem with two identical machines, and so on with $m$ identical machines, until a feasible solution is found. We may instead use binary search within the interval $[1, M]$ when an upper limit $M$ on the number of vehicles is known (otherwise, we set $M$ to a sufficiently large magnitude), as sketched below. In case there exists a feasible solution for some $m \le M$, once the minimum such $m$ is found, the corresponding tours minimizing the total travel time might be constructed.
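The outer search might look as follows; `feasible_on` stands for an assumed black-box feasibility test for the $m$-machine version (binary search applies because adding machines never makes a feasible instance infeasible):

```python
def min_vehicles(jobs, feasible_on, M):
    """Binary search for the minimum number m of vehicles (machines)
    in [1, M] admitting a schedule with no late job. `feasible_on(jobs, m)`
    is an assumed oracle for the m-machine feasibility version."""
    if not feasible_on(jobs, M):
        return None            # no feasible routing with at most M vehicles
    lo, hi = 1, M
    while lo < hi:             # feasibility is monotone in m
        mid = (lo + hi) // 2
        if feasible_on(jobs, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```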
3. The structure of J‐schedules
The brief survey of the previous section clearly indicates the importance of our scheduling problem, as well as the power and flexibility of J-heuristic. Whenever the direct application of J-heuristic is concerned and the solution quality is important, the worst-case bound of two may not be acceptable. Besides, J-heuristic may not solve the feasibility version of our problem even though a feasible solution with no positive maximum lateness exists. To this end, two relevant points deserve mentioning. On the one hand, the practical behavior of the heuristic might be essentially better than this worst-case estimation. On the other hand, by carrying out a structural analysis of J-schedules, it is possible to obtain a better, more flexible worst-case bound, as we show in the next section. In this section, we introduce some basic concepts that will help us in this analysis.
Let us denote by $S$ the schedule obtained by the application of J-heuristic to the originally given problem instance (as we will see later, the heuristic can also be beneficially applied to some other, derived problem instances). Schedule $S$, and, in general, any J-schedule, may contain a gap, a maximal consecutive time interval in which the machine is idle. We shall assume that a 0-length gap $(c_i, s_j)$ occurs whenever job $j$ starts at its earliest possible starting time (that is, its release time) immediately after the completion of job $i$; here, $s_j$ ($c_j$, respectively) denotes the starting (completion, respectively) time of job $j$.
Let us call a block a maximal consecutive part of a J-schedule consisting of successively scheduled jobs without any gap in between (preceded and succeeded by a gap).
Now, we give some necessary concepts, borrowed from earlier work, that will help us to expose useful structural properties of J-schedules.
Given a J-schedule $S$, let $o$ be a job that realizes the maximum job lateness in $S$, i.e., $L_o = L_{\max}(S)$. Let, further, $B$ be the block in $S$ that contains job $o$. Among all the jobs in $B$ with this property, the latest scheduled one is called an overflow job in $S$ (we just note that this job does not necessarily end block $B$).
A kernel in $S$ is a maximal (consecutive) job sequence ending with an overflow job $o$ such that no job from this sequence has a due date more than $d_o$ (equivalently, a delivery time less than $q_o$). For a kernel $K$, we let $r(K) = \min_{j \in K} r_j$.
It follows that every kernel is contained in some block in $S$, and the number of kernels in $S$ equals the number of overflow jobs in it. Furthermore, since any kernel belongs to a single block, it may contain no gap.
If schedule $S$ is not optimal, there must exist a job less urgent than the jobs of kernel $K$, scheduled before all jobs of $K$, that delays the jobs of $K$ (see Lemma 1 a bit later). By rescheduling such a job to a later time moment, the jobs of kernel $K$ can be restarted earlier. We need some extra definitions to define this operation formally.
Suppose job $i$ precedes job $j$ in J-schedule $S$. We will say that $i$ pushes $j$ in $S$ if J-heuristic will reschedule job $j$ earlier whenever $i$ is forced to be scheduled behind $j$.
Since the earliest scheduled job of kernel $K$ does not start at its release time (see Lemma 1 below), it is immediately preceded and pushed by a job $l$ with $q_l < q_o$. In general, we may have more than one such job scheduled before kernel $K$ in block $B$ (the block containing $K$). We call such a job an emerging job for $K$, and we call the latest scheduled one (job $l$ above) the live emerging job.
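These objects are straightforward to extract from a J-schedule. The following sketch, our own illustration of the above definitions (reusing the (r, p, q) convention and the `jackson_schedule` sketch of Section 2.2), locates the overflow job, the kernel, and the live emerging job:

```python
def find_kernel(jobs, order):
    """Locate the overflow job, the kernel, and the live emerging job of
    a J-schedule given by its job sequence `order`.
    Returns (kernel, live_emerging_or_None)."""
    t, start, comp = 0, {}, {}
    for j in order:                  # start/completion times, with gaps
        t = max(t, jobs[j][0])
        start[j], t = t, t + jobs[j][1]
        comp[j] = t
    makespan = max(comp[j] + jobs[j][2] for j in order)
    # overflow job: the latest scheduled job realizing the makespan
    pos = max(k for k, j in enumerate(order)
              if comp[j] + jobs[j][2] == makespan)
    q_o = jobs[order[pos]][2]
    # kernel: walk back while jobs stay no less urgent than the overflow
    # job and no gap is crossed
    first = pos
    while first > 0:
        prev = order[first - 1]
        if jobs[prev][2] < q_o or comp[prev] < start[order[first]]:
            break
        first -= 1
    kernel = order[first:pos + 1]
    # the job right before the kernel is the live emerging job whenever
    # it actually pushes the kernel (the kernel starts after its release)
    if first > 0 and start[kernel[0]] > jobs[kernel[0]][0]:
        return kernel, order[first - 1]
    return kernel, None
```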
Now, we can give some optimality conditions for J‐schedules. The proofs of Lemmas 1 and 2 can be found in references [33, 39], respectively. Lemma 4 is obtained as a consequence of Lemmas 1 and 2, though the worst‐case bound of two was also known earlier.
Lemma 1. The maximum job lateness (the makespan) of a kernel $K$ cannot be reduced if the earliest scheduled job of $K$ starts at time $r(K)$. Hence, if a J-schedule contains a kernel with this property, it is optimal.
From Lemma 1, we obtain the following corollary:
Corollary 1. If schedule $S$ contains a kernel with no live emerging job, then $S$ is optimal.
Observe that the conditions of Lemma 1 and Corollary 1 are satisfied for our first problem instance of Figure 1. We also illustrate the above-introduced definitions on a (1-processor) problem instance of $1|r_j, q_j|C_{\max}$ with 11 jobs: job 1 with $r_1 = 0$, $p_1 = 100$, and $q_1 = 0$, while all the rest of the jobs are released at time moment 10 and have equal processing time 1 and delivery time 100. These data completely define our problem instance.
The initial J-schedule $S$ consists of a single block, in which the jobs are included in increasing order of their indices: the earliest scheduled job 1 is followed by jobs 2-11 scheduled in this order. It is easy to see that the latter jobs form the kernel in schedule $S$. Indeed, all 11 jobs belong to the same block, job 1 pushes the following jobs, and its delivery time is less than that of these pushed jobs; hence, job 1 is the live emerging job in schedule $S$. The overflow job is job 11, since it realizes the maximum full completion time (the makespan) in schedule $S$, which is $110 + 100 = 210$. Therefore, jobs 2-11 form the kernel in $S$.
Note that the condition of Lemma 1 is not satisfied for schedule $S$, as its kernel starts at time 100, which is more than $r(K) = 10$. Furthermore, the condition of Corollary 1 is also not satisfied for schedule $S$, and it is not optimal. The optimal schedule, in which the live emerging job 1 is rescheduled behind all the kernel jobs, has makespan 120.
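The instance is small enough to verify with the sketches given earlier (again, illustrative code of ours, not the chapter's):

```python
# The 11-job instance above: job 1 is (r, p, q) = (0, 100, 0); jobs 2-11
# are released at 10 with unit processing time and delivery time 100.
jobs = [(0, 100, 0)] + [(10, 1, 100)] * 10
makespan, order = jackson_schedule(jobs)
print(makespan)        # 210: job 1 runs first, the kernel ends at time 110
kernel, l = find_kernel(jobs, order)
print(l, kernel)       # l = 0 (job 1); kernel = jobs 2-11
# rescheduling job 1 behind the kernel yields the optimal makespan 120
```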
From here on, we use $|S|$ for the makespan (maximum full job completion time) of J-schedule $S$, and $|S^*|$ ($L^*_{\max}$, respectively) for the optimum makespan (lateness, respectively).
Lemma 2. $|S| < |S^*| + p_l$, where $l$ is the live emerging job for kernel $K$.
For our problem instance and the corresponding schedule $S$, the above bound is almost reached. Indeed, $|S| = 210$, whereas $|S^*| + p_1 = 120 + 100 = 220$.
Note that Lemma 2 implicitly defines the lower bound $|S| - p_l$ on $|S^*|$, derived from a single run of the non-preemptive J-heuristic, which can further be strengthened using the following concept. Let the delay for kernel $K$ be $\delta(K) = c_l - r(K)$ ($l$ ($o$, respectively) stands again for the live emerging (overflow, respectively) job for kernel $K$). Then, the next lemma follows from the observation that $\delta(K)$ is another (more accurate than $p_l$) estimation for the delay of the earliest scheduled job of kernel $K$.
Lemma 3. $|S| - \delta(K)$ ($L_{\max}(S) - \delta(K)$, respectively) is a lower bound on the optimal makespan $|S^*|$ (lateness $L^*_{\max}$, respectively).
Lemma 4. J-heuristic gives a 2-approximate solution for $1|r_j, q_j|C_{\max}$, i.e., $|S| < 2|S^*|$.
Proof. If there exists no live emerging job for $K$, then $S$ is optimal by Corollary 1. Suppose the live emerging job $l$ exists; clearly, $p_l < |S^*|$ (as $l$ has to be scheduled in $S^*$, and there is at least one more (kernel) job in it). Then, by Lemma 2, $|S| < |S^*| + p_l < 2|S^*|$.
4. Refining J‐heuristic’s worst‐case estimation
From Lemma 2 of the previous section, we may see that the quality of the solution delivered by J-heuristic is related to the maximum job processing time in a given problem instance. If a long job turns out to be the live emerging job, then the corresponding forced delay of the successively scheduled kernel jobs clearly affects the solution quality. We may express the magnitude $p_l$ as a fraction of the optimal objective value and derive a more accurate approximation ratio. It might be possible to deduce this kind of relationship a priori with good accuracy; take, for instance, a large-scale production process where the processing time of an individual job is small compared to the estimated total production time.
If this kind of prediction is not possible, we can use the bound of Lemma 3, obtained by a single application of J-heuristic, and represent $p_l$ as a fraction of that bound (instead of representing it as a fraction of the unknown optimal objective value). Then, we can give an explicit expression of the heuristic's approximation ratio in terms of that fraction: as we will see, J-heuristic will always deliver a solution within a factor of $1 + 1/\kappa$ of the optimum objective value. Let $\kappa$ be such that $p_l \le |S^*|/\kappa$, i.e., $\kappa \le |S^*|/p_l$. Since $L^- = |S| - \delta(K)$ is a lower bound on $|S^*|$ ($L^- \le |S^*|$), we let $\kappa = L^-/p_l$, and thus we have $\kappa = L^-/p_l \le |S^*|/p_l$, i.e., $\kappa = L^-/p_l$ is a valid assignment. Then, note that for any problem instance, $\kappa$ can be obtained in time $O(n \log n)$.
Theorem 1. $|S| \le (1 + 1/\kappa)|S^*|$, for any $\kappa \le |S^*|/p_l$.
Proof. By Lemma 2, $|S| < |S^*| + p_l \le |S^*| + |S^*|/\kappa = (1 + 1/\kappa)|S^*|$.
In the previous section's example, we had a very long live emerging job that contributed essentially to the makespan of schedule $S$; the resultant J-schedule gave an almost worst-possible performance ratio in terms of Lemma 4, since $p_l$ was close to the magnitude $|S^*|$. We now illustrate the advantage of the estimation of Theorem 1. Consider a slightly modified instance in which the emerging job remains moderately long (a more typical, average scenario): job 1 now has processing time $p_1 = 10$, release time $r_1 = 0$, and delivery time $q_1 = 0$. The processing time of the rest of the jobs is again 1; these jobs are released at time 5 and have the same delivery times as in the first instance. The J-schedule has makespan 120. The lower bound on the optimum makespan defined by Lemma 2 is hence $120 - 10 = 110$.
The approximation guarantee provided by J-heuristic for this problem instance can be obtained from Theorem 1. By Lemma 3, $\delta(K) = c_1 - r(K) = 10 - 5 = 5$, so we use the lower bound $L^- = 120 - 5 = 115$ on $|S^*|$ and obtain the valid assignment $\kappa = 115/10 = 11.5$ and the resultant approximation ratio $1 + 1/11.5 \approx 1.087$. Observe that for our second (average) problem instance, J-heuristic gave an almost optimal solution.
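The certificate is cheap to compute. A sketch reusing our earlier `jackson_schedule` and `find_kernel` illustrations:

```python
def a_posteriori_ratio(jobs):
    """Per-instance certificate of Theorem 1: one J-heuristic run plus
    Lemma 3's lower bound L^- = |S| - delta(K) give kappa = L^-/p_l and
    the ratio 1 + 1/kappa."""
    makespan, order = jackson_schedule(jobs)
    kernel, l = find_kernel(jobs, order)
    if l is None:
        return 1.0                       # optimal by Corollary 1
    t, c_l = 0, 0                        # completion time of job l in S
    for j in order:
        t = max(t, jobs[j][0]) + jobs[j][1]
        if j == l:
            c_l = t
    delta = c_l - min(jobs[j][0] for j in kernel)   # delay delta(K)
    kappa = (makespan - delta) / jobs[l][1]         # kappa = L^-/p_l
    return 1 + 1 / kappa

jobs2 = [(0, 10, 0)] + [(5, 1, 100)] * 10
print(a_posteriori_ratio(jobs2))   # 1 + 1/11.5, matching the text
```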
5. An adaptive 3/2‐approximation heuristic
In Section 2, we mentioned two heuristic algorithms from references [19, 27] solving our generic problem with the approximation ratios 3/2 and 4/3, respectively. In the previous section, we described a more flexible approximation ratio obtained by a single application of J-heuristic.
In this section, we propose a heuristic that gives approximation ratio 3/2 for a large class of problem instances, which we specify a bit later. For such instances, this algorithm, unlike the above-mentioned algorithms, carries out only a constant number of calls to J-heuristic (thus yielding the same time complexity as J-heuristic). Recall that the initial J-schedule $S$ is obtained by a call to J-heuristic for the originally given problem instance. By slightly modifying this instance, we may create alternative J-schedules with some desired property. With the intention to improve the initial J-schedule $S$ (and, more generally, any J-schedule), the jobs of kernel $K$ are to be restarted earlier.
To this end, we activate the live emerging job $l$ for kernel $K$; that is, we force job $l$ and all jobs scheduled after kernel $K$ to be scheduled behind $K$ (all these jobs are said to be in the state of activation for $K$). Technically, we achieve this by increasing the release times of all these jobs to a sufficiently large magnitude, so that when J-heuristic is newly applied, neither job $l$ nor any job scheduled after $K$ in $S$ will surpass any job of $K$, and hence the earliest job of kernel $K$ will start at time $r(K)$.
We call the resultant J-schedule complementary to schedule $S$ and denote it by $S_{\bar l}$. Thus, to create schedule $S_{\bar l}$, we just increase the release times as described and apply the heuristic again to the modified instance.
Our heuristic first creates schedule $S$, determines kernel $K$, and verifies whether the live emerging job $l$ exists; if there is no such job, then $S$ is optimal (Corollary 1). Otherwise, it creates one or more complementary schedules; the first of them is $S_{\bar l}$. If job $l$ remains an emerging job in schedule $S_{\bar l}$, then the second complementary schedule, obtained from the first one by activating job $l$ for the kernel of $S_{\bar l}$, is created. This operation is repeatedly applied as long as the newly arisen overflow job, that is, the overflow job in the latest created complementary schedule, is released within the execution interval of job $l$ in schedule $S$ (job $l$ being activated for the kernel of that complementary schedule). The algorithm halts when either $l$ is not an emerging job in the newly created complementary schedule or the overflow job in that schedule is released behind the execution interval of job $l$ in schedule $S$. Then, the heuristic determines the best objective value among the constructed J-schedules and halts.
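A simplified sketch of this scheme follows, again our own illustration reusing `jackson_schedule` and `find_kernel` from above. It evaluates every created job sequence on the original instance (lifted release times are only a device to steer the heuristic) and replaces the precise halting test of the text by a simple cap on the number of rounds:

```python
def evaluate(jobs, order):
    """Makespan of a fixed job sequence under the original instance."""
    t, mk = 0, 0
    for j in order:
        r, p, q = jobs[j]
        t = max(t, r) + p
        mk = max(mk, t + q)
    return mk

def adaptive_heuristic(jobs):
    """Simplified sketch of the Section 5 heuristic: repeatedly activate
    the live emerging job l (lift release times so that l and the jobs
    after the kernel cannot precede kernel jobs), rerun J-heuristic, and
    keep the best schedule created."""
    cur = list(jobs)
    order = jackson_schedule(cur)[1]
    best = evaluate(jobs, order)
    for _ in range(len(jobs)):           # at most n complementary schedules
        kernel, l = find_kernel(cur, order)
        if l is None:                    # optimal by Corollary 1
            break
        in_kernel = set(kernel)
        # a sufficiently large release-time lift: the completion time of
        # the kernel in the current schedule
        t, lift = 0, 0
        for j in order:
            t = max(t, cur[j][0]) + cur[j][1]
            if j in in_kernel:
                lift = t
        nxt = list(cur)
        for j in order[order.index(l):]:
            if j not in in_kernel:       # job l and the jobs after kernel K
                r, p, q = nxt[j]
                nxt[j] = (max(r, lift), p, q)
        order = jackson_schedule(nxt)[1]
        best = min(best, evaluate(jobs, order))
        cur = nxt
    return best
```

On the 11-job instance of Section 3, one round of activation already produces the optimal makespan 120.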
Theorem 2. The modified heuristic has a performance ratio less than 3/2.
Proof. In an optimal schedule $S^*$, either (1) job $l$ remains scheduled before the overflow job $o$ of schedule $S$ (and hence before all jobs of kernel $K$), or (2) $l$ is scheduled after job $o$ (and hence after kernel $K$).
Let $E$ be the set of emerging jobs in schedule $S$, not including the live emerging job $l$. In case (1), either $S$ is already optimal, or otherwise $E \ne \emptyset$ and some job(s) from set $E$ are scheduled after kernel $K$ in an optimal schedule (so that the jobs of $K$ are rescheduled, respectively, earlier). Let $\lambda$ be the total processing time of the jobs in $E$. Since job $l$ stays before kernel $K$, $|S| \le |S^*| + \lambda$ (this can be seen similarly as Lemma 2). Let (real) $\beta$ be such that $\lambda = \beta|S^*|$. Since schedule $S^*$ contains the jobs of set $E$ and job $l$, $\lambda + p_l < |S^*|$. We have $|S| \le |S^*| + \lambda = (1 + \beta)|S^*|$.
Hence, if $\beta < 1/2$ (i.e., $\lambda < |S^*|/2$), then $|S| < (3/2)|S^*|$.
Suppose now $\beta \ge 1/2$. Then, $p_l < |S^*| - \lambda \le |S^*|/2$, and, using again Lemma 2, $|S| < |S^*| + p_l < (3/2)|S^*|$.
It remains to consider case (2), when job $l$ is scheduled after (all jobs from) kernel $K$ in schedule $S^*$. We claim that schedule $S^*$ is again "long enough," i.e., $|S_{\bar l}| < (3/2)|S^*|$. Indeed, consider the J-schedule $S_{\bar l}$. If $S_{\bar l}$ is not optimal, then there exists a live emerging job $l'$ in $S_{\bar l}$, for its kernel $K'$. Similarly as above, in schedule $S^*$, either (2.1) job $l'$ remains before $K'$, or (2.2) $l'$ is scheduled after $K'$.
In case (2.2), $l'$ must be an emerging job in $S_{\bar l}$. If the overflow job of kernel $K'$ is released after time moment $c_l$ (the completion time of job $l$ in $S$), then the desired approximation follows, as job $l$ is scheduled after the jobs of $K$ in schedule $S^*$. Otherwise, suppose the overflow job of schedule $S_{\bar l}$ is released within the execution interval $(s_l, c_l)$ of job $l$ (note that it cannot be released by time $s_l$, as otherwise it would have originally been included ahead of job $l$ by J-heuristic). Without loss of generality and for the purpose of this proof, assume $l$ is an emerging job in schedule $S_{\bar l}$, as otherwise the latter schedule already gives the desired approximation, similarly as in case (1). Because of the same reason, either schedule $S_{\bar l}$ gives the desired approximation, or otherwise job $l$ remains an emerging job (now, in $S_{\bar l}$), and the heuristic creates the next complementary schedule. We repeatedly apply the same reasoning to the subsequently created complementary schedules as long as the overflow job in the latest created such schedule is released within the time interval $(s_l, c_l)$. Once the latter condition is not satisfied, the newly arisen overflow job will be started at a time moment larger than $c_l$ in the corresponding complementary schedule; hence, the length of that schedule will be at least the full completion time of this overflow job. Moreover, an optimal schedule must be at least as long, unless one of the earlier created complementary schedules is optimal. Hence, one of the generated complementary schedules gives the desired approximation.
In case (2.1), if there is no emerging job in $S_{\bar l}$, then we are done. Otherwise, let $E'$ be the set of emerging jobs in $S_{\bar l}$, not including job $l'$. Then, similarly as for case (1), there are two sub-cases, $\beta' < 1/2$ and $\beta' \ge 1/2$, in each of which the desired approximation is reached. The theorem is proved.
As to the time complexity of the modified heuristic, note that, in the worst case, the overflow job in every created complementary schedule is released no later than at time $c_l$. Then, the algorithm may create up to $n$ complementary schedules, and its time complexity will be $O(n^2 \log n)$, the same as that of the earlier-mentioned algorithms. However, it is clearly very unlikely that, in an instance of $1|r_j, q_j|C_{\max}$, an "unlimited" number of jobs are released before time $c_l$ (that would be a highly restricted instance). On average, we would normally expect a constant number of such jobs, which, more restrictively, must be overflow jobs in the created complementary schedules (not belonging to kernel $K$). In this case, our heuristic will obviously run in time $O(n \log n)$. We have proved the following result:
Theorem 3. The modified heuristic runs in time $O(n \log n)$ for any problem instance of $1|r_j, q_j|C_{\max}$ in which the total number of the arisen overflow jobs, over all the created complementary schedules, released before time $c_l$ is no more than a constant (more crudely, for any instance in which the total number of jobs released before time $c_l$ is bounded by a constant).
6. Final remarks
We have described efficient heuristic methods for the solution of a strongly NP-hard scheduling problem that, as we have discussed, has a number of important real-life applications. We have argued that an analysis of the basic structural properties of the schedules created by J-heuristic is beneficial for the construction of efficient heuristic methods with guaranteed worst-case performance ratios. As we have seen, J-heuristic not only constructs 2-approximate solutions in low-degree polynomial time, but it is also flexible enough to serve as a basis for other, more efficient heuristics. The useful properties of J-schedules were employed in the flexible worst-case performance bound of Section 4 and in the heuristic algorithm with an improved performance ratio proposed in Section 5. The latter heuristic is adaptive in the sense that it takes advantage of the structure of an average problem instance and runs faster on such instances.
We believe that J-schedules possess further useful, yet undiscovered properties that may reveal new insights into the structure of the related problems with release and delivery times. A study of this kind was reported in the recently published proceedings [8, 9] for the case of a single processor and two allowable job release and delivery times. It remains open whether the basic properties described in these works can be generalized to a constant number of job release and delivery times and to the multiprocessor case. At the same time, other, not yet studied properties may exist even for a single processor and two allowable job release and delivery times. The importance of such a study is emphasized by the fact that the basic single-machine scheduling problem is strongly NP-hard and that the version with only two allowable job release and delivery times remains NP-hard.