Open access peer-reviewed chapter

Polynomial Algorithm for Constructing Pareto-Optimal Schedules for Problem 1∣rjLmax,Cmax

Written By

Alexander A. Lazarev and Nikolay Pravdivets

Submitted: 01 July 2020 Reviewed: 21 August 2020 Published: 02 November 2020

DOI: 10.5772/intechopen.93677

From the Edited Volume

Multicriteria Optimization - Pareto-Optimality and Threshold-Optimality

Edited by Nodari Vakhania and Frank Werner


Abstract

In this chapter, we consider the single machine scheduling problem with given release dates, processing times, and due dates, with two objective functions. The first is to minimize the maximum lateness, that is, the maximum difference between the completion time of each job and its due date. The second is to minimize the maximum completion time, that is, to complete all jobs as soon as possible. The problem is NP-hard in the strong sense. We provide a polynomial time algorithm for constructing a Pareto-optimal set of schedules on the criteria of maximum lateness and maximum completion time, that is, for problem 1|r_j|L_max, C_max, for the subcase of the problem in which d_1 ≤ d_2 ≤ … ≤ d_n and d_1 − r_1 − p_1 ≥ d_2 − r_2 − p_2 ≥ … ≥ d_n − r_n − p_n.

Keywords

  • single machine scheduling
  • two-criteria scheduling
  • Pareto-set
  • Pareto-optimality
  • minimization of maximum lateness
  • minimization of maximum completion time
  • polynomial time algorithm

1. Introduction

We consider a classical scheduling problem on a single machine. The release time of each job is predefined and represents the minimum possible start time of the job. When constructing schedules, we consider two objective functions. The first is to minimize the maximum lateness, that is, the maximum difference between the completion time of each job and its due date. The second is to minimize the maximum completion time, that is, to complete all jobs as soon as possible. The problem is NP-hard in the strong sense [1]. We provide a polynomial time algorithm for constructing a Pareto-optimal set of schedules on the criteria of maximum lateness and maximum completion time, that is, for problem 1|r_j|L_max, C_max, for the subcase of the problem in which the due dates satisfy d_1 ≤ d_2 ≤ … ≤ d_n and d_1 − r_1 − p_1 ≥ d_2 − r_2 − p_2 ≥ … ≥ d_n − r_n − p_n. An example of a case that meets these constraints is the one in which all jobs have the same amount of time available for processing before their due date.


2. Statement of the problem 1|r_j|L_max, C_max

We consider a single machine scheduling problem in which a set of n jobs N = {1, 2, …, n} has to be processed on a single machine. Each job is numbered, that is, the phrase “job j” is equivalent to “the job numbered j.” Simultaneous execution of jobs and preemption of the processing of a job are prohibited. For each job j ∈ N, the value r_j is its minimum possible start time (release time), p_j ≥ 0 is its processing time, and d_j is its due date.

A schedule is represented by a set π = {s_j}_{j∈N} of start times of the jobs. By τ we denote a permutation (j_1, …, j_n) of the elements of the set N. The set of all different schedules of the jobs of the set N is denoted by Π(N). A schedule π is called feasible if s_j(π) ≥ r_j for all j ∈ N. We denote the completion time of job j ∈ N in schedule π by C_j(π). The difference L_j(π) = C_j(π) − d_j, j ∈ N, is the lateness of job j in schedule π. The maximum lateness of the jobs of the set N under the schedule π is

L_max(π) = max_{j∈N} (C_j(π) − d_j). (E1)

We denote the completion time of all jobs of the set N in schedule π as

C_max(π) = max_{j∈N} C_j(π).

The problem is to find an optimal schedule π* with the smallest value of the maximum lateness:

L*_max = min_{π∈Π(N)} L_max(π) = L_max(π*). (E2)

For an arbitrary set of jobs M ⊆ N we also denote:

r(M) = min_{j∈M} r_j, d(M) = max_{j∈M} d_j, p(M) = Σ_{j∈M} p_j. (E3)

In the standard notation of Graham et al. [2], this problem is denoted 1|r_j|L_max. Intensive work on the solution of this problem has continued since the early 1950s. Lenstra et al. [1] showed that the general case of the problem 1|r_j|L_max is NP-hard in the strong sense.

Potts [3] introduced an iterative version of the extended Jackson rule (IJ) [4] and proved that L_max(π_IJ)/L*_max ≤ 3/2. Hall and Shmoys [5] modified the iterative version and created an algorithm (MIJ) that guarantees the bound L_max(π_MIJ)/L*_max ≤ 4/3. They also presented two approximation schemes that guarantee finding an ε-approximate solution in O(n log n + n(1/ε)^{O(1/ε²)}) and O((n/ε)^{O(1/ε)}) operations. Mastrolilli [6] introduced an improved approximation scheme with a complexity of O(n + (1/ε)^{O(1/ε)}) operations.

A number of polynomially solvable cases of the problem have been found, starting with Jackson’s early result [4] for the case r_j = 0, j ∈ N, when an optimal solution is a schedule in which the jobs are ordered by nondecreasing due dates (the EDD rule). Such a schedule is also optimal when the release times and due dates are similarly ordered (r_i ≤ r_j ⇒ d_i ≤ d_j for all i, j ∈ N).

A schedule is constructed according to the extended Jackson rule (Schrage schedule) as follows: for the next place in the schedule, we select the already released, not yet scheduled job with the minimum due date; if there are no released jobs, we select the job with the minimum release time among the unscheduled jobs.
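For intuition, the rule above can be sketched as follows (our own illustration, not code from the chapter); a job is a (r_j, p_j, d_j) triple, and a heap keyed by due date holds the released jobs.

```python
import heapq

def schrage_schedule(jobs):
    """Extended Jackson (Schrage) rule for a list of (r, p, d) triples.

    At each step, pick the released, unscheduled job with the smallest
    due date; if nothing is released, jump to the earliest release time.
    Returns the processing order and per-job completion times."""
    n = len(jobs)
    by_release = sorted(range(n), key=lambda j: jobs[j][0])
    released = []               # heap of (d_j, j) over released jobs
    order, completion = [], [0] * n
    t, i = 0, 0
    while len(order) < n:
        while i < n and jobs[by_release[i]][0] <= t:
            j = by_release[i]
            heapq.heappush(released, (jobs[j][2], j))
            i += 1
        if not released:        # machine idles until the next release
            t = jobs[by_release[i]][0]
            continue
        _, j = heapq.heappop(released)
        t += jobs[j][1]
        order.append(j)
        completion[j] = t
    return order, completion
```

For r_j = 0 the rule degenerates to EDD; in general it is only a heuristic, with the 3/2 performance guarantee mentioned above.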

If the processing times of all jobs are equal, the complexity can be reduced to O(n log n) [7]. Vakhania [8] generalized this result to the case when the processing times of some jobs are restricted to either p or 2p; an algorithm with a complexity of O(n² log n log p) was suggested.

A case when the job processing times are mutually divisible is considered in [9]. The author suggests a polynomial-time algorithm with a complexity of O(n³ log n log² p_max) operations for solving this case.

Special cases 1|prec; r_j|C_max, 1|prec; p_j = p; r_j|L_max, and 1|prec; r_j; pmtn|L_max with precedence constraints on the jobs have been addressed in the works of Lawler [10], Simons [11], and Baker et al. [12]. Hoogeveen [13] proposed a polynomial algorithm for the special case when the job parameters satisfy the constraints d_j − p_j − A ≤ r_j ≤ d_j − A, j ∈ N, for some constant A. A pseudo-polynomial algorithm for the NP-hard case when the release times and due dates are oppositely ordered (d_1 ≤ … ≤ d_n and r_1 ≥ … ≥ r_n) was developed in [14].

We denote by L_j^A(π) and C_j^A(π) the lateness and completion time of job j ∈ N in schedule π for an instance A with job parameters r_j^A, p_j^A, d_j^A, j ∈ N. Respectively, L_max^A(π) = max_{j∈N} L_j^A(π) is the maximum lateness of the schedule π for instance A.

This chapter deals with a problem with two objective functions, L_max and C_max, which in the general case can be referred to as 1|r_j|L_max, C_max. This problem was considered in [15], where the authors establish some dominance properties and conditions under which the Pareto-optimal set can be formed in polynomial time.

Definition 1.1. For any instance A of the problem, each permutation τ of the jobs of the set N uniquely defines an early schedule π_τ^A. In an early schedule, each job j ∈ N starts immediately after the completion of the previous job in the corresponding permutation; if the completion time of the previous job is less than the release time of the current job, then the current job starts at its release time. That is, if τ = (j_1, j_2, …, j_n), then π_τ^A = (s_{j_1}, s_{j_2}, …, s_{j_n}), where

s_{j_1} = r_{j_1}^A, s_{j_k} = max(s_{j_{k−1}} + p_{j_{k−1}}^A, r_{j_k}^A), k = 2, …, n. (E4)

Early schedules play an important role in our construction, since it is sufficient to check only early schedules to find an optimal schedule for any problem instance.
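Definition 1.1 translates directly into code; the sketch below (our notation, with r, p, d as dictionaries indexed by job) computes the start times (E4) of the early schedule for a permutation, together with both objective values.

```python
def early_schedule(perm, r, p):
    """Start times (E4): each job starts at the later of its release time
    and the completion time of its predecessor in the permutation."""
    start, t = {}, None
    for k, j in enumerate(perm):
        t = r[j] if k == 0 else max(t, r[j])
        start[j] = t
        t += p[j]
    return start

def objectives(perm, r, p, d):
    """(C_max, L_max) of the early schedule induced by the permutation."""
    start = early_schedule(perm, r, p)
    completion = {j: start[j] + p[j] for j in perm}
    return max(completion.values()), max(completion[j] - d[j] for j in perm)
```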

By τ*(A) we denote an optimal permutation and by π*(A) an optimal schedule for instance A. Only early optimal schedules are considered, that is, π*(A) = π_{τ*(A)}^A.

We denote by Π(N) the set of all permutations of the jobs of the set N, and by Π(A) the set of feasible schedules for instance A.


3. Problem 1|d_i ≤ d_j, d_i − r_i − p_i ≥ d_j − r_j − p_j|L_max, C_max

This section deals with the problem of constructing a Pareto-optimal set by the criteria C_max and L_max, that is, problem 1|r_j|L_max, C_max. We suggest an algorithm for constructing a set of schedules Φ(N, t) = {π_1, π_2, …, π_m} for which

C_max(π_1) < C_max(π_2) < … < C_max(π_m), (E5)
L_max(π_1) > L_max(π_2) > … > L_max(π_m). (E6)

There is no schedule π such that C_max(π) ≤ C_max(π_i) and L_max(π) ≤ L_max(π_i), with at least one of the inequalities strict, for some i = 1, …, m. It is shown that m ≤ n.

3.1 Problem properties

We denote the precedence of the jobs i and j in schedule π by i → j (π), meaning that job i is processed before job j. We also introduce

r_j(t) = max{r_j, t}; (E7)
r(N, t) = min_{j∈N} r_j(t). (E8)

When it is obvious which set of jobs is meant, we write r(t) instead of r(N, t).

We assume that the job parameters satisfy the following constraints:

d_1 ≤ … ≤ d_n, d_1 − r_1 − p_1 ≥ … ≥ d_n − r_n − p_n. (E9)

For example, these constraints correspond to the case when d_j = r_j + p_j + z, j = 1, …, n, where z is a constant, that is, when all jobs have the same amount of time available for processing before the due date. A problem with similar constraints but with a single objective function (L_max) is considered in [16].
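The conditions (9) are easy to verify mechanically; the helper below (our illustration, jobs given as (r_j, p_j, d_j) triples in index order) checks both chains of inequalities.

```python
def satisfies_conditions_9(jobs):
    """Check (E9): due dates nondecreasing and slacks d_j - r_j - p_j
    nonincreasing, for jobs listed in index order 1, ..., n."""
    dues = [d for _, _, d in jobs]
    slacks = [d - r - p for r, p, d in jobs]
    return (all(a <= b for a, b in zip(dues, dues[1:]))
            and all(a >= b for a, b in zip(slacks, slacks[1:])))
```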

We assume that |N| > 1 and that t is the time when the machine becomes available. From the set N, we find two jobs f = f(N, t) and s = s(N, t) in the following way:

f(N, t) = arg min {d_j : j ∈ N, r_j(t) = r(N, t)}, (E10)
s(N, t) = arg min {d_j : j ∈ N∖{f}, r_j(t) = r(N∖{f}, t)}, (E11)

where f = f(N, t). If N = {i}, then we set f(N, t) = i and s(N, t) = 0 for all t. We also define d_0 = +∞, f(∅, t) = 0, and s(∅, t) = 0 for all t. For the jobs f and s the following properties hold.
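Definitions (E10)-(E11) can be sketched as follows (our helper; r and d are dictionaries, and None plays the role of the dummy job 0):

```python
def select_f_s(N, t, r, d):
    """Jobs f(N, t) and s(N, t): among the jobs whose adjusted release
    time r_j(t) = max(r_j, t) is minimal, take the one with the smallest
    due date; s is chosen the same way over N without f."""
    def pick(M):
        r_min = min(max(r[j], t) for j in M)
        return min((j for j in M if max(r[j], t) == r_min),
                   key=lambda j: d[j])
    f = pick(N)
    rest = set(N) - {f}
    return f, (pick(rest) if rest else None)
```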

Lemma 1.1. If the jobs of the set N satisfy (9), then for any schedule π ∈ Π(N), for all j ∈ N∖{f} for which j → f (π),

L_j(π) < L_f(π) (E12)

holds, and for all j ∈ N∖{f, s} satisfying the condition j → s (π),

L_j(π) < L_s(π), (E13)

where f = f(N, t) and s = s(N, t), also holds.

Proof: For each job j such that j → f (π), the completion time C_j(π) < C_f(π). If d_j ≥ d_f, then obviously

L_j(π) = C_j(π) − d_j < C_f(π) − d_f = L_f(π), (E14)

and therefore (12) is valid.

Now let j ∈ N with j → f (π) and d_j < d_f; we show that r_j > r_f. If r_j ≤ r_f, then r_j(t) ≤ r_f(t) and r_f(t) = r(t), as follows from (7) and (10). Then r_j(t) = r_f(t) = r(t) and d_j < d_f, which contradicts the definition (10) of job f. Therefore, r_j > r_f. It is obvious that C_j(π) − p_j < C_f(π) − p_f and, since r_j > r_f,

C_j(π) − p_j − r_j < C_f(π) − p_f − r_f, (E15)
C_j(π) − C_f(π) < p_j + r_j − p_f − r_f. (E16)

Since d_j < d_f, from (9) we have d_j − r_j − p_j ≥ d_f − r_f − p_f, that is, d_j − d_f ≥ r_j + p_j − r_f − p_f, so C_j(π) − C_f(π) < p_j + r_j − p_f − r_f ≤ d_j − d_f. Then L_j(π) < L_f(π) for each job j with j → f (π).

The inequality (13) can be proved in a similar way.

For each job j satisfying the condition j → s (π), we have C_j(π) < C_s(π). If d_j ≥ d_s, then L_j(π) = C_j(π) − d_j < C_s(π) − d_s = L_s(π), and therefore (13) is true.

Now let j ∈ N∖{f} with j → s (π) and d_j < d_s; then r_j > r_s. Indeed, assume that r_j ≤ r_s; then r_j(t) ≤ r_s(t) (this follows from (7)). In addition, r_s(t) ≥ r(t) for job s according to definitions (8) and (11). If r_s(t) = r(t), then for the jobs j and s we can write r_j(t) = r_s(t) = r(t) and d_j < d_s, which contradicts the definition (11) of job s(N, t). If r_s(t) > r(t), that is, r_s > r(t), then there is no job i ∈ N∖{f, s} for which r_s > r_i > r(t). Therefore, for the jobs j and s we get r_j(t) = r_s(t) and d_j < d_s, which contradicts the definition (11) of job s(N, t). Therefore, r_j > r_s.

Since C_j(π) ≤ C_s(π) − p_s and p_j > 0, we have C_j(π) − p_j < C_s(π) − p_s, and since r_j > r_s, therefore C_j(π) − p_j − r_j < C_s(π) − p_s − r_s and

C_j(π) − C_s(π) < p_j + r_j − p_s − r_s. (E17)

Since d_j < d_s, from (9) we have d_j − r_j − p_j ≥ d_s − r_s − p_s, that is, d_j − d_s ≥ r_j + p_j − r_s − p_s, or

C_j(π) − C_s(π) < p_j + r_j − p_s − r_s ≤ d_j − d_s. (E18)

Hence, L_j(π) < L_s(π) for each job j ∈ N∖{f}, j → s (π).

Theorem 1.1. If conditions (9) hold for the jobs of a subset N′ ⊆ N, then for any time t′ ≥ t and any early schedule π ∈ Π(N′) there exists π′ ∈ Π(N′) such that

L_max(π′, t′) ≤ L_max(π, t′) and C_max(π′, t′) ≤ C_max(π, t′), (E19)

and one of the jobs f = f(N′, t′) or s = s(N′, t′) is in the first position of schedule π′. If d_f ≤ d_s, then job f is in the first position of schedule π′.

Proof: Let π = (π_1, f, π_2, s, π_3), where π_1, π_2, and π_3 are partial schedules of π. We construct the schedule π′ = (f, π_1, π_2, s, π_3). From the definitions (7), (8), (10) we get r_f(t′) ≤ r_j(t′), j ∈ N′, hence C_max((f, π_1), t′) ≤ C_max((π_1, f), t′) and

C_max(π′, t′) ≤ C_max(π, t′), and (E20)
L_j(π′, t′) ≤ L_j(π, t′), j ∈ (π_2, s, π_3). (E21)

From Lemma 1.1 we have

L_j(π′, t′) < L_s(π′, t′), j ∈ (π_1, π_2). (E22)

Obviously, the following inequality is true for job f:

L_f(π′, t′) ≤ L_f(π, t′). (E23)

From (20)–(23) we get C_max(π′, t′) ≤ C_max(π, t′) and L_max(π′, t′) ≤ L_max(π, t′).

Now let π = (π_1, s, π_2, f, π_3), that is, job s precedes job f. Construct the schedule π′ = (s, π_1, π_2, f, π_3). The rest of the proof repeats the argument above with s in place of f. The first part of the theorem is proved.

Now assume that d_f ≤ d_s and that the schedule is π = (π_1, s, π_2, f, π_3). We construct the schedule π′ = (f, π_11, π_12, π_3), where π_11 and π_12 are schedules of the jobs of the sets {j ∈ N′ : j ∈ (π_1, s, π_2), d_j < d_f} and {j ∈ N′ : j ∈ (π_1, s, π_2), d_j ≥ d_f}, respectively. The jobs in π_11 and π_12 are ordered by nondecreasing release times r_j. From d_s ≥ d_f we conclude that s ∈ π_12.

For each job j ∈ π_11 we have d_j < d_f. From (9) we get d_j − r_j − p_j ≥ d_f − r_f − p_f, hence r_j + p_j < r_f + p_f, j ∈ π_11, and C_max((f, π_11), t′) = r_f(t′) + p_f + Σ_{j∈π_11} p_j. Since the jobs in the schedule π_12 are sorted by nondecreasing release times, C_max((f, π_11, π_12), t′) ≤ C_max((π_1, s, π_2, f), t′). As a result,

C_max(π′, t′) ≤ C_max(π, t′), and (E24)
L_j(π′, t′) ≤ L_j(π, t′), j ∈ π_3. (E25)

Each job j ∈ π_12 satisfies d_j ≥ d_f and C_j(π′, t′) ≤ C_f(π, t′), which means

L_j(π′, t′) ≤ L_f(π, t′), j ∈ π_12. (E26)

Since s ∈ π_12,

L_s(π′, t′) ≤ L_f(π, t′). (E27)

From Lemma 1.1,

L_j(π′, t′) ≤ L_s(π′, t′), j ∈ π_11. (E28)

Moreover, it is obvious that

L_f(π′, t′) ≤ L_f(π, t′). (E29)

From the inequalities (24)–(29) it follows that C_max(π′, t′) ≤ C_max(π, t′) and L_max(π′, t′) ≤ L_max(π, t′). The theorem is proved.

We call a schedule π ∈ Π(N) effective if there is no schedule π′ ∈ Π(N) such that L_max(π′) ≤ L_max(π) and C_max(π′) ≤ C_max(π) with at least one of the inequalities strict.

Thus, when the constraints (9) are satisfied for the jobs of the set N, there is an effective schedule in which either the job f = f(N, t) or the job s = s(N, t) comes first. Moreover, if d_f ≤ d_s, then there is an effective schedule π in which job f comes first.

We define the set of schedules Ω(N, t) as a subset of the set Π(N) of all n! schedules. A schedule π = (i_1, i_2, …, i_n) belongs to Ω(N, t) if each job i_k, k = 1, 2, …, n, is chosen as f_k = f(N_{k−1}, C_{i_{k−1}}) or s_k = s(N_{k−1}, C_{i_{k−1}}), where N_{k−1} = N∖{i_1, i_2, …, i_{k−1}}, C_{i_{k−1}} = C_{i_{k−1}}(π), N_0 = N, and C_{i_0} = t. For d_{f_k} ≤ d_{s_k} it is required that i_k = f_k, while if d_{f_k} > d_{s_k}, then i_k = f_k or i_k = s_k. It is obvious that the set of schedules Ω(N, t) contains at most 2^n schedules.

Example 1.1

n = 2m, t ≤ r_1 < r_2 < … < r_n,
r_{2i−1} < r_{2i} + p_{2i} < r_{2i−1} + p_{2i−1}, 1 ≤ i ≤ m,
r_{2i−1} + p_{2i−1} + p_{2i} < r_{2i+1} < r_{2i} + p_{2i} + p_{2i−1} < r_{2i+2}, 1 ≤ i ≤ m − 1,
r_{2i−1} + p_{2i−1} + p_{2i} − d_{2i−1} > y, 1 ≤ i ≤ m − 1,
r_{2i} + p_{2i} + p_{2i−1} − d_{2i} ≤ y.

The set Ω(N, t) contains 2^m schedules. The value of y is used below in the text. The optimal solution of the problem 1|r_j, d_j = r_j + p_j, L_max ≤ y|C_max is π = (2, 1, 4, 3, …, n, n − 1).

Theorem 1.2. If conditions (9) hold for the jobs of a subset N′ ⊆ N, |N′| = n′, then for any time t′ ≥ t and any schedule π ∈ Π(N′) there exists a schedule π′ ∈ Ω(N′, t′) such that

L_max(π′, t′) ≤ L_max(π, t′) and C_max(π′, t′) ≤ C_max(π, t′). (E30)

Proof: Let π = (j_1, j_2, …, j_n) be an arbitrary schedule. We denote the first l jobs of the schedule π by π^l, l = 0, 1, 2, …, n, where π^0 is the empty schedule, and π̄^l = (j_{l+1}, …, j_n), so that π = (π^l, π̄^l). We introduce N_l = N∖π^l and C_l = C_max(π^l, t). Suppose that for some l, 0 ≤ l < n, π^l is the largest initial partial schedule shared with some schedule from Ω(N, t). If j_1 ≠ f(N, t) and j_1 ≠ s(N, t), then π^l = π^0, l = 0, that is, the largest shared partial schedule is empty. Let f = f(N_l, C_l) and s = s(N_l, C_l). If d_f > d_s, then j_{l+1} ≠ f and j_{l+1} ≠ s; moreover, when d_f ≤ d_s, then j_{l+1} ≠ f, since π^{l+1} is not an initial part of any schedule from Ω(N, t).

According to Theorem 1.1, for the jobs of the set π̄^l, π̄^l ∈ Π(N_l), there is a schedule π̄′^l starting at time C_l for which L_max(π̄′^l, C_l) ≤ L_max(π̄^l, C_l) and C_max(π̄′^l, C_l) ≤ C_max(π̄^l, C_l), and π̄′^l[1] = f or s; moreover, when d_f ≤ d_s, it holds that π̄′^l[1] = f, where σ[k] denotes the job in the k-th place of schedule σ. Hence, L_max((π^l, π̄′^l), t) ≤ L_max((π^l, π̄^l), t) and C_max((π^l, π̄′^l), t) ≤ C_max((π^l, π̄^l), t).

Let us denote π′ = (π^l, π̄′^l). The feature of the schedule π′ is that its first l + 1 jobs coincide with the first l + 1 jobs of some schedule from the set Ω(N, t), and L_max(π′, t) ≤ L_max(π, t), C_max(π′, t) ≤ C_max(π, t).

After at most n such sequential transformations (since the schedule length is n) of the originally chosen schedule π, we arrive at a schedule π′ ∈ Ω(N, t) for which L_max(π′, t) ≤ L_max(π, t) and C_max(π′, t) ≤ C_max(π, t). The theorem is proved.

We form the following partial schedule ω(N, t) = (i_1, i_2, …, i_l). For each job i_k, k = 1, 2, …, l, we have i_k = f_k and d_{f_k} ≤ d_{s_k}, where f_k = f(N_{k−1}, C_{k−1}) and s_k = s(N_{k−1}, C_{k−1}). For f = f(N_l, C_l) and s = s(N_l, C_l), the inequality d_f > d_s holds. In the case when d_f > d_s already for f = f(N, t) and s = s(N, t), we set ω(N, t) = ∅. So ω(N, t) is the “maximal” partial schedule during whose construction the job (namely, f) for the next place in the schedule can be chosen uniquely. We can construct the schedule ω(N, t) for the set of jobs N starting at time t using Algorithm 1.1.

Algorithm 1.1 for constructing the schedule ω(N, t).

1: Initial step. Let ω ← ∅.

2: Main step. Find the jobs f = f(N, t) and s = s(N, t);

3: if d_f ≤ d_s then

4: ω ← (ω, f)

5: else

6: the algorithm stops;

7: end if

8: Let N ← N∖{f}, t ← r_f(t) + p_f and go to the next main step.
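The steps above admit a direct transcription (a sketch in our notation; for clarity it recomputes f and s naively in O(n) per step instead of using the priority queues behind Lemma 1.2):

```python
def omega(N, t, r, p, d):
    """Algorithm 1.1: greedily append job f = f(N, t) while d_f <= d_s;
    stop as soon as d_f > d_s, when the next job is no longer unique.
    Returns the partial schedule and its completion time."""
    def pick(M, tau):
        # f(M, tau): earliest adjusted release max(r_j, tau), ties by d_j
        r_min = min(max(r[j], tau) for j in M)
        return min((j for j in M if max(r[j], tau) == r_min),
                   key=lambda j: d[j])
    remaining, seq = set(N), []
    while remaining:
        f = pick(remaining, t)
        rest = remaining - {f}
        if rest and d[f] > d[pick(rest, t)]:
            break               # d_f > d_s: the algorithm stops
        t = max(r[f], t) + p[f]
        seq.append(f)
        remaining = rest
    return seq, t
```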

Lemma 1.2. The complexity of Algorithm 1.1 for finding the schedule ω(N, t) is at most O(n log n) operations for any N and any t.

Proof: At each iteration of Algorithm 1.1 we find two jobs, f = f(N, t) and s = s(N, t). If the jobs are ordered by release times r_j (so that, accordingly, the time r(t) is found in O(1) operations), then finding the two jobs f and s requires O(log n) operations. The total number of iterations is at most n. Thus, constructing the schedule ω(N, t) requires O(n log n) operations.

The main step of Algorithm 1.1 is finding the jobs f and s, and it requires at least O(log n) operations. Obviously, the number of iterations of the algorithm is O(n); therefore, the complexity O(n log n) of Algorithm 1.1 is the minimum possible for constructing the schedule ω(N, t).

Lemma 1.3. If conditions (9) hold for the jobs of the set N, then any schedule π ∈ Ω(N, t) starts with the schedule ω(N, t).

Proof: If ω(N, t) = ∅, that is, d_f > d_s, where f = f(N, t) and s = s(N, t), the statement of the lemma is true, since any schedule starts with the empty one.

Let ω(N, t) = (i_1, i_2, …, i_l), l > 0, so that for each i_k, k = 1, 2, …, l, we have i_k = f_k and d_{f_k} ≤ d_{s_k}, where f_k = f(N_{k−1}, C_{k−1}) and s_k = s(N_{k−1}, C_{k−1}). For f = f(N_l, C_l) and s = s(N_l, C_l) it holds that d_f > d_s. As is seen from the definition of the set of schedules Ω(N, t), all schedules in this subset start with the partial schedule ω(N, t).

Let us use the following notation: ω_1(N, t) = (f, ω(N′, t′)) and ω_2(N, t) = (s, ω(N″, t″)), where f = f(N, t), s = s(N, t), N′ = N∖{f}, N″ = N∖{s}, t′ = r_f(t) + p_f, and t″ = r_s(t) + p_s. Obviously, the algorithm for finding ω_1 (as well as ω_2) requires O(n log n) operations, the same as the algorithm for constructing ω(N, t).

Corollary 1.1 (of Lemma 1.3). If the jobs of the set N satisfy conditions (9), then each schedule π ∈ Ω(N, t) starts either with ω_1(N, t) or with ω_2(N, t).

Theorem 1.3. If the jobs of the set N satisfy conditions (9), then for any schedule π ∈ Ω(N, t) it holds that i → j (π) for any i ∈ ω_1(N, t) and j ∈ N∖ω_1(N, t).

Proof: In the case ω_1(N, t) = N the statement of the theorem is obviously true. Let ω_1(N, t) ≠ N. Further in the proof we use the notation ω_1 = ω_1(N, t).

If f = f(N, t) and s = s(N, t) are such that d_f ≤ d_s, then all schedules from the set Ω(N, t) begin with the partial schedule ω(N, t) = ω_1, and therefore the statement of the theorem is also true.

Consider the case d_f > d_s. All schedules from the set Ω(N, t) that start with job f begin with the partial schedule ω_1.

Let us choose an arbitrary schedule π ∈ Ω(N, t) in which job s comes first, π[1] = s, and let |ω_1| = l, l < n, be the number of jobs in ω_1. Let π^l = (j_1, j_2, …, j_l) be the partial schedule of π containing l jobs, where j_1 = s. We need to prove that π^l and ω_1 consist of the same jobs. Assume the contrary: there is a job j ∈ π^l with j ∉ ω_1.

In the case j → f (π) we need to check two subcases. If d_j < d_f, then from (9) we have d_j − r_j − p_j ≥ d_f − r_f − p_f, and therefore r_j + p_j < r_f + p_f. Then job j is included in the schedule ω_1 according to the definition of ω(N, t) and ω_1, contrary to our assumption j ∉ ω_1. If d_j ≥ d_f, then from π ∈ Ω(N, t) it follows that f → j (π), which contradicts j → f (π). Therefore, j ∈ ω_1.

The other case is f → j (π). Then for each job i ∈ ω_1 with i ∉ π^l, the conditions r_i < r_i + p_i ≤ C_max(ω_1) < r_{s_{l+1}} ≤ r_j hold, because j ∉ ω_1, where s_{l+1} = s(N∖ω_1, C_max(ω_1)). The jobs s_{l+1} and j were not included in the schedule ω_1; therefore, C_max(ω_1) < r_{s_{l+1}} ≤ r_j. Besides, d_i ≤ d_j: if d_i > d_j, then r_i + p_i ≥ r_j + p_j, but r_i + p_i < r_j holds. Hence i → j (π^l), since π = (π^l, π̄^l) ∈ Ω(N, t), which contradicts i ∉ π^l and j ∈ π^l.

Therefore, our assumption is not true and ω_1 = π^l. The theorem is proved.

Therefore, the jobs of the set ω_1(N, t) precede the jobs of the set N∖ω_1(N, t) in any schedule from the set Ω(N, t), including an optimal schedule.

3.2 Performance problem with constraint on maximum lateness

The problem 1|d_i ≤ d_j, d_i − r_i − p_i ≥ d_j − r_j − p_j; L_max ≤ y|C_max consists of the following. For any given y, we need to find a schedule θ with C_max(θ) = min{C_max(π) : L_max(π) ≤ y}. If L_max(π) > y for every π ∈ Π(N), then θ = ∅.

Lemma 1.4. The complexity of Algorithm 1.2 does not exceed O(n² log n) operations.

Proof: At each iteration of the main step of Algorithm 1.2 we find the schedules ω_1 and, if necessary, ω_2 in O(n log n) operations. Since ω_1 and ω_2 consist of at least one job, at each iteration of the algorithm we either add one or more jobs to the schedule θ, or set θ = ∅ and stop. Therefore, the total number of iterations of the algorithm is at most n. Thus, Algorithm 1.2 requires O(n² log n) operations.

Algorithm 1.2 for solving the problem 1|d_i ≤ d_j, d_i − r_i − p_i ≥ d_j − r_j − p_j; L_max ≤ y|C_max.

1: Initial step. Let θ ← ω(N, t);

2: Main step.

3: if L_max(θ, t) > y then

4: θ ← ∅ and the algorithm stops.

5: end if

6: Find N ← N∖θ, t ← C_max(θ) and ω_1(N, t), ω_2(N, t).

7: if N = ∅ then

8: the algorithm stops.

9: else

10: if L_max(ω_1, t) ≤ y then

11: θ ← (θ, ω_1) and go to the next step;

12: end if

13: if L_max(ω_1, t) > y and L_max(ω_2, t) ≤ y then

14: θ ← (θ, ω_2) and go to the next step;

15: end if

16: if L_max(ω_1, t) > y and L_max(ω_2, t) > y then

17: θ ← ∅ and the algorithm stops.

18: end if

19: end if

The problem 1|d_i ≤ d_j, d_i − r_i − p_i ≥ d_j − r_j − p_j; L_max ≤ y|C_max cannot be solved in fewer than O(n² log n) operations, as Example 1.1 shows. The optimal schedule for this example is π = (2, 1, 4, 3, …, n, n − 1), and finding it requires O(n² log n) operations.
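A runnable sketch of Algorithm 1.2 follows (our transcription; naive O(n) job selection instead of the O(log n) data structures, so the constants differ from Lemma 1.4 but the logic is the same): at each step we try the extension ω_1 and fall back to ω_2, returning None when neither keeps the maximum lateness within y.

```python
def _pick(M, t, r, d):
    """Job f(M, t): earliest adjusted release, ties broken by due date."""
    r_min = min(max(r[j], t) for j in M)
    return min((j for j in M if max(r[j], t) == r_min), key=lambda j: d[j])

def _omega(N, t, r, p, d):
    """Algorithm 1.1: append f while d_f <= d_s; return (sequence, time)."""
    remaining, seq = set(N), []
    while remaining:
        f = _pick(remaining, t, r, d)
        rest = remaining - {f}
        if rest and d[f] > d[_pick(rest, t, r, d)]:
            break
        t = max(r[f], t) + p[f]
        seq.append(f)
        remaining = rest
    return seq, t

def _lmax(seq, t, r, p, d):
    """Maximum lateness and completion time of an early partial schedule."""
    lm = float("-inf")
    for j in seq:
        t = max(r[j], t) + p[j]
        lm = max(lm, t - d[j])
    return lm, t

def theta(N, t, y, r, p, d):
    """Algorithm 1.2: the C_max-fastest schedule with L_max <= y, or None."""
    cur, _ = _omega(N, t, r, p, d)
    if cur and _lmax(cur, t, r, p, d)[0] > y:
        return None
    while True:
        rest = set(N) - set(cur)
        if not rest:
            return cur
        tc = _lmax(cur, t, r, p, d)[1]
        f = _pick(rest, tc, r, d)
        w1 = [f] + _omega(rest - {f}, max(r[f], tc) + p[f], r, p, d)[0]
        if _lmax(w1, tc, r, p, d)[0] <= y:
            cur = cur + w1
            continue
        if len(rest) < 2:       # no candidate s: infeasible
            return None
        s = _pick(rest - {f}, tc, r, d)
        w2 = [s] + _omega(rest - {s}, max(r[s], tc) + p[s], r, p, d)[0]
        if _lmax(w2, tc, r, p, d)[0] <= y:
            cur = cur + w2
            continue
        return None             # both omega_1 and omega_2 violate y
```

On a two-job fragment in the spirit of Example 1.1 (d_j = r_j + p_j), tightening y switches the answer from the job order (2, 1) to infeasible.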

We denote by θ(N, t, y) the schedule constructed by Algorithm 1.2, starting at time t, from the jobs of the set N, with maximum lateness at most y. If N = ∅, then θ(∅, t, y) = ∅ for any t and y.

Theorem 1.4. Let the jobs of the set N satisfy conditions (9). If Algorithm 1.2 constructs a schedule θ = θ(N, t, y) ≠ ∅, then C_max(θ) = min{C_max(π) : L_max(π) ≤ y, π ∈ Π(N)}. If, as a result of Algorithm 1.2, no schedule is generated, that is, θ(N, t, y) = ∅, then L_max(π) > y for each π ∈ Π(N).

Proof: If for a schedule π ∈ Π(N) the condition L_max(π) ≤ y is true, then according to Theorem 1.2 there is a schedule π′ ∈ Ω(N, t) such that L_max(π′) ≤ L_max(π) ≤ y and C_max(π′) ≤ C_max(π). Therefore, the required schedule θ is contained in the set Ω(N, t).

According to Lemma 1.3, all schedules of the set Ω(N, t) start with ω(N, t). Let us take θ^0 = ω(N, t).

After k ≥ 0 main steps of Algorithm 1.2 we obtain the schedule θ^k and N′ = N∖θ^k, t′ = C_max(θ^k). Let us assume that there is a schedule θ, optimal by the criterion of maximum completion time (C_max), starting with θ^k. According to Theorem 1.2, there is an optimal extension of the schedule θ^k among the schedules of the set Ω(N′, t′).

Let θ^{k+1} = (θ^k, ω_1(N′, t′)), that is, L_max(θ^{k+1}) ≤ y. According to Theorem 1.3, the schedule ω_1 = ω_1(N′, t′) contains no artificial idle times of the machine, and all schedules from the set Ω(N′, t′) start with the jobs of the set ω_1(N′, t′). Therefore, ω_1(N′, t′) is the best, by the criterion C_max, among all extensions of the partial schedule θ^k that are feasible with respect to the maximum lateness (L_max).

If θ^{k+1} = (θ^k, ω_2(N′, t′)), then L_max(ω_1, t′) > y and L_max(ω_2, t′) ≤ y. All schedules of the set Ω(N′, t′) start with either the schedule ω_1(N′, t′) or ω_2(N′, t′). As L_max(ω_1, t′) > y, the only suitable extension is ω_2(N′, t′).

Thus, at each main step of the algorithm, we choose the fastest continuation of the partial schedule θ^k among all those allowed by the maximum lateness. After at most n main steps of the algorithm, the required schedule is constructed.

Let us assume that after k + 1 steps of the algorithm, L_max(ω_1, t′) > y and L_max(ω_2, t′) > y. If the schedule θ existed, that is, θ ≠ ∅, then θ would start with θ^k. Then for any schedule π ∈ Π(N′) there would exist a schedule π′ ∈ Ω(N′, t′) such that L_max(π, t′) ≥ L_max(π′, t′) ≥ L_max(ω_1, t′) > y or L_max(π, t′) ≥ L_max(π′, t′) ≥ L_max(ω_2, t′) > y. Therefore, θ = ∅.

Repeating this argument for each main step of Algorithm 1.2 (at most n times), we arrive at the statement of the theorem.

3.3 Algorithm for constructing a set of Pareto schedules by criteria C max and L max

Let us develop an algorithm for constructing a set of Pareto-optimal schedules Φ(N, t) = {π_1, π_2, …, π_m}, m ≤ n, by the criteria C_max and L_max according to conditions (5)–(6).

The schedule π_m is a solution to the problem 1|r_j|L_max if (9) holds.

Algorithm 1.3 for constructing a set of Pareto-optimal schedules by the criteria C_max and L_max.

1: Initial step. y ← +∞, π ← ω(N, t), Φ ← ∅, m ← 0, N ← N∖π and t ← C_max(π).

2: if N = ∅ then

3: Φ ← Φ ∪ {π}, m ← 1 and the algorithm stops.

4: end if

5: Main step.

6: if L_max(ω_1, t) ≤ L_max(π) then

7: π ← (π, ω_1), where ω_1 = ω_1(N, t), and go to the next step;

8: end if

9: if L_max(ω_1, t) > L_max(π) then

10: if L_max(ω_1, t) < y then

11: find θ = θ(N, t, y) using Algorithm 1.2, where y = L_max(ω_1, t);

12: if θ = ∅ then

13: π ← (π, ω_1) and go to the next step;

14: else

15: π ← (π, θ)

16: if C_max(π_m) < C_max(π) then

17: m ← m + 1, π_m ← π, Φ ← Φ ∪ {π_m}, y ← L_max(π_m);

18: else

19: π_m ← π and go to the next step;

20: end if

21: end if

22: if L_max(ω_1, t) ≥ y then

23: find ω_2 = ω_2(N, t);

24: if L_max(ω_2, t) < y then

25: π ← (π, ω_2) and go to the next step;

26: else

27: π ← π_m and the algorithm stops.

28: end if

29: end if

30: end if

31: end if

As a result of Algorithm 1.3, a set of schedules Φ(N, t) is constructed for the set of jobs N starting at time t, for which the inequality 1 ≤ |Φ(N, t)| ≤ n holds. We should note that the set Φ(N, t) for Example 1.1 consists of two schedules, although the set Ω(N, t) consists of 2^{n/2} schedules:

π_1 = (1, 2, 3, 4, …, n − 1, n), (E31)
π_2 = (2, 1, 4, 3, …, n, n − 1). (E32)
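Since Φ(N, t) is small, claims like this are easy to cross-check by brute force on tiny instances; the sketch below (our validation aid, exponential in n, jobs as (r_j, p_j, d_j) triples with the machine ready at time 0) enumerates all early schedules and keeps the nondominated (C_max, L_max) pairs ordered as in (5)–(6).

```python
from itertools import permutations

def pareto_front_bruteforce(jobs):
    """All nondominated (C_max, L_max) pairs over the n! early schedules,
    sorted by increasing C_max (so L_max strictly decreases, as in (5)-(6))."""
    points = set()
    for perm in permutations(range(len(jobs))):
        t, lmax = 0, float("-inf")
        for j in perm:
            r, p, d = jobs[j]
            t = max(t, r) + p           # early-schedule completion time
            lmax = max(lmax, t - d)
        points.add((t, lmax))
    front = []
    for c, l in sorted(points):
        if not front or l < front[-1][1]:
            front.append((c, l))
    return front
```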

Lemma 1.5. The complexity of Algorithm 1.3 does not exceed O(n³ log n) operations.

Proof: At each iteration of the main step of Algorithm 1.3 we find the schedules ω_1 and, if necessary, ω_2, which requires O(n log n) operations according to Lemma 1.2, and also the schedule θ in O(n² log n) operations. As ω_1 and ω_2 consist of at least one job, at each iteration of the algorithm one or more jobs are added to the schedule π, or the algorithm stops at the last schedule π. Therefore, the total number of iterations is at most n. Thus, it takes no more than O(n³ log n) operations to execute Algorithm 1.3.

Theorem 1.5. If (9) is true for each job of the set N, then the schedule π constructed by Algorithm 1.3 is optimal according to the criterion L_max. Moreover, for any schedule π′ ∈ Π(N) there exists a schedule π″ ∈ Φ(N, t) such that L_max(π″) ≤ L_max(π′) and C_max(π″) ≤ C_max(π′).

Proof: According to Theorem 1.2, there exists an optimal (by L_max) schedule in the set Ω(N, t). According to Lemma 1.3, all schedules of the set Ω(N, t) start with the partial schedule ω(N, t).

Let π^0 = ω(N, t). After k ≥ 0 main steps of Algorithm 1.3 we have a partial schedule π^k. Suppose there is an optimal (by L_max) schedule starting with π^k. We denote N′ = N∖π^k and t′ = C_max(π^k).

If π^{k+1} = (π^k, ω_1), where ω_1 = ω_1(N′, t′), then either L_max(ω_1, t′) ≤ L_max(π^k), or L_max(π^k) < L_max(ω_1, t′) < y, that is, the current value of the criterion of maximum lateness will “appear” at the next steps of Algorithm 1.3; in the latter case θ(N′, t′, y′) = ∅, where y′ = L_max(ω_1, t′). If θ = θ(N′, t′, y′) ≠ ∅, then we improve the current maximum lateness value: π = (π^k, θ) and y = L_max(π) = L_max(ω_1, t′). The schedule π is added to the set of schedules Φ(N, t). Moreover, according to Theorem 1.3, the jobs of the set ω_1 precede the jobs of the set N′∖ω_1. Thus, the schedule ω_1 (without artificial idle times of the machine) is the best continuation of π^k.

If π^{k+1} = (π^k, ω_2), where ω_2 = ω_2(N′, t′), then, according to Algorithm 1.3, L_max(ω_2, t′) < L_max(π) ≤ L_max(ω_1, t′). In this case the continuation ω_2 is “better” than ω_1. Hence, the partial schedule π^{k+1} is a part of some optimal schedule.

Repeating this argument at most n times, we arrive at the optimality (by L_max) of the schedule π.

The set of schedules Φ(N, t) contains at most n schedules, since at each main step of the algorithm at most one schedule is added to the set Φ(N, t), and this step is executed at most n times.

Suppose there is a schedule π′ ∈ Π(N), π′ ∉ Φ(N, t), such that for each schedule π ∈ Φ(N, t) either C_max(π′) < C_max(π) and L_max(π′) ≤ L_max(π), or C_max(π′) ≤ C_max(π) and L_max(π′) < L_max(π). According to Theorem 1.1, there is a schedule π″ ∈ Ω(N, t) such that L_max(π″) ≤ L_max(π′) and C_max(π″) ≤ C_max(π′). If π″ ∈ Φ(N, t), it immediately becomes obvious that our assumption is not correct. So let π″ ∈ Ω(N, t)∖Φ(N, t). Algorithm 1.3 shows that the structure of each schedule π ∈ Φ(N, t) can be represented as a sequence of partial schedules π = (ω^0, ω^1, ω^2, …, ω^k), where ω^0 = ω(N, t), each ω^i is either ω_1(N_i, C_i) or ω_2(N_i, C_i), and N_i = N∖(ω^0, …, ω^{i−1}), C_i = C_max((ω^0, …, ω^{i−1}), t), i = 1, 2, …, k. The schedule π″ has the same structure according to the definition of the set Ω(N, t), that is, π″ = (ω′^0, ω′^1, ω′^2, …, ω′^{k′}), possibly with k′ ≠ k, where ω′^0 = ω^0 = ω(N, t), each ω′^i is equal to either ω_1(N′_i, C′_i) or ω_2(N′_i, C′_i), and N′_i = N∖(ω′^0, …, ω′^{i−1}), C′_i = C_max((ω′^0, …, ω′^{i−1}), t), i = 1, 2, …, k′.

Assume that the first k″ partial schedules of π and π″ coincide, that is, ω^i = ω′^i for i = 0, 1, …, k″ − 1, and ω^{k″} ≠ ω′^{k″}. With y″ = L_max((ω^0, …, ω^{k″−1})), let us construct a schedule θ = θ(N_{k″}, C_{k″}, y″) using Algorithm 1.2. If θ = ∅, then, according to Algorithm 1.3, ω^{k″} = ω_1(N_{k″}, C_{k″}); because ω^{k″} ≠ ω′^{k″}, the schedule ω′^{k″} = ω_2(N_{k″}, C_{k″}). The value of the objective function (L_max) is attained on a job of the set N_{k″}, since θ = ∅. The whole structure of Algorithm 1.3 is built in such a way that, up to the “critical” job (according to L_max), the jobs are ordered as “tightly” as possible; therefore, we continue with the schedule ω_1, after which C_max(π) ≤ C_max(π″) and L_max(π) ≤ L_max(π″). If θ ≠ ∅, then for the schedules π and π″ we have C_max(π) ≤ C_max(π″) and L_max(π) = L_max(π″). Thus, for any schedule π″ ∈ Ω(N, t)∖Φ(N, t) there exists a schedule π ∈ Φ(N, t) such that C_max(π) ≤ C_max(π″) and L_max(π) ≤ L_max(π″). Hence, for any schedule π′ ∈ Π(N) there exists a schedule π ∈ Φ(N, t) such that L_max(π) ≤ L_max(π′) and C_max(π) ≤ C_max(π′). The theorem is proved.

Figure 1 schematically shows the constructed set of schedules.

Figure 1.

The set of Pareto-optimal schedules.

For the set of schedules Φ(N, t) = {π_1, π_2, …, π_m}, m ≤ n, conditions (5)–(6) are true.

The schedule π_1 is optimal in terms of speed (by C_max), and π_m is optimal in terms of the maximum lateness (by L_max) if the jobs of the set N satisfy the conditions (9).


4. Conclusions

A single machine scheduling problem with given release dates and two objective functions, which is NP-hard in the strong sense, has been considered in this chapter. A number of new polynomially and pseudo-polynomially solvable subcases of the problem have been found. For the case when

d_1 ≤ … ≤ d_n, d_1 − r_1 − p_1 ≥ … ≥ d_n − r_n − p_n, (E33)

an algorithm for constructing a Pareto-optimal set of schedules by the criteria C_max and L_max has been developed. It is proved that the complexity of the algorithm does not exceed O(n³ log n) operations.

An experimental study of the algorithm showed that it can be used to construct optimal schedules (by L_max) even for instances that do not satisfy the conditions (33).

Advertisement

Acknowledgments

The research was supported by RFBR (project 20-58-S52006).

References

  1. Lenstra JK, Rinnooy Kan AHG, Brucker P. Complexity of machine scheduling problems. Annals of Discrete Mathematics. 1977;1:343-362
  2. Graham RL, Lawler EL, Lenstra JK, Rinnooy Kan AHG. Optimization and approximation in deterministic sequencing and scheduling: A survey. Annals of Discrete Mathematics. 1979;5:287-326
  3. Potts CN. Analysis of a heuristic for one machine sequencing with release dates and delivery times. Operations Research. 1980;28:1436-1441
  4. Jackson JR. Scheduling a Production Line to Minimize Maximum Tardiness. Los Angeles, CA: University of California; 1955. Manag. Sci. Res. Project, Research Report N 43
  5. Hall LA, Shmoys DB. Jackson’s rule for one-machine scheduling: Making a good heuristic better. Mathematics of Operations Research. 1992;17:22-35
  6. Mastrolilli M. Efficient approximation schemes for scheduling problems with release dates and delivery times. Journal of Scheduling. 2003;6(6):521-531
  7. Garey MR, Johnson DS, Simons BB, Tarjan RE. Scheduling unit-time tasks with arbitrary release times and deadlines. SIAM Journal on Computing. 1981;10:256-269
  8. Vakhania N. Single-machine scheduling with release times and tails. Annals of Operations Research. 2004;129(1-4):253-271
  9. Vakhania N. Dynamic restructuring framework for scheduling with release times and due-dates. Mathematics. 2019;7(11):1104
  10. Lawler EL. Optimal sequencing of a single machine subject to precedence constraints. Management Science. 1973;19(5):544-546
  11. Simons BB. A fast algorithm for single processor scheduling. In: Proceedings of the 19th Annual IEEE Symposium on Foundations of Computer Science, Ann Arbor, MI; 1978. pp. 246-252
  12. Baker KR, Lawler EL, Lenstra JK, Rinnooy Kan AHG. Preemptive scheduling of a single machine to minimize maximum cost subject to release dates and precedence constraints. Operations Research. 1983;31(2):381-386
  13. Hoogeveen JA. Minimizing maximum promptness and maximum lateness on a single machine. Mathematics of Operations Research. 1996;21:100-114
  14. Lazarev AA, Shulgina ON. Polynomially solvable subcases of the problem of minimizing maximum lateness. Izvestiya VUZov. Mathematics. 2000 (in Russian)
  15. Vakhania N. Scheduling a single machine with primary and secondary objectives. Algorithms. 2018;11(6):80
  16. Vakhania N. Fast solution of single-machine scheduling problem with embedded jobs. Theoretical Computer Science. 2019;782:91-106
