Adaptive Production Scheduling and Control in One-Of-A-Kind Production

Mass customization is one of the competitive strategies in modern manufacturing (Blecker & Friedrich, 2006); its objective is to maximize customer satisfaction by producing highly customized products with high production efficiency. There are two starting points for moving towards mass customization: mass production and one-of-a-kind production (OKP). The production volume in mass production is normally large, whereas that in OKP is usually small, even just one. Mass production achieves high production efficiency but relatively low customization, because products are designed around standard product families and produced repetitively in large volumes. OKP, by contrast, achieves high customization but relatively low production efficiency, because product design in OKP involves the customer heavily, and each customer has different requirements. The variation of customer requirements therefore makes each product different. To improve production efficiency, OKP companies use mixed-product production on a flow line (Dean et al., 2008, 2009). Moreover, production scheduling and control on OKP shop floors is severely challenged by the variation of customer requirements, whereas that in mass production is comparatively simple. We therefore focus on adaptive production scheduling and control for OKP.


Introduction

Characteristics of one-of-a-kind production
OKP is product-oriented, not capacity-oriented (Tu, 1996a). Customers can only choose a product within one of the product families provided by an OKP company. Although customer choice is confined to product families, OKP is so customer-involved that every product is highly customized to specific customer requirements, and products differ in colors, shapes, dimensions, functionalities, materials, processing times, and so on. Consequently, production of a product is rarely repeated in OKP (Wortmann et al., 1997). Moreover, OKP companies usually adopt a make-to-order or engineer-to-order market strategy, so it is very important to meet the promised due dates in OKP. This market strategy challenges production scheduling and control differently from make-to-stock.
Typically, five types of problems challenge production scheduling and control in an OKP company, starting with frequent job insertion or cancellation. Here is a real situation in Gienow Windows and Doors, Canada. Without a computer-aided system for adaptive production scheduling and control, an experienced human scheduler in Gienow carries out scheduling three days before the real production; it is an offline scheduling. Processing times of operations are quoted from Gienow's standards, which are the average processing times of similar operations in the past. On the production day, production is initially carried out according to the offline schedule. However, the real processing times of highly customized products might not be exactly the same as the quoted ones. Therefore, customer orders may be finished earlier or later than scheduled offline. This causes problems such as the overflow of WIP inventories, the delay of customer orders, and so on. The production delay of customer orders is not allowed in Gienow, because the delivery schedule has a high priority. In addition, unexpected supply delays, machine breakdowns and operator absences can cause even more problems. To cope with these dynamic disturbances, the shop floor managers and production scheduler in Gienow carry out the following activities based on their experience:
1. Re-allocate operators among work stages in a production line or lines.
2. Change the job sequence.
3. Postpone the production of other orders purely for a rush order.
4. Cancel or insert orders into the current production.
5. Alter the production routing to divert orders from one production line to another.
6. Add more work shifts or overtime working.
Carrying out these activities by experience may avoid the overflow of WIP inventories in one stage or line but cause it in others, smoothing the production progress in one stage while slowing down the overall progress in Gienow. Due to the lack of an efficient computer system, Gienow performs adaptive production scheduling and control manually and inefficiently. Obviously, OKP shop floors have to be adaptively scheduled and controlled by a computer-aided system (Wortmann et al., 1997; Tu, 1996b).
The rest of this chapter is organized as follows. Section 2 gives a brief literature review on flow shop production scheduling. Section 3 introduces a computer-aided production scheduling system for adaptive production scheduling and control in OKP, consisting of a feedback control scheme and a state space (SS) heuristic. Section 4 gives the results of various case studies. Finally, section 5 draws conclusions and proposes future work.

Literature review
In this section, we briefly review research on flow shop production scheduling from two perspectives, seeking optimal solutions and seeking near-optimal solutions, and then discuss the requirements of heuristics for adaptive production scheduling and control.

Definition of flow shop scheduling
Scheduling is a decision-making process of allocating resources to jobs over time to optimize one or more objectives. According to Pinedo (2002), one type of flow shop consists of m machines in series, and each job has the same flow pattern on the m machines. This is typically called a traditional flow shop (TFS). Another type of flow shop is called a flexible flow shop or hybrid flow shop (HFS), where there are a number of machines/operators in parallel in each of S stages. In addition to the difference in flow shop configurations, processing constraints also differ between TFS and HFS. For TFS, if the first in first out (FIFO) rule is applied to jobs in WIP inventories, it becomes a no-preemption flow shop problem. It is also called a permutation (prmu) flow shop problem, because the processing sequence of jobs on each machine is the same. For HFS, because there are multiple machines/operators in a stage, the first job coming into a stage might not be the first job coming out of the stage. Therefore, the first come first serve (FCFS) rule is applied (Pinedo, 2002); consequently, it is still a no-preemption flow shop problem. Another processing constraint could be no waiting (nwt), that is, there is no intermediate storage or WIP inventory between two machines or stages. The most common objective of flow shop scheduling is to minimize the maximum completion time or makespan, i.e. min(C_max). By the three-parameter notation α/β/γ (Graham et al., 1979), the above problems can be denoted as Fm/prmu/C_max for m-machine TFS problems with no preemption to minimize makespan, Fm/nwt/C_max for m-machine TFS problems with no waiting, FFs/FCFS/C_max for S-stage HFS problems with FCFS, and FFs/nwt/C_max for S-stage HFS problems with no waiting.

Johnson's algorithm
Johnson proposed his seminal algorithm for obtaining optimal solutions to n-job 2-machine flow shop problems, with the objective of min(C_max), in 1954 (Johnson, 1954). The mathematical proof of his algorithm using combinatorial analysis is as follows.
The sum of processing times of the n jobs on the last machine is a constant. Thus, the objective of min(C_max) is converted to minimizing the sum of idle times on the last machine. Johnson models the cumulative idle time caused on machine 2 up to the job in position u as

K_u = Σ_{i=1}^{u} p_i,1 − Σ_{i=1}^{u−1} p_i,2,

in which p_i,1 and p_i,2 are the processing times of job i on machine 1 and machine 2 respectively; the total idle time on machine 2 for a sequence is max_{1≤u≤n} K_u.
To illustrate how to sequence the n jobs, Johnson uses a combinatorial analysis approach, which compares two sequences, {σ, i, i+1, τ} and {σ, i+1, i, τ}. The only difference between the two sequences is that jobs i and i+1 exchange positions; σ is the subset of already-selected jobs and τ the subset of unselected jobs, with σ, {i}, {i+1} and τ mutually disjoint and σ ∪ {i} ∪ {i+1} ∪ τ = {1,…,n}. An optimal ordering of jobs is given by the following scheme: job i precedes job i+1 if max{K_1,u, K_1,u+1} ≤ max{K_2,u, K_2,u+1}, where the first subscript denotes the sequence and the second the position.

Extension of combinatorial approach
Dudek and Teuton extend Johnson's combinatorial approach to n-job m-machine flow shop problems to min(C_max) (Dudek & Teuton, 1964), comparing the same two sequences as in Johnson's proof, and then develop their dominance conditions. Dudek and Teuton began the analytical framework for the development of dominance conditions for flow shop scheduling, although their initial method was later shown to be incorrect (Karush, 1965).
Smith and Dudek correct Dudek and Teuton's combinatorial approach by introducing partial enumeration into the dominance conditions (Smith & Dudek, 1967). They propose two dominance checks: a job dominance check and a sequence dominance check. The job dominance check compares two sequences, {σ, i, i+1, τ', τ''} and {σ, i+1, τ', i, τ''}, in which τ' and τ'' range over all possible combinations of exclusive subsets of the unselected jobs. The sequence dominance check compares another two sequences, {σ, τ} and {σ', τ}, in which σ and σ' are different permutations of the same selected jobs. The two dominance checks theoretically guarantee the optimal solution, but in practice are still time consuming.
Based on D-T's framework, Szwarc proposes an elimination rule different from S-D's dominance checks (Szwarc, 1971a, 1971b). Let t(σa, k) be the completion time on machine M_k of all jobs in the sequence σa. Then t(σa, k) = max{t(σa, k−1), t(σ, k)} + p_a,k, with t(Ø, k) = t(σ, 0) = 0, for k = 1,…,m. Define the difference in completion times of two sequences as Δ_k = t(σab, k) − t(σb, k), for k = 2,…,m. The elimination rule eliminates all sequences beginning with σb if Δ_{k−1} ≤ Δ_k ≤ p_a,k. However, Szwarc clearly stated that "if there is no job c such that for all k: c_1 ≤ c_k or c_m ≤ c_k, then no single sequence could be eliminated. In this case, the elimination method offers no advantage since we could have to consider all n! sequences".

Branch and bound methods
Besides the combinatorial approach, the branch and bound (BB) method is a general framework for NP-hard problems. It can be used to obtain optimal solutions to flow shop scheduling problems (Ignall & Schrage, 1965; Lageweg et al., 1978).
Usually, there are three main components in a BB method: a search tree, a search strategy, and a lower bound. A search tree represents the solution space of a problem (Fig. 2.2); the nodes on the tree represent subsets of solutions, and the descendants or child-nodes are given by a branching scheme. For an n-job m-machine flow shop problem, the search tree begins with a virtual node 0. For the first position in a sequence, there are n candidates or nodes, i.e. each of the n jobs can be a candidate for position 1. If one job is selected for position 1, it will have n−1 descendants or child-nodes. Consequently, there are n×(n−1) nodes for position 2, n×(n−1)×(n−2) nodes for position 3, and finally n! nodes for the last position n.

Fig. 2.2 A solution space of a BB method
At each node, a lower bound on the makespan is calculated for all permutations that descend from it. For each position, all nodes are examined and the node with the least lower bound is chosen for branching. When a node represents an allocation of all jobs and has a makespan less than or equal to the lower bounds of all unexplored nodes, it is an optimal solution.

Heuristics for near-optimal solutions
Framinan et al. propose a general framework for the development of heuristics (Framinan et al., 2004). It has three phases: index development, solution construction and solution improvement. In phase 1, index development, a heuristic arranges jobs according to a certain property of their processing times. For example, Campbell et al. propose the CDS heuristic for the n-job m-machine TFS problem to min(C_max) (Campbell et al., 1970). CDS arranges jobs as follows. If a counter (Ctr) points to machine j, then for each job i (i = 1,…,n) the sum of its processing times on the first Ctr machines is regarded as its processing time on virtual machine 1, and the sum on the remaining m−Ctr machines as that on virtual machine 2. Johnson's algorithm (JA) is then applied to this virtual 2-machine flow shop problem to get a sequence. As Ctr changes from machine 1 to machine m−1, m−1 sequences are generated by CDS, and the one with the minimum makespan is the final solution. In phase 2, solution construction, a heuristic constructs a job sequence by a recursive procedure, inserting one unscheduled job at a time into a partial sequence until all jobs are inserted. The NEH heuristic (Nawaz et al., 1983), for the n-job m-machine TFS problem to min(C_max), is a typical phase 2 heuristic. NEH constructs a job sequence as follows.
Step 1, the NEH heuristic calculates the sum of processing times over all m machines for each of the n jobs, and then arranges the jobs in non-ascending order of these sums. Step 2, it schedules the first two jobs in this order to get a partial sequence. Step 3, it inserts the third job into each of the three possible positions to get another partial sequence, and so on. Finally, it inserts the last job into each of the n possible positions, and then determines the final sequence. In phase 3, solution improvement, heuristics have two main characteristics: an initial sequence generated by other heuristics, and artificial intelligence to improve the initial sequence. One typical phase 3 heuristic is the iterated greedy (IG) heuristic (Ruiz & Stützle, 2007), denoted here as the IG_RS heuristic. The IG method consists of two central procedures, destruction and construction. The initial sequence of the IG_RS heuristic is generated by the NEH heuristic. For destruction, IG_RS randomly removes a number d of jobs from the initial sequence, resulting in a partial sequence; for construction, it follows step 3 of the NEH heuristic to insert each of the d jobs back into the partial sequence. Heuristic development in phase 1 is beneficial for future heuristic development in the other two phases (Framinan et al., 2004). Ruiz and Maroto (2005) compare 19 heuristics for Fm/prmu/C_max problems and conclude that the NEH heuristic is the best, the CDS heuristic the eighth, and two priority dispatching rules (PDRs), the LPT and SPT rules, the worst. However, the CDS heuristic has the second-simplest computational complexity among the first 8 heuristics, O(m²n + mn log n). Moreover, King and Spachis (1980) compare 5 PDRs and the CDS heuristic on two different TFS problems, Fm/prmu/C_max and Fm/nwt/C_max. They conclude that the CDS heuristic and the LWBJD (least weighted between jobs delay) rule are the best for Fm/prmu/C_max problems and the MLSS (maximum left shift savings) rule is the best for Fm/nwt/C_max problems, but no single method is consistently the best for both.
The literature on HFS is still scarce (Linn & Zhang, 1999; Wang, 2005). According to Botta-Genoulaz (2000), the CDS heuristic is the best of 6 heuristics for HFS problems, including the NEH heuristic. The problem in Botta-Genoulaz (2000) is an n-job S-stage HFS problem to minimize the maximum lateness. It is converted to an n-job (S+1)-stage HFS problem to min(C_max): the processing time of job i in stage S+1 is calculated as p_i,S+1 = max_k(d_k) − d_i, where d_k is the due date of job k, k = 1,…,n. When applying the CDS heuristic to HFS problems, Botta-Genoulaz converts the processing times as p′_i,j = p_i,j/OPTR_j, j = 1,…,S+1, where p_i,j is the original processing time of job i in stage j, and OPTR_j is the number of operators/machines assigned to stage j.
For FFs/nwt/C_max problems, Thornton and Hunsucker (2004) propose the NIS heuristic, the best among the CDS heuristic, the LPT and SPT rules, and a heuristic of random sequence generation. Different from the CDS heuristic, the NIS heuristic uses a filter concept to convert an FFs/nwt/C_max problem into a virtual 2-machine problem, and then applies JA to get a job sequence. The stages before the filter are regarded as virtual machine 1, those after the filter as virtual machine 2, and the stages covered by the filter are ignored. The filter moves from stage 2 to stage S−1, and the width of the filter changes from 1 to S−2. In total, 1+(S−1)×(S−2)/2 sequences are generated by the NIS heuristic, and the one with the minimum makespan is the final schedule.
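The filter construction can be sketched as follows. Enumerating the windows gives the (S−1)(S−2)/2 filtered sequences; the one extra sequence in the 1+(S−1)(S−2)/2 total quoted above comes from the no-filter case, which is left aside here.

```python
def nis_filters(S):
    """All contiguous filter windows (first, last), 1-based inclusive, lying
    inside stages 2..S-1, with widths 1..S-2, as described in the text."""
    return [(start, start + width - 1)
            for width in range(1, S - 1)
            for start in range(2, S - width + 1)]

def virtual_two_machine(p, filt):
    """Collapse S-stage processing times into a virtual 2-machine problem:
    stages before the filter form virtual machine 1, stages after it form
    virtual machine 2, and the covered stages are ignored."""
    first, last = filt
    return [(sum(row[:first - 1]), sum(row[last:])) for row in p]
```

Each virtual 2-machine problem is then sequenced with JA, and the filtered sequence with the minimum (no-wait) makespan is kept.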

Three criteria
Three main criteria are used to evaluate a heuristic for adaptive production scheduling and control (Li et al., 2011a): optimality, computational complexity, and flexibility. Usually, optimality is used to evaluate a heuristic for offline production scheduling. However, when adaptive production control is taken into consideration, computational complexity becomes critical. That is why some heuristics based on artificial intelligence are not suitable for adaptive production control, although they can obtain better solutions. The third criterion, flexibility, is whether a heuristic can deal with a disturbance. Of course, different situations impose different requirements on the optimality, computational complexity, and flexibility of a heuristic. There is inevitably a trade-off among these criteria, and the selection of heuristics for production scheduling and control depends on the specifics of each situation, such as the value of an optimal schedule over a near-optimal one, as well as the type and volume of disturbances, which determines the required response time.

Summary of existing heuristics for adaptive production control
For optimality, heuristics in phase 3 can obtain better solutions than heuristics in phases 1 and 2. However, their computational complexity means they take much longer. For example, the adaptive learning approach (ALA) heuristic is a phase 3 heuristic for Fm/prmu/C_max problems (Agarwal et al., 2006). The deviation of the ALA heuristic is only 1.74% on Taillard's benchmarks (Taillard, 1993), much better than the 3.56% of the NEH heuristic. However, for the largest instance in Taillard's benchmarks, i.e. 500 jobs and 20 machines, it takes the ALA heuristic more than 19 hours to get a solution, Simulated Annealing more than 20 hours, and Tabu search more than 30 hours (Agarwal et al., 2006). Even the recent IG_RS heuristic takes 300 seconds to get a solution to a 500-job 20-machine instance. For flexibility, we need to see whether a heuristic can deal with a disturbance. According to Pinedo (2002), there are three general types of disturbances in flow shop production: job insertion or cancellation, operator absence or machine breakdown, and variation in processing times. Perfect production information in OKP is available only after the production (Wortmann, 1992). Therefore, if a heuristic operates on the quoted processing times only, it cannot deal with variation in processing times.
The performance of the first eight of the 19 heuristics is summarized in Table 2.1, and the optimality of each heuristic is quoted from Ruiz and Maroto (2005). However, there is a discrepancy in the optimality of heuristics in the literature, because optimality is evaluated by the deviation from the best known upper bounds, which are under continuous improvement. For example, the deviation is 3.33% for NEH and 9.96% for CDS in Ruiz and Maroto (2005), but 3.56% for NEH and 10.22% for CDS in Agarwal et al. (2006), and 3.59% for NEH and 11.28% for CDS in our case study. In the table, the column "Opt." means the optimality on Taillard's benchmarks for Fm/prmu/C_max problems, "I/C" means job insertion or cancellation, "OA/MB" means operator absence or machine breakdown, and "Var." means variation in processing times. The mark "Yes§" means a heuristic can deal with a disturbance only with a modification of processing times, e.g. as in Botta-Genoulaz (2000). The optimality and flexibility of each heuristic are self-evident from the table; we only discuss the computational complexity in the following. The NEH heuristic, in its original version, has a computational complexity of O(mn³), but, by calculating the performance of all partial sequences in a single step, its complexity is reduced to O(mn²) (Taillard, 1990). Both the Suliman (Suliman, 2000) and HoCha (Ho & Chang, 1991) heuristics use the CDS heuristic to generate an initial sequence and then exchange job pairs to improve performance, but they use different mechanisms for job pair swaps. Because the number of job pair swaps depends on the calculated performance of each job pair, the computational complexities of the Suliman and HoCha heuristics are intractable. Job swaps are also involved in the RACS and RAES heuristics (Dannenbring, 1977), and their computational complexities are intractable too. These two heuristics are based on the rapid access (RA) heuristic (Dannenbring, 1977), which is a mixture of JA and Palmer's slope index (Palmer, 1965). The Koula heuristic (Koulamas, 1998) is not purely for permutation flow shop problems: job passing is allowed in the Koula heuristic, because Potts et al. (1991) point out that a permutation schedule is not necessarily optimal for all n-job m-machine flow shop problems. The Koula heuristic extensively uses JA to generate initial sequences, and then job passing is allowed to make further improvements. The overall computational complexity of the Koula heuristic is O(m²n²). The HunRa heuristic (Hundal & Rajgopal, 1988) is a simple extension of Palmer's slope index: it generates three sequences, one by Palmer's slope index and the other two by calculating the indices differently. Therefore, the HunRa heuristic has the same computational complexity as Palmer's slope index, O(mn + n log n).
Usually, the number of jobs n is much larger than the number of machines m; thus, the computational complexity of O(m²n + mn log n) for the CDS heuristic is comparable with that of O(mn + n log n) for the HunRa heuristic.
For an industrial instance in Gienow with 1396 jobs and 5 machines, it takes the NEH heuristic more than 70 seconds to generate a sequence, which is too slow to keep up with the production pace in Gienow. Therefore, NEH and the other five heuristics with computational complexity higher than O(mn²) are not suitable for adaptive production scheduling and control in Gienow. The CDS and HunRa heuristics take less than one second to generate a sequence for the same industrial instance. However, their optimality is poor, with more than 9% deviation on Taillard's benchmarks.

Adaptive production scheduling and control system
For adaptive production scheduling and control, it is necessary not only to monitor production on the shop floor, but also to provide a solution in time when a disturbance happens. Our computer-aided system for adaptive production scheduling and control in OKP consists of a closed-loop structure and a state space (SS) heuristic.

The feedback control scheme
For adaptive production scheduling and control, a computer-aided scheduling and control system has been proposed as illustrated in Fig. 3.1, which consists of the SS heuristic and a simulation model called temporized hierarchical object-oriented coloured Petri nets with changeable structure (THOCPN-CS) (Li, 2006). High customization and dynamic disturbances in OKP demand a great deal from a simulation model, while adaptive production control simultaneously demands solutions in a short time. The unique features of the THOCPN-CS simulation model therefore make it easy and flexible to simulate frequent changes in OKP for adaptive production control. By repeating the steps of this feedback scheme iteratively, production on OKP shop floors can be adaptively scheduled and controlled.

The state space heuristic
The SS heuristic is mainly for HFS problems. Because there are multiple operators in each stage and the capacity of WIP inventories is limited, the SS heuristic is designed not only to min(C_max) but also to maximize the utilization, max(Util). Two concepts are used in the SS heuristic: a state space concept and a lever concept. The main idea of SS is to find a job that fits the S−1 spaces without causing IDLE or DELAY.
After a job i is processed on a line, the next available times change, and the space changes accordingly. Larger IDLE and DELAY are bad for production under the objectives of min(C_max) and max(Util), while larger SPACE is, to some extent, good.
From the foregoing description of SS, we can see that IDLE and DELAY are evaluated according to both job i and stage s, but SPACE is evaluated only by stage s. To make SPACE both job- and stage-dependent, there are two ways to model it. One model is SPACE_i,s = c_i,s+1 − A_s, for s = 1,…,S−1; the other is SPACE_i,s = p_i,s+1, for s = 1,…,S−1. In the current version of the SS heuristic, we use the latter model, which removes one calculation per iteration and increases the computation speed for adaptive control. However, we illustrate the alternative model, SPACE_i,s = c_i,s+1 − A_s, in section 4 to show the flexibility of the SS concept.

The lever concept in SS
From our previous research on TFS problems, we find that the lever concept is suitable for flow shop production (Li et al., 2011b): IDLE (or DELAY) in an earlier stage is worse for the min(C_max) objective than in a later stage. Consider a lever on which a force F takes effect and causes a torque of F×L, where F is the magnitude of the force and L is the length of the force arm.

Fig. 3.4 A lever concept for DELAY in SS
There is also a lever concept for SPACE in SS, shown in Fig. 3.5. The length of the force arm for a space is LVR_SPACE_s = s, for s = 1,…,S−1. The fulcrum of the lever for SPACE is set between stage 1 and stage 2, which means SPACE in a later stage is better than in an earlier stage.
The job selection rule of SS is therefore to select the job with the maximum torque difference between the total SPACE torque and the total IDLE-plus-DELAY torque.
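The torque-based selection can be sketched as follows. The arm s for SPACE is taken from the text; the arm S−s for IDLE and DELAY is an assumption here (any arm that grows toward earlier stages captures the "earlier is worse" idea), and the per-stage IDLE, DELAY and SPACE values of each candidate job are taken as given.

```python
def torque_score(idle, delay, space, S):
    """Torque difference for one candidate job; index s=0 is stage 1.
    SPACE arm = stage number s (from the text); IDLE/DELAY arm = S - s
    (an assumed arm that penalizes earlier stages more heavily)."""
    gain = sum((s + 1) * space[s] for s in range(S - 1))
    loss = sum((S - (s + 1)) * (idle[s] + delay[s]) for s in range(S - 1))
    return gain - loss

def select_job(candidates, S):
    """Pick the candidate job with the maximum torque difference.
    Each candidate is a dict with per-stage 'idle', 'delay', 'space' lists."""
    return max(candidates,
               key=lambda c: torque_score(c["idle"], c["delay"], c["space"], S))
```

With these arms, a job offering SPACE in a late stage outranks a job causing the same amount of IDLE in an early stage, as the lever concept intends.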

Steps to achieve the SS heuristic
Two items should be taken into consideration for the initial job selection in SS. One is the number of initial jobs, and the other is the initial job selection scheme. The number of initial jobs is set to min(OPTR_s, for s = 1,…,S). The reason is that if the number of initial jobs is smaller than min(OPTR_s), then the first available time of a stage is zero, since all operators are available at time zero; if the number is greater than min(OPTR_s), then (initial job number − min(OPTR_s)) jobs are not selected by the state space concept.
For the initial job selection scheme, five 1×S vectors, Vector_1 to Vector_5, are introduced. A job is selected by min( Σ_{s=1}^{S} | p_i,s − Vector_v(s) | ) for i = 1,…,n, that is, by the minimum absolute difference between a job's processing times and the vector.
Step 2. Set the capacity of each of the S−1 WIP inventories.
Step 3. Calculate five vectors for initial job selection.
Step 4. FOR v = 1:5, an iteration loop to select initial jobs according to one Vector_v.
Step 5. Select min(OPTR_s, for s = 1,…,S) jobs according to Vector_v by the equation min( Σ_{s=1}^{S} | p_i,s − Vector_v(s) | ) over i = 1,…,n.
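Step 5's selection rule can be sketched as:

```python
def pick_initial_job(p, vector):
    """Return the index of the job whose stage-wise processing times are
    closest, in total absolute difference, to the target 1xS vector.
    p[i][s] = processing time of job i in stage s."""
    S = len(vector)
    return min(range(len(p)),
               key=lambda i: sum(abs(p[i][s] - vector[s]) for s in range(S)))
```

Running this once per vector yields the five initial-job choices, from which the selected jobs seed the state space iteration.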

The computational complexity of SS
The computational complexity of SS heuristic consists of two parts, job selection and makespan calculation.
For job selection, if the state of the flow line is known, then selecting one out of n unscheduled jobs takes S×n operations, which means the computational complexity for adaptive control is O(Sn). As the number of unscheduled jobs decreases from n to 1, the overall selection of all n jobs takes S×n×(n+1)/2 operations. Although the SS heuristic generates five sequences for an n-job S-stage HFS problem, its computational complexity for job selection is O(Sn²), because only the highest-order term counts in computational complexity.
For the makespan calculation, we can model an n-job S-stage HFS problem by a 2-dimensional matrix, where the row dimension is for jobs and the column dimension for stages. The makespan calculation could be carried out along the column dimension: if the input sequence of the n jobs in stage 1 is known, then the output sequence of the n jobs in stage 1 (i.e. the input sequence in stage 2) can be calculated; the output sequence is in non-descending order of the completion times of the n jobs; then the output sequence in stage 2 can be calculated, and so on, until finally the output sequence in stage S is calculated. However, the capacities of the WIP inventories are limited, which means the completion times of jobs in stage s are constrained by the next available times of operators in stage s+1. For example, when calculating the output sequence of stage s, if a job i's completion time in stage s would cause an overflow of WIP_s, meaning WIP_s is full at that time and no operator is available in stage s+1 to take a job out of WIP_s, then a DELAY happens to job i. This DELAY means job i's completion time is postponed, and so is the next available time of the operator k who processes job i in stage s. Consequently, the DELAY affects the completion times of all jobs following job i in stage s, and the completion times in the previous stage need to be checked because of the limits on WIP inventories. In an extreme situation, when a DELAY happens in stage S−1, the job completion times in all previous stages have to be recalculated. Because of these recalculations, calculating the makespan along the column dimension is time consuming.
For the makespan calculation along the row dimension, as the jobs are added one by one from 1 to n, the computational complexity is also O(Sn²), although the makespan calculation is carried out five times. Therefore, the overall computational complexity of the SS heuristic is O(Sn²).
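The row-dimension calculation can be sketched as follows. This is a simplified version that assumes unlimited WIP buffers (so the DELAY recalculations described above never trigger) and dispatches jobs in sequence order to the earliest-free operator in each stage.

```python
import heapq

def hfs_makespan(p, optr):
    """Makespan of an S-stage hybrid flow shop, computed job by job along
    the row dimension. p[i][s] = processing time of job i in stage s;
    optr[s] = number of parallel operators in stage s.
    Sketch only: WIP capacities are assumed unlimited."""
    S = len(optr)
    avail = [[0] * optr[s] for s in range(S)]  # operator next-available times
    for h in avail:
        heapq.heapify(h)
    cmax = 0
    for job in p:                              # rows: jobs in sequence order
        ready = 0                              # arrival time at the next stage
        for s in range(S):
            op_free = heapq.heappop(avail[s])  # earliest-free operator
            finish = max(ready, op_free) + job[s]
            heapq.heappush(avail[s], finish)
            ready = finish                     # completion in stage s
        cmax = max(cmax, ready)
    return cmax
```

With one operator per stage this reduces to the usual permutation flow shop recursion; adding operators only relaxes the operator-availability term.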
For an industrial instance with 1396 jobs and 5 machines in Gienow, the computation time of the SS heuristic is 70.67 seconds, much longer than the 782 milliseconds of the CDS heuristic. However, the 782 milliseconds cover only the generation of sequences by CDS; taking the makespan calculation into consideration, CDS has the same computational complexity as SS. Moreover, from the adaptive control perspective, the computational complexity of the SS heuristic is only O(Sn), which means it takes only 10.12 milliseconds for the SS heuristic to select the next job when dealing with disturbances in Gienow for this instance.

Case studies
The computational complexities of some existing heuristics and of the SS heuristic are analyzed in sections 2 and 3 respectively. In this section, the comparison and evaluation of heuristics are mainly based on optimality and flexibility. Two kinds of case studies, without and with disturbances, are carried out on Taillard's benchmarks (Taillard, 1993) and on an industrial case. Section 4.1 covers the cases without disturbances, section 4.2 the cases with disturbances, and section 4.3 gives a comparison between the SS heuristic and Johnson's algorithm (JA).
In Table 4.1, the column "Scale" gives the size of the problems; for example, 20*5 means 20-job 5-machine problems. The column "Inst" gives the number of instances of each scale. Columns 3 to 6 give the average deviations of the CDS, NIS, SS, and SSnoLVR heuristics respectively. The SS heuristic has the smallest total average deviation over all 120 instances in Taillard's benchmarks, at 8.11%; the NIS heuristic ranks second at 9.01%, and the CDS heuristic last at 11.28%. Table 4.1 also shows that the lever concept is suitable for flow shop production to minimize the makespan: the SS heuristic is better than the SSnoLVR heuristic, with a deviation of 8.11% versus 8.80%. To further compare the performance of the SS heuristic with that of the CDS heuristic, a t-test is carried out using the function TTEST(CDS results, SS results, 2, 1) in Excel. The SS heuristic's p-value is 3.20 × 10^-5, which means the improvement is extremely significant. In Table 4.2, the CDS heuristic performs 1.04% worse than the NIS heuristic. In contrast, SS performs better than NIS on average, with an improvement of 2.27%. For the t-test based on the 12 averages, the SS heuristic's p-value is 0.0739; however, if the t-test is based on the 120 individual cases, the p-value is 2.07 × 10^-11, an extremely significant improvement. Moreover, we recognize that for HFS no-wait problems the improvement of SS over NIS will shrink as the number of operators/machines in each stage increases. For example, if the number of operators in each stage equals the number of jobs, then C_max is fixed at max( Σ_{s=1}^{S} p_i,s ) over i = 1,…,n, whether for no-wait or no-preemption flow shop problems.
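The Excel call TTEST(CDS results, SS results, 2, 1) performs a two-tailed paired t-test; the underlying statistic can be computed directly (a sketch; converting t to a p-value additionally requires the CDF of the t distribution with n−1 degrees of freedom):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic on two result lists of equal length, i.e. the
    statistic behind Excel's TTEST(a, b, 2, 1) before the p-value lookup."""
    d = [x - y for x, y in zip(a, b)]  # per-instance differences
    return mean(d) / (stdev(d) / sqrt(len(d)))
```

The sign of the statistic flips when the two result lists are swapped, so only its magnitude matters for the two-tailed p-value.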

FFs/nwt/C max on Taillard's benchmarks
For hybrid no wait flow shop problems with identical parallel operators/machines in each stage, two operators/machines are assigned to each stage. The improvement of the CDS and SS heuristics over the NIS heuristic is shown in Table 4.3. For such hybrid no wait flow shop problems with two operators/machines in each stage, the SS heuristic has a small improvement of 0.39% over the NIS heuristic, and the CDS heuristic still performs worse, at -1.33%. For the t-test, the SS heuristic's p-value is 0.6739, meaning that its improvement over the NIS heuristic is not statistically significant. For HFS problems with the FCFS rule applied to jobs in WIP inventories, two variables are set. One is a throughput rate r = 31, used to calculate the number of operators in each stage, where OPTR_s = Roundup(APT_s / r). The average processing time of each stage ranges from 30.75 to 64.40 over all 120 instances in Taillard's benchmarks; therefore, OPTR_s varies from 1 to 3 per stage. The other variable is the capacity of the WIP inventories. Different configurations of WIP inventories have different impacts on production (Vergara & Kim, 2009). For ease of the case study, the capacity of each WIP inventory is set the same, WIP_s = 5, even though in theory each could be set to a different value. The processing times in CDS are calculated as p'_i,s = p_i,s / OPTR_s, s = 1,…,S (Botta-Genoulaz, 2000). For the objective of min(C_max), the improvement (IMPR) of the SS heuristic over the CDS heuristic is used to evaluate performance, where IMPR1 = (C_max of CDS − C_max of SS) ÷ (C_max of CDS) in percentage. For the objective of max(Util), the improvement of the SS heuristic over the CDS heuristic is IMPR2 = (Util of SS − Util of CDS) ÷ (Util of CDS). The results are shown in Table 4.4.
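The staffing rule and the improvement measures above can be sketched as follows. The stage averages and makespans are hypothetical illustrations; only the rate r = 31 is taken from the text:

```python
import math

def operators_per_stage(avg_processing_times, r):
    """OPTR_s = Roundup(APT_s / r) for an expected throughput rate r."""
    return [math.ceil(apt / r) for apt in avg_processing_times]

def impr_makespan(cmax_cds, cmax_ss):
    """IMPR1 = (Cmax of CDS - Cmax of SS) / (Cmax of CDS), in percent."""
    return 100.0 * (cmax_cds - cmax_ss) / cmax_cds

# Hypothetical average stage processing times; r = 31 as in the case study.
apts = [30.75, 45.0, 64.40]
print(operators_per_stage(apts, 31))  # [1, 2, 3]
print(impr_makespan(1000.0, 990.0))   # 1.0
```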

FFs/FCFS/C max on Taillard's benchmarks
For the objective of max(Util), the SS heuristic has an average 3.96% improvement over the CDS heuristic on Taillard's benchmarks, and for the objective of min(C_max), the SS heuristic has an average 1.16% improvement. For the t-test, the SS heuristic's p-value is 0.0666 for min(C_max), meaning the improvement over the CDS heuristic is not quite statistically significant. However, for max(Util), the p-value is 3.34 × 10⁻⁵, an extremely significant improvement.

An industrial case study
To validate the SS heuristic in a real setting, an industrial case study was carried out at Gienow Windows and Doors, Canada. The case consists of 1396 jobs on a flow line with 5 stages for one day of production. These jobs are delivered to customers at predetermined times in 28 batches, each batch destined for customers in a given geographic area. Using the data provided by Gienow, the SS heuristic produces the results shown in Table 4.5.
In the SS heuristic, the SPACE is modelled as SPACE_i,s = c_i,s+1 − A_s.

Case studies with disturbances
To test the suitability of the SS heuristic for adaptive production control, a case study of operator absence is carried out on Taillard's benchmarks. Modeling operator absence is the same as modeling machine breakdown. We assume that, when half of the jobs are finished, one operator becomes absent in the middle stage of the flow line, specifically in stage 3, 6, or 11 according to the scale of the instance in Taillard's benchmarks. For the remaining half of the jobs, if production continues according to the original schedule when such a disturbance occurs on the shop floor, the completion time is recorded as Original. If adaptive control is applied, that is, the SS heuristic re-schedules the remaining jobs, the completion time is recorded as Adaptive. The improvement of adaptive control over no adaptive control is used to evaluate performance, i.e. (Original − Adaptive) ÷ (Original) in percentage.
To show the potential of the SS heuristic, the operator-absence case studies are carried out under the two definitions of SPACE, SPACE_i,s = p_i,s+1 and SPACE_i,s = c_i,s+1 − A_s. Moreover, a simple optimization method is also integrated with the SS heuristic.

SPACE_i,s = p_i,s+1
The results are given in Table 4.6.

Integration with an optimization method
Two effects in the SS heuristic impact the final production performance: SPACE is good for production, but IDLE and DELAY are bad. We can introduce a weighting factor, α, into the SS heuristic, and then sequence jobs according to max[(1 − α) × SPACE′ − α × (IDLE′ + DELAY′)]. As α changes from 0 to 1 in increments of 0.1, the performance of the SS heuristic, with SPACE_i,s = p_i,s+1, is shown in Table 4.8. The columns represent the performance of each α integrated with the SS heuristic. A weight α = 0.0 means no IDLE or DELAY is taken into consideration when sequencing jobs, and α = 1.0 means no SPACE. We can see that SPACE affects production more than IDLE or DELAY, with α = 0.1 yielding the greatest improvement, 2.77%.
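A minimal sketch of the α-weighted job selection, assuming the index takes the form (1 − α) × ΣSPACE′ − α × Σ(IDLE′ + DELAY′), which matches the described behaviour at α = 0 (SPACE only) and α = 1 (IDLE and DELAY only). The exact index and the torque totals below are assumptions for illustration:

```python
def weighted_index(space_t, idle_t, delay_t, alpha):
    """Weighted SS index: alpha = 0 uses only SPACE torques,
    alpha = 1 penalizes only IDLE and DELAY torques."""
    return (1.0 - alpha) * space_t - alpha * (idle_t + delay_t)

def pick_job(candidates, alpha):
    """Select the candidate job maximizing the weighted index.
    candidates: {job_id: (total SPACE', total IDLE', total DELAY')}."""
    return max(candidates, key=lambda j: weighted_index(*candidates[j], alpha))

# Hypothetical torque totals for three candidate jobs.
cands = {1: (10.0, 4.0, 0.0), 2: (8.0, 1.0, 0.0), 3: (12.0, 6.0, 3.0)}
print(pick_job(cands, 0.0))  # 3 (largest SPACE')
print(pick_job(cands, 1.0))  # 2 (smallest IDLE' + DELAY')
```

Sweeping alpha from 0.0 to 1.0 in steps of 0.1 and re-running the heuristic reproduces the kind of study summarized in Table 4.8.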

Case studies on variation in processing times
It is normal to have variation in processing times, especially in the production of highly customized products with manual operations. Thus, it is necessary to test the suitability of the SS heuristic to the disturbance of variation in processing times. In Gienow, the processing times of products are quoted from company standards.
For variation in processing times in the industrial case, we assume that, initially, we have a matrix of the quoted processing times of the n jobs in S stages, and that we do not know the real processing times beforehand, because perfect production information in OKP is available only after production (Wortmann, 1992). We define this matrix as B, meaning before production, and carry out offline scheduling according to B to get a sequence SB. We might set up due dates based on the performance of SB, that is, the original performance, PO. After the actual production, we have a matrix of the real processing times of the n jobs in S stages, i.e. matrix A.
During production, when variation in processing times happens and production is carried out according to the sequence SB, the performance is PB, meaning no adaptive control. From Table 4.9, we can see that adaptive control performs better than no adaptive control for all four ranges of variation in processing times. Moreover, the SS heuristic is stable under this disturbance, because its average difference in performance is close to the expected value of the variation in each of the four ranges.
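The comparison measures from this study (PO the original performance, PB the performance without adaptive control, PA the performance with adaptive control by the SS heuristic) can be sketched as follows, with hypothetical performance values:

```python
def diff_pct(p, po):
    """Relative difference from the original performance PO, in percent:
    Diff_OB = (PB - PO) / PO, Diff_OA = (PA - PO) / PO."""
    return 100.0 * (p - po) / po

po, pb, pa = 500.0, 520.0, 505.0  # hypothetical PO, PB, PA values
print(diff_pct(pb, po))  # Diff_OB = 4.0
print(diff_pct(pa, po))  # Diff_OA = 1.0
```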

A case study on a 2-machine flow shop problem
To reveal the rationale and coherent logic of the state space concept, a scaled-down version of the SS heuristic is compared with JA for a 2-machine flow shop problem, F2/prmu/C_max. For F2/prmu/C_max problems, the lever concept has no effect on job selection in the SS heuristic, because the WIP inventory between machines 1 and 2 is unlimited, so no DELAY is taken into consideration, and the length of the force arm for SPACE or IDLE equals one. However, the state space concept can yield different job sequences than JA. A numerical example is provided in Table 4.10, for which JA generates the sequence [Job 1, 3, 4, 2] with C_max = 62. According to the state space concept (but not exactly the SS heuristic), and using JA for the initial job selection, two additional sequences can be obtained, [Job 1, 2, 3, 4] and [Job 1, 4, 3, 2], both of which have C_max = 62 and differ from the one generated by JA. It is therefore clear that JA uses a condition that is sufficient for F2/prmu/C_max problems but not necessary in some cases. The state space concept can yield different sequences than JA with the same level of optimality, and hence can provide greater opportunities for improvement as the core of a more elaborate heuristic for adaptive production scheduling and control.
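Johnson's rule as stated above is straightforward to implement: jobs with p_i,1 ≤ p_i,2 go first in ascending order of p_i,1, and the remaining jobs go last in descending order of p_i,2. The 4-job instance below is hypothetical, not the data of Table 4.10:

```python
def johnson(p1, p2):
    """Johnson's algorithm for F2/prmu/Cmax.
    Jobs with p1 <= p2 go first, in ascending p1;
    the rest go last, in descending p2."""
    n = len(p1)
    first = sorted((j for j in range(n) if p1[j] <= p2[j]), key=lambda j: p1[j])
    last = sorted((j for j in range(n) if p1[j] > p2[j]),
                  key=lambda j: p2[j], reverse=True)
    return first + last

def makespan2(seq, p1, p2):
    """Cmax of a sequence on a 2-machine permutation flow shop."""
    c1 = c2 = 0
    for j in seq:
        c1 += p1[j]                 # machine 1 processes jobs back to back
        c2 = max(c1, c2) + p2[j]    # machine 2 waits for job arrival if needed
    return c2

# Hypothetical 4-job instance.
p1 = [3, 5, 1, 6]
p2 = [7, 2, 4, 4]
seq = johnson(p1, p2)
print(seq, makespan2(seq, p1, p2))  # [2, 0, 3, 1] 18
```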

Conclusions and future work
One-of-a-kind production (OKP) challenges production scheduling differently from mass production, because of the high customer involvement in OKP. In particular, it challenges production control severely, because of dynamic disturbances. Traditionally, offline production scheduling is separated from online adaptive production control. Dynamic disturbances in OKP defeat production schedules, which are generated by heuristics developed under strong assumptions for offline scheduling (MacCarthy & Liu, 1993). Accordingly, adaptive production control is needed to deal with disturbances. Currently, adaptive production control in OKP companies is carried out by shop floor managers using priority dispatching rules (PDRs) and their experience. However, the performance of PDRs is poor on most scheduling objectives (Ruiz & Maroto, 2005), and experience may be good for local optimization but lacks global optimization for the overall production. Therefore, adaptive production scheduling and control is essential and indispensable for improving production efficiency in OKP.
With regard to the three criteria of optimality, computational complexity, and flexibility for evaluating a heuristic for adaptive production control (Li et al., 2011a), the state space (SS) heuristic is better than most existing heuristics. For optimality, the SS heuristic outperforms the most popular alternative heuristics (CDS, NIS) on Taillard's benchmarks for Fm/prmu/C_max, Fm/nwt/C_max, FFs/nwt/C_max, and FFs/FCFS/C_max problems alike. In addition, the production schedule generated by the SS heuristic outperforms Gienow's original schedule, improving Gienow's daily productivity by 1.25%. For computational complexity, the CDS heuristic's O(m²n + mn log n) is lower than the SS heuristic's O(mn²) for offline scheduling; however, if sequence evaluation is taken into consideration, both have the same computational complexity of O(mn²). Moreover, for online adaptive production control, the computational complexity of SS decreases to O(mn), whereas that of CDS stays the same. For flexibility, the SS heuristic is more flexible than the other heuristics: it can deal with all three typical disturbances described by Pinedo (2002), namely job insertion or cancellation, operator absence or machine breakdown, and variation in processing times, whereas CDS cannot deal with variation in processing times. Although the NEH heuristic has the best performance for Fm/prmu/C_max problems, its inflexible procedure for constructing a job sequence leaves it little flexibility to deal with disturbances. Moreover, the SS heuristic is in the phase of index development, a phase that is beneficial for heuristic development in the other two phases (Framinan et al., 2004).
As discussed in this chapter, adaptive production scheduling and control in OKP challenges nearly all existing scheduling algorithms and heuristics, and almost all manufacturing companies face some level of disturbance, such as unreliable supplies, unexpected operator absences, and machine breakdowns. There is still a gap between theoretical research and industrial applications, and industrial applications require further understanding and study of production scheduling and control. This suggests the following future work.
(1) Production planning at the company level should be integrated with production scheduling and control at the shop floor level. Production planning provides a company with the production capacity that constrains adaptive production scheduling and control. Meanwhile, adaptive production scheduling and control requires frequent re-planning according to production progress under unexpected disturbances, in order to meet the due dates of customer orders or to provide better estimated lead times. The synergy and co-optimization between these two levels are necessary and should be further researched.
(2) Consequently, adaptive production scheduling and control for non-deterministic problems is inevitable. Stochastic modeling and simulation of non-deterministic production problems is a valuable and potentially lucrative research topic.
(3) It is critical to integrate material flows on shop floors into the supply chain to successfully achieve adaptive production scheduling and control in OKP. This is an urgent research topic to be studied.

Fig. 2.1 n-job 2-machine flow shop problems, to min(C_max). The makespan, C_max, consists of the sum of the processing times and the sum of the idle times caused by the n jobs on the last machine (Fig. 2.1). For n-job 2-machine flow shop problems, C_max = Σ_{i=1..n} p_i,2 + Σ_{i=1..n} IDLE_i,2.
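This decomposition can be checked numerically with a small simulation that tracks the idle time accumulated on machine 2. The 3-job instance is a hypothetical illustration:

```python
def cmax_and_idle(seq, p1, p2):
    """Simulate a 2-machine flow shop; return (Cmax, idle time on machine 2)."""
    c1 = c2 = idle2 = 0
    for j in seq:
        c1 += p1[j]
        if c1 > c2:              # machine 2 waits for the job to arrive
            idle2 += c1 - c2
        c2 = max(c1, c2) + p2[j]
    return c2, idle2

p1, p2 = [3, 5, 1], [4, 2, 6]
cmax, idle2 = cmax_and_idle([0, 1, 2], p1, p2)
# Cmax equals the machine-2 processing times plus machine-2 idle time.
print(cmax, sum(p2) + idle2)  # 16 16
```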

Fig. 3.1 A computer-aided production scheduling and control system
Step 1. Assign possible manufacturing resources (e.g. operators/machines) to each stage, and hence form a task-resource matrix (TRM) with jobs.
Step 2. Schedule the jobs by the SS heuristic for offline scheduling, generating a sequence with good performance for the next step.
Step 3. Simulate the production with the THOCPN-CS model, and identify the bottleneck stage(s) and overflows of WIP inventories. Human schedulers may carry out adjustments to smooth the production flow, such as re-allocating operators/machines among stages, or taking some jobs away and re-scheduling the remaining jobs.
Step 4. Re-schedule the jobs by both the SS heuristic and human schedulers for offline scheduling. For online re-scheduling, re-schedule the jobs by either or both of the heuristic and the scheduler, depending on the time allowance for online re-scheduling.
Step 5. Repeat Steps 3 and 4 in the offline production scheduling phase until a satisfactory production schedule is obtained. This schedule contains a job sequence and the number of operators/machines in each stage. In the adaptive production control phase, this step may be omitted, depending on specific requirements.
Step 6. Deliver the production schedule to the shop floor and switch the control loop from the simulation model to the shop floor.
Step 7. If any disturbance occurs on the shop floor, switch the control loop back to the simulation model, and go back to Step 3 if operator/machine re-allocation is necessary, or otherwise to Step 4.
Fig. 3.2 A 3-stage flow line with 2 operators in each stage. The operators in each stage follow the FCFS rule. Each stage then has a next available time, A_s = min(a_s,k) for k = 1,…,OPTR_s, where a_s,k is the next available time of operator k in stage s, and OPTR_s is the number of operators in stage s. There are S−1 time differences between the S stage available times; in the example above, there are two, A_2 − A_1 and A_3 − A_2. If we regard such a difference as a space, SPACE_s = A_s+1 − A_s for s = 1,…,S−1, then SPACE_s is the time period available for stage s to finish a job without causing idleness for an operator in stage s+1. If the completion time of job i in stage s is larger than the next available time of stage s+1, the job causes idleness in stage s+1: IDLE_i,s = c_i,s − A_s+1, where c_i,s is the completion time of job i in stage s, c_i,s = max(A_s, c_i,s−1) + p_i,s. If the completion time of job i in stage s is smaller than the next available time of stage s+1, there are two possibilities, depending on whether the WIP inventory is full. If the WIP inventory WIP_s is full, a delay happens to the operator k who processed job i in stage s: DELAY_i,s = A_s+1 − c_i,s. Such a delay means that, after finishing job i, operator k in stage s has to hold it for DELAY_i,s time units until there is a vacancy in WIP_s; the next available time of operator k in stage s is therefore delayed. Alternatively, if WIP_s is not full, job i goes into the inventory, and there is no IDLE or DELAY.
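The IDLE/DELAY classification above can be sketched as a small function following the Fig. 3.2 definitions. The completion times and WIP status below are hypothetical:

```python
def idle_or_delay(c_is, a_next, wip_full):
    """Classify job i finishing stage s at time c_is against stage s+1,
    whose next available time is a_next:
      IDLE_{i,s}  = c_is - a_next   if the job arrives after stage s+1 is free,
      DELAY_{i,s} = a_next - c_is   if it finishes early and WIP_s is full,
      otherwise the job enters the WIP inventory with no IDLE or DELAY."""
    if c_is > a_next:
        return ("IDLE", c_is - a_next)
    if wip_full:
        return ("DELAY", a_next - c_is)
    return ("NONE", 0)

print(idle_or_delay(12, 10, False))  # ('IDLE', 2)
print(idle_or_delay(8, 10, True))    # ('DELAY', 2)
print(idle_or_delay(8, 10, False))   # ('NONE', 0)
```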
An S-stage flow line is modelled as a lever, and IDLE_i,s or DELAY_i,s has a torque effect manifested as IDLE_i,s × LVR_IDLE_s or DELAY_i,s × LVR_DELAY_s. The lever concept for IDLE in SS is shown in Fig. 3.3. For an S-stage flow line, a job can cause at most S−1 IDLEs: no IDLE is caused in stage 1, and an IDLE takes effect in the next stage. Therefore, the fulcrum of the lever for IDLE is set between stages S−1 and S, and the length of the arm for an IDLE caused by stage s in stage s+1 is LVR_IDLE_s = S − s.

Fig. 3.3 A lever concept for IDLE in SS. The lever concept for DELAY in SS is shown in Fig. 3.4. Like IDLE, there can be at most S−1 DELAYs, and no DELAY in stage S. But a DELAY takes effect in the current stage s, whereas an IDLE takes effect in the next stage; therefore, one unit of DELAY in stage s should be weighted as worse than one unit of IDLE in stage s. Thus, the length of the arm for a DELAY is LVR_DELAY_s = S − s + 1, for s = 1,…,S−1, and the fulcrum of the lever for DELAY is set at stage S.

Fig. 3.5 A lever concept for SPACE in SS. All SPACEs, IDLEs, and DELAYs are therefore converted to torques: SPACE′_i,s = p_i,s+1 × LVR_SPACE_s, IDLE′_i,s = IDLE_i,s × LVR_IDLE_s, and DELAY′_i,s = DELAY_i,s × LVR_DELAY_s. The job selection scheme is:
Step 1. Determine the number of operators in each stage, OPTR_s. (a): Calculate n and S. (b): Set an expected throughput rate, r, meaning that a job is to be finished every r time units. (c): OPTR_s = Roundup(APT_s / r), where APT_s is the average processing time of stage s. (d): Set the start time of every operator to 0. (e): Put all n jobs into a candidate pool. (f): Set the output sequence to be a 1×n zero vector, Sequence_v.
Put the selected jobs into Sequence_v and eliminate them from the candidate pool. Calculate the next available time of each operator and the next available time of each stage, namely STATE, and the WIP inventory status, namely WIP_Status, which is initially a 1×(S−1) zero vector.
Step 6. FOR i = min(OPTR_s)+1 : n, an iteration loop to sequence the remaining n − min(OPTR_s) jobs.
Step 7. According to STATE and WIP_Status, calculate IDLE′_i,s, DELAY′_i,s and SPACE′_i,s. Calculate the intermediate completion times of the partial schedule Sequence_v, update WIP_Status, and update STATE.
Step 10. END i. Calculate the utilization of the line. (a): Calculate the utilization of each stage, Util_s = (Σ_i p_i,s) ÷ (c_n,k,s − c_1,k′,s−1), in which c_n,k,s is the completion time of the last job in stage s, and c_1,k′,s−1 is the start time of the first job in stage s, i.e. the completion time of the first job in stage s−1. (b): Calculate the average utilization over the stages, Util = average(Util_s), s = 1,…,S.
Step 11. END v. Output each of the five sequences with the related makespan and utilization; the minimum makespan and the maximum utilization are regarded as the final performance of SS.

Fm/prmu/C max on Taillard's benchmarks
For traditional permutation flow shop scheduling problems, the deviation (DEV) from the best known upper bound is used to evaluate the performance of a heuristic, where DEV = (C_max of a heuristic − the upper bound) ÷ (the upper bound) in percentage. The results of the deviation studies for the CDS, NIS, SS, and SSnoLVR (SS without the lever concept) heuristics are shown in Table 4.1.
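The lever-arm lengths LVR_IDLE_s = S − s and LVR_DELAY_s = S − s + 1 and the torque conversion can be sketched as follows. The numeric inputs are hypothetical, and LVR_SPACE_s, defined by Fig. 3.5, is not reproduced here:

```python
def lvr_idle(s, S):
    """Lever arm for an IDLE caused by stage s (fulcrum between S-1 and S)."""
    return S - s

def lvr_delay(s, S):
    """Lever arm for a DELAY in stage s (fulcrum at stage S)."""
    return S - s + 1

def torques(idle, delay, s, S):
    """Convert raw IDLE/DELAY times of stage s into torque values."""
    return idle * lvr_idle(s, S), delay * lvr_delay(s, S)

S = 5
print([lvr_idle(s, S) for s in range(1, S)])   # [4, 3, 2, 1]
print([lvr_delay(s, S) for s in range(1, S)])  # [5, 4, 3, 2]
print(torques(2.0, 1.5, 2, S))                 # (6.0, 6.0)
```

Note that a DELAY in a given stage always carries a longer arm than an IDLE caused by the same stage, reflecting the text's argument that one unit of DELAY is worse than one unit of IDLE.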

Fm/nwt/C max on Taillard's benchmarks
For traditional no wait flow shop problems, the improvement (IMPR) over the NIS heuristic is used to evaluate the performance of the CDS and SS heuristics on Taillard's benchmarks, where IMPR = (C_max of NIS − C_max of CDS or SS) ÷ (C_max of NIS) in percentage. The results are shown in Table 4.2.

Table 4.6. As we see, adaptive control is slightly better than no adaptive control, with a 0.10% improvement for the SS heuristic when SPACE_i,s = p_i,s+1. The results for the second definition are given in Table 4.7: for the SS heuristic, if we model SPACE_i,s = c_i,s+1 − A_s, adaptive control has a 2.02% improvement over no adaptive control.

Table 4.7 Adaptive control over no adaptive control, where SPACE_i,s = c_i,s+1 − A_s

It is difficult to use the CDS heuristic under the disturbance of variation in processing times, because we only know the part of matrix A for the finished jobs, not the rest for the unfinished jobs. However, we can adaptively re-schedule the remaining jobs with the SS heuristic, because the actual processing times of the finished jobs affect the space, even though we only know matrix B for the unfinished jobs. After each job has been produced, we use the SS heuristic to select a job from the remaining jobs according to the processing times of the unfinished jobs in matrix B and the actual space created by the finished jobs. Consequently, the performance of adaptive control by the SS heuristic is PA. The SPACE of the SS heuristic is modelled as SPACE_i,s = c_i,s+1 − A_s. We compare the performance of no adaptive control, PB, and of adaptive control, PA, with the original performance, PO, by Diff_OB = (PB − PO) ÷ PO and Diff_OA = (PA − PO) ÷ PO, both in percentage.

Table 4.10 A 2-machine flow shop example. JA sequences jobs according to the following scheme: if min{p_i,1, p_i+1,2} ≤ min{p_i+1,1, p_i,2}, then job i should be processed before job i+1. For the example in the table, JA generates the sequence [Job 1, 3, 4, 2] with C_max = 62.