Scheduling linearly deteriorating jobs on parallel machines: A simulated annealing approach

Scheduling deteriorating jobs on parallel machines is an NP-hard problem, for which heuristics are the first solution option. Two variants of linearly deteriorating jobs are considered. The first is simple linear deterioration, i.e. where there is a deterioration rate only, which is meaningful only if the jobs are assumed to be available at a positive time t_0. In the second variant, there is a basic processing time as well as a deterioration rate, and all jobs are available at time t = 0. In both cases, we seek to minimize the makespan. Starting from simple heuristics, both a steepest descent search and a simulated annealing search are designed and implemented to arrive at optimal or near-optimal solutions. Computational results for randomly generated problem instances with different job/machine combinations are presented.


Introduction
Usually, scheduling problems involve jobs with constant, independent processing times. However, situations arise where processing times are not constant but increase over time, i.e. the jobs deteriorate, and the processing times are therefore interdependent. The associated scheduling problems occur, e.g. in maintenance scheduling and cleaning assignments, and also in contexts where the machines themselves deteriorate, so that jobs processed later require longer processing times. Browne and Yechiali (1990) introduced the problem of scheduling deteriorating jobs on a single machine, where p_i(t_i) = a_i + α_i t_i is the processing time of job i when its processing begins at time t_i, a_i is the basic processing time, and 0 ≤ α_i ≤ 1 is the deterioration rate. Mosheiov (1991) investigated minimizing the total flow time when all jobs have the same basic processing time and showed that the optimal sequence is V-shaped. He also studied the case where the basic processing time is zero, considering some of the most commonly used performance measures (Mosheiov 1994). In a third work, Mosheiov (1995) considered scheduling jobs with step-deterioration on a single machine, as well as on a number of identical machines, with the objective of minimizing the makespan.

Authors: K. S. Hindi, Department of Systems Engineering, Brunel University, Uxbridge, Middlesex UB8 3PH, UK. E-mail: khalil.hindi@brunel.ac.uk, and S. Mhlanga, Department of Industrial Engineering, National University of Science and Technology, P.O. Box 939, Ascot, Bulawayo, Zimbabwe.

KHALIL HINDI is a Professor at the Department of Systems Engineering of Brunel University (West London). He is a Fellow of the Institution of Electrical Engineers (FIEE), a Fellow of the Institute of Mathematics and its Applications (FIMA), a Fellow of the British Computer Society (FBCS), and a Fellow of the Royal Society for the Encouragement of Arts, Manufacture and Commerce (FRSA).

SAMSON MHLANGA is a lecturer with the Zimbabwe National University of Science and Technology. He is a Graduate Member of the Zimbabwe Institute of Engineers and a Member of the Institute of Industrial Engineers (USA). His current research interests are in manufacturing systems redesign, modelling and simulation, metaheuristics and CAD/CAM.
Recently, Hsieh and Bricker (1997) studied the multi-machine version of the problem, and developed several heuristics for both the case where the basic processing time is zero and the case where it is not. In the present work, we also address the multi-machine version of the problem and develop both a steepest descent search (SD) and a simulated annealing search (SA).

Scheduling of deteriorating jobs
Two cases are considered: the first is the case of simple linear deterioration, where there is a deterioration rate only, which is meaningful only if the jobs are assumed to be available at a positive time t_0. In the second case, there is a basic processing time as well as a deterioration rate, and all jobs are available at time t = 0. In both cases, we seek to minimize the makespan.

Simple linear deterioration
Consider first scheduling n jobs on a single machine, where p_i(t) = α_i t and the jobs are available at time t_0. It is easy to see that, for any job sequence, the completion time of the job in position i is

C_i = t_0 ∏_{k=1}^{i} (1 + α_k),

from which the makespan is clearly independent of the job sequence and is equal to:

C_max = t_0 ∏_{i=1}^{n} (1 + α_i).

Now consider the multi-machine case, with m identical machines. Analogy with the 'longest processing time (LPT) first' heuristic suggests the following simple heuristic, which may be called a 'longest deterioration time (LDT) first' heuristic (Hsieh and Bricker 1997, Mhlanga 1997): order the jobs in non-increasing order of deterioration rate, then take each job in turn and assign it to the machine that has the shortest makespan for the partial schedule assembled so far. Hsieh and Bricker (1997) show that this heuristic has the asymptotic optimality property, i.e. the makespan achieved by it tends to the optimal makespan as n → ∞, and it should therefore give good results, especially when n ≫ m. They also show that

LB = t_0 [ ∏_{i=1}^{n} (1 + α_i) ]^{1/m}

is a lower bound on the optimal value of the makespan.
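Under these definitions, the LDT heuristic and the lower bound take only a few lines to sketch. The following is a minimal Python illustration (function and variable names are ours, not the authors'); it exploits the fact that a machine's makespan is t_0 multiplied by the product of (1 + α_i) over its assigned jobs:

```python
from math import prod

def ldt(alphas, m, t0=1.0):
    """'Longest deterioration time (LDT) first' list heuristic: sort jobs by
    non-increasing deterioration rate, then assign each job in turn to the
    machine whose current makespan is smallest."""
    subsets = [[] for _ in range(m)]   # job subsets, one per machine
    spans = [t0] * m                   # current makespan of each machine
    for a in sorted(alphas, reverse=True):
        k = min(range(m), key=spans.__getitem__)
        subsets[k].append(a)
        spans[k] *= 1.0 + a            # order within a machine is immaterial
    return subsets, spans

def lower_bound(alphas, m, t0=1.0):
    """Hsieh and Bricker's lower bound: t0 * (prod of (1 + alpha_i))**(1/m)."""
    return t0 * prod(1.0 + a for a in alphas) ** (1.0 / m)
```

The heuristic's makespan is max(spans), which can then be compared against lower_bound to decide whether further search is worthwhile.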
Clearly, because the order of jobs on each machine is of no importance, the task is to partition the set of n jobs into m subsets and assign each subset to a single machine, such that the longest completion time is minimized. It is this idea that we use in the local-search-based algorithms developed in this work.

Jobs with basic processing time
Consider first scheduling n jobs, all available for processing at time t = 0, on a single machine. Each job has a basic, job-specific processing time a_i (the time required to complete job i if it is processed first). If the processing of a job is delayed, the initial requirement deteriorates, such that the processing time grows linearly with the delay:

p_i(t) = a_i + α_i t,

where α_i is the deterioration rate. It is also assumed that deterioration stops as soon as processing starts.
If the n jobs are to be processed non-preemptively on a single machine and the machine is not allowed to stay idle while jobs are waiting, then we need only consider permutations of the index set I = {1, 2, ..., n}. Now consider a particular permutation (sequence) π. Let Y_i be the actual processing time of the job in position i and C_i = ∑_{k=1}^{i} Y_k be its completion time. It is then easy to see that

C_i = (1 + α_i) C_{i−1} + a_i,  with C_0 = 0,

which has the solution

C_i = ∑_{r=1}^{i} a_r ∏_{k=r+1}^{i} (1 + α_k).

Because the corresponding makespan is C_n, it follows that:

C_max = ∑_{r=1}^{n} a_r ∏_{k=r+1}^{n} (1 + α_k).

Rau (1971) had shown that a sum of this form, ∑_r μ_r ∏_{k=r+1}^{n} β_k, is minimized when calculated over the permutation ordered by non-decreasing values of μ_i/(β_i − 1). Using this result, with μ_i = a_i and β_i = 1 + α_i, Browne and Yechiali (1990) conclude that minimizing the makespan is achieved by scheduling the jobs in non-decreasing order of a_i/α_i. This conclusion leads naturally, in the case of scheduling the jobs on several identical machines, to the following simple heuristic: order the jobs by non-decreasing values of a_i/α_i, then take each job in turn and assign it to the machine that has the shortest makespan for the partial schedule assembled so far.
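As a check on this derivation, the recursion and the ratio rule can be sketched in a few lines of Python (an illustrative sketch under our own naming, with each job represented as a pair (a, alpha)); brute-force enumeration over all permutations confirms that the a_i/α_i order attains the minimum makespan on a small instance:

```python
from itertools import permutations

def makespan(seq):
    """C_n for jobs processed in the given order: C_i = (1 + alpha_i)*C_{i-1} + a_i."""
    c = 0.0
    for a, alpha in seq:
        c = (1.0 + alpha) * c + a
    return c

def ratio_rule(jobs):
    """Browne and Yechiali's optimal order: non-decreasing a_i / alpha_i."""
    return sorted(jobs, key=lambda j: j[0] / j[1])

jobs = [(50, 0.2), (40, 0.5), (60, 0.1), (45, 0.3)]
best = min(makespan(p) for p in permutations(jobs))
assert abs(makespan(ratio_rule(jobs)) - best) < 1e-9
```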
However, in this case, too, the task is, once again, to partition the set of n jobs into m subsets and assign each to a single machine such that the longest completion time is minimized, knowing a priori that each subset must be sequenced on its respective machine in non-decreasing order of a_i/α_i. This idea is used in the local-search-based algorithms developed in this work.
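The resulting list-scheduling heuristic can be sketched as follows (our illustration, not the authors' code; jobs are (a, alpha) pairs, and a job placed on a machine starts at that machine's current makespan):

```python
def ratio_heuristic(jobs, m):
    """Order jobs by non-decreasing a/alpha, then give each job in turn to the
    machine with the currently shortest makespan; because jobs are taken in
    ratio order, the within-machine order is automatically correct."""
    spans = [0.0] * m                  # current makespan per machine
    subsets = [[] for _ in range(m)]
    for a, alpha in sorted(jobs, key=lambda j: j[0] / j[1]):
        k = min(range(m), key=spans.__getitem__)
        subsets[k].append((a, alpha))
        spans[k] = (1.0 + alpha) * spans[k] + a   # the recursion above
    return subsets, spans
```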

Steepest descent search (SD)
SD is started with a partition of the n jobs into m subsets. This partition could either be random or that arrived at by a suitable heuristic, e.g. the LDT heuristic for the simple deterioration case. At each step, the best partition in the neighbourhood of the current one, i.e. the partition that leads to the largest decrease in the makespan, is identified and adopted. The search is terminated when the current partition cannot be improved upon.
The neighbourhood is defined as the set of partitions that can be achieved by a single move from the current partition. A single move could be a transfer of one job from one subset to another, or a swap of two jobs belonging to two different subsets. However, because only a move involving the jobs assigned to the machine with the longest completion time, i.e. the machine associated with the makespan, will affect the makespan, the neighbourhood is defined dynamically by these moves. Thus, at each step, the following is carried out.
Step 1. Calculate the makespan and identify the associated machine and the jobs assigned to it.
Step 2. Evaluate all the moves (transfers and swaps) involving the jobs on the makespan machine and identify the one that would lead to the biggest improvement of the makespan. If there is no such move, stop: the current solution is a local optimum.
Step 3. Carry out the move identified in step 2 and go to step 1.
It is worth noting that the calculation of the makespan in step 1 and the evaluation of the moves in step 2 are carried out incrementally, rather than from scratch.
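For the simple-deterioration case, where a subset's makespan is t_0 times the product of (1 + α) over its jobs, the three steps above can be sketched as follows (a minimal Python sketch under our own naming; move evaluation is incremental, as noted, via division and multiplication by (1 + α)):

```python
def steepest_descent(subsets, t0=1.0):
    """Repeatedly apply the best transfer or swap involving the makespan
    machine; stop at a local optimum.  Each subset is a list of alphas."""
    subsets = [list(s) for s in subsets]
    def span(s):
        out = t0
        for a in s:
            out *= 1.0 + a
        return out
    while True:
        spans = [span(s) for s in subsets]
        src = max(range(len(subsets)), key=spans.__getitem__)  # step 1
        best, move = spans[src], None
        for i, a in enumerate(subsets[src]):                   # step 2
            for dst in range(len(subsets)):
                if dst == src:
                    continue
                # transfer job i to machine dst; the other machines are
                # unchanged and already below the current makespan
                cand = max(spans[src] / (1.0 + a), spans[dst] * (1.0 + a))
                if cand < best - 1e-12:
                    best, move = cand, (i, dst, None)
                # swap job i with job j on machine dst
                for j, b in enumerate(subsets[dst]):
                    cand = max(spans[src] * (1.0 + b) / (1.0 + a),
                               spans[dst] * (1.0 + a) / (1.0 + b))
                    if cand < best - 1e-12:
                        best, move = cand, (i, dst, j)
        if move is None:
            return subsets                                     # local optimum
        i, dst, j = move                                       # step 3
        a = subsets[src].pop(i)
        if j is not None:
            subsets[src].append(subsets[dst].pop(j))
        subsets[dst].append(a)
```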

Simulated annealing implementation
Proposed initially by Kirkpatrick et al. (1983), simulated annealing (SA) has been applied successfully to a large number of different combinatorial optimization problems. It is based on an analogy between the process of finding an optimal solution of a combinatorial optimization problem and the process of annealing a solid to its minimum energy state in statistical physics.
SA employs a randomized move-acceptance criterion in order to escape poor-quality local minima. Whereas local search descent methods, like the steepest descent algorithm presented above, do not accept non-improving moves at any iteration, SA does so with certain probabilities. These probabilities are determined by a control parameter T, called the temperature, which tends to zero according to a deterministic cooling schedule. For the problem in hand, the monotonic cooling schedule of Lundy and Mees (1986) was adopted.
The steps of the algorithm designed for the problem in hand are as follows.
Step 1. Adopt the solution given by the steepest descent algorithm as an initial solution S.
Step 2. Calculate the initial temperature (Aarts and Van Laarhoven 1985):

T_s = Δ⁺ / ln(1/x),

where Δ⁺ is the average cost increase over the m⁺ cost-increasing moves found during the steepest descent search, and 0 < x < 1 is the acceptance ratio. In the current scheme, x was set to 0.95.
Step 3. Select a solution S′ ∈ N(S). This is carried out by identifying the best move involving the jobs on the current makespan machine.
Step 4. Compute Δ = cost(S′) − cost(S). If Δ ≤ 0, or if exp(−Δ/T) > r, where r is a uniform random number, 0 < r < 1, then accept the new solution and set S = S′; otherwise, retain S.
Step 5. Decrement the temperature according to the Lundy and Mees schedule:

T ← T / (1 + βT),

where β ≪ 1/U, and U is an upper bound on the absolute value of move cost changes. In the current scheme, β was set to 0.0001/U, with U being the largest absolute move value found during the steepest descent.
Step 6. Stop if the stopping criterion is met; otherwise go to step 3. For the case of simple linear deterioration, the stopping criterion chosen is proximity to the lower bound or reaching the final temperature, while for the case with basic processing times, the criterion chosen was reaching the final temperature. Proximity to the lower bound was assessed by calculating the gap:

g = (Solution cost − Lower bound) / Lower bound.

If g ≤ 0.5%, the solution is deemed to be close enough to the optimum. The final temperature T_f was set to 0.001 T_s.
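Assuming m ≥ 2 machines and α_i > 0, steps 2 to 6 can be sketched for the simple-deterioration case as follows. This is an illustrative reconstruction, not the authors' code: for brevity it proposes a single random transfer off the makespan machine rather than the best move of step 3, it uses the initial cost as a crude stand-in for the quantities carried over from the descent phase, and it enlarges β so a run is short; all names are ours.

```python
import math
import random

def simulated_annealing(subsets, t0=1.0, x=0.95, beta_scale=1.0,
                        tf_ratio=0.001, seed=0):
    """Metropolis acceptance with the Lundy-Mees schedule T <- T / (1 + beta*T)."""
    rng = random.Random(seed)
    subsets = [list(s) for s in subsets]
    span = lambda s: t0 * math.prod(1.0 + a for a in s)
    cost = lambda: max(span(s) for s in subsets)
    cur = best = cost()
    ts = cur / math.log(1.0 / x)       # initial temperature, acceptance ratio x (step 2)
    beta = beta_scale / cur            # stand-in for 0.0001 / U in the paper's scheme
    t = ts
    while t > tf_ratio * ts:           # stop at the final temperature (step 6)
        src = max(range(len(subsets)), key=lambda k: span(subsets[k]))
        dst = rng.choice([k for k in range(len(subsets)) if k != src])
        i = rng.randrange(len(subsets[src]))
        a = subsets[src].pop(i)        # propose a transfer move (step 3, simplified)
        subsets[dst].append(a)
        new = cost()
        if new <= cur or rng.random() < math.exp(-(new - cur) / t):
            cur = new                  # accept (step 4)
            best = min(best, cur)
        else:
            subsets[dst].pop()         # reject: undo the move
            subsets[src].insert(i, a)
        t = t / (1.0 + beta * t)       # Lundy-Mees cooling (step 5)
    return subsets, best
```

A production version would also record the best partition itself, not just its cost, and would use both transfer and swap moves, as in the descent phase.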

Computational experience
The effectiveness of the developed algorithms was assessed through extensive computational experimentation on 16 problem classes, each with a different jobs/machines combination. Twenty problem instances of each class were created, with deterioration rates varying randomly in the interval (0, 1), while the basic processing times for the relevant case were drawn from a normal distribution with a mean of 50 and a standard deviation of 10. All algorithms were coded in Turbo Pascal and run on a Pentium 133 MHz PC.

Simple linear deterioration
For each problem instance, the heuristic is applied first. If the gap with respect to the lower bound is greater than 0.5%, then the search is continued using the steepest descent method. If the local optimum arrived at in this way still has a gap greater than 0.5%, the search for a better solution is continued by simulated annealing. Table 1 shows, for each problem class, the number of problem instances solved at the first two stages, as well as the number of the remaining instances that were tackled by SA. The average computing times for SD and SA are shown, but for the heuristic these were so small as to be negligible.
The results show that, as the number of jobs increases, the problem becomes easier; the problem instances from the classes with 100 and 500 jobs are almost all solved in the first two stages, without recourse to SA. It is also clear that, for the same number of jobs, problem difficulty is affected by the number of machines: the higher the number of machines, the more difficult the problem. For the more difficult problem instances, SA is clearly superior to the heuristic and to SD. The results also show that solution times are very small, even for those problem instances where SA proved to be necessary. The efficacy and efficiency of both SD and SA are attributable to the judicious choice of moves.
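The staged control flow used in these experiments can be sketched generically as follows (our sketch; the stage functions and the 0.5% tolerance are passed in, and the lambdas in the example are placeholders, not real solvers):

```python
def staged_solve(heuristic, descend, anneal, lb, tol=0.005):
    """Run the heuristic, then SD, then SA, stopping as soon as the gap
    (cost - lb) / lb falls within tol.  Each stage returns (solution, cost)."""
    sol, cost = heuristic()
    if (cost - lb) / lb <= tol:
        return sol, cost
    sol, cost = descend(sol)
    if (cost - lb) / lb <= tol:
        return sol, cost
    return anneal(sol)

# placeholder stages: heuristic gap 6%, descent gap 0.4%, so SA never runs
sol, cost = staged_solve(lambda: ("h", 106.0),
                         lambda s: ("d", 100.4),
                         lambda s: ("a", 100.1),
                         lb=100.0)
```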

Jobs with basic processing time
In the absence of a computed lower bound, every problem instance was tackled by a three-stage solution process: the heuristic is applied first to provide an initial solution, followed by SD and then SA. The results obtained show that SA clearly outperforms SD. However, the question arises as to whether SA would still be superior to a random search that employs SD. To resolve this question, SD was run 20 times, each time starting from a random partition of the jobs, and the best result obtained was compared with the solution found by SA.

Conclusion
Scheduling problems usually involve jobs with constant, independent processing times. However, situations arise where processing times are not constant but increase over time, i.e. the jobs deteriorate, and the processing times are therefore interdependent.
In this work, scheduling jobs with linear deterioration on parallel, identical machines has been considered. A steepest descent search scheme and a simulated annealing search scheme for solving the resulting NP-hard problem have been designed and implemented. The efficacy and efficiency of these schemes, as shown by the results of extensive computational experimentation on a large set of randomly generated problem instances, are attributable to the effective move strategy designed for them.
Search schemes similar to those developed in this work should be of use in designing solution algorithms for other parallel-machine scheduling problems involving deterioration models other than those considered here.