
Uniform Parallel Machine Scheduling Problem with Controllable Delivery Times

  • Kai Li, Hui Li, Bayi Cheng and Qing Luo
Published/Copyright: 25 December 2015

Abstract

This paper considers the uniform parallel machine scheduling problem with controllable delivery times, in which the delivery times of jobs are linear decreasing functions of the consumed resource. The objective is to minimize the maximum completion time under the constraint that the total resource consumption does not exceed a given limit. For this NP-hard problem, we propose a resource allocation algorithm, named RAA, built on feasible solutions of the uniform parallel machine scheduling problem with fixed delivery times. We prove that RAA obtains the optimal resource allocation scheme for any given scheduling scheme in O(n log n) time. Algorithms based on the heuristics LDT and LPDT and on simulated annealing are then proposed to solve the uniform parallel machine scheduling problem with controllable delivery times. The accuracy and efficiency of the proposed algorithms are tested on instances with 40 to 200 jobs and 2 to 8 machines. The computational results indicate that the SA approach is promising and capable of solving large-scale problems in a reasonable time.

1 Introduction

In production scheduling, the tails of jobs are common scheduling parameters that correspond to the post-processing times of some products in practice, such as the cooling of hot products or a shaded drying process. In particular, delivery times are treated as tails when scheduling in the direct distribution mode, and delivery time is generally regarded as a synonym for tail in the scheduling literature. Scheduling problems subject to delivery times (equivalently, tails) have been researched extensively.

Carlier[1] first considered the problem of scheduling independent jobs with tails on m identical machines to minimize the makespan. Following him, Drozdowski and Kubiak[2] considered the scheduling of parallel tasks, presenting a linear program to find an optimal schedule for a given sequence with tails. Lancia[3] dealt with the problem of assigning a set of n jobs with tails to one of two unrelated parallel machines and scheduling each machine so that the makespan is minimized. Sourd and Nuijten[4] discussed scheduling problems that combine tails and deadlines or, equivalently, delivery times and deadlines. Mauguière et al.[5] considered the representation of a set of dominant schedules by a sequence of groups of permutable jobs in a single machine problem with tails. Gharbi and Haouari[6] considered the problem of minimizing the makespan on identical parallel machines subject to delivery times. Vakhania[7] studied the problem of scheduling jobs with tails on a single machine to minimize the makespan. Haouari and Gharbi[8] investigated new lower bounds for the scheduling problem on identical parallel machines with tails. Gharbi and Haouari[9] addressed makespan minimization for identical parallel machines subject to tails. Boxma and Zwart[10] gave an overview of recent research on the impact of scheduling on the tail behavior of the response time of a job. Li and Yang[11] considered the uniform parallel machine scheduling problem with unequal release dates and delivery times to minimize the maximum completion time. The delivery times (or tails) are all assumed to be constant parameters in the above-mentioned classical deterministic settings, so these works can be classified as scheduling problems with fixed delivery times.

In reality, production efficiency can be raised by investing additional resources, such as money, energy, or catalysts. In existing work, it is generally assumed that additional resource input affects the release times or the processing times. Release time is another common parameter in scheduling problems: a job can only be processed after its release time, which is why the release time is also called the head. For example, Janiak[12] studied an extension of the classical single machine scheduling problem in which each release date is a positive linear decreasing function of the amount of a common resource consumed. For the same problem as studied in [12], Li et al.[13] designed a simulated annealing algorithm to obtain near-optimal solutions of high quality; their computational results show that the algorithm is promising and capable of solving large-scale problems in a reasonable amount of time. Zhang et al.[14] considered single-machine scheduling problems in which the release time is a positive, strictly decreasing function of the resource consumption.

For controllable processing time problems, Wei and Wang[15] considered single-machine scheduling problems in which the processing time of a job is a function of its starting time and its resource allocation. Zhao and Tang[16] considered single machine scheduling problems with deteriorating jobs whose processing times are decreasing linear functions of their starting times. Wang and Wang[17] studied a single-machine earliness-tardiness scheduling problem with due date assignment, in which the processing time of a job is a function of its starting time and its resource allocation. Wang and Wang[18] considered scheduling problems with convex resource dependent processing times and deteriorating jobs, in which the processing time of a job is a function of its starting time and its convex resource allocation. Janiak and Portman[19] considered a single machine scheduling problem with job processing times dependent on a continuously divisible resource. Li et al.[20] considered the identical parallel machine problem of minimizing the makespan with controllable processing times, in which the processing times are linear decreasing functions of the consumed resource; a simulated annealing algorithm was designed to obtain near-optimal solutions of high quality. Hsu and Yang[21] analyzed unrelated parallel-machine scheduling and resource allocation problems with position-dependent deteriorating jobs, in which each job's processing time can be compressed by incurring an additional cost.

In many practical cases, the job tails (delivery times) can be shortened by using additional resources to increase customer satisfaction, just as release times and processing times can. In such cases, each delivery time is a decision variable to be determined by the scheduler, who can take advantage of this flexibility to improve system performance. Scheduling problems with variable delivery times are very interesting from both the practical and the theoretical point of view. For instance, such a problem arises in steel production, where color-coated steel sheets must be dried in gas-fired baking ovens. The drying time is inversely proportional to the gas flow intensity, so the drying time of each color-coated steel sheet may be regarded as a tail whose value is a linear decreasing function of the consumed gas. Although scheduling problems with controllable release times or controllable processing times have been extensively studied, to the best of our knowledge there is no prior work on controllable delivery times.

Consider a manufacturer that has m machines for processing jobs. Some of the machines are newer models while others are older; the machines are functionally the same and differ only in speed. Uniform machines are thus parallel machines with different processing speeds, and the uniform parallel machine problem is an extended version of the identical parallel machine problem, in which all machines have the same speed. In this paper we study a class of uniform parallel machine scheduling problems with controllable delivery times, in which the delivery times of jobs are assumed to be linear decreasing functions of the consumed resource. For brevity, UPCD stands for "uniform parallel machine scheduling problem with controllable delivery times", and UPFD for "uniform parallel machine scheduling problem with fixed delivery times".

To solve the new UPCD problem, we first extend the existing algorithms for the corresponding UPFD problem, and then introduce simulated annealing to obtain solutions of higher quality. Simulated annealing, a global optimization algorithm based on neighborhood search, has a strong ability to escape local optima and to find the global optimum or a near-optimum, and is largely insensitive to the initial solution. Metropolis[22] first proposed the simulated annealing algorithm, an optimization algorithm simulating the solid annealing process. Kirkpatrick et al.[23] applied simulated annealing to combinatorial optimization. Yin et al.[24] addressed a two-agent scheduling problem on a single machine where the objective is to minimize the total weighted earliness cost of all jobs while keeping the earliness cost of one agent at or below a fixed level Q; a simulated annealing algorithm was developed to derive near-optimal solutions. Bank et al.[25], Nouria et al.[26] and Dai et al.[27] applied simulated annealing to flow shop scheduling problems. Naderi et al.[28] designed a simulated annealing algorithm for the job shop scheduling problem with sequence-dependent setup times. Shafia et al.[29] modeled the train scheduling problem as a job shop scheduling problem and developed a simulated annealing algorithm to solve large-scale instances.

The paper is organized as follows. In Section 2, the problem description is provided. A resource allocation algorithm is proposed in Section 3. Section 4 is devoted to three algorithms for solving the considered UPCD problem. Section 5 lists a number of computational results to analyze the performance of the three algorithms. Finally, some concluding remarks are provided in Section 6.

2 Problem description

This paper considers a class of UPCD problems, in which the jobs' delivery times are assumed to be linear decreasing functions of the resource consumed. The objective is to minimize the maximum completion time under a given total resource budget. The completion time of a job equals the sum of its finish time and its delivery time. Assume there are m machines Mi (i = 1, 2, ..., m) with different processing speeds, the speed of machine Mi being si (si > 0). Given n jobs Jj (j = 1, 2, ..., n), the basic processing time of Jj is pj (pj > 0) when it is processed on a machine with speed 1. Job Jj (j = 1, 2, ..., n) can be processed on any machine, and if Jj is processed on machine Mi, its actual processing time is pij = pj/si. Job Jj undergoes a post-process after completion, whose duration qj is a linear decreasing function of the resource uj consumed: qj = q̄j − uj, where q̄j (q̄j ≥ 0) is the basic delivery time of job Jj without any additional resource, and uj (0 ≤ uj ≤ q̄j) is the additional resource used to shorten the delivery time. In the three-field notation the problem can be denoted Qm | qj = q̄j − uj, Σuj ≤ Û | Cmax, where Qm means the machine environment is uniform parallel machines, qj = q̄j − uj means that the delivery times decrease linearly with the additional resource, and the objective is to minimize Cmax under the constraint that the total resource consumption does not exceed a given limit Û.

Let Π be the set of all scheduling schemes; π (π ∈ Π) is a feasible scheduling scheme, written π = {π^1, π^2, ..., π^m}, where π^i (i = 1, 2, ..., m) is the sub-schedule on machine Mi. We use |π^i| to denote the number of jobs in the sub-schedule π^i and π_k^i (0 ≤ k ≤ |π^i|) to denote the kth job in π^i, with k = 0 if and only if |π^i| = 0, i.e., when there are no jobs in π^i. Similarly, U is the set of all resource allocation schemes; u (u ∈ U) is a feasible resource allocation scheme, written u = {u^1, u^2, ..., u^m} correspondingly, where u^i (i = 1, 2, ..., m) is the sub-scheme formed by the resource allocations of the jobs in the sub-schedule on machine Mi, and u_k^i (0 ≤ k ≤ |π^i|) is the additional resource allocated to the kth job of π^i. Denote the processing time and the delivery time of π_k^i by p_k^i (p_k^i = p(π_k^i)/si) and q_k^i (q_k^i = q̄_k^i − u_k^i), where p(π_k^i)/si and q̄_k^i are the basic processing time and the basic delivery time of π_k^i. A feasible solution (π, u) is thus a two-tuple consisting of a scheduling scheme π and its corresponding resource allocation scheme u. Where no ambiguity arises, U(π, u) denotes the total resource consumed by the feasible solution (π, u), that is, U(π, u) = Σ_{i=1}^{m} Σ_{k=1}^{|π^i|} u_k^i.

Given a feasible solution (π, u), let S(π_k^i), c(π_k^i) and C(π_k^i, u_k^i) denote the starting time, the finish time and the completion time (the sum of the finish time and the delivery time) of job π_k^i. Then

(1) S(π_k^i) = Σ_{r=1}^{k−1} p_r^i for 1 < k ≤ |π^i|; S(π_k^i) = 0 for k = 1
(2) c(π_k^i) = S(π_k^i) + p_k^i = Σ_{r=1}^{k} p_r^i, 1 ≤ k ≤ |π^i|
(3) C(π_k^i, u_k^i) = c(π_k^i) + q_k^i = Σ_{r=1}^{k} p_r^i + q̄_k^i − u_k^i, 1 ≤ k ≤ |π^i|
(4) Cmax(π, u) = max_{1≤i≤m} max_{1≤k≤|π^i|} C(π_k^i, u_k^i) = max_{1≤i≤m} max_{1≤k≤|π^i|} (Σ_{r=1}^{k} p_r^i + q̄_k^i − u_k^i)

The objective of this problem is to find π* and its corresponding u* such that

(5) Cmax(π*, u*) = min_{π∈Π} min_{u∈U} Cmax(π, u)
(6) s.t. U(π, u) ≤ Û
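Equations (1)–(4) are easy to sanity-check in code. The following is a minimal sketch (Python here for illustration; the paper's own implementation is in C++). The names `schedule`, `u`, `speeds`, `p` and `qbar` are hypothetical: `schedule[i]` lists the job indices on machine Mi in processing order, `u[i][k]` is the resource allocated to the kth job there, and `qbar[j]` is the basic delivery time q̄j.

```python
def cmax(schedule, u, speeds, p, qbar):
    """Maximum completion time per Eqs. (1)-(4): on each machine, finish times
    accumulate p_j / s_i (Eq. (2)), each job's completion time adds its
    shortened delivery time qbar_j - u (Eq. (3)), and Cmax is the overall
    maximum over machines and positions (Eq. (4))."""
    best = 0.0
    for i, jobs in enumerate(schedule):
        t = 0.0                                      # finish time so far on machine M_i, Eq. (1)
        for k, j in enumerate(jobs):
            t += p[j] / speeds[i]                    # c(pi_k^i), Eq. (2)
            best = max(best, t + qbar[j] - u[i][k])  # C(pi_k^i, u_k^i), Eqs. (3)-(4)
    return best
```

For instance, with speeds (2, 1), basic processing times (4, 2, 3), basic delivery times (5, 1, 2), jobs 1 and 3 on the fast machine, job 2 on the slow one, and one unit of resource on job 1, the makespan works out to 6.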

3 Resource Allocation Algorithm

This section takes (π, 0) to be an arbitrary feasible solution of UPCD in which every job's resource allocation is 0; obviously (π, 0) is also a feasible solution of the corresponding UPFD problem. We then construct a resource allocation algorithm, named RAA, which extends existing UPFD algorithms to solve UPCD.

Algorithm RAA (Resource allocation algorithm of UPCD)

Step 1 Given a feasible solution (π, 0) of UPCD;

Step 2 Calculate the completion times C(π, 0) of the jobs, and sort the n jobs in non-increasing order of completion time;

Step 3 Let j = 1, C(πn+1, 0) = 0;

Step 4 If j > n, then end; else Δ = C(πj, 0) – C(πj+1, 0);

Step 5 qmin = q1; for k = 1 to j: qmin = min(qmin, qk);

Step 6 If Δ > qmin and Û ≥ j · qmin, then Cmax(π, u) = C(π1, 0) − qmin, return Cmax(π, u), end.

Step 7 If Δ > qmin and Û < (j + 1) · qmin, then Cmax(π, u) = C(π1, 0) − Û/(j + 1), return Cmax(π, u), end.

Step 8 If Δ < qmin and Û < (j + 1) · Δ, then Cmax(π, u) = C(π1, 0) − Û/(j + 1), return Cmax(π, u), end.

Step 9 If Δ < qmin and Û ≥ (j + 1) · Δ, then Û = Û − (j + 1) · Δ. If Û = 0, then return Cmax(π, u), end.

Step 10 For k = 1 to j: qk = qk − Δ, C(πk, 0) = C(πk, 0) − Δ; if qk = 0, then return Cmax(π, u), end; else j = j + 1, go to Step 4.
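The level-by-level reduction of Steps 4–10 solves a min-max allocation: minimize max_j (C_j(0) − u_j) subject to 0 ≤ u_j ≤ q̄_j and Σ u_j ≤ Û. As a cross-check, the same optimal value can also be found by bisecting on the target level T, since the resource needed to bring every completion time down to T is a decreasing, piecewise-linear function of T. The sketch below (Python, hypothetical names) is this simpler numerical equivalent, not the paper's exact O(n log n) routine:

```python
def optimal_cmax_after_allocation(c0, qbar, budget, iters=200):
    """Smallest achievable maximum completion time when job j's completion time
    c0[j] may be reduced by at most qbar[j] (its delivery time) and the total
    reduction may not exceed `budget`.  A level T is feasible iff
    sum_j min(max(0, c0[j] - T), qbar[j]) <= budget."""
    lo = max(c - q for c, q in zip(c0, qbar))   # below this, some job cannot reach T
    hi = max(c0)                                # T = current max is always feasible
    for _ in range(iters):                      # bisect the piecewise-linear cost curve
        mid = (lo + hi) / 2.0
        need = sum(min(max(0.0, c - mid), q) for c, q in zip(c0, qbar))
        if need <= budget:
            hi = mid
        else:
            lo = mid
    return hi
```

For example, with completion times (10, 8, 5), delivery-time caps of 4 each, and a budget of 3, the top two jobs are drained to a common level of 7.5; with an ample budget the answer is limited only by the caps.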

Theorem 1

For any feasible solution (π, 0) of UPCD, suppose that the resource allocation scheme obtained by the RAA algorithm is u*; then u* is the optimal resource allocation scheme for the scheduling scheme π.

Proof

Assume that (π, 0) is a feasible solution of UPCD and u* = {u1, u2, ..., un} is the resource allocation scheme obtained by the RAA algorithm. From the way RAA allocates resources to jobs, the completion times satisfy C(π1, 0) ≥ C(π2, 0) ≥ ··· ≥ C(πn, 0). The property is proven by induction on n. For n = 1 the claim clearly holds. For n = 2, let Δ = C(πn−1, 0) − C(πn, 0), and let qmin = q1; for k = 1 to n − 1: qmin = min(qmin, qk). There are three cases to consider:

Case 1: Û ≤ (n − 1) · Δ; u* is clearly the optimal resource allocation scheme.

Case 2: Û ≤ (n − 1) · qmin and qmin ≤ Δ; u* is clearly the optimal resource allocation scheme.

Case 3: Û ≤ (n − 1) · qmin and qmin > Δ. After allocating resources to scheduling scheme π using the RAA algorithm, u* = {u1, u2} is a resource allocation scheme of π with Cmax(π, u*) = C(π1, 0) − (u1 + u2), u1 = Δ, and u2 = min(Û/n, qmin), where qmin = q1; for k = 1 to n: qmin = min(qmin, qk).

Resources are allocated first to the job with the longest completion time, then to the second-longest. u1 is the maximum amount by which job J1's delivery time can be shortened, and similarly u2 for job J2. Thus u* = {u1, u2} is the optimal resource allocation scheme. For n = 3, treat the first two jobs as one job after one allocation round; the situation is then similar to n = 2. For n = k, treat the first j jobs as one job after one allocation round; the situation is then similar to n = k − 1. Therefore, the resource allocation scheme obtained by the RAA algorithm is optimal.

Theorem 2

The time complexity of resource allocation algorithm RAA is O(n log n).

4 Algorithms for UPCD

The resource allocation algorithm RAA of the previous section finds the optimal resource allocation scheme in polynomial time for any given feasible solution (π, 0) of UPCD; this section therefore designs algorithms for UPCD based on it. We first allocate resources to the solutions of the LDT (largest delivery time first) and LPDT (largest processing and delivery time first) algorithms to obtain solutions of the corresponding UPCD problem, and then construct an optimization algorithm for UPCD based on simulated annealing to obtain satisfactory solutions of higher quality.

4.1 Algorithms for UPCD Based on Heuristic Rules

For the UPFD problem, the most common heuristic rules are LDT and LPDT. [30] addressed the LDT algorithm for UPFD and demonstrated that LDT is a ((m − 1)s1/Σ_{i=1}^{m} si + 1)-approximation algorithm for UPFD, where s1 is the fastest machine speed. Although the LDT rule obtains the optimal solution for the corresponding single machine problem with fixed delivery times, it has a clear defect for UPFD. Because the machines in the uniform parallel machine problem have different speeds, the faster machines should usually be given priority so as to make full use of their processing capacity. Giving precedence to the job with the longer delivery time, as LDT does, may assign a short job to a fast machine and a long job to a slow one; when the delivery times of the two jobs differ only slightly, the long job then gets a very large completion time on the slow machine, yielding a poor solution. In [31] we constructed a heuristic named LPDT, which gives priority to the job with the largest sum of delivery and processing time when assigning jobs to machines. It usually obtains a better solution than LDT and avoids the large completion times caused by assigning long jobs to slow machines. We therefore perform the optimal resource allocation on top of the LDT and LPDT algorithms for UPFD, and thereby solve the corresponding UPCD problem.

Algorithm LDT-RAA (LPDT-RAA)

Step 1 Sort all jobs into the job queue in non-increasing order of delivery time (in LPDT-RAA, sort the jobs in non-increasing order of the sum of processing time and delivery time);

Step 2 Put the first job of the queue on the machine with the smallest Cmax (if there is more than one, choose the slower machine), and delete the job from the queue;

Step 3 If the job queue is empty, then the solution (π, 0) has been generated, go to Step 4; else go to Step 2;

Step 4 Apply the RAA algorithm to the solution (π, 0) to obtain the resource allocation scheme u; the solution (π, u) of UPCD is thus generated. Return (π, u) and Cmax(π, u), end.
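Steps 1–3 can be sketched as a list-scheduling routine (Python, illustrative names). Step 2 is ambiguous about whether "smallest Cmax" refers to the machine load before or after placement; this sketch uses one reasonable reading, placing each job on the machine where it would finish earliest and breaking ties toward the slower machine:

```python
def ldt_schedule(p, qbar, speeds, use_lpdt=False):
    """LDT / LPDT list scheduling sketch: sort jobs by qbar (LDT) or by
    p + qbar (LPDT), non-increasing, then place each job on the machine
    where it would finish earliest (tie -> slower machine)."""
    key = (lambda j: p[j] + qbar[j]) if use_lpdt else (lambda j: qbar[j])
    order = sorted(range(len(p)), key=key, reverse=True)
    finish = [0.0] * len(speeds)              # current finish time per machine
    schedule = [[] for _ in speeds]
    for j in order:
        # candidate finish time on each machine; tie-break toward lower speed
        i = min(range(len(speeds)),
                key=lambda i: (finish[i] + p[j] / speeds[i], speeds[i]))
        finish[i] += p[j] / speeds[i]
        schedule[i].append(j)
    return schedule
```

The returned nested list can then be handed to RAA to produce the full UPCD solution (π, u).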

4.2 Simulated Annealing Algorithm for UPCD

Simulated annealing is based on neighborhood search, so we first construct the neighborhood generation methods. Two are used here: the swap neighborhood and the insertion neighborhood.

A swap neighbor is obtained by exchanging the positions of a pair of selected jobs. The swap neighborhood 𝓝1 is the set of all swap neighbors of the current schedule. We generate two random integers r2, r3 (r2, r3 ∈ [1, n]; r2 ≠ r3) such that jobs Jr2 and Jr3 are not processed on the same machine. Exchanging Jr2 and Jr3 generates a new sequence (π′, 0), from which a new neighborhood solution (π′, u′) is obtained by the RAA algorithm.

An insertion neighbor is obtained by inserting a selected job into a different position in the current schedule. The insertion neighborhood 𝓝2 is the set of all insertion neighbors of the current solution. We generate two random integers r4 (r4 ∈ [1, n]) and r5 (r5 ∈ [1, m]) such that job Jr4 is not in the sub-schedule on machine Mr5. Inserting job Jr4 on machine Mr5 generates a new schedule (π′, 0), to which resources are allocated by the RAA algorithm to give the new neighborhood solution (π′, u′).
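Both neighborhood moves can be sketched as pure functions on the machine-wise job lists (Python; `schedule[i]` holds the job indices on machine Mi, and the RAA step that turns (π′, 0) into (π′, u′) is omitted here):

```python
import random

def swap_neighbor(schedule, rng):
    """N1 sketch: exchange two jobs sitting on different (non-empty) machines."""
    s = [list(m) for m in schedule]
    i1, i2 = rng.sample([i for i in range(len(s)) if s[i]], 2)  # two distinct machines
    k1 = rng.randrange(len(s[i1]))
    k2 = rng.randrange(len(s[i2]))
    s[i1][k1], s[i2][k2] = s[i2][k2], s[i1][k1]
    return s

def insertion_neighbor(schedule, rng):
    """N2 sketch: move one job from its machine to a position on another machine."""
    s = [list(m) for m in schedule]
    src = rng.choice([i for i in range(len(s)) if s[i]])
    job = s[src].pop(rng.randrange(len(s[src])))
    dst = rng.choice([i for i in range(len(s)) if i != src])
    s[dst].insert(rng.randrange(len(s[dst]) + 1), job)
    return s
```

Either move preserves the multiset of jobs; a swap keeps the per-machine job counts, while an insertion shifts one job between machines.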

Algorithm SA for UPCD

Step 1 Since SA is not strongly dependent on the initial solution, order all jobs with the LDT rule to obtain (π, 0), apply the RAA algorithm to generate the initial solution (π, u), and compute the maximum completion time Cmax(π, u);

Step 2 Set the initial temperature T = 100;

Step 3 If T ≤ ε (where ε = 0.001) or no new solution is accepted at the same temperature, then return (π, u) and Cmax(π, u), end.

Step 4 Set the iteration length L at the same temperature (where L := n/2);

Step 5 Generate a random number r1 (r1 ∈ [0, 1]);

Step 6 If r1 < 0.7, then go to Step 7; else go to Step 8;

Step 7 Generate the new neighborhood solution (π′, 0) from swap neighborhood 𝓝1, and get the corresponding solution (π′, u′) and Cmax(π′, u′); goto Step 9;

Step 8 Generate the new neighborhood solution (π′, 0) from insertion neighborhood 𝓝2, and get the corresponding solution (π′, u′) and Cmax(π′,u′); goto Step 9;

Step 9 ΔCmax := Cmax(π′, u′) − Cmax(π, u). If ΔCmax < 0, then (π, u) := (π′, u′), Cmax(π, u) := Cmax(π′, u′); go to Step 11;

Step 10 Generate a random number r6 (r6 ∈ [0, 1]). If exp(−ΔCmax/T) > r6, then (π, u) := (π′, u′), Cmax(π, u) := Cmax(π′, u′);

Step 11 L := L − 1. If L = 0, then T := αT (where α = 0.8) and go to Step 3; else go to Step 5.
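Steps 2–11 amount to the following generic SA skeleton (a sketch with the stated parameters T0 = 100, α = 0.8, ε = 0.001 and L = n/2; the `neighbor` argument stands in for the 𝓝1/𝓝2-plus-RAA move of Steps 6–8 and `cost` for Cmax):

```python
import math
import random

def simulated_annealing(init, cost, neighbor, rng, t0=100.0, alpha=0.8, eps=1e-3):
    """Generic SA loop mirroring Steps 2-11; returns the best state and cost seen."""
    cur, cur_cost = init, cost(init)
    best, best_cost = cur, cur_cost
    t = t0
    while t > eps:
        for _ in range(max(1, len(cur) // 2)):   # iteration length L per temperature
            cand = neighbor(cur, rng)
            delta = cost(cand) - cur_cost
            # Step 9: accept improvements; Step 10: accept worse moves w.p. e^(-delta/T)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                cur, cur_cost = cand, cur_cost + delta
                if cur_cost < best_cost:
                    best, best_cost = cur, cur_cost
        t *= alpha                               # Step 11: geometric cooling
    return best, best_cost
```

In the UPCD setting, `cost` would run RAA on the candidate schedule and return the resulting Cmax.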

Table 1

Experiments with shorter basic delivery times and more resources

m  n    Cmax(LDT-RAA)  Cmax(LPDT-RAA)  Cmax(SA)  Gap(SA)  Time(SA)
2  40   162.62         174.87          172.12    1.578    0.15
2  80   386.50         383.37          383.37    0.000    1.28
2  120  587.75         585.37          585.37    0.000    4.17
2  160  813.37         812.25          812.00    0.031    9.82
2  200  1005.88        1001.75         1005.00   0.025    19.67
4  40   120.13         114.50          112.38    1.852    0.14
4  80   247.20         243.38          242.13    0.514    0.56
4  120  379.62         371.00          369.50    0.404    4.14
4  160  515.80         512.00          510.60    0.273    9.90
4  200  635.37         630.80          630.25    0.087    19.65
6  40   62.50          60.50           59.00     2.480    0.12
6  80   135.25         128.60          127.00    1.244    1.26
6  120  197.60         194.90          193.00    0.515    4.25
6  160  271.00         266.90          266.00    0.337    10.00
6  200  332.40         327.80          327.40    0.122    20.25
8  40   44.75          39.88           38.00     4.714    0.11
8  80   86.10          81.60           80.22     1.691    1.28
8  120  126.44         123.70          122.78    0.744    4.19
8  160  173.50         168.11          168.00    0.065    10.25
8  200  212.22         207.89          207.67    0.106    20.77

5 Experiment Results and Analysis

In this section, we describe numerical experiments to evaluate the algorithms proposed in the previous sections. All algorithms were implemented in C++ with Bloodshed Dev-C++ 4.9.9.2. The experimental environment was a Pentium(R) 3.2 GHz dual-core CPU with 4 GB memory running Microsoft Windows XP Professional. All experimental data were generated randomly.

To make the experimental results more objective, four groups of results are reported, cross-combining longer/shorter delivery times with more/less resources. In each experiment, UPCD problems with 2, 4, 6, 8 machines and 40, 80, 120, 160, 200 jobs are considered. The parameters are chosen to reflect actual production environments: 1) the speed si of machine Mi is a random integer in [1, 10]; 2) the job length pj is a random integer in [1, 100]; 3) longer basic delivery times are drawn as q̄j ~ U(50, 100) and shorter ones as q̄j ~ U(0, 30); 4) "more resources" means Û = 3,000 and "little resources" means Û = 150.
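An instance generator following these parameter choices might look as below (Python sketch; the function name and flags are illustrative, not from the paper):

```python
import random

def make_instance(m, n, rng, long_delivery=True, big_budget=True):
    """Random UPCD instance per Section 5's settings: machine speeds in [1, 10],
    job lengths in [1, 100], basic delivery times ~ U(50, 100) or U(0, 30),
    and a resource budget of 3000 or 150."""
    speeds = [rng.randint(1, 10) for _ in range(m)]
    p = [rng.randint(1, 100) for _ in range(n)]
    lo, hi = (50, 100) if long_delivery else (0, 30)
    qbar = [rng.uniform(lo, hi) for _ in range(n)]
    budget = 3000.0 if big_budget else 150.0
    return speeds, p, qbar, budget
```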

Table 2

Experiments with longer basic delivery times and more resources

m  n    Cmax(LDT-RAA)  Cmax(LPDT-RAA)  Cmax(SA)  Gap(SA)  Time(SA)
2  40   179.37         175.00          172.12    1.573    0.14
2  80   386.00         383.88          383.37    0.133    1.20
2  120  588.88         585.50          585.37    0.022    4.41
2  160  819.12         812.75          812.00    0.093    9.81
2  200  1004.38        1002.25         1005.00   0.075    19.37
4  40   114.75         115.00          112.50    1.961    0.15
4  80   249.87         244.00          242.00    0.820    1.28
4  120  379.00         370.38          369.50    0.237    4.10
4  160  514.38         511.88          510.60    0.250    10.00
4  200  636.80         631.20          630.25    0.151    20.01
6  40   63.25          63.75           58.60     8.078    0.12
6  80   133.70         129.88          127.20    2.056    1.20
6  120  199.40         194.80          193.00    0.924    4.12
6  160  271.60         267.00          266.00    0.357    10.12
6  200  336.60         328.20          327.30    0.274    20.18
8  40   40.70          48.56           37.75     7.248    0.12
8  80   84.70          81.56           80.00     1.913    1.22
8  120  128.77         124.67          122.80    1.500    4.12
8  160  174.75         168.00          167.40    0.357    9.94
8  200  213.60         208.80          207.66    0.274    20.56

In these four tables, Cmax(LDT-RAA) is the objective value obtained by running RAA on the LDT schedule, and analogously for LPDT-RAA and SA. Gap(SA) is defined as follows:

(7) Gap(SA) = [min{Cmax(LDT-RAA), Cmax(LPDT-RAA)} − Cmax(SA)] / min{Cmax(LDT-RAA), Cmax(LPDT-RAA)} × 100

where min{Cmax(LDT-RAA), Cmax(LPDT-RAA)} is the smaller of the objective values of the LDT-RAA and LPDT-RAA algorithms and Cmax(SA) is the objective value of the SA algorithm. Gap(SA) thus expresses, as a percentage, by how much SA improves on the better of the other two algorithms' solutions, and Time(SA) is the running time of the SA algorithm in seconds.
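Eq. (7) in code, for concreteness (hypothetical function name):

```python
def gap_sa(c_ldt, c_lpdt, c_sa):
    """Eq. (7): percentage improvement of SA over the better of
    LDT-RAA and LPDT-RAA."""
    best_heur = min(c_ldt, c_lpdt)
    return (best_heur - c_sa) / best_heur * 100.0
```

For example, the first row of Table 4 (LPDT-RAA 174.88 vs. SA 172.12) gives a gap of about 1.578, matching the table.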

Table 3

Experiments with longer basic delivery times and little resources

m  n    Cmax(LDT-RAA)  Cmax(LPDT-RAA)  Cmax(SA)  Gap(SA)  Time(SA)
2  40   195.56         200.40          195.04    0.266    0.17
2  80   396.28         397.50          394.71    0.396    1.28
2  120  597.57         599.76          594.00    0.597    4.13
2  160  823.23         822.69          819.75    0.357    10.26
2  200  1011.13        1013.64         1010.23   0.089    20.17
4  40   146.52         147.97          142.75    2.573    0.15
4  80   266.61         270.23          265.70    0.341    1.29
4  120  392.19         398.10          389.69    0.637    4.39
4  160  532.13         534.44          530.48    0.310    10.31
4  200  651.60         655.44          650.15    0.222    20.84
6  40   104.94         105.14          103.03    0.876    0.12
6  80   164.01         162.79          160.00    0.631    1.25
6  120  224.82         228.41          223.66    0.516    4.31
6  160  277.32         300.90          295.57    0.558    10.59
6  200  358.39         361.32          356.49    0.530    20.12
8  40   92.36          95.13           91.96     0.433    0.10
8  80   119.93         121.73          118.83    0.834    1.31
8  120  160.33         162.64          159.58    0.468    4.34
8  160  203.52         206.33          202.66    0.423    10.14
8  200  243.27         245.57          242.07    0.493    21.11

The following can be seen from the experimental data in the four tables:

1) The SA algorithm improves the solution by at most 8.078%, at least 0.000%, and 0.922% on average. Except for the 2-machine 80-job and 120-job instances in Tables 1 and 4, where SA does not improve the solution quality (indicating that the objective values there are already very close to optimal), SA outperforms the LDT-RAA and LPDT-RAA algorithms on all instances.

2) The SA algorithm improves by 0.839% in Table 1, 1.431% in Table 2, 0.579% in Table 3 and 0.839% in Table 4 on average. When the delivery times are longer, the average improvement of SA with more resources is larger than with less resources; when resources are sufficient, the average improvement with longer delivery times is larger than with shorter ones. This is because SA can make full use of a larger resource budget while searching for the optimal solution, whereas with shorter delivery times the degree of resource utilization is limited by the delivery times themselves for a given budget.

Table 4

Experiments with shorter basic delivery times and little resources

m  n    Cmax(LDT-RAA)  Cmax(LPDT-RAA)  Cmax(SA)  Gap(SA)  Time(SA)
2  40   176.63         174.88          172.12    1.578    0.15
2  80   386.50         383.37          383.37    0.000    1.28
2  120  587.75         585.37          585.37    0.000    4.29
2  160  813.87         812.25          812.00    0.031    10.07
2  200  1005.88        1001.75         1001.50   0.025    19.53
4  40   120.13         114.50          112.83    1.852    0.14
4  80   247.20         243.38          242.13    0.514    0.23
4  120  379.63         371.00          369.50    0.404    4.12
4  160  515.80         512.00          510.60    0.273    9.92
4  200  635.37         630.80          630.25    0.087    19.85
6  40   60.25          60.50           59.00     2.480    0.12
6  80   135.25         128.60          127.00    1.244    1.21
6  120  197.60         194.00          193.00    0.515    4.21
6  160  271.00         266.90          266.00    0.337    10.26
6  200  332.40         327.80          327.40    0.122    20.17
8  40   44.75          39.88           38.00     4.714    0.12
8  80   86.10          81.60           80.22     1.691    1.21
8  120  126.44         123.70          122.78    0.744    4.20
8  160  173.50         168.11          168.00    0.065    10.26
8  200  212.22         207.89          207.67    0.106    20.82

3) In Tables 1, 2 and 4, the improvement of SA grows with the number of machines for a fixed number of jobs. The reason is that resources are more plentiful in those tables, and increasing the number of machines weakens the differences between jobs' completion times to some extent. For a fixed number of machines, the differences between the sorted completion times become smaller as the number of jobs grows, so the improvement of SA shrinks. Table 3 does not show these patterns because of its insufficient resources.

4) The objective values of LDT-RAA, LPDT-RAA and SA in Table 1 are essentially the same as in Table 4. With the shorter delivery times of Tables 1 and 4, the resources cannot be fully allocated to the jobs, so the same values occur in both tables.

5) Since the SA algorithm solves 200-job instances within 23 seconds, its computational efficiency is acceptable, and its solution accuracy is clearly superior to that of the other algorithms.

6 Conclusion

We address a class of uniform parallel machine scheduling problems in which the jobs' delivery times are linear decreasing functions of the consumed resource, the objective being to minimize the maximum completion time under a given total resource budget; for the resource allocation under this restriction, the RAA algorithm is designed. Considering the NP-hardness of the problem, we construct a simulated annealing algorithm, employing swap and insertion neighborhoods, to obtain approximately optimal solutions. To evaluate solution quality, the LDT-RAA and LPDT-RAA algorithms, built on existing heuristics, are also presented for the UPCD problem. Extensive experimental data and analysis show that the simulated annealing algorithm can effectively solve instances with up to 200 jobs within 23 seconds, with better quality than the other two algorithms.

Future research will be focused on solving large-scale hybrid flow shop and flexible job shop problems. We would expect that the good performance of SA would prove to be useful to achieve this goal, but this requires further investigation.


Supported by National Natural Science Foundation of China (71521001, 71471052, and 71202048), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (20120111120013)


Received: 2015-5-25
Accepted: 2015-6-29
Published Online: 2015-12-25

© 2015 Walter de Gruyter GmbH, Berlin/Boston
