
Preemptive Scheduling with Controllable Processing Times on Parallel Machines

Guiqing Liu, Kai Li and Bayi Cheng
Published/Copyright: February 25, 2015

Abstract

This paper considers several parallel machine scheduling problems with controllable processing times, in which the goal is to minimize the makespan. Preemption is allowed, and the processing times of the jobs can be compressed by some extra resources. Three resource use modes are considered. If all the jobs are released at the same time, the problems under all three modes can be solved in polynomial time, and the authors give the polynomial algorithm. When the jobs have different release dates, if all the resources are given at time zero, or the resources remaining from earlier stages can be used in later stages, the offline problems can be solved in polynomial time, but the online problems admit no optimal algorithm. If the jobs have different release dates and the resources remaining from earlier stages cannot be used in later stages, both the offline and the online problems can be solved in polynomial time.

1 Introduction

Generally, scheduling problems assume that the processing times of the jobs are fixed. In practice, however, the processing times may be compressed by extra resources, such as additional budget, manpower, or energy, so production efficiency can be improved by using these resources.

In this paper, we assume that a manufacturer with a parallel machine scheduling system can use some extra budget to compress the processing times of the jobs. Preemption is allowed. Suppose the extra budget can shorten the total workload by at most $\hat{X}$ time units. The objective is to minimize the makespan. Because the jobs may be released at different times, the manufacturer can use the total resources in one of the following three modes in a practical production environment:

Mode 1: The total resource $\hat{X}$ is available at time zero and can be used to compress any job;

Mode 2: The total resource $\hat{X}$ is divided into parts that become available at the respective release dates, and the resources remaining from earlier stages can be used in later stages;

Mode 3: The total resource $\hat{X}$ is divided into parts that become available at the respective release dates, but the resources remaining from earlier stages cannot be used in later stages.

The problems can be described as follows. Given $n$ independent jobs $J_j$ ($j = 1, 2, \cdots, n$), the processing time of job $J_j$ is $p_j = \bar{p}_j - x_j$, where $\bar{p}_j$ is the basic processing time without extra resource allocation and $x_j$ ($0 \le x_j \le \bar{p}_j$) is the decreased workload of job $J_j$. Each of the $n$ jobs can be processed on one of the $m$ identical parallel machines after it is released at $r_j$ ($r_j \ge 0$). A machine can process at most one job at a time, and a job can run on only one machine at a time. Because preemption is allowed, a job in process can be interrupted when some other job finishes or a new job is released. The processing time $p_j$ of job $J_j$ is the total processing time of all the pieces of job $J_j$. Let $C_{\max}^i$ be the maximal completion time on machine $M_i$, and define $C_{\max}^{\min} = \min_{i=1}^{m} C_{\max}^i$. In Modes 2 and 3, $\hat{X}_t$ denotes the resource that becomes available at time $t$, and $J_t = \{J_j \mid r_j = t\}$ is the set of jobs released simultaneously at time $t$. Without loss of generality, we assume that $\hat{X}_t < \sum_{J_j \in J_t} \bar{p}_j$. The objective is to minimize the makespan $C_{\max} = \max_{i=1}^{m} C_{\max}^i$. An offline schedule assumes complete knowledge of the job release dates and of the available resources at any time, whereas an online schedule does not.

If the job processing times cannot be compressed, several results are available on offline and online parallel machine scheduling with preemption. For the offline version with all jobs arriving at the same time, the problem can be solved in linear time by the well-known McNaughton's rule[1]. Hong and Leung[2] gave a modified McNaughton's rule, in which, whenever $\max_{j=1}^{n} p_j > \sum_{j=1}^{n} p_j / m$, the longest job is first assigned a machine of its own. For the online version, they also showed that the problem can be solved by the following algorithm: whenever new jobs are released, schedule the new jobs together with the unexecuted portions of the remaining jobs by the modified McNaughton's rule.
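For concreteness, here is a minimal Python sketch of the modified McNaughton wrap-around rule described above; the function name, the job representation (a dict from job id to remaining processing time) and the piece format (job, machine, begin, end) are our own conventions for illustration, not the paper's.

```python
def modified_mcnaughton(p, m, start=0.0):
    """Preemptive schedule minimizing the makespan on m identical machines
    (Hong and Leung's modification of McNaughton's wrap-around rule).
    `p` maps job ids to (remaining) processing times; returns schedule
    pieces (job, machine, begin, end)."""
    jobs = sorted(p.items(), key=lambda kv: -kv[1])     # longest job first
    pieces, machine = [], 0
    # While the longest job exceeds the average load, give it its own machine.
    while jobs and jobs[0][1] > sum(q for _, q in jobs) / m:
        j, q = jobs.pop(0)
        pieces.append((j, machine, start, start + q))
        machine, m = machine + 1, m - 1
    if not jobs:
        return pieces
    # Wrap the remaining jobs around the remaining machines.
    cmax = start + sum(q for _, q in jobs) / m
    t = start
    for j, q in jobs:
        while q > 1e-12:
            run = min(q, cmax - t)
            pieces.append((j, machine, t, t + run))
            q, t = q - run, t + run
            if t >= cmax - 1e-12:       # this machine is full, move to the next
                machine, t = machine + 1, start
    return pieces
```

On the three-machine instance with basic times 3, 3, 2 used in Example 1 below, the first two jobs each receive their own machine and the third is placed on the last machine, giving makespan 3.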

If the job processing times are controllable, several papers have considered scheduling problems without preemption. Most of them focus on single machine problems with different objective functions, such as minimizing the total compression cost plus weighted completion times[3-5], minimizing the total compression cost plus total absolute differences penalties[6], the tradeoff curve between the number of tardy jobs and the total amount of allocated resource[7], minimizing the total amount of allocated resource subject to a limited number of tardy jobs[8], and minimizing a linear combination of scheduling, due date assignment and resource consumption costs[9]. Little of the literature considers the corresponding parallel machine problems. Jansen and Mastrolilli[10] considered identical parallel machine scheduling problems with controllable processing times, in which the processing time of each job lies in an interval and the cost depends linearly on the position of the chosen processing time in that interval; they presented polynomial time approximation schemes for these problems. Li et al.[11] designed a fast simulated annealing algorithm for the identical parallel machine problem of minimizing the makespan with controllable processing times, which can solve instances with 1,000 jobs within a reasonable time. Shabtay and Kaspi[12] assumed that the processing times can be compressed according to a convex decreasing resource consumption function and considered identical parallel machine problems; they showed that the makespan problem with nonpreemptive jobs is NP-hard. Nowicki and Zdrzałka[13] proposed a bicriterion approach to preemptive scheduling of m identical parallel machines for jobs with controllable processing times. They gave an O(n^2) greedy algorithm that generates all breakpoints of a piecewise linear efficient frontier; their method first determines a resource allocation and then obtains the optimal schedule.

All the above papers consider offline scheduling problems. The purpose of this paper is to study preemptive scheduling with controllable processing times on parallel machines, in both the offline and the online settings. In Section 2, we give an optimal algorithm for the problem with a single release date. In Section 3, we study the offline problem with different release dates. The online algorithms are discussed in Section 4. Finally, we draw some concluding remarks in Section 5.

2 The Situation That All the Jobs Arrive at Time Zero

If all the jobs are released at a single time point, the three modes coincide. The problem can be described as $P_m \mid pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$ in the standard three-field notation $\alpha|\beta|\gamma$. In this problem, once a resource allocation $x = (x_1, x_2, \cdots, x_n)$ is given, the optimal schedule can be obtained by the modified McNaughton's rule and the makespan is

$$C_{\max}(s, x) = \max\Big( \max_{j=1}^{n} \{\bar{p}_j - x_j\},\ \sum_{j=1}^{n} (\bar{p}_j - x_j) / m \Big).$$

If the resource allocation scheme is not given, we can use the following algorithm A1 to solve this problem.

Algorithm A1 for $P_m \mid pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$

Step 1 Let x = 0. Sort all jobs in non-increasing order of basic processing time, i.e., $\bar{p}_1 \ge \bar{p}_2 \ge \cdots \ge \bar{p}_n$.

Step 2 If $\max_{j=1}^{n} \bar{p}_j \le \sum_{j=1}^{n} \bar{p}_j / m$, then schedule all jobs by McNaughton's wrap-around rule. Go to Step 4.

Step 3 Schedule the longest job on one machine and delete the job. Decrement m by 1. If no job remains, go to Step 4; otherwise go to Step 2.

Step 4 Use all of the resource $\hat{X}$ to shorten $C_{\max}$ backward.
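Step 4 amounts to choosing compressions $x_j$ that minimize $\max\big(\max_j(\bar{p}_j - x_j), \sum_j(\bar{p}_j - x_j)/m\big)$ subject to $\sum_j x_j \le \hat{X}$ and $0 \le x_j \le \bar{p}_j$. The following Python sketch (continuing the conventions of the earlier sketch; the function name and the water-filling scan are ours, not the paper's) computes the resulting makespan value: either the average-load bound $(\sum_j \bar{p}_j - \hat{X})/m$ is reachable, or the budget is spent levelling down the longest jobs.

```python
def optimal_makespan(p_bar, m, X):
    """Smallest value of max(max_j (p_j - x_j), sum_j (p_j - x_j) / m) over
    compressions 0 <= x_j <= p_j with sum_j x_j <= X (single release date);
    modified_mcnaughton on the compressed times then attains it."""
    total = sum(p_bar)
    c_avg = max(0.0, (total - X) / m)              # average-load lower bound
    if sum(max(0.0, p - c_avg) for p in p_bar) <= X:
        return c_avg                               # budget levels everything down to it
    # Otherwise find C with sum_j max(0, p_j - C) = X by scanning the sorted
    # basic times (water-filling on the longest jobs).
    ps = sorted(p_bar, reverse=True)
    prefix = 0.0
    for k, p in enumerate(ps, start=1):
        prefix += p
        nxt = ps[k] if k < len(ps) else 0.0
        c = (prefix - X) / k                       # level if exactly k jobs are cut
        if c >= nxt:
            return max(c, c_avg)
    return c_avg                                   # not reached when X < sum(p_bar)
```

For the instance with basic times 3, 3, 2, three machines and $\hat{X} = 1$, the function returns 2.5: half a unit is taken from each of the two longest jobs, which is exactly what the backward compression of Step 4 does.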

See Figure 1. For the situation in which all the jobs arrive at the same time, Steps 1-3 of Algorithm A1 construct a schedule for the corresponding problem with x = 0 by the modified McNaughton's rule, giving makespan $C_{\max}(s_1, 0)$. Step 4 then shortens the makespan backward by using the extra resources, giving makespan $C_{\max}(s, x)$.

Figure 1  Three cases of the situation with the same release time

Theorem 1

For $P_m \mid pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$, Algorithm A1 is optimal.

Proof

To prove that Algorithm A1 is optimal for the problem $P_m \mid pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$, we show the following two results: (i) let $x$ be the resource allocation scheme produced by Algorithm A1; then, under this resource allocation, the solution $(s, x)$ obtained by A1 is optimal; (ii) for any other resource allocation scheme $x^*$ different from $x$, there is no solution $(s^*, x^*)$ such that $C_{\max}(s^*, x^*) < C_{\max}(s, x)$.

(i) Let $s_1$ be the schedule obtained by Steps 1-3; then $s_1$ is optimal for the corresponding problem $P_m|pmtn|C_{\max}$ according to Hong and Leung[2]. Case 1: $s_1$ is a block schedule. Then clearly no resource allocation scheme can make the makespan smaller than $C_{\max}(s, x)$; see Figure 1(a). Case 2: neither $s_1$ nor $s$ is a block schedule; see Figure 1(b). Let machine $M_k$ be the first machine on which the maximal completion time equals $C_{\max}^{\min}$ in the solution $(s_1, 0)$. Since $s_1$ is not a block schedule, machines $M_1$ through $M_{k-1}$ each process exactly one of the $k-1$ longest jobs. Since $s$ is not a block schedule either, the total resources do not shorten $C_{\max}$ down to $C_{\max}^{\min}$, and only the jobs on machines $M_1$ through $M_{k-1}$ are compressed. Thus the schedule $(s, x)$ still follows the modified McNaughton's rule. Case 3: $s_1$ is not a block schedule but $s$ is; see Figure 1(c). Because Step 4 shortens $C_{\max}$ backward, the claim follows from Cases 2 and 1. So, under the resource allocation scheme $x$, the solution $(s, x)$ is optimal.

(ii) $(s, x)$ is optimal under the resource allocation $x$, so if there is another optimal solution $(s^*, x^*)$ with $C_{\max}(s^*, x^*) < C_{\max}(s, x)$, then certainly $x^* \neq x$. Because $(s^*, x^*)$ is optimal, it must hold that $\sum_{j=1}^{n} x_j^* = \hat{X}$, and thus $\sum_{j=1}^{n} x_j^* = \sum_{j=1}^{n} x_j$. Since $x^* \neq x$, we may assume that there are two jobs $J_p$ and $J_q$ with $x_p > x_p^*$, $x_q < x_q^*$ and $x_p^* + x_q^* = x_p + x_q$. Let $p'$ and $q'$ be the indices of the machines that process jobs $J_p$ and $J_q$ in the solution $(s, x)$, and let machine $M_k$ be the first machine on which the maximal completion time equals $C_{\max}^{\min}$ in the solution $(s, x)$. We show that when $x$ changes to $x^*$, no schedule $s^*$ gives $C_{\max}(s^*, x^*) < C_{\max}(s, x)$. Case 1: $k = 1$. Then $(s, x)$ is a block schedule, so no other resource allocation scheme $x^*$ can satisfy $C_{\max}(s^*, x^*) < C_{\max}(s, x)$. Case 2: $k \ge 2$, $p' \ge k$. This case is impossible because the corresponding $x_p$ equals zero and thus cannot be reduced. Case 3: $k \ge 2$, $q' = p' < k$. Then $C_{\max}(s^*, x^*) = C_{\max}(s, x)$. Case 4: $k \ge 2$, $p' < k < q'$. In this case $C_{\max}^{q'} = C_{\max}^{\min}$, so using extra resource to reduce $C_{\max}^{q'}$ is pointless. Case 5: $k \ge 2$, $p' < q' < k$. In this case $C_{\max}^{p'} > C_{\max}^{q'}$. Decreasing $x_p$ to $x_p^*$ increases $C_{\max}^{p'}$, so $C_{\max}(s^*, x^*) \ge C_{\max}(s, x)$. Case 6: $k \ge 2$, $q' < p' < k$. In this case $C_{\max}^{p'} \le C_{\max}^{q'}$. If $C_{\max}^{p'} = C_{\max}^{q'}$, then $C_{\max}(s^*, x^*) = C_{\max}(s, x)$; if $C_{\max}^{p'} < C_{\max}^{q'}$, then $x_p = 0$ and it cannot be reduced further. The situation in which more than two jobs receive different allocations in the two schemes can be reduced to the two-job case: because the jobs are preemptive, we can construct pairs like $p'$ and $q'$ above by splitting some jobs and then argue as in the two-job case.

Therefore, there is no other resource allocation scheme $x^*$ with a corresponding schedule $s^*$ such that $C_{\max}(s^*, x^*) < C_{\max}(s, x)$.

3 The Offline Version of Problem with Different Release Dates

When the jobs are released at different times, we can build the following algorithm A2, inspired by Algorithm A1, to solve the problem $P_m \mid r_j, pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$.

Algorithm A2 for Modes 1 and 2 of $P_m \mid r_j, pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$

Step 1 At each preemption point (a job is finished or a new batch of jobs arrives), do

  1. Let x = 0. Sort all the unfinished jobs or job pieces in non-increasing order of basic processing time. Let n be the number of unfinished jobs or job pieces.

  2. If $\max_{j=1}^{n} \bar{p}_j \le \sum_{j=1}^{n} \bar{p}_j / m$, then schedule all jobs by McNaughton's wrap-around rule.

  3. Otherwise, schedule the longest job on one machine and delete the job. Decrement m by 1. Go to 2).

Step 2 Use all of the resource $\hat{X}$ to shorten $C_{\max}$ backward.
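Under the same illustrative conventions, a value-level Python sketch of Algorithm A2 for Modes 1 and 2: the remaining work is rescheduled by the modified McNaughton rule at every release date, and the whole budget $\hat{X}$ is spent backward in the last stage. The helper `execute` and the input format (a list of (release date, list of basic processing times) pairs sorted by release date) are assumptions of the sketch, not notation from the paper.

```python
def execute(remaining, m, horizon):
    """Run the modified McNaughton schedule for `horizon` time units and
    return what is left of every job afterwards (hypothetical helper)."""
    left = {}
    for j, _, b, e in modified_mcnaughton(remaining, m):
        done = max(0.0, min(e, horizon) - b)
        left[j] = left.get(j, 0.0) + (e - b) - done
    return {j: q for j, q in left.items() if q > 1e-12}

def makespan_A2(batches, m, X):
    """Makespan reached by Algorithm A2 (Modes 1 and 2): reschedule with
    x = 0 at every release date, then compress backward at the last stage
    with the full budget X."""
    remaining, next_id, prev_t = {}, 0, 0.0
    for t, basic_times in batches:
        remaining = execute(remaining, m, t - prev_t)   # work done since prev_t
        for p in basic_times:                           # add the new batch
            remaining[next_id] = p
            next_id += 1
        prev_t = t
    return prev_t + optimal_makespan(list(remaining.values()), m, X)
```

For the two-machine instance used in the proof of Theorem 4 below, `makespan_A2([(0, [1, 2]), (1, [1, 4])], 2, 1)` returns 4, the offline optimum quoted there.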

Theorem 2

For Modes 1 and 2 of the problem $P_m \mid r_j, pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$, Algorithm A2 is optimal.

Proof

Let $x$ and $s$ be the resource allocation and the sequencing scheme obtained by Algorithm A2 for the problem $P_m \mid r_j, pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$. Suppose $(s^*, x^*)$ is another solution, different from $(s, x)$, with $C_{\max}(s^*, x^*) < C_{\max}(s, x)$.

If, in the solution $(s^*, x^*)$, no job released in one of the earlier stages is compressed before the last stage, i.e., all compression takes place in the last stage, then $(s^*, x^*)$ must coincide with $(s, x)$ according to Theorem 1. So we assume that there is a job $J_j$ that is not released in the last stage and that is compressed in one of the earlier stages in $(s^*, x^*)$, and we show that such a solution cannot be optimal. Let $p_j$ be the actual processing time of job $J_j$ in $(s^*, x^*)$, so $x_j^* = \bar{p}_j - p_j$.

Case 1: there is some idle time before the last stage in $(s^*, x^*)$; see Figure 2. If the solution $(s^*, x^*)$ looks like Figure 2(a), then by reallocating $x_j' = x_j^* - \min(x^*, y)$ to $J_j$ and assigning the amount $\min(x^*, y)$ to shorten the maximal completion time in the last stage, we obtain another solution $(s', x')$ that is better than $(s^*, x^*)$. If the solution $(s^*, x^*)$ looks like Figure 2(b), the argument is the same except that $x_j' = x_j^* - \min(x^*, z)$.

Figure 2  Case 1 of Theorem 2

Case 2: there is no idle time before the last stage and $(s^*, x^*)$ is not a block schedule; see Figure 3. Let machine $M_k$ ($k \ge 2$) be the first machine on which the maximal completion time equals $C_{\max}^{\min}$ in the solution $(s^*, x^*)$. Reallocate $x_j' = x_j^* - \min(x^*, y - \varepsilon)$ resources to $J_j$, where $\varepsilon$ is an arbitrarily small positive number. Then job $J_j$ can be broken into two parts with processing times $p_j' = \bar{p}_j - x_j'$ and $p_j'' = \min(x^*, y - \varepsilon)$. Thus the resource amount $\min(x^*, y - \varepsilon)$ can be reallocated to compress the jobs in the last stage on the $k-1$ machines whose maximal completion times equal $C_{\max}$. Therefore, a better solution $(s', x')$ can be rebuilt from $(s^*, x^*)$ with $C_{\max}(s^*, x^*) - C_{\max}(s', x') \ge \min\big(\varepsilon, \min(x^*, y - \varepsilon)/(k-1)\big)$.

Figure 3  Case 2 of Theorem 2

Case 3: there is no idle time before the last stage and $(s^*, x^*)$ is a block schedule; see Figure 4. In this case, release the resource $x_j^*$ from job $J_j$, and assume that job $J_j$ is processed on machine $M_i$; then $C_{\max}^i = C_{\max} + x_j^*$. We can then reallocate the resource $x_j^*$ backward to shorten $C_{\max}^i$ down to $C_{\max}$, and the solution remains feasible. So every solution in this case can be transformed into one in which the total resource is assigned to the jobs backward and $C_{\max}$ is not increased. It is easy to show that under this backward resource allocation the solution $(s, x)$ satisfies the modified McNaughton's rule, and thus $C_{\max}(s, x) \le C_{\max}(s^*, x^*)$.

Therefore, Algorithm A2 is optimal for the problem $P_m \mid r_j, pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$ when the total resource is released at time zero or the remaining resources can be used in later stages.

Figure 4  Case 3 of Theorem 2

Although Algorithm A2 is optimal for Modes 1 and 2 of the problem $P_m \mid r_j, pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$, it does not obtain an optimal solution for Mode 3; see Example 1.

Example 1

There are 3 machines. At time 0, the available resource amount is 1, i.e., $\hat{X}_0 = 1$, and three jobs $J_1$, $J_2$ and $J_3$ arrive with basic processing times 3, 3 and 2. At time 2, $\hat{X}_2 = 2$, and four jobs $J_4$, $J_5$, $J_6$ and $J_7$ arrive with basic processing times 5, 3, 2 and 2.

Then the solution obtained by A2 is as in Figure 5(a), with $C_{\max} = 6$ ($\hat{X}_2 = 2$ is used to shorten $J_4$, $J_6$ and $J_2$). The optimal solution is as in Figure 5(b), with $C_{\max} = 5\frac{2}{3}$ ($\hat{X}_0 = 1$ is used to shorten $J_1$; $\hat{X}_2 = 2$ is used to shorten $J_4$, $J_6$ and $J_7$); a numeric check follows Figure 5. The resources released in earlier stages cannot be used in later stages, so if we wait and use the resources backward to shorten pieces of jobs released in earlier stages, the quality of the solution must deteriorate. Therefore, for Mode 3, we must allocate the available resources to the jobs in the current stage rather than in the last stage.

Figure 5  Example 1
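The two values in Example 1 can be checked with the sketches from Sections 2 and 3 (our illustrative code; the stage-2 workloads are read off from the schedules described above):

```python
# A2-style run: the stage-1 budget is never used, so at t = 2 one unit each
# of J1 and J2 remains besides the new batch 5, 3, 2, 2.
print(2 + optimal_makespan([5, 3, 2, 2, 1, 1], 3, 2))   # 6.0

# A3-style run: X_0 = 1 shortens J1 in stage 1, so only one unit of J2 carries over.
print(2 + optimal_makespan([5, 3, 2, 2, 1], 3, 2))      # 5.666... = 5 2/3
```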

Algorithm A3

At each stage at time t (a new batch of jobs arrives), do

  1. Let x = 0. Sort all the unfinished jobs or job pieces in non-increasing order of basic processing time. Let n be the number of unfinished jobs or job pieces.

  2. If $\max_{j=1}^{n} \bar{p}_j \le \sum_{j=1}^{n} \bar{p}_j / m$, then schedule all jobs by McNaughton's wrap-around rule. Go to 4).

  3. Schedule the longest job on one machine and delete the job. Decrement m by 1. If no job remains, go to 4); otherwise go to 2).

  4. Use the available resource $\hat{X}_t$ to compress the processing times of the jobs in the current stage backward, until either all of $\hat{X}_t$ is allocated or the maximal completion time of the current stage equals the start time of the next stage.
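A value-level Python sketch of step 4), under the same illustrative conventions: the stage budget is spent only until the stage makespan reaches the larger of the compressed optimum and the next release date, since in Mode 3 pushing the current stage below the next release date cannot help. The cap `t_next` and the function name are ours, and the bookkeeping of which pieces carry over to the next stage is omitted.

```python
def stage_target_A3(p, m, X_t, t, t_next=None):
    """Completion time of the stage that starts at release date t under
    step 4) of Algorithm A3 (hypothetical helper).  `p` holds the remaining
    processing times at t, X_t is the stage budget and t_next the next
    release date (None for the last stage)."""
    uncompressed = max(max(p), sum(p) / m)      # modified McNaughton value
    compressed = optimal_makespan(p, m, X_t)    # value if X_t were fully spent
    if t_next is None:                          # last stage: spend everything
        return t + compressed
    # Stop compressing once the stage makespan reaches the next release date.
    return t + min(uncompressed, max(compressed, t_next - t))
```

On stage 1 of Example 1 (remaining times [3, 3, 2], three machines, $\hat{X}_0 = 1$, next release at 2), it returns 2.5, so the whole unit of the stage budget is spent and one unit of work carries over to time 2, in line with the optimal value computed above.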

Theorem 3

For Mode 3 of the problem $P_m \mid r_j, pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$, Algorithm A3 is optimal.

This theorem is simple to prove. In Mode 3, the extra resources can be used only in the current stage and not in later stages, so we must use the available resources to shorten the maximal completion time of the current stage. However, once the maximal completion time of the current stage has been shortened to the start time of the next stage, further compression is useless, and we can stop allocating resources in the current stage. Within each stage, the method of Algorithm A3 is optimal by Theorem 1.

4 The Online Version of the Problem with Different Release Dates

In this section, we first show that there is no optimal online scheduling algorithm for Modes 1 and 2 of the parallel machine scheduling problem with different release dates.

Theorem 4

There is no optimal algorithm for Modes 1 and 2 of the online version of the problem $P_m \mid r_j, pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$.

Proof

We prove that the theorem is true for m = 2; the proof generalizes easily to m > 2. Suppose there is an optimal online scheduler for 2 machines and consider the following scenario. At time 0, $\hat{X}_0 = 1$ and $J_0 = \{J_1, J_2\}$, with basic processing times 1 and 2, respectively. In the online version we have no information about future job release dates or future resource availability, so it is unclear whether to use the available resource in the current stage. Case 1: the resource $\hat{X}_0 = 1$ is used in the first stage. Then the maximal completion time of the first stage can be reduced to 1. Now consider the scenario in which $J_1 = \{J_3, J_4\}$ with basic processing times 1 and 4 is released at time 1, and $\hat{X}_1 = 0$. Clearly, the makespan of the schedule constructed by the online scheduler is at least 5, whereas the optimal offline makespan obtained by Algorithm A2 is 4. Case 2: the resource $\hat{X}_0 = 1$ is not used in the first stage. Then the maximal completion time of the first stage is 2, and if no further jobs are released, the extra resource is never used to compress the jobs; clearly the schedule is not optimal. By combining the arguments of Cases 1 and 2, using the extra resource only partly in the first stage cannot be optimal either.
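The two cases can be checked with the earlier sketches (our illustrative code; the lists are the workloads remaining at time 1 in each case):

```python
# Case 1: the budget was spent in stage 1, so J1 and J2 finish by t = 1 and
# nothing is left to compress the late batch {J3, J4}.
print(1 + optimal_makespan([1, 4], 2, 0))     # 5.0

# Offline optimum (Algorithm A2): keep the budget; one unit of J2 remains at t = 1.
print(1 + optimal_makespan([1, 1, 4], 2, 1))  # 4.0
```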

As a result of Theorem 4, we turn to constructing an online scheduling algorithm for Mode 3 of the online version of the problem $P_m \mid r_j, pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$.

Algorithm A4

Whenever a new batch of jobs arrives at time t, do

  1. Merge the unfinished jobs, with their remaining (possibly compressed) parts, into the new batch of jobs.

  2. Let x = 0. Sort all jobs of the new batch in non-increasing order of basic processing time. Let n be the number of jobs in the new batch.

  3. If $\max_{j=1}^{n} \bar{p}_j \le \sum_{j=1}^{n} \bar{p}_j / m$, then schedule all jobs by McNaughton's wrap-around rule. Go to 5).

  4. Schedule the longest job on one machine and delete the job. Decrement m by 1. If no job remains, go to 5); otherwise go to 3).

  5. Use all available resources $\hat{X}_t$ to shorten the maximal completion time of the current stage backward.
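In the same illustrative conventions, the only difference from step 4) of the offline Algorithm A3 is that the next release date is unknown, so the whole stage budget is spent immediately:

```python
def online_stage_A4(p, m, X_t, t):
    """Completion time of the current stage under Algorithm A4 (Mode 3,
    online, hypothetical helper).  `p` already holds the merged unfinished
    pieces and the new batch; the entire stage budget X_t is spent at once."""
    return t + optimal_makespan(p, m, X_t)
```

Compared with `stage_target_A3`, there is no cap at the next release date; this is exactly the resource that, as noted after Theorem 5, may be wasted.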

Theorem 5

Algorithm A4 is optimal for Mode 3 of the online version of the problem $P_m \mid r_j, pmtn, p_j = \bar{p}_j - x_j, \sum x_j \le \hat{X} \mid C_{\max}$.

This follows directly from Theorem 3, although some of the extra resource may be wasted because the next release date is unknown online.

5 Conclusions

In this paper, we study parallel machine scheduling problems with controllable processing times and preemption. We assume that the extra resource can be used under three modes to compress the processing times of the jobs. In Mode 1, all the resources are given at time zero and can be used in all stages. In Modes 2 and 3, the resources are divided into parts with different release dates; the difference is that the resources remaining from earlier stages can be used in later stages in Mode 2, but not in Mode 3. When the jobs are released at the same time, the three modes are identical, and we prove that the problem can be solved in polynomial time by Algorithm A1. When the jobs have different release dates, the offline versions of the problems in Modes 1, 2 and 3 and the online version in Mode 3 can be solved in polynomial time, whereas the online versions in Modes 1 and 2 admit no optimal algorithm. We propose the algorithms and prove the related results.

Further research on this topic may assume that, when the extra resources are used to compress the processing times, the jobs have different compression weights.


Supported by the National Natural Science Foundation of China (71471052, 71101040, 71202048) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (20120111120013)


References

[1] McNaughton R. Scheduling with deadlines and loss functions. Management Science, 1959, 6: 1-12. DOI: 10.1287/mnsc.6.1.1.

[2] Hong K S, Leung J Y T. On-line scheduling of real-time tasks. IEEE Transactions on Computers, 1992, 41(10): 1326-1331. DOI: 10.1109/REAL.1988.51119.

[3] Vickson R G. Choosing the job sequence and processing times to minimize total processing plus flow cost on a single machine. Operations Research, 1980, 28: 1155-1167. DOI: 10.1287/opre.28.5.1155.

[4] Wan G, Yen B P C, Li C L. Single machine scheduling to minimize total compression plus weighted flow cost is NP-hard. Information Processing Letters, 2001, 79: 273-280. DOI: 10.1016/S0020-0190(01)00143-0.

[5] Janiak A, Kovalyov M Y, Kubiak W, et al. Positive half-product and scheduling with controllable processing times. European Journal of Operational Research, 2005, 165: 413-422. DOI: 10.1016/j.ejor.2004.04.012.

[6] Wang J B, Xia Z Q. Single machine scheduling problems with controllable processing times and total absolute differences penalties. European Journal of Operational Research, 2007, 177: 638-645. DOI: 10.1016/j.ejor.2005.10.054.

[7] Daniels R L, Sarin R K. Single machine scheduling with controllable processing times and number of jobs tardy. Operations Research, 1989, 37(6): 981-984. DOI: 10.1287/opre.37.6.981.

[8] Cheng T C E, Chen Z L, Li C L. Single-machine scheduling with trade-off between number of tardy jobs and resource allocation. Operations Research Letters, 1996, 19: 237-242. DOI: 10.1016/S0167-6377(96)00035-1.

[9] Ng C T D, Cheng T C E, Kovalyov M Y, et al. Single machine scheduling with a variable common due date and resource-dependent processing times. Computers & Operations Research, 2003, 30: 1173-1185. DOI: 10.1016/S0305-0548(02)00066-7.

[10] Jansen K, Mastrolilli M. Approximation schemes for parallel machine scheduling problems with controllable processing times. Computers & Operations Research, 2004, 31: 1565-1581. DOI: 10.1016/S0305-0548(03)00101-1.

[11] Li K, Shi Y, Yang S, et al. Parallel machine scheduling problem to minimize makespan with resource dependent processing times. Applied Soft Computing, 2011, 11(8): 5551-5557. DOI: 10.1016/j.asoc.2011.05.005.

[12] Shabtay D, Kaspi M. Parallel machine scheduling with a convex resource consumption function. European Journal of Operational Research, 2006, 173(1): 92-107. DOI: 10.1016/j.ejor.2004.12.008.

[13] Nowicki E, Zdrzałka S. A bicriterion approach to preemptive scheduling of parallel machines with controllable job processing times. Discrete Applied Mathematics, 1995, 63: 237-256. DOI: 10.1016/0166-218X(94)00071-5.

Received: 2014-7-17
Accepted: 2014-10-15
Published Online: 2015-2-25

© 2015 Walter de Gruyter GmbH, Berlin/Boston
