Article Open Access

Non-dominated Sorting Genetic Algorithms for a Multi-objective Resource Constraint Project Scheduling Problem

  • Xixi Wang, Farouk Yalaoui and Frédéric Dugardin
Published/Copyright: October 23, 2017

Abstract

The resource constraint project scheduling problem (RCPSP) has attracted growing attention over the last decades. Precedence constraints are considered as well as resources with limited capacities. During the project, the same resource can be required by several in-process jobs, and it is compulsory to ensure that the consumptions do not exceed the limited capacities. In this paper, several criteria are involved, namely the makespan, the total job tardiness, and the workload balancing level. Our problem is firstly solved by the non-dominated sorting genetic algorithm-II (NSGAII) as well as the recently proposed NSGAIII. Giving emphasis to the selection procedure, we apply both the traditional Pareto dominance and the less documented Lorenz dominance in the niching mechanism of NSGAIII. Hence, we adapt L-NSGAII to our problem and propose L-NSGAIII by integrating the notion of Lorenz dominance. Our methods are tested on 1350 randomly generated instances, considering problems with 30–150 jobs and different configurations of resources and due dates. Hypervolume and C-metric are used to evaluate the results. The Lorenz dominance leads the population more toward the ideal point and, as experiments show, improves the original NSGA approach.

MSC 2010: 90B35; 90B50

1 Introduction

1.1 Problem Statement

The resource constraint project scheduling problem (RCPSP) was first proposed in 1969 by Pritsker et al. [30]. It was later proved to be NP-hard in the strong sense in 1983 by Blazewicz et al. [6]. In the RCPSP, we consider a scheduling problem with a set of non-preemptive jobs, J={0, 1, …, n, n+1}, submitted to a set of precedence relationships A. The jobs 0 and n+1 are dummy activities that represent the start and the end of the project, respectively. The precedence graph, denoted by G=(J, A), can thus be defined with J and A. The jobs in process need given amounts of resources with limited capacities. A resource is called "renewable" if its capacity is brought back to its maximum at the beginning of each period. Otherwise, a resource that is invested only once or progressively during the whole project is called "non-renewable." The resource consumption cannot exceed the available resource quantities during each period of time. The basic RCPSP aims to minimize the makespan with respect to the precedence relationships and resource constraints.

Our earlier work is devoted to solving a bi-objective RCPSP [33]. In this paper, our problem is extended to an RCPSP with three criteria, namely the makespan, the total tardiness of jobs, and the workload balance. Moreover, in this work, we pay attention to resource allocation, which is a major problem that industry has to cope with. The non-dominated sorting genetic algorithms (NSGAII and NSGAIII) are firstly applied to solve our problem. Then, instead of considering the Pareto dominance in the selection procedure, we integrate the Lorenz dominance and solve our problem with L-NSGAII [15], and we propose L-NSGAIII.

This paper is organized as follows: in Subsection 1.2, we present a specific literature review. Our three-criterion problem is presented in Section 2, in which we give a mathematical formulation. The NSGAII and NSGAIII are then applied to solve the multi-objective problem, as explained in Section 3. In Section 4, the principle of Lorenz dominance is explained as well as its application to the previously presented methods. In Section 5, we report the experimental results. Conclusions and perspectives are discussed in Section 6.

1.2 Literature Review

The RCPSP was first formulated with binary variables x_{i,t}, equal to 1 if job i starts at time t and 0 otherwise, in the work of Pritsker et al. [30]. The completion time of a job i ∈ J can thus be written as \sum_{0 \le t \le H} t \cdot x_{i,t} + p_i, where H is the scheduling horizon. Later, Christofides et al. [8] proposed a similar model by reformulating the precedence constraints. The work of Klein [22] defined two binary variables in order to describe the job status: s_{i,t} and f_{i,t}, which take the value 1 if at time 0 ≤ t ≤ H the job i ∈ J has already started or finished, respectively. In Ref. [25], Mingozzi et al. proposed a formulation based on the idea of building sets of jobs that can be processed at the same time without violating the resource and precedence constraints. This formulation is proved to provide strong lower bounds on the makespan. Compatible job sets were also considered by Alvarez-Valdés and Tamarit [3], who proposed a mixed-integer linear programming formulation. Other formulations have recently been presented with innovative visions. Koné et al. [23] proposed an event-based formulation where an event can be the start or the end of a job. The model of Bianco and Caramia [5] was based on the idea of job completion percentage: a job is only completed when this ratio reaches 100%. Besides the mathematical models, branch-and-bound methods were also applied by Brucker et al. [7], Christofides et al. [8], and Demeulemeester and Herroelen [14] to solve the RCPSP optimally. Some of these methods are known to be especially efficient, such as the work of Klein [22] and Brucker et al. [7]. However, due to the NP-hardness of the problem, exact methods can only solve instances with a very limited number of jobs.

Meta-heuristics have also been implemented by researchers: the genetic algorithm was applied by Debels and Vanhoucke [13] and Peteghem and Vanhoucke [29]. In Ref. [4], the authors used Tabu search. Sirdey et al. [31] tackled the problem with the simulated annealing algorithm. In the work by Jia and Seo [19], a permutation-based artificial bee colony algorithm was chosen to find the solutions. A comparative study, including the methods mentioned above and particle swarm optimization, was carried out by Das and Acharyya [10].

However, unlike the RCPSP with a single criterion, the multi-objective RCPSP is not yet well documented. Only a few works have been proposed to solve the multi-objective RCPSP with exact methods. A branch-and-bound-based procedure was proposed by Gutjahr [18] to solve a bi-criterion stochastic multi-mode RCPSP (MMRCPSP) under risk aversion, where the project makespan and the cost are to be optimized. We have considered the project makespan and the total job lateness in a bi-objective RCPSP, with the two-phase method (TPM) to find the optimal Pareto front [33].

More works in the literature have proposed approximate solutions. In Ref. [21], Kim et al. proposed a hybrid genetic algorithm with a fuzzy logic controller in order to minimize the makespan and the total tardiness penalty. Simulated annealing and Tabu search were discussed in the works of Abbasi et al. [1] and Al-Fawzan and Haouari [2], respectively, where makespan and robustness were considered. The work of Vanucci et al. [32] focused on the bi-objective problem with makespan and total cost minimization in the case of the MMRCPSP with NSGAII. The NSGAII can also be found in the work of Damak et al. [9], where an MMRCPSP was studied in order to minimize the makespan and the non-renewable resource cost. Khalili et al. [20] applied two genetic algorithms, namely a multi-population genetic algorithm and a two-phase sub-population genetic algorithm, to minimize the makespan and maximize the net present value. In the work by Gomes et al. [17], the minimization of makespan and total weighted job starting time was tackled and the efficiency of five meta-heuristic algorithms was compared. These methods were based on the multi-objective greedy randomized adaptive search procedure, the multi-objective variable neighborhood search, and the Pareto iterative local search.

Our work aims at solving the RCPSP in the multi-objective context. Our previous work [33] allowed finding the optimal Pareto front with the TPM for a bi-objective problem. In this paper, our work is extended by adding a third criterion, namely the workload balance level. The NSGAII is adapted to solve the three-criterion problem and compared to the recently proposed NSGAIII by Deb and Jain [11]. In addition to the Pareto niching mechanism, we also propose the L-NSGAII and L-NSGAIII for our problem by integrating the Lorenz dominance. For the sake of brevity, the makespan, total job tardiness, and workload balance are denoted by O1, O2, and O3, respectively.

2 Problem Description and a Mathematical Formulation

Earlier, we solved a bi-objective problem with the two-phase exact method [33]. The first objective is to minimize the makespan as it represents the project duration, in other words, the "quantity" side of a project. The "quality" of the project in our problem is measured by the tardiness, as high tardiness fails to meet the requirements of clients and can lead to poor satisfaction. As presented in Section 1.2, when more than one objective is involved, most studies in the RCPSP literature focus on approximate methods for large-sized problems. In that literature, the criterion "tardiness" often refers to the total tardiness of projects in the context of multi-project problems. In our study, we consider the total tardiness of "jobs": a project is constituted by a series of activities, and the continuous respect of intermediate due dates is as important as that of the whole project. Furthermore, the specificity of the RCPSP lies in the notion of resource requirements and capacities. Other than guaranteeing the project accomplishment, it is also interesting to focus on the resource utilization. To this purpose, we consider the workload balance. In addition to respecting the resource availabilities, we try to balance the resource consumptions.

In the literature, the problem of “resource leveling” has been investigated in several works. Neumann and Zimmermann [27] synthesized three formulations of resource leveling, involving the global maximal resource utilization and the periodical variations of resource utilization. In this paper, we wish to evaluate resource utilization in a global way, which is related to the first formulation of resource leveling. Ouazene et al. [28] recently worked on the resource balancing problem of parallel machines. The authors proposed to measure the workload balance by using the difference between the maximum and minimum machine utilizations instead of minimizing the maximal level. The authors also justified that this new formulation of minimizing the gap is more effective than minimizing the maximum. Inspired by the work of Ouazene et al. [28], we minimize the gap between the highest and lowest resource consumptions in our work in order to optimize the resource workload balance. A schedule with better smoothness of resource utilization stands for more balanced resource consumptions.

In this paper, the requirements of different resources are supposed to follow similar scales, so that we use a simple linear combination of allocation gaps of different resources [see Eq. (3)]. It is worth noting that when the values of resource requirements vary strongly, it may be necessary to proceed to a normalization of the resource workload gaps in order to carry out the optimization in a fair way.

In this section, we present an efficient mixed-integer programming model for the multi-objective problem with binary decision variables: s_{i,t} (f_{i,t}) takes the value 1 if the job i ∈ J has already started (finished, respectively) at time t, and 0 otherwise. The completion time of job i ∈ J can thus be written as C_i = H - \sum_{t \in T} f_{i,t} + 1, where T is the set of time points on the scheduling horizon H. For the sake of clarity, the notations used in the mathematical model are summarized in Table 1. The problem can be formulated as follows:

Table 1:

Notations for the Mathematical Formulation.

J                      Set of jobs, numbered {0, 1, …, n+1}
T                      Set of time points, T = {0, 1, …, H}
A                      Set of precedence relationships
G                      Precedence graph, drawn with J and A
R                      Set of renewable resources
p_i                    Processing time of job i ∈ J
d_i                    Due date of job i ∈ J
q_r                    Periodical capacity of resource r ∈ R
q_{i,r}                Requirement of resource r ∈ R for job i ∈ J
Cmax                   Project makespan
H                      Scheduling horizon, an upper bound of Cmax
ES_i / LS_i            Earliest/latest starting date of job i ∈ J
EF_i / LF_i            Earliest/latest finishing date of job i ∈ J
T_i                    Tardiness of job i ∈ J
e_{r,max} / e_{r,min}  Maximal/minimal consumption of resource r ∈ R
M                      A very big number

Objective functions:

(1) \min O_1 = H - \sum_{t \in T} f_{(n+1),t} + 1.
(2) \min O_2 = \sum_{i \in J} \max(H - \sum_{t \in T} f_{i,t} + 1 - d_i, 0).
(3) \min O_3 = \sum_{r \in R} (e_{r,\max} - e_{r,\min}).

Subject to:

(4) s_{j,t} \le f_{i,t}, \forall (i,j) \in A, t \in [ES_i, H].
(5) \sum_{t=ES_i}^{LF_i} (s_{i,t} - f_{i,t}) = p_i, \forall i \in J.
(6) \sum_{i \in J} q_{i,r} (s_{i,t} - f_{i,t}) \le q_r, \forall r \in R, t \in T.
(7) s_{i,t} \le s_{i,t+1}, \forall i \in J, t \in T.
(8) f_{i,t} \le f_{i,t+1}, \forall i \in J, t \in T.
(9) s_{i,t} = 0, \forall i \in J, t \in [0, ES_i - 1].
(10) f_{i,t} = 0, \forall i \in J, t \in [0, EF_i - 1].
(11) s_{i,t} = 1, \forall i \in J, t \in [LS_i, H].
(12) f_{i,t} = 1, \forall i \in J, t \in [LF_i, H].
(13) e_{r,\min} \le \sum_{i=1}^{n} q_{i,r} (s_{i,t} - f_{i,t}) + M f_{(n+1),t}, \forall t \in T, r \in R.
(14) e_{r,\max} \ge \sum_{i=1}^{n} q_{i,r} (s_{i,t} - f_{i,t}), \forall t \in T, r \in R.
(15) s_{i,t}, f_{i,t} \in \{0, 1\}, \forall i \in J, t \in T.

The makespan, the total job tardiness, and the workload balance are formulated in objectives (1), (2), and (3), respectively. Constraint (4) describes the precedence relationships. The job processing times are integrated in constraint (5). Constraint (6) states that the resource consumptions cannot exceed the capacities. Constraints (7) and (8) make sure that the variables s_{i,t} and f_{i,t} are non-decreasing over time so as to meet the non-preemption conditions. Constraints (9) to (12) fix the values of s_{i,t} and f_{i,t} outside the executable window of each job. The minimal and maximal resource consumptions are calculated in constraints (13) and (14), respectively, where M is a big number. The minimal resource utilization needs more attention: once the project is finished, f_{(n+1),t} = 1 and the term M f_{(n+1),t} keeps constraint (13) satisfied, so that the empty periods after the project end do not pull e_{r,min} down to zero. Constraint (15) defines the binary variables.
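As a sanity check, the encoding behind objectives (1)–(3) can be evaluated directly in code. The sketch below is only an illustration: it builds the step variables s_it and f_it for a small invented instance (two real jobs plus the two dummies; all numbers are assumptions, not taken from the paper's benchmarks) and recomputes the three objectives, including the exclusion of post-project periods that constraint (13) achieves with the big-M term.

```python
# Sanity check of objectives (1)-(3) via the step variables s_it and f_it.
# Invented toy instance: real jobs 1 and 2, dummy start 0, dummy end 3.
H = 6                                      # scheduling horizon
p = {0: 0, 1: 2, 2: 3, 3: 0}               # processing times
d = {1: 2, 2: 4}                           # due dates of the real jobs
q = {1: 2, 2: 1}                           # requirement of the single resource
start = {0: 0, 1: 0, 2: 2, 3: 5}           # a feasible schedule

# s[i][t] = 1 iff job i has already started at t; f[i][t] = 1 iff finished.
s = {i: [1 if t >= start[i] else 0 for t in range(H + 1)] for i in start}
f = {i: [1 if t >= start[i] + p[i] else 0 for t in range(H + 1)] for i in start}

def completion(i):
    return H - sum(f[i]) + 1               # C_i = H - sum_t f_it + 1

O1 = completion(3)                                        # makespan, Eq. (1)
O2 = sum(max(completion(i) - d[i], 0) for i in d)         # tardiness, Eq. (2)

# Periodical load of the resource; as in constraint (13), periods after the
# project end (f_3t = 1) are excluded so they do not drag e_min down to 0.
load = [sum(q[i] * (s[i][t] - f[i][t]) for i in q) for t in range(H + 1)]
active = [load[t] for t in range(H + 1) if f[3][t] == 0]
O3 = max(active) - min(active)                            # balance, Eq. (3)
```

On this toy schedule, job 2 finishes at t=5, one period after its due date, and the active loads before the project end are (2, 2, 1, 1, 1).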

3 NSGAs

In a previous study [33], we developed an exact method based on the TPM to find the exact Pareto front. However, beyond two objectives, the complexity of the exact procedure grows considerably and requires much more computation time. This is why, in this paper, we propose to tackle the problem with an approximate approach.

In this study, we have chosen the NSGA family to find approximated Pareto fronts. The NSGAII [12] and NSGAIII [11] follow the general scheme of a genetic algorithm and adapt the Pareto dominance for the selection. The program starts by randomly generating N individuals, known as the initial population. Crossover and mutation operators are used to produce the offspring. Selection is then performed in order to choose N members out of the whole population and create the next generation. The offspring and selection procedures are repeated until the stopping criteria are met.

3.1 Chromosome and Initial Population

The initial population is constituted by N individuals, each of which is identified by its chromosome. Two chromosome parts are considered in our problem: a priority sequence of jobs, denoted by X1, and a series of matching time lags, denoted by X2. For the example of Table 2, a whole chromosome, denoted by χ, is proposed in Figure 1.

Table 2:

An Example with Nine Jobs.

Job (i)     0   1   2   3   4   5    6   7    8
Pred_i      –   0   0   0   1   4  2,3   5  6,7
p_i         –   1   2   1   1   1    2   2    –
q_{i,1}     –   1   4   1   2   4    2   2    –
Figure 1: A Complete Chromosome with Nine Jobs.

For a given problem, X1 is only valid if all the precedence relationships of jobs are respected. For instance, if a job iJ is a direct or indirect predecessor of a job jJ, then j cannot be ahead of i in the priority sequence.

For this purpose, we identify two sets of jobs: the available jobs (AJ) and the non-available jobs (NJ). At first, only job 0 is available, so that AJ={0} and NJ=J\{0}. We choose the only job in AJ and put it in the first place of X1. Next, considering that job 0 is finished, we update the two job sets: job 0 is first moved out of AJ; in the meantime, jobs whose predecessors are all finished are moved from NJ to AJ. In the example, jobs 1, 2, and 3 take job 0 as their only predecessor. Thus, after choosing job 0, they become available and we have AJ={1, 2, 3}. One job in AJ is then randomly chosen, for example, job 1. Next, we move job 1 out of AJ and move job 4 from NJ to AJ, as its only predecessor (job 1) is already chosen. We can thus update AJ={2, 3, 4}. Repeating the procedure explained above, we obtain a valid sequence X1, leading to a feasible solution.
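The AJ/NJ procedure above can be sketched in a few lines of Python. This is only a sketch: it uses the predecessor lists of the Table 2 example and releases a job as soon as all of its predecessors have been sequenced.

```python
import random

# Building a precedence-feasible priority sequence X1 with the AJ/NJ sets.
# Predecessor lists follow the nine-job example of Table 2.
pred = {0: [], 1: [0], 2: [0], 3: [0], 4: [1], 5: [4],
        6: [2, 3], 7: [5], 8: [6, 7]}

def random_x1(pred, rng=random):
    remaining = {i: set(p) for i, p in pred.items()}   # unsequenced predecessors
    aj = [i for i, p in remaining.items() if not p]    # AJ, initially {0}
    x1 = []
    while aj:
        job = rng.choice(aj)                           # pick any available job
        aj.remove(job)
        x1.append(job)
        for j, p in remaining.items():                 # move newly released
            p.discard(job)                             # jobs from NJ to AJ
            if not p and j not in x1 and j not in aj:
                aj.append(j)
    return x1

x1 = random_x1(pred)
```

Every sequence produced this way places each job after all of its predecessors, so the decoding step always receives a valid X1.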

Let us now focus on X2, which is created specially for the workload balance criterion. This objective is measured by the gap between the highest and the lowest resource utilization. It can thus be interesting for some jobs to wait before starting so as to lower a potential peak of resource consumption. To cope with this, we introduce the second part of the chromosome, which contains a set of "non-negative time lags" between the actual and the earliest possible starting dates of jobs. For a given individual, if there exists at least one positive time lag, X2 is said to be effective.

For a given chromosome, we consider each job one by one according to the priority sequence and assign a start time for each of them. A job iJ can only start if three conditions are satisfied: firstly, all predecessors of i should already be finished; secondly, the time lag must be respected; and finally, there should be enough resources during periods when i is in process. With all conditions satisfied, the job starts at the earliest possible moment.

3.2 From Chromosome to Solution

Let us consider one single resource with periodic capacity q1=5. Job durations and resource consumptions are defined in Table 2. In Figure 2, we provide the schedule obtained by X1 of χ, where all jobs start as soon as possible and there is no time lag. This leads to a makespan of 6 and a workload balance of 3. Now let us add X2 to complete the chromosome and assume that job 2 has a time lag equal to 1. This means that job 2 cannot start earlier than t=1, and we find the schedule shown in Figure 3, which proposes a higher makespan of 7 but a better workload balance of 2.
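The decoding step can be sketched as a serial schedule-generation scheme: jobs are taken in X1 order and each starts at the earliest period satisfying its predecessors, its time lag, and the capacity. Since Figure 1 is not reproduced here, the priority sequence below is an assumption, chosen to be consistent with the text (job 4 ahead of job 3 in X1); with zero time lags it reproduces the makespan of 6 and workload balance of 3 of Figure 2.

```python
# Serial decoding of (X1, X2) into a schedule; data from Table 2, a single
# resource of capacity 5. The priority sequence used below is an assumed X1.
pred = {0: [], 1: [0], 2: [0], 3: [0], 4: [1], 5: [4],
        6: [2, 3], 7: [5], 8: [6, 7]}
p = {0: 0, 1: 1, 2: 2, 3: 1, 4: 1, 5: 1, 6: 2, 7: 2, 8: 0}
q = {0: 0, 1: 1, 2: 4, 3: 1, 4: 2, 5: 4, 6: 2, 7: 2, 8: 0}
CAP, HORIZON = 5, 20

def decode(x1, lags):
    finish, load = {}, [0] * HORIZON
    for job in x1:
        # earliest start: predecessors finished, plus the job's time lag
        t = max((finish[j] for j in pred[job]), default=0) + lags.get(job, 0)
        while any(load[u] + q[job] > CAP for u in range(t, t + p[job])):
            t += 1                          # shift right until capacity fits
        for u in range(t, t + p[job]):
            load[u] += q[job]
        finish[job] = t + p[job]
    makespan = finish[x1[-1]]               # dummy end job closes the project
    active = load[:makespan]
    return makespan, max(active) - min(active)

mk, bal = decode([0, 1, 2, 4, 3, 5, 6, 7, 8], {})
```

With this sequence, job 4 is tried at t=1 but bumps into the capacity (job 2 consumes 4 units there) and only starts at t=2, while job 3 slips in at t=1, illustrating why the priority order need not be the schedule order.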

Figure 2: Example of Solution.

Figure 3: Schedule with TL2=1.

Besides, it is worth noting that the priority sequence does not have to be the final schedule sequence due to the resource constraints. In the example of Figure 2, job 4 has the priority over job 3 in the chromosome but somehow starts later. As a matter of fact, given two jobs (i, j)∈J2, if there is not any direct or indirect precedence relationship between them, even when i has the priority over j in X1, it is possible that j starts before i. This case arises typically when the resource requirements of i and j are extremely high and low, respectively. Job i can only start when the resource availability is high; on the contrary, it is a lot easier for j to find a position in the partial schedule. As a result, it is possible to have two different chromosomes that lead to the same schedule, called “equivalent chromosomes.” In order to guarantee the diversity of solutions, equivalent chromosomes are not allowed in this step.

3.3 Crossover

Crossover allows creating diversified individuals. Both one-point and two-point crossovers are used in our program so as to explore more possibilities of chromosomes. The crossover points, denoted by CP1 and CP2, are randomly chosen among the positions in [1, n−1], so that the chromosome can be divided into several parts. It is worth noting that CP1 is by default in front of CP2. Let us recall the example in Table 2 and denote FX1, FX2 as the chromosomes of the father and MX1, MX2 as those of the mother. As shown in Figure 4, the son, denoted by SX1, SX2, completely inherits the father's chromosomes before CP1 and after CP2 (for the one-point crossover, CP2 is automatically set to the last position). For the positions between CP1 and CP2, the son takes the remaining jobs following the mother's sequence. However, considering the precedence relationships, a simple combination of partial chromosomes may lead to an invalid chromosome. To guarantee a feasible schedule in SX1, for the positions between CP1 and CP2, jobs of FX1 have to follow their relative sequence in MX1. In the meantime, it is important to keep the "pairing" relationship between X1 and X2: a job and its time lag should always be at the same position. The daughters (DX1, DX2) are created in the same way by reversing the roles of the father and the mother.
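A sketch of the two-point crossover follows. The middle jobs take the mother's time lags and the two ends keep the father's; which parent's lag a middle job inherits is our assumption, the text only requiring that a job and its lag stay together.

```python
# Two-point crossover on (X1, X2): the son copies the father outside
# [CP1, CP2) and fills the middle with the missing jobs in the mother's
# relative order. Middle jobs carrying the mother's lags is an assumption.
def crossover(fx1, fx2, mx1, mx2, cp1, cp2):
    head, tail = fx1[:cp1], fx1[cp2:]
    fixed = set(head) | set(tail)
    middle = [j for j in mx1 if j not in fixed]    # mother's relative order
    sx1 = head + middle + tail
    lag = dict(zip(fx1, fx2))                      # father's lags by default
    lag.update({j: l for j, l in zip(mx1, mx2) if j in middle})
    return sx1, [lag[j] for j in sx1]

# Example parents on the nine-job instance of Table 2 (values are invented)
fx1, fx2 = [0, 1, 2, 4, 3, 5, 6, 7, 8], [0, 0, 1, 0, 0, 0, 0, 0, 0]
mx1, mx2 = [0, 3, 2, 1, 4, 5, 7, 6, 8], [0, 0, 0, 0, 2, 0, 0, 0, 0]
sx1, sx2 = crossover(fx1, fx2, mx1, mx2, 2, 7)
```

Because the middle jobs are reordered according to the mother, both parents being precedence-feasible makes the child precedence-feasible as well.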

Figure 4: Two-Point Crossover.

3.4 Mutation

Mutation furthers the diversity of chromosomes and can help step out of possible local optima. Considering X1, for a given individual, we firstly choose a mutation job, denoted by jm. A time window, defined between the end of the last predecessor and the beginning of the first successor, frames a free zone where jm can start at any moment without violating the precedence relationships. We then randomly choose a reference job in this window and denote it by jr. If jm is before (after) jr, then we put jm right after (before) jr, and the rest of the jobs keep their relative order in the original chromosome. Once again, we should pay attention to the pairing relationship between X1 and X2 and make sure that X2 stays paired with X1 during the mutation. Moreover, if the time lag of jm was positive before the mutation, we set it to 0; otherwise, we assign a positive time lag to it. In Figure 5, X1, X2 represents the original chromosome, while X1′, X2′ stands for the chromosome after the mutation.
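A possible sketch of this operator is given below. Two points are assumptions made for illustration: the free zone is interpreted as a range of positions in X1 (between the last predecessor and the first successor), and the new positive time lag is set to 1.

```python
import random

# Mutation sketch: jm is moved next to a reference job jr chosen in its free
# zone; (job, lag) pairs stay together and jm's time lag is toggled.
pred = {0: [], 1: [0], 2: [0], 3: [0], 4: [1], 5: [4],
        6: [2, 3], 7: [5], 8: [6, 7]}
succ = {j: [k for k in pred if j in pred[k]] for j in pred}

def mutate(x1, x2, jm, rng):
    pos = {j: k for k, j in enumerate(x1)}
    lo = max((pos[j] for j in pred[jm]), default=-1) + 1
    hi = min((pos[j] for j in succ[jm]), default=len(x1))
    jr = rng.choice([j for j in x1[lo:hi] if j != jm])   # reference job
    new_lag = 0 if x2[pos[jm]] > 0 else 1                # toggle the lag
    pairs = [(j, l) for j, l in zip(x1, x2) if j != jm]
    k = [j for j, _ in pairs].index(jr)
    k += 1 if pos[jm] < pos[jr] else 0      # after jr if jm was before it
    pairs.insert(k, (jm, new_lag))
    return [j for j, _ in pairs], [l for _, l in pairs]

nx1, nx2 = mutate([0, 1, 2, 4, 3, 5, 6, 7, 8],
                  [0, 0, 1, 0, 0, 0, 0, 0, 0], jm=3, rng=random.Random(7))
```

Since jm is reinserted inside its free zone and all other jobs keep their relative order, the mutated sequence remains precedence-feasible.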

Figure 5: Mutation.

3.5 Selection

After the offspring step, the children and parents are gathered into the same pool, from which N individuals will be selected according to the Pareto dominance. Let us consider a problem with K objectives to minimize. For a given solution s, denote by O_k(s) its value on objective k = 1, 2, …, K. For two solutions s1 and s2, if O_k(s1) ≤ O_k(s2) for all k = 1, 2, …, K and O_i(s1) < O_i(s2) for at least one i ∈ {1, 2, …, K}, then s2 is said to be Pareto dominated by s1, denoted by s1 ≻P s2. Hence, the two solutions s1 and s2 are mutually non-dominated if and only if neither s1 ≻P s2 nor s2 ≻P s1. A set of mutually non-dominated solutions is on the same front.
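The dominance test and the front construction can be sketched as follows, illustrated on the ten (O1, O2) points later listed in Table 3 (the quadratic front-peeling loop is a didactic sketch, not the fast non-dominated sort of NSGAII):

```python
# Pareto dominance for minimization and naive non-dominated sorting.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    fronts, rest = [], list(points)
    while rest:
        front = [p for p in rest
                 if not any(dominates(q, p) for q in rest if q != p)]
        fronts.append(front)                 # peel off the current best front
        rest = [p for p in rest if p not in front]
    return fronts

pts = [(15, 40), (17, 36), (20, 35), (21, 33), (24, 28),
       (29, 27), (39, 25), (45, 26), (41, 27), (40, 35)]
fronts = non_dominated_fronts(pts)
```

The first seven points are mutually non-dominated and form F1; the remaining three fall on F2.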

During the selection, non-dominated fronts are established. Let us denote these fronts by {F1, F2, …, Flast}, where F1 contains the best solutions while Flast is constituted by the worst ones in the given population. In most cases, Flast requires a selection within the non-dominated front. In this case, the NSGAII uses the crowding distance, denoted by Dc, to measure the solution distribution on the solution space. Solutions with higher values of Dc are selected as they are located on less visited zones. Readers can refer to Ref. [12] for more details.

As stated earlier, we have to deal with the equivalent chromosomes in our problem. For this purpose, we limit the size of each front Fi (i=1, 2, …) by a parameter δ, such that the number of solutions chosen on Fi does not exceed δ·N. This means that the selection within a front is performed not only for Flast, but may also be necessary for any other front. Generally speaking, the extreme solutions have the highest priority so that the exploration zone is as large as possible. However, not all extreme solutions are necessarily chosen. When many equivalent solutions are identified, simply choosing those with the highest Dc can limit the diversity of the population as well as the solution distribution on the search space. As a result, the equivalent solutions are gathered together as a group. The selection always follows the principle of crowding distance; however, for a group of equivalent solutions, only one is chosen at a time. Once all the groups are visited, we return to the head of the list and repeat the previous step until we have enough individuals.

3.6 Stopping Criteria and Equivalent Chromosomes

Two stopping criteria are considered in our program. If the first front does not vary for ϕ generations, we conclude that the population has converged and the program is stopped. Otherwise, the program ends when the number of generations reaches θ, and we keep the last-found generation.

4 Lorenz Dominance

The Lorenz dominance (denoted by L-dominance) was introduced by Kostreva et al. [24] in a study of equitable aggregations for multi-objective optimization problems. It was recently integrated by Dugardin et al. [15] for a re-entrant hybrid flow shop scheduling problem as well as by Moghaddam et al. [26] for a single machine scheduling problem with rejection. Instead of searching for the largest possible coverage like the Pareto dominance does, the L-dominance focuses on a subset of the Pareto front that allows leading the population toward the ideal point.

4.1 Definition

In a multi-objective problem with K criteria to minimize, given two solutions s1 and s2, s2 is said to be Lorenz dominated by s1, denoted by s1 ≻L s2, if the Lorenz vector of s2 is Pareto dominated by that of s1. In order to find the Lorenz vector of a solution s, we need to follow several steps. Firstly, the objective values are normalized to a value in [0, 1] by

(16) O'_k(s) = \frac{O_k(s) - \min O_k}{\max O_k - \min O_k},

where max O_k and min O_k are, respectively, the maximum and minimum values of objective k ∈ {1, …, K} in the current generation. The normalized objectives are then sorted in decreasing order such that

(17) O'_{[1]}(s) \ge O'_{[2]}(s) \ge \dots \ge O'_{[K]}(s).

The Lorenz vector can thus be found as

(18) O^L_k(s) = \sum_{i=1}^{k} O'_{[i]}(s), \forall k \in \{1, \dots, K\}, \forall s \in S,

where O^L_k(s) stands for the kth element of the Lorenz vector of s, computed as the cumulative sum of the first k elements of the sorted normalized vector. In Figure 6, we give an illustration for the bi-objective case. For a given solution s, the area dominated under L-dominance contains the area dominated under P-dominance. Besides, the L-dominated area is almost symmetric with respect to the line O1L = O2L, except for the symmetric point of s itself; this is due to the fact that the values in the Lorenz vector are sorted in decreasing order. Finally, the Lorenz dominance can be established by comparing the Lorenz vectors according to the Pareto dominance: given two solutions sa and sb, if the Lorenz vector of sa dominates that of sb in the Pareto sense, then sa dominates sb regarding the L-dominance.

Figure 6: Pareto-dominated (Left) and Lorenz-dominated (Right) Area.

4.2 Numerical Example

For illustrative purposes, let us consider a bi-objective example with 10 solutions as in Table 3. The solutions are sorted with the Pareto dominance, as given in the column P-rank.

Table 3:

A Numerical Example for the Lorenz Dominance.

      O1  O2  P-rank   O'1   O'2   O'[1]  O'[2]   O1L   O2L   L-rank
s1    15  40    1      0.00  1.00   1.00   0.00   1.00  1.00    7
s2    17  36    1      0.07  0.73   0.73   0.07   0.73  0.80    4
s3    20  35    1      0.17  0.67   0.67   0.17   0.67  0.83    4
s4    21  33    1      0.20  0.53   0.53   0.20   0.53  0.73    3
s5    24  28    1      0.30  0.20   0.30   0.20   0.30  0.50    1
s6    29  27    1      0.47  0.13   0.47   0.13   0.47  0.60    2
s7    39  25    1      0.80  0.00   0.80   0.00   0.80  0.80    5
s8    45  26    2      1.00  0.07   1.00   0.07   1.00  1.07    8
s9    41  27    2      0.87  0.13   0.87   0.13   0.87  1.00    6
s10   40  35    2      0.83  0.67   0.83   0.67   0.83  1.50    7

(O'1, O'2: normalized values, Eq. (16); O'[1], O'[2]: sorted values, Eq. (17); O1L, O2L: Lorenz vector, Eq. (18).)

In order to implement the Lorenz dominance, the solutions are firstly normalized according to Eq. (16). Next, for each solution, their normalized objectives are sorted in decreasing order as described in Eq. (17). Thus, we can obtain the Lorenz vector OkL by cumulating the values of O[k] as in Eq. (18). This transformation procedure is also illustrated in Figure 7. Finally, the solutions are re-ranked by comparing their Lorenz vectors according to the Pareto dominance.
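The transformation of Table 3 can be reproduced in a few lines; this is a sketch, with values rounded to two decimals as in the table.

```python
# Lorenz vectors for the ten solutions of Table 3, following Eqs. (16)-(18).
pts = [(15, 40), (17, 36), (20, 35), (21, 33), (24, 28),
       (29, 27), (39, 25), (45, 26), (41, 27), (40, 35)]
lo = [min(p[k] for p in pts) for k in range(2)]    # per-objective minima
hi = [max(p[k] for p in pts) for k in range(2)]    # per-objective maxima

def lorenz(p):
    # normalize, Eq. (16), then sort in decreasing order, Eq. (17)
    norm = sorted(((p[k] - lo[k]) / (hi[k] - lo[k]) for k in range(2)),
                  reverse=True)
    # cumulative sums give the Lorenz vector, Eq. (18)
    return tuple(round(sum(norm[:k + 1]), 2) for k in range(2))
```

For instance, s5 = (24, 28) normalizes to (0.30, 0.20) and yields the Lorenz vector (0.30, 0.50), the best-ranked vector of the table.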

Figure 7: Numerical Example – Creation of the Lorenz Vectors from Original Solution Coordinates.

The L-dominance aims at giving priority to solutions that equally optimize each objective. In the example, s5 is classified as rank 1 by the L-dominance as it proposes good performances on both O1 and O2. On the other hand, the solution s1, which provides a good result on O1 but a bad result on O2, is poorly ranked by the L-dominance even if it belongs to the first Pareto front.

4.3 Adaptation on NSGAs

The Lorenz dominance is then integrated in our programs. The basic idea is to sort and select the solutions according to their Lorenz vectors instead of their original coordinates. For instance, at the beginning of the selection, we compute the Lorenz vectors before performing the non-dominated sorting. The fronts are thus built according to the Lorenz vectors of the solutions. Moreover, when selection within a non-dominated front is required, the crowding distances (NSGAII) and the reference points and hyperplane (NSGAIII) are calculated with respect to the Lorenz vectors as well. Inheriting all the other procedures from the methods presented earlier, the L-NSGAII and L-NSGAIII are applied to solve our problem.

5 Experiments

In order to measure and compare the performances of our methods, 1350 instances with 30–150 jobs are randomly generated under different parameters. Our programs are developed in C++, and all tests are launched on a Dell platform.

5.1 Data Generation and Experimental Design

As stated in our previous study [33], several parameters are used for data generation in the literature, such as the net complexity (NC) and the resource strength. Inspired by these works and considering the specific characteristics of our problem, we propose a new set of instances that allows evaluating our methods with adapted parameters, as described below:

  • NC: ratio between the number of precedence relationships and the number of non-dummy jobs. A higher NC value leads to a more tightly constrained network with fewer possible schedules.

  • Demand rate (DR): proportion of positive resource requirements q_{i,r}, with i ∈ J and r ∈ R.

  • Demand factor (DF): parameter that allows generating resource requirements. For a given non-null requirement of job i ∈ J for resource r ∈ R, q_{i,r} is randomly chosen in [1, DF · q_r] according to a uniform distribution.

  • Delay proportion (DP): proportion of jobs whose due dates are set to their earliest finishing dates. The remaining due dates are chosen randomly with a uniform distribution in [EF_i+1, EF_i+p_i] for a given job i ∈ J.
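The DR/DF part of this generation scheme can be sketched as follows; the function name, the capacities, and the exact form of the uniform integer draw are assumptions for illustration.

```python
import random

# Sketch of resource-requirement generation driven by DR and DF:
# DR = probability that q_ir is positive, DF bounds the draw at DF * q_r.
def gen_requirements(n_jobs, capacities, dr, df, rng):
    q = {}
    for i in range(1, n_jobs + 1):             # non-dummy jobs only
        for r, cap in capacities.items():
            if rng.random() < dr:              # positive with probability DR
                q[i, r] = rng.randint(1, max(1, int(df * cap)))
            else:
                q[i, r] = 0
    return q

q = gen_requirements(30, {"r1": 10, "r2": 8}, dr=0.6, df=0.5,
                     rng=random.Random(42))
```

Every generated requirement stays within [0, DF · q_r], so instances with a small DF leave more slack for the workload-balance objective.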

With these parameters, we have generated instances with different combinations. Let us denote GY as the group of instances with Y non-dummy jobs.

As presented in previous sections, we solve this multi-objective problem with the NSGAII, L-NSGAII, NSGAIII, and L-NSGAIII (respectively denoted by M1, M2, M3, and M4). Instances with 30–150 jobs are considered with DP∈{0.3, 0.5, 0.7}, DR∈{0.4, 0.6, 0.8}, and DF∈{0.5, 0.7, 0.9}. For all the groups, we have set Pc=0.9, Pmut=0.05. For the sake of brevity, the subgroups are denoted by G30.γ.α.β, where γ, α, β indicate the position of the chosen value in the DP, DR, and DF sets, respectively. For instance, when DP=0.3, we have γ=0. Each subgroup contains 10 randomly created instances.

Our methods are evaluated and compared with two indicators: the hypervolume and the C-metric. The hypervolume was introduced by Zitzler et al. [34] and measures, for a Pareto front, the coverage level of its dominance zone. A value in (0, 1) is assigned to the evaluated front, where 1 stands for perfect coverage and 0 represents extremely poor coverage. In the work of Fonseca et al. [16], the authors proposed a fast algorithm for computing the hypervolume, with a dedicated procedure for the three-objective case. We adapt this method, as well as the program proposed by the authors, to our problem. The C-metric allows comparing two Pareto fronts: it gives the proportion of solutions on one front that are dominated by at least one solution on the other front. A lower value of the C-metric stands for better performance.
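A minimal sketch of the C-metric follows; the two small fronts are invented for illustration.

```python
# C-metric C(A, B): proportion of solutions of B dominated by at least one
# solution of A (both objectives minimized).
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def c_metric(A, B):
    return sum(any(dominates(a, b) for a in A) for b in B) / len(B)

A = [(1, 5), (3, 3), (5, 1)]
B = [(2, 6), (4, 4), (6, 2), (0, 9)]
```

Here C(A, B) = 0.75 because three of B's four points are dominated by a point of A, while C(B, A) = 0: A is clearly the better front. Note that C(A, B) and C(B, A) need not sum to 1, which is why both directions are reported in Table 4.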

5.2 Experimental Results

We first observe the performance of the four methods on the small-instance group G30 with NC=1. The population size is N=100, with the stopping-criterion parameters θ=500 and ϕ=50. The experimental results are presented in Table 4, where the better results are highlighted in bold. As the computation times of the four methods are very close (about 0.06 s per generation on average), they are not included in the table.

Table 4:

G30 – Hypervolume and C-metric.

G30      Hypervolume               C-metric
(αβγ)    M1    M2    M3    M4     M1    M2    M3    M4     M1    M3    M2    M4
000      0.53  0.59  0.49  0.58   0.15  0.06  0.15  0.04   0.09  0.10  0.08  0.10
001      0.51  0.50  0.47  0.43   0.06  0.11  0.05  0.14   0.04  0.13  0.07  0.15
002      0.59  0.60  0.54  0.55   0.11  0.12  0.12  0.12   0.06  0.14  0.07  0.11
010      0.54  0.46  0.52  0.53   0.10  0.09  0.09  0.09   0.13  0.06  0.10  0.05
011      0.60  0.57  0.53  0.57   0.06  0.11  0.10  0.09   0.07  0.11  0.09  0.08
012      0.57  0.56  0.56  0.55   0.08  0.12  0.11  0.07   0.10  0.09  0.12  0.05
020      0.48  0.51  0.42  0.50   0.12  0.05  0.14  0.05   0.08  0.10  0.08  0.07
021      0.54  0.53  0.50  0.55   0.11  0.10  0.14  0.07   0.09  0.11  0.11  0.09
022      0.51  0.50  0.51  0.50   0.12  0.09  0.10  0.08   0.11  0.07  0.12  0.09
100      0.47  0.53  0.45  0.49   0.12  0.10  0.09  0.12   0.09  0.13  0.10  0.11
101      0.48  0.52  0.44  0.46   0.13  0.02  0.12  0.10   0.07  0.10  0.07  0.14
102      0.56  0.55  0.40  0.48   0.13  0.09  0.17  0.03   0.01  0.19  0.07  0.15
110      0.58  0.66  0.53  0.53   0.13  0.04  0.07  0.09   0.07  0.12  0.03  0.16
111      0.44  0.44  0.45  0.37   0.09  0.09  0.07  0.16   0.08  0.10  0.08  0.09
112      0.55  0.55  0.50  0.45   0.12  0.10  0.03  0.16   0.12  0.05  0.02  0.16
120      0.54  0.55  0.49  0.49   0.12  0.11  0.09  0.07   0.06  0.12  0.09  0.04
121      0.58  0.55  0.49  0.50   0.10  0.12  0.08  0.12   0.05  0.18  0.08  0.13
122      0.42  0.44  0.45  0.49   0.12  0.08  0.10  0.06   0.11  0.09  0.10  0.07
200      0.44  0.47  0.58  0.44   0.11  0.08  0.08  0.15   0.17  0.07  0.08  0.08
201      0.54  0.55  0.55  0.53   0.10  0.12  0.04  0.14   0.09  0.10  0.11  0.07
202      0.50  0.55  0.52  0.54   0.12  0.09  0.08  0.10   0.09  0.12  0.12  0.08
210      0.58  0.56  0.47  0.45   0.09  0.14  0.12  0.06   0.03  0.17  0.11  0.08
211      0.51  0.39  0.39  0.48   0.06  0.15  0.14  0.04   0.03  0.12  0.12  0.09
212      0.52  0.45  0.51  0.49   0.05  0.16  0.11  0.10   0.09  0.14  0.16  0.05
220      0.58  0.53  0.48  0.47   0.04  0.18  0.11  0.08   0.04  0.16  0.10  0.12
221      0.55  0.49  0.44  0.46   0.08  0.12  0.12  0.08   0.03  0.13  0.09  0.10
222      0.52  0.48  0.49  0.50   0.04  0.13  0.13  0.09   0.09  0.14  0.13  0.12
Best     14/27 10/27 4/27  2/27   13/27 14/27 11/27 17/27  20/27 7/27  13/27 15/27

Regarding the hypervolume, we can see that the NSGAII-based methods (M1 and M2) perform better than the NSGAIII-based methods (M3 and M4). The NSGAII provides better solutions in 14 out of 27 subgroups and is the best among the four methods. The NSGAIII-based methods give higher hypervolumes in only six of the 27 tested subgroups; these instances are characterized by medium resource requirements. In Ref. [33], we have shown that the difficulty of instances increases with higher resource requirements. Furthermore, according to this metric, the Lorenz dominance does not seem to bring strong improvements over the Pareto dominance on G30.

With the C-metric, the methods are studied in pairs. According to the comparison between M1 and M2, as well as that between M3 and M4, it is obvious that the Lorenz dominance improves the solution quality both for the NSGAII and NSGAIII approaches. Moreover, this improvement is stronger for the NSGAIII. By comparing M1 and M3, we find the same tendency as for the hypervolume, which means that the NSGAII outperforms the NSGAIII in most cases, especially in the easy and difficult instances. Finally, the comparison between M2 and M4 supports the previous statement that the Lorenz dominance has a much stronger improvement for the NSGAIII than the NSGAII.

Combining the two metrics, we find that the NSGAII keeps its superior performance over the NSGAIII on easy and difficult instances, for both the hypervolume and the C-metric, while the Lorenz-based methods seem weaker. Nevertheless, the C-metric reveals the improvements of the Lorenz dominance: even if it does not provide a higher hypervolume, it generates more "truly" non-dominated solutions.
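The Lorenz dominance used in M2 and M4 can be sketched as follows. It Pareto-compares the cumulative sums of the sorted objective vectors (the objectives are assumed here to be normalized to comparable scales), following the equitable-aggregation idea of Ref. [24]; this is a minimal illustration, not the authors' exact implementation:

```python
def lorenz_vector(objectives):
    # Generalized Lorenz vector for minimization: cumulative sums of the
    # objective values sorted in non-increasing order.
    ordered = sorted(objectives, reverse=True)
    cumulative, total = [], 0
    for value in ordered:
        total += value
        cumulative.append(total)
    return tuple(cumulative)

def pareto_dominates(a, b):
    # Standard Pareto dominance for minimization.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def lorenz_dominates(a, b):
    # a Lorenz-dominates b iff the Lorenz vector of a Pareto-dominates
    # the Lorenz vector of b.
    return pareto_dominates(lorenz_vector(a), lorenz_vector(b))
```

For example, (2, 2) and (1, 3) are Pareto-incomparable, but (2, 2) Lorenz-dominates (1, 3): the more balanced trade-off is preferred, which is what pulls the population toward the ideal point.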

We now move on to medium-sized problems, taking as an example the group G60 with NC=1.33, whose results are shown in Table 5. Once again, bold letters mark the better results. In these experiments, we have set N=125, θ=700, and ϕ=70. The computation times of the different methods remain very close, at around 0.20 s per generation. Compared to G30, the NSGAIII-based methods start to show their advantages. Considering the hypervolume, M1 and M2 still provide better solutions for more instances; however, their share has fallen from 24 to 16 subgroups out of 27. In the meantime, the advantage of the NSGAII on easy and difficult instances is reduced as well.

Table 5:

G60 – Hypervolume and C-metric.

G60      Hypervolume               C-metric
(αβγ)    M1    M2    M3    M4     M1    M2    M3    M4     M1    M3    M2    M4
000      0.63  0.62  0.61  0.66   0.13  0.07  0.10  0.09   0.11  0.07  0.11  0.09
001      0.63  0.47  0.46  0.45   0.02  0.16  0.13  0.08   0.06  0.18  0.14  0.10
002      0.51  0.49  0.39  0.45   0.08  0.13  0.09  0.08   0.04  0.14  0.13  0.10
010      0.68  0.64  0.61  0.56   0.11  0.12  0.05  0.16   0.10  0.08  0.03  0.14
011      0.57  0.50  0.49  0.56   0.06  0.10  0.11  0.09   0.07  0.11  0.10  0.08
012      0.61  0.64  0.66  0.56   0.13  0.10  0.06  0.15   0.10  0.13  0.05  0.07
020      0.52  0.43  0.55  0.53   0.07  0.15  0.05  0.13   0.12  0.06  0.14  0.06
021      0.52  0.49  0.49  0.58   0.04  0.14  0.12  0.04   0.07  0.05  0.15  0.01
022      0.38  0.48  0.52  0.47   0.15  0.10  0.08  0.12   0.18  0.04  0.10  0.09
100      0.47  0.46  0.41  0.46   0.12  0.06  0.10  0.09   0.07  0.14  0.09  0.10
101      0.48  0.52  0.55  0.51   0.10  0.07  0.07  0.15   0.12  0.06  0.08  0.11
102      0.39  0.43  0.41  0.59   0.10  0.05  0.22  0.00   0.13  0.09  0.14  0.05
110      0.45  0.40  0.39  0.51   0.11  0.11  0.14  0.07   0.07  0.09  0.11  0.08
111      0.41  0.49  0.44  0.29   0.08  0.09  0.04  0.09   0.07  0.08  0.05  0.10
112      0.48  0.58  0.42  0.45   0.16  0.06  0.11  0.09   0.11  0.08  0.05  0.19
120      0.62  0.53  0.55  0.58   0.07  0.17  0.11  0.09   0.04  0.16  0.11  0.09
121      0.44  0.52  0.41  0.47   0.11  0.07  0.10  0.11   0.07  0.10  0.09  0.08
122      0.45  0.45  0.59  0.55   0.10  0.13  0.11  0.15   0.11  0.10  0.14  0.07
200      0.41  0.53  0.48  0.42   0.15  0.07  0.06  0.11   0.14  0.08  0.09  0.10
201      0.49  0.31  0.58  0.50   0.02  0.16  0.07  0.16   0.16  0.05  0.21  0.02
202      0.50  0.47  0.43  0.42   0.04  0.16  0.11  0.13   0.04  0.14  0.11  0.08
210      0.41  0.60  0.49  0.39   0.17  0.04  0.02  0.15   0.16  0.04  0.05  0.17
211      0.51  0.40  0.40  0.33   0.05  0.18  0.08  0.13   0.07  0.14  0.12  0.10
212      0.53  0.37  0.51  0.48   0.09  0.15  0.07  0.13   0.11  0.09  0.12  0.06
220      0.46  0.45  0.46  0.42   0.10  0.09  0.08  0.14   0.13  0.12  0.08  0.13
221      0.47  0.43  0.49  0.52   0.06  0.17  0.12  0.06   0.08  0.11  0.17  0.04
222      0.53  0.42  0.48  0.54   0.04  0.16  0.13  0.10   0.10  0.08  0.17  0.05
Best     10/27 6/27  7/27  5/27   15/27 13/27 15/27 12/27  13/27 14/27 9/27  18/27

This tendency is confirmed by the comparisons between M1 and M3 and between M2 and M4 regarding the C-metric: the NSGAIII-based methods propose more interesting solutions. However, the improvements of the Lorenz dominance are not as strong as in G30, and the performance of the Lorenz-based methods is slightly weaker than that of the Pareto-based methods. Within both the pairs M1/M2 and M3/M4, the better performances are rather evenly distributed, and their behaviors on instances with moderate resource requirements are relatively close. It is nevertheless worth noting that M4 still proposes better solutions than M2 on most instances, while M3 is more efficient than M1 in only around half of the cases. This matches our earlier remark that the Lorenz dominance has a stronger effect on the NSGAIII than on the NSGAII.

Finally, we present the results on large instances, taking the example of G120 with NC=2, N=120, θ=1000, and ϕ=100; the results are shown in Table 6, with the better results marked in bold. The computation time for each generation varies from 0.45 s (M1) to 0.53 s (M4). The NSGAIII-based methods now outperform the NSGAII-based methods in 19 out of 27 subgroups on the hypervolume; the latter lose their superiority on both easy and difficult instances. The efficiency of the NSGAIII-based methods is confirmed once more by comparing the C-metrics of M1 and M3, as well as those of M2 and M4. Meanwhile, the improvements of the Lorenz dominance remain visible in the pairs M1/M2 and M3/M4. In Figure 8, which compares the NSGAIII and the L-NSGAIII on an example, the advantage of the Lorenz dominance is clearly shown. We also notice that the C-metrics of M2 and M4 are lower than those of M1 and M3, and that the two Pareto fronts found by the Lorenz-based methods are closer to each other than those found by the other two methods. This means that the Lorenz dominance allows finding more stable solutions than the Pareto dominance.

Table 6:

G120 – Hypervolume and C-metric.

G120     Hypervolume               C-metric
(αβγ)    M1    M2    M3    M4     M1    M2    M3    M4     M1    M3    M2    M4
000      0.46  0.46  0.45  0.48   0.06  0.13  0.11  0.09   0.06  0.10  0.04  0.04
001      0.32  0.38  0.32  0.39   0.16  0.07  0.14  0.06   0.14  0.05  0.04  0.03
002      0.39  0.49  0.50  0.43   0.15  0.09  0.09  0.14   0.17  0.05  0.05  0.07
010      0.37  0.35  0.40  0.42   0.06  0.13  0.11  0.12   0.10  0.10  0.08  0.02
011      0.40  0.44  0.38  0.44   0.11  0.10  0.16  0.04   0.09  0.12  0.05  0.04
012      0.49  0.40  0.41  0.38   0.15  0.09  0.09  0.13   0.08  0.09  0.04  0.06
020      0.52  0.37  0.44  0.47   0.01  0.20  0.11  0.10   0.04  0.17  0.07  0.02
021      0.50  0.46  0.47  0.47   0.10  0.11  0.12  0.04   0.05  0.11  0.06  0.04
022      0.34  0.47  0.42  0.46   0.21  0.05  0.13  0.09   0.16  0.08  0.05  0.04
100      0.34  0.32  0.40  0.38   0.03  0.13  0.11  0.05   0.12  0.07  0.08  0.03
101      0.39  0.38  0.29  0.31   0.13  0.08  0.08  0.10   0.07  0.15  0.04  0.07
102      0.15  0.28  0.52  0.46   0.19  0.05  0.10  0.09   0.16  0.04  0.07  0.00
110      0.40  0.31  0.43  0.30   0.03  0.13  0.03  0.15   0.18  0.05  0.03  0.04
111      0.27  0.30  0.42  0.43   0.13  0.04  0.11  0.07   0.14  0.10  0.06  0.02
112      0.28  0.36  0.48  0.37   0.13  0.10  0.04  0.13   0.16  0.06  0.06  0.03
120      0.45  0.48  0.46  0.36   0.15  0.06  0.05  0.12   0.10  0.07  0.00  0.08
121      0.38  0.33  0.38  0.37   0.04  0.10  0.09  0.10   0.10  0.06  0.06  0.04
122      0.44  0.37  0.37  0.44   0.11  0.13  0.14  0.04   0.06  0.16  0.06  0.03
200      0.33  0.51  0.41  0.44   0.11  0.03  0.07  0.11   0.12  0.11  0.04  0.06
201      0.32  0.41  0.46  0.34   0.08  0.10  0.05  0.12   0.12  0.07  0.02  0.05
202      0.47  0.46  0.24  0.29   0.07  0.07  0.13  0.02   0.03  0.17  0.02  0.07
210      0.39  0.31  0.20  0.23   0.06  0.12  0.13  0.06   0.00  0.18  0.03  0.05
211      0.36  0.34  0.45  0.40   0.07  0.13  0.08  0.09   0.14  0.08  0.05  0.03
212      0.22  0.33  0.33  0.48   0.12  0.04  0.15  0.09   0.15  0.06  0.06  0.02
220      0.38  0.35  0.34  0.41   0.10  0.09  0.11  0.11   0.05  0.16  0.05  0.04
221      0.37  0.38  0.33  0.44   0.09  0.07  0.15  0.03   0.06  0.08  0.06  0.04
222      0.36  0.39  0.40  0.37   0.12  0.11  0.06  0.17   0.12  0.11  0.03  0.07
Best     8/27  3/27  9/27  10/27  12/27 16/27 13/27 15/27  11/27 16/27 11/27 17/27
Figure 8: Example of Comparison Between NSGAIII and L-NSGAIII.

By observing the behaviors on the different groups of instances, we notice that for small instances, the NSGAII is the most robust method and the Lorenz dominance does not provide significant improvements. However, as the problems become more complicated, the NSGAIII-based methods show their advantage and take the lead. The Lorenz dominance consistently improves the C-metric, as it drives the population toward the ideal point, and this effect is stronger on the NSGAIII than on the NSGAII.

6 Conclusion and Perspectives

In this paper, we have tackled a multi-objective RCPSP with minimization of the makespan and total job tardiness and maximization of the workload balance, a combination that has not yet been addressed in the RCPSP literature. The three-objective problem is solved with the NSGAII and NSGAIII as well as the L-NSGAII and L-NSGAIII to find approximate results. The differences lie in the niching rules of these methods, leading to different performances.

Our methods are tested on small-, medium-, and large-sized instances characterized by different parameters for resource requirements and due dates. The solutions are evaluated with both the hypervolume and the C-metric. The experiments show that the NSGAII outperforms the other methods on small instances; nevertheless, this advantage becomes less and less obvious until it disappears, and the NSGAIII-based methods gradually take the lead on medium- and large-sized problems. In the meantime, the advantage of the Lorenz dominance is shown in each group of tests, and its improvements are more effective for the NSGAIII than for the NSGAII.

Our future work may involve enhancing the NSGAs and L-NSGAs by integrating a local search adapted to this problem, especially for the workload balance objective, which can lead to specific moves. We may also solve the problem with other meta-heuristic methods and propose potential improvements. Furthermore, since, to the best of our knowledge, three-objective problems have not yet been solved by exact methods in the RCPSP literature, it would be interesting to apply exact algorithms and assess the quality of our approximate solutions on small instances.

Bibliography

[1] B. Abbasi, S. Shadrokh and J. Arkat, Bi-objective resource-constrained project scheduling with robustness and makespan criteria, Appl. Math. Comput. 96 (2006), 175–187. doi:10.1016/j.amc.2005.11.160.

[2] M. A. Al-Fawzan and M. Haouari, A bi-objective model for robust resource-constrained project scheduling, Int. J. Prod. Econ. 180 (2006), 146–152. doi:10.1016/j.ijpe.2004.04.002.

[3] R. Alvarez-Valdés and J. M. Tamarit, The project scheduling polyhedron: dimension, facets and lifting theorems, Eur. J. Oper. Res. 67 (1993), 204–220. doi:10.1016/0377-2217(93)90062-R.

[4] T. Baar, P. Brucker and S. Knust, Tabu search algorithms and lower bounds for the resource-constrained project scheduling problem, in: S. Voß, S. Martello, I. H. Osman and C. Roucairol (eds.), Meta-Heuristics, pp. 1–18, Springer US, New York, NY, 1999. doi:10.1007/978-1-4615-5775-3_1.

[5] L. Bianco and M. Camaria, A new formulation for the project scheduling problem under limited resources, Flexible Serv. Manuf. J. 25 (2013), 6–24. doi:10.1007/s10696-011-9127-y.

[6] J. Blazewicz, J. K. Lenstra and A. H. G. R. Kan, Scheduling subject to resource constraints: classification and complexity, Discrete Appl. Math. 5 (1983), 11–24. doi:10.1016/0166-218X(83)90012-4.

[7] P. Brucker, S. Knust, A. Schoo and O. Thiele, A branch and bound algorithm for the resource-constrained project scheduling problem, Eur. J. Oper. Res. 107 (1998), 272–288. doi:10.1016/S0377-2217(97)00335-4.

[8] N. Christofides, R. Alvarez-Valdes and J. M. Tamarit, Project scheduling with resource constraints: a branch and bound approach, Eur. J. Oper. Res. 29 (1987), 262–273. doi:10.1016/0377-2217(87)90240-2.

[9] N. Damak, B. Jarboui and T. Loukil, Non-dominated sorting genetic algorithm-II to solve bi-objective multi-mode resource-constrained project scheduling problem, in: 2013 International Conference on Control, Decision and Information Technologies (CoDIT), Hammamet, Tunisia, pp. 842–846, 2013. doi:10.1109/CoDIT.2013.6689652.

[10] P. P. Das and S. Acharyya, Meta-heuristic approaches for solving resource constrained project scheduling problem: a comparative study, in: 2011 IEEE International Conference on Computer Science and Automation Engineering (CSAE), vol. 2, pp. 474–478, 2011. doi:10.1109/CSAE.2011.5952511.

[11] K. Deb and H. Jain, An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints, IEEE Trans. Evol. Comput. 18 (2014), 577–601. doi:10.1109/TEVC.2013.2281535.

[12] K. Deb, A. Pratap, S. Agarwal and T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2002), 182–197. doi:10.1109/4235.996017.

[13] D. Debels and M. Vanhoucke, A decomposition-based genetic algorithm for the resource-constrained project-scheduling problem, Oper. Res. 55 (2007), 457–469. doi:10.1287/opre.1060.0358.

[14] E. Demeulemeester and W. Herroelen, A branch-and-bound procedure for the multiple resource-constrained project scheduling problem, Manage. Sci. 38 (1992), 1803–1818. doi:10.1287/mnsc.38.12.1803.

[15] F. Dugardin, F. Yalaoui and L. Amodeo, New multi-objective method to solve re-entrant hybrid flow shop scheduling problem, Eur. J. Oper. Res. 203 (2012), 22–31. doi:10.1016/j.ejor.2009.06.031.

[16] C. M. Fonseca, L. Paquete and M. Lopez-Ibanez, An improved dimension-sweep algorithm for the hypervolume indicator, in: IEEE Congress on Evolutionary Computation, 2006, CEC 2006, Vancouver, BC, Canada, pp. 1157–1163, July 2006.

[17] H. C. Gomes, F. de Assis das Neves and M. J. F. Souza, Multi-objective metaheuristic algorithms for the resource-constrained project scheduling problem with precedence relations, Comput. Oper. Res. 44 (2014), 92–104. doi:10.1016/j.cor.2013.11.002.

[18] W. J. Gutjahr, Bi-objective multi-mode project scheduling under risk aversion, Eur. J. Oper. Res. 246 (2015), 421–434. doi:10.1016/j.ejor.2015.05.004.

[19] Q. Jia and Y. Seo, Solving resource-constrained project scheduling problems: conceptual validation of FLP formulation and efficient permutation-based ABC computation, Comput. Oper. Res. 40 (2013), 2037–2050. doi:10.1016/j.cor.2013.02.012.

[20] S. Khalili, A. A. Najafi and S. T. A. Niaki, Bi-objective resource constrained project scheduling problem with makespan and net present value criteria: two meta-heuristic algorithms, Int. J. Adv. Manuf. Technol. 69 (2013), 617–626. doi:10.1007/s00170-013-5057-z.

[21] K. Kim, Y. Yun, J. Yoon, M. Gen and G. Yamazaki, Hybrid genetic algorithm with adaptive abilities for resource-constrained multiple project scheduling, Comput. Ind. 56 (2005), 143–160. doi:10.1016/j.compind.2004.06.006.

[22] R. Klein, Project scheduling with time-varying resource constraints, Int. J. Prod. Res. 38 (2000), 3937–3952. doi:10.1080/00207540050176094.

[23] O. Koné, C. Artigues, P. Lopez and M. Mongeau, Event-based MILP models for resource-constrained project scheduling problems, Comput. Oper. Res. 38 (2011), 3–13. doi:10.1016/j.cor.2009.12.011.

[24] M. Kostreva, W. Ogryczak and A. Wierzbick, Equitable aggregation and multiple criteria analysis, Eur. J. Oper. Res. 158 (2004), 362–377. doi:10.1016/j.ejor.2003.06.010.

[25] A. Mingozzi, V. Maniezzo, S. Ricciardelli and L. Bianco, An exact algorithm for the resource constrained project scheduling problem based on a new mathematical formulation, Manage. Sci. 44 (1998), 714–729. doi:10.1287/mnsc.44.5.714.

[26] A. Moghaddam, F. Yalaoui and L. Amodeo, Lorenz versus Pareto dominance in a single machine scheduling problem with rejection, in: Evolutionary Multi-criterion Optimization: 6th International Conference, EMO 2011, Ouro Preto, Brazil, April 5–8, 2011, Proceedings, pp. 520–534, Springer, Berlin, 2011. doi:10.1007/978-3-642-19893-9_36.

[27] K. Neumann and J. Zimmermann, Procedures for resource leveling and net present value problems in project scheduling with general temporal and resource constraints, Eur. J. Oper. Res. 127 (2000), 425–443. doi:10.1016/S0377-2217(99)00498-1.

[28] Y. Ouazene, F. Yalaoui, H. Chehade and A. Yalaoui, Workload balancing in identical parallel machine scheduling using a mathematical programming method, Int. J. Comput. Intell. Syst. 7 (2014), 58–67. doi:10.1080/18756891.2013.853932.

[29] V. V. Peteghem and M. Vanhoucke, A genetic algorithm for the preemptive and non-preemptive multi-mode resource-constrained project scheduling problem, Eur. J. Oper. Res. 201 (2010), 409–418. doi:10.1016/j.ejor.2009.03.034.

[30] A. A. B. Pritsker, L. J. Watters and P. M. Wolfe, Multiproject scheduling with limited resources: a zero-one programming approach, Manage. Sci. 16 (1969), 93–108. doi:10.1287/mnsc.16.1.93.

[31] R. Sirdey, J. Carlier and D. Nace, Approximate solution of a resource-constrained scheduling problem, J. Heuristics 15 (2007), 1–17. doi:10.1007/s10732-007-9052-0.

[32] S. C. Vanucci, R. Bicalho, E. G. Carrano and R. H. C. Takahashi, A modified NSGA-II for the multiobjective multi-mode resource-constrained project scheduling problem, in: 2012 IEEE Congress on Evolutionary Computation (CEC), Brisbane, Australia, pp. 1–7, 2012. doi:10.1109/CEC.2012.6256616.

[33] X. Wang, F. Dugardin and F. Yalaoui, An exact method to solve a bi-objective resource constraint project scheduling problem, in: 8th IFAC Conference on Manufacturing Modelling, Management and Control, France, June 2016. doi:10.1016/j.ifacol.2016.07.579.

[34] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca and V. G. da Fonseca, Performance assessment of multiobjective optimizers: an analysis and review, IEEE Trans. Evol. Comput. 7 (2003), 117–132. doi:10.1109/TEVC.2003.810758.

Received: 2016-10-30
Published Online: 2017-10-23

©2019 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
