
A Hybrid Cuckoo Search and Simulated Annealing Algorithm

Faisal Alkhateeb and Bilal H. Abed-alguni
Published/Copyright: October 5, 2017

Abstract

Simulated annealing (SA) has proved successful as a single-state optimization search algorithm for both discrete and continuous problems. Cuckoo search (CS), in contrast, is a well-known population-based search algorithm that can be used to optimize problems with continuous domains. This paper proposes a hybrid algorithm that combines the CS and SA algorithms. The main goal of the hybridization is to use SA to improve the solutions generated by CS so that the search space is explored more efficiently. More precisely, we introduce four variations of the proposed hybrid algorithm. The proposed variations, together with the original CS and SA algorithms, were evaluated and compared using 10 well-known benchmark functions. The experimental results show that three variations of the proposed algorithm provide a major performance enhancement in terms of best solutions and running time when compared to CS and SA as stand-alone algorithms, whereas the fourth variation provides only a minor enhancement. Moreover, the experimental results show that the proposed hybrid algorithms also outperform some well-known optimization algorithms.

MSC 2010: 68T01

1 Introduction

Cuckoo search (CS) is a recent population-based optimization algorithm [30] that can be used for optimizing discrete and continuous problems. CS is distinguished from other optimization algorithms by the use of a randomization method based on Lévy flight and the use of one insensitive tuning parameter [20], [23], [30], [31].

In contrast, simulated annealing (SA) is a single-solution search algorithm that can be used for optimizing discrete [9], continuous, and statistical combinatorial problems [14]. CS and SA are iterative algorithms that generate solutions by a random method and replace the current solutions with better ones. Unlike CS and some local search algorithms (see Ref. [12] for the hill-climbing algorithm), SA increases the chance of escaping local optima because it explores the solution space and allows current solutions to be replaced by worse ones with a decreasing probability [12], [28].

Recently, several researchers have attempted to hybridize SA with other techniques to solve different types of optimization problems [13], [25], [26]. Liu and Wang [13] used the SA algorithm with priority rules to solve the cellular manufacturing system (CMS) problem with a dual-resource-constrained setting. The SA algorithm with parallel operators and constraints was used by Shivasankaran et al. [25] to solve the repair shop job sequencing and operator allocation problem. Shivasankaran et al. [26] proposed a hybrid sorting immune SA algorithm to solve the flexible job-shop scheduling problem. The SA algorithm has also been combined with differential evolution to solve the joint replenishment and delivery problem with trade credit [33].

Similar to many other optimization algorithms, CS suffers from premature convergence and slow convergence, especially when it is used to solve complex optimization problems [2]. Therefore, many research studies have attempted to improve CS. Marichelvam and colleagues [15], [17], [18] proposed improved algorithms that hybridize CS with other metaheuristics to solve flow shop scheduling problems. Marichelvam and Geetha [16] used a hybrid algorithm of CS and metaheuristics to solve single machine total weighted tardiness scheduling problems with sequence-dependent setup times. Based on several benchmark functions, their experiments show that the proposed hybrid algorithms provide better results than some other metaheuristic algorithms.

A hybrid algorithm that merges the exploitation mechanism of the CS algorithm and the exploration mechanism of global harmony search was proposed by Feng et al. [5] for solving 0-1 knapsack problems. The experimental results showed that the proposed algorithm outperforms the original CS algorithm. Liao et al. [11] proposed a dynamic adaptive CS algorithm with a crossover operator to fine-tune the parameters of CS and maximize its effectiveness.

Abed-alguni and Alkhateeb [2] proposed six variations of the CS algorithm based on six existing randomized selection schemes: greedy, proportional, exponential, ε-greedy, softmax, and reinforcement learning selection schemes. The proposed variations have been evaluated and compared using 20 well-known benchmark functions. The experimental results show that the proposed variations perform better than the original CS algorithm.

In this paper, we propose a novel algorithm that combines the CS and SA algorithms in one algorithm (CSA). CSA aims to enhance the performance of the CS algorithm by exploiting the exploration mechanism of SA. More precisely, we design and implement four variations of CSA. Then, we experimentally study the performance of the four variations of the proposed algorithm as well as the CS and SA algorithms based on 10 benchmark functions.

It is worth mentioning that Sheng et al. [24] proposed a variation of the CS algorithm that uses the selection method of the SA algorithm to enhance the local search process of the CS algorithm. That variation is similar to variation 2 of CSA, which, as shown in the experimental results of the current paper, provides only a minor enhancement to the CS algorithm.

The remainder of this paper is organized as follows: the research background (including CS and SA algorithms) is discussed in Section 2. The variations of the hybrid algorithm combining the CS and SA algorithms are introduced in Section 3. The results of the experiments and the implemented benchmark functions are presented in Section 4. Finally, Section 5 presents the conclusion of this paper.

2 Preliminaries

This section briefly introduces the notation used in this paper as well as the SA and CS algorithms.

2.1 Notations

First, we define all the notations used in this paper.

  • n is the number of stored solutions (nests) in a given population.

  • x = (x_1, …, x_m) is a vector of m decision variables representing a solution, where the possible value range of each decision variable is x_i ∈ [minValue_i, maxValue_i].

  • xi is the ith solution.

  • f(x) is the fitness value of solution x.

  • pi is the probability of selecting the ith solution xi .

Figure 1 shows an example of the solution representation scheme, in which x = (x_1, …, x_10) is a vector of 10 decision variables whose values are randomly generated from the domain of the sphere function, [−100, 100]. The objective value in this example is f(x) = 9029.745.

Figure 1: Example of Solution Representation Scheme.
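To make the representation concrete, the following minimal Java sketch (not taken from the paper; the class and variable names are illustrative) builds a random 10-variable solution over the sphere function's domain [−100, 100] and evaluates its objective value, mirroring the example of Figure 1.

import java.util.Random;

public class SolutionRepresentationExample {

    // Sphere objective: f(x) = sum of x_i^2 (global minimum 0 at the origin).
    static double sphere(double[] x) {
        double sum = 0.0;
        for (double xi : x) {
            sum += xi * xi;
        }
        return sum;
    }

    public static void main(String[] args) {
        Random rng = new Random();
        double[] x = new double[10];                  // m = 10 decision variables
        for (int i = 0; i < x.length; i++) {
            x[i] = -100.0 + 200.0 * rng.nextDouble(); // x_i in [minValue_i, maxValue_i] = [-100, 100]
        }
        System.out.println("f(x) = " + sphere(x));    // e.g. a value of the order of 9029.745 in Figure 1
    }
}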

2.2 SA Algorithm

SA is a probabilistic metaheuristic algorithm that can be used to search a large solution space for a global optimum. SA is inspired by the annealing process in metallurgy (i.e. a process of heating and cooling a material in a controlled manner to increase the size of its crystals and reduce their defects) and was formulated by Khachaturyan et al. [9] as a minimization approach. The efficiency of SA has been demonstrated on many problems such as the traveling salesman problem [4].

Basically, SA works as follows:

  1. Start with an initial state (solution) generated randomly for a given problem.

  2. Select a random successor state (new solution) within a determined range depending on the problem.

  3. Assess the new solution. If the new solution is better than the current solution, accept it (i.e. select it as the current solution). Otherwise, accept the new solution as the current solution with an acceptance probability that decreases over time.

  4. Adjust the temperature using either kinetic equations for density functions [9] or the stochastic sampling method independently described by Kirkpatrick et al. [10].

  5. Terminate searching once the stopping condition is satisfied; otherwise, go back to Step 2.

It should be noted that, for maximization problems, a new solution i is considered better than the current solution j if i has a greater fitness value than j. For minimization problems, the condition in Line 13 (Figure 2) should therefore be changed to (ΔE < 0).

Figure 2: SA Algorithm.
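The following compact Java sketch illustrates Steps 1-5 for a minimization problem. It assumes a geometric cooling schedule and a single randomly perturbed dimension per step; Figure 2 may differ in these details, so the sketch should be read as one possible instantiation rather than the exact procedure of the paper.

import java.util.Random;
import java.util.function.Function;

public class SimulatedAnnealingSketch {
    static final Random RNG = new Random();

    static double[] anneal(Function<double[], Double> f, double[] start,
                           double lb, double ub, double T, double coolingRate,
                           int maxIterations) {
        double[] current = start.clone();       // Step 1: random initial state supplied by the caller
        double currentValue = f.apply(current);

        for (int k = 0; k < maxIterations && T > 1e-9; k++) {
            // Step 2: random successor within the problem's range.
            double[] candidate = current.clone();
            int i = RNG.nextInt(candidate.length);
            candidate[i] = lb + (ub - lb) * RNG.nextDouble();

            // Step 3: accept improvements; accept worse moves with probability exp(-dE/T).
            double candidateValue = f.apply(candidate);
            double dE = candidateValue - currentValue;
            if (dE < 0 || RNG.nextDouble() < Math.exp(-dE / T)) {
                current = candidate;
                currentValue = candidateValue;
            }

            // Step 4: lower the temperature (geometric schedule, assumed for this sketch).
            T *= (1.0 - coolingRate);
        }
        return current; // Step 5: stop when the temperature or the iteration budget is exhausted.
    }

    public static void main(String[] args) {
        Function<double[], Double> sphere = x -> {
            double s = 0; for (double v : x) s += v * v; return s;
        };
        double[] start = new double[10];
        for (int i = 0; i < start.length; i++) start[i] = -100 + 200 * RNG.nextDouble();
        // T = 1000 and cooling rate 0.01 follow the SA settings reported in Section 4.1.
        double[] best = anneal(sphere, start, -100, 100, 1000.0, 0.01, 10000);
        System.out.println("f(best) = " + sphere.apply(best));
    }
}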

2.3 CS Algorithm

The CS algorithm is a population-based optimization algorithm that simulates the parasitic breeding behavior of some cuckoo species [30]. These cuckoo species lay their eggs in the nests of host birds. If the host bird discovers that its eggs have been replaced, it may either throw the foreign eggs out or abandon its nest.

The CS algorithm mimics the breeding and the Lévy flight behaviors of some bird species [7], [8], [19], [30] for solving different kinds of optimization problems. Basically, in the CS algorithm, each nest contains an egg representing a potential solution, whereas each cuckoo egg represents a new solution.

The main objective of the CS algorithm is to enhance the candidate solutions (eggs in the nests) by replacing them with possibly better solutions (cuckoos' eggs). Mainly, at each iteration of the CS algorithm (Figure 3), the following events take place (a code sketch after Figure 3 illustrates one such iteration):

  • Each cuckoo bird lays one egg (i.e. generates a new solution) and then places it in a randomly selected host nest.

  • The host nests that contain the best eggs (best solutions) will be selected for the next generation.

  • A portion pa∈[0,1] of the n nests (worst solutions) will be replaced with new nests.

Figure 3: CS Algorithm.
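As a complement to Figure 3, the following Java sketch condenses one CS generation into code. It is not the authors' implementation: the fitness function is a stand-in (the sphere function), and the Lévy flight of Eq. (1) in Section 2.5 is replaced here by a simple Gaussian step for brevity.

import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

public class CuckooSearchGeneration {
    static void oneGeneration(double[][] nests, double pa, double alpha,
                              double lb, double ub, Random rng) {
        int n = nests.length;

        // Each cuckoo lays an egg (new solution) and places it in a randomly chosen nest;
        // greedy replacement keeps the better of the two (minimization).
        for (int i = 0; i < n; i++) {
            double[] cuckoo = perturb(nests[i], alpha, lb, ub, rng);
            int j = rng.nextInt(n);
            if (fitness(cuckoo) < fitness(nests[j])) {
                nests[j] = cuckoo;
            }
        }

        // Rank the nests and replace the worst fraction pa with fresh random solutions.
        Arrays.sort(nests, Comparator.comparingDouble(CuckooSearchGeneration::fitness));
        int abandoned = (int) (pa * n);
        for (int i = n - abandoned; i < n; i++) {
            for (int d = 0; d < nests[i].length; d++) {
                nests[i][d] = lb + (ub - lb) * rng.nextDouble();
            }
        }
    }

    // Sphere objective as a stand-in fitness function.
    static double fitness(double[] x) {
        double s = 0; for (double v : x) s += v * v; return s;
    }

    // Simplified Gaussian stand-in for the Lévy flight of Eq. (1), clamped to [lb, ub].
    static double[] perturb(double[] x, double alpha, double lb, double ub, Random rng) {
        double[] next = x.clone();
        for (int d = 0; d < next.length; d++) {
            next[d] = Math.min(ub, Math.max(lb, next[d] + alpha * rng.nextGaussian()));
        }
        return next;
    }
}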

2.4 Initialization of the Algorithm

In CS, the number of host nests is denoted by n. Initially, the solutions in the n nests, denoted by x_i (i = 1, 2, …, n), are initialized with random values from the solution range of the objective function f(x). CS stops either when it reaches a predetermined maximum number of iterations or when it satisfies a stopping criterion based on f(x).

2.5 Update Rule

During each iteration t of CS, a candidate solution x_i^t is chosen randomly from the n nests (stored solutions). A new solution x_i^{t+1} for iteration t+1 is then calculated by applying a Lévy flight function as follows:

(1) x_i^{t+1} = x_i^t + α ⊕ Lévy(λ),

where α > 0 is the step size selected according to the scale of the problem under consideration and the symbol ⊕ denotes the entry-wise product. Exploring the potential solution space using Lévy flight is more efficient than using a standard random walk because the step length of the Lévy flight becomes longer as the CS algorithm progresses.

The Lévy flight is basically a random walk based on a random step length, which is extracted from a heavy tailed Lévy distribution with an infinite mean and variance [1], given as follows:

(2) Lévy ∼ u = L^{−λ},

where L is the step length and λ ∈ (1, 3) is a parameter related to the fractal dimension. It should be noted that Lévy flight generates a portion of the new solutions close to the current local optima, while a significant portion of the solutions is generated randomly far from them. This allows the CS algorithm to avoid getting stuck around local optima.
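The paper does not specify how the Lévy-distributed step is actually drawn, so the following sketch uses Mantegna's algorithm, a common choice in CS implementations, with the exponent fixed to an assumed λ = 1.5 (within the (1, 3) range above); the corresponding Mantegna scale constant is approximately 0.6966. The sketch implements the update of Eq. (1).

import java.util.Random;

public class LevyFlightUpdate {
    static final Random RNG = new Random();
    static final double LAMBDA = 1.5;      // assumed Lévy exponent, within the (1, 3) range of Eq. (2)
    static final double SIGMA_U = 0.6966;  // Mantegna scale constant for lambda = 1.5

    // One Lévy-distributed step length (Mantegna's algorithm).
    static double levyStep() {
        double u = RNG.nextGaussian() * SIGMA_U;
        double v = RNG.nextGaussian();
        return u / Math.pow(Math.abs(v), 1.0 / LAMBDA);
    }

    // Eq. (1): generate x^{t+1} from x^t with step size alpha, clamped to [lb, ub].
    static double[] levyFlight(double[] x, double alpha, double lb, double ub) {
        double[] next = new double[x.length];
        for (int j = 0; j < x.length; j++) {
            next[j] = Math.min(ub, Math.max(lb, x[j] + alpha * levyStep()));
        }
        return next;
    }

    public static void main(String[] args) {
        double[] x = {10, -5, 3, 7, -20, 0, 1, 50, -8, 2};
        double[] xNext = levyFlight(x, 1.0, -100, 100);
        System.out.println(java.util.Arrays.toString(xNext));
    }
}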

3 Hybrid Search Algorithm between CS and SA

In the CS algorithm (Lines 5-10), a cuckoo (solution) i and a nest j are selected randomly. The solution in nest j is then replaced by i only when i is better. In SA, however, there is a chance of replacing the current solution with a worse successor solution. This process allows SA to avoid getting stuck in a local optimum.

Therefore, we adopt this idea from SA to design four variations of CS by hybridizing CS and SA. In all variations (see Figure 4), we replace several steps (Lines 5-10) of the CS algorithm in Figure 3 with a call to the SA algorithm that retrieves a cuckoo (solution), as shown in Line 5 of Figure 4. The difference between the four variations lies in the arguments passed to the SA algorithm, particularly the last two arguments. The following summarizes the different calls to SA in the four variations (a code sketch after this list illustrates the four calls):

  • In the first variation, SA is called as follows:

    x_i ← SIMULATED-ANNEALING(problem, schedule, ∞, LB, UB)

    The first two arguments, which are common to the four variations, represent the problem to be solved using the CS algorithm and the scheduling mechanism of the temperature. The ∞ means that the number of iterations in SA is independent of the number of iterations of CS. Furthermore, the process of selecting a solution in SA is fully random and must stay within the range of the problem (i.e. the lower and upper bounds LB and UB).

  • In the second variation, the SA algorithm performs a single iteration (specified by the third parameter).

    x_i ← SIMULATED-ANNEALING(problem, schedule, 1, LB, UB)

    This means that the loop inside SA iterates only once, because SA is used by CS merely as a selection method.

  • In the third variation, maxIterations-t is passed to the SA algorithm as follows:

    x_i ← SIMULATED-ANNEALING(problem, schedule, maxIterations − t, LB, UB)

    This means that the maximum number of iterations in SA, which depends on the maximum number of iterations of CS (maxIterations) and the current iteration t of CS, decreases with each call to SA.

  • In the last variation, the range of the search space is passed to SA [i.e. LNeighbors(x_i^{t−1}) and UNeighbors(x_i^{t−1})] as follows:

    x_i^t ← SIMULATED-ANNEALING(problem, schedule, maxIterations − t, LNeighbors(x_i^{t−1}), UNeighbors(x_i^{t−1}))

    Intuitively, the range of the search space for the solutions in SA depends on the current best solution of CS and is calculated as a percentage around it (15% below and above in our experiments; see Section 4.1).
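The following Java sketch (illustrative, not the authors' code) makes the difference between the four calls explicit: only the last three arguments of SIMULATED-ANNEALING change across the variations. The Problem and Schedule interfaces are placeholders, the SA body itself is omitted (see the sketch in Section 2.2), and the 15% neighbourhood used for variation 4 follows the setting reported in Section 4.1.

public class CsaVariationCalls {

    // Placeholder mirroring SIMULATED-ANNEALING(problem, schedule, maxIter, LB, UB);
    // its body is omitted in this sketch.
    static double[] simulatedAnnealing(Problem problem, Schedule schedule,
                                       int maxIter, double[] lb, double[] ub) {
        throw new UnsupportedOperationException("SA body omitted in this sketch");
    }

    // Returns the cuckoo (solution) retrieved by SA in Line 5 of Figure 4 for the
    // requested variation. Only the last three SA arguments differ between variations.
    static double[] selectCuckoo(int variation, Problem problem, Schedule schedule,
                                 int t, int maxIterations, double[] xBest) {
        double[] lb = problem.lowerBounds();
        double[] ub = problem.upperBounds();
        switch (variation) {
            case 1: // SA iteration budget independent of CS ("infinity"), full range
                return simulatedAnnealing(problem, schedule, Integer.MAX_VALUE, lb, ub);
            case 2: // SA iterates exactly once, acting only as a selection method
                return simulatedAnnealing(problem, schedule, 1, lb, ub);
            case 3: // SA budget shrinks as CS progresses
                return simulatedAnnealing(problem, schedule, maxIterations - t, lb, ub);
            case 4: { // SA searches a neighbourhood around the current best CS solution
                double[] lNeighbors = new double[xBest.length];
                double[] uNeighbors = new double[xBest.length];
                for (int j = 0; j < xBest.length; j++) {
                    double margin = 0.15 * Math.abs(xBest[j]); // 15% below/above (Section 4.1)
                    lNeighbors[j] = Math.max(lb[j], xBest[j] - margin);
                    uNeighbors[j] = Math.min(ub[j], xBest[j] + margin);
                }
                return simulatedAnnealing(problem, schedule, maxIterations - t, lNeighbors, uNeighbors);
            }
            default:
                throw new IllegalArgumentException("unknown variation " + variation);
        }
    }

    // Minimal interfaces assumed by this sketch.
    interface Problem { double[] lowerBounds(); double[] upperBounds(); }
    interface Schedule { double temperature(int step); }
}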

Figure 4: Hybrid CSA Algorithm.

Figure 5 shows the flow chart of the main steps of the CSA algorithm, which are described as follows:

  • Step 1: Initialize the problem and the CSA parameters. In this step, the optimization problem is modeled as min(f(x)), the lower and upper bounds of each decision variable are initialized, and the parameters of the CS and SA algorithms are set.

  • Step 2: Generate the initial population of n nests (solutions). For instance, three solutions with 10 decision variables each are shown in Figure 6.

  • Step 3: In this step, the current population is improved by calling the SA algorithm to enhance a randomly selected solution from the population (e.g. see the modified solutions in Figure 6).

  • Step 4: Update the population of CSA using the CS procedures (abandon and ranking). In Figure 6, the worst solution (the second one) was selected and replaced by a new solution that is generated randomly.

Figure 5: Flow Chart of Hybrid CSA Algorithm.

Figure 6: Numerical Example of CSA.

4 Experiments

4.1 Setup

The performance of the four variations of the proposed hybrid algorithm was compared to that of the CS and SA algorithms using 10 well-known test functions (Tables 1-8). The compared algorithms are referred to in this section as follows:

  • CS algorithm,

  • Variation 1 of the CSA algorithm (CSA1),

  • Variation 2 of the CSA algorithm (CSA2),

  • Variation 3 of the CSA algorithm (CSA3),

  • Variation 4 of the CSA algorithm (CSA4), and

  • SA algorithm.

Table 1:

Experimental Results of the Algorithms for each of the 10 Benchmark Functions.

Function name | CS | CSA1 | CSA2 | CSA3 | CSA4 | SA
Sphere function2.33E−043.31E−101.09E−045.96E−112.12E−104.84E−02
4.16E−047.69E−103.18E−049.65E−112.67E−101.82E−01
2.33E−043.31E−101.09E−045.96E−112.12E−104.84E−02
Easom’s test function−8.14E−01−9.95E−01−7.72E−01−9.95E−019.96E−01−7.51E−02
1.77E−011.86E−031.63E−011.79E−036.07E−041.97E−01
1.86E−014.78E−032.28E−015.16E−033.63E−039.25E−01
Step function0.00E+000.00E+000.00E+000.00E+000.00E+005.09E+02
0.00E+000.00E+000.00E+000.00E+000.00E+002.77E+03
0.00E+000.00E+000.00E+000.00E+000.00E+005.09E+02
Schwefel’s problem1.28E−031.83E−066.93E−021.27E−061.62E−074.20E−02
1.25E−031.86E−062.00E−011.00E−061.53E−071.57E−01
1.28E−031.83E−066.93E−021.27E−061.62E−074.20E−02
Rastrigin’s test function2.33E−043.31E−101.09E−045.96E−112.12E−104.84E−02
4.16E−047.69E−103.18E−049.65E−112.67E−101.82E−01
2.33E−043.31E−101.09E−045.96E−112.12E−104.84E−02
Rotated hyperellipsoid function4.80E−021.52E−077.67E−027.00E−081.14E−091.44E+04
9.77E−023.13E−072.68E−011.31E−072.54E−093.41E+04
4.80E−021.52E−077.67E−027.00E−081.14E−091.44E+04
Beale’s function3.01E−023.03E−023.02E−023.01E−023.03E−029.30E−01
4.60E−046.66E−045.44E−042.88E−046.68E−042.65E+00
3.01E−023.03E−023.02E−023.01E−023.03E−021.16E−04
Shifted sphere function−4.50E+024.50E+02−4.50E+02−4.50E+02−4.50E+021.16E+04
5.91E−021.98E−025.29E−023.83E−025.82E−022.77E+03
7.00E−024.42E−034.05E−023.57E−025.00E−021.21E+04
Shifted Schwefel’s problem−4.50E+02−4.50E+02−4.50E+02−4.50E+024.50E+023.49E+04
1.09E−062.02E−066.59E−075.80E−073.40E−071.38E+04
6.77E−071.11E−065.40E−074.36E−072.53E−073.54E+04
Booth’s function9.69E−031.04E−027.70E−031.05E−021.01E−021.29E+01
8.43E−039.27E−039.84E−038.53E−038.23E−034.68E+00
9.69E−031.04E−027.70E−031.05E−021.01E−021.29E+01
  1. The number of decision variables is 10, the number of iterations is 10,000, and the number of runs is 100.

Table 2:

Experimental Results of the Algorithms for each of the 10 Benchmark Functions.

Function name | CS | CSA1 | CSA2 | CSA3 | CSA4 | SA
Sphere function6.04E−071.43E−122.85E−071.37E−122.62E−142.54E−04
1.12E−062.95E−124.59E−071.84E−124.29E−145.91E−04
6.04E−071.43E−122.85E−071.37E−122.62E−142.54E−04
Easom’s test function−8.43E−01−9.95E−01−8.09E−01−9.95E−019.96E−01−8.29E−02
1.23E−012.22E−031.59E−011.81E−038.81E−042.02E−01
1.57E−015.09E−031.91E−015.27E−033.75E−039.17E−01
Step function0.00E+000.00E+000.00E+000.00E+000.00E+001.16E+01
0.00E+000.00E+000.00E+000.00E+000.00E+005.91E+01
0.00E+000.00E+000.00E+000.00E+000.00E+001.16E+01
Schwefel’s problem1.21E−031.45E−061.91E−021.74E−061.75E−076.46E−02
1.16E−032.05E−062.37E−021.49E−061.35E−072.78E−01
1.21E−031.45E−061.91E−021.74E−061.75E−076.46E−02
Rastrigin’s test function7.40E−056.76E−105.89E−052.36E−101.44E−108.59E−03
1.09E−041.60E−098.71E−057.13E−102.12E−101.17E−02
7.40E−056.76E−105.89E−052.36E−101.44E−108.59E−03
Rotated hyperellipsoid function1.15E+007.73E−061.12E+003.71E−082.29E+051.16E−04
1.68E+001.71E−052.00E+007.99E−081.25E+061.16E−04
1.15E+007.73E−061.12E+003.71E−082.29E+051.16E−04
Beale’s function9.92E+143.02E−023.02E−023.00E−023.02E−029.37E−01
5.43E+155.93E−045.36E−042.80E−045.94E−042.64E+00
9.92E+143.02E−023.02E−023.00E−023.02E−029.37E−01
Shifted sphere function1.32E+041.23E+031.04E+044.45E+02−4.40E+022.16E+04
1.32E+044.09E−044.09E−044.09E−044.09E−041.90E+03
1.37E+041.68E+031.09E+045.00E+009.50E+002.21E+04
Shifted Schwefel’s problem−4.50E+02−4.50E+02−4.50E+02−4.50E+024.50E+025.59E+05
5.21E−072.11E−064.66E−076.94E−075.77E−075.83E+04
3.50E−076.55E−073.24E−074.12E−072.30E−075.59E+05
Booth’s function9.45E−031.05E−029.99E−037.94E−039.86E−031.46E+01
7.71E−031.01E−021.22E−028.69E−039.94E−034.08E+00
9.45E−031.05E−029.99E−037.94E−039.86E−031.46E+01
  1. The number of decision variables is 30, the number of iterations is 10,000, and the number of runs is 100.

Table 3:

Experimental Results of the Algorithms for each of the 10 Benchmark Functions.

Function name | CS | CSA1 | CSA2 | CSA3 | CSA4 | SA
Sphere function1.79E−061.57E−124.01E−071.40E−121.31E−124.00E−03
4.04E−061.68E−127.63E−072.23E−121.96E−121.96E−02
1.79E−061.57E−124.01E−071.40E−121.31E−124.00E−03
Easom’s test function−8.24E−01−9.95E−01−7.72E−01−9.95E−019.96E−01−9.86E−02
1.77E−011.86E−031.63E−011.79E−036.07E−042.28E−01
1.76E−014.78E−032.28E−015.16E−033.63E−039.01E−01
Step function0.00E+000.00E+000.00E+000.00E+000.00E+001.83E+01
0.00E+000.00E+000.00E+000.00E+000.00E+007.67E+01
0.00E+000.00E+000.00E+000.00E+000.00E+001.83E+01
Schwefel’s problem1.34E−031.66E−067.71E−031.37E−061.63E−072.11E−02
1.36E−031.34E−066.69E−031.31E−061.26E−076.41E−02
1.34E−031.66E−067.71E−031.37E−061.63E−072.11E−02
Rastrigin’s test function1.66E−044.93E−101.86E−042.46E−104.96E−101.84E−02
2.73E−046.67E−102.53E−049.33E−123.91E−102.08E−02
1.66E−044.93E−101.86E−042.46E−104.96E−101.84E−02
Rotated hyperellipsoid function7.58E+015.54E−059.66E+018.05E−062.69E−076.80E+03
1.26E+025.54E−058.85E+012.67E−062.19E−077.16E+03
7.58E+015.54E−059.66E+018.05E−062.69E−076.80E+03
Beale’s function3.01E−023.02E−023.02E−023.00E−023.02E−027.87E−01
3.52E−045.93E−045.04E−042.56E−046.49E−042.08E+00
3.01E−023.02E−023.02E−023.00E−023.02E−027.87E−01
Shifted sphere function1.32E+05−4.04E+022.31E+04−4.07E+024.20E+021.86E+05
5.58E+031.41E+024.99E+041.32E+021.14E+021.04E+04
1.33E+054.63E+012.35E+044.34E+013.00E+011.87E+05
Shifted Schwefel’s problem8.73E+054.08E+048.47E+052.55E+045.10E+037.91E+06
2.51E+063.48E+032.43E+062.17E+034.35E+027.93E+05
8.74E+054.12E+048.47E+052.59E+045.55E+037.91E+06
Booth’s function6.80E−031.03E−021.02E−021.22E−021.27E−021.35E+01
7.45E−031.14E−021.09E−021.49E−021.23E−024.56E+00
6.80E−031.03E−021.02E−021.22E−021.27E−021.35E+01
  1. The number of decision variables is 100, the number of iterations is 10,000, and the number of runs is 100.

Table 4:

Experimental Results of the Algorithms for each of the 10 Benchmark Functions.

Function name | CS | CSA1 | CSA2 | CSA3 | CSA4 | SA
Sphere function8.61E−061.30E−124.49E−072.29E−122.16E−148.90E−05
4.22E−051.96E−127.19E−073.05E−121.97E−148.09E−05
8.61E−061.30E−124.49E−072.29E−122.16E−148.90E−05
Easom’s test function−1.43E−01−9.95E−01−8.31E−01−9.93E−019.97E−01−2.02E−04
3.28E−011.21E−032.60E−011.73E−033.03E−042.65E−04
8.57E−014.71E−031.69E−017.28E−033.25E−031.00E+00
Step function0.00E+000.00E+000.00E+000.00E+000.00E+003.17E+00
0.00E+000.00E+000.00E+000.00E+000.00E+003.17E+00
0.00E+000.00E+000.00E+000.00E+000.00E+001.83E+01
Schwefel’s problem1.57E−031.76E−062.19E−031.23E−063.20E−071.75E−02
1.35E−031.34E−061.24E−038.12E−074.06E−081.96E−02
1.57E−031.76E−062.19E−031.23E−063.20E−071.75E−02
Rastrigin’s test function9.90E−042.13E−104.00E−041.73E−102.09E−102.76E−02
2.75E−041.77E−103.13E−041.36E−101.49E−103.81E−02
9.90E−042.13E−104.00E−041.73E−102.09E−102.76E−02
Rotated hyperellipsoid function5.17E+044.78E+034.53E+044.09E+032.93E+034.79E+07
4.63E+041.02E+044.42E+047.23E+033.65E+032.07E+08
5.17E+044.78E+034.53E+044.09E+032.93E+034.79E+07
Beale’s function3.03E−023.03E−023.01E−023.03E−023.01E−022.68E−01
6.51E−044.78E−044.43E−045.69E−042.73E−044.91E−02
3.03E−021.16E−043.01E−023.03E−023.01E−022.68E−01
Shifted sphere function1.32E+051.19E+039.23E+041.12E+031.15E+032.15E+06
6.00E+035.40E+014.20E+035.11E+015.21E+015.77E+05
1.32E+051.64E+039.28E+041.57E+031.60E+032.15E+06
Shifted Schwefel’s problem5.88E+067.23E+055.39E+065.53E+056.95E+041.01E+09
1.81E+062.22E+051.66E+061.70E+052.14E+041.95E+06
5.88E+067.24E+055.39E+065.54E+057.00E+041.01E+09
Booth’s function6.64E−031.13E−029.33E−031.24E−021.37E−021.40E+01
6.89E−031.19E−021.10E−021.44E−021.27E−023.97E+00
6.64E−031.13E−029.33E−031.24E−021.37E−021.40E+01
  1. The number of decision variables is 1000, the number of iterations is 10,000, and the number of runs is 100.

Table 5:

Convergence Results of the Algorithms for each of the 10 Benchmark Functions.

Function name | CS | CSA1 | CSA2 | CSA3 | CSA4 | SA
Sphere function | 2.13E+04 | 6.05E+02 | 1.00E+04 | 6.78E+02 | 1.28E+02 | *
 | 2.89E+03 | 5.02E+02 | 0.00E+00 | 5.13E+02 | 1.33E+02 | *
Easom's test function | 7.02E+03 | 8.16E+01 | 4.64E+03 | 1.12E+02 | 8.14E+01 | 9.17E+02
 | 4.36E+03 | 1.00E+02 | 4.57E+03 | 8.64E+01 | 3.85E+01 | 0.00E+00
Step function | 2.22E+02 | 1.37E+00 | 1.79E+02 | 1.40E+00 | 1.50E+00 | 5.20E+02
 | 2.25E+02 | 6.15E−01 | 1.47E+02 | 7.70E−01 | 8.20E−01 | 3.28E+01
Schwefel's problem | 9.14E+03 | 1.42E+01 | 1.38E+01 | 1.80E+01 | 2.60E+00 | *
 | 1.91E+03 | 1.44E+01 | 1.16E+01 | 1.87E+01 | 1.52E+00 | *
Rastrigin's test function | 1.00E+04 | 2.39E+03 | 1.00E+04 | 1.41E+03 | 2.83E+03 | *
 | 0.00E+00 | 7.71E+02 | 0.00E+00 | 5.51E+01 | 2.68E+03 | *
Rotated hyperellipsoid function | 1.00E+04 | 2.67E+03 | 6.67E+03 | 4.37E+03 | 8.47E+01 | *
 | 0.00E+00 | 3.78E+03 | 5.77E+03 | 5.12E+03 | 8.16E+01 | *
Beale's function | 1.00E+04 | 9.56E+03 | 4.38E+03 | 4.45E+03 | 8.98E+03 | *
 | 0.00E+00 | 1.90E+03 | 3.51E+03 | 4.90E+03 | 1.77E+03 | *
Shifted sphere function | 1.62E+05 | 6.90E+04 | 1.52E+05 | 6.64E+04 | 6.37E+04 | *
 | 3.29E+04 | 2.36E+04 | 4.51E+04 | 2.28E+04 | 2.18E+04 | *
Shifted Schwefel's problem | 1.41E+04 | 2.87E+03 | 1.35E+04 | 6.71E+03 | 2.94E+03 | *
 | 2.35E+03 | 3.50E+03 | 2.25E+03 | 2.23E+03 | 3.03E+03 | *
Booth's function | 7.02E+03 | 8.16E+01 | 4.40E+03 | 1.12E+02 | 8.14E+01 | 6.32E+03
 | 4.36E+03 | 1.00E+02 | 4.27E+03 | 8.64E+01 | 3.85E+01 | 4.39E+03
Table 6:

Comparison of the Running Time of all the Variations of CSA to the Running Time of CS and SA.

Function name | CS | CSA1 | CSA2 | CSA3 | CSA4 | SA
Sphere function | 1.20E+03 | 1.05E+04 | 1.11E+03 | 1.19E+04 | 1.20E+04 | 1.60E+01
Easom's test function | 9.68E+02 | 2.44E+04 | 7.96E+02 | 2.88E+04 | 3.15E+04 | 1.50E+01
Step function | 1.22E+03 | 4.63E+04 | 1.40E+03 | 6.06E+04 | 5.99E+04 | 2.80E+01
Schwefel's problem | 6.47E+02 | 1.65E+04 | 1.50E+01 | 1.93E+04 | 2.01E+04 | 1.20E+01
Rastrigin's test function | 1.74E+03 | 2.99E+05 | 2.72E+03 | 3.51E+05 | 3.52E+05 | 1.89E+02
Rotated hyperellipsoid function | 1.02E+03 | 9.10E+03 | 7.51E+02 | 1.01E+04 | 9.55E+03 | 1.60E+01
Beale's function | 9.08E+02 | 3.56E+03 | 7.04E+02 | 4.42E+03 | 4.05E+03 | 2.00E+00
Shifted sphere function | 1.21E+04 | 3.52E+06 | 1.81E+04 | 3.81E+06 | 3.21E+06 | 1.44E+04
Shifted Schwefel's problem | 1.05E+03 | 3.23E+05 | 1.68E+03 | 3.66E+05 | 3.66E+05 | 4.70E+01
Booth's function | 1.11E+03 | 3.48E+03 | 6.24E+02 | 3.84E+03 | 3.57E+03 | 1.20E+01
  1. The number of decision variables is 100, the number of iterations is 10,000, and the number of runs is 100.

Table 7:

Comparison of the Running Time of all the Variations of CSA to the Running Time of CS and SA based on the Convergence to the Best Value of CS.

Function name | CS | CSA1 | CSA2 | CSA3 | CSA4 | SA
Sphere function | 1.29E+03 | 1.70E+01 | 7.36E+02 | 2.80E+01 | 9.00E+00 | 5.43E+03
Easom's test function | 9.05E+02 | 4.70E+01 | 1.50E+01 | 1.25E+02 | 7.80E+01 | 9.56E+02
Step function | 1.14E+03 | 1.60E+01 | 1.40E+01 | 3.10E+01 | 3.20E+01 | 1.12E+04
Schwefel's problem | 7.95E+02 | 6.70E+01 | 3.00E+01 | 1.00E+02 | 1.00E+01 | 5.10E+01
Rastrigin's test function | 1.89E+03 | 1.73E+02 | 1.48E+03 | 3.12E+02 | 2.51E+02 | 2.73E+03
Rotated hyperellipsoid function | 8.02E+02 | 5.52E+02 | 1.20E+03 | 7.11E+02 | 1.85E+02 | 1.51E+02
Beale's function | 1.27E+03 | 3.68E+03 | 1.16E+03 | 3.92E+03 | 4.21E+03 | 9.00E+00
Shifted sphere function | 3.30E+01 | 1.40E+01 | 2.00E+00 | 1.80E+01 | 1.90E+01 | 8.20E+01
Shifted Schwefel's problem | 1.02E+03 | 3.25E+05 | 1.04E+04 | 3.24E+05 | 3.11E+05 | *
Booth's function | 8.10E+02 | 3.47E+03 | 1.15E+03 | 6.81E+02 | 3.53E+03 | 1.92E+04
Table 8:

Comparison of the Running Time of all the Variations of CSA to the Running Time of CS and SA based on the Convergence to the Best Value of CSA4.

Function name | CS | CSA1 | CSA2 | CSA3 | CSA4 | SA
Sphere function | 1.52E+03 | 7.50E+01 | 6.87E+02 | 7.80E+01 | 3.30E+01 | 2.52E+03
Easom's test function | 2.22E+02 | 2.10E+01 | 3.00E+00 | 1.02E+02 | 3.32E+02 | 9.39E+02
Step function | 1.12E+03 | 5.10E+01 | 5.60E+01 | 3.30E+01 | 2.90E+01 | 3.13E+03
Schwefel's problem | 1.40E+04 | 1.65E+04 | 2.18E+04 | 1.92E+04 | 1.94E+04 | 9.00E+04
Rastrigin's test function | * | 2.21E+05 | * | 1.45E+05 | 1.33E+05 | *
Rotated hyperellipsoid function | * | 2.61E+02 | 2.22E+05 | 6.93E+03 | 1.20E+01 | *
Beale's function | * | 4.02E+03 | * | 4.66E+03 | 3.03E+03 | *
Shifted sphere function | * | 8.17E+04 | * | 6.12E+04 | 3.38E+04 | *
Shifted Schwefel's problem | * | 7.34E+05 | * | 5.86E+05 | 4.58E+05 | *
Booth's function | 1.79E+03 | 2.81E+03 | 7.85E+02 | 1.69E+03 | 3.87E+03 | *

All the variations of CSA used the same parameter settings as suggested by Yang and Deb [30]: the population size n = 15, the step size of the Lévy flight β = 1, and the fraction of abandoned nests Pa = 0.25. For the SA algorithm, the temperature is T = 1000 and the cooling rate is c = 0.01.

In all experiments, each algorithm was executed 100 times, as recommended by Yang and Deb [29], [30], [31] for similar optimization problems, to provide a meaningful statistical analysis of the proposed variations of CS. For CSA4, the range of the search space (Line 5 in Figure 4) extends 15% below and above the current best solution.

The experiments were conducted on an Intel Core i7-4510U (1.8 GHz) CPU with 16 GB RAM running 64-bit Windows. All the algorithms were implemented in the Java programming language. Table 9 summarizes the benchmark functions, which have previously been used to evaluate the CS algorithm as well as other well-known optimization algorithms [30], [31]. The dimensions of the benchmark functions are as follows:

  • 2 for Easom’s, Beale’s, and Booth’s test functions and

  • 10, 30, 100, and 1000 for the rest of the functions.

Table 9:

Benchmark Functions used to Evaluate the Algorithms.

Function name | Expression | Search range | Optimum value | Category [22]
Sphere function [21] | f1(x) = Σ_{i=1}^{N} x_i^2 | x_i ∈ [−100, 100] | min(f1) = f(0, …, 0) = 0 | Unimodal
Easom's test function | f2(x, y) = −cos(x)cos(y)exp[−(x − π)^2 − (y − π)^2] | (x, y) ∈ [−100, 100] × [−100, 100] | min(f2) = f(π, π) = −1 | Unimodal
Step function [21] | f3(x) = Σ_{i=1}^{N} (⌊x_i + 0.5⌋)^2 | x_i ∈ [−100, 100] | min(f3) = f(0, …, 0) = 0 | Discontinuous unimodal
Schwefel's problem 2.22 | f4(x) = Σ_{i=1}^{N} |x_i| + Π_{i=1}^{N} |x_i| | x_i ∈ [−10, 10] | min(f4) = f(0, …, 0) = 0 | Unimodal
Rastrigin's test function [27] | f5(x) = Σ_{i=1}^{N} (x_i^2 − 10cos(2πx_i) + 10) | x_i ∈ [−5.12, 5.12] | min(f5) = f(0, …, 0) = 0 | Multimodal
Rotated hyperellipsoid function | f6(x) = Σ_{i=1}^{N} (Σ_{j=1}^{i} x_j)^2 | x_i ∈ [−100, 100] | min(f6) = f(0, …, 0) = 0 | Unimodal
Beale's function | f7(x, y) = (1.5 − x + xy)^2 + (2.25 − x + xy^2)^2 + (2.625 − x + xy^3)^2 | x, y ∈ [−4.5, 4.5] | min(f7) = f(3, 0.5) = 0 | Multimodal
Shifted sphere function [27] | f8(x) = Σ_{i=1}^{N} z_i^2 + f_bias1, where z = x − o | x_i ∈ [−100, 100] | min(f8) = f(o_1, …, o_N) = f_bias1 = −450 | Unimodal
Shifted Schwefel's problem 1.2 [27] | f9(x) = Σ_{i=1}^{N} (Σ_{j=1}^{i} z_j)^2 + f_bias2, where z = x − o | x_i ∈ [−100, 100] | min(f9) = f(o_1, …, o_N) = f_bias2 = −450 | Unimodal
Booth's function | f10(x, y) = (x + 2y − 7)^2 + (2x + y − 5)^2 | x, y ∈ [−10, 10] | min(f10) = f(1, 3) = 0 | Unimodal
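For reference, the following sketch implements three of the functions in Table 9 directly from their closed forms (the remaining functions follow the same pattern); it is provided for illustration only and is not part of the evaluated implementations.

public class BenchmarkFunctions {

    // f1: sphere function, global minimum 0 at the origin, x_i in [-100, 100].
    static double sphere(double[] x) {
        double s = 0; for (double v : x) s += v * v; return s;
    }

    // f5: Rastrigin's function, global minimum 0 at the origin, x_i in [-5.12, 5.12].
    static double rastrigin(double[] x) {
        double s = 0;
        for (double v : x) s += v * v - 10 * Math.cos(2 * Math.PI * v) + 10;
        return s;
    }

    // f10: Booth's function, global minimum 0 at (1, 3), x, y in [-10, 10].
    static double booth(double x, double y) {
        double a = x + 2 * y - 7;
        double b = 2 * x + y - 5;
        return a * a + b * b;
    }

    public static void main(String[] args) {
        System.out.println(sphere(new double[]{0, 0, 0}));  // 0.0
        System.out.println(rastrigin(new double[]{0, 0}));  // 0.0
        System.out.println(booth(1, 3));                    // 0.0
    }
}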

4.2 Comparison of CS, SA, and the Proposed Four Variations of CSA: Results and Discussion

Tables 1-4 report the experimental results for each of the 10 benchmark functions described in Table 9. The results are given in the following format: average of 100 independent runs (row 1), standard deviation (row 2), and error value (row 3) of the best solutions. The error value is the distance from the global minimal solution to the average of the best solutions [6].
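The following small sketch shows how these three statistics can be computed from the best objective values of the independent runs, taking the error value, as stated above, to be the distance between the global minimum and the average best value; the data in main are hypothetical.

public class RunStatistics {
    // Returns {average, standard deviation, error value} over the best values of all runs.
    static double[] summarize(double[] bestPerRun, double globalOptimum) {
        int r = bestPerRun.length;
        double mean = 0;
        for (double v : bestPerRun) mean += v;
        mean /= r;

        double var = 0;
        for (double v : bestPerRun) var += (v - mean) * (v - mean);
        double std = Math.sqrt(var / r);

        double error = Math.abs(mean - globalOptimum); // rows 1-3 of the result tables
        return new double[]{mean, std, error};
    }

    public static void main(String[] args) {
        double[] best = {2.1e-4, 2.5e-4, 2.3e-4};  // hypothetical best values from 3 runs
        double[] stats = summarize(best, 0.0);      // the sphere function's global minimum is 0
        System.out.printf("avg=%.2e std=%.2e error=%.2e%n", stats[0], stats[1], stats[2]);
    }
}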

The main goal of the experiments is to compare the performance of the proposed variations of CSA to that of CS and SA. For the 10D functions shown in Table 1, all the variations of CSA (CSA1-CSA4) perform better than the CS and SA algorithms. It can also be noted that CSA4 outperforms all the other algorithms on 4 of the 10 test functions. The results of CSA3 are slightly worse than, but close to, those of CSA4 for most of the test functions. In contrast, the CSA2 variation provides only a minor enhancement over the original CS algorithm. This can be explained by the fact that CSA2, unlike the other variations of CSA, does not search deeply for better solutions within the whole range of potential solutions. In other words, CSA2 uses the SA algorithm as a selection method rather than as an exploration algorithm.

Table 2 shows the results for the 30D test functions. The CSA4 and CSA3 variations again outperform all the other algorithms, on 5 of 10 and 4 of 10 functions, respectively. In addition, the CSA2 algorithm performs slightly better than the CS and SA algorithms, although the other variations of CSA perform much better than CSA2. These observations confirm those obtained from Table 1.

As in Tables 1 and 2, the results reported in Table 3 for the 100D problems indicate that all the variations of CSA outperform the CS and SA algorithms. CSA4 is clearly the best performing algorithm (superior performance on 6 of 10 functions) and CSA2 is the worst performing variation of CSA. The same observations noted for Tables 1-3 apply to Table 4 (functions with 1000 dimensions).

To sum up, the overall results of the experiments indicate that all the variations of CSA perform better than the CS and SA algorithms. In general, CSA4 is the most efficient variation of CSA, whereas CSA2 is the least efficient. This is likely because CSA4 performs a local search within the neighborhood of the current solutions, whereas CSA2 performs only a shallow search for better solutions. In addition, all the population-based algorithms perform better than SA (a single-solution search algorithm).

Table 5 summarizes the convergence results of the tested algorithms on the 10 test functions with dimension 10. The results are given in the following format: average number of iterations (row 1) and standard deviation of the number of iterations (row 2). For each algorithm, Table 5 reports the number of iterations after which the variation of the value of the test function is less than a given threshold ε ≤ 10^−10. As shown in Table 5, all the variations of CSA perform much better than CS and SA; that is, they converge to solutions faster than CS and SA. In addition, CSA3 and CSA4 are the fastest converging algorithms, whereas SA is the slowest.

4.3 Runtime Performance Comparison of CS, SA, and the Proposed Four Variations of CSA

Tables 6-8 compare the running times of CS, SA, and the proposed four variations of CSA for each of the 10 benchmark functions described in Table 9 (the number of decision variables is 100). The results are reported in milliseconds and represent the average of 100 independent runs.

The values in Table 6 were recorded for each algorithm after 10,000 iterations and represent the running time required to obtain the results in Table 3. As shown in the table, SA, followed by CS, is faster than the other algorithms. However, SA and CS do not provide better objective values than the other algorithms (as shown in Table 3 for the same dimension).

Table 7 shows the running times of all the variations of CSA as well as CS and SA, recorded when each algorithm converged to the best objective value of CS (see Table 3). Similarly, Table 8 shows the running times recorded when each algorithm converged to the best objective value of CSA4 (see Table 3).

As shown in Table 7, all the proposed variations converge to the target solution faster than CS and SA for all functions except one. Also, as shown in Table 8, all the proposed variations (except CSA2) converge to the target solution faster than CS and SA for all functions except one. It should be noted that some algorithms did not converge to the target solution within a reasonable time for some functions; these cases are denoted by *.

4.4 Comparison of CSA3, CSA4, and Other Well-Known Optimization Algorithms

The performance of the two most successful variations of CSA according to Tables 1-8 was compared to that of the particle swarm optimization (PSO) algorithm [3], the bat algorithm (BA) [1], [32], the Boltzmann CS algorithm (BCS), and the ε-greedy CS algorithm (EGCS) [2], as shown in Tables 10 and 11. The parameters of the algorithms in Tables 10 and 11 were set as follows:

  1. As recommended by Abed-alguni et al. [1], [3], the weight parameters in PSO were W=0, C1=C2=1.

  2. For the BA algorithm, the discount parameter of the frequencies β=0.5, the loudness A=1, and the pulse rate r=0 for each candidate solution.

  3. For EGCS, the exploration parameter ε =0.1 as suggested by Abed-alguni and Alkhateeb [2].

  4. For BCS, the temperature T=0.1 as suggested by Abed-alguni and Alkhateeb [2].

Table 10:

Comparison Results of CSA3 and CSA4 to Four Other Optimization Algorithms.

Function name | CSA3 | CSA4 | BA | BCS | EGCS | PSO
Sphere function1.19E−122.00E−141.30E−072.24E−074.73E−071.47E−06
1.85E−123.07E−141.49E−074.19E−079.60E−073.54E−06
1.19E−122.00E−141.30E−072.24E−074.73E−071.47E−06
Easom’s test function−9.95E−019.96E−01−9.82E−50−8.86E−01−8.47E−01−4.19E−12
1.96E−033.56E−033.26E−498.97E−021.15E−012.30E−11
5.19E−034.30E−031.00E+001.14E−011.53E−011.00E+00
Step function0.00E+000.00E+000.00E+000.00E+000.00E+000.00E+00
0.00E+000.00E+000.00E+000.00E+000.00E+000.00E+00
0.00E+000.00E+000.00E+000.00E+000.00E+000.00E+00
Schwefel’s problem1.39E−061.49E−070.00E+006.63E−040.00E+002.25E−03
1.42E−061.22E−070.00E+004.41E−040.00E+003.06E−03
1.39E−061.49E−070.00E+006.63E−040.00E+002.25E−03
Rastrigin’s test function8.58E−111.24E−102.52E−052.28E−051.99E−051.19E−04
1.34E−102.09E−102.99E−053.79E−052.44E−052.90E−04
8.58E−111.24E−102.52E−052.28E−051.99E−051.19E−04
Rotated hyperellipsoid function3.00E−023.02E−029.93E−012.16E−052.01E−053.12E+00
2.68E−046.14E−041.55E+003.72E−052.38E−058.89E+00
3.00E−023.02E−029.93E−012.16E−052.01E−053.12E+00
Beale’s function3.00E−023.02E−028.35E−023.02E−023.03E−023.02E−02
2.68E−046.14E−042.07E−014.03E−046.07E−045.87E−04
3.00E−023.02E−028.35E−023.02E−023.03E−023.02E−02
Shifted sphere function4.45E+02−4.40E+021.99E+042.20E+042.24E+041.96E+04
2.74E+013.65E+017.12E+031.78E+031.64E+038.16E+03
5.00E+009.50E+002.04E+042.24E+042.29E+042.01E+04
Shifted Schwefel’s problem−4.50E+02−4.50E+02−4.50E+02−4.50E+024.50E+02−4.50E+02
1.03E−051.02E−052.25E−016.34E−073.46E−072.11E−05
2.28E−062.07E−065.63E−023.39E−072.49E−074.17E−06
Booth’s function7.94E−039.86E−032.20E−026.34E−031.26E−021.49E−02
8.69E−039.94E−034.38E−026.44E−031.35E−022.72E−02
7.94E−039.86E−032.20E−026.34E−031.26E−021.49E−02
  1. The number of decision variables is 30, the number of iterations is 10,000, and the number of runs is 100.

Table 11:

Comparison Results of CSA3 and CSA4 to Four Other Optimization Algorithms.

Function name | CSA3 | CSA4 | BA | BCS | EGCS | PSO
Sphere function1.40E−121.31E−123.21E−074.13E−073.70E−071.12E−05
2.23E−121.96E−124.27E−078.03E−078.34E−073.41E−05
1.40E−121.31E−123.21E−074.13E−073.70E−071.12E−05
Easom’s test function−9.95E−019.96E−01−1.56E−32−8.47E−01−7.81E−011.91E−22
1.79E−036.07E−045.94E−329.20E−022.00E−016.34E−22
5.16E−033.63E−031.00E+001.53E−012.19E−011.00E+00
Step function0.00E+000.00E+000.00E+000.00E+000.00E+000.00E+00
0.00E+000.00E+000.00E+000.00E+000.00E+000.00E+00
0.00E+000.00E+000.00E+000.00E+000.00E+000.00E+00
Schwefel’s problem1.37E−061.63E−077.26E−044.92E−040.00E+002.18E−03
1.31E−061.26E−075.52E−043.91E−040.00E+004.47E−03
1.37E−061.63E−077.26E−044.92E−040.00E+002.18E−03
Rastrigin’s test function2.46E−104.96E−105.37E−058.71E−053.15E−056.72E−04
9.33E−123.91E−101.21E−041.89E−048.35E−051.54E−03
2.46E−104.96E−105.37E−058.71E−053.15E−056.72E−04
Rotated hyperellipsoid function8.05E−062.69E−072.64E+011.79E+017.75E+013.12E+01
2.67E−062.19E−073.19E+012.45E+011.63E+023.08E+01
8.05E−062.69E−072.64E+011.79E+017.75E+013.12E+01
Beale’s function3.00E−022.95E−023.21E−023.22E−023.47E−028.44E−02
2.56E−043.69E−031.09E−027.93E−039.09E−031.15E−01
3.00E−022.95E−023.21E−023.22E−023.47E−028.44E−02
Shifted sphere function−4.07E+024.20E+021.45E+051.36E+051.38E+051.36E+05
1.32E+021.14E+021.74E+046.49E+036.06E+038.51E+03
4.34E+013.00E+011.45E+051.37E+051.38E+051.36E+05
Shifted Schwefel’s problem2.55E+045.10E+035.26E+065.36E+065.52E+066.84E+06
2.17E+034.35E+026.21E+057.53E+051.32E+061.70E+06
2.59E+045.55E+035.26E+065.36E+065.52E+066.84E+06
Booth’s function1.22E−021.27E−021.18E−025.63E−037.46E−034.36E−02
1.49E−021.23E−021.04E−026.06E−036.37E−031.31E−01
1.22E−021.27E−021.18E−025.63E−037.46E−034.36E−02
  1. The number of decision variables is 100, the number of iterations is 10,000, and the number of runs is 100.

Tables 10 and 11 show the experimental results with respect to each of the 10 benchmark functions described in Table 9. The average of 100 independent runs (row 1), standard deviation (row 2), and error value (row 3) for the best solutions are shown in the tables.

For the 30D functions shown in Table 10, CSA3 and CSA4 outperform the other algorithms on 5 of the 10 test functions. Moreover, as the problem dimensionality is scaled from 30D to 100D, CSA3 and CSA4 perform even better (7 of 10 functions). This indicates that the performance of CSA3 and CSA4, relative to the other algorithms, improves as the problem complexity increases.

5 Conclusion

The CS and SA algorithms have been used successfully to solve both discrete and continuous optimization problems. However, as shown in the current paper, these algorithms do not converge to good solutions when applied to complex optimization problems.

This paper presented four novel variations of a hybrid CS and SA algorithm that increases the chance of avoiding local optima. More precisely, these variations use the SA algorithm to explore the solution space by replacing current solutions with worse ones with a decreasing probability. In other words, unlike CS, CSA overcomes the problem of premature convergence by using SA and its operators.

The four variations of CSA, as well as CS and SA, were compared using well-known benchmark functions with dimensions of 2, 10, 30, 100, and 1000. The experimental results show that, across all tested functions and dimensions, three variations of the hybrid algorithm provide a major improvement over the performance of CS and SA, whereas one variation provides only a minor improvement. For the two-dimensional test functions, however, all algorithms provide broadly similar results.

In addition, the experiments show that the proposed variations of the hybrid algorithm provide better objective values and comparable (sometimes better) computational time compared to CS and SA.

Finally, the performance of the two most successful variations of CSA (namely, CSA3 and CSA4) was compared to that of the PSO algorithm, BA, BCS, and EGCS. As shown in the experiments, the performance of CSA3 and CSA4, relative to these algorithms, improves as the problem complexity increases. These results suggest that the variations of CSA (in particular, CSA3 and CSA4) can be used to solve both simple and complex problems.

Bibliography

[1] B. H. Abed-alguni, Bat q-learning algorithm, Jord. J. Comput. Inf. Technol. 3 (2017), 56-77.

[2] B. H. Abed-alguni and F. Alkhateeb, Novel selection schemes for cuckoo search, Arab. J. Sci. Eng. 42 (2017), 3635-3654. doi: 10.1007/s13369-017-2663-3.

[3] B. H. Abed-Alguni, D. J. Paul, S. K. Chalup and F. A. Henskens, A comparison study of cooperative q-learning algorithms for independent learners, Int. J. Artif. Intell. 14 (2016), 71-93.

[4] V. Černý, Thermodynamical approach to the traveling salesman problem: an efficient simulation algorithm, J. Optim. Theory Appl. 45 (1985), 41-51. doi: 10.1007/BF00940812.

[5] Y. Feng, G.-G. Wang and X.-Z. Gao, A novel hybrid cuckoo search algorithm with global harmony search for 0-1 knapsack problems, Int. J. Comput. Intell. Syst. 9 (2016), 1174-1190. doi: 10.1080/18756891.2016.1256577.

[6] B. H. F. Hasan, I. A. Doush, E. Al Maghayreh, F. Alkhateeb and M. Hamdan, Hybridizing harmony search algorithm with different mutation operators for continuous problems, Appl. Math. Comput. 232 (2014), 1166-1182. doi: 10.1016/j.amc.2013.12.139.

[7] K. Huang, Y. Zhou, X. Wu and Q. Luo, A cuckoo search algorithm with elite opposition-based strategy, J. Intell. Syst. 25 (2016), 567-593. doi: 10.1515/jisys-2015-0041.

[8] W. Kartous, A. Layeb and S. Chikhi, A new quantum cuckoo search algorithm for multiple sequence alignment, J. Intell. Syst. 23 (2014), 261-275. doi: 10.1515/jisys-2013-0052.

[9] A. Khachaturyan, S. Semenovskaya and B. Vainstein, A statistical-thermodynamic approach to determination of structure amplitude phases, Sov. Phys. Crystallogr. 24 (1979), 519-524.

[10] S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi, Optimization by simulated annealing, Science 220 (1983), 671-680. doi: 10.1126/science.220.4598.671.

[11] Q. Liao, S. Zhou, H. Shi and W. Shi, Parameter estimation of nonlinear systems by dynamic cuckoo search, Neural Comput. 29 (2017), 1103-1123. doi: 10.1162/NECO_a_00946.

[12] A. Lim, B. Rodrigues and X. Zhang, A simulated annealing and hill-climbing algorithm for the traveling tournament problem, Eur. J. Oper. Res. 174 (2006), 1459-1478. doi: 10.1016/j.ejor.2005.02.065.

[13] C. Liu and J. Wang, Cell formation and task scheduling considering multi-functional resource and part movement using hybrid simulated annealing, Int. J. Comput. Intell. Syst. 9 (2016), 765-777. doi: 10.1080/18756891.2016.1204123.

[14] M. Lundy, Applications of the annealing algorithm to combinatorial problems in statistics, Biometrika 72 (1985), 191-198. doi: 10.1093/biomet/72.1.191.

[15] M. Marichelvam, An improved hybrid cuckoo search (IHCS) metaheuristics algorithm for permutation flow shop scheduling problems, Int. J. Bio-Inspired Comput. 4 (2012), 200-205. doi: 10.1504/IJBIC.2012.048061.

[16] M. Marichelvam and M. Geetha, A hybrid cuckoo search metaheuristic algorithm for solving single machine total weighted tardiness scheduling problems with sequence dependent setup times, Int. J. Comput. Complex. Intell. Algorithms 1 (2016), 23-34. doi: 10.1504/IJCCIA.2016.077463.

[17] M. Marichelvam and Ö. Tosun, Performance comparison of cuckoo search algorithm to solve the hybrid flow shop scheduling benchmark problems with makespan criterion, Int. J. Swarm Intell. Res. 7 (2016), 1-14. doi: 10.4018/IJSIR.2016040101.

[18] M. Marichelvam, T. Prabaharan and X.-S. Yang, Improved cuckoo search algorithm for hybrid flow shop scheduling problems to minimize makespan, Appl. Soft Comput. 19 (2014), 93-101. doi: 10.1016/j.asoc.2014.02.005.

[19] U. Mlakar and I. Fister, Hybrid self-adaptive cuckoo search for global optimization, Swarm Evol. Comput. 29 (2016), 47-72. doi: 10.1016/j.swevo.2016.03.001.

[20] P. Mohapatra, S. Chakravarty and P. Dash, An improved cuckoo search based extreme learning machine for medical data classification, Swarm Evol. Comput. 24 (2015), 25-49. doi: 10.1016/j.swevo.2015.05.003.

[21] M. G. H. Omran and M. Mahdavi, Global-best harmony search, Appl. Math. Comput. 198 (2008), 643-656. doi: 10.1016/j.amc.2007.09.004.

[22] Q.-K. Pan, P. N. Suganthan, M. F. Tasgetiren and J. J. Liang, A self-adaptive global best harmony search algorithm for continuous optimization problems, Appl. Math. Comput. 216 (2010), 830-848. doi: 10.1016/j.amc.2010.01.088.

[23] H. Rakhshani and A. Rahati, Intelligent multiple search strategy cuckoo algorithm for numerical and engineering optimization problems, Arab. J. Sci. Eng. 42 (2017), 567. doi: 10.1007/s13369-016-2270-8.

[24] Z. Sheng, J. Wang, S. Zhou and B. Zhou, Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm, Chaos 24 (2014), 013133. doi: 10.1063/1.4867989.

[25] N. Shivasankaran, P. S. Kumar, G. Nallakumarasamy and K. V. Raja, Repair shop job scheduling with parallel operators and multiple constraints using simulated annealing, Int. J. Comput. Intell. Syst. 6 (2013), 223-233. doi: 10.1080/18756891.2013.768434.

[26] N. Shivasankaran, P. S. Kumar and K. V. Raja, Hybrid sorting immune simulated annealing algorithm for flexible job shop scheduling, Int. J. Comput. Intell. Syst. 8 (2015), 455-466. doi: 10.1080/18756891.2015.1017383.

[27] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y.-P. Chen, A. Auger and S. Tiwari, Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization, Technical Report, KanGAL Report #2005005, IIT Kanpur, India, and Nanyang Technological University, Singapore, 2005.

[28] H. Szu and R. Hartley, Fast simulated annealing, Phys. Lett. A 122 (1987), 157-162. doi: 10.1016/0375-9601(87)90796-1.

[29] X.-S. Yang, Bat algorithm and cuckoo search: a tutorial, in: Artificial Intelligence, Evolutionary Computing and Metaheuristics, pp. 421-434, Springer, 2013. doi: 10.1007/978-3-642-29694-9_17.

[30] X.-S. Yang and S. Deb, Cuckoo search via Lévy flights, in: World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), pp. 210-214, IEEE, 2009. doi: 10.1109/NABIC.2009.5393690.

[31] X.-S. Yang and S. Deb, Engineering optimisation by cuckoo search, Int. J. Math. Modell. Numer. Optim. 1 (2010), 330-343. doi: 10.1504/IJMMNO.2010.035430.

[32] X.-S. Yang and A. Hossein Gandomi, Bat algorithm: a novel approach for global engineering optimization, Eng. Comput. 29 (2012), 464-483. doi: 10.1108/02644401211235834.

[33] Y.-R. Zeng, L. Peng, J. Zhang and L. Wang, An effective hybrid differential evolution algorithm incorporating simulated annealing for joint replenishment and delivery problem with trade credit, Int. J. Comput. Intell. Syst. 9 (2016), 1001-1015. doi: 10.1080/18756891.2016.1256567.

Received: 2017-06-05
Published Online: 2017-10-05
Published in Print: 2019-09-25

©2019 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
