
Elite Opposition-Based Cognitive Behavior Optimization Algorithm for Global Optimization

Shaoling Zhang, Yongquan Zhou and Qifang Luo
Published/Copyright: June 29, 2017

Abstract

This paper presents an elite opposition-based cognitive behavior optimization algorithm (ECOA). The traditional COA is divided into three stages: rough search, information exchange and share, and intelligent adjustment. In this paper, we introduce elite opposition-based learning into the third stage of the COA, with a view to avoiding late-stage congestion and enhancing the convergence speed. The ECOA is validated on 23 benchmark functions and three engineering design problems, and the experimental results demonstrate the superior performance of the ECOA compared to other algorithms in the literature.

1 Introduction

There are many methods for solving optimization problems, but they can broadly be classified into two categories. The first comprises traditional optimization methods, such as dynamic programming, Newton's method, and the conjugate gradient method [29]. These show serious limitations when solving large-scale combinatorial optimization problems, and as problem sizes continue to grow, traditional optimization methods can no longer satisfy the need to solve complex problems. The second comprises modern optimization methods, such as metaheuristic algorithms [30], which are constructed from intuition or experience. These algorithms have the advantages of fast convergence and high stability, making up for the deficiency of traditional optimization methods in solving complex combinatorial optimization problems.

The cognitive behavior optimization algorithm (COA) was proposed by Li et al. [16]. The COA is inspired by the artificial bee colony (ABC) algorithm [1, 13], building on the general framework of bees searching for food sources. Combined with human social cognitive behavior and differential evolution (DE) [23], it forms a detailed model of cognitive behavior optimization. The COA is divided into three stages and two groups. The stages are rough search, information exchange and share, and intelligent adjustment. The first stage, rough search, resembles the scouts of the ABC searching for food sources; here, the Gaussian random walk and the Lévy flight are used to balance exploration and exploitation. The second stage, information exchange and share, resembles the employed foragers of the ABC dancing in the dance area to share information; it uses improved crossover and mutation operations from DE, and a selection probability Pc is proposed [23]. The final stage, intelligent adjustment, resembles the onlooker bees selecting a food source according to its quality. The two groups are the cognitive population (Cpop) and the memory population (Mpop), each comprising one-half of the population. The Cpop randomly searches the space for food sources, and the Mpop stores the information of the good food sources found by the Cpop. The two groups work together to find the optimal solution.

In this paper, an elite opposition-based COA (ECOA) is proposed. Twenty-three benchmark test functions and three popular structural engineering design problems are used for comparison; the results illustrate that the convergence behavior and optimization performance of the ECOA are clearly enhanced.

Section 2 briefly introduces the COA. In Section 3, the ECOA is presented. The simulation experiments and test results are shown in Section 4. The conclusions and future work are described in Section 5, followed by the acknowledgments.

2 COA

The COA was proposed by Li et al. [16]. Based on the behavior models of ABC and DE, the cognitive behavior model contains three main behaviors: rough search, information exchange and share, and intelligent adjustment. The COA is also divided into two groups, Cpop and Mpop, each comprising one-half of the population.

2.1 Rough Search

Before the rough search step, the Cpop and Mpop are initialized according to the cognitive behavior model as in formulas (1) and (2):

(1) Cpop_i = rand · (up − low) + low
(2) Mpop_i = rand · (up − low) + low

where Cpop_i and Mpop_i are the ith individuals of the Cpop and Mpop (i = 1, 2, 3, …, N), respectively, N is the population size, up and low are the upper and lower boundaries of the search space, and rand is a random number in [0, 1].
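As a concrete sketch, Eqs. (1) and (2) amount to uniform sampling of both groups. The following Python/NumPy fragment illustrates this (the original experiments used MATLAB; the function and variable names here are our own):

```python
import numpy as np

def init_populations(n, dim, low, up, rng=None):
    """Uniform initialization of Cpop and Mpop in [low, up], Eqs. (1)-(2).

    n is the total population size N; each group holds N/2 individuals,
    and low/up may be scalars or per-dimension arrays.
    """
    rng = rng or np.random.default_rng()
    cpop = rng.random((n // 2, dim)) * (up - low) + low
    mpop = rng.random((n // 2, dim)) * (up - low) + low
    return cpop, mpop
```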

In this part, the Cpop searches for food sources in the space, using the Gaussian random walk or the Lévy flight to generate a new individual around the current one. The Gaussian random walk is used to enlarge the search space, whereas the Lévy flight is used for faster convergence. At this stage, exploitation and exploration are balanced through their mutual cooperation. The formulas are as follows:

(3) Cpop_i = Gaussian(Gbest, σ) + (r1 · Gbest − r2 · Cpop_i),  rand ≤ 0.5
(4) Cpop_i = Cpop_i + α · Lévy(s) · (Cpop_i − Gbest),  s = μ / |ν|^(1/β),  rand > 0.5
(5) σ = (log(g)/g) · (Cpop_i − Gbest)
(6) σ_μ = [Γ(1 + β) · sin(πβ/2) / (Γ((1 + β)/2) · β · 2^((β−1)/2))]^(1/β),  σ_ν = 1

Here, Gbest is the current best solution; the step size σ in Eq. (5) is derived from Cpop_i − Gbest, with log(g)/g (g being the current iteration count) used to control the range of σ; r1 and r2 are random numbers in [0, 1]. In Eq. (4), α is the step-size scaling factor, with α = 0.01, and μ and ν are drawn from normal distributions whose standard deviations σ_μ and σ_ν are given in Eq. (6), where β = 3/2.
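A minimal sketch of this stage follows, assuming a Mantegna-style implementation of the Lévy step in Eq. (6) and taking σ elementwise in absolute value so the Gaussian scale is valid (details the paper leaves open):

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(beta, size, rng):
    """Mantegna-style Lévy step s = mu/|nu|^(1/beta); sigma_mu from Eq. (6)."""
    sigma_mu = (gamma(1 + beta) * sin(pi * beta / 2)
                / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.normal(0.0, sigma_mu, size)
    nu = rng.normal(0.0, 1.0, size)
    return mu / np.abs(nu) ** (1 / beta)

def rough_search(cpop, gbest, g, rng, alpha=0.01, beta=1.5):
    """One rough-search pass over the Cpop, Eqs. (3)-(5)."""
    new = cpop.copy()
    for i in range(cpop.shape[0]):
        if rng.random() <= 0.5:
            # Gaussian random walk, Eq. (3); sigma from Eq. (5), taken in
            # absolute value so it is a valid scale parameter.
            sigma = np.abs((np.log(g) / g) * (cpop[i] - gbest))
            r1, r2 = rng.random(2)
            new[i] = rng.normal(gbest, sigma) + (r1 * gbest - r2 * cpop[i])
        else:
            # Lévy flight, Eq. (4), with step-size scaling factor alpha = 0.01.
            new[i] = cpop[i] + alpha * levy_step(beta, cpop.shape[1], rng) * (cpop[i] - gbest)
    return new
```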

2.2 Information Exchange and Share

In this stage, the Mpop is used to store the original food-source information. The process draws on the crossover and mutation operations of DE [23]. The crossover probability Pc is used to determine how the Cpop is updated, and the good locations are then stored in the Mpop. The specific steps are as follows.

Step 1: Calculate the crossover probability Pc_i according to Eq. (7):

(7) Pc_i = rank(fit(Cpop_i)) / (N/2)

Here, fit(Cpop_i) is the fitness value of individual Cpop_i, and rank(fit(Cpop_i)) is the ranking of that fitness value within the Cpop, ordered from high to low.

Step 2: Update the Mpop:

(8) if r1 < r2, then Mpop = Cpop
(9) Mpop = permuting(Mpop)

Step 3: The crossover probability Pc_i is used to select the update mode of Cpop_i:

(10) Cpop_{i,j} = Cpop_{k,j} + rand · (Gbest_j − Cpop_{i,j} + Mpop_{i,j} − Cpop_{h,j}),  rand ≤ Pc_i
     Cpop_{i,j} = Cpop_{i,j} + rand · (Mpop_{i,j} − Cpop_{k,j}),  rand > Pc_i

Here, i, k, h ∈ {1, 2, 3, …, N/2} with i ≠ k ≠ h, and j ∈ {1, 2, 3, …, D}, where D is the dimension. Eq. (7) ranks Cpop_i by fitness value from high to low, so better-ranked individuals receive a higher crossover probability Pc_i and are more likely to be preserved. Eqs. (8) and (9) are the crossover operations, and the original Mpop is randomly permuted in formula (9) to keep the memory capacity of the population constant. Eq. (10) borrows the classical mutation operators DE/rand/1 and DE/best/1 from the classic DE algorithm [23]; the two variants together ensure that the Cpop can find better solution locations.
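The following sketch illustrates Eqs. (7)-(10). The distinct indices k and h are drawn uniformly, and rank 1 is given to the highest fitness value ("from high to low"); both are our reading of details the paper leaves open:

```python
import numpy as np

def info_exchange(cpop, mpop, gbest, fitness, rng):
    """Information exchange and share, Eqs. (7)-(10)."""
    half, dim = cpop.shape
    rank = np.argsort(np.argsort(-fitness)) + 1    # Eq. (7): rank of each fitness value
    pc = rank / half                               # Pc_i = rank / (N/2)
    if rng.random() < rng.random():                # Eq. (8): if r1 < r2, copy Cpop into Mpop
        mpop = cpop.copy()
    mpop = mpop[rng.permutation(half)]             # Eq. (9): permute the memory population
    for i in range(half):
        k, h = rng.choice([x for x in range(half) if x != i], size=2, replace=False)
        for j in range(dim):
            if rng.random() <= pc[i]:              # Eq. (10), DE/best/1-style branch
                cpop[i, j] = cpop[k, j] + rng.random() * (
                    gbest[j] - cpop[i, j] + mpop[i, j] - cpop[h, j])
            else:                                  # Eq. (10), DE/rand/1-style branch
                cpop[i, j] += rng.random() * (mpop[i, j] - cpop[k, j])
    return cpop, mpop, pc
```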

2.3 Intelligent Adjustment

This stage updates the Cpop individuals to improve the ability to find the optimal solution. After the first two stages, exploitation and exploration in this phase of the basic COA are balanced, and the positions of the individuals are adjusted through information exchange among them. It is emphasized that Eqs. (11) and (12) are applied only under the condition rand > Pc_i; if rand ≤ Pc_i, Cpop_i is not updated and retains its original value.

(11) Cpop_i = Cpop_i + φ · (Cpop_i − Gbest),  rand < 0.5
(12) Cpop_i = Cpop_i + φ · (Cpop_i − Cpop_j),  rand > 0.5

In the above formulas, φ is a random number in [−1, 1]; adjusting the Cpop through Eqs. (11) and (12) makes it easier to find the globally best position.
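Under the same assumptions as the earlier sketches, the adjustment of Eqs. (11) and (12) can be written as:

```python
def intelligent_adjust(cpop, gbest, pc, rng):
    """Intelligent adjustment, Eqs. (11)-(12): only individuals with
    rand > Pc_i are moved; phi is uniform on [-1, 1]."""
    half = cpop.shape[0]
    for i in range(half):
        if rng.random() > pc[i]:
            phi = rng.uniform(-1.0, 1.0)
            if rng.random() < 0.5:
                cpop[i] = cpop[i] + phi * (cpop[i] - gbest)     # Eq. (11)
            else:
                j = rng.integers(half)                          # random peer
                cpop[i] = cpop[i] + phi * (cpop[i] - cpop[j])   # Eq. (12)
    return cpop
```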

Based on the above three stages, the whole model of the cognitive behavior optimization algorithm is constructed; its pseudo-code is shown in Algorithm 1, where N is the population size, the Cpop and Mpop each contain N/2 individuals, and D is the dimension.

Algorithm 1:

Pseudo-code of the COA.

Start
  Initialize the populations Cpop and Mpop according to Eqs. (1) and (2);
  g = 1;
  repeat
    // Rough Search
    for i from 1 to N/2 do
      if rand ≤ 0.5 then
        Cpop_i = Gaussian(Gbest, σ) + (r1·Gbest − r2·Cpop_i);  // r1 and r2 are random values
      else
        Cpop_i = Cpop_i + α·Lévy(s)·(Cpop_i − Gbest), with s = μ/|ν|^(1/β);
      end-if
    end-for
    // Information Exchange and Share
    Calculate the crossover probability Pc_i = rank(fit(Cpop_i))/(N/2);
    if r1 < r2 then Mpop = Cpop; end-if
    Mpop = permuting(Mpop);
    for i from 1 to N/2 do
      for j from 1 to D do
        if rand ≤ Pc_i then
          Cpop_{i,j} = Cpop_{k,j} + rand·(Gbest_j − Cpop_{i,j} + Mpop_{i,j} − Cpop_{h,j});
        else
          Cpop_{i,j} = Cpop_{i,j} + rand·(Mpop_{i,j} − Cpop_{k,j});
        end-if
      end-for
    end-for
    // Intelligent Adjustment
    Calculate the crossover probability Pc_i = rank(fit(Cpop_i))/(N/2);
    for i from 1 to N/2 do
      if rand > Pc_i then
        if rand < 0.5 then
          Cpop_i = Cpop_i + φ·(Cpop_i − Gbest);
        else
          Cpop_i = Cpop_i + φ·(Cpop_i − Cpop_j);
        end-if
      end-if
    end-for
    Memorize the best solution achieved so far;
    g = g + 1;
  until FEs = MaxFEs
End

3 ECOA

The standard COA, as proposed by Li et al. [16], performs outstandingly on low-dimensional multimodal problems but is prone to late-stage congestion on high-dimensional problems and easily becomes trapped in local optima. Therefore, this section introduces an elite opposition strategy into the intelligent adjustment phase of the basic COA, which reduces the late congestion and speeds up convergence. The resulting algorithm is called the ECOA.

3.1 Opposition-Based Learning (OBL)

In 2005, Tizhoosh proposed the concept of OBL [24]. Its main idea is to consider, for each candidate individual, its opposite individual as well, which might be closer to the optimal individual. The OBL strategy can effectively improve population diversity and avoid premature convergence of the algorithm. General intelligent algorithms randomly generate an initial population and then approach the optimal solution gradually, eventually finding it or coming close to it. Searching the current solution and its opposite simultaneously, and choosing the better of the two for the next generation, greatly improves the efficiency of the algorithm.

Opposite solution: Suppose that there is a real number x on the range [a, b]; then the opposite of x is defined as x′ = a + b − x. Based on this, assume there is a solution point p = (x1, x2, …, xd) of dimension D on the region R, with xj ∈ [aj, bj]; then p′ = (x′1, x′2, …, x′d) is defined as the opposite solution of p, where x′j = k(aj + bj) − xj and k is a random number in [0, 1], also known as the generalization coefficient.
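As a small worked example of this mapping, take [a, b] = [−4, 6] and x = 3.5: the simple opposite is x′ = a + b − x = −1.5, and with generalization coefficient k = 0.3 the generalized opposite is x′ = k(a + b) − x = 0.3 × 2 − 3.5 = −2.9, which still lies inside [−4, 6].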

3.2 Elite OBL Strategies

The OBL can effectively expand the search range of the population and improve the performance of the algorithm. However, generating the opposite solution at random carries a degree of chance: each randomly generated candidate has roughly a 50% probability of being farther from the optimal solution than its opposite. In the elite strategy, the current best individual in the cognitive group is regarded as the elite individual, and the opposite solutions are generated from this elite by OBL; sharing the elite's information among the individuals lets the cognitive individuals search for the global optimum more effectively.

Elite OBL is a new search strategy in the field of evolutionary computation. Its guiding idea is to evaluate a feasible solution and, simultaneously, the solution obtained by opposite mapping, and then choose the better of the two as the next-generation solution. In this section, the solution with the best fitness value in the population is defined as the elite cognitive individual, expressed as Cpop_ε = (Cpop_ε,1, Cpop_ε,2, …, Cpop_ε,D).

A cognitive individual in the population, Cpop_i, and the cognitive individual obtained by opposite mapping, Cpop′_i, can be expressed as Cpop_i = (Cpop_i,1, Cpop_i,2, …, Cpop_i,D) and Cpop′_i = (Cpop′_i,1, Cpop′_i,2, …, Cpop′_i,D), respectively. Cpop_ε, Cpop_i, and Cpop′_i satisfy Eq. (13), where n is the population size, D is the search space dimension, k ∼ U(0, 1), and da_j and db_j are the lower and upper bounds of the jth decision variable, calculated by Eq. (14):

(13) Cpop′_{i,j} = k · (da_j + db_j) − Cpop_{ε,j},  i = 1, 2, …, n/2,  j = 1, 2, …, D
(14) da_j = lb_j,  db_j = ub_j

In this case, we need a boundary control strategy to prevent the opposite-mapped cognitive individual Cpop′_i from jumping out of [da_j, db_j]. Therefore, we adjust it using Eq. (15):

(15) Cpop′_{i,j} = rand(da_j, db_j),  if Cpop′_{i,j} < da_j or Cpop′_{i,j} > db_j
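A sketch of this step follows. Since Eq. (13) would produce identical opposite candidates if a single k were shared, k is drawn per individual here, which is our assumption rather than a detail stated in the paper:

```python
import numpy as np

def elite_opposition(cpop, elite, lb, ub, rng):
    """Elite opposition-based learning with boundary control, Eqs. (13)-(15).

    `elite` is the best individual Cpop_eps; da_j/db_j are fixed to the
    static bounds lb/ub as in Eq. (14). k ~ U(0, 1) is drawn per individual
    (an assumption) so the opposite candidates differ from one another.
    """
    half, dim = cpop.shape
    k = rng.random((half, 1))              # generalization coefficient in Eq. (13)
    opposite = k * (lb + ub) - elite       # reflect the elite individual
    # Eq. (15): re-randomize any component that left [da_j, db_j]
    out = (opposite < lb) | (opposite > ub)
    repair = rng.uniform(lb, ub, size=(half, dim))
    return np.where(out, repair, opposite)
```

The better of each Cpop_i and its opposite candidate is then kept, following the greedy selection described above.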

The specific implementation steps of the ECOA can be summarized in the pseudo-code shown in Algorithm 2.

Algorithm 2:

Pseudo-code of the ECOA.

Start
  Initialize the populations Cpop and Mpop according to Eqs. (1) and (2);
  g = 1;
  repeat
    // Rough Search
    for i from 1 to N/2 do
      if rand ≤ 0.5 then
        Cpop_i = Gaussian(Gbest, σ) + (r1·Gbest − r2·Cpop_i);  // r1 and r2 are random values
      else
        Cpop_i = Cpop_i + α·Lévy(s)·(Cpop_i − Gbest), with s = μ/|ν|^(1/β);
      end-if
    end-for
    // Information Exchange and Share
    Calculate the crossover probability Pc_i = rank(fit(Cpop_i))/(N/2);
    if r1 < r2 then Mpop = Cpop; end-if
    Mpop = permuting(Mpop);
    for i from 1 to N/2 do
      for j from 1 to D do
        if rand ≤ Pc_i then
          Cpop_{i,j} = Cpop_{k,j} + rand·(Gbest_j − Cpop_{i,j} + Mpop_{i,j} − Cpop_{h,j});
        else
          Cpop_{i,j} = Cpop_{i,j} + rand·(Mpop_{i,j} − Cpop_{k,j});
        end-if
      end-for
    end-for
    // Intelligent Adjustment with Elite Opposition-Based Learning
    Calculate the crossover probability Pc_i = rank(fit(Cpop_i))/(N/2);
    for i from 1 to N/2 do
      if rand > Pc_i then
        if rand < 0.5 then
          Cpop_i = Cpop_i + φ·(Cpop_i − Gbest);
        else
          Cpop_i = Cpop_i + φ·(Cpop_i − Cpop_j);
        end-if
        for j from 1 to D do
          Cpop′_{i,j} = k·(da_j + db_j) − Cpop_{ε,j};
          if Cpop′_{i,j} < da_j or Cpop′_{i,j} > db_j then
            Cpop′_{i,j} = rand(da_j, db_j);
          end-if
        end-for
      end-if
    end-for
    Memorize the best solution achieved so far;
    g = g + 1;
  until FEs = MaxFEs
End

4 Simulation Experiments and Result Analysis

To verify the improved ECOA from many aspects, 23 classical test functions and three engineering examples were selected for comparison.

All experiments in this section were performed on a computer with a 3.30 GHz Intel® Core™ i5-4590 processor and 4 GB of RAM using MATLAB R2012a.

4.1 Functions Test

4.1.1 Benchmark Test Functions

To verify the effectiveness of the algorithm, 23 standard test functions [6, 15, 17] were tested so that the experimental results are objective.

These 23 standard test functions can be divided into three types (high-dimensional unimodal, high-dimensional multimodal, and low-dimensional), where f01 to f07 are high-dimensional unimodal functions, f08 to f12 are high-dimensional multimodal functions, and f13 to f23 are low-dimensional functions. f05 is a classical test function whose global minimum lies at the bottom of a parabolic valley where the fitness values change very little, so it is difficult to find its global minimum. f07 is a noisy quartic function whose added random term further increases the difficulty of searching the objective function. Among the 23 standard test functions selected in this section, most have an optimal value of zero, which can fully verify the optimization ability of the algorithm; standard test functions with nonzero optimal values are also included. The test functions and related configurations are shown in Table 1.

Table 1:

Benchmark Test Functions.

Benchmark test functions | D | Range | Optimum
f01(x) = Σ_{i=1}^{D} x_i² | 30 | [−100, 100] | 0
f02(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i| | 30 | [−10, 10] | 0
f03(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} x_j)² | 30 | [−100, 100] | 0
f04(x) = max_i {|x_i|, 1 ≤ i ≤ D} | 30 | [−100, 100] | 0
f05(x) = Σ_{i=1}^{D−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²] | 30 | [−30, 30] | 0
f06(x) = Σ_{i=1}^{D} (⌊x_i + 0.5⌋)² | 30 | [−100, 100] | 0
f07(x) = Σ_{i=1}^{D} i·x_i⁴ + random[0, 1) | 30 | [−1.28, 1.28] | 0
f08(x) = −Σ_{i=1}^{D} x_i·sin(√|x_i|) | 30 | [−500, 500] | −12569.5
f09(x) = Σ_{i=1}^{D} [x_i² − 10cos(2πx_i) + 10] | 30 | [−5.12, 5.12] | 0
f10(x) = −20exp(−0.2√((1/D)Σ_{i=1}^{D} x_i²)) − exp((1/D)Σ_{i=1}^{D} cos(2πx_i)) + 20 + e | 30 | [−32, 32] | 0
f11(x) = (1/4000)Σ_{i=1}^{D} x_i² − Π_{i=1}^{D} cos(x_i/√i) + 1 | 30 | [−600, 600] | 0
f12(x) = (π/D){10sin²(πy_1) + Σ_{i=1}^{D−1} (y_i − 1)²[1 + 10sin²(πy_{i+1})] + (y_D − 1)²} + Σ_{i=1}^{D} u(x_i, 10, 100, 4), with y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a | 30 | [−50, 50] | 0
f13(x) = Σ_{i=1}^{11} [a_i − x_1(b_i² + b_i·x_2)/(b_i² + b_i·x_3 + x_4)]² | 4 | [−5, 5] | 0.0003075
f14(x) = 4x_1² − 2.1x_1⁴ + (1/3)x_1⁶ + x_1x_2 − 4x_2² + 4x_2⁴ | 2 | [−5, 5] | −1.0316285
f15(x) = −(1 + cos(12√(x_1² + x_2²)))/(0.5(x_1² + x_2²) + 2) | 2 | [−5.12, 5.12] | −1
f16(x) = [1 + (x_1 + x_2 + 1)²(19 − 14x_1 + 3x_1² − 14x_2 + 6x_1x_2 + 3x_2²)] × [30 + (2x_1 − 3x_2)²(18 − 32x_1 + 12x_1² + 48x_2 − 36x_1x_2 + 27x_2²)] | 2 | [−5, 5] | 3
f17(x) = −Σ_{i=1}^{4} c_i·exp(−Σ_{j=1}^{3} a_{ij}(x_j − p_{ij})²) | 3 | [0, 1] | −3.8628
f18(x) = −Σ_{i=1}^{4} c_i·exp(−Σ_{j=1}^{6} a_{ij}(x_j − p_{ij})²) | 6 | [0, 1] | −3.32
f19(x) = −Σ_{i=1}^{5} [(x − a_i)(x − a_i)ᵀ + c_i]⁻¹ | 4 | [0, 10] | −10.1532
f20(x) = −Σ_{i=1}^{7} [(x − a_i)(x − a_i)ᵀ + c_i]⁻¹ | 4 | [0, 10] | −10.4029
f21(x) = −Σ_{i=1}^{10} [(x − a_i)(x − a_i)ᵀ + c_i]⁻¹ | 4 | [0, 10] | −10.5364
f22(x) = −cos(x_1)cos(x_2)exp(−(x_1 − π)² − (x_2 − π)²) | 2 | [−100, 100] | −1
f23(x) = −0.5 + (sin²(√(x_1² + x_2²)) − 0.5)/(1 + 0.001(x_1² + x_2²))² | 2 | [−100, 100] | −1

4.1.2 Analysis of Test Results

In the tests, the population size N is set to 50, the maximum number of iterations is 1000, and each algorithm is run independently 30 times. In the result tables, Best, Mean, Worst, and Std are, respectively, the optimal solution, the mean, the worst solution, and the standard deviation over the 30 independent runs. The dimension of each test function is also noted in the tables. The rank of each algorithm on each test function is sorted by the standard deviation; Sum1 counts the number of first-place rankings, ave is the average of the per-function ranks, and RANK orders the algorithms by ave. In the original tables, boldfaced underlined values marked the optimal value for each test function.
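For reference, the reported statistics can be computed as in the sketch below (whether Std is the sample or population estimator is not stated in the paper; the sample form is assumed):

```python
import numpy as np

def summarize(final_values):
    """Best/Worst/Mean/Std over the 30 independent runs, as in Tables 2-4.

    `final_values` stands in for the final objective values one algorithm
    produced on one test function.
    """
    v = np.asarray(final_values)
    return {"Best": v.min(), "Worst": v.max(),
            "Mean": v.mean(), "Std": v.std(ddof=1)}  # sample standard deviation
```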

Table 2 shows the test results for the high-dimensional unimodal functions. Tables 3 and 4 show the results for the high-dimensional multimodal functions and low-dimensional functions, respectively. To assess the test results, the algorithm is compared to the ABC [18], cuckoo search (CS) [27], flower pollination algorithm (FPA) [28], and grey wolf optimizer (GWO) [19]. The parameter settings of the comparison algorithms, listed after Table 4, are as follows:

Table 2:

Simulation Results for Test Functions fi, i=1, 2, 3, 4, 5, 6, 7.

Benchmark functions | Result | ABC | CS | FPA | GWO | COA | ECOA
f01 (D=30) | Best | 0.000572 | 0.003582902 | 0.387177862 | 8.4294E-150 | 1.076E-142 | 0
 | Worst | 0.013345 | 0.015870368 | 1.148099251 | 3.7913E-140 | 9.8669E-104 | 0
 | Mean | 0.005 | 0.009165081 | 0.728460406 | 1.2743E-141 | 5.7426E-105 | 0
 | Std | 0.003727 | 0.002955699 | 0.191680649 | 6.9199E-141 | 2.2095E-104 | 0
 | rank | 5 | 4 | 6 | 2 | 3 | 1
f02 (D=30) | Best | 0.006895 | 0.642996775 | 1.517056239 | 1.43855E-83 | 4.79188E-73 | 0
 | Worst | 0.026708 | 2.34579469 | 3.03894779 | 4.34386E-80 | 2.03567E-48 | 0
 | Mean | 0.014761 | 1.314601235 | 2.277601766 | 4.45919E-81 | 6.79029E-50 | 0
 | Std | 0.004804 | 0.473421858 | 0.383790602 | 9.53865E-81 | 3.71652E-49 | 0
 | rank | 4 | 6 | 5 | 2 | 3 | 1
f03 (D=30) | Best | 14078.4 | 4268.9629877 | 2.272699762 | 3.15374E-77 | 5.2439E-139 | 0
 | Worst | 37939.7 | 6626.9926428 | 8.879890954 | 3.91528E-62 | 8.54963E-95 | 0
 | Mean | 28852.3 | 6453.5583574 | 4.882453968 | 1.30531E-63 | 2.84991E-96 | 0
 | Std | 4943.01 | 94.07702252 | 1.57114457 | 7.14825E-63 | 1.56094E-95 | 0
 | rank | 5 | 6 | 4 | 3 | 2 | 1
f04 (D=30) | Best | 67.4976 | 31.629247498 | 0.687744025 | 1.29688E-48 | 5.43846E-69 | 0
 | Worst | 88.1036 | 66.121125683 | 1.17372478 | 2.00775E-44 | 9.76111E-47 | 0
 | Mean | 79.4963 | 63.071629218 | 0.962041686 | 1.35812E-45 | 3.33062E-48 | 0
 | Std | 4.782838 | 0.844484327 | 0.116680981 | 3.76964E-45 | 1.78093E-47 | 0
 | rank | 6 | 5 | 4 | 3 | 2 | 1
f05 (D=30) | Best | 63.39128 | 30.02363766 | 143.3386217 | 4.944869599 | 19.57969669 | 1.20E-06
 | Worst | 259.3017 | 68.50521836 | 396.1431604 | 7.214109084 | 22.46671049 | 21.87445495
 | Mean | 134.4276 | 39.50931063 | 241.746718 | 5.993528429 | 21.01145488 | 15.24227714
 | Std | 53.42352 | 10.42249264 | 66.89327227 | 0.752141321 | 0.781294922 | 9.379551237
 | rank | 5 | 4 | 6 | 1 | 2 | 3
f06 (D=30) | Best | 0.000178 | 0.003614324 | 0 | 0 | 2.41782E-22 | 3.54003E-25
 | Worst | 0.010919 | 0.019058352 | 3 | 0 | 4.85147E-17 | 2.04178E-21
 | Mean | 0.003574 | 0.008784236 | 0.9 | 0 | 1.77611E-18 | 2.80758E-22
 | Std | 0.002612 | 0.003671227 | 0.88473647 | 0 | 8.83302E-18 | 5.06707E-22
 | rank | 4 | 5 | 6 | 1 | 3 | 2
f07 (D=30) | Best | 0.124748 | 0.01212532 | 0.895489753 | 1.70094E-05 | 0.000112593 | 6.67353E-07
 | Worst | 0.50385 | 0.066882382 | 7.677527996 | 0.000613755 | 0.003110998 | 0.000273177
 | Mean | 0.319918 | 0.035834524 | 3.315656718 | 0.000205695 | 0.001062594 | 8.16213E-05
 | Std | 0.091512 | 0.012337982 | 1.354025355 | 0.000169708 | 0.000791312 | 7.51741E-05
 | rank | 5 | 4 | 6 | 2 | 3 | 1
Table 3:

Simulation Results for Test Functions fi, i=8, 9, 10, 11, 12.

Benchmark functions | Result | ABC | CS | FPA | GWO | COA | ECOA
f08 (D=30) | Best | −12928.3 | −9236.781 | −59.26595676 | −3439.67801 | −10713.9255 | −12569.4866
 | Worst | −10882.8 | −8361.57903 | −59.26593773 | −2427.32615 | −8423.38809 | −9272.76399
 | Mean | −11892.5 | −8673.80683 | −59.26595555 | −2905.52286 | −9668.18692 | −12415.5099
 | Std | 427.5466 | 189.6242516 | 3.66702E-06 | 246.0091954 | 605.6102804 | 621.246281
 | rank | 4 | 2 | 1 | 3 | 5 | 6
f09 (D=30) | Best | 11.09179 | 59.70634524 | 4.303034334 | 0 | 0 | 0
 | Worst | 41.73442 | 109.3914666 | 91.26200005 | 3.093527585 | 0 | 0
 | Mean | 24.81539 | 85.96988736 | 33.5159609 | 0.103117586 | 0 | 0
 | Std | 6.649301 | 10.45016371 | 8.53734058 | 0.56479828 | 0 | 0
 | rank | 4 | 5 | 6 | 3 | 1 | 1
f10 (D=30) | Best | 6.487884 | 2.12739898 | 0.723999208 | 4.44089E-15 | 8.88178E-16 | 8.88178E-16
 | Worst | 12.2938 | 7.192223427 | 2.212411203 | 7.99361E-15 | 8.88178E-16 | 8.88178E-16
 | Mean | 9.795846 | 4.080766601 | 1.418156441 | 4.91459E-15 | 8.88178E-16 | 8.88178E-16
 | Std | 1.575547 | 1.310772991 | 0.364313386 | 1.22834E-15 | 0 | 0
 | rank | 6 | 5 | 4 | 3 | 1 | 1
f11 (D=30) | Best | 0.004203 | 0.031663188 | 0.009906228 | 0 | 0 | 0
 | Worst | 0.139289 | 0.241601554 | 0.035828502 | 0.061742336 | 0 | 0
 | Mean | 0.049833 | 0.088445467 | 0.023014852 | 0.011809657 | 0 | 0
 | Std | 0.036116 | 0.043991116 | 0.007264224 | 0.019333812 | 0 | 0
 | rank | 5 | 6 | 3 | 4 | 1 | 1
f12 (D=30) | Best | 6.79E-06 | 0.44067162 | 0.001559894 | 2.99552E-08 | 5.03621E-24 | 5.55916E-30
 | Worst | 0.001092 | 2.046704678 | 0.006489177 | 0.020127238 | 1.79269E-20 | 4.55893E-22
 | Mean | 0.000134 | 1.070493224 | 0.003429015 | 0.001873055 | 2.11792E-21 | 2.96744E-23
 | Std | 0.000206 | 0.377032703 | 0.001335403 | 0.005742838 | 3.88986E-21 | 8.55077E-23
 | rank | 4 | 6 | 3 | 5 | 2 | 1
Table 4:

Simulation Results for Test Functions fi, i=13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23.

Benchmark functions | Result | ABC | CS | FPA | GWO | COA | ECOA
f13 (D=4) | Best | 0.000795 | 0.000307486 | 0.000308034 | 0.000307487 | 0.000307486 | 0.000307486
 | Worst | 0.001395 | 0.00030753 | 0.000320836 | 0.02036334 | 0.000307486 | 0.000424294
 | Mean | 0.001013 | 0.000307488 | 0.000310906 | 0.004389054 | 0.000307486 | 0.00031138
 | Std | 0.000162 | 8.16255E-09 | 3.43522E-06 | 0.008126933 | 2.08502E-19 | 2.13261E-05
 | rank | 5 | 2 | 3 | 6 | 1 | 4
f14 (D=2) | Best | −1.031628453 | −1.031628453 | −1.031628453 | −1.031628453 | −1.031628453 | −1.031628453
 | Worst | −1.031628453 | −1.031628453 | −1.031628453 | −1.031628441 | −1.031628453 | −1.031628453
 | Mean | −1.031628453 | −1.031628453 | −1.031628453 | −1.031628451 | −1.031628453 | −1.031628453
 | Std | 4.46E-16 | 6.77522E-16 | 6.51945E-16 | 2.90834E-09 | 6.77522E-16 | 6.77522E-16
 | rank | 1 | 3 | 2 | 6 | 3 | 3
f15 (D=2) | Best | −1 | −1 | −1 | −1 | −1 | −1
 | Worst | −0.99992 | −1 | −1 | −1 | −1 | −1
 | Mean | −0.99999 | −1 | −1 | −1 | −1 | −1
 | Std | 2.01E-05 | 1.08403E-12 | 0 | 0 | 0 | 0
 | rank | 6 | 5 | 1 | 1 | 1 | 1
f16 (D=2) | Best | 3 | 3 | 3 | 3 | 3 | 3
 | Worst | 3 | 3 | 3 | 3 | 3 | 3
 | Mean | 3 | 3 | 3 | 3 | 3 | 3
 | Std | 0.001135 | 1.87325E-15 | 1.34749E-15 | 3.57516E-06 | 1.28021E-15 | 1.96537E-15
 | rank | 6 | 3 | 2 | 5 | 1 | 4
f17 (D=3) | Best | −3.862782148 | −3.862782148 | −3.862782148 | −3.86278206 | −3.862782148 | −3.862782148
 | Worst | −3.862781981 | −3.862782148 | −3.862782148 | −3.85489964 | −3.862782148 | −3.862782148
 | Mean | −3.862782142 | −3.862782148 | −3.862782148 | −3.86116516 | −3.862782148 | −3.862782148
 | Std | 1.49E-07 | 2.71009E-15 | 2.71009E-15 | 0.002856252 | 2.71009E-15 | 2.71009E-15
 | rank | 5 | 1 | 1 | 6 | 1 | 1
f18 (D=6) | Best | −3.321995172 | −3.321995172 | −3.32199517 | −3.32199439 | −3.32199517 | −3.32199517
 | Worst | −3.321995171 | −3.321995172 | −3.20310205 | −3.08064935 | −3.20310205 | −3.20310205
 | Mean | −3.321995172 | −3.321995172 | −3.26888982 | −3.26221742 | −3.30217965 | −3.31010586
 | Std | 3.86E-11 | 1.09359E-13 | 0.058606391 | 0.075197206 | 0.045066321 | 0.036277689
 | rank | 2 | 1 | 5 | 6 | 4 | 3
f19 (D=4) | Best | −10.15319968 | −10.15319968 | −5.055197729 | −10.1531021 | −10.15319968 | −10.15319968
 | Worst | −10.0668242 | −10.15319968 | −5.055197729 | −5.05518379 | −10.15319968 | −10.15319968
 | Mean | −10.1467941 | −10.15319968 | −5.055197729 | −8.96947353 | −10.15319968 | −10.15319968
 | Std | 0.018476 | 2.45741E-14 | 9.03362E-16 | 2.181833329 | 7.12072E-15 | 7.2269E-15
 | rank | 5 | 4 | 1 | 6 | 2 | 3
f20 (D=4) | Best | −10.4029406 | −10.40294057 | −5.087671825 | −10.4028899 | −10.40294057 | −10.40294057
 | Worst | −10.290525 | −10.40294057 | −5.087671825 | −5.08766834 | −10.40294057 | −10.40294057
 | Mean | −10.3958391 | −10.40294057 | −5.087671825 | −10.0482985 | −10.40294057 | −10.40294057
 | Std | 0.006288 | 3.00029E-13 | 3.64717E-15 | 1.348448694 | 1.47518E-15 | 1.51161E-15
 | rank | 5 | 4 | 3 | 6 | 1 | 2
f21 (D=4) | Best | −10.5364098 | −10.53640982 | −5.128480787 | −10.5363233 | −10.53640982 | −10.53640982
 | Worst | −10.3324712 | −10.53640982 | −5.128480787 | −5.12847474 | −10.53640982 | −10.53640982
 | Mean | −10.525536 | −10.53640982 | −5.128480787 | −9.99689365 | −10.53640982 | −10.53640982
 | Std | 0.005168 | 1.85544E-12 | 3.59458E-15 | 1.645250848 | 1.80672E-15 | 1.80672E-15
 | rank | 5 | 4 | 3 | 6 | 1 | 1
f22 (D=2) | Best | −1 | −1 | −0.01277964 | −0.999999998 | −1 | −1
 | Worst | −1 | −1 | −0.01277964 | −0.999999721 | −1 | −1
 | Mean | −1 | −1 | −0.01277964 | −0.999999895 | −1 | −1
 | Std | 6.31E-10 | 0 | 0 | 7.66376E-08 | 0 | 0
 | rank | 5 | 1 | 1 | 6 | 1 | 1
f23 (D=2) | Best | −0.99995 | −1 | −1 | −1 | −1 | −1
 | Worst | −0.99028 | −0.999971724 | −1 | −0.99028409 | −1 | −1
 | Mean | −0.99696 | −0.999997733 | −1 | −0.99805682 | −1 | −1
 | Std | 0.003355 | 5.65366E-06 | 0 | 0.003952802 | 0 | 0
 | rank | 5 | 4 | 1 | 6 | 1 | 1
Sum1 | | 1 | 3 | 6 | 3 | 11 | 14
ave | | 4.5 | 3.875 | 3.4583 | 3.9583 | 2.3333 | 2.1467
RANK | | 6 | 4 | 3 | 5 | 2 | 1
  1. ABC: limit=5D;

  2. CS: β=1.5, ρ0=1.5;

  3. FPA: ρ=0.8;

  4. GWO: α linearly decreased from 2 to 0.

As Table 2 shows, on the high-dimensional unimodal test functions the ECOA found the optimal value of f01 to f04, with Mean and Std also equal to 0, whereas the other comparison algorithms could not reach the optimum on these benchmark functions; the ECOA still found the optimal solution in a 30-dimensional space. On f05, the ECOA ranked third, while its best result is smaller than those of the remaining algorithms. On f06, the GWO clearly found better results than the rest. For f07, although the theoretical optimum was not reached, the variance of the ECOA is the best, and its optimal value is also clearly superior.

Figures 1-14 show the convergence curves and ANOVA test plots of the ECOA and the comparison algorithms on the f01 to f07 test functions. The Y-axis of each convergence plot is logarithmic, and it is easy to see that the ECOA converges faster than the other algorithms on f01 to f04 and f06. Although the GWO attains good convergence accuracy on f05, the ECOA converges earlier than the GWO. In the ANOVA test plots, although the ECOA is not as good as the GWO and the COA on f05, it is very stable on f01 to f04 and f06 compared to the other algorithms.

Figure 1: D=30, Evolution Curves of Fitness Value for f01.
Figure 2: D=30, ANOVA Test of Global Minimum for f01.
Figure 3: D=30, Evolution Curves of Fitness Value for f02.
Figure 4: D=30, ANOVA Test of Global Minimum for f02.
Figure 5: D=30, Evolution Curves of Fitness Value for f03.
Figure 6: D=30, ANOVA Test of Global Minimum for f03.
Figure 7: D=30, Evolution Curves of Fitness Value for f04.
Figure 8: D=30, ANOVA Test of Global Minimum for f04.
Figure 9: D=30, Evolution Curves of Fitness Value for f05.
Figure 10: D=30, ANOVA Test of Global Minimum for f05.
Figure 11: D=30, Evolution Curves of Fitness Value for f06.
Figure 12: D=30, ANOVA Test of Global Minimum for f06.
Figure 13: D=30, Evolution Curves of Fitness Value for f07.
Figure 14: D=30, ANOVA Test of Global Minimum for f07.

As Table 3 shows, the ECOA found the theoretical optimal solution on f08 to f11, with a variance of 0 on f09 to f11, which shows its high stability. Although the optimal value was found on f08, the relatively large magnitude of the objective values leads to a correspondingly large variance. Although the results on f12 did not reach the theoretical optimum, the variance still ranks best of the six algorithms, and the effect is evident. This shows that the ECOA has better stability and robustness on multimodal function optimization problems.

Figures 15-24 show the convergence and variance plots of the ECOA and the comparison algorithms on the f08 to f12 test functions. In the convergence graphs of f09 to f12, the Y-axis is logarithmic; for f08, no logarithm is taken because the theoretical optimal value is negative. It is easy to see that the ECOA converges faster than the other algorithms on f08 to f12, and the variance graphs show that it is stable on these functions compared to the other algorithms.

Figure 15: D=30, Evolution Curves of Fitness Value for f08.
Figure 16: D=30, ANOVA Test of Global Minimum for f08.
Figure 17: D=30, Evolution Curves of Fitness Value for f09.
Figure 18: D=30, ANOVA Test of Global Minimum for f09.
Figure 19: D=30, Evolution Curves of Fitness Value for f10.
Figure 20: D=30, ANOVA Test of Global Minimum for f10.
Figure 21: D=30, Evolution Curves of Fitness Value for f11.
Figure 22: D=30, ANOVA Test of Global Minimum for f11.
Figure 23: D=30, Evolution Curves of Fitness Value for f12.
Figure 24: D=30, ANOVA Test of Global Minimum for f12.

As Table 4 shows for the 11 low-dimensional test functions f13 to f23, the ECOA reaches the reference results and its overall ranking is also relatively high, although it does not rank first in variance on f13, f14, f16, f18, and f20. It still found the theoretical optimal values there, with variances of essentially the same order as the leaders. Although the FPA ranked first on f19, it did not find the theoretical optimal value, whereas both the COA and the ECOA did, with variances not differing much from the FPA's. On f15, f17, f21, f22, and f23, the ECOA ranks first, showing stronger search capability, higher accuracy, and high robustness.

Figures 25-46 show the convergence and variance graphs of the ECOA and the comparison algorithms on the f13 to f23 test functions. For f13 the Y-axis is logarithmic, and it is easy to see that the ECOA converges faster than the other algorithms. For f14 to f23, the theoretical optimal values are negative, so no logarithm is taken. The convergence of the ECOA on f14 to f20 and f23 is comparatively fast, and the ANOVA plots show that it is very stable on f14 to f23 compared to the other algorithms.

Figure 25: D=4, Evolution Curves of Fitness Value for f13.
Figure 26: D=4, ANOVA Test of Global Minimum for f13.
Figure 27: D=2, Evolution Curves of Fitness Value for f14.
Figure 28: D=2, ANOVA Test of Global Minimum for f14.
Figure 29: D=2, Evolution Curves of Fitness Value for f15.
Figure 30: D=2, ANOVA Test of Global Minimum for f15.
Figure 31: D=2, Evolution Curves of Fitness Value for f16.
Figure 32: D=2, ANOVA Test of Global Minimum for f16.
Figure 33: D=3, Evolution Curves of Fitness Value for f17.
Figure 34: D=3, ANOVA Test of Global Minimum for f17.
Figure 35: D=6, Evolution Curves of Fitness Value for f18.
Figure 36: D=6, ANOVA Test of Global Minimum for f18.
Figure 37: D=4, Evolution Curves of Fitness Value for f19.
Figure 38: D=4, ANOVA Test of Global Minimum for f19.
Figure 39: D=4, Evolution Curves of Fitness Value for f20.
Figure 40: D=4, ANOVA Test of Global Minimum for f20.
Figure 41: D=4, Evolution Curves of Fitness Value for f21.
Figure 42: D=4, ANOVA Test of Global Minimum for f21.
Figure 43: D=2, Evolution Curves of Fitness Value for f22.
Figure 44: D=2, ANOVA Test of Global Minimum for f22.
Figure 45: D=2, Evolution Curves of Fitness Value for f23.
Figure 46: D=2, ANOVA Test of Global Minimum for f23.

4.2 p-Values of the Wilcoxon Rank-Sum Test

In this section, the Wilcoxon rank-sum test [12, 26] is used to assess the significance of the performance differences, with the threshold p = 0.05: p > 0.05 indicates that the observed difference may be accidental, whereas p < 0.05 indicates that it is not. In the original typesetting of Table 5, entries with p > 0.05 were underlined; N/A marks pairs for which the test could not be applied (e.g., both algorithms produced identical results in every run).
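A minimal sketch of this test using SciPy's ranksums follows (the data below are stand-ins, not the paper's results):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Stand-ins for the 30 final objective values of ECOA and one rival
# on a single test function (not the paper's actual data).
ecoa_runs = rng.normal(1e-10, 1e-11, 30)
rival_runs = rng.normal(1e-3, 1e-4, 30)

stat, p = ranksums(ecoa_runs, rival_runs)
print(f"p = {p:.3g}:", "difference is not accidental" if p < 0.05 else "may be accidental")
```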

In Table 5, in the comparison to the ABC, the ECOA is significantly better on every function. The CS, FPA, and GWO columns, by contrast, each contain a few entries greater than 0.05, and although the comparison to the COA has five entries above 0.05, the overall picture is still good. Therefore, the results are not accidental, and the ECOA's function test performance can be considered excellent.

Table 5:

p-Values of the Wilcoxon Rank-Sum Test Results.

Functions | ECOA vs. COA | ECOA vs. GWO | ECOA vs. FPA | ECOA vs. ABC | ECOA vs. CS
f01 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12
f02 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12
f03 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12
f04 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12
f05 | 0.026077 | 0.001953 | 3.02E-11 | 3.02E-11 | 3.02E-11
f06 | 3.2E-09 | 1.21E-12 | 0.182655 | 3.02E-11 | 3.02E-11
f07 | 9.92E-11 | 0.000471 | 3.02E-11 | 3.02E-11 | 3.02E-11
f08 | 2.00141E-10 | 2.78622E-11 | 2.75124E-11 | 5.77359E-07 | 2.78622E-11
f09 | N/A | 0.333711 | 1.21E-12 | 1.21E-12 | 1.21E-12
f10 | N/A | 8.6442E-14 | 1.21178E-12 | 1.21178E-12 | 1.21178E-12
f11 | N/A | 6.60964E-05 | 1.21178E-12 | 1.21178E-12 | 1.21178E-12
f12 | 2.60151E-08 | 3.01986E-11 | 3.01986E-11 | 3.01986E-11 | 3.01986E-11
f13 | 0.534244 | 1.96E-10 | 5.09E-10 | 2.73E-11 | 5.09E-10
f14 | N/A | 1.21E-12 | 0.041774 | 2.71E-14 | N/A
f15 | N/A | N/A | N/A | 1.21178E-12 | 5.82631E-09
f16 | 0.100331 | 1.09E-11 | 0.000192 | 1.08E-11 | 0.502352
f17 | N/A | 1.21178E-12 | N/A | 9.65378E-13 | N/A
f18 | 0.367771 | 9.53E-10 | 9.53E-10 | 3.42E-08 | 8.64E-08
f19 | 0.160742 | 1.21E-12 | 1.69E-14 | 1.14E-12 | 5E-11
f20 | 0.790214 | 1.01E-11 | 4.28E-13 | 2.39E-11 | 1.11E-10
f21 | N/A | 1.21178E-12 | 6.13374E-14 | 1.19214E-12 | 1.14996E-12
f22 | N/A | 1.21E-12 | 1.69E-14 | 0.000661 | N/A
f23 | N/A | 0.011035 | N/A | 1.21E-12 | 1.21E-12

4.3 ECOA for Engineering Optimization Problem

Design optimization, especially structural design optimization, has a wide range of applications in engineering and industry and typically involves many constraints, so such problems test how well an algorithm handles constraints. To verify the effectiveness of the algorithm on complex optimization problems, three engineering design examples are used in this section: the pressure vessel design problem [10], the cantilever beam design problem [3], and the welded beam design problem [5].

4.3.1 Pressure Vessel Design Problem

The pressure vessel design problem [10] is a classical mixed-constraint optimization problem. The working pressure is 2000 psi and the maximum capacity is 750 ft³. As shown in Figure 47, both ends of the cylindrical container are capped with hemispherical heads. The shell is made from rolled steel plate in two halves, joined by two longitudinal welds to form the cylinder. The goal is to minimize the total cost, including the costs of materials, forming, and welding.

Figure 47: Pressure Vessel Design Problem.

Minimize f(x) = 0.6224·x1·x3·x4 + 1.7781·x2·x3² + 3.1661·x1²·x4 + 19.84·x1²·x3

Subject to
g1(x) = −x1 + 0.0193·x3 ≤ 0,
g2(x) = −x2 + 0.00954·x3 ≤ 0,
g3(x) = −π·x3²·x4 − (4/3)·π·x3³ + 1,296,000 ≤ 0,
g4(x) = x4 − 240 ≤ 0.
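For illustration, the objective and constraints can be coded directly; the sketch below evaluates ECOA's reported design from Table 6 (small positive constraint residuals may appear only because the printed decision variables are rounded):

```python
import numpy as np

def pressure_vessel(x):
    """Cost and constraints of the pressure vessel problem; g_i <= 0 is feasible.

    x = (x1, x2, x3, x4) are conventionally the shell thickness, head
    thickness, inner radius, and cylinder length.
    """
    x1, x2, x3, x4 = x
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [-x1 + 0.0193 * x3,
         -x2 + 0.00954 * x3,
         -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 1296000,
         x4 - 240]
    return cost, g

# ECOA's reported design from Table 6 reproduces a cost near 5885.33.
cost, g = pressure_vessel([0.7781686, 0.3846492, 40.31962, 199.999998])
```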

In this paper, the ECOA is used to solve this problem; each comparison algorithm was run 20 times independently. The results of GA [8], HS [14], ABC [18], CS [10], GSA [22], CoBiDE [25], DSA [4], and AMO [15], together with the ECOA, are presented in Table 6.

Table 6:

Comparison Results for the Pressure Vessel Design Problem.

Algorithm | x1 | x2 | x3 | x4 | Optimal cost
GSA [22] | 1.125000 | 0.625000 | 55.9886598 | 84.4542025 | 8538.8359
GA [8] | 0.937500 | 0.500000 | 48.329000 | 112.679000 | 6410.3811
HS [14] | 1.125000 | 0.625000 | 58.278900 | 43.7549000 | 7198.433
ABC [18] | 0.8337011 | 0.41792373 | 43.0948918 | 164.781043 | 6021.770461
CS [10] | 0.812500 | 0.437500 | 42.0984456 | 176.6365958 | 6059.7143348
CoBiDE [25] | 0.7781686 | 0.38464916 | 40.3196187 | 199.999998 | 5885.332773
DSA [4] | 0.7826957 | 0.38531758 | 40.3859923 | 199.881092 | 5928.486479
AMO [15] | 0.7850776 | 0.38917280 | 40.6371423 | 195.973315 | 5913.45051
ECOA | 0.7781686 | 0.3846492 | 40.31962 | 199.999998 | 5885.332773

The cost obtained by the ECOA is as low as or lower than that of every comparison algorithm (it matches the best result, reported for CoBiDE). Thus, using the ECOA to solve the pressure vessel design problem is feasible.

4.3.2 Cantilever Beam Design Problem

The structure of the cantilever beam design problem [11] is shown in Figure 48; the goal is to minimize the weight of a cantilever beam built from five hollow square blocks. The first block is fixed, and the fifth carries the vertical load. Five parameters define the cross-sections of the blocks. The problem takes the following form:

Figure 48: Cantilever Beam Design Problem.

Minimize f(x) = 0.0624·(x1 + x2 + x3 + x4 + x5);

Subject to g(x) = 61/x1³ + 37/x2³ + 19/x3³ + 7/x4³ + 1/x5³ − 1 ≤ 0.
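A sketch of the objective and constraint, evaluated at ECOA's reported design from Table 7:

```python
def cantilever(x):
    """Cantilever beam weight and its single constraint g(x) <= 0."""
    cost = 0.0624 * sum(x)
    g = 61 / x[0]**3 + 37 / x[1]**3 + 19 / x[2]**3 + 7 / x[3]**3 + 1 / x[4]**3 - 1
    return cost, g

# ECOA's reported design from Table 7: cost ~ 1.33996 with g ~ 0,
# i.e. the constraint is active at the optimum.
cost, g = cantilever([6.015957, 5.309176, 4.4943367, 3.5015356, 2.1526533])
```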

In this paper, the ECOA is used to solve this problem. The comparison methods include the method of moving asymptotes (MMA) [5], generalized convex approximation (GCA_I) [5], GCA_II [5], CS [10], symbiotic organisms search (SOS) [2], and MVO [20]; each was run independently 20 times, and the results are presented in Table 7.

Table 7:

Comparison Results for the Cantilever Beam Design Problem.

Algorithm | x1 | x2 | x3 | x4 | x5 | Optimal cost
MMA [5] | 6.0100 | 5.3000 | 4.4900 | 3.4900 | 2.1500 | 1.3400
GCA_I [5] | 6.0100 | 5.3000 | 4.4900 | 3.4900 | 2.1500 | 1.3400
GCA_II [5] | 6.0100 | 5.3000 | 4.4900 | 3.4900 | 2.1500 | 1.3400
CS [10] | 6.0089 | 5.3049 | 4.5023 | 3.5077 | 2.1504 | 1.33999
SOS [2] | 6.01878 | 5.30344 | 4.49587 | 3.49896 | 2.15564 | 1.33996
MVO [20] | 6.02394 | 5.30601 | 4.49501 | 3.49602 | 2.15273 | 1.3399595
ECOA | 6.015957 | 5.309176 | 4.4943367 | 3.5015356 | 2.1526533 | 1.3399564

Comparison shows that the ECOA attains the lowest cost on the cantilever problem, which verifies the superiority of the algorithm.

4.3.3 Welded Beam Design Problem

The welded beam design problem is widely used both academically and in practical engineering design (Figure 49). The aim of the design is to minimize the manufacturing cost. The design is governed by the shear stress (τ), the beam bending stress (θ), the bar buckling load (Pc), the beam end deflection (δ), and the normal stress (σ), which enter seven constraints. The optimization problem, with the four decision variables x1 to x4 (the weld thickness h, weld length l, bar height t, and bar thickness b of Table 8) and their ranges, can be stated as follows:

Figure 49: Welded Beam Design Problem.

Minimize f(x) = 1.10471·x1²·x2 + 0.04811·x3·x4·(14 + x2);

Subject to
g1(x) = τ(x) − τmax ≤ 0,
g2(x) = σ(x) − σmax ≤ 0,
g3(x) = x1 − x4 ≤ 0,
g4(x) = 0.125 − x1 ≤ 0,
g5(x) = δ(x) − 0.25 ≤ 0,
g6(x) = P − Pc(x) ≤ 0,
g7(x) = 0.10471·x1² + 0.04811·x3·x4·(14 + x2) − 5 ≤ 0;

Variable ranges: 0.1 ≤ x1 ≤ 2, 0.1 ≤ x2 ≤ 10, 0.1 ≤ x3 ≤ 10, 0.1 ≤ x4 ≤ 2,

where
τ(x) = √(τ1² + 2·τ1·τ2·x2/(2R) + τ2²),
τ1 = P/(√2·x1·x2),  τ2 = M·R/J,
M = P·(L + x2/2),
R = √(x2²/4 + ((x1 + x3)/2)²),
J(x) = 2·{√2·x1·x2·[x2²/4 + ((x1 + x3)/2)²]},
σ(x) = 6·P·L/(x4·x3²),
δ(x) = 6·P·L³/(E·x3³·x4),
Pc(x) = (4.013·E·√(x3²·x4⁶/36)/L²)·(1 − (x3/(2L))·√(E/(4G))).
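The full model can be coded directly, as sketched below. The constants (P = 6000 lb, L = 14 in, E = 30×10⁶ psi, G = 12×10⁶ psi, τmax = 13,600 psi, σmax = 30,000 psi) are the values customary in the literature and are assumptions here, since the paper does not restate them:

```python
import numpy as np

def welded_beam(x):
    """Welded beam cost and constraints; g_i <= 0 is feasible.

    Constants are the values customary in the literature for this problem
    (assumed, as the paper does not restate them).
    """
    h, l, t, b = x
    P, L, E, G = 6000.0, 14.0, 30e6, 12e6
    tau_max, sigma_max = 13600.0, 30000.0

    tau1 = P / (np.sqrt(2) * h * l)
    M = P * (L + l / 2)
    R = np.sqrt(l**2 / 4 + ((h + t) / 2) ** 2)
    J = 2 * np.sqrt(2) * h * l * (l**2 / 4 + ((h + t) / 2) ** 2)
    tau2 = M * R / J
    tau = np.sqrt(tau1**2 + 2 * tau1 * tau2 * l / (2 * R) + tau2**2)
    sigma = 6 * P * L / (b * t**2)
    delta = 6 * P * L**3 / (E * t**3 * b)
    Pc = (4.013 * E * np.sqrt(t**2 * b**6 / 36) / L**2) * (1 - (t / (2 * L)) * np.sqrt(E / (4 * G)))

    g = [tau - tau_max, sigma - sigma_max, h - b, 0.125 - h,
         delta - 0.25, P - Pc, 0.10471 * h**2 + 0.04811 * t * b * (14 + l) - 5]
    cost = 1.10471 * h**2 * l + 0.04811 * t * b * (14 + l)
    return cost, g

# ECOA's reported design from Table 8: cost ~ 1.6952, with the shear-stress,
# normal-stress, and buckling constraints coming out active (~0).
cost, g = welded_beam([0.20573, 3.25312, 9.036624, 0.20573])
```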

In this paper, the ECOA is used to solve this problem; the comparison includes GWO [19], GSA [19], CPSO [19], GA (Coello) [10], GA (Deb) [9], GA (Deb) [7], HS [5], Random [21], Simplex [21], David [21], APPROX [21], and BA [11], each run 20 times independently, with the results presented in Table 8.

Table 8:

Comparison Results for the Welded Beam Design Problem.

Algorithm | h | l | t | b | Optimal cost
GWO [19] | 0.205676 | 3.478377 | 9.03681 | 0.205778 | 1.72624
GSA [19] | 0.182129 | 3.856979 | 10.0000 | 0.202376 | 1.87995
CPSO [19] | 0.202369 | 3.544214 | 9.048210 | 0.205723 | 1.72802
GA (Coello) [10] | N/A | N/A | N/A | N/A | 1.8245
GA (Deb) [9] | N/A | N/A | N/A | N/A | 2.3800
GA (Deb) [7] | 0.2489 | 6.1730 | 8.1789 | 0.2533 | 2.4331
HS [5] | 0.2442 | 6.2231 | 8.2915 | 0.2443 | 2.3807
Random [21] | 0.4575 | 4.7313 | 5.0853 | 0.6600 | 4.1185
Simplex [21] | 0.2792 | 5.6256 | 7.7512 | 0.2796 | 2.5307
David [21] | 0.2434 | 6.2552 | 8.2915 | 0.2444 | 2.3841
APPROX [21] | 0.2444 | 6.2189 | 8.2915 | 0.2444 | 2.3815
BA [11] | 0.2015 | 3.562 | 9.0414 | 0.2057 | 1.7312
ECOA | 0.20573 | 3.25312 | 9.036624 | 0.20573 | 1.695247

By comparison, the ECOA attains the lowest cost on the welded beam design problem, with the most obvious effect, which shows that the ECOA has clear advantages on this problem.

4.4 Result Analysis

In Section 4.1, 23 standard benchmark functions were selected to evaluate the performance of the ECOA: f01 to f07 are unimodal functions, f08 to f12 are multimodal functions, and f13 to f23 are low-dimensional functions. The experimental results are shown in Tables 2-4, and Figures 1-46 show the corresponding convergence and variance graphs. The tables show that the ECOA can find more accurate solutions, while the convergence and variance plots reflect its faster convergence and higher stability. In Section 4.3, three structural design problems (the pressure vessel, welded beam, and cantilever beam design problems) were selected to test the proposed ECOA. The results show that the ECOA performs well in solving constrained optimization problems.

5 Conclusions

In this paper, the ECOA is proposed based on the COA and used to solve function optimization and structural engineering design problems. Introducing the elite opposition-based learning strategy helps to improve its exploration ability. From the results on the 23 benchmark functions and three engineering design problems, the performance of the ECOA is superior to that of the other population-based intelligent algorithms considered in this paper. Compared to the other algorithms, the ECOA converges faster and more accurately, while its smaller variance demonstrates its stability. The ECOA is also more robust, so its development prospects are relatively broad.

In future research, we hope to combine the COA with the two-layer mechanism of the cultural algorithm to produce a three-layer cultural cognitive algorithm and improve its performance. At the same time, the COA will be applied to the large-scale 0-1 knapsack problem, a traditional NP-hard problem, to examine the search ability and robustness of the algorithm. In addition, we will improve the algorithm itself through learning strategies and other effective techniques, and apply it to more practical problems, such as the wireless sensor network coverage optimization problem and image edge detection, expanding the scope of application of the cognitive behavior algorithm through in-depth study.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under grant nos. 61463007 and 61563008 and by the Project of Guangxi University for Nationalities Science Foundation under grant no. 2012MDZD037.

Bibliography

[1] B. Basturk and D. Karaboga, An artificial bee colony (ABC) algorithm for numeric function optimization, in: IEEE Swarm Intelligence Symposium, Indianapolis, IN, 2006.

[2] M.-Y. Cheng and D. Prayogo, Symbiotic organisms search: a new metaheuristic optimization algorithm, Comput. Struct. 139 (2014), 98–112. doi: 10.1016/j.compstruc.2014.03.007.

[3] H. Chickermane and H. C. Gea, Structural optimization using a new local approximation method, Int. J. Numer. Methods Eng. 39 (1996), 829–846. doi: 10.1002/(SICI)1097-0207(19960315)39:5<829::AID-NME884>3.0.CO;2-U.

[4] P. Civicioglu, Transforming geocentric Cartesian coordinates to geodetic coordinates by using differential search algorithm, Comput. Geosci. 46 (2012), 229–247. doi: 10.1016/j.cageo.2011.12.011.

[5] C. A. Coello, Use of a self-adaptive penalty approach for engineering optimization problems, Comput. Ind. 41 (2000), 113–127. doi: 10.1016/S0166-3615(99)00046-9.

[6] M. Crepinšek, S.-H. Liu and M. Mernik, Replication and comparison of computational experiments in applied evolutionary computing: common pitfalls and guidelines to avoid them, Appl. Soft Comput. 19 (2014), 161–170. doi: 10.1016/j.asoc.2014.02.009.

[7] K. Deb, Optimal design of a welded beam via genetic algorithms, AIAA J. 29 (1991), 2013–2015. doi: 10.2514/3.10834.

[8] K. Deb and A. S. Gene, A robust optimal design technique for mechanical component design, in: D. Dasgupta and Z. Michalewicz (eds.), Evolutionary Algorithms in Engineering Applications, Springer, Berlin, pp. 497–514, 1997. doi: 10.1007/978-3-662-03423-1_27.

[9] K. Deb, An efficient constraint handling method for genetic algorithms, Comput. Methods Appl. Mech. Eng. 186 (2000), 311–338. doi: 10.1016/S0045-7825(99)00389-8.

[10] A. H. Gandomi, X.-S. Yang and A. H. Alavi, Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems, Eng. Comput. 29 (2013), 17–35. doi: 10.1007/s00366-011-0241-y.

[11] A. Gandomi, X. S. Yang, A. Alavi and S. Talatahari, Bat algorithm for constrained optimization tasks, Neural Comput. Appl. 22 (2013), 1239–1255. doi: 10.1007/s00521-012-1028-9.

[12] S. Garcia, D. Molina, M. Lozano and F. Herrera, A study on the use of nonparametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization, J. Heuristics 15 (2008), 617–644. doi: 10.1007/s10732-008-9080-4.

[13] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Technical Report TR06, Computer Engineering Department, Engineering Faculty, Erciyes University, 2005.

[14] K. S. Lee and Z. W. Geem, A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice, Comput. Methods Appl. Mech. Eng. 194 (2005), 3902–3933. doi: 10.1016/j.cma.2004.09.007.

[15] X. Li, J. Zhang and M. Yin, Animal migration optimization: an optimization algorithm inspired by animal migration behavior, Neural Comput. Appl. 24 (2014), 1867–1877. doi: 10.1007/s00521-013-1433-8.

[16] M. Li, H. Zhao, X. Weng and T. Han, Cognitive behavior optimization algorithm for solving optimization problems, Appl. Soft Comput. 39 (2016), 199–222. doi: 10.1016/j.asoc.2015.11.015.

[17] J. J. Liang, B. Y. Qu and P. N. Suganthan, Problem definitions and evaluation criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization, Technical Report, 2013, pp. 1–32.

[18] M. Mernik, S. H. Liu, D. Karaboga and M. Crepinšek, On clarifying misconceptions when comparing variants of the artificial bee colony algorithm by offering a new implementation, Inf. Sci. 291 (2015), 115–127. doi: 10.1016/j.ins.2014.08.040.

[19] S. Mirjalili, S. M. Mirjalili and A. Lewis, Grey wolf optimizer, Adv. Eng. Softw. 69 (2014), 46–61. doi: 10.1016/j.advengsoft.2013.12.007.

[20] S. Mirjalili, S. M. Mirjalili and A. Hatamlou, Multi-verse optimizer: a nature-inspired algorithm for global optimization, Neural Comput. Appl. 27 (2016), 495–513. doi: 10.1007/s00521-015-1870-7.

[21] K. Ragsdell and D. Phillips, Optimal design of a class of welded structures using geometric programming, ASME J. Eng. Ind. 98 (1976), 1021–1025. doi: 10.1115/1.3438995.

[22] E. Rashedi, H. Nezamabadi-Pour and S. Saryazdi, GSA: a gravitational search algorithm, Inf. Sci. 179 (2009), 2232–2248. doi: 10.1016/j.ins.2009.03.004.

[23] R. Storn and K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (1997), 341–359. doi: 10.1023/A:1008202821328.

[24] H. R. Tizhoosh, Opposition-based learning: a new scheme for machine intelligence, in: Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation, IEEE, USA, pp. 695–701, 2005. doi: 10.1109/CIMCA.2005.1631345.

[25] Y. Wang, H. X. Li, T. W. Huang and L. Li, Differential evolution based on covariance matrix learning and bimodal distribution parameter setting, Appl. Soft Comput. 18 (2014), 232–247. doi: 10.1016/j.asoc.2014.01.038.

[26] F. Wilcoxon, Individual comparisons by ranking methods, Biometrics Bull. 1 (1945), 80–83. doi: 10.2307/3001968.

[27] X. S. Yang and S. Deb, Cuckoo search via Lévy flights, in: World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), IEEE, USA, pp. 210–214, 2009. doi: 10.1109/NABIC.2009.5393690.

[28] X. S. Yang, Flower pollination algorithm for global optimization, in: Unconventional Computation and Natural Computation, Lecture Notes in Computer Science, vol. 7445, pp. 240–249, 2012. doi: 10.1007/978-3-642-32894-7_27.

[29] X.-S. Yang, Nature-Inspired Optimization Algorithms, Elsevier, Amsterdam, Netherlands, 2014. doi: 10.1016/B978-0-12-416743-8.00005-1.

[30] Y. Zhao and X.-S. Yang, New Meta-heuristic Optimization Methods, Science Press, Beijing, 2013.

Received: 2017-02-16
Published Online: 2017-06-29
Published in Print: 2019-04-24

©2019 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
