
A hybrid engineering algorithm of the seeker algorithm and particle swarm optimization

  • Haipeng Liu

    Haipeng Liu is an associate professor in the Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China. His research interests include: new energy technology, energy saving, and simulation, modeling and optimization of manufacturing process using intelligence techniques.

  • Shaomi Duan

    Shaomi Duan is currently a PhD Student in the Faculty of Civil Engineering and Mechanics, Kunming University of Science and Technology, Kunming, Yunnan, China. Her research interests include: modeling and optimization of manufacturing process using statistical and computational intelligence techniques; and optimization using metaheuristics.

  • Huilong Luo

    Huilong Luo is a professor in the Faculty of Civil Engineering and Mechanics, Kunming University of Science and Technology, Kunming, Yunnan, China. His main research interests are optimization, artificial intelligence, manufacturing processes, heat exchange, and energy saving.

Published/Copyright: July 7, 2022

Abstract

A new hybrid algorithm is proposed based on the combination of the seeker optimization algorithm and particle swarm optimization. The hybrid algorithm uses a double-population evolution strategy: two populations of individuals are evolved by the seeker optimization algorithm and by particle swarm optimization separately, and the two populations employ an information sharing mechanism to implement coevolution. The hybrid algorithm enhances the diversity of the individuals and avoids falling into local optima. The hybrid algorithm is compared with particle swarm optimization, the hybrid simulated annealing and genetic algorithm, the dragonfly algorithm, the brain storming algorithm, the gravitational search algorithm, the sine cosine algorithm, the salp swarm algorithm, the multi-verse optimizer, and the seeker optimization algorithm on 15 benchmark functions, five proportional integral differential control parameter models, and six constrained engineering optimization problems. According to the experimental results, the hybrid algorithm can be applied to the benchmark functions, to proportional integral differential control parameter optimization, and to constrained engineering optimization problems, and its optimization ability and robustness are better than those of the compared algorithms.

1 Introduction

In recent years, modern optimization techniques have aroused great interest in the scientific and technical community across a wide variety of fields, owing to their ability to solve problems with a non-linear and non-convex dependence on design parameters. According to the no free lunch (NFL) theorem, no single optimization algorithm can solve all problems [1]. Therefore, researchers propose new algorithms or enhance existing ones to deal with optimization problems. Existing algorithms include the genetic algorithm (GA) [2], the particle swarm optimization (PSO) [3], the simulated annealing (SA) [4], the harmony search (HS) [5], the dragonfly algorithm (DA) [6], the brain storming algorithm (BSO) [7], the gravitational search algorithm (GSA) [8], the moth-flame optimization (MFO) [9, 10], the sine cosine algorithm (SCA) [11], the salp swarm algorithm (SSA) [12], the multi-verse optimizer (MVO) [13], the seeker optimization algorithm (SOA) [14], the artificial bee colony (ABC) algorithm [15], the krill herd (KH) [16], the monarch butterfly optimization (MBO) [17], the elephant herding optimization (EHO) [18], the moth search (MS) algorithm [19], the slime mould algorithm (SMA) [20], the Harris hawks optimization (HHO) [21], the spotted hyena optimization algorithm (SHO) [22], the butterfly optimization algorithm (BOA) [23], the Henry gas solubility optimization algorithm (HGS) [24], the equilibrium optimization algorithm (EO) [25], the mine blast algorithm (MBA) [26], and the interior search algorithm (ISA) [27].

However, some optimization algorithms are still not very successful on optimization problems. Typical issues include low optimization precision, premature convergence, getting trapped in local optima, slow convergence speed, and insufficient robustness. To better overcome these issues, several improved algorithms have proven to be feasible and have been used in practical engineering. In particular, hybridizing a heuristic algorithm with other algorithms according to their complementary characteristics is now a popular improvement strategy. Such hybridizations have been shown to be effective global optimization algorithms and have been applied to practical problems. For example, a hybrid of the genetic algorithm (GA) and particle swarm optimization (PSO) was applied to recurrent neural/fuzzy network design [28]. The grey wolf, whale, water cycle, ant lion, and sine cosine algorithms were compared for optimizing a vehicle engine connecting rod [29]. A hybrid metaheuristic of differential evolution (DE) and cuckoo search (CS) was implemented to solve the unmanned combat aerial vehicle (UCAV) path planning problem [30]. The Harris hawks, salp swarm, grasshopper, and dragonfly algorithms were applied to optimize the structural design of vehicle components [31]. A hybrid particle swarm optimization (PSO) and ant colony optimization (ACO) algorithm was implemented for hierarchical classification [32]. The Harris hawks, grasshopper, and multi-verse optimization algorithms were used to optimize machining parameters in manufacturing operations [33]. A hybrid PSO and adaptive large neighborhood search (ALNS) algorithm for software and mobile applications was applied to transportation in ice manufacturing industry 3.5 [34]. A new hybrid grasshopper optimization algorithm was applied to the robust design of a robot gripper mechanism [35]. A new hybrid Taguchi-salp swarm optimization algorithm was used for the robust design of real-world engineering problems [36]. A novel hybrid Harris hawks simulated annealing algorithm with an RBF-based metamodel was used for the design optimization of highway guardrails [37].

Dai et al. proposed the SOA in 2006 [38]; it aims to solve practical optimization problems by mimicking human search behavior and information exchange. In the past decade, the SOA has been used in many fields, such as parameter estimation of time-delay chaotic systems [39], optimal reactive power dispatch [40], a challenging set of benchmark problems [41], the design of digital filters [42], optimizing the parameters of artificial neural networks [43], optimizing the model and structure of fuel cells [44], the novel human group optimizer algorithm [45], and several practical applications [46]. In the initial stage of an optimization run, the SOA converges faster than other algorithms; however, once all individuals gather near the best individual, they lose diversity and the search becomes premature.

Particle swarm optimization was introduced by Kennedy and Eberhart in 1995 [3] as a swarm intelligence technique for solving optimization problems, and in the following years it became a popular swarm intelligence (SI) algorithm. The PSO has been used to solve various mathematical, engineering, design, network, robotic, and image processing optimization problems [28, 32, 34, 47]. In the PSO, the particles explore the search space by following the personal and global best experiences. After population initialization, new populations are generated through the velocity, the acceleration coefficients, and the inertia weight; this maintains the diversity of the population and allows the global optimal solution to be found after several iterations. However, the PSO also has flaws: it is prone to being trapped in local optima because the search process is led by only one leader.

The SOA can easily suffer from premature convergence when solving global optimization problems. To overcome this deficiency, this paper proposes a new hybrid global optimization algorithm based on the seeker optimization algorithm (SOA) and particle swarm optimization (PSO). The double-population evolution strategy allows the SOA and the PSO to give full play to their respective advantages. The PSO keeps the population diverse, which remedies the main defect of using the SOA alone, namely falling into a local optimum because of the loss of population diversity. The PSO also converges quickly, which compensates for the SOA's shortcoming in convergence speed; using the PSO individuals to guide the evolution of the SOA individuals reduces the risk of converging to a local optimum. The hybrid algorithm thus ensures both the accuracy of the solution and the speed of solving the problem. Finally, the SOAPSO algorithm is compared with the PSO, SA_GA, DA, BSO, GSA, SCA, SSA, MVO, and SOA on a complete set of well-known benchmark functions, five proportional integral differential (PID) control parameter optimization problems, and six constrained engineering optimization problems taken from the literature. According to the experimental results, the SOAPSO is feasible on the benchmark functions, the PID parameter optimization problems, and the constrained engineering optimization problems, and it finds better solutions. The hybrid successfully overcomes the SOA's tendency to converge prematurely to local optima, and the SOAPSO shows better optimization performance and robustness than the original SOA. The advantages of the SOAPSO are summed up as follows.

  1. The SOAPSO algorithm is proposed to enhance the precision and robustness of the optimization process.

  2. The hybrid strategy can improve the diversity of individuals, enhance local search, and avert premature convergence.

The rest of the article is structured as follows. Section 2 presents the SOA and the PSO algorithm. Section 3 describes the SOAPSO. Section 4 reports the optimization experiments, results, and analyses. Finally, Section 5 gives some conclusions.

2 Basic SOA and PSO algorithm

2.1 Seeker optimization algorithm

The SOA is based on an in-depth study of human search behavior. It treats optimization as a search for an optimal solution by a search team in the search space, taking the search team as the population and the position of each searcher as a candidate solution. The search direction is determined by an "experience gradient", and the search step size is determined by fuzzy reasoning; the searchers' positions in the search space are then updated from the search direction and step size to obtain the optimal solution.

2.1.1 Search direction

The forward direction of the search is defined by the experience gradient obtained from the individual's own movement and from the evaluation of other individuals' historical search positions. The egoistic direction $\vec{f}_{i,e}(t)$, altruistic direction $\vec{f}_{i,a}(t)$, and pre-emptive direction $\vec{f}_{i,p}(t)$ of the ith individual in any dimension are given by

(1) $\vec{f}_{i,e}(t) = \vec{p}_{i,\text{best}} - \vec{x}_i(t)$

(2) $\vec{f}_{i,a}(t) = \vec{g}_{i,\text{best}} - \vec{x}_i(t)$

(3) $\vec{f}_{i,p}(t) = \vec{x}_i(t_1) - \vec{x}_i(t_2)$

The searcher uses the method of a random weighted average to obtain the search orientation.

(4) $\vec{f}_i(t) = \mathrm{sign}\left(\omega \vec{f}_{i,p}(t) + \varphi_1 \vec{f}_{i,e}(t) + \varphi_2 \vec{f}_{i,a}(t)\right)$

with $t_1, t_2 \in \{t, t-1, t-2\}$, where $\vec{x}_i(t_1)$ and $\vec{x}_i(t_2)$ are the better and the worse position, respectively, among $\{\vec{x}_i(t-2), \vec{x}_i(t-1), \vec{x}_i(t)\}$; $\vec{g}_{i,\text{best}}$: the historical optimal position in the neighborhood of the ith searcher; $\vec{p}_{i,\text{best}}$: the historical optimal position of the ith searcher up to the current iteration; $\varphi_1$ and $\varphi_2$: random numbers in [0,1]; and $\omega$: the inertia weight.

2.1.2 Search step size

The SOA draws on the approximation capability of fuzzy reasoning. Through a computer language it describes some of the natural language rules with which humans reason during a search, thereby simulating intelligent human search behavior. A simple fuzzy rule adapts the algorithm to the objective optimization problem: a larger objective value calls for a larger search step, while a smaller (better) objective value calls for a correspondingly smaller step. The Gaussian membership function is adopted to describe the search step size:

(5) $\mu(\alpha) = e^{-\alpha^2 / (2\delta^2)}$

with α and δ: parameters of a membership function.

According to Equation (5), the membership degree of an output variable outside $[-3\delta, 3\delta]$ is less than 0.0111; therefore, $\mu_{\min} = 0.0111$. Under normal circumstances, the optimal position of an individual has $\mu_{\max} = 1.0$ and the worst position has $\mu = 0.0111$. However, to accelerate convergence and give the optimal individual an uncertain step size, $\mu_{\max}$ is set to 0.9 in this paper. The following function is selected as the fuzzy variable with a "small" objective function value:

(6) $\mu_i = \mu_{\max} - \dfrac{s - I_i}{s - 1}\,(\mu_{\max} - \mu_{\min}), \quad i = 1, 2, \ldots, s$

(7) $\mu_{ij} = \mathrm{rand}(\mu_i, 1), \quad j = 1, 2, \ldots, D$

with $\mu_{ij}$: determined by Equations (6) and (7); $I_i$: the rank of the current individual $\vec{x}_i(t)$ after the population is sorted from high to low by objective function value; and $\mathrm{rand}(\mu_i, 1)$: a uniformly distributed real number in the interval $[\mu_i, 1]$.

It can be seen from Equation (6) that the rule simulates the random search behavior of human beings. The step size in the jth dimension of the search space is determined by Equation (8):

(8) $\alpha_{ij} = \delta_{ij} \sqrt{-\ln(\mu_{ij})}$

with $\delta_{ij}$: the parameter of the Gaussian membership function, defined by Equations (9) and (10):

(9) $\omega = (iter_{\max} - t) / iter_{\max}$

(10) $\delta_{ij} = \omega \cdot \mathrm{abs}(x_{\min} - x_{\max})$

with ω: Weight of inertia.

As the number of generations increases, $\omega$ decreases linearly from 0.9 to 0.1. $x_{\min}$ and $x_{\max}$ are the minimum and maximum values of the decision variables, respectively.

2.1.3 Individual location updates

After obtaining an individual's search direction and search step size, its position update is given by Equation (11):

(11) $x_{ij}(t+1) = x_{ij}(t) + \alpha_{ij}(t)\, f_{ij}(t), \quad i = 1, 2, \ldots, s; \; j = 1, 2, \ldots, D$

with i: the ith searcher individual; j: the dimension index; $f_{ij}(t)$ and $\alpha_{ij}(t)$: the searcher's search direction and search step size at time t, respectively; and $x_{ij}(t)$ and $x_{ij}(t+1)$: the searcher's position at times t and t+1, respectively.
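To make Equations (1)–(11) concrete, the following is a minimal sketch of one SOA generation in Python (NumPy), written for this article rather than taken from the authors' MATLAB code; the array names and the clipping to the search bounds are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of one SOA update, assuming a population stored as an
# (s, D) array `x`, personal bests `p_best`, neighbourhood bests `g_best`,
# and positions from the two previous generations `x_prev1`, `x_prev2`.

def soa_step(x, x_prev1, x_prev2, p_best, g_best, fitness,
             t, iter_max, x_min, x_max,
             mu_max=0.9, mu_min=0.0111):
    s, D = x.shape

    # Search direction, Eqs. (1)-(4): sign of a random weighted average of
    # the pre-emptive, egoistic, and altruistic directions.
    w = (iter_max - t) / iter_max                    # Eq. (9), inertia weight
    f_e = p_best - x                                 # Eq. (1)
    f_a = g_best - x                                 # Eq. (2)
    f_p = x_prev1 - x_prev2                          # Eq. (3)
    phi1, phi2 = np.random.rand(s, D), np.random.rand(s, D)
    direction = np.sign(w * f_p + phi1 * f_e + phi2 * f_a)   # Eq. (4)

    # Step size, Eqs. (6)-(8): rank seekers from best (I_i = s) to worst
    # (I_i = 1), map the rank linearly onto [mu_min, mu_max], draw mu_ij,
    # then invert the Gaussian membership function to get alpha_ij.
    order = np.argsort(fitness)            # ascending: best seeker first
    rank = np.empty(s, dtype=int)
    rank[order] = np.arange(s, 0, -1)      # best seeker gets I_i = s
    mu_i = mu_max - (s - rank) / (s - 1) * (mu_max - mu_min)   # Eq. (6)
    mu_ij = np.random.uniform(mu_i[:, None], 1.0, (s, D))      # Eq. (7)
    delta = w * np.abs(x_min - x_max)                          # Eq. (10)
    alpha = delta * np.sqrt(-np.log(mu_ij))                    # Eq. (8)

    # Position update, Eq. (11), clipped to the search bounds.
    return np.clip(x + alpha * direction, x_min, x_max)
```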

2.2 Particle swarm optimization

The particle swarm optimization is defined by Equations (12)–(14):

(12) $x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1), \quad i = 1, 2, \ldots, s; \; j = 1, 2, \ldots, D$

(13) $\omega = 0.9 - t \cdot (0.4 / G)$

(14) $v_{ij}(t+1) = \omega \times v_{ij}(t) + c_1 \times r_1 \times (p_{id} - x_{ij}(t)) + c_2 \times r_2 \times (p_{gd} - x_{ij}(t)), \quad i = 1, 2, \ldots, s; \; j = 1, 2, \ldots, D$

with $x_{ij}(t+1)$ and $v_{ij}(t+1)$: the position and velocity of particle $x_i$ in the jth dimension at the (t+1)th iteration, respectively.

The acceleration coefficients $c_1$ and $c_2$ play a significant role in balancing the search between the cognitive ($p_{id}$) and social ($p_{gd}$) components, respectively, while $r_1$ and $r_2$ are random vectors whose elements take values between 0 and 1. The personal and global best experiences are denoted $p_{id}$ and $p_{gd}$, respectively. The inertia weight $\omega$ determines the impact of the previous velocity on the current one, and G is the maximum number of generations.
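As a companion to the SOA sketch above, here is the corresponding PSO update of Equations (12)–(14) in the same NumPy style; again the function and array names are illustrative, with c1 = c2 = 1.4962 taken from Table 3.

```python
import numpy as np

# A minimal sketch of the PSO update in Eqs. (12)-(14), assuming positions
# `x`, velocities `v`, personal bests `p_id`, and a global best `p_gd`.

def pso_step(x, v, p_id, p_gd, t, G, c1=1.4962, c2=1.4962):
    s, D = x.shape
    w = 0.9 - t * (0.4 / G)      # Eq. (13), linearly decreasing inertia
    r1, r2 = np.random.rand(s, D), np.random.rand(s, D)
    v_new = w * v + c1 * r1 * (p_id - x) + c2 * r2 * (p_gd - x)  # Eq. (14)
    return x + v_new, v_new      # Eq. (12)
```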

3 SOAPSO algorithm

This section presents the rationale of the SOAPSO algorithm. The SOA and the PSO are population-based global search techniques, and the SOAPSO is based on a double-population evolution strategy [48, 49]. The individuals in the SOA population and in the PSO population employ an information sharing mechanism to implement coevolution. This strategy lets the SOAPSO enjoy the advantages of both algorithms: it maintains the diversity of the populations and gives the SOAPSO the capability to jump out of local optimal solutions. Based on the above description, the main procedure of the SOAPSO is shown in Algorithm 1; a code sketch of one plausible reading follows.
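Since Algorithm 1 is rendered only as a figure here, the following sketch illustrates one plausible reading of the double-population coevolution: the seeker sub-population and the particle sub-population evolve by their own rules (the soa_step and pso_step sketches above) while sharing a single global best each generation. The sharing rule and the population bookkeeping are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

# A hedged sketch of the double-population coevolution, reusing soa_step()
# and pso_step() from the previous sketches. Here the two sub-populations
# simply share one global best per generation (the information sharing
# mechanism, as read from the description above).

def soapso(obj, D, x_min, x_max, n=15, iter_max=1000):
    seekers = np.random.uniform(x_min, x_max, (n, D))
    particles = np.random.uniform(x_min, x_max, (n, D))
    velocities = np.zeros((n, D))
    prev1 = prev2 = seekers.copy()
    p_best_s, p_best_p = seekers.copy(), particles.copy()
    fit_s = np.apply_along_axis(obj, 1, seekers)
    fit_p = np.apply_along_axis(obj, 1, particles)
    best_s_fit, best_p_fit = fit_s.copy(), fit_p.copy()

    def global_best():
        # Best personal-best position across both sub-populations.
        all_x = np.vstack([p_best_s, p_best_p])
        all_f = np.concatenate([best_s_fit, best_p_fit])
        return all_x[np.argmin(all_f)]

    for t in range(1, iter_max + 1):
        g = global_best()
        seekers_new = soa_step(seekers, prev1, prev2, p_best_s,
                               np.tile(g, (n, 1)), fit_s,
                               t, iter_max, x_min, x_max)
        particles, velocities = pso_step(particles, velocities,
                                         p_best_p, g, t, iter_max)
        particles = np.clip(particles, x_min, x_max)
        prev2, prev1, seekers = prev1, seekers, seekers_new

        # Re-evaluate and update the personal bests of both populations.
        fit_s = np.apply_along_axis(obj, 1, seekers)
        fit_p = np.apply_along_axis(obj, 1, particles)
        imp_s, imp_p = fit_s < best_s_fit, fit_p < best_p_fit
        p_best_s[imp_s], best_s_fit[imp_s] = seekers[imp_s], fit_s[imp_s]
        p_best_p[imp_p], best_p_fit[imp_p] = particles[imp_p], fit_p[imp_p]

    g = global_best()
    return g, obj(g)

# Example: minimise the sphere function f1 in 30 dimensions.
# best_x, best_f = soapso(lambda x: np.sum(x**2), 30, -100.0, 100.0)
```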

4 Experimental results

4.1 Setup

The algorithms in this experiment were run under MATLAB R2016a on a computer configured with an Intel(R) Core(TM) i7-7500U CPU @ 2.70 GHz (up to 2.90 GHz) and 8 GB of memory, running the Windows 10 operating system.

4.2 Benchmark test function

To test the performance of the SOAPSO algorithm, 15 benchmark functions [9, 14, 50], [51], [52] that have been widely used in the literature are adopted. Tables 1 and 2 show the benchmark functions used in the experiment.

Table 1:

Description of unimodal benchmark functions.

Test functions Range of search Theoretical optimal fitness value
$f_1(x) = \sum_{i=1}^{n} x_i^2$ [−100,100] 0
$f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ [−10,10] 0
$f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ [−100,100] 0
$f_4(x) = \max_i \{ |x_i|, 1 \le i \le n \}$ [−100,100] 0
$f_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ [−30,30] 0
$f_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ [−100,100] 0
$f_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0,1)$ [−1.28,1.28] 0
$f_8(x) = \sum_{i=1}^{n} i x_i^2$ [−10,10] 0
$f_9(x) = \sum_{i=1}^{n} |x_i|^{i+1}$ [−1.28,1.28] 0
Table 2:

Description of multimodal benchmark functions.

Test functions Range of search Theoretical optimal fitness value
$f_{10}(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|})$ [−500,500] −418.9829 × Dimension
$f_{11}(x) = \sum_{i=1}^{n} [x_i^2 - 10\cos(2\pi x_i) + 10]$ [−5.12,5.12] 0
$f_{12}(x) = 20 + e - 20 e^{-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}} - e^{\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)}$ [−32,32] 0
$f_{13}(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ [−600,600] 0
$f_{14}(x) = \frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + \sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{1}{4}(x_i + 1)$ and $u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$ [−50,50] 0
$f_{15}(x) = \frac{1}{10}\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n-1}(x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ [−50,50] 0
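To show how the table entries translate into code, here are illustrative NumPy definitions of three of the functions (f1, f10, and f12), written to match the formulas above rather than taken from the authors' test suite.

```python
import numpy as np

def f1(x):    # Sphere (Table 1)
    return np.sum(x**2)

def f10(x):   # Schwefel (Table 2), optimum -418.9829 * len(x)
    return np.sum(-x * np.sin(np.sqrt(np.abs(x))))

def f12(x):   # Ackley (Table 2)
    n = len(x)
    return (20 + np.e
            - 20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n))
```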

4.3 Parameter setting

Dai et al. carried out extensive research on the parameter settings of the SOA [38]. The parameters of the SOAPSO are set to appropriate values based on practical experience; Table 3 lists the parameters used in the experiment. For the SOAPSO algorithm, the population number is 15 and the number of generations is 1000; the population size of the other algorithms is 30, and the maximum number of iterations of all tests is 1000.

Table 3:

Parameters sets for the algorithms.

Algorithm Parameters and value
PSO [53] Constant inertia: 0.9–0.4, the first and second acceleration coefficients: 1.4962.
SA_GA [54] Select probability: 0.6, crossover probability: 0.7, the mutation scale factor: 0.05, initial temperature: 100, temperature reduction parameter: 0.98.
DA [6] s shows the separation weight: 0–0.2, a is the alignment weight: 0–0.2, c indicates the cohesion weight: 0–0.2, f is the food factor: 0–2, e is the enemy factor: 0–0.1, w is the inertia weight: 0.9–0.4.
BSO [55] Number of clusters: 10, probability for select one cluster: 0.8, probability for select one cluster center to be replaced: 0.2, probability for use the center: 0.4.
GSA [56] Value of the gravitational constant at the first cosmic quantum-interval: G0 = 100, alfa = 20, proportion of an individual’s power over another final_per = 2.
SCA [11] Random numbers: r 1 = 0–2, r 2 = 0–2π, r 3 = 0–2, r 4 = 0–1.
SSA [12] Random numbers: c 1 = 0–2, c 2 = 0–1, c 3 = 0–1.
MVO [13] Wormhole existence probability: WEP_Max = 1, WEP_Min = 0.2, travelling distance rate: TDR = 0–1, random numbers: r 1 = 0–1, r 2 = 0–1, r 3 = 0–1.
SOA [14] Maximum membership degree value: 0.95, minimum membership degree value: 0.0111, maximum inertia weight value: 0.8, minimum inertia weight value: 0.2.
SOAPSO Maximum membership degree value: 0.95, minimum membership degree value: 0.0111, maximum inertia weight value: 0.9, minimum inertia weight value: 0.1, constant inertia: 0.9–0.4, first and second acceleration coefficients: 1.4962.

4.4 Algorithm performance comparison in benchmark functions

To ensure a fair comparison, the population number of the algorithms is 30 and the number of generations is 1000. To further ensure fairness and reduce the effect of randomness, the results of the 10 algorithms over 30 independent runs are compared.

In this section, to test its performance, the SOAPSO algorithm is compared with the PSO, SA_GA, DA, BSO, GSA, SCA, SSA, MVO, and SOA algorithms in both low and high dimensions. The mean, standard deviation (Std.), optimal fitness value, worst fitness value, and rank over 30 independent runs on f1–f15 are shown in Table 4 for the low-dimensional case and in Table 5 for the high-dimensional case.

Table 4:

Comparison of performance of algorithms in benchmark function of 30 independent runs in low dimensions.

Test functions Result Algorithms
PSO SA_GA DA BSO GSA SCA SSA MVO SOA SOAPSO
f 1 (D = 100) Mean 0.0031 3.02E+04 1.67E+04 70.588780 6.21E+02 6.42E+03 2.4258997 40.269355 1.0524957 1.93039e-04
Std. 6.31E-04 6.69E+03 7.16E+03 14.271695 2.71E+02 5.62E+03 1.6157099 6.8812839 0.2051906 3.06900e-05
Worst 0.0044 4.05E+04 3.09E+04 1.25E+02 1.15E+03 1.89E+04 6.2348128 60.394282 1.4502675 2.67295e-04
Best 0.0022 1.71E+04 4.70E+02 49.150576 2.50E+02 96.291688 0.3206193 27.376332 0.6215966 1.39244e-04
Rank 2 10 9 6 8 7 3 5 4 1
f 2 (D = 100) Mean 0.2192801 1.19E+02 68.015914 N/A 7.1387617 1.3795842 22.960614 7.34E+20 13.002085 0.28414018
Std. 0.0260935 13.168468 23.977776 N/A 3.2135126 1.4335667 6.7732128 3.92E+21 2.0008395 0.04094720
Worst 0.2684631 1.42E+02 1.27E+02 N/A 14.185711 5.8449387 36.961794 2.15E+22 18.305798 0.35504890
Best 0.1732089 92.513923 28.540106 N/A 2.0654838 0.0310204 11.525488 5.43E+02 9.4729280 0.21052655
Rank 2 8 7 10 4 1 6 9 5 3
f 3 (D = 100) Mean 2.27E+03 3.77E+05 1.90E+05 1.63E+04 8.80E+03 1.84E+05 3.58E+04 4.24E+04 9.87E+03 2.3896e+03
Std. 1.30E+03 8.95E+04 7.28E+04 5.18E+03 1.84E+03 3.48E+04 1.67E+04 6.09E+03 3.47E+03 1.8910e+03
Worst 5.58E+03 5.79E+05 3.44E+05 3.40E+04 1.40E+04 2.78E+05 1.01E+05 5.30E+04 1.78E+04 7.0512e+03
Best 8.43E+02 2.55E+05 6.87E+04 9.50E+03 6.26E+03 1.24E+05 1.36E+04 3.11E+04 3.32E+03 1.6448e+02
Rank 2 10 8 5 4 9 6 7 3 1
f 4 (D = 100) Mean 1.4624169 89.626879 54.046551 23.239192 15.671857 85.539422 26.50225 49.557442 21.743821 1.53637621
Std. 0.2035038 3.4544225 5.8105757 3.3497291 1.5605923 3.5980134 3.0835448 6.2884298 5.6809434 0.75124538
Worst 2.0084420 95.051439 60.342117 31.391629 20.036975 90.947406 34.066627 62.165472 29.187985 3.95113076
Best 1.0929345 83.496263 38.891864 18.846933 10.958917 77.909721 22.210338 37.299715 2.9045088 0.17953942
Rank 2 10 8 5 4 9 6 7 3 1
f 5 (D = 100) Mean 2.51E+02 5.78E+07 1.34E+07 1.57E+04 1.78E+04 6.43E+07 4.70E+03 2.44E+03 7.09E+02 1.1032e+02
Std. 59.018749 2.62E+07 6.99E+06 4.80E+03 1.58E+04 4.67E+07 7.23E+03 1.96E+03 1.67E+02 41.4146145
Worst 3.61E+02 1.25E+08 2.75E+07 2.66E+04 7.03E+04 2.11E+08 3.41E+04 9.44E+03 1.01E+03 2.8092e+02
Best 1.24E+02 1.92E+07 1.73E+06 6.88E+03 1.99E+03 7.16E+06 9.05E+02 7.75E+02 2.79E+02 91.3081407
Rank 2 10 8 7 6 9 5 4 3 1
f 6 (D = 100) Mean 0.0119146 3.07E+04 1.80E+04 71.311851 7.00E+02 6.13E+03 4.0344913 40.089213 1.1365348 0.00261787
Std. 0.0020781 8.32E+03 7.81E+03 19.030272 3.89E+02 4.65E+03 3.5377887 5.9162672 0.1751582 5.63575e-04
Worst 0.0163601 5.23E+04 3.33E+04 1.32E+02 1.84E+03 1.99E+04 16.516267 52.047400 1.5083506 0.00392564
Best 0.0086605 1.36E+04 4.09E+03 46.048287 2.17E+02 6.73E+02 0.6398690 28.916720 0.7992740 0.00159638
Rank 2 10 9 6 7 8 3 5 4 1
f 7 (D = 100) Mean 0.2211229 77.487242 20.638807 13.058642 2.3136998 73.024146 1.4115482 0.3355664 3.4645595 0.06687563
Std. 0.0419681 49.238469 16.771470 3.8383051 1.1506156 46.772681 0.3068424 0.0733789 0.9443482 0.01908711
Worst 0.3406667 2.23E+02 85.271478 25.176691 6.3399231 2.08E+02 2.2269363 0.4938711 6.5532009 0.11685566
Best 0.1180966 19.170953 3.9670294 6.6128823 0.8371509 11.108763 0.9996030 0.2176315 2.0472780 0.04060105
Rank 2 10 7 8 4 9 5 3 6 1
f 8 (D = 100) Mean 0.0624146 1.29E+04 6.68E+03 1.38E+03 1.32E+02 1.52E+03 1.28E+02 1.05E+02 1.39E+02 0.05831202
Std. 0.0164020 3.57E+03 3.65E+03 3.91E+02 74.414138 1.16E+03 49.510547 43.821833 33.925549 0.01512488
Worst 0.1011455 2.33E+04 1.61E+04 2.28E+03 3.78E+02 5.07E+03 2.51E+02 2.04E+02 2.27E+02 0.09834756
Best 0.0291246 7.55E+03 1.59E+03 6.76E+02 31.981022 2.47E+02 43.759004 34.296414 79.288776 0.03366305
Rank 1 10 9 8 3 7 5 4 6 2
f 9 (D = 100) Mean 2.08E-26 1.78E+03 8.14E-04 3.91E-05 3.67E-12 35.077327 9.57E-07 1.21E-06 1.47E-05 1.32212e-08
Std. 3.99E-26 9.54E+03 0.0028250 8.18E-05 7.66E-12 80.100469 6.27E-07 5.23E-07 1.28E-05 8.86567e-09
Worst 1.72E-25 5.23E+04 0.0154537 3.88E-04 3.81E-11 4.17E+02 2.59E-06 3.03E-06 5.47E-05 3.86518e-08
Best 2.07E-29 0.0028833 2.59E-06 6.77E-07 4.23E-16 0.0268174 2.02E-07 4.55E-07 6.32E-07 4.92666e-10
Rank 1 9 8 7 2 10 4 5 6 3
f 10 (D = 100) Mean -4.75E+03 −2.47E+04 −1.07E+04 −1.81E+04 −4.73E+03 −7.26E+03 −2.36E+04 −2.41E+04 −2.32E+04 −2.5735e+04
Std. 6.89E+02 8.85E+02 1.32E+03 1.52E+03 8.70E+02 6.96E+02 1.49E+03 1.53E+03 3.40E+03 4.3087e+03
Worst −3.56E+03 −2.32E+04 −7.30E+03 −1.49E+04 −3.60E+03 −6.21E+03 −1.99E+04 −2.14E+04 −1.87E+04 −1.8817e+04
Best −6.44E+03 −2.66E+04 −1.28E+04 −2.11E+04 −7.35E+03 −9.46E+03 −2.59E+04 −2.66E+04 −3.17E+04 −3.4341e+04
Rank 10 3 7 6 9 8 5 3 2 1
f 11 (D = 100) Mean 33.896488 4.26E+02 7.64E+02 5.00E+02 1.36E+02 2.06E+02 1.62E+02 6.41E+02 4.15E+02 26.0813111
Std. 6.2350851 48.908770 1.19E+02 44.678043 18.532649 92.249647 44.581629 69.443553 47.133647 25.7528475
Worst 49.898365 5.40E+02 1.04E+03 5.82E+02 1.73E+02 4.32E+02 2.67E+02 8.01E+02 5.05E+02 1.0811e+02
Best 23.655297 3.34E+02 5.60E+02 3.93E+02 88.161003 67.314606 84.812071 5.01E+02 3.30E+02 0.09947806
Rank 2 7 10 8 5 3 4 9 6 1
f 12 (D = 100) Mean 0.0220555 15.347972 12.879210 4.7554512 3.1378392 18.309269 7.5175875 6.5910720 2.4523845 0.00599338
Std. 0.0032932 0.7816167 2.1899705 0.3262885 0.6404753 4.7183569 1.2414366 6.0322718 0.3106583 6.57922e-04
Worst 0.0297039 16.669138 16.049443 5.5752180 4.6781816 20.678970 9.4909679 20.171725 3.2763189 0.00777180
Best 0.0161767 13.796287 7.8089273 4.2290358 2.0521342 6.872015 4.8882702 3.2031355 1.9634230 0.00486383
Rank 2 10 9 7 4 8 5 6 3 1
f 13 (D = 100) Mean 9.1115292 2.75E+02 1.36E+02 1.24E+02 98.761595 53.145123 0.7064324 1.3775687 0.5354221 0.00381629
Std. 1.8646352 66.687333 71.679953 34.409820 11.527169 38.195663 0.2200883 0.0626040 0.3343659 0.01426213
Worst 14.163886 4.19E+02 2.75E+02 2.05E+02 1.24E+02 1.45E+02 1.0776799 1.5175488 1.0812126 0.06113412
Best 6.1937260 1.38E+02 7.041402 76.125105 81.688324 1.7219609 0.2641328 1.2241138 0.0806019 2.16274e-05
Rank 6 10 7 8 9 5 3 4 2 1
f 14 (D = 100) Mean 0.0167129 4.42E+07 5.63E+06 14.732482 4.5497727 1.58E+08 17.291593 11.656751 20.231983 1.47161804
Std. 0.0241246 5.11E+07 1.06E+07 3.4008233 1.2139614 1.35E+08 4.5189004 4.2010589 8.8688013 1.10331377
Worst 0.0623283 2.13E+08 5.56E+07 22.801357 6.8218450 6.52E+08 25.115105 25.911307 48.892258 3.33398551
Best 4.84E-05 3.88E+06 1.33E+05 9.5966961 2.1673113 6.76E+06 7.5727656 6.7785747 8.9451796 1.35789e-04
Rank 1 9 8 7 3 10 5 4 6 2
f 15 (D = 100) Mean 0.0010357 1.44E+08 2.33E+07 1.17E+02 1.31E+02 2.67E+08 1.77E+02 1.21E+02 1.30E+02 0.13034585
Std. 0.0028003 1.01E+08 1.59E+07 29.055176 64.254053 1.53E+08 15.845079 30.717732 62.979168 0.12750016
Worst 0.0113607 4.26E+08 6.87E+07 1.67E+02 4.50E+02 5.83E+08 2.15E+02 1.80E+02 2.18E+02 0.46282091
Best 1.90E-04 2.31E+07 1.83E+06 55.987388 76.410368 3.11E+07 1.45E+02 51.265613 1.7998260 0.00680983
Rank 1 9 8 5 6 10 7 4 3 2
Average Rank 2.533333333 9 8.133333333 6.866666667 5.2 7.533333333 4.8 5.266666667 4.133333333 1.466666667
Overall Rank 2 9 8 7 5 10 3 4 6 1
  1. The bold values in the table indicate the best results.

Table 5:

Comparison of performance of algorithms in benchmark function of 30 independent runs in high dimensions.

Test functions Result Algorithm
PSO SA_GA DA BSO GSA SCA SSA MVO SOA SOAPSO
f 1 (D = 1000) Mean 9.7477e+03 2.5732e+06 3.6829e+05 1.5118e+05 9.7519e+04 3.5105e+05 1.9108e+05 3.5977e+05 8.3814e+03 39.2393
Std. 1.6106e+03 5.6744e+04 1.8383e+05 1.6112e+04 4.9089e+03 1.3646e+05 9.6344e+03 1.6121e+04 1.5983e+03 1.4424e+2
Worst 1.4005e+04 2.6799e+06 7.8629e+05 1.8873e+05 1.0536e+05 7.7196e+05 2.1341e+05 3.9287e+05 1.2907e+04 8.0148e+2
Best 6.9495e+03 2.4536e+06 1.0420e+05 1.1957e+05 8.7230e+04 4.0211e+04 1.7058e+05 3.3044e+05 4.7822e+03 2.1465002
Rank 3 10 6 7 5 4 8 9 2 1
f 2 (D = 1000) Mean Inf Inf 1.0431e+03 N/A 1.801e+281 Inf 1.0772e+03 1.344e+273 7.7812e+278 57.1333
Std. N/A N/A 3.3461e+02 N/A Inf N/A 20.2070 Inf Inf 4.1107
Worst Inf Inf 1.6602e+03 N/A 3.795e+282 Inf 1.1116e+03 4.031e+274 2.334e+280 66.4263
Best 2.9025e+02 Inf 3.0660e+02 N/A 4.608e+244 Inf 1.0358e+03 2.315e+209 2.1280e+03 51.2417
Rank 2 8 3 10 7 8 4 6 5 1
f 3 (D = 1000) Mean 3.3245e+06 3.3451e+07 2.3366e+07 1.8797e+06 1.9611e+06 2.4026e+07 4.1724e+06 6.1928e+06 2.3906e+06 1.2672e+6
Std. 1.4068e+06 1.1343e+07 5.9873e+06 4.7421e+05 8.1529e+05 4.7320e+06 1.9370e+06 4.3229e+05 7.6981e+05 6.3085e+5
Worst 7.9869e+06 8.2745e+07 3.7730e+07 3.2090e+06 4.8408e+06 3.4515e+07 8.4958e+06 7.3742e+06 3.9890e+06 3.1987e+6
Best 1.6748e+06 1.9532e+07 1.1474e+07 1.1610e+06 9.5585e+05 1.3982e+07 1.5071e+06 5.5268e+06 2.0598e+05 5.4842e+4
Rank 6 9 8 4 3 10 5 7 2 1
f 4 (D = 1000) Mean 18.3429 99.5440076 73.9886785 42.9599576 28.7444424 99.5223 43.9089 97.4126 37.5552471 23.9116482
Std. 1.3182 0.19176900 14.4216872 3.44320734 1.79619392 0.1313 2.9961 0.7405 8.75145918 17.9559077
Worst 21.4724 99.7898020 88.5995335 50.5377515 33.6662360 99.7171 53.5153 98.7713 44.9098123 48.1995287
Best 16.3774 98.958778 19.5323743 36.5946922 25.8833818 99.1965 39.4689 95.6086 12.0877478 0.51588439
Rank 3 9 4 6 5 10 7 8 2 1
f 5 (D = 1000) Mean 2.7249e+05 1.1187e+10 4.8447e+08 8.5617e+07 1.6750e+07 3.2676e+09 7.5113e+07 6.7902e+08 1.0308e+08 6.2151e+4
Std. 1.4792e+05 3.3483e+08 3.8528e+08 1.4661e+07 1.7268e+06 7.0048e+08 8.8633e+06 7.3930e+07 2.5353e+07 7.6347e+4
Worst 7.5049e+05 1.2027e+10 1.5016e+09 1.2280e+08 2.2405e+07 4.6597e+09 1.0506e+08 8.4443e+08 1.4810e+08 2.8829e+5
Best 1.2499e+05 1.0569e+10 1.8947e+07 5.7480e+07 1.4932e+07 2.0841e+09 6.0279e+07 5.4843e+08 6.2501e+07 1.4431e+3
Rank 2 10 4 5 3 9 6 8 7 1
f 6 (D = 1000) Mean 1.0284e+04 2.5635e+06 2.9250e+05 1.5572e+05 9.8733e+04 4.1315e+05 1.9105e+05 3.5579e+05 7.9675e+03 3.5788e+2
Std. 1.3817e+03 4.0795e+04 1.2442e+05 1.8394e+04 4.9212e+03 1.2787e+05 1.0328e+04 1.7687e+04 1.41579e+03 3.2306e+2
Worst 1.3566e+04 2.6317e+06 5.6448e+05 1.9830e+05 1.0908e+05 7.3580e+05 2.1256e+05 3.9770e+05 1.1528e+04 1.5098e+3
Best 6.1887e+03 2.4668e+06 6.1537e+04 1.1722e+05 9.0772e+04 1.6278e+05 1.6527e+05 3.1703e+05 4.8067e+03 1.5114e+2
Rank 3 10 4 6 5 7 8 9 2 1
f 7 (D = 1000) Mean 1.1474e+02 1.7871e+05 7.9249e+03 6.5609e+03 5.2986e+03 4.7647e+04 1.1040e+03 8.7236e+03 2.0441e+04 1.0252e+3
Std. 22.2657388 6.0811e+03 5.9118e+03 2.1474e+03 5.8169e+02 1.1048e+04 129.0645 771.5517 2.6651e+03 4.8520e+2
Worst 1.7347e+02 1.9064e+05 1.9723e+04 1.3399e+04 6.9318e+03 6.9571e+04 1.3689e+03 1.1079e+04 2.6136e+04 2.1401e+3
Best 80.9849331 1.6720e+05 667.2428 3.2310e+03 4.3702e+03 2.6553e+04 878.7155 7.2739e+03 1.4371e+04 80.264704
Rank 2 10 3 5 6 9 4 7 8 1
f 8 (D = 1000) Mean 4.7291e+04 1.2547e+07 1.9203e+06 1.3736e+06 3.8862e+05 1.8012e+06 8.6050e+05 1.5454e+06 1.6177e+06 4.5990e+3
Std. 6.6212e+03 3.5539e+05 9.6039e+05 1.6383e+05 2.2329e+04 5.5949e+05 5.1965e+04 6.4511e+04 1.1284e+05 3.3693e+3
Worst 6.0802e+04 1.3042e+07 3.8303e+06 1.6356e+06 4.3797e+05 2.6519e+06 9.8685e+05 1.6892e+06 1.8715e+06 1.2994e+4
Best 3.6574e+04 1.1381e+07 6.5489e+05 1.0579e+06 3.5017e+05 4.5453e+05 7.8606e+05 1.4470e+06 1.3452e+06 1.1801e+3
Rank 2 10 5 7 3 4 6 9 8 1
f 9 (D = 1000) Mean 9.7670e-09 1.5616e+92 4.8316e+27 N/A 7.6578e-05 9.1116e+83 1.6993e-06 1.0103e+56 1.4012e+12 4.0119e-08
Std. 3.3014e-08 7.3622e+92 2.6457e+28 N/A 2.5646e-04 4.6589e+84 1.2518e-06 5.5321e+56 5.4965e+12 4.4393e-08
Worst 1.5129e-07 4.0431e+93 1.4491e+29 N/A 0.00128117 2.5521e+85 5.3454e-06 3.0301e+57 2.6100e+13 2.3807e-07
Best 5.4973e-16 1.1184e+7 4.8301e-04 N/A 1.9892e-09 9.2943e+69 4.1424e-07 5.6083e+38 0.024180374 2.7889e-09
Rank 1 7 5 10 2 9 4 8 6 3
f 10 (D = 1000) Mean −1.7007e+4 −5.8435e+4 −3.1555e+4 −6.5837e+4 −1.4744e+4 −2.2849e+4 −1.1867e+5 −1.3394e+5 −1.2726e+5 −1.4785e+5
Std. 2.5847e+3 3.4160e+3 4.8833e+3 4.3367e+3 2.3262e+3 1.3757e+3 6.9059e+3 5.9809e+3 2.7008e+4 4.6972e+4
Worst −1.0338e+4 −5.2541e+4 −2.3985e+4 −5.4523e+4 −1.1005e+4 −2.0040e+4 −1.0778e+5 −1.1839e+5 −9.0577e+4 −9.5352e+4
Best −2.1550e+4 −6.7056e+4 −4.2025e+4 −7.3504e+4 −1.8442e+4 −2.6647e+4 −1.3612e+5 −1.4670e+5 −1.9652e+5 −2.6675e+5
Rank 9 6 7 5 10 8 4 3 2 1
f 11 (D = 1000) Mean 2.8217e+03 1.5245e+04 9.3813e+03 9.4292e+03 5.7869e+03 1.8362e+03 6.3367e+03 1.3788e+04 9.9711e+03 1.6919e+3
Std. 1.7027e+02 1.3944e+02 1.1877e+03 1.9719e+02 1.5766e+02 874.1338 155.2772 300.8269 3.1150e+02 5.4168e+2
Worst 3.2546e+03 1.5581e+04 1.1288e+04 9.7810e+03 6.0413e+03 3.8740e+03 6.6892e+03 1.4298e+04 1.0555e+04 3.1932e+3
Best 2.5372e+03 1.4955e+04 6.9656e+03 9.1231e+03 5.4142e+03 502.6460 6.1002e+03 1.3250e+04 9.3790e+03 1.1473e+3
Rank 3 10 6 7 4 1 5 9 8 2
f 12 (D = 1000) Mean 4.49472157 20.7976705 15.4922 14.3714165 10.3286273 18.8639 14.4206 20.9497 10.3504556 1.83801020
Std. 0.17456627 0.02616263 1.8936 0.40087062 0.15377404 4.0427 0.2008 0.0222 0.57923796 1.62755230
Worst 5.06374703 20.8506176 18.6432 15.2249329 10.6339573 20.8454 14.7842 20.9935 11.1238556 5.60320124
Best 4.24420457 20.7492288 11.3346 13.4575332 9.93588986 8.4914 13.9782 20.8992 9.19276609 0.40833143
Rank 2 9 6 7 5 3 8 10 4 1
f 13 (D = 1000) Mean 1.0210e+03 2.3100e+04 2.9176e+03 2.2307e+03 1.3996e+04 3.1433e+03 1.7159e+03 3.2496e+03 11.8192945 0.36210206
Std. 32.8774099 5.1266e+02 1.4869e+03 2.8098e+02 2.1571e+02 1.0529e+03 91.5869 169.9815 11.8731051 0.96226594
Worst 1.1068e+03 2.3908e+04 6.5186e+03 2.7103e+03 1.4470e+04 5.8108e+03 1.9131e+03 3.5731e+03 53.8148847 4.60350838
Best 9.5388e+02 2.1976e+04 725.5926 1.5753e+03 1.3539e+04 1.1471e+03 1.5693e+03 2.8639e+03 2.94729201 0.00835621
Rank 4 10 3 7 9 5 6 8 2 1
f 14 (D = 1000) Mean 3.58238444 2.6235e+10 9.4922e+08 3.6287e+06 3.8689e+04 9.5072e+09 3.1511e+06 9.0076e+08 1.0956e+07 29.7193724
Std. 0.46770694 9.4178e+08 8.9381e+08 1.2754e+06 2.4728e+04 2.0642e+09 1.3752e+06 1.0468e+08 6.6012e+06 10.8640272
Worst 4.62169611 2.7753e+10 4.1170e+09 7.3906e+06 1.3000e+05 1.4117e+10 7.7277e+06 1.1163e+09 2.4899e+07 47.2595636
Best 2.74775928 2.4014e+10 2.3930e+07 1.5174e+06 5.6569e+03 5.1822e+09 1.5134e+06 7.1553e+08 1.9753e+05 0.51065701
Rank 2 10 7 6 3 9 5 8 4 1
f 15 (D = 1000) Mean 1.4226e+04 4.9243e+10 2.0082e+09 5.5104e+07 6.1049e+06 1.6105e+10 7.2399e+07 2.2732e+09 7.41299e+07 1.1273e+3
Std. 2.2790e+04 1.6004e+09 1.4799e+09 1.6028e+07 1.2137e+06 3.5103e+09 1.0071e+07 2.8365e+08 3.8466e+07 6.6367e+2
Worst 7.5633e+04 5.3105e+10 6.5263e+09 8.3955e+07 8.3946e+06 2.3878e+10 8.9785e+07 2.9667e+09 2.1183e+08 2.8810e+3
Best 4.0719e+02 4.6279e+10 3.7708e+08 3.3109e+07 4.2910e+06 8.3903e+09 5.4157e+07 1.8130e+09 2.4155e+07 86.1454
Rank 2 10 7 5 3 9 6 8 4 1
Average Rank 3.066666667 9.2 5.2 6.466666667 4.866666667 7 5.733333333 7.8 4.4 1.2
Overall Rank 2 10 5 7 4 8 6 9 3 1
  1. The bold values in the table indicate the best results.

4.4.1 Algorithm performance comparison in low-dimensional functions

For the low-dimensional case of the benchmark functions f1–f15, based on Table 4, the optimal value of the SOAPSO algorithm is better than that of the other algorithms except on f2, f8, f9, f14, and f15. For f8, f14, and f15, the PSO algorithm gives better results, but the SOAPSO is already very close to the theoretical optimum. For f9, the optimal value of the SOAPSO is close to the theoretical best value, although it is worse than that of the PSO and the GSA. For f2, the optimal value of the SOAPSO is worse than that of the PSO and the SCA algorithm.

For the low-dimensional case of the benchmark functions f1–f15, based on Table 4, the worst value of the SOAPSO algorithm is better than that of the other algorithms except on f2, f3, f4, f9, f10, f14, and f15. For f2, f3, f4, f14, and f15, the PSO algorithm gives better results. For f9, the worst fitness value of the SOAPSO is worse than that of the PSO and the GSA. For f10, the worst value of the SOAPSO is worse than that of the SA_GA, the SSA, and the MVO algorithm.

For the low-dimensional case of the benchmark functions f1–f15, based on Table 4, the standard deviation of the SOAPSO algorithm is better than that of the other algorithms except on f2, f3, f4, f9, f10, f11, f14, and f15. The standard deviations of the SOAPSO on f2, f4, f14, and f15 are worse only than those of the PSO algorithm. The results of the SOAPSO on f3, f9, and f11 are worse than those of the PSO and the GSA. The result of the SOAPSO on f10 is worse than those of the PSO, DA, BSO, SSA, MVO, and SOA algorithms.

For the low-dimensional case of the benchmark functions f1–f15, based on Table 4, the mean values of the SOAPSO are better than those of the other algorithms except on f2, f3, f4, f9, f14, and f15. For f9, the mean result of the SOAPSO is very close to the theoretical best value, although it is worse than that of the PSO and the GSA. The results of the SOAPSO on f2, f3, f4, f14, and f15 are worse than those of the PSO algorithm.

According to the mean rank and overall rank results in Table 4, the SOAPSO can find good solutions and shows strong optimization ability and robustness on the low-dimensional benchmark functions.

4.4.2 Algorithm performance comparison in high-dimensional functions

To test the optimization ability of the algorithms in high-dimensional space, this paper uses 1000-dimensional versions of the functions [57]. In all tests, the maximum number of iterations is 1000, and the other parameter settings are unchanged. The mean, standard deviation (Std.), optimal fitness value, worst fitness value, and rank over 30 independent runs on f1–f15 are shown in Table 5.

For the high-dimensional case of the benchmark functions f1–f15, based on Table 5, the optimal value of the SOAPSO algorithm is better than that of the other algorithms except on f9 and f11. For f9, the PSO and the GSA give better results, but the SOAPSO is already very close to the theoretical optimum. For f11, the optimal value of the SOAPSO is worse than that of the SCA algorithm.

For the high-dimensional case of the benchmark functions f1–f15, based on Table 5, the worst value of the SOAPSO algorithm is better than that of the other algorithms except on f4, f7, f9, f12, and f14. For f4, the worst value of the SOAPSO is worse than that of the PSO, GSA, and SOA algorithms. For f7, the worst value of the SOAPSO is worse than that of the PSO and the SSA algorithm. For f9, the worst fitness value of the SOAPSO is worse than that of the PSO, but the SOAPSO is already very close to the theoretical optimum. For f12 and f14, the PSO algorithm gives better results.

For the high-dimensional case of the benchmark functions f1–f15, based on Table 5, the standard deviation of the SOAPSO algorithm is better than that of the other algorithms except on f3, f7, f9, f10, f11, f12, and f14. The standard deviations of the SOAPSO on f9 and f14 are worse than those of the PSO algorithm. The results of the SOAPSO on f3 are worse than those of the SSA and the BSO. The result of the SOAPSO on f7 is worse than those of the PSO and the SSA. The result of the SOAPSO on f10 is worse than those of the other algorithms. The result of the SOAPSO on f11 is worse than those of the PSO, SA_GA, BSO, GSA, SSA, MVO, and SOA. The results of the SOAPSO on f12 are worse than those of the PSO, SA_GA, BSO, GSA, SCA, SSA, and SOA algorithms.

For the high-dimensional case of the benchmark functions f1–f15, based on Table 5, the mean values of the SOAPSO are better than those of the other algorithms except on f4, f7, f9, and f14. For f9, the mean result of the SOAPSO is very close to the theoretical best value, although it is worse than that of the PSO. The results of the SOAPSO on f4, f7, and f14 are worse than those of the PSO algorithm.

According to the mean rank and overall rank results in Table 5, the SOAPSO can find good solutions and shows strong optimization ability and robustness on the high-dimensional benchmark functions.

As before, the average rank over these 15 functions is calculated, and the overall rank is then obtained from the average rank. The average rank of each algorithm shows that the SOAPSO is very robust and efficient.

4.4.3 Search history of the SOAPSO

Figure 1 shows the graph of the optimized function f1, the initial population positions, the search history of the population positions, and the convergence curves; the search history of the seekers is marked with red "+" symbols. As Figure 1 shows, for the benchmark function f1 the convergence curve of the SOAPSO algorithm is the fastest. The search history of the SOAPSO population shows that the seekers of the SOAPSO move extensively towards promising regions of the search space, exploring it with changing step sizes and different search directions; this increases local search ability, helps escape local optima, and avoids premature convergence.

Algorithm 1: SOAPSO.

Figure 1: Graphs for f1, a) graph of test function f1, b) initial population, iteration = 0, best fitness = 604.6074, c) SOA, search history, best fitness = 1.2588e-09, d) PSO, search history, best fitness = 1.4698e-10, e) SOAPSO, search history, best fitness = 1.7443e-11, f) convergence curve of f1 sphere.

Similarly, Figure 2 shows the graph of the optimized function f10, the initial population positions, the search history of the population positions, and the convergence curves; the search history of the seekers is marked with red "+" symbols. As Figure 2 shows, for the benchmark function f10 the convergence curve of the SOAPSO algorithm is the fastest. The search history of the SOAPSO population again shows that the seekers move extensively towards promising regions of the search space with changing step sizes and different search directions, which increases local search ability, helps escape local optima, and avoids premature convergence.

Figure 2: Graphs for f10, a) graph of test function f10, b) initial population, iteration = 0, best fitness = −613.7265, c) SOA, search history, best fitness = −837.9657, d) PSO, search history, best fitness = −617.9003, e) SOAPSO, search history, best fitness = −837.9658, f) convergence curve of f10 Schwefel.

4.4.4 Convergence of algorithm analysis in benchmark functions

Figure 3 shows the fitness curves of the best values for the benchmark functions f1–f15 (D = 1000). As seen from Figure 3, on all of these functions except f3, f8, f11, f14, and f15 the SOAPSO converges faster and reaches better accuracy.

Figure 3: Fitness curves of the test global minimum values for f1–f15 (D = 1000), a) f1 Sphere, b) f2 Schwefel 2.22, c) f3 Rotated, d) f4 Schwefel 2.21, e) f5 Rosenbrock, f) f6 Step, g) f7 Quartic, h) f8 SumSquares, i) f9 SumPower, j) f10 Schwefel, k) f11 Rastrigin, l) f12 Ackley, m) f13 Griewank, n) f14 Penalized1, o) f15 Penalized2.

4.4.5 The ANOVA of algorithm analysis in benchmark functions

Figure 4 shows the ANOVA tests for the benchmark functions f1–f15 (D = 1000). As seen from Figure 4, the SOAPSO is the most robust; it shows better robustness than, and an improvement over, the SOA. Therefore, the SOAPSO is a feasible approach for the optimization of benchmark functions.

Figure 4: ANOVA tests of the global minimum values for f1–f15 (D = 1000), a) f1 Sphere, b) f2 Schwefel 2.22, c) f3 Rotated, d) f4 Schwefel 2.21, e) f5 Rosenbrock, f) f6 Step, g) f7 Quartic, h) f8 SumSquares, i) f9 SumPower, j) f10 Schwefel, k) f11 Rastrigin, l) f12 Ackley, m) f13 Griewank, n) f14 Penalized1, o) f15 Penalized2.

4.4.6 Complexity analysis

The computational complexity of the SOA is O(N·D·M), where N is the total number of individuals, D the number of dimensions, and M the maximum number of generations. The SOA stage of the hybrid therefore costs O(N·D·M), and the hybrid strategy introduces another O(N·D·M), so the overall complexity of the SOAPSO is O(N·D·M + N·D·M). By the rules of Big-O notation [58], when the number of generations is large (M ≫ N, D) this reduces to O(N·D·M). Therefore, the overall computational complexity of the SOAPSO is almost the same as that of the basic SOA.

4.4.7 Statistical testing of algorithms in benchmark functions

The Wilcoxon rank-sum test [59] reveals significant differences between two algorithms; a difference is considered significant when p < 0.05. Table 6 reports the results of the statistical tests, where N/A marks the best-performing algorithm against which the others are compared. From Table 6, the SOAPSO is significantly better on all 15 functions except f9 and f11. Therefore, the SOAPSO outperforms the other algorithms; a minimal code sketch of this test follows Table 6.

Table 6:

The p-values of the Wilcoxon rank-sum test.

Test functions (D = 1000) p test values of various algorithms
PSO SA_GA DA BSO GSA SCA SSA MVO SOA SOAPSO
f 1 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 N/A
f 2 2.8646e-11 1.2118e-12 3.0199e-11 Inf 3.0199e-11 1.2118e-12 3.0199e-11 3.0199e-11 3.0199e-11 N/A
f 3 5.0723e-10 3.0199e-11 3.0199e-11 1.3250e-04 4.2175e-04 3.0199e-11 5.5727e-10 3.0199e-11 2.1959e-07 N/A
f 4 0.6627 3.0199e-11 1.4643e-10 9.5139e-06 0.7618 3.0199e-11 1.0277e-06 3.0199e-11 0.0042 N/A
f 5 3.4971e-09 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 N/A
f 6 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 N/A
f 7 5.5727e-10 3.0199e-11 3.8249e-09 3.0199e-11 3.0199e-11 3.0199e-11 0.4376 3.0199e-11 3.0199e-11 N/A
f 8 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 N/A
f 9 N/A 3.0199e-11 3.0199e-11 Inf 1.3289e-10 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 2.6015e-08
f 10 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 0.0392 0.8883 0.1188 N/A
f 11 1.2493e-05 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 N/A 3.0199e-11 3.0199e-11 3.0199e-11 0.9941
f 12 7.0430e-07 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 N/A
f 13 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 7.3891e-11 N/A
f 14 8.4848e-09 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 N/A
f 15 0.1958 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 3.0199e-11 N/A
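As an aside on reproducibility, a minimal sketch of how such a p-value can be computed with SciPy is shown below; the two arrays are illustrative placeholders for the 30 final fitness values of the SOAPSO and of one rival algorithm, not data from the paper.

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder samples: 30 final best-fitness values per algorithm.
rng = np.random.default_rng(0)
soapso_runs = rng.random(30)        # would be the 30 SOAPSO results
rival_runs = rng.random(30) + 0.5   # would be the 30 results of a rival

stat, p = ranksums(soapso_runs, rival_runs)
print(f"p = {p:.4e}, significant at the 0.05 level: {p < 0.05}")
```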

4.4.8 Run time comparison of algorithms in benchmark functions

In this subsection, the running time of each algorithm on each function is recorded under the same conditions: the population number is 30, the number of generations is 1000, and the 15 benchmark functions f1–f15 (D = 1000) are each run 30 independent times. The running times over the 15 functions are then summed to obtain, for each algorithm, the total time of 30 independent runs on the 15 functions, and the totals are ranked, as shown in Table 7. As seen from Table 7, the PSO algorithm has the shortest running time, followed by the SSA and the SCA algorithms. The SOAPSO algorithm ranks sixth, with a relatively longer running time, but it still runs faster than the SA_GA, DA, BSO, and GSA algorithms. At the bottom of the list are the DA and the BSO algorithms; the DA takes the most running time, and the BSO algorithm cannot find a solution when optimizing functions f2 and f9 (D = 1000).

Table 7:

Run time comparison of 30 independent runs for benchmark functions f1–f15 (D = 1000).

Functions (D = 1000) Run time of algorithms
PSO SA_GA DA BSO GSA SCA SSA MVO SOA SOAPSO
f 1 100.719730 880.039134 17573.656518 377.919485 1238.641454 123.011947 111.919349 291.672145 140.420268 212.995084
f 2 68.229668 848.231807 16987.875185 Inf 1319.453189 158.694010 108.328681 120.413171 181.073442 226.594775
f 3 2083.966419 59198.227940 20326.621973 2078.700365 3001.030713 1813.227548 1793.574307 1978.830114 6117.125815 7629.600897
f 4 55.090061 763.100004 17128.983039 247.633844 1182.766535 148.610507 90.994624 282.693611 128.181110 219.457919
f 5 59.726643 907.492580 18250.985859 258.984598 1190.101594 133.724941 109.915102 347.492206 134.319506 257.103444
f 6 60.136433 768.869660 18085.212404 268.858467 1321.559282 127.490008 93.639898 283.733081 117.076822 200.372646
f 7 171.086743 2893.833975 17934.262519 371.589100 1345.984756 203.877433 164.750798 366.515217 429.653999 585.376849
f 8 56.468720 824.393464 17824.021441 279.127729 1361.309649 131.849271 92.755506 296.880364 126.427399 434.676154
f 9 119.475367 2814.347692 18457.107878 Inf 1338.479782 202.890012 164.340304 270.227397 379.676861 727.189193
f 10 96.665578 1331.028369 17637.361668 287.377356 1207.890758 162.894968 142.796763 163.620252 222.037061 525.320156
f 11 67.987998 1166.055564 18049.793773 280.069048 1200.353798 147.172828 104.299900 321.039558 199.011258 431.311811
f 12 85.468484 1298.972295 17955.135260 346.407765 1504.250466 168.363508 133.421811 421.893238 230.803220 372.640589
f 13 105.918299 1489.631271 18681.850475 282.811171 1217.562709 147.627982 112.427627 330.277401 204.931102 349.125084
f 14 227.567551 5817.019363 18773.786856 430.793297 1364.117264 381.158993 260.063965 485.116503 656.703311 957.626150
f 15 252.124024 5953.028083 21243.332251 433.075547 1367.575562 320.156020 278.872164 494.213399 674.918798 958.778772
The total time 3610.631718 86954.2712 274909.9871 Inf 21161.07751 4370.749976 3762.100799 6454.617657 9942.359972 14088.16952
Overall rank 1 8 9 10 7 3 2 4 5 6

To further illustrate the running times of the 10 algorithms on the 15 functions, Figure 5 plots a bar chart of each algorithm's total time over 30 independent runs. From Figure 5, the running time of the SOAPSO is less than that of the SA_GA, the DA, and the GSA; the PSO takes the least time and the DA the most. The running time of the BSO algorithm is not plotted, because the BSO cannot find a solution when optimizing functions f2 and f9 (D = 1000).

Figure 5: Total time of 30 independent runs of 10 algorithms on 15 benchmark functions.

4.4.9 Exploration and exploitation in benchmark functions

Following the literature [60], [61], [62], Equations (15)–(18) quantify the exploration and exploitation capability of an algorithm.

(15) $Div_j = \dfrac{1}{n} \sum_{i=1}^{n} \left| \mathrm{median}(x^j) - x_i^j \right|$

(16) $Div = \dfrac{1}{D} \sum_{j=1}^{D} Div_j$

(17) $Xpl\% = \dfrac{Div}{Div_{\max}} \times 100$

(18) $Xpt\% = \dfrac{\left| Div - Div_{\max} \right|}{Div_{\max}} \times 100$

with $\mathrm{median}(x^j)$: the median of dimension j over the whole swarm; $x_i^j$: dimension j of swarm individual i; n: the size of the swarm; $Div_j$: the average diversity of dimension j over all individuals; $Div$: the diversity of the swarm in an iteration; $Div_{\max}$: the maximum diversity over all iterations; and $Xpl\%$ and $Xpt\%$: the exploration and exploitation percentages for an iteration, respectively.
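A minimal NumPy sketch of these measures is given below; it assumes the swarm of one iteration is stored as an (n, D) array and that the diversity of every iteration has been collected in a list, both illustrative conventions.

```python
import numpy as np

def diversity(x):
    # Eqs. (15)-(16): per-dimension mean absolute deviation from the
    # median, averaged over all D dimensions.
    div_j = np.mean(np.abs(np.median(x, axis=0) - x), axis=0)  # Div_j
    return np.mean(div_j)                                      # Div

def xpl_xpt(div_history):
    # Eqs. (17)-(18): percentages relative to the maximum diversity
    # observed over all iterations.
    div = np.asarray(div_history)
    div_max = div.max()
    xpl = div / div_max * 100
    xpt = np.abs(div - div_max) / div_max * 100
    return xpl, xpt
```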

Figure 6 shows the exploration and exploitation abilities of the SOAPSO as the number of iterations increases on the benchmark functions f1–f15. As observed from the plotted curves in Figure 6, the SOAPSO maintains a good balance between the exploration and exploitation ratios as the number of iterations increases.

Figure 6: Exploration and exploitation abilities of the SOAPSO in benchmark functions, a) f1 Sphere, b) f2 Schwefel 2.22, c) f3 Rotated, d) f4 Schwefel 2.21, e) f5 Rosenbrock, f) f6 Step, g) f7 Quartic, h) f8 SumSquares, i) f9 SumPower, j) f10 Schwefel, k) f11 Rastrigin, l) f12 Ackley, m) f13 Griewank, n) f14 Penalized1, o) f15 Penalized2.

4.4.10 Performance profiles of algorithms in benchmark functions

The average fitness was selected as the capability index. The algorithmic capability is expressed in performance profiles, which are calculated by Equations (19) and (20).

(19) $r_{f,g} = \mu_{f,g} / \min\{\mu_{f,g} : g \in G\}$

(20) $\rho_g(\tau) = \operatorname{size}\{f \in F : r_{f,g} \le \tau\} / n_f$

where $g$ is an algorithm, $G$ is the set of algorithms, $f$ is a function, $F$ is the set of functions, $n_g$ is the number of algorithms in the experiment, $n_f$ is the number of functions in the experiment, $\mu_{f,g}$ is the average fitness after algorithm $g$ solves function $f$, $r_{f,g}$ is the capability ratio, $\rho_g$ is the algorithmic capability, and $\tau$ is the factor of the best probability [63].
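As an illustration (function and variable names are assumptions, not the authors' code), Equations (19) and (20) can be evaluated from a matrix of average fitness values; the sketch assumes all averages are positive, as for the benchmark functions used here.

```python
import numpy as np

# A minimal sketch of Equations (19)-(20): `mu` is an (n_f, n_g) array of
# average fitness values (rows: functions, columns: algorithms); entries are
# assumed positive so the ratios are well defined.
def performance_profile(mu, taus):
    r = mu / mu.min(axis=1, keepdims=True)       # Equation (19): r_{f,g}
    # Equation (20): fraction of functions on which algorithm g is within a
    # factor tau of the best algorithm
    return np.array([(r <= tau).mean(axis=0) for tau in taus])
```

Plotting each column of the returned array against τ gives the curves in Figure 7.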

Figure 7 shows the capability ratios of the average value for the 10 algorithms on the benchmark functions f1–f15 (D = 1000). The results are plotted on a log₂ scale. As shown in Figure 7, the SOAPSO has the highest probability of being the best. When τ = 1, the probability of the SOAPSO is about 0.73, which is better than that of the others. When τ = 9, the probability that the SOAPSO is the winner on the given test functions is about 1, while the PSO is 0.53, the SA_GA is 0.13, the DA is 0.27, the BSO is 0.33, the GSA is 0.27, the SCA is 0.2, the SSA is 0.33, the MVO is 0.27, and the SOA is 0.33. When τ = 15, the probability of the SOAPSO is still about 1, while the PSO is 0.6, the SA_GA is 0.2, the DA is 0.27, the BSO is 0.33, the GSA is 0.27, the SCA is 0.2, the SSA is 0.4, the MVO is 0.33, and the SOA is 0.33. Regarding the performance curve, the SOAPSO is the best; it reaches 100% when τ ≥ 9. Thus, the performance of the SOAPSO is better than that of the other algorithms.

Figure 7: Performance profile of 10 algorithms on 15 benchmark functions.

4.5 Algorithm performance comparison in PID parameter optimization

In this section, to test the performance of the SOAPSO algorithm, five test control systems are used for PID parameter optimization. Equations (21)–(25) give the transfer functions of the test control systems used in the experiment.

(21) $g_1(s) = \dfrac{2.6}{(2.7s+1)(0.3s+1)}$

(22) $g_2(s) = \dfrac{981.966306}{804.882485\,s^2 + 459.744086\,s + 2.32887523}$

(23) $g_3(s) = \dfrac{5}{2.7s+1}\,e^{-0.5s}$

(24) $g_4(s) = \dfrac{3}{2s+1}\,e^{-3s}$

(25) $g_5(s) = \dfrac{1}{(s+1)^8}$
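The exact fitness index used in the PID experiments is not spelled out in this section, so the following sketch assumes a common choice, the ITAE (integral of time-weighted absolute error) of the closed-loop unit step response, and shows it for the delay-free plant g1(s); all helper names are illustrative assumptions. Plants with dead time, such as g3(s) and g4(s), would additionally require a Padé approximation of the delay term.

```python
import numpy as np
from scipy import signal
from scipy.integrate import trapezoid

# A minimal sketch (assumed ITAE index) of a PID fitness evaluation for
# g1(s) = 2.6 / ((2.7 s + 1)(0.3 s + 1)) under unity feedback.
def itae_fitness_g1(kp, ki, kd, t_end=10.0):
    plant_num = [2.6]
    plant_den = np.polymul([2.7, 1.0], [0.3, 1.0])   # (2.7 s + 1)(0.3 s + 1)
    ctrl_num = [kd, kp, ki]                          # C(s) = (kd s^2 + kp s + ki) / s
    ctrl_den = [1.0, 0.0]
    ol_num = np.polymul(ctrl_num, plant_num)         # open loop C(s) g1(s)
    ol_den = np.polymul(ctrl_den, plant_den)
    closed = signal.TransferFunction(ol_num, np.polyadd(ol_den, ol_num))
    t = np.linspace(0.0, t_end, 2001)
    t, y = signal.step(closed, T=t)
    return trapezoid(t * np.abs(1.0 - y), t)         # ITAE of the step error
```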

In order to test the performance of the SOAPSO algorithm in PID controller parameter optimization, the SOAPSO has been compared with the PSO, SA_GA, DA, BSO, GSA, SCA, SSA, MVO, and the SOA. The mean, standard deviation (Std.), optimal fitness value, worst fitness value, and rank of 30 independent runs for g1–g5 are shown in Table 8. For g1–g4, the population size of the algorithms is 20, the maximum number of iterations of all tests is 20, and the step response time is set to 10 s. For g5, the population size is 50, the maximum number of iterations is 50, and the step response time is set to 50 s. Figure 8 shows the process diagram for optimizing the test control system PID parameters with the SOAPSO. Figure 9 shows the PID parameter optimization model structure of the test control system. Figure 10 shows the fitness curves, Figure 11 shows the ANOVA tests of the global minimum values, and Figure 12 shows the unit step responses of the PID-controlled systems g1 to g5.

Table 8:

Comparison of performance of algorithms in PID controller parameter optimization of 30 independent runs.

Test functions Result Algorithms
PSO SA_GA DA BSO GSA SCA SSA MVO SOA SOAPSO
g1 Mean 0.2267 0.3169 0.1462 0.4320 0.4571 0.0918 0.0651 0.2501 0.1917 0.0500
Std. 0.0877 0.0649 0.0803 0.1403 0.1569 0.0263 0.0513 0.0532 0.11226 0.00235
Worst 0.4205 0.4321 0.3062 0.8831 0.8622 0.1139 0.3329 0.2668 0.42650 0.05830
Best 0.0485 0.1002 0.0503 0.2630 0.2732 0.0483 0.0479 0.0513 0.05774 0.04785
Rank 4 8 5 9 10 3 2 6 7 1
g2 Mean 0.35251 0.5053 0.2967 0.6804 0.6494 0.1007 0.2699 0.3953 0.28875 0.10254
Std. 0.20955 0.1059 0.2134 0.1516 0.1232 8.7707e-4 0.2122 0.1963 0.18233 0.00700
Worst 0.54032 0.6176 0.5411 1.0432 0.9683 0.1034 0.5448 0.5343 0.53616 0.13865
Best 0.1000003 0.1555 0.1000 0.3982 0.3806 0.1000 0.0996 0.1000 0.09991 0.09979
Rank 7 8 4 10 9 4 1 4 3 2
g3 Mean 58.4757 62.4599 58.1507 62.4345 60.7787 24.8454 30.9646 59.5805 42.1538 0.49115
Std. 7.75976 0.1216 9.5559 0.8076 5.3034 21.5239 29.6325 7.6556 27.9025 0.13170
Worst 62.5971 62.6105 62.4997 63.0899 64.1487 62.5282 62.5897 62.5030 62.5479 0.82873
Best 36.0409 62.0356 14.3698 58.3042 42.7711 0.4898 0.3392 32.6095 0.39301 0.32158
Rank 7 10 5 9 8 4 2 6 3 1
g4 Mean 1.8481e+2 2.7179e+2 1.3533e+2 2.7181e+2 2.7665e+2 29.0458 41.7597 1.0848e+2 2.6269e+2 10.0080
Std. 59.6434 0.62334 71.5086 5.5362 10.3088 11.9839 40.9625 56.6750 44.8106 1.01517
Worst 3.1486e+2 2.7391e+2 1.9761e+2 2.7955e+2 3.1639e+2 50.000 1.4962e+2 1.9618e+2 2.7470e+2 12.98208
Best 32.5445 2.71191 24.9192 2.4361e+2 2.7139e+2 14.5588 9.10102 20.0492 26.5763 8.74922
Rank 8 1 6 9 10 4 3 5 7 2
g5 Mean 1.7713e+2 55.3556 81.651133 41.441442 2.3413e+2 85.196656 64.196288 35.721213 46.10528 34.642843
Std. 4.2182e+2 36.00807 79.815329 11.386779 2.1754e+2 1.0050e+2 41.708210 1.411226 26.992197 0.0536118
Worst 2.1298e+3 163.6571 3.6475e+2 90.501277 1.1508e+3 4.2034e+2 2.1021e+2 41.285971 1.8468e+2 34.920259
Best 34.625063 34.6294 34.638061 35.045432 58.321733 34.867448 34.625153 34.643162 34.745734 34.62476
Rank 2 4 5 9 10 8 3 6 7 1
Average Rank 5.6 6.2 5 9.2 9.4 4.6 2.2 5.4 5.4 1.4
Overall Rank 7 8 4 9 10 3 2 5 5 1
  1. The bold values in the table indicate the best results.

Figure 8: A process diagram for optimizing test control system PID parameters by the SOAPSO.

Figure 9: Optimization PID parameters model structure of test control system.

Figure 10: Convergence curves for PID controller parameter optimization, a) g1, b) g2, c) g3, d) g4, e) g5.

Figure 11: ANOVA tests for PID controller parameter optimization, a) g1, b) g2, c) g3, d) g4, e) g5.

Figure 12: Unit step responses for PID controller parameter optimization, a) g1, b) g2, c) g3, d) g4, e) g5.

As can be seen from Table 8, the SOAPSO algorithm has the fastest global convergence speed and the highest convergence precision in all of these functions except g2 and g4; for g2, the SOAPSO algorithm has already reached the theoretical optimal value. From the evolution of the fitness value, it can be seen that the SOAPSO algorithm has a strong ability to find optimal solutions.

For the PID controller parameter optimization, the SOAPSO algorithm yields the smallest mean, standard deviation (Std.), and worst fitness values. The SOAPSO algorithm converges within the maximum number of iterations, with a faster global convergence speed on many functions and higher convergence precision. Moreover, as seen from Figure 11, the SOAPSO is the most robust of these algorithms. Finally, as seen from Figure 12, when the PID controller parameters for g1–g5 are optimized by the SOAPSO algorithm, the unit step responses stabilize quickly and accurately. Therefore, the SOAPSO is also an effective and feasible approach for optimizing the PID parameters of control systems.

4.6 Algorithm performance comparison in constrained engineering problems

Six constrained engineering problems are used to further test the capability of the SOAPSO. These constrained engineering problems are very popular in the literature. A penalty function is used to handle the constraints; a sketch of this constraint-handling idea is given below. The parameter settings for all of the heuristic algorithms still follow Table 3 in Section 4.3.
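As a concrete illustration of this constraint handling, the sketch below uses a static quadratic penalty; the exact penalty form and weight used in the experiments are not stated, so both are assumptions.

```python
# A minimal sketch of a static penalty: each candidate's fitness is its
# objective value plus a large penalty proportional to its squared
# constraint violations; `rho` is an assumed weight.
def penalized_fitness(objective, constraints, x, rho=1e6):
    # constraints: callables g_i with the convention g_i(x) <= 0 when feasible
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + rho * violation
```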

4.6.1 Welded beam design

This is a minimum fabrication cost problem with four parameters and seven constraints. The problem is formulated in Equations (26)–(43). The parameters of the structural system are shown in Figure 13 [9]. Earlier results come from the literature: GSA [8], MFO [9], MVO [13], CPSO [64], and HS [65]. For this problem, the SOAPSO is compared to the PSO, SA_GA, DA, BSO, GSA, SCA, SSA, MVO, and the SOA; the best obtained values are given in Table 9.

(26) Consider $x = [x_1, x_2, x_3, x_4] = [h, l, t, b]$,

(27) Minimize $f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14 + x_2)$,

(28) Subject to $g_1(x) = \tau(x) - \tau_{\max} \le 0$,

(29) $g_2(x) = \sigma(x) - \sigma_{\max} \le 0$,

(30) $g_3(x) = x_1 - x_4 \le 0$,

(31) $g_4(x) = 1.10471 x_1^2 + 0.04811 x_3 x_4 (14 + x_2) - 5 \le 0$,

(32) $g_5(x) = 0.125 - x_1 \le 0$,

(33) $g_6(x) = \delta(x) - \delta_{\max} \le 0$,

(34) $g_7(x) = P - P_c(x) \le 0$,

(35) Variable range $0.1 \le x_1 \le 2$,

(36) $0.1 \le x_2 \le 10$,

(37) $0.1 \le x_3 \le 10$,

(38) $0.1 \le x_4 \le 2$,

(39) with $\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2}$,

(40) $\tau' = \dfrac{P}{\sqrt{2} x_1 x_2}$, $\tau'' = \dfrac{MR}{J}$, $M = P\left(L + \dfrac{x_2}{2}\right)$,

(41) $R = \sqrt{\dfrac{x_2^2}{4} + \left(\dfrac{x_1 + x_3}{2}\right)^2}$, $J = 2\left\{\sqrt{2}\, x_1 x_2 \left[\dfrac{x_2^2}{4} + \left(\dfrac{x_1 + x_3}{2}\right)^2\right]\right\}$,

(42) $\sigma(x) = \dfrac{6PL}{x_4 x_3^2}$, $\delta(x) = \dfrac{6PL^3}{E x_4 x_3^3}$,

(43) $P_c(x) = \dfrac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2}\left(1 - \dfrac{x_3}{2L}\sqrt{\dfrac{E}{4G}}\right)$,

and P = 6000 lbs = 2721.6 kg, L = 14 in = 0.3556 m, E = 30 × 10⁶ psi = 206,850 MPa, G = 12 × 10⁶ psi = 82,740 MPa, τ_max = 13,600 psi = 93.772 MPa, σ_max = 30,000 psi = 206.85 MPa, δ_max = 0.25 in = 0.00635 m.
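For illustration, the cost (27) and the shear-stress constraint (28) can be evaluated as below (helper names are assumptions, not the authors' code); the remaining constraints follow the same pattern from Equations (29)–(43).

```python
import math

# A minimal sketch evaluating the welded beam cost and constraint g1 with
# x = [h, l, t, b] and the constants listed above.
P, L, E, G, TAU_MAX = 6000.0, 14.0, 30e6, 12e6, 13600.0

def welded_beam_cost_and_g1(x):
    h, l, t, b = x
    cost = 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)     # Equation (27)
    tau1 = P / (math.sqrt(2.0) * h * l)                          # tau'
    M = P * (L + l / 2.0)                                        # Equation (40)
    R = math.sqrt(l**2 / 4.0 + ((h + t) / 2.0) ** 2)             # Equation (41)
    J = 2.0 * math.sqrt(2.0) * h * l * (l**2 / 4.0 + ((h + t) / 2.0) ** 2)
    tau2 = M * R / J                                             # tau''
    tau = math.sqrt(tau1**2 + tau1 * tau2 * l / R + tau2**2)     # Equation (39)
    return cost, tau - TAU_MAX                                   # g1(x) <= 0 feasible
```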

Figure 13: Design parameters of the welded beam design problem.

Table 9:

Comparison results of the welded beam design problem.

Algorithm Optimal values for variables Optimal cost Rank
h l t b
GSA [8] 0.182129 3.856979 10.0000 0.202376 1.87995 14
MFO [9] 0.2057 3.4703 9.0364 0.2057 1.72452 8
MVO [13] 0.205463 3.473193 9.044502 0.205695 1.72645 9
CPSO [64] 0.202369 3.544214 9.048210 0.205723 1.72802 10
HS [65] 0.2442 6.2231 8.2915 0.2443 2.3807 15
PSO 0.20437461682 3.27746206207 9.03907307954 0.20573458497 1.69700648019 2
SA-GA 0.26572876298 2.77789863579 7.63164040030 0.28853829376 1.99412873170 13
DA 0.204403919934271 3.270476762038852 9.060189939938688 0.205612356418801 1.698792090915354 5
BSO 0.200986698000924 3.340190407114191 9.034684361089031 0.205817980582575 1.700320898804440 7
GSA 0.12743403146 5.89076184871 8.05262845397 0.25908004232 2.10212926568 12
SCA 0.20112344041 3.23948182622 9.40574225336 0.20795790595 1.76704865429 11
SSA 0.202070956658453 3.319454357077143 9.036787303432986 0.205728837179598 1.698832459557541 6
MVO 0.20397627841 3.28970350716 9.03536739179 0.20582407425 1.69811381975 4
SOA 0.19348578918 3.489546622637 9.027709656861 0.20615302629 1.69714450048 3
SOAPSO 0.205737406556505 3.253602499355056 9.036942735165496 0.205751419536403 1.695542183515181 1
  1. The bold values in the table indicate the best results.

In Table 9, the SOAPSO algorithm is better than the GSA, MFO, MVO, CPSO, and the HS algorithm reported in the literature. The SOAPSO is also better than the PSO, SA_GA, DA, BSO, GSA, SCA, SSA, MVO, and the SOA.

4.6.2 Pressure vessel design

This is also a minimum fabrication cost problem, with four parameters and four constraints. The problem is formulated in Equations (44)–(53). The parameters of the structural system are shown in Figure 14 [9]. Earlier results come from the literature: MFO [9], ES [66], DE [67], ACO [68], and GA [69]. For this problem, the SOAPSO is compared to the PSO, SA_GA, DA, BSO, GSA, SCA, SSA, MVO, and the SOA; the best obtained values are given in Table 10.

(44) Consider $x = [x_1, x_2, x_3, x_4] = [T_s, T_h, R, L]$,

(45) Minimize $f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$,

(46) Subject to $g_1(x) = -x_1 + 0.0193 x_3 \le 0$,

(47) $g_2(x) = -x_2 + 0.00954 x_3 \le 0$,

(48) $g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0$,

(49) $g_4(x) = x_4 - 240 \le 0$,

(50) Variable range $0 \le x_1 \le 99$,

(51) $0 \le x_2 \le 99$,

(52) $10 \le x_3 \le 200$,

(53) $10 \le x_4 \le 200$,
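For illustration, the objective (45) and the constraints (46)–(49) can be paired with the penalized_fitness sketch given at the start of Section 4.6; all names remain illustrative assumptions.

```python
import math

def pressure_vessel_cost(x):
    x1, x2, x3, x4 = x                                   # [Ts, Th, R, L]
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)  # Equation (45)

pressure_vessel_constraints = [
    lambda x: -x[0] + 0.0193 * x[2],                          # Equation (46)
    lambda x: -x[1] + 0.00954 * x[2],                         # Equation (47)
    lambda x: (-math.pi * x[2]**2 * x[3]
               - 4.0 / 3.0 * math.pi * x[2]**3 + 1296000.0),  # Equation (48)
    lambda x: x[3] - 240.0,                                   # Equation (49)
]

# e.g. fitness = penalized_fitness(pressure_vessel_cost,
#                                  pressure_vessel_constraints, x)
```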

Figure 14: Pressure vessel design problem.

Table 10:

Comparison of results for the pressure vessel design.

Algorithm Optimal values for variables Optimal cost Rank
T s T h R L
MFO [9] 0.8125 0.4375 42.098445 176.636596 6059.7143 11
ES [66] 0.8125 0.4375 42.098087 176.640518 6059.7456 13
DE [67] 0.8125 0.4375 42.098411 176.637690 6059.7340 12
ACO [68] 0.8125 0.4375 42.103624 176.572656 6059.0888 10
GA [69] 0.8125 0.4375 42.097398 176.654050 6059.9463 14
PSO 0.93627266112 0.41391783346 47.19019859907 123.06285131625 6317.0167340514 15
SA-GA 0.83804097369 0.41223740796 45.10610463950 142.64078515697 5931.2868373440 8
DA 0.7424534886963 0.3694874063445 40.3200553665492 200.0000000000000 5735.104525550256 2
BSO 0.7577708260759 0.3768722559688 41.0753218111850 189.7409640585777 5764.131209400236 5
GSA 0.89533101776 0.43654377356 47.89640596198 115.96279725902 6057.9309555313 9
SCA 0.71165237901 0.39215740603 40.39056304889 200.00000000000 5903.0036698882 7
SSA 0.7453325684133 0.3714831435280 40.4653356671543 197.9814104958969 5740.503728086691 4
MVO 0.75462696023 0.37830685291 40.94839768196 191.64503059607 5764.4347452930 6
SOA 0.76961590364 41.5284631287 0.388196715944 183.84147207932 5735.1355906012 3
SOAPSO 0.7430438520196 0.3704103258374 40.3197048517771 200.0000000000000 5734.983040625856 1
  1. The bold values in the table indicate the best results.

For this problem, the SOAPSO algorithm is better than the MFO, ES, DE, ACO, and the GA algorithm reported in the literature. The SOAPSO is also better than the PSO, SA_GA, DA, BSO, GSA, SCA, SSA, MVO, and the SOA.

4.6.3 Cantilever beam design problem

The problem has five parameters and, apart from the variable bounds, a single functional constraint. The problem is formulated in Equations (54)–(57). The parameters of the structural system are shown in Figure 15 [9]. Earlier results come from the literature: MFO [9], CS [70], GCA [71], MMA [71], and SOS [72]. For this problem, the SOAPSO is compared to the PSO, SA_GA, DA, BSO, GSA, SCA, SSA, MVO, and the SOA; the best obtained values are given in Table 11.

(54) Consider $x = [x_1, x_2, x_3, x_4, x_5]$,

(55) Minimize $f(x) = 0.0624 (x_1 + x_2 + x_3 + x_4 + x_5)$,

(56) Subject to $g(x) = \dfrac{61}{x_1^3} + \dfrac{37}{x_2^3} + \dfrac{19}{x_3^3} + \dfrac{7}{x_4^3} + \dfrac{1}{x_5^3} - 1 \le 0$,

(57) Variable range $0.01 \le x_1, x_2, x_3, x_4, x_5 \le 100$,
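As a quick sanity check (illustrative code, not the authors'), the weight (55) and the constraint (56) can be evaluated at a candidate solution; the constraint is active at the optimum, so g should be approximately zero for the tabulated best designs.

```python
# A minimal sketch of the cantilever beam objective and constraint.
def cantilever(x):
    weight = 0.0624 * sum(x)                                   # Equation (55)
    g = (61.0 / x[0]**3 + 37.0 / x[1]**3 + 19.0 / x[2]**3
         + 7.0 / x[3]**3 + 1.0 / x[4]**3) - 1.0                # Equation (56)
    return weight, g

# Rounded SOAPSO solution from Table 11: the weight is close to 1.33995 and
# the constraint value is close to zero (active constraint).
w, g = cantilever([6.0197, 5.3045, 4.4909, 3.5040, 2.1544])
```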

Figure 15: Cantilever beam design problem.

Table 11:

Comparison of results for the cantilever beam design.

Algorithm Optimal values for variables Optimum weight Rank
x 1 x 2 x 3 x 4 x 5
MFO [9] 5.9848717732 5.3167269243 4.4973325858 3.5136164677 2.1616202934 1.339988086 8
CS [70] 6.0089 5.3049 4.5023 3.5077 2.1504 1.33999 9
GCA [71] 6.0100 5.3000 4.4900 3.4900 2.1500 1.3400 10
MMA [71] 6.0100 5.3000 4.4900 3.4900 2.1500 1.3400 11
SOS [72] 6.01878 5.30344 4.49587 3.49896 2.15564 1.33996 4
PSO 6.007219438 5.311747232 4.505611438 3.4904346887 2.158626706 1.339963522 5
SA-GA 6.251285023 5.460509756 4.149903306 3.8032391760 1.974102742 1.350285757 14
DA 6.0563407804 5.3241124746 4.4954051989 3.50655517270 2.0949566737 1.3401929632 13
BSO 6.0357867996 5.2975225360 4.4867307802 3.49489797255 2.1588279511 1.3399679864 6
GSA 6.020285873 5.305304583 4.512114944 3.4939372220 2.142187864 1.339969652 7
SCA 5.801308754 5.589807963 4.497563735 3.4994713866 2.262668613 1.351011196 15
SSA 6.0161915790 5.3091446860 4.4940346160 3.50135576164 2.1527729288 1.3399513774 1
MVO 6.017944991 5.336576175 4.493102726 3.4797461041 2.146292918 1.340024388 12
SOA 6.014092415 5.315583298 4.484154000 3.5033360363 2.156331174 1.339957455 3
SOAPSO 6.0197181873 5.3045384437 4.4909167501 3.50400187111 2.1543547842 1.3399528868 2
  1. The bold values in the table indicate the best results.

In Table 11, the SOAPSO algorithm proves to be better than the MFO, CS, GCA, MMA, and the SOS algorithm reported in the literature. The SOAPSO is also better than the PSO, SA_GA, DA, BSO, GSA, SCA, MVO, and the SOA. There is little difference between the optimal value of the SOAPSO algorithm and that of the SSA algorithm.

4.6.4 Piston lever problem

This is a problem of locating the piston components, with four variables and four constraints. The problem is formulated in Equations (58)–(69). The parameters of the structural system are shown in Figure 16 [73]. Earlier results come from the literature: DE [74], GA [74], HPSO [74], CS [75], and SNS [73]. The SOAPSO is compared to the PSO, SA_GA, GSA, SCA, MVO, and the SOA; the best obtained values for the variables and the best obtained objective values are given in Table 12.

In Table 12, except for the SNS, the SOAPSO algorithm proves to be better than the DE, GA, HPSO, and the CS algorithm reported in the literature. The SOAPSO is also better than the PSO, SA_GA, SCA, GSA, MVO, and the SOA. Although the optimum of the SOAPSO is slightly worse than that of the SNS algorithm, the result of the SOAPSO is very close to the theoretical best solution.

(58) Consider $x = [x_1, x_2, x_3, x_4] = [H, B, D, X]$,

(59) Minimize $f(x) = \frac{1}{4}\pi x_3^2 (L_2 - L_1)$,

(60) Subject to $g_1(x) = QL\cos\theta - RF \le 0$,

(61) $g_2(x) = Q(L - x_4) - M_{\max} \le 0$,

(62) $g_3(x) = 1.2(L_2 - L_1) - L_1 \le 0$,

(63) $g_4(x) = \dfrac{x_3}{2} - x_2 \le 0$,

(64) with $R = \dfrac{\left|-x_4(x_4\sin\theta + x_1) + x_1(x_2 - x_4\cos\theta)\right|}{\sqrt{(x_4 - x_2)^2 + x_1^2}}$,

(65) $F = \dfrac{\pi P x_3^2}{4}$,

(66) $L_1 = \sqrt{(x_4 - x_2)^2 + x_1^2}$,

with θ = 45°, Q = 10,000 lbs = 4536 kg, L = 240 in = 6.096 m, M_max = 1.8 × 10⁶ lbs·in, P = 1500 psi = 10.3425 MPa.

Table 12:

Comparison of results of the piston lever.

Algorithm Optimal values for variables Optimal value Rank
x 1(H) x 2(B) x 3(D) x 4(X)
DE [74] 129.4 2.43 119.80 4.75 159 9
GA [74] 250.0 3.96 60.03 5.91 161 11
HPSO [74] 135.5 2.48 116.62 4.75 161 11
CS [75] 0.050 2.043 120.000 4.085 8.427 5
SNS [73] 0.050 2.042 120.000 4.083 8.412698349 1
PSO 0.0500000000000 2.2004458009526 4.3429316143480 110.6668235984532 10.24259065154840 8
SA-GA 0.0500000000000 2.0417619164866 4.0830416789885 119.9999715756912 8.413739285922011 4
GSA 215.9646294854686 344.6817103997157 03.2179597335794 60.1907279614403 329.0280579311141 12
SCA 0.0589732446248 2.0456822965266 4.0848813955281 120.0000000000000 8.520961983238227 7
MVO 0.0500000000000 2.0502088605801 4.0908333358488 119.9640112340799 8.479423852552021 6
SOA 0.7649083155587 2.0351385415011 4.0554693386457 120.0000000000000 8.413476646923973 3
SOAPSO 0.0500000000000 2.0414808420678 4.0830580681750 120.0000000000000 8.412928462010163 2
  1. The bold values in the table indicate the best results.

(67) $L_2 = \sqrt{(x_4\sin\theta + x_1)^2 + (x_2 - x_4\cos\theta)^2}$,

(68) Variable range $0.05 \le x_1, x_2, x_4 \le 500$,

(69) $0.05 \le x_3 \le 120$.

4.6.5 Tubular column design

This is also a minimum cost problem, with two parameters and six constraints. The problem is formulated in Equations (70)–(79). The parameters of the structural system are shown in Figure 17 [73]. Earlier results come from the literature: CS [75], ISA [76], FA [77], ASO [78], and SNS [73]. The SOAPSO is compared to the PSO, SA_GA, GSA, SCA, MVO, and the SOA; the best obtained values for the variables and the best obtained objective values are given in Table 13.

(70) Consider $x = [x_1, x_2] = [d, t]$,

(71) Minimize $f(x) = 9.8 x_1 x_2 + 2 x_1$,

(72) Subject to $g_1(x) = \dfrac{P}{\pi x_1 x_2 \sigma_y} - 1 \le 0$,

(73) $g_2(x) = \dfrac{8 P L^2}{\pi^3 E x_1 x_2 (x_1^2 + x_2^2)} - 1 \le 0$,

(74) $g_3(x) = \dfrac{2.0}{x_1} - 1 \le 0$,

(75) $g_4(x) = \dfrac{x_1}{14} - 1 \le 0$,

(76) $g_5(x) = \dfrac{0.2}{x_2} - 1 \le 0$,

(77) $g_6(x) = \dfrac{x_2}{8} - 1 \le 0$,

(78) Variable range $2 \le x_1 \le 14$,

(79) $0.2 \le x_2 \le 0.8$,

with σ_y = 500 kgf/cm² = 49.03325 MPa and E = 0.85 × 10⁶ kgf/cm² = 83,356.525 MPa.

Figure 16: Piston lever problem.

Table 13:

Comparison results of the Tubular column design.

Algorithm Optimal values for variables Optimal cost Rank
x 1(d) x 2(t)
CS [75] 5.45139 0.29196 26.53217 10
ISA [76] 5.45115623 0.29196547 26.5313 8
FA [77] N/A N/A 26.4994969 5
ASO [78] N/A N/A 26.53137828 9
SNS [73] N/A N/A 26.4994969 5
PSO 5.45241248 0.29161380 26.48643791 3
SA-GA 5.48259506 0.28999485 26.54702124 11
GSA 5.46443535 0.29261817 26.59900262 12
SCA 5.45179801 0.29199765 26.50433586 7
MVO 5.45225365 0.29161486 26.48627240 1
SOA 5.45386190 0.29153427 26.48664773 4
SOAPSO 5.452336447668137 0.291608418803226 26.48643423 2
  1. The bold values in the table indicate the best results.

In Table 13, the SOAPSO algorithm proves to be better than the CS, ISA, FA, ASO, and the SNS algorithms reported in the literature. Except for the MVO, the SOAPSO is also better than the PSO, SA_GA, GSA, SCA, and the SOA. Although the optimum of the SOAPSO is slightly worse than that of the MVO, the result of the SOAPSO is very close to the theoretical best solution.

4.6.6 Reinforced concrete beam design

This is an optimization problem of designing a reinforced concrete beam, with three variables and two constraints. The problem is formulated in Equations (80)–(86); since x1 and x2 are restricted to discrete sets, a sketch of one common way to handle such mixed variables is given after the formulation. Figure 18 shows the schematic diagram [73]. Earlier results come from the literature: GHN-EP [74], FA [79], CS [75], ASO [78], and SNS [73]. The SOAPSO is compared to the PSO, SA_GA, GSA, SCA, MVO, and the SOA; the best obtained values for the variables and the best obtained objective values are given in Table 14.

(80) Consider $x = [x_1, x_2, x_3] = [A_s, b, h]$,

(81) Minimize $f(x) = 2.9 x_1 + 0.6 x_2 x_3$,

(82) Subject to $g_1(x) = \dfrac{x_2}{x_3} - 4 \le 0$,

(83) $g_2(x) = 180 + 7.375\dfrac{x_1^2}{x_3} - x_1 x_2 \le 0$,

(84) Variable range $x_1 \in \{6, 6.16, 6.32, 6.6, 7, 7.11, 7.2, 7.8, 7.9, 8, 8.4\}$,

(85) $x_2 \in \{28, 29, 30, \ldots, 40\}$,

(86) $5 \le x_3 \le 10$.
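One common way to handle the discrete sets in (84) and (85) within a continuous metaheuristic is to snap each candidate value to the nearest allowed member before evaluating the objective (81); the exact mechanism used in the experiments is not stated, so the sketch below is an assumption.

```python
import numpy as np

# Allowed discrete values from Equations (84) and (85).
AS_SET = np.array([6, 6.16, 6.32, 6.6, 7, 7.11, 7.2, 7.8, 7.9, 8, 8.4])
B_SET = np.arange(28, 41)                      # {28, 29, ..., 40}

def snap(value, allowed):
    # Nearest allowed member of a discrete set
    return float(allowed[np.argmin(np.abs(allowed - value))])

def rc_beam_cost(x):
    a_s = snap(x[0], AS_SET)                   # discrete A_s
    b = snap(x[1], B_SET)                      # discrete b
    h = float(np.clip(x[2], 5.0, 10.0))        # continuous h, Equation (86)
    return 2.9 * a_s + 0.6 * b * h             # Equation (81)
```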

Figure 17: Tubular column design problem.

Figure 18: Reinforced concrete beam design problem.

Table 14:

Comparison results for reinforced concrete beam design.

Algorithm Optimal values for variables Optimal value Rank
x 1(A s ) x 2(b) x 3(h)
GHN-EP [74] 6.32 34 8.637180 362.00648 12
FA [79] 6.32 34 8.5000 359.2080 6
CS [75] 6.32 34 8.5000 359.2080 6
ASO [78] 6.32 34 8.5000 359.2080 6
SNS [73] 6.32 34 8.5000 359.2080 6
PSO 0.21973978 0.48774465 8.49953948 359.20330245 1
SA-GA 0.22271591 0.49673391 8.49953948 359.20330245 1
GSA 0.27021862 0.46197772 8.52039835 359.62412635 11
SCA 0.23315802 0.50078494 8.49841820 359.23116167 10
MVO 0.22893546 0.48404040 8.49958944 359.20335774 5
SOA 0.21878243 0.50307295 8.49954126 359.20330246 3
SOAPSO 0.20313921 0.49336095 8.49953944 359.20330245 1
  1. The bold values in the table indicate the best results.

In Table 14, the SOAPSO algorithm proves to be better than the GHN-EP, FA, CS, ASO, and the SNS algorithm reported in the literature. Except for the PSO and the SA_GA, which tie with it, the SOAPSO is also better than the GSA, SCA, MVO, and the SOA. Although the optimum of the SOAPSO is the same as that of the PSO and the SA_GA algorithm, the result of the SOAPSO has reached the theoretical best solution.

In brief, the SOAPSO algorithm proves to be better than the other algorithms in most of these practical studies and resolves these practical problems more effectively.

5 Conclusions

A SOAPSO algorithm is presented based on the hybrid of the seeker optimization algorithm and particle swarm optimization. The SOAPSO algorithm is tested from different perspectives: benchmark function optimization, PID control parameter optimization problems, and constrained engineering problems.

According to the comparative analysis of the experiments, the following conclusions can be drawn:

  1. The hybrid of the seeker optimization algorithm and particle swarm optimization increases the diversity of the seekers, enlarges the search space, and avoids premature convergence.

  2. Among the PSO, SA_GA, DA, BSO, GSA, SCA, SSA, MVO, SOA, and the SOAPSO algorithm, the SOAPSO has the highest optimization capability on the benchmark functions.

  3. On the benchmark functions, the SOAPSO has almost the same computational complexity as the SOA.

  4. The running time of the SOAPSO on the benchmark functions is relatively high; among the 10 algorithms compared, it is still better than that of the SA_GA, DA, BSO, and the GSA algorithm.

  5. The SOAPSO maintains a good balance between exploration and exploitation as the number of iterations increases.

  6. The SOAPSO can solve real challenging problems, such as the PID control parameter optimization problems and the classical constrained engineering optimization problems.

  7. Further improvements and applications can be explored in future studies.


Corresponding author: Shaomi Duan, Kunming University of Science and Technology, Kunming, Yunnan, China, E-mail:

Funding source: National Natural Science Foundation of China http://dx.doi.org/10.13039/501100001809

Award Identifier / Grant number: 51766005

Award Identifier / Grant number: 52166001

About the authors

Haipeng Liu

Haipeng Liu is an associate professor in the Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China. His research interests include: new energy technology, energy saving, and simulation, modeling and optimization of manufacturing process using intelligence techniques.

Shaomi Duan

Shaomi Duan is currently a PhD Student in the Faculty of Civil Engineering and Mechanics, Kunming University of Science and Technology, Kunming, Yunnan, China. Her research interests include: modeling and optimization of manufacturing process using statistical and computational intelligence techniques; and optimization using metaheuristics.

Huilong Luo

Huilong Luo is a professor in the Faculty of Civil Engineering and Mechanics, Kunming University of Science and Technology, Kunming, Yunnan, China. His main research interests are optimization, artificial intelligence, manufacturing processes, heat exchange, and energy saving.

  1. Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: This work was supported by National Natural Science Foundation of China (http://dx.doi.org/10.13039/501100001809) Grant 51766005, 52166001.

  3. Conflict of interest statement: The authors declare no conflicts of interest regarding this article.

References

[1] D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Trans. Evol. Comput., vol. 1, pp. 67–82, 1997, https://doi.org/10.1109/4235.585893.

[2] J. H. Holland, “Genetic algorithms,” Sci. Am., vol. 267, pp. 66–72, 1992, https://doi.org/10.1038/scientificamerican0792-66.

[3] R. C. Eberhart and J. A. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the 6th International Symposium on Micro Machine and Human Science (MHS ’95), Nagoya, Japan, IEEE, 1995, pp. 39–43, https://doi.org/10.1109/MHS.1995.494215.

[4] E. Aarts and P. Laarhoven, “Simulated annealing: an introduction,” Stat. Neerl., vol. 43, pp. 31–52, 1989, https://doi.org/10.1111/j.1467-9574.1989.tb01245.x.

[5] W. G. Zong, J. H. Kim, and G. V. Loganathan, “A new heuristic optimization algorithm, harmony search,” Simulation, vol. 76, pp. 60–68, 2001, https://doi.org/10.1177/003754970107600201.

[6] S. Mirjalili, “Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems,” Neural Comput. Appl., vol. 29, pp. 1–21, 2016, https://doi.org/10.1007/s00521-015-1920-1.

[7] Y. Shi, “An optimization algorithm based on brainstorming process,” Int. J. Swarm Intell. Res. (IJSIR), vol. 2, no. 4, pp. 35–62, 2011, https://doi.org/10.4018/ijsir.2011100103.

[8] E. Rashedi, H. Nezamabadi-Pour, and S. Saryazdi, “GSA: a gravitational search algorithm,” Inf. Sci., vol. 179, no. 13, pp. 2232–2248, 2009, https://doi.org/10.1016/j.ins.2009.03.004.

[9] S. Mirjalili, “Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm,” Knowl. Base Syst., vol. 89, pp. 228–249, 2015, https://doi.org/10.1016/j.knosys.2015.07.006.

[10] B. S. Yıldız, “Optimal design of automobile structures using moth-flame optimization algorithm and response surface methodology,” Mater. Test., vol. 62, no. 4, pp. 372–377, 2020, https://doi.org/10.3139/120.111494.

[11] S. Mirjalili, “SCA: a sine cosine algorithm for solving optimization problems,” Knowl. Base Syst., vol. 96, no. 15, pp. 120–133, 2016, https://doi.org/10.1016/j.knosys.2015.12.022.

[12] S. Gandomi and Z. Mirjalili, “Salp swarm algorithm: a bio-inspired optimizer for engineering design problems,” Adv. Eng. Software, vol. 114, pp. 163–191, 2017, https://doi.org/10.1016/j.advengsoft.2017.07.002.

[13] S. Mirjalili, S. M. Mirjalili, and A. Hatamlou, “Multi-verse optimizer: a nature-inspired algorithm for global optimization,” Neural Comput. Appl., vol. 17, pp. 16–19, 2015, https://doi.org/10.1007/s00521-015-1870-7.

[14] M. Tuba, I. Brajevic, and R. Jovanovic, “Hybrid seeker optimization algorithm for global optimization,” Appl. Math. Inf. Sci., vol. 7, no. 3, pp. 867–875, 2013, https://doi.org/10.12785/amis/070304.

[15] J. Jia, S. Song, T. Cheng, S. Gao, and Y. Todo, “An artificial bee colony algorithm search guided by scale-free networks,” Inf. Sci., vol. 473, pp. 142–165, 2019, https://doi.org/10.1016/j.ins.2018.09.034.

[16] A. H. Gandomi and A. H. Alavi, “Krill herd: a new bio-inspired optimization algorithm,” Commun. Nonlinear Sci. Numer. Simulat., vol. 17, pp. 4831–4845, 2012, https://doi.org/10.1016/j.cnsns.2012.05.010.

[17] G. Wang, S. Deb, and Z. Cui, “Monarch butterfly optimization,” Neural Comput. Appl., vol. 31, pp. 1995–2014, 2019, https://doi.org/10.1007/s00521-015-1923-y.

[18] G. Wang, S. Deb, and L. S. Coelho, “Elephant herding optimization,” in 2015 3rd International Symposium on Computational and Business Intelligence, Bali, Indonesia, IEEE, 2015, pp. 1–5, https://doi.org/10.1109/ISCBI.2015.8.

[19] G. Wang, “Moth search algorithm: a bio-inspired metaheuristic algorithm for global optimization problems,” Memetic Comput., vol. 10, pp. 151–164, 2018, https://doi.org/10.1007/s12293-016-0212-3.

[20] S. Li, H. Chen, M. Wang, A. A. Heidari, and S. Mirjalili, “Slime mould algorithm: a new method for stochastic optimization,” Future Generat. Comput. Syst., vol. 111, pp. 300–323, 2020, https://doi.org/10.1016/j.future.2020.03.055.

[21] A. A. Heidari, H. Faris, I. Aljarah, and M. Mafarja, “Harris hawks optimization: algorithm and applications,” Future Generat. Comput. Syst., vol. 97, pp. 849–872, 2019, https://doi.org/10.1016/j.future.2019.02.028.

[22] B. S. Yıldız, “The spotted hyena optimization algorithm for weight-reduction of automobile brake components,” Mater. Test., vol. 62, no. 4, pp. 383–388, 2020, https://doi.org/10.3139/120.111495.

[23] H. Abderazek, B. S. Yıldız, A. R. Yildiz, E. S. Albak, and S. Bureerat, “Butterfly optimization algorithm for optimum shape design of automobile suspension components,” Mater. Test., vol. 62, no. 4, pp. 365–370, 2020, https://doi.org/10.3139/120.111492.

[24] B. S. Yildiz, A. R. Yildiz, N. Pholdee, S. Bureerat, and V. Patel, “The Henry gas solubility optimization algorithm for optimum structural design of automobile brake components,” Mater. Test., vol. 62, no. 3, pp. 261–264, 2020, https://doi.org/10.3139/120.111479.

[25] H. Özkaya, M. Yıldız, A. R. Yildiz, S. Bureerat, and S. M. Sait, “The equilibrium optimization algorithm and the response surface based metamodel for optimal structural design of vehicle components,” Mater. Test., vol. 62, pp. 492–496, 2020, https://doi.org/10.3139/120.111509.

[26] B. S. Yildiz, “The mine blast algorithm for the structural optimization of electrical vehicle components,” Mater. Test., vol. 62, no. 5, pp. 497–501, 2020, https://doi.org/10.3139/120.111511.

[27] B. S. Yildiz, “Natural frequency optimization of vehicle components using the interior search algorithm,” Mater. Test., vol. 59, no. 5, pp. 456–458, 2017, https://doi.org/10.3139/120.111018.

[28] C. Juang, “A hybrid of genetic algorithm and particle swarm optimization for recurrent network design,” IEEE Trans. Syst. Man Cybern. B Cybern., vol. 34, no. 2, pp. 997–1006, 2004, https://doi.org/10.1109/TSMCB.2003.818557.

[29] B. S. Yildiz and A. R. Yildiz, “Comparison of grey wolf, whale, water cycle optimization algorithm, ant lion and sine-cosine algorithms for the optimization of a vehicle engine connecting rod,” Mater. Test., vol. 60, no. 3, pp. 311–315, 2018, https://doi.org/10.3139/120.111153.

[30] G. Wang, L. Guo, H. Duan, H. Wang, L. Liu, and M. Shao, “A hybrid metaheuristic DE/CS algorithm for UCAV three-dimension path planning,” Sci. World J., vol. 4, pp. 1–11, 2012, https://doi.org/10.1100/2012/583973.

[31] B. S. Yildiz and A. R. Yildiz, “The Harris hawks optimization algorithm, salp swarm optimization algorithm, grasshopper optimization algorithm and dragonfly algorithm for structural design optimization of vehicle components,” Mater. Test., vol. 61, no. 8, pp. 744–748, 2019, https://doi.org/10.3139/120.111379.

[32] N. Holden and A. A. Freitas, “A hybrid particle swarm/ant colony algorithm for the classification of hierarchical biological data,” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS ’05), Pasadena, CA, USA, IEEE, 2005, pp. 100–107, https://doi.org/10.1109/SIS.2005.1501608.

[33] A. R. Yildiz, B. S. Yildiz, S. M. Sait, and X. Y. Li, “The Harris hawks, grasshopper and multiverse optimization algorithms for the selection of optimal machining parameters in manufacturing operations,” Mater. Test., vol. 61, pp. 725–733, 2019, https://doi.org/10.3139/120.111377.

[34] R. Pitakaso, K. Sethanan, and T. Jamrus, “Hybrid PSO and ALNS algorithm for software and mobile application for transportation in ice manufacturing industry 3.5,” Comput. Ind. Eng., vol. 144, pp. 1–13, 2020, https://doi.org/10.1016/j.cie.2020.106461.

[35] B. S. Yıldız, N. Pholdee, S. Bureerat, A. R. Yildiz, and S. M. Sait, “Robust design of a robot gripper mechanism using new hybrid grasshopper optimization algorithm,” Expert Syst., vol. 38, no. 3, pp. 1–15, 2021, https://doi.org/10.1111/exsy.12666.

[36] A. R. Yıldız and M. U. Erdaş, “A new hybrid Taguchi-salp swarm optimization algorithm for the robust design of real-world engineering problems,” Mater. Test., vol. 63, pp. 157–162, 2021, https://doi.org/10.1515/mt-2020-0022.

[37] A. R. Yildiz, S. Bureerat, E. Kurtuluş, and S. M. Sait, “A novel hybrid Harris hawks simulated annealing algorithm and RBF-based metamodel for design optimization of highway guardrails,” Mater. Test., vol. 62, pp. 251–260, 2020, https://doi.org/10.3139/120.111478.

[38] C. Dai, Y. Zhu, and W. Chen, “Seeker optimization algorithm,” in Proceedings of the 2006 International Conference on Computational Intelligence and Security, vol. 1, Guangzhou, China, IEEE Press, 2006, pp. 225–229, https://doi.org/10.1109/ICCIAS.2006.294126.

[39] C. Dai, W. Chen, and L. Li, “Seeker optimization algorithm for parameter estimation of time-delay chaotic systems,” Phys. Rev. E – Stat. Nonlinear Soft Matter Phys., vol. 83, no. 3, Art. no. 036203, 2011, https://doi.org/10.1103/PhysRevE.83.036203.

[40] C. Dai, W. Chen, Y. Zhu, and X. Zhang, “Seeker optimization algorithm for optimal reactive power dispatch,” IEEE Trans. Power Syst., vol. 24, no. 3, pp. 1218–1231, 2009, https://doi.org/10.1109/TPWRS.2009.2021226.

[41] C. Dai, W. Chen, Y. Song, and Y. Zhu, “Seeker optimization algorithm: a novel stochastic search algorithm for global numerical optimization,” J. Syst. Eng. Electron., vol. 21, no. 2, pp. 300–311, 2010, https://doi.org/10.3969/j.issn.1004-4132.2010.02.021.

[42] C. Dai, W. Chen, and Y. Zhu, “Seeker optimization algorithm for digital IIR filter design,” IEEE Trans. Ind. Electron., vol. 57, no. 5, pp. 1710–1718, 2010, https://doi.org/10.1109/TIE.2009.2031194.

[43] C. Dai, W. Chen, Y. Zhu, Z. Jiang, and Z. You, “Seeker optimization algorithm for tuning the structure and parameters of neural networks,” Neurocomputing, vol. 74, no. 6, pp. 876–883, 2011, https://doi.org/10.1016/j.neucom.2010.08.025.

[44] C. Dai, Z. Cheng, Q. Li, Z. Jiang, and J. Jia, “Seeker optimization algorithm for global optimization: a case study on optimal modelling of proton exchange membrane fuel cell (PEMFC),” Int. J. Electr. Power Energy Syst. Eng., vol. 33, no. 3, pp. 369–376, 2011, https://doi.org/10.1016/j.ijepes.2010.08.032.

[45] C. Dai, W. Chen, L. Ran, Y. Zhang, and Y. Du, “Human group optimizer with local search,” Lect. Notes Comput. Sci., vol. 6728, pp. 310–320, 2011, https://doi.org/10.1007/978-3-642-21515-5_37.

[46] Y. Zhu, C. Dai, and W. Chen, “Seeker optimization algorithm for several practical applications,” Int. J. Comput. Intell. Syst., vol. 7, no. 2, pp. 353–359, 2014, https://doi.org/10.1080/18756891.2013.864476.

[47] Z. Li and P. L. Chee, “Intelligent optic disc segmentation using improved particle swarm optimization and evolving ensemble models,” Appl. Soft Comput. J., vol. 92, pp. 1–19, 2020, https://doi.org/10.1016/j.asoc.2020.106328.

[48] H. Xia, Z. Wang, and Y. Zhou, “Double-population differential evolution based on logistic model,” J. Inf. Comput. Sci., vol. 11, no. 15, pp. 5549–5557, 2014, https://doi.org/10.12733/jics20104712.

[49] Z. Bao, Y. Zhou, and L. Li, “A hybrid global optimization algorithm based on wind driven optimization and differential evolution,” Math. Probl. Eng., pp. 1–20, 2015, https://doi.org/10.1155/2015/389630.

[50] L. Li, Y. Zhou, and J. Xie, “A free search krill herd algorithm for functions optimization,” Math. Probl. Eng., pp. 1–21, 2014, https://doi.org/10.1155/2014/936374.

[51] X. Li, J. Zhang, and M. Yin, “Animal migration optimization: an optimization algorithm inspired by animal migration behavior,” Neural Comput. Appl., vol. 24, nos. 7–8, pp. 1867–1877, 2014, https://doi.org/10.1007/s00521-013-1433-8.

[52] M. Molga and C. Smutnicki, Test Functions for Optimization Needs, 2005. http://www.robertmarks.org/Classes/ENGR5358/Papers/functions.pdf.

[53] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in ICNN ’95 – International Conference on Neural Networks, Perth, WA, Australia, IEEE, 1995, pp. 1942–1948, https://doi.org/10.1109/ICNN.1995.488968.

[54] H. Yu, H. Fang, P. Yao, and Y. Yuan, “A combined genetic algorithm/simulated annealing algorithm for large scale system energy integration,” Comput. Chem. Eng., vol. 24, no. 8, pp. 2023–2035, 2000, https://doi.org/10.1016/S0098-1354(00)00601-3.

[55] C. Li, Z. Luo, Z. Song, F. Yang, F. Jinghui, and P. X. Liu, “An enhanced brain storm sine cosine algorithm for global optimization problems,” IEEE Access, vol. 7, pp. 28211–28229, 2019, https://doi.org/10.1109/ACCESS.2019.2900486.

[56] M. B. Dowlatshahi, H. Nezamabadi-pour, and M. Mashinchi, “A discrete gravitational search algorithm for solving combinatorial optimization problems,” Inf. Sci., vol. 258, pp. 94–107, 2014, https://doi.org/10.1016/j.ins.2013.09.034.

[57] Y. Zhou, Z. Bao, Q. Luo, and S. Zhang, “A complex-valued encoding wind driven optimization with greedy strategy for 0-1 knapsack problem,” Appl. Intell., vol. 46, pp. 684–702, 2017, https://doi.org/10.1007/s10489-016-0855-2.

[58] S. Dasgupta, C. H. Papadimitriou, and U. V. Vazirani, Algorithms, New York, USA, McGraw-Hill, 2008.

[59] J. Derrac, S. García, D. Molina, and F. Herrera, “A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms,” Swarm Evol. Comput., vol. 1, pp. 3–18, 2011, https://doi.org/10.1016/j.swevo.2011.02.002.

[60] M. Crepinsek, S. Lin, and M. Mernik, “Exploration and exploitation in evolutionary algorithms: a survey,” ACM Comput. Surv., vol. 45, no. 3, pp. 1–33, 2013, https://doi.org/10.1145/2480741.2480752.

[61] H. Kashif, S. M. N. Mohd, C. Shi, and Y. Shi, “On the exploration and exploitation in popular swarm-based metaheuristic algorithms,” Neural Comput. Appl., vol. 31, pp. 7665–7683, 2019, https://doi.org/10.1007/s00521-018-3592-0.

[62] B. Morales-Castañeda, D. Zaldívar, E. Cuevas, F. Fausto, and A. Rodríguez, “A better balance in metaheuristic algorithms: does it exist?” Swarm Evol. Comput., vol. 54, pp. 1–23, 2020, https://doi.org/10.1016/j.swevo.2020.100671.

[63] E. Dolan and J. Moré, “Benchmarking optimization software with performance profiles,” Math. Program, vol. 91, no. 2, pp. 201–213, 2002, https://doi.org/10.1007/s101070100263.

[64] R. A. Krohling and L. S. Coelho, “Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems,” IEEE Trans. Syst. Man Cybern. B Cybern., vol. 36, pp. 1407–1416, 2006, https://doi.org/10.1109/TSMCB.2006.873185.

[65] K. S. Lee and Z. W. Geem, “A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice,” Comput. Methods Appl. Mech. Eng., vol. 194, pp. 3902–3933, 2005, https://doi.org/10.1016/j.cma.2004.09.007.

[66] E. M. Montes and C. A. C. Coello, “An empirical study about the usefulness of evolution strategies to solve constrained optimization problems,” Int. J. Gen. Syst., vol. 37, pp. 443–473, 2008, https://doi.org/10.1080/03081070701303470.

[67] L. Li, Z. Huang, F. Liu, and Q. Wu, “A heuristic particle swarm optimizer for optimization of pin connected structures,” Comput. Struct., vol. 85, pp. 340–349, 2007, https://doi.org/10.1016/j.compstruc.2006.11.020.

[68] A. Kaveh and S. Talatahari, “An improved ant colony optimization for constrained engineering design problems,” Eng. Comput., vol. 27, pp. 155–182, 2010, https://doi.org/10.1108/02644401011008577.

[69] K. Deb, “Optimal design of a welded beam via genetic algorithms,” AIAA J., vol. 29, pp. 2013–2015, 1991, https://doi.org/10.2514/3.10834.

[70] A. H. Gandomi, X. S. Yang, and A. H. Alavi, “Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems,” Eng. Comput., vol. 29, pp. 17–35, 2013, https://doi.org/10.1007/s00366-011-0241-y.

[71] H. Chickermane and H. Gea, “Structural optimization using a new local approximation method,” Int. J. Numer. Methods Eng., vol. 39, pp. 829–846, 1996, https://doi.org/10.1002/(SICI)1097-0207.

[72] M.-Y. Cheng and D. Prayogo, “Symbiotic organisms search: a new metaheuristic optimization algorithm,” Comput. Struct., vol. 139, pp. 98–112, 2014, https://doi.org/10.1016/j.compstruc.2014.03.007.

[73] H. Bayzidi, S. Talatahari, M. Saraee, and C. P. Lamarche, “Social network search for solving engineering optimization problems,” Comput. Intell. Neurosci., vol. 2021, pp. 1–32, 2021, https://doi.org/10.1155/2021/8548639.

[74] P. Kim and J. Lee, “An integrated method of particle swarm optimization and differential evolution,” J. Mech. Sci. Technol., vol. 23, no. 2, pp. 426–434, 2009, https://doi.org/10.1007/s12206-008-0917-4.

[75] I. Rechenberg, Evolutionsstrategien, Medizinische Informatik und Statistik, Berlin, Heidelberg, Germany, Springer, 1978, https://doi.org/10.1007/978-3-642-81283-5_8.

[76] A. H. Gandomi and D. A. Roke, “Engineering optimization using interior search algorithm,” in Proceedings of the 2014 IEEE Symposium Series on Computational Intelligence – SIS, Orlando, FL, USA, IEEE, 2015, pp. 20–26, https://doi.org/10.1109/SIS.2014.7011771.

[77] J. Wu, Y. G. Wang, K. Burrage, Y. C. Tian, B. Lawson, and Z. Ding, “An improved firefly algorithm for global continuous optimization problems,” Expert Syst. Appl., vol. 149, Art. no. 113340, 2020, https://doi.org/10.1016/j.eswa.2020.113340.

[78] S. Talatahari, M. Azizi, and A. H. Gandomi, “Material generation algorithm: a novel metaheuristic algorithm for optimization of engineering problems,” Processes, vol. 9, no. 5, pp. 1–35, 2021, https://doi.org/10.3390/pr9050859.

[79] A. H. Gandomi, X.-S. Yang, and A. H. Alavi, “Mixed variable structural optimization using firefly algorithm,” Comput. Struct., vol. 89, nos. 23–24, pp. 2325–2336, 2011, https://doi.org/10.1016/j.compstruc.2011.08.002.

Published Online: 2022-07-07
Published in Print: 2022-07-26

© 2022 Haipeng Liu et al., published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
