Abstract
Artificial bee colony (ABC) is a population-based metaheuristic algorithm proposed in 2005. Owing to its few control parameters and flexibility, the ABC algorithm has been applied to engineering problems, algebra problems, and so on. However, premature convergence and slow convergence speed are its inherent shortcomings. To address these shortcomings, a novel global ABC algorithm with self-perturbing (IGABC) is proposed in this paper. Building on the original search equation, IGABC adopts a novel self-adaptive search equation that introduces the guidance of the global optimal solution. This search method improves the convergence precision and the global search capacity. Just as an excellent leader can lead a whole team to greater success, IGABC proposes a novel global self-perturbing method to obtain a better “leader.” To avoid falling into local optima, this paper also designs a new mutation strategy that simulates the natural phenomenon of sick fish being eaten.
1 Introduction
Following the particle swarm optimization (PSO) algorithm [11], the genetic algorithm [22], and the ant colony optimization (ACO) algorithm [4], the artificial bee colony (ABC) algorithm [9] was proposed by Karaboga as a new swarm intelligence algorithm. Owing to its simple principle, few control parameters, flexibility, and strong robustness [10], the ABC algorithm has been applied to engineering problems, algebra problems, and so on. In the ABC algorithm, onlooker bees select high-quality, nectar-rich flowers to attach to and search for new nectar in the vicinity of the attached flower. Thus, the ABC algorithm has a very strong local search capacity.
However, like any algorithm, ABC has inherent shortcomings: it is prone to premature convergence, and its convergence speed is slow. Because the search equation lacks the guidance of the global optimal solution, the original ABC algorithm has weak exploitation capacity and easily slips into local optima, and its convergence speed suffers for the same reason. Fundamentally, the slow convergence stems from the imbalance between its strong exploration capacity and its weak exploitation capacity.
To address these problems, researchers have proposed many improvements to the original ABC algorithm. Zhu and Kwong [29], inspired by the search formula of an improved PSO algorithm, proposed the Gbest-guided ABC algorithm. With the guidance of the global optimal solution, the convergence speed is observably accelerated; however, the tendency to fall into local optima is not effectively mitigated. Also inspired by the PSO algorithm, Jadhav and Roy [8] proposed an improved global ABC algorithm. Inspired by the PSO and differential evolution (DE) algorithms, Wang and Kong [23] proposed an improved ABC algorithm (IABC) with a novel search formula; however, the convergence speed and accuracy of IABC are not markedly improved. Combining the DE algorithm with the ABC algorithm, Gao et al. [5] proposed a new search equation, different from the one in IABC, and adopted an opposition-based learning strategy to initialize the population. To overcome the shortcomings of the ABC search capability, Gao et al. [6] improved the global search capacity by generating a candidate solution; the resulting algorithm was named CABC. Combining quantum computing with the ABC algorithm, Bouziz et al. [2] proposed a quantum ABC algorithm, in which quantum computing improves the diversity and computing power. Guo et al. [7], with reference to the PSO algorithm, incorporated the global optimum into the search equation of the ABC algorithm and proposed the global ABC algorithm (GABCS). GABCS improves the exploration ability but still easily falls into local optima. Based on a piecewise search strategy, in which the search space is segmented into several segments, Luo et al. [14] proposed an improved ABC algorithm (SABC).
The SABC algorithm improves the search efficiency and convergence speed, but its local exploitation capacity is still weak. Inspired by the DE algorithm, Zhang and Sanyang [25] proposed an improved ABC algorithm (NABC), which improves the convergence rate. By improving the search equation of the original ABC algorithm, Wang et al. [24] proposed an ABC/current-to-best algorithm; its convergence speed is accelerated, but it still easily falls into local optima. Li and Yang [13] added a memory mechanism to the original algorithm and proposed the ABCM algorithm. Zhao et al. [28] proposed a new ABC algorithm with a self-adaptive global best-guided quick searching strategy (ABCSGQ). Roy and Jadhav [16] used the ABC algorithm to solve a power system problem. Combining the respective advantages of the PSO and ACO algorithms, Zhang et al. [27] proposed a new hybrid algorithm. Bolaji et al. [1] proposed a hybrid ABC algorithm for uncapacitated examination timetabling. Inspired by the DE algorithm, Sun et al. [21] proposed a hybrid ABC algorithm called HABC, which defines three search strategies and selects one of them according to probability. When the test function is unimodal, the improvement is obvious; however, when the test function is multimodal, the improvement is poor. Inspired by the PSO algorithm, Pan et al. [15] proposed an improved ABC algorithm with a novel search strategy, also called IABC, whose convergence speed and precision are slightly improved. Sharma and Pant have contributed substantially to research on ABC. In Ref. [20], they designed a hybrid algorithm, Shuffled-ABC, in which the population is divided into two groups according to fitness.
The ABC search equation is applied to the first group, while a shuffled frog leaping algorithm (SFLA) search equation is applied to the second group; Shuffled-ABC thus gathers the advantages of the ABC algorithm and the SFLA. An improved ABC algorithm, IABC [17], was also designed by Sharma and Pant; it initializes the population using improved opposition-based learning and searches for better food using a greedy strategy. Kumar et al. [12] effectively dealt with image segmentation using the ABC algorithm. In Ref. [19], Sharma and Pant proposed a novel ABC algorithm, CF-ABC, in which the search equation is embedded with a Lévy probability distribution and an abandon factor. CF-ABC balances the exploration and exploitation abilities to obtain quality food sources and convergence speed, and has been applied to the software project scheduling problem. To solve three classical structural optimization problems, Sharma and Pant [18] designed a novel ABC algorithm, DABC, which generates new food sources by searching dichotomously in both directions.
Building on the above research results, this paper proposes a novel global ABC algorithm with self-perturbing (IGABC). Inspired by Ref. [29], this paper proposes a novel global search equation for employed bees that incorporates the global optimum. In addition, inspired by the adaptive search method of the PSO algorithm, an adaptive factor is added to the search formula of the employed bees, so that the exploitation and exploration capacities of the IGABC algorithm are balanced across different periods of the iteration. To further strengthen the leadership of the global optimal solution, this paper also proposes a new update formula that slightly perturbs the global optimal solution.
In later experiments, it was discovered that the IGABC algorithm still easily falls into local optima, especially when the test function is multimodal or difficult to converge. In general, scholars solve this problem by increasing variation. On this basis, the paper proposes a novel mutation strategy that simulates the natural phenomenon of sick fish being eaten. In the IGABC algorithm, individuals around the global worst individual tend to have bad fitness; we randomly select one of these bad individuals and reinitialize it. This method increases the population diversity and helps avoid local optima. In the following experiments, it was found to be effective for functions whose convergence is difficult, such as the Rosenbrock or Schaffer functions. For unimodal functions, however, it has no obvious effect on escaping local optima and only increases the computational cost, because a unimodal function does not require a large amount of variation. Therefore, when the problem is a unimodal function, such as the Sphere function, this method is not included in the realization of the algorithm.
The simulation experiments show that the IGABC algorithm greatly improves the convergence precision and speed, and its exploitation ability is enhanced. Because the optimum of the Rosenbrock function is difficult to discover, we designed an experiment comparing the IGABC algorithm with six other improved ABC algorithms on the Rosenbrock function. The convergence curves show that the convergence speed and accuracy of the IGABC algorithm are improved significantly, with the convergence accuracy improved by 16 orders of magnitude. The exploitation ability of the IGABC algorithm is outstanding, and it is good at solving complex function optimization problems.
The article is organized as follows: Section 2 presents the standard ABC algorithm. Section 3 describes the IGABC algorithm proposed in this paper. Section 4 introduces steps and analysis of IGABC. Section 5 introduces the experimental contents and results. The last section is devoted to the conclusion.
2 Standard ABC Algorithm
In the ABC algorithm, bees are divided into employed bees, onlooker bees, and scout bees, according to the division of labor. The bee population size is SN. The initialization of the algorithm is conducted according to formula (1):
where j=1, …, D and i=1, …, SN; SN is the size of the bee population, D is the dimension of the individual vector, lbj is the lower bound of the search space, and ubj is the upper bound of the search space.
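The body of formula (1) is not reproduced in the text; the standard ABC initialization it refers to is x_ij = lb_j + rand(0, 1)·(ub_j − lb_j). A minimal sketch (the function name is ours):

```python
import random

def initialize_population(sn, d, lb, ub, rng=random.Random(0)):
    """Draw SN food sources uniformly inside [lb_j, ub_j] per dimension,
    i.e. x_ij = lb_j + rand(0, 1) * (ub_j - lb_j)."""
    return [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(d)]
            for _ in range(sn)]
```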
2.1 Colony Evolution Stage
Employed bees search for better nectar in the vicinity of the attached flower; if a newly found nectar is better than the current one, the nectar is updated. The search is conducted according to formula (2):
where j=1, 2, …, D; k∈{1, 2, …, SN} is randomly generated with k≠i; and ϕij is a random number in the range [−1, 1].
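Formula (2) is the classical ABC neighbourhood search, v_ij = x_ij + ϕ_ij(x_ij − x_kj). A sketch under the common convention that a single randomly chosen dimension j is perturbed (ABC variants differ on whether one or all dimensions are updated; the function name is ours):

```python
import random

def employed_bee_search(colony, i, rng=random.Random(0)):
    """Produce a candidate v from x_i by perturbing one random dimension j
    relative to a random neighbour x_k (k != i):
    v_ij = x_ij + phi_ij * (x_ij - x_kj), with phi_ij in [-1, 1]."""
    d = len(colony[i])
    j = rng.randrange(d)
    k = rng.choice([idx for idx in range(len(colony)) if idx != i])
    phi = rng.uniform(-1.0, 1.0)
    v = list(colony[i])
    v[j] = colony[i][j] + phi * (colony[i][j] - colony[k][j])
    return v
```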
Onlooker bees choose nectar in accordance with the roulette strategy and then transform into employed bees. The new employed bees search for new nectar in the neighborhood of the attached flower. The roulette strategy ensures that excellent nectar is more likely to be selected. The roulette selection probability is shown in formula (3):
where fit(xi) is the fitness value of nectar xi, and Pi is the selection probability of nectar xi.
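Formula (3) is the usual fitness-proportional rule, P_i = fit(x_i) / Σ_n fit(x_n); a sketch:

```python
def roulette_probabilities(fitness):
    """P_i = fit(x_i) / sum_n fit(x_n): nectar with higher fitness is
    proportionally more likely to attract an onlooker bee."""
    total = sum(fitness)
    return [f / total for f in fitness]
```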
The employed bees find a new nectar, compare its fitness with that of the old one, and retain the better nectar. The computational formula of the fitness is shown in formula (4):
where f(xi) is the objective function value corresponding to nectar xi.
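The body of formula (4) is missing from the text; the standard ABC fitness mapping for minimization, which we assume here, is fit(x_i) = 1/(1 + f(x_i)) when f(x_i) ≥ 0 and 1 + |f(x_i)| otherwise:

```python
def fitness(f_val):
    """Standard ABC fitness mapping of an objective value (minimization):
    fit = 1/(1+f) if f >= 0, else 1 + |f|."""
    return 1.0 / (1.0 + f_val) if f_val >= 0 else 1.0 + abs(f_val)
```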
When a nectar has not been updated and its failTime exceeds the threshold limit, the employed bee transforms into a scout bee and searches the whole search range. The scout bees generate new solutions according to formula (1).
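The scout stage just described can be sketched as follows (variable names are ours; the limit plays the role of the failTime threshold):

```python
import random

def scout_phase(colony, fail_time, limit, lb, ub, rng=random.Random(0)):
    """Scout-bee stage: any nectar whose failTime exceeds the limit is
    abandoned and re-drawn over the whole search range via formula (1)."""
    for i, ft in enumerate(fail_time):
        if ft > limit:
            d = len(colony[i])
            colony[i] = [lb[j] + rng.random() * (ub[j] - lb[j])
                         for j in range(d)]
            fail_time[i] = 0
    return colony
```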
3 A Novel Global ABC Algorithm with Self-Perturbing
For a swarm intelligence optimization algorithm, the balance between exploration ability and exploitation ability determines the optimization performance. Exploitation capacity is the ability to search for and find a better solution within a particular area. In contrast, exploration capacity is the ability to search for and find a better solution in different regions of the search space. The original ABC algorithm easily falls into local optima and premature convergence because it has weak exploitation capacity despite good exploration capacity. To solve this problem, this paper proposes a global adaptive and self-perturbing search strategy. Adaptive parameters are added to the strategy so that the exploitation and exploration abilities of the algorithm are balanced in different periods. At the same time, the global optimal solution is added to enhance the exploitation ability of the algorithm.
3.1 The Global Adaptive Search Equation with Self-Perturbing
Researchers have continually sought a reasonable method to balance the exploitation and exploration abilities of the algorithm, and many scholars have proposed different improvements. In Ref. [29], inspired by the search equation of the PSO algorithm, the authors proposed the Gbest-guided search equation shown in formula (5):
where xgj is the current global optimal solution; xij is the current solution; xkj is a randomly chosen current solution different from xij; and rij and ϕij are random numbers between −1 and 1. This improved search equation improves the exploitation ability of the algorithm to a certain extent, but does not consider that the algorithm needs a fast convergence speed in the early evolutionary stage. With only this improvement, the algorithm still easily falls into local optima. Thus, combining the linear weight concept mentioned in Ref. [3], this paper proposes a new adaptive search equation, as shown in formula (6):
where rij is a random number between −1 and 1; iter is the current number of iterations; maxiter is the maximum number of iterations; and c is a constant between 0 and 1 that depends on the problem to be solved (if the test function is the Rosenbrock function, c=0.1). At the early stage of evolution, ϕij ≈ rand∗c, so the proportion of the global optimal solution is enhanced. As the algorithm evolves, the proportion of the random term is continually strengthened, which helps avoid falling into local optima.
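The body of formula (6) is not reproduced in the text. As a hedged illustration only, the sketch below assumes one plausible form consistent with the description above: the difference term is scaled by ϕij = rij·(c + (1 − c)·iter/maxiter), so that early on ϕij ≈ rand·c and the random term strengthens with iterations, while a gbest term pulls toward the global optimum. The exact weighting and all names are assumptions, not the authors' equation:

```python
import random

def adaptive_candidate(x_i, x_k, x_g, it, max_it, c=0.1, rng=random.Random(0)):
    """ASSUMED reading of the adaptive search equation (6):
    v_ij = x_ij + phi_ij*(x_ij - x_kj) + rand*(x_gj - x_ij),
    with phi_ij = r_ij * (c + (1 - c) * it / max_it), r_ij in [-1, 1],
    so the gbest term dominates early and the random term grows later."""
    v = list(x_i)
    for j in range(len(x_i)):
        r = rng.uniform(-1.0, 1.0)
        phi = r * (c + (1.0 - c) * it / max_it)
        v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + rng.random() * (x_g[j] - x_i[j])
    return v
```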
3.2 New Mutation Strategy
According to the search equation given in the preceding paragraph, all individuals in the population move toward the global optimal solution. However, if the algorithm is improved only by this search equation, it depends unduly on the global optimal solution; it is then likely to fall into a local optimum without the capacity to jump out. If the quality of the global optimal solution is poor, the entire population moves in a wrong direction and may fall into a local optimum. Conversely, a high-quality global optimal solution can accelerate the convergence speed and improve the search efficiency, just as a good leader can lead a whole team to greater success. To obtain a better “leader,” this paper proposes a global traversal method: every bee in the colony disturbs the global optimal solution, and the degree of the disturbance is random. The formula that updates the global optimal solution is shown in formula (8):
where Xg is the global optimal solution and c is a constant greater than zero whose value depends on the problem to be solved. Through this method, the diversity of the population is increased and the global optimal solution moves irregularly in the population. This helps find a better “leader,” accelerating the convergence speed and improving the convergence accuracy.
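The body of formula (8) is also absent from the text, so the sketch below is only a hedged reading of the description: each bee applies a random-magnitude disturbance to the global best, and a disturbed point replaces it only if it improves the objective (greedy selection, as in the pseudocode). The disturbance direction (Xg − x_i) and all names are assumptions:

```python
import random

def perturb_gbest(x_g, colony, f, c=0.1, rng=random.Random(0)):
    """ASSUMED self-perturbing update (8): for every bee x_i, try
    x_g' = x_g + c * rand(-1, 1) * (x_g - x_i) componentwise, and keep
    the perturbed point only if it lowers the objective f."""
    best, best_val = list(x_g), f(x_g)
    for x_i in colony:
        trial = [g + c * rng.uniform(-1.0, 1.0) * (g - xi)
                 for g, xi in zip(best, x_i)]
        val = f(trial)
        if val < best_val:
            best, best_val = trial, val
    return best
```

The greedy acceptance guarantees the returned point is never worse than the incoming global best.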
In later experiments, it was discovered that the IGABC algorithm still easily falls into local optima when the test function is multimodal or difficult to converge. In general, scholars adopt more variation to solve this problem. On this basis, the paper proposes a novel mutation strategy that simulates the natural phenomenon of sick fish being eaten. In the IGABC algorithm, individuals around the global worst individual tend to have bad fitness; we randomly select one of these bad individuals and reinitialize it. Let gw be the global worst individual. The vicinity of gw is defined by formula (9):
where Range is the radius of the neighborhood of the global worst individual gw; the Euclidean distance between the global optimal solution and the global worst solution is used as this radius.
A sick fish moves slowly and is easily eaten by a big fish. By analogy with this natural phenomenon, we randomly select an individual from the neighborhood of the global worst individual gw, determined by formula (9), and reinitialize it randomly. The selected individual must satisfy formula (10):
where a is the range coefficient, which determines the size of the neighborhood. To ensure that bad individuals can be selected without disturbing better individuals, the value of a is set between 0.1 and 0.5. The neighborhood of the worst individual gw is shown in Figure 1.
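Formulas (9) and (10) can be read as a distance test. The sketch below assumes membership means the Euclidean distance to gw is at most a·Range, with Range = ‖xg − gw‖ as stated above; the exact form of formula (10) is not reproduced in the text, and the function name is ours:

```python
import math
import random

def mutate_worst_neighbourhood(colony, x_g, gw, lb, ub, a=0.3,
                               rng=random.Random(0)):
    """Sick-fish mutation sketch: Range is the Euclidean distance between
    the global best x_g and the global worst gw (formula 9); individuals
    within a*Range of gw (assumed formula 10, a in [0.1, 0.5]) form the
    pool, and one of them is reinitialized uniformly via formula (1)."""
    dist = lambda u, v: math.sqrt(sum((ui - vi) ** 2 for ui, vi in zip(u, v)))
    radius = a * dist(x_g, gw)
    pool = [i for i, x in enumerate(colony) if dist(x, gw) <= radius]
    if pool:
        i = rng.choice(pool)
        colony[i] = [lb[j] + rng.random() * (ub[j] - lb[j])
                     for j in range(len(gw))]
    return colony
```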

The Neighborhood of gw.
As shown in Figure 1, the green ball represents the worst individual gw. The red balls represent individuals in the neighborhood of gw, from which one is randomly selected for random initialization. The yellow ball stands for the global optimal solution xg. The blue balls represent individuals that are not in the neighborhood of gw.
This method increases the population diversity and helps avoid local optima. In the following experiments, it was found to be effective for multimodal functions whose convergence is difficult, such as the Rosenbrock or Griewank functions. For unimodal functions, it has no obvious effect on convergence and only increases the computational cost, because a unimodal function does not require a large amount of variation. Therefore, when the problem is a unimodal function, such as the Sphere function, this method is not included in the realization of the algorithm.
4 Analysis and Steps of IGABC
On the basis of the standard ABC algorithm, the improved algorithm IGABC adopts the global adaptive search equation with self-perturbing. The paper further modifies the search equation of Ref. [29]: the new search equation introduces an adaptive coefficient so that the exploration and exploitation abilities are balanced across different stages of the generations. In the early generations, the proportion of the global optimal term dominates, giving a fast convergence speed; with the increase of iterations, the random term gradually strengthens, preserving the exploration ability. According to the search equation proposed in this paper, it is not difficult to see that the algorithm depends strongly on the global optimal solution. A good global optimal solution can ensure the efficiency of the algorithm, just as a good leader can drive a team to greater success. To obtain a better “leader,” this paper proposes a global traversal method in which all bees disturb the global optimal solution by a random degree. Through this method, the diversity of the population is increased and the global optimal solution moves irregularly in the population; thus, the problem of the algorithm easily falling into local optima is alleviated. The pseudo code description of the IGABC algorithm is presented as follows:
The pseudo code description of the IGABC algorithm
Set the parameters of the algorithm, initialize the population, calculate the fitness value of each nectar, and record the global optimum solution.
while (termination condition is not satisfied)
  for (each employed bee in the population)
    update the nectar location according to formula (6);
    use the greedy mechanism to choose the better nectar;
    if (nectar is updated), then failTime=0;
    else failTime=failTime+1;
    end if
  end for
  for (each onlooker bee in the bee population)
    if rand(0, 1)<Pi
      update the nectar according to formula (6);
      use the greedy mechanism to choose the better nectar;
      if (nectar is updated), then failTime=0;
      else failTime=failTime+1;
      end if
    end if
  end for
  for (each employed bee in the bee population)
    update the global optimal solution according to formula (8);
    use the greedy mechanism to choose the better solution;
  end for
  calculate the neighborhood of gw according to formula (9);
  for (each employed bee in the bee population)
    check whether the individual is in the neighborhood of gw according to formula (10);
  end for
  select an individual to be reinitialized;
end while
output the optimal solution and optimal value.
5 Experiment Simulation
To verify the performance of the IGABC algorithm proposed in this paper, 18 benchmark functions are selected as test objects. The standard ABC algorithm and six other improved ABC algorithms are chosen for comparison. The six improved ABC algorithms were published in international journals within roughly the last 5 years: the GABC1 algorithm [29], CABC algorithm [6], NABC algorithm [25], WGABC algorithm [26], ABCSGQ algorithm [28], and GABC2 algorithm [16]. To fully test performance, the selected benchmark functions include unimodal, multimodal, shifted unimodal, and shifted multimodal functions. The mathematical expressions, search ranges, and theoretical optimal values of the 18 benchmark functions are shown in Table 1. The experiments were implemented in MATLAB and divided into two groups. In the first group, the population size is 20, the maximum cycle number is 2000, the dimension is 30, and each algorithm is run 30 times; the results are shown in Table 2. In the second group, the population size is 20, the maximum cycle number is 2000, the dimension is 50, and each algorithm is run 30 times; the results are shown in Table 3. Mean and Std represent the average and standard deviation of the experimental results, respectively. Because the dimension of the GoldsteinPrice and Schaffer benchmark functions is 2, their results are shown in Table 4. The convergence curves of some benchmark functions are shown in Figures 2 and 3.
Numerical Benchmark Functions.
Function | Search range | Global minimum |
---|---|---|
Sphere | (−100, 100) | |
SumSquare | (−30, 30) | |
Rosenbrock | (−30, 30) | |
QuadricNoise | (−30, 30) | |
Ackley | (−30, 30) | |
Step | (−100, 100) | |
HyperEllipsoid | (−30, 30) | |
Rastrigin | (−30, 30) | |
Zakharov | (−30, 30) | |
Alpine | (−10, 10) | |
Schewel | (−30, 30) | |
SumDifferent | (−30, 30) | |
Schaffer | (−30, 30) | |
GoldsteinPrice | (−30, 30) | f14(0, −1)=3 |
Griewank | (−600, 600) | |
Shifted Sphere | (−100, 100) | |
Shifted Rosenbrock | (−30, 30) | |
Shifted Griewank | (−600, 600) | |
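Since the formula column of Table 1 did not survive extraction, the following are the usual textbook definitions of three of the listed benchmarks; they may differ in detail from the authors' exact variants:

```python
import math

def sphere(x):
    """Sphere: f(x) = sum(x_i^2); unimodal, minimum 0 at the origin."""
    return sum(xi ** 2 for xi in x)

def rosenbrock(x):
    """Rosenbrock: f(x) = sum(100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2);
    narrow curved valley, minimum 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def griewank(x):
    """Griewank: f(x) = sum(x_i^2)/4000 - prod(cos(x_i/sqrt(i))) + 1;
    multimodal, minimum 0 at the origin."""
    s = sum(xi ** 2 for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0
```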
Comparison of the Test Results of Eight Algorithms (D=30).
Mean | Std | Mean | Std | Mean | Std | Mean | Std | |
---|---|---|---|---|---|---|---|---|
f1 | f2 | f3 | f4 | |||||
ABC | 1.05e−15 | 3.54e−16 | 1.02e−15 | 4.67e−16 | 2.48e+00 | 4.37e+00 | 1.97e−02 | 4.49e−02 |
GABC1 | 6.46e−16 | 1.26e−16 | 6.60e−16 | 1.36e−16 | 9.70e+00 | 1.38e+01 | 3.83e−02 | 1.73e−02 |
CABC | 4.35e−03 | 1.80e−03 | 4.24e−03 | 8.20e−03 | 5.71e+02 | 1.02e+03 | 2.36e+01 | 8.74e+01 |
NABC | 3.55e−07 | 1.77e−06 | 2.63e−04 | 1.16e−03 | 9.56e+01 | 1.71e+02 | 8.41e−02 | 4.97e−02 |
WGABC | 5.15e−16 | 9.99e−17 | 5.19e−16 | 9.88e−17 | 3.20e+01 | 1.88e+01 | 4.14e−02 | 2.74e−02 |
ABCSGQ | 3.84e−11 | 2.07e−10 | 4.75e−08 | 2.16e−07 | 1.34e+01 | 2.35e+01 | 3.17e−02 | 1.29e−02 |
GABC2 | 5.87e−16 | 1.26e−16 | 6.00e−16 | 1.01e−16 | 9.44e+00 | 1.47e+01 | 3.19e−02 | 1.54e−02 |
IGABC | 2.74e−16 | 6.51e−17 | 7.83e−16 | 1.01e−15 | 7.78e−16 | 9.16e−16 | 1.15e−03 | 1.25e−03 |
f5 | f6 | f7 | f8 | |||||
ABC | 1.97e−12 | 2.80e−12 | 3.33e−02 | 1.83e−01 | 9.99e−16 | 2.66e−16 | 3.34e−02 | 1.82e−01 |
GABC1 | 4.57e−14 | 4.90e−15 | 0.00e+00 | 0.00e+00 | 7.15e−16 | 1.28e−16 | 9.26e−05 | 2.47e−04 |
CABC | 2.22e+00 | 1.07e+00 | 0.00e+00 | 0.00e+00 | 3.60e−02 | 1.36e−01 | 9.95e+00 | 4.25e+00 |
NABC | 9.71e−01 | 7.12e−01 | 0.00e+00 | 0.00e+00 | 4.02e−03 | 1.94e−02 | 4.80e+00 | 3.67e+00 |
WGABC | 4.33e−14 | 5.90e−15 | 0.00e+00 | 0.00e+00 | 5.40e−16 | 9.74e−17 | 4.36e−14 | 3.56e−14 |
ABCSGQ | 1.61e−04 | 5.93e−04 | 0.00e+00 | 0.00e+00 | 4.93e−07 | 1.75e−06 | 1.62e−09 | 8.07e−09 |
GABC2 | 4.55e−14 | 7.50e−15 | 0.00e+00 | 0.00e+00 | 6.13e−16 | 9.44e−17 | 1.68e−05 | 7.93e−05 |
IGABC | 3.11e−14 | 3.75e−15 | 0.00e+00 | 0.00e+00 | 2.91e−16 | 7.09e−17 | 0.00e+00 | 0.00e+00 |
f9 | f10 | f11 | f12 | |||||
ABC | 9.44e−16 | 2.31e−16 | 5.82e−05 | 8.17e−05 | 1.96e−12 | 2.28e−12 | 1.93e−05 | 2.49e−05 |
GABC1 | 6.50e−16 | 1.15e−16 | 1.71e−05 | 4.58e−05 | 1.64e−15 | 2.38e−16 | 3.80e−06 | 7.35e−06 |
CABC | 3.00e−03 | 1.45e−02 | 8.73e−02 | 1.34e−02 | 5.54e−02 | 7.28e−02 | 2.86e−15 | 1.35e−14 |
NABC | 5.54e−07 | 1.77e−06 | 2.09e−02 | 5.31e−02 | 6.11e−03 | 1.03e−02 | 8.80e−12 | 2.72e−11 |
WGABC | 5.51e−16 | 1.24e−16 | 4.81e−05 | 8.57e−05 | 1.46e−15 | 2.11e−16 | 1.05e−16 | 8.49e−17 |
ABCSGQ | 8.49e−08 | 3.15e−07 | 4.97e−05 | 1.00e−04 | 1.79e−03 | 2.19e−03 | 1.13e−05 | 3.43e−05 |
GABC2 | 6.24e−16 | 1.50e−16 | 1.87e−05 | 6.61e−05 | 1.63e−15 | 2.14e−16 | 8.84e−14 | 3.37e−13 |
IGABC | 2.30e−16 | 4.36e−17 | 3.98e−07 | 1.40e−06 | 7.38e−16 | 1.12e−16 | 2.39e−17 | 1.14e−17 |
f15 | f16 | f17 | f18 | |||||
ABC | 2.56e−05 | 6.95e−05 | 3.90e+02 | 1.65e−13 | 3.93e+02 | 3.02e+00 | 3.90e+02 | 1.12e−02 |
GABC1 | 3.60e−08 | 1.87e−07 | 3.90e+02 | 1.17e−13 | 3.99e+02 | 1.41e+01 | 3.90e+02 | 5.88e−03 |
CABC | 4.43e−02 | 4.04e−02 | 3.90e+02 | 6.72e−04 | 8.75e+02 | 8.24e+02 | 3.90e+02 | 1.72e−01 |
NABC | 1.01e−02 | 2.65e−02 | 3.90e+02 | 3.67e−04 | 5.31e+02 | 4.14e+02 | 3.90e+02 | 1.77e−01 |
WGABC | 1.13e−09 | 5.97e−09 | 3.90e+02 | 8.11e−14 | 4.25e+02 | 2.07e+01 | 3.90e+02 | 7.67e−02 |
ABCSGQ | 2.46e−05 | 1.17e−04 | 3.90e+02 | 8.03e−10 | 3.95e+02 | 1.33e+01 | 3.90e+02 | 1.32e−02 |
GABC2 | 1.48e−10 | 4.35e−10 | 3.90e+02 | 1.19e−13 | 3.93e+02 | 4.35e+00 | 3.90e+02 | 5.54e−03 |
IGABC | 0.00e+00 | 0.00e+00 | 3.90e+02 | 2.11e−14 | 3.90e+02 | 4.74e−10 | 3.90e+02 | 3.66e−14 |
Bold values denote the optimal values in the experimental data.
Comparison of the Test Results of Eight Algorithms (D=50).
Mean | Std | Mean | Std | Mean | Std | Mean | Std | ||||
---|---|---|---|---|---|---|---|---|---|---|---|
f1 | f2 | f3 | f4 | ||||||||
ABC | 5.97e−12 | 1.57e−11 | 6.88e−09 | 2.24e−08 | 7.62e+00 | 7.24e+00 | 5.76e−01 | 1.13e−01 | | |
GABC1 | 1.56e−15 | 3.17e−16 | 1.39e−15 | 2.48e−16 | 1.10e+01 | 1.43e+01 | 1.59e−01 | 4.56e−01 | |||
CABC | 2.91e−05 | 7.34e−05 | 2.13e+00 | 3.29e+00 | 5.49e+02 | 6.20e+02 | 3.08e+00 | 7.49e+00 | |||
NABC | 3.96e−02 | 1.16e−01 | 1.09e+00 | 5.36e+00 | 1.11e+02 | 1.73e+02 | 1.88e−01 | 9.91e−02 | |||
WGABC | 1.26e−15 | 1.75e−16 | 1.34e−15 | 2.26e−16 | 6.01e+01 | 2.50e+01 | 3.99e−03 | 1.83e−03 | |||
ABCSGQ | 3.05e−06 | 8.62e−06 | 1.76e−06 | 3.78e−06 | 2.67e+01 | 3.36e+01 | 1.08e−01 | 3.07e−02 | |||
GABC2 | 1.48e−15 | 2.58e−16 | 1.55e−15 | 4.07e−16 | 2.22e+01 | 2.93e+01 | 1.61e−01 | 3.17e−02 | |||
IGABC | 6.43e−16 | 1.09e−16 | 6.61e−16 | 1.40e−16 | 5.30e−11 | 2.69e−10 | 9.30e−03 | 1.51e−02 | |||
f5 | f6 | f7 | f8 | ||||||||
ABC | 5.44e−06 | 4.84e−06 | 1.10e+00 | 7.59e−01 | 2.64e−08 | 4.07e−08 | 1.63e+00 | 1.43e+00 | |||
GABC1 | 3.12e−10 | 1.96e−10 | 0.00e+00 | 0.00e+00 | 1.43e−15 | 2.53e−16 | 5.51e−01 | 9.10e−01 | |||
CABC | 2.68e+00 | 1.27e+00 | 0.00e+00 | 0.00e+00 | 5.40e+00 | 1.08e+01 | 3.48e+01 | 1.18e+01 | |||
NABC | 1.66e+00 | 4.46e−01 | 0.00e+00 | 0.00e+00 | 1.25e−03 | 3.14e−02 | 2.28e+01 | 1.14e+01 | |||
WGABC | 1.10e−12 | 6.53e−13 | 0.00e+00 | 0.00e+00 | 1.32e−15 | 3.83e−16 | 9.49e−13 | 2.16e−12 | |||
ABCSGQ | 2.90e−02 | 9.86e−02 | 0.00e+00 | 0.00e+00 | 1.29e−07 | 2.94e−07 | 6.53e+00 | 4.11e+00 | |||
GABC2 | 3.27e−10 | 2.03e−10 | 0.00e+00 | 0.00e+00 | 1.38e−15 | 1.97e−16 | 6.14e−01 | 7.99e−01 | |||
IGABC | 6.73e−14 | 8.47e−15 | 0.00e+00 | 0.00e+00 | 8.26e−16 | 8.39e−16 | 0.00e+00 | 0.00e+00 | |||
f9 | f10 | f11 | f12 | ||||||||
ABC | 1.32e−08 | 1.64e−08 | 5.71e−02 | 3.86e−02 | 1.19e−12 | 6.09e−13 | 7.69e+00 | 2.04e+01 | |||
GABC1 | 1.43e−15 | 2.91e−16 | 1.05e−02 | 1.96e−02 | 1.63e−15 | 1.73e−16 | 2.69e−01 | 6.07e−01 | |||
CABC | 2.52e+00 | 1.08e+01 | 1.26e+00 | 9.88e−01 | 1.56e−01 | 3.44e−01 | 3.47e−02 | 1.17e−01 | |||
NABC | 8.52e−07 | 2.49e−06 | 2.42e−01 | 4.84e−01 | 1.43e−02 | 2.76e−02 | 4.98e−02 | 9.05e−02 | |||
WGABC | 1.43e−15 | 3.91e−16 | 5.18e−04 | 2.20e−04 | 1.49e−15 | 2.10e−16 | 9.90e−11 | 3.51e−10 | |||
ABCSGQ | 1.21e−09 | 2.73e−09 | 5.10e−02 | 1.77e−02 | 1.09e−03 | 1.42e−03 | 5.38e−02 | 8.86e−02 | |||
GABC2 | 1.44e−15 | 2.55e−16 | 2.45e−02 | 4.86e−02 | 1.63e−15 | 2.26e−16 | 5.51e−05 | 1.49e−04 | |||
IGABC | 2.72e−16 | 6.99e−17 | 3.55e−05 | 6.60e−05 | 7.47e−16 | 1.19e−16 | 2.67e−17 | 1.30e−17 | |||
f15 | f16 | f17 | f18 | ||||||||
ABC | 3.4e−08 | 1.73e−07 | 3.90e+02 | 9.14e−13 | 4.03e+02 | 1.82e+01 | 3.90e+02 | 7.71e−03 | |||
GABC1 | 1.36e−11 | 4.57e−11 | 3.90e+02 | 2.08e−13 | 4.24e+02 | 3.91e+01 | 3.90e+02 | 6.24e−03 | |||
CABC | 6.62e−02 | 8.51e−02 | 3.91e+02 | 2.62e+00 | 9.91e+04 | 5.37e+05 | 3.91e+02 | 4.63e−01 | |||
NABC | 1.19e−02 | 5.24e−02 | 3.90e+02 | 1.48e−03 | 4.50e+02 | 9.89e+01 | 3.90e+02 | 2.80e−01 | |||
WGABC | 1.13e−09 | 5.97e−09 | 3.90e+02 | 2.2e−13 | 4.57e+02 | 2.84e+01 | 3.90e+02 | 8.82e−03 | |||
ABCSGQ | 7.27e−06 | 3.46e−05 | 3.90e+02 | 2.43e−04 | 4.21e+02 | 3.04e+01 | 3.90e+02 | 3.63e−02 | |||
GABC2 | 1.76e−09 | 9.01e−09 | 3.90e+02 | 2.11e−13 | 4.23e+02 | 3.96e+01 | 3.90e+02 | 3.04e−03 | |||
IGABC | 0.00e+00 | 0.00e+00 | 3.90e+02 | 1.18e−13 | 3.90e+02 | 8.40e−06 | 3.90e+02 | 1.73e−13 |
Bold values denote the optimal values in the experimental data.

The Convergent Curve of Different Benchmark Functions.

The Convergent Curve of Different Benchmark Functions.
Table 2 presents the comparison results of ABC, GABC1, CABC, NABC, WGABC, ABCSGQ, GABC2, and IGABC when the dimension is 30, where "Mean" is the average value and "Std" is the standard deviation. From the results, IGABC outperforms ABC on all benchmark functions. When the test function is Step, Griewank, or Rastrigin, IGABC can find the theoretical optimal value. It is worth mentioning that for the Rosenbrock function, whose optimum is very difficult to find, the convergence accuracy is improved by 16 orders of magnitude.
Table 3 presents the experimental results of the same algorithms when the dimension is 50. According to the results, IGABC again outperforms ABC on all benchmark functions and can find the theoretical optimal value when the test function is Step, Griewank, or Rastrigin. At dimension 50, the convergence accuracy on the Rosenbrock function is improved by 10 orders of magnitude compared with the other improved ABC algorithms.
f13 and f14 are test functions of dimension 2. f13 has countless extreme points, and its strong oscillation makes the global optimum difficult to find; thus, f13 tests the exploitation ability of the algorithm. According to Figure 2, IGABC finds a better solution than the other algorithms. f14 has one global optimum and three local extreme points; according to Figure 2, the convergence accuracy of IGABC is better than that of the other improved ABC algorithms.
As can be seen from Tables 2–4, compared with the standard ABC, IGABC improves the optimization of the 18 benchmark functions to different degrees. Compared with the other six improved ABC algorithms, IGABC outperforms them on most benchmark functions. In particular, for the Griewank, Step, and Rastrigin benchmark functions, whether in 30 or 50 dimensions, IGABC converges to the theoretical optimal value in fewer iterations. Although the other test functions do not converge to the theoretical optimal value, the convergence speed and accuracy are also greatly improved; this is especially true for the difficult benchmark functions Rosenbrock and Schaffer, on which the convergence accuracy of IGABC is significantly higher than that of the other seven algorithms. Among the 18 test functions, the Rosenbrock function is a non-quadratic function with a narrow, curved valley next to its optimal solution. When searching along its edges, oscillation often occurs, so the search easily falls into a local optimum; the Rosenbrock function is therefore often used to evaluate the exploitation ability of an algorithm. The convergence curve for the Rosenbrock function is shown in Figure 2. Compared with the other six improved algorithms, the exploitation capability of IGABC is evident, and its convergence accuracy is improved by 16 orders of magnitude.
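For reference, the generalized Rosenbrock function discussed above can be written as f(x) = Σ<sub>i</sub> [100(x<sub>i+1</sub> − x<sub>i</sub>²)² + (1 − x<sub>i</sub>)²], with global minimum 0 at x = (1, ..., 1). A minimal Python sketch (the function names here are illustrative, not from the paper):

```python
import numpy as np

def rosenbrock(x):
    """Generalized Rosenbrock function; global minimum f(1, ..., 1) = 0."""
    x = np.asarray(x, dtype=float)
    # The 100*(x_{i+1} - x_i^2)^2 term creates the narrow curved valley
    # that makes gradient-free search oscillate near the optimum.
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (1.0 - x[:-1]) ** 2))
```

Evaluating at (1, ..., 1) returns exactly 0, which is the theoretical optimum the convergence-accuracy comparisons in Tables 2–4 are measured against.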
The Test Results of f13, f14.
| | | ABC | GABC1 | CABC | NABC | WGABC | ABCSGQ | GABC2 | IGABC |
|---|---|---|---|---|---|---|---|---|---|
| f13 | Mean | 1.45e−01 | 1.45e−01 | 2.75e−01 | 1.41e−01 | 1.02e−02 | 3.88e−02 | 1.33e−01 | 1.64e−03 |
| | Std | 5.65e−02 | 9.96e−02 | 1.15e−01 | 1.07e−01 | 2.42e−03 | 3.15e−02 | 9.22e−03 | 3.57e−03 |
| f14 | Mean | 3.42e+01 | 6.85e+00 | 3.00e+00 | 3.07e+01 | 3.00e+00 | 3.13e+00 | 3.20e+00 | 3.00e+00 |
| | Std | 3.69e+01 | 7.15e+00 | 9.20e−04 | 3.43e+01 | 5.64e−04 | 3.40e−01 | 4.90e−01 | 2.56e−12 |
Bold values denote the optimal values in the experimental data.
6 Conclusion
The standard ABC algorithm is a metaheuristic population-based algorithm that mimics the behavior of bees seeking nectar. It is applied to engineering problems, algebra problems, and so on. Because the standard ABC is good at exploration but neglects exploitation, it suffers from premature convergence and slow convergence speed. Inspired by the DE algorithm and the PSO algorithm, this paper proposes a new adaptive search equation; under the guidance of the global optimal solution, the convergence speed is accelerated and the exploitation ability is improved. However, the algorithm still falls into local optima easily. Inspired by the natural phenomenon of diseased fish being eaten by big fish, this paper proposes a new mutation method. The mutation method increases population diversity, alleviates the tendency to fall into local optima, and improves the robustness of the algorithm. When the test object is a multimodal function or a function that is difficult to converge on, such as Rosenbrock and Schaffer, the validity of the method is confirmed. When the test object is a unimodal function or a function whose convergence is simple, the improvement is not obvious, because such functions do not have many local optimal solutions and the algorithm does not need additional population diversity during iteration. When these algorithms were used to optimize 18 benchmark functions, the experimental results showed that IGABC greatly improves convergence speed and convergence precision compared with the other six improved ABC algorithms. What is more, its convergence accuracy is improved by 16 orders of magnitude when the test objects are the Rosenbrock and Schaffer benchmark functions.
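As a rough illustration of the gbest-guided search idea this line of work builds on, the candidate-generation step of GABC-style algorithms (following Zhu and Kwong [29]; IGABC's self-adaptive equation and self-perturbing mechanism differ in details not reproduced here) can be sketched as follows. The function name and the default weight C are assumptions for illustration:

```python
import random

def gbest_candidate(x_i, x_k, gbest, C=1.5):
    """GABC-style candidate for food source x_i (after Zhu & Kwong [29]).

    x_i:   current food source (list of coordinates)
    x_k:   a randomly chosen neighbor food source
    gbest: best solution found by the colony so far
    C:     upper bound of the gbest-guidance weight (assumed value)
    As in standard ABC, only one randomly chosen dimension j is updated.
    """
    v = list(x_i)
    j = random.randrange(len(x_i))
    phi = random.uniform(-1.0, 1.0)   # neighbor-difference coefficient
    psi = random.uniform(0.0, C)      # gbest-guidance coefficient
    v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + psi * (gbest[j] - x_i[j])
    return v
```

The psi term pulls the candidate toward the global best solution, which is the source of the faster convergence discussed above; without it (the original ABC equation), the search direction carries no information about where good solutions have already been found.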
Acknowledgments
This work was supported by the National Natural Science Foundation of China (no. 61473266) and the Key Technology Research Project of Henan Province (no. 152102210036).
Bibliography
[1] A. L. Bolaji, A. T. Khader, M. A. Al-Betar and M. A. Awadallah, A hybrid nature-inspired artificial bee colony algorithm for uncapacitated examination timetabling problems, J. Intell. Syst. 24 (2015), 37–54. doi:10.1515/jisys-2014-0002.
[2] A. Bouziz, A. Draa and S. Chikhi, A quantum-inspired artificial bee colony algorithm for numerical optimization, in: Proceedings of the International Symposium on Programming and Systems, Algiers, pp. 81–88, 2013. doi:10.1109/ISPS.2013.6581498.
[3] Y. H. Chi, F. C. Sun, W. J. Wang and C. M. Yu, An improved particle swarm optimization algorithm with search space zoomed factor and attractor, Chin. J. Comput. 34 (2011), 116–130. doi:10.3724/SP.J.1016.2011.00115.
[4] M. Dorigo and T. Stutzle, Ant Colony Optimization, MIT Press, Cambridge, MA, 2004. doi:10.7551/mitpress/1290.001.0001.
[5] W. F. Gao, S. Y. Liu and L. L. Huang, Inspired artificial bee colony algorithm for global optimization problems, Chin. J. Electron. 12 (2012), 2396–2403.
[6] W. F. Gao, S. Y. Liu and L. L. Huang, A novel artificial bee colony algorithm based on modified search equation and orthogonal learning, IEEE Trans. Cybernet. 43 (2013), 1011–1024. doi:10.1109/TSMCB.2012.2222373.
[7] P. Guo, W. Cheng and J. Liang, Global artificial bee colony search algorithm for numerical function optimization, in: Proceedings of 2011 Seventh International Conference on Natural Computation, Shanghai, pp. 1280–1283, 2011. doi:10.1109/ICNC.2011.6022368.
[8] H. T. Jadhav and R. Roy, Gbest guided artificial bee colony algorithm for environmental/economic dispatch considering wind power, Expert Syst. Appl. 16 (2013), 6385–6399. doi:10.1016/j.eswa.2013.05.048.
[9] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, pp. 1–10, Erciyes University, Turkey, 2005.
[10] D. Karaboga and B. Basturk, On the performance of artificial bee colony algorithm, Appl. Soft Comput. 8 (2008), 687–697. doi:10.1016/j.asoc.2007.05.007.
[11] J. Kennedy and R. Eberhart, Particle swarm optimization, in: IEEE International Conference on Neural Networks, Perth, pp. 1942–1949, 1995.
[12] P. Kumar, S. Kumar, T. K. Sharma and M. Pant, Bi-level thresholding using PSO, artificial bee colony and MRLDE embedded with Otsu method, Memet. Comput. 5 (2013), 323–334. doi:10.1007/s12293-013-0123-5.
[13] X. Li and G. Yang, Artificial bee colony algorithm with memory, Appl. Soft Comput. 1 (2016), 362–372. doi:10.1016/j.asoc.2015.12.046.
[14] J. Luo, X. G. Xiao, L. Fu and Q. Wang, Modified artificial bee colony algorithm based on segmental-search strategy, Control Decis. 9 (2012), 1402–1410.
[15] X. Pan, Y. Lu, S. Li and R. Li, An improved artificial bee colony with new search strategy, Int. J. Wireless Mob. Comput. 9 (2015), 391–396. doi:10.1504/IJWMC.2015.074032.
[16] R. Roy and H. T. Jadhav, Optimal power flow solution of power system incorporating stochastic wind power using Gbest guided artificial bee colony algorithm, Int. J. Elect. Power Energy Syst. 1 (2015), 562–578. doi:10.1016/j.ijepes.2014.07.010.
[17] T. K. Sharma and M. Pant, Enhancing the food locations in an artificial bee colony algorithm, Soft Comput. 17 (2013), 1939–1965. doi:10.1007/s00500-013-1029-3.
[18] T. K. Sharma and M. Pant, Improved search mechanism in ABC and its application in engineering, J. Eng. Sci. Technol. 10 (2015), 111–133.
[19] T. K. Sharma and M. Pant, Distribution in the placement of food in artificial bee colony based on changing factor, Int. J. Syst. Assur. Eng. Manage. (2016), 1–14. doi:10.1007/s13198-016-0495-2.
[20] T. K. Sharma and M. Pant, Shuffled artificial bee colony algorithm, Soft Comput. (2016), 1–20. doi:10.1007/s00500-016-2166-2.
[21] H. Sun, B. Li and Q. Yu, A hybrid artificial bee colony algorithm based on different search mechanisms, Int. J. Wireless Mob. Comput. 9 (2015), 383–390. doi:10.1504/IJWMC.2015.074033.
[22] K. S. Tang, K. F. Man, S. Kwong and Q. He, Genetic algorithms and their application, IEEE Signal Process. Mag. 13 (1996), 22–37. doi:10.1109/79.543973.
[23] Z. Wang and X. Kong, An improved artificial bee colony algorithm for global optimization, Inf. Technol. J. 24 (2013), 8362–8369. doi:10.3923/itj.2013.8362.8369.
[24] J. W. Wang, D. Yang, J. F. Qiu and X. J. Wang, Improved artificial bee colony algorithm for solving nonlinear equations, J. Anhui Univ. (Nat. Sci. Ed.) 38 (2014), 16–23.
[25] S. Zhang and S. Liu, A novel artificial bee colony algorithm for function optimization, Math. Probl. Eng. 2015 (2015), 1–10. doi:10.1155/2015/129271.
[26] Y. Y. Zhang, P. Zeng, Y. Wang, B. H. Zhu and F. J. Kuang, Linear weighted Gbest-guided artificial bee colony algorithm, in: 2012 5th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, pp. 155–159, 2012. doi:10.1109/ISCID.2012.191.
[27] C. Zhang, Q. Li, P. Chen, S. G. Yang and Y. X. Yin, Improved ant colony optimization based on particle swarm optimization and its application, Chin. J. Eng. 7 (2013), 955–960.
[28] H. Zhao, M. D. Li and X. W. Weng, Improved artificial bee colony algorithm with self-adaptive global best-guided quick searching strategy, Control Decis. 11 (2014), 2041–2407.
[29] G. Zhu and S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Appl. Math. Comput. 7 (2010), 3166–3173. doi:10.1016/j.amc.2010.08.049.
©2017 Walter de Gruyter GmbH, Berlin/Boston
This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Articles in the same Issue
- Frontmatter
- Extreme Learning Machine-Based Traffic Incidents Detection with Domain Adaptation Transfer Learning
- Mining Dynamics: Using Data Mining Techniques to Analyze Multi-agent Learning
- Automatical Knowledge Representation of Logical Relations by Dynamical Neural Network
- A Comparison of Three Soft Computing Techniques, Bayesian Regression, Support Vector Regression, and Wavelet Regression, for Monthly Rainfall Forecast
- Single Machine Scheduling Based on EDD-SDST-ACO Heuristic Algorithm
- Exponential Genetic Algorithm-Based Stable and Load-Aware QoS Routing Protocol for MANET
- Disruption Management for Predictable New Job Arrivals in Cloud Manufacturing
- Adaptive Fuzzy High-Order Super-Twisting Sliding Mode Controller for Uncertain Robotic Manipulator
- Intelligent Tutoring Systems
- A Novel Global ABC Algorithm with Self-Perturbing