Self-adaptive search equation-based artificial bee colony algorithm with CMA-ES on the noiseless BBOB testbed

Self-Adaptive Search Equation based Artificial Bee Colony (SSEABC) is a recent variant of the Artificial Bee Colony (ABC) algorithm. SSEABC proposes three enhancements to the canonical ABC algorithm: a self-adaptive search equation selection strategy, hybridization with a local search procedure, and an incremental population size strategy. The performance of SSEABC was tested on the CEC 2015 benchmark suite, where it ranked third among all competition participants. In this paper, we benchmark SSEABC on the noiseless BBOB function testbed. We also compare the performance of SSEABC to the PSO, ABC and GA algorithms.


INTRODUCTION
Ever since the Artificial Bee Colony (ABC) algorithm came into existence [11], it has been used to solve continuous optimization problems. However, its failure to produce successful results on some types of problems has led to the emergence of many improved ABC variants in recent years. Many of these algorithms suggest enhancements to one or more of the steps of the ABC algorithm [1,3]. Recent research [1] has shown that the best improvements can be made with changes to the employed bees and onlooker bees steps or with new extensions to the canonical ABC algorithm.
A recent ABC variant, Self-adaptive Search Equation based Artificial Bee Colony (SSEABC) [14], focuses on these remediation approaches. The SSEABC algorithm solves the problem of finding the appropriate search equation for the employed bees and onlooker bees steps in a self-adaptive way. In addition, the algorithm is improved by iteratively increasing the population size and by using local search procedures.
The performance of the SSEABC algorithm was compared with ABC and many contemporary algorithms on the CEC 2015 benchmark function suite, and it obtained successful results [14]. In this paper, the performance of the SSEABC algorithm is tested on the BBOB function testbed.

ALGORITHM PRESENTATION
SSEABC proposes three modifications to the original ABC algorithm to improve performance. These strategies are the self-adaptive search equation selection, hybridization with a local search procedure, and increasing the population size during execution. The pseudo-code of SSEABC is presented in Algorithm 1.

Self-adaptive search equation selection:
In solving numerical optimization problems, the most important factor affecting the performance of the ABC algorithm is the search equations used in the employed bees and onlooker bees steps. In addition to the search equation, the number of dimensions to be changed is another important factor affecting the performance of the algorithm. Given the structure of the problem to be solved, determining the appropriate search equation becomes a difficult task.
Thus, in this study, a mechanism is developed that determines the appropriate search equation among various candidates. To do this, SSEABC maintains a search equation pool that is filled with randomly generated search equations. The general template of a candidate search equation, as seen in Algorithm 2, is

x_{i,j} = term1 + term2 + term3 + term4,

where the alternatives for the four terms, together with their M values, are listed in Table 1. At the initialization step of the algorithm, the pool S is filled with search equations generated randomly according to Algorithm 2 and Table 1. Then, at each iteration, a candidate search equation from the pool is used in the employed bees and onlooker bees steps. This process repeats until all candidates in the pool have been used. Throughout these steps, the success count of each search equation that yields an update of the solution is incremented. After all candidate search equations in the pool have been used in the employed bees and onlooker bees steps, ps, the size of the pool, is scaled down according to Eq. (1), where MAX_FES is the maximum number of function evaluations for one execution, 2 × SN is the number of function evaluations per iteration, and itr_MAX ≈ MAX_FES / (2 × SN) is the approximated maximum number of iterations. itr_MAX is only an approximation because, under the incremental population size strategy, SN changes over time. Finally, when the algorithm finishes its execution, only the few appropriate search equations remain in the pool.
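The pool mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the class and function names, the concrete perturbation inside `make_random_equation`, and the "keep the most successful" scaling rule are all assumptions for illustration.

```python
import random

def make_random_equation():
    """Return one randomly assembled candidate search equation,
    mimicking the term1 + term2 + term3 + term4 template (the
    concrete combination of reference points is illustrative)."""
    phi = random.uniform(-1.0, 1.0)

    def equation(best, r1, r2, j):
        # perturb dimension j around the best-so-far solution
        return best[j] + phi * (r1[j] - r2[j])

    return equation

class EquationPool:
    """Pool S of candidate search equations with success counters."""

    def __init__(self, size):
        self.equations = [make_random_equation() for _ in range(size)]
        self.successes = [0] * size

    def record_success(self, idx):
        """Called whenever equation idx produced an improved solution."""
        self.successes[idx] += 1

    def scale_down(self, keep):
        """Shrink the pool, keeping only the `keep` most successful
        equations (a stand-in for the paper's Eq. (1) schedule)."""
        order = sorted(range(len(self.equations)),
                       key=lambda i: self.successes[i], reverse=True)[:keep]
        self.equations = [self.equations[i] for i in order]
        self.successes = [self.successes[i] for i in order]
```

Over a full run, repeated `scale_down` calls leave only the equations that consistently improve solutions, which mirrors the behavior described above.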
Hybridization with a local search procedure:
In the SSEABC algorithm, bees move according to the SSEABC rules and through the invocation of a local search procedure. Specifically, the best-so-far solution is used as the initial solution from which the local search procedure is started. The final solution found by the local search becomes the new best-so-far solution if it is better than the initial solution. In SSEABC, the local search procedure is not called at every iteration; it is called only when its invocation is expected to improve the best-so-far solution. In the previous implementation of SSEABC [14], a competitive local search selection procedure was used. However, for the BBOB testbed, we used the CMA-ES algorithm [12] as the local search procedure, because competitive local search selection leads to a wasteful use of function evaluations.
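The hybridization contract is simple: start the local search from the best-so-far solution and accept its result only if it improves on the starting point. The sketch below uses a plain (1+1)-ES with step-size adaptation as a stand-in for CMA-ES; the function name, signature, budget, and step-size constants are all assumptions, not the paper's implementation.

```python
import random

def local_search(f, x_best, f_best, budget=200, sigma=0.3):
    """(1+1)-ES stand-in for the CMA-ES local search step.

    Starts from the best-so-far solution x_best (with value f_best)
    and returns an improved solution only if it beats the initial
    one; otherwise the initial solution is returned unchanged.
    """
    x, fx = list(x_best), f_best
    for _ in range(budget):
        cand = [xi + random.gauss(0.0, sigma) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            sigma *= 1.5   # success: widen the search step
        else:
            sigma *= 0.9   # failure: shrink the search step
    # accept the local-search result only on improvement
    return (x, fx) if fx < f_best else (list(x_best), f_best)
```

In the real algorithm this call is gated, as the text notes: it is issued only when an improvement of the best-so-far solution is expected, so the local-search budget is not spent on every iteration.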
The reference points appearing in the terms are the best-so-far, best-distance, second-best, median, and worst food sources at dimension j, respectively. X_r1 and X_r2 are two randomly selected food sources, and x_AVE,j refers to the average position of the food sources at dimension j. ϕ_N can take one of two possible ranges (see Table 1).

Incremental population size strategy:
This strategy is very similar to the incremental social learning (ISL) framework [2,4]. According to this strategy, SSEABC starts with a small population. During the algorithm's execution, a new solution influenced by the best-so-far solution is added to the population after a certain number of iterations, called the growth period.
This addition process continues until the maximum population size is reached. The solution to be newly added to the population is created using a dedicated update equation, where x_new,j denotes the new solution to be added.
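A sketch of the growth step described above follows. The helper name `maybe_grow`, the search domain bounds, and the concrete mixing rule that pulls a random point toward the best-so-far solution are illustrative assumptions; only the trigger conditions (growth period elapsed, maximum population not yet reached) and the influence of the best-so-far solution come from the text.

```python
import random

def maybe_grow(population, fitnesses, f, best_idx, sn_max, itr, growth_period):
    """Incremental population size step: every `growth_period`
    iterations, add one new solution biased toward the best-so-far
    solution, until the population reaches sn_max."""
    if len(population) >= sn_max or itr % growth_period != 0:
        return
    best = population[best_idx]
    # random point in an assumed [-5, 5]^D domain
    rnd = [random.uniform(-5.0, 5.0) for _ in best]
    phi = random.random()
    # new solution pulled toward the best-so-far solution
    new = [r + phi * (b - r) for r, b in zip(rnd, best)]
    population.append(new)
    fitnesses.append(f(new))
```

Starting small and growing toward `sn_max` concentrates early evaluations on few solutions, which is the rationale the ISL framework [2,4] gives for this strategy.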

EXPERIMENTAL PROCEDURE
We used the default parameter values for the SSEABC and CMA-ES algorithms, as given in [14] and [12], respectively. A maximum budget of 10^4 × D function evaluations was used. SSEABC restarts every 2500 × D function evaluations without forgetting the best-so-far solution.
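The restart protocol above can be sketched as a thin driver loop. Everything here except the two budget constants (10^4 × D total, 2500 × D per restart period) and the keep-the-best-so-far rule is an assumption; in particular, `optimize_once` is a hypothetical stand-in for one SSEABC run consuming a fixed evaluation budget.

```python
def run_with_restarts(optimize_once, f, dim,
                      max_evals_factor=10**4, restart_period_factor=2500):
    """Experimental protocol sketch: a total budget of 10^4 * D
    evaluations, restarting the optimizer every 2500 * D evaluations
    while never forgetting the best-so-far solution.

    optimize_once(f, dim, budget) -> (x, fx) is a stand-in for a
    single fresh SSEABC run using `budget` evaluations.
    """
    budget = max_evals_factor * dim
    period = restart_period_factor * dim
    best_x, best_f = None, float("inf")
    used = 0
    while used < budget:
        chunk = min(period, budget - used)
        x, fx = optimize_once(f, dim, chunk)   # fresh restart
        used += chunk
        if fx < best_f:                        # best-so-far is kept
            best_x, best_f = x, fx
    return best_x, best_f
```

With these constants each run performs exactly four restart periods, independent of the dimension D.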

RESULTS
Results from experiments according to [10] and [6] on the benchmark functions given in [5,9] are presented in Figures 1, 2 and 3 and in Tables 2 and 3. The experiments were performed with COCO [8], version 2.0; the plots were produced with version 2.0. The average runtime (aRT), used in the figures and tables, depends on a given target function value, f_t = f_opt + Δf, and is computed over all relevant trials as the number of function evaluations executed during each trial while the best function value did not reach f_t, summed over all trials and divided by the number of trials that actually reached f_t [7,13]. Statistical significance is tested with the rank-sum test for a given target Δf_t using, for each trial, either the number of function evaluations needed to reach Δf_t (inverted and multiplied by −1) or, if the target was not reached, the best Δf-value achieved, measured only up to the smallest number of overall function evaluations for any unsuccessful trial under consideration.
From the experiments, we observed that SSEABC solved 11 functions in dimension 5 and 5 functions in dimension 20 with a 100% success rate. Across dimensions from 2 to 20, SSEABC solved f1. The comparison of the SSEABC algorithm to PSO, ABC and GA from previous BBOB workshops is presented in Figure 2. We observe that SSEABC outperforms PSO, ABC and GA on almost all functions. Moreover, SSEABC obtains better run-time performance than the reference algorithms on the moderate, ill-conditioned and multi-modal functions. When the comparison results are examined, SSEABC seems to give worse results than ABC on f4 and f20. Although SSEABC is an improved variant of the ABC algorithm, this situation is surprising at first glance. However, it is related to the fact that the selected local search algorithm does not work well on these problems: the use of a certain amount of the function evaluation budget by CMA-ES yields this result.

CONCLUSION
In this paper, we presented the benchmark results of the SSEABC algorithm on the BBOB function testbed. We also compared the performance of SSEABC to the data obtained by the PSO, ABC and GA algorithms. The comparison results showed that the SSEABC algorithm can outperform the compared algorithms and that it is very competitive with (1+1)-CMA-ES and BIPOP-CMA-ES on the moderate and ill-conditioned functions.

Table 3: Average runtime (aRT, in number of function evaluations) divided by the respective best aRT measured during BBOB-2009 in dimension 20. The aRT and, in braces, as dispersion measure, the half difference between the 10 and 90%-tile of bootstrapped run lengths appear for each algorithm and target; the corresponding reference aRT appears in the first row. The different target Δf-values are shown in the top row. #succ is the number of trials that reached the (final) target f_opt + 10^-8. The median number of conducted function evaluations is additionally given in italics if the target in the last column was never reached. Entries followed by a star are statistically significantly better (according to the rank-sum test) when compared to all other algorithms of the table, with p = 0.05 or p = 10^-k when the number k following the star is larger than 1, with Bonferroni correction of 110. A ↓ indicates the same tested against the best algorithm from BBOB 2009. Best results are printed in bold.
Data produced with COCO v2.1