Vikas, S. Nanda (2016). Multi-objective Moth Flame Optimization. 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI).
Mohammed El-Abd (2012). Generalized opposition-based artificial bee colony algorithm. 2012 IEEE Congress on Evolutionary Computation.
Ying-Biao Ling, Yongquan Zhou, Qifang Luo (2017). Lévy Flight Trajectory-Based Whale Optimization Algorithm for Global Optimization. IEEE Access, 5.
M. Okwu, L. Tartibu (2020). Moths–Flame Optimization Algorithm.
A. Reynolds, Alan Smith, D. Reynolds, N. Carreck, J. Osborne (2007). Honeybees perform optimal scale-free searching flights when attempting to locate a food source. Journal of Experimental Biology, 210.
G. Viswanathan, V. Afanasyev, S. Buldyrev, E. Murphy, P. Prince, H. Stanley (1996). Lévy flight search patterns of wandering albatrosses. Nature, 381.
Xin-She Yang, S. Deb (2009). Cuckoo Search via Lévy flights. 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC).
Xiang-tao Li, Jie Zhang, Minghao Yin (2014). Animal migration optimization: an optimization algorithm inspired by animal migration behavior. Neural Computing and Applications, 24.
Nianyin Zeng, Zidong Wang, Hong Zhang, Kee-Eung Kim, Yurong Li, Xiaohui Liu (2019). An Improved Particle Filter With a Novel Hybrid Proposal Distribution for Quantitative Analysis of Gold Immunochromatographic Strips. IEEE Transactions on Nanotechnology, 18.
Hui Wang, Zhijian Wu, S. Rahnamayan, Yong Liu, M. Ventresca (2011). Enhancing particle swarm optimization using generalized opposition-based learning. Information Sciences, 181.
Xianbing Meng, Yu Liu, X. Gao, Hengzhen Zhang (2014). A New Bio-inspired Algorithm: Chicken Swarm Optimization.
S. Salcedo-Sanz, Á. Pastor-Sánchez, D. Gallo-Marazuela, J. Portilla-Figueras (2013). A Novel Coral Reefs Optimization Algorithm for Multi-objective Problems.
(2018). Research on evaluation method of spatial straightness for variable step beetle antennae search algorithm, 8.
A. Mårell, J. Ball, A. Hofgaard (2002). Foraging and movement paths of female reindeer: insights from fractal analysis, correlated random walks, and Lévy flights. Canadian Journal of Zoology, 80.
Feng-ying Zhu (2016). Patients' Responsibilities in Medical Ethics. Philosophy Study, 6.
Wenyong Dong, Lanlan Kang, Wensheng Zhang (2016). Opposition-based particle swarm optimization with adaptive mutation strategy. Soft Computing, 21.
So-Youn Park, Jujang Lee (2016). Stochastic Opposition-Based Learning Using a Beta Distribution in Differential Evolution. IEEE Transactions on Cybernetics, 46.
M. Ahandani, H. Alavi-Rad (2015). Opposition-based learning in shuffled frog leaping: An application for parameter identification. Information Sciences, 291.
K. Passino (2002). Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Systems Magazine, 22.
Qing Wu, Zheping Ma, Gang Xu, Shuai Li, Dechao Chen (2019). A Novel Neural Network Classifier Using Beetle Antennae Search Algorithm for Pattern Classification. IEEE Access, 7.
Yangming Zhou, Jin-Kao Hao, B. Duval (2017). Opposition-Based Memetic Search for the Maximum Diversity Problem. IEEE Transactions on Evolutionary Computation, 21.
S. Mirjalili, A. Lewis (2016). The Whale Optimization Algorithm. Advances in Engineering Software, 95.
A. Reynolds, M. Frye (2007). Free-Flight Odor Tracking in Drosophila Is Consistent with an Optimal Intermittent Scale-Free Search. PLoS ONE, 2.
A. Ewees, M. Elaziz, E. Houssein (2018). Improved grasshopper optimization algorithm using opposition-based learning. Expert Systems with Applications, 112.
Hossam Faris, Ibrahim Aljarah, M. Al-Betar, S. Mirjalili (2018). Grey wolf optimizer: a review of recent variants and applications. Neural Computing and Applications, 30.
O. Abedinia, N. Amjady, A. Ghasemi (2016). A new metaheuristic algorithm based on shark smell optimization. Complexity, 21.
R. Jensi, G. Jiji (2016). An enhanced particle swarm optimization with levy flight for global optimization. Applied Soft Computing, 43.
Mostafa Ali, Noor Awad, R. Reynolds, P. Suganthan (2018). A balanced fuzzy Cultural Algorithm with a modified Levy flight search for real parameter optimization. Information Sciences, 447.
Xiangyuan Jiang, Shuai Li (2017). Beetle Antennae Search without Parameter Tuning (BAS-WPT) for Multi-objective Optimization. arXiv, abs/1711.02395.
Xiangyuan Jiang, Shuai Li (2017). BAS: Beetle Antennae Search Algorithm for Optimization Problems. arXiv, abs/1710.10724.
Yujun Zheng (2015). Water wave optimization: A new nature-inspired metaheuristic. Computers & Operations Research, 55.
Nianyin Zeng, Hong Qiu, Zidong Wang, Weibo Liu, Hong Zhang, Yurong Li (2018). A new switching-delayed-PSO-based optimized SVM algorithm for diagnosis of Alzheimer's disease. Neurocomputing, 320.
S. Rahnamayan, H. Tizhoosh, M. Salama (2008). Opposition-Based Differential Evolution. IEEE Transactions on Evolutionary Computation, 12.
Zongyao Zhu, Zhiyu Zhang, Weishi Man, Xiangqian Tong, Jinzhe Qiu, Fangfang Li (2018). A new beetle antennae search algorithm for multi-objective energy management in microgrid. 2018 13th IEEE Conference on Industrial Electronics and Applications (ICIEA).
Nianyin Zeng, Hong Zhang, Weibo Liu, Jinling Liang, F. Alsaadi (2017). A switching delayed PSO optimized extreme learning machine for short-term load forecasting. Neurocomputing, 240.
A. Edwards, R. Phillips, N. Watkins, M. Freeman, E. Murphy, V. Afanasyev, S. Buldyrev, M. Luz, E. Raposo, H. Stanley, G. Viswanathan (2007). Revisiting Lévy flight search patterns of wandering albatrosses, bumblebees and deer. Nature, 449.
Ying Tan, Yuanchun Zhu (2010). Fireworks Algorithm for Optimization.
V. Savsani, M. Tawhid (2017). Non-dominated sorting moth flame optimization (NS-MFO) for multi-objective problems. Engineering Applications of Artificial Intelligence, 63.
H. Tizhoosh (2005). Opposition-Based Learning: A New Scheme for Machine Intelligence. International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), 1.
(2015). Water wave optimization.
Xiaoyi Feng, Ao Liu, Weiliang Sun, Xiaofeng Yue, Bo Liu (2018). A Dynamic Generalized Opposition-Based Learning Fruit Fly Algorithm for Function Optimization. 2018 IEEE Congress on Evolutionary Computation (CEC).
Ali Heidari, P. Pahlavani (2017). An efficient modified grey wolf optimizer with Lévy flight for optimization tasks. Applied Soft Computing, 60.
S. Mirjalili (2015). Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowledge-Based Systems, 89.
Dianna Song (2018). Application of Particle Swarm Optimization Based on Beetle Antennae Search Strategy in Wireless Sensor Network Coverage.
SYSTEMS SCIENCE & CONTROL ENGINEERING: AN OPEN ACCESS JOURNAL, 2020, VOL. 8, NO. 1, 35–47
https://doi.org/10.1080/21642583.2019.1708829

Xin Xu, Kailian Deng and Bo Shen

College of Information Science and Technology, Donghua University, Shanghai, People's Republic of China; Engineering Research Center of Digitalized Textile and Fashion Technology, Ministry of Education, Shanghai, People's Republic of China

CONTACT: Kailian Deng, dengkailian@dhu.edu.cn. © 2020 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

ABSTRACT
The beetle antennae search (BAS) algorithm is a new meta-heuristic algorithm which has been shown to be very useful in many applications. However, the algorithm itself still has some problems, such as low precision, a tendency to fall into local optima when solving complex problems, and excessive dependence on parameter settings. In this paper, an algorithm called the beetle antennae search algorithm based on Lévy flights and adaptive strategy (LABAS) is proposed to solve these problems. The algorithm turns the beetle into a population and updates the population with elite individuals' information to improve the convergence rate and stability. At the same time, Lévy flights and a scaling factor are introduced to enhance the algorithm's exploration ability. After that, an adaptive step size strategy is used to solve the problem of difficult parameter setting. Finally, generalized opposition-based learning is applied to the initial population and the elite individuals, which makes the algorithm achieve a certain balance between global exploration and local exploitation. The LABAS algorithm is compared with 6 other heuristic algorithms on 10 benchmark functions, and the simulation results show that it is superior to the other six algorithms in terms of solution accuracy, convergence rate and robustness.

ARTICLE HISTORY
Received 23 October 2019; Accepted 20 December 2019

KEYWORDS
Beetle antennae search algorithm; elite individuals; Lévy flights; adaptive strategy; generalized opposition-based learning

1. Introduction

Heuristic algorithms are a hotspot in the field of optimization because of their simple implementation, good scalability and high efficiency, and they are widely used in computer vision, medicine, power systems and other fields (Zeng et al., 2018, 2019; Zeng, Zhang, Liu, Liang, & Alsaadi, 2017). In recent years, many new heuristic algorithms have been proposed. For example, the firefly algorithm (FA) was proposed in Yang (2008); it mimics the biological characteristics of fireflies in nature, which transmit information or attract food through luminescence. The cuckoo search (CS) algorithm was proposed in Yang and Deb (2009); it simulates the parasitic brooding behaviour of cuckoos and innovatively introduces the Lévy flights mechanism, enabling the algorithm to carry out global and local searches effectively. The chicken swarm optimization (CSO) algorithm, proposed in Meng, Liu, Gao, and Zhang (2014), is a stochastic optimization method that simulates the hierarchy and foraging behaviour of chickens. The whale optimization algorithm (WOA), proposed in Mirjalili and Lewis (2016), mimics the hunting behaviour of humpback whales and optimizes by means of a shrinking encircling mechanism and a spiral position update. In addition to the aforementioned algorithms, there are many other heuristic algorithms (see e.g. Abedinia, Amjady, & Ghasemi, 2016; Faris, Aljarah, Al-Betar, & Mirjalili, 2018; Li, Zhang, & Yin, 2014; Passino, 2002; Salcedo-Sanz, Pastor-Sánchez, Gallo-Marazuela, & Portilla-Figueras, 2013; Tan & Zhu, 2010; Zheng, 2015).

The algorithms described above share a common problem: the amount of calculation is large. To this end, a new bio-heuristic intelligent algorithm, the beetle antennae search (BAS) algorithm, was proposed in Jiang and Li (2017, 2018). BAS requires only one individual during optimization and its search mechanism is simple, so the amount of calculation is small. In Jiang and Li (2018), the effectiveness of BAS for optimization problems is demonstrated by simulation experiments on typical test functions. BAS has also been applied successfully to some optimization problems. For example, in Song (2018), BAS was combined with the particle swarm optimization algorithm to solve the wireless sensor network coverage problem. Furthermore, in Zhu et al. (2018), BAS was applied to the energy management of a microgrid with constrained, multi-objective, nonlinear and mixed integer programming forms. Besides, in Wu, Ma, Xu, Li, and Chen (2019), BAS was used to optimize the weights between the hidden layer and the output layer of a neural network, which effectively improves the calculation speed and classification accuracy of the classifier. In addition, there have been other successful applications of the BAS algorithm (see e.g. Chen, Wang, & Wang, 2018; Wang & Liu, 2018). All these applications show that the BAS algorithm has potential research value.

Although BAS has been applied successfully to practical engineering problems, the algorithm itself still has some shortcomings. A single individual has limited search ability, and the algorithm does not use the information of the current optimal value, which leads to unnecessary iterations. In addition, when solving optimization problems with complex functions and high-dimensional variables, the convergence accuracy of BAS becomes low and it easily falls into local optima. Furthermore, BAS relies heavily on its parameter settings during optimization, so the convergence and accuracy of the algorithm are closely tied to the parameters used. These shortcomings need to be overcome; however, improving the algorithm is very challenging, and related research has been relatively scarce.

In response to the above discussion, this paper proposes an algorithm called the beetle antennae search algorithm based on Lévy flights and adaptive strategy (LABAS), which puts forward four improvements addressing the above shortcomings. The main contributions of this paper are summarized as follows. Firstly, a novel population update strategy is adopted instead of the traditional single-individual search. During the update, the elite individuals' information is used and the optimal solution is sought in the vicinity of some of the best individuals obtained so far; as such, the global optimization ability, stability and exploitation capability of the algorithm are improved. Secondly, Lévy flights are used in the search process to improve the exploration ability of the algorithm, and the local-optimum issue is hence alleviated. Thirdly, an adaptive step size strategy is used to solve the parameter-setting problem of the original algorithm, and the current position of the beetle is updated using the variables with the better fitness value. Fourthly, the generalized opposition-based learning strategy is applied to the initial population and the elite individuals in order to increase the diversity of the population, which simultaneously increases the convergence rate of the algorithm. Experimental results on 10 benchmark functions indicate that the proposed LABAS algorithm has better performance than the original BAS algorithm and 5 other representative heuristic algorithms.

The rest of this paper is organized as follows. In Section 2, the original BAS algorithm is explained. The proposed LABAS algorithm is presented in Section 3, and experimental results are shown in Section 4. Finally, conclusions are presented in Section 5.

2. An overview of the BAS algorithm

The BAS algorithm is a new optimization algorithm inspired by the foraging behaviour of the beetle. The beetle has two antennae which can detect the smell of food. If the odour intensity detected by the left antenna is larger than that on the right side, the beetle moves to the left next; otherwise it moves to the right. This simple principle gives the beetle the direction of the food, and the food can be found by following that direction.

In the BAS algorithm, the function to be optimized is regarded as the food, and the variables of the function can be considered as the position of the beetle. The beetle searches unknown areas randomly. In order to simulate this search behaviour, a random direction is generated by the following formula:

b = rnd(d, 1) / ‖rnd(d, 1)‖    (1)

where rnd(·) represents a function that produces random numbers in [−1, 1], d can be regarded as the dimension of both the space and the variables, and ‖·‖ indicates the norm. Then the spatial coordinates of the two antennae on the left and right sides of the beetle are generated as follows:

x_l^t = x^t + d^t · b
x_r^t = x^t − d^t · b    (2)

where x_l^t and x_r^t respectively represent the position coordinates of the left and right antennae of the beetle at time t, x^t indicates the position of the beetle at the tth iteration, and d^t is the length of the antennae at the tth iteration. Then the concentrations of odour on the left and right sides, denoted by f(x_l^t) and f(x_r^t), are calculated, where f(·) is the fitness function. After that, the position of the beetle is updated with the following formula:

x^{t+1} = x^t − δ^t · b · sign(f(x_l^t) − f(x_r^t))    (3)

where δ^t represents the step size at the tth iteration and sign(·) is the sign function. Therefore, the beetle moves a distance δ^t in the direction in which the fitness value is better. Notably, as the number of iterations increases, the beetle's antennae length d^t and step size δ^t need to decrease slowly, and they can be expressed as

d^t = r_d · d^{t−1} + 0.01    (4)
δ^t = r_δ · δ^{t−1}    (5)

where the initial values of d and δ and the corresponding attenuation coefficients r_d and r_δ need to be selected according to the variable ranges of the specific function.

3. The proposed LABAS algorithm

In this section, a new LABAS algorithm is proposed to improve the performance of the BAS algorithm. There are four main innovations. The first is to group the beetles and use the elite individuals to participate in the population update. The second is to introduce Lévy flights and a scaling factor into the search process. The third is to adopt an adaptive strategy for the step size, and to update the position of the beetle with the variables having the better fitness value after detection. The fourth is to use the generalized opposition-based learning strategy on the initial population and the elite individuals to increase the diversity of the population. These strategies let the LABAS algorithm avoid unnecessary iterations compared with the BAS algorithm. At the same time, they improve the stability of the algorithm and allow it to achieve a certain balance between global exploration and local exploitation. In addition, LABAS does not have a complicated parameter-setting problem.

3.1. Group composition and population update method involving elite individuals

BAS uses a single individual for optimization, so its computational cost is small. For some simple optimization problems, BAS can obtain a satisfactory approximate solution. Once the function to be optimized becomes more complex and the dimension of the variables increases, however, the optimization accuracy of BAS becomes lower and its performance deteriorates. Therefore, this paper replaces the individual search with a population-based search, drawing on and improving the population structure of the Moth-Flame optimization (MFO) algorithm (Mirjalili, 2015; Savsani & Tawhid, 2017; Vikas & Nanda, 2016), while retaining the simple and effective search mechanism of BAS. Such improvements raise the optimization ability and stability of the algorithm.

The beetle population can be represented by a matrix as follows:

X = [x_{i,j}]_{n×d}, i = 1, ..., n, j = 1, ..., d    (6)

where n represents the number of beetles and d represents the dimension of the variables to be optimized. The fitness values corresponding to these beetles can be represented by the following vector:

F_X = [f(x_1), f(x_2), ..., f(x_n)]^T    (7)

where the value of each row in F_X is the fitness value of the individual in the corresponding row of the beetle matrix X.

The BAS algorithm does not use the information obtained from the current best values. To this end, the LABAS algorithm uses the information of elite individuals in the population update process. The elite individuals are represented by the following matrix:

E = [e_{i,j}]_{n_e×d}    (8)

where n_e represents the number of optimal individuals obtained so far and d represents the dimension of the variables of the problem to be optimized. Thus each row of the matrix E represents the position of an elite individual obtained so far. Similarly, the fitness values corresponding to these elite individuals are represented by the vector

F_E = [f(e_1), f(e_2), ..., f(e_{n_e})]^T    (9)

It should be noted that E and F_E are updated simultaneously with the population, and the elite individuals are the best n_e solutions obtained so far. In this paper, we set n_e equal to n.

In order to better illustrate the LABAS algorithm proposed in this paper, the following assumptions are made:

(1) each individual in matrix E is sorted in increasing order of fitness value, because the fitness value should be as small as possible for a minimization problem. F_E is arranged in the same order as E, so the first row of matrix E is the best individual's position so far, and the first entry of F_E is its corresponding fitness value;
(2) each beetle updates its position according to the corresponding elite individual in the matrix E. Therefore, the first beetle always updates its position relative to the current optimal solution.

The above assumptions ensure that the beetles can exploit the neighbourhoods of the optimal solutions. At the same time, because the individuals in matrix E are constantly updated, the beetles do not always search around fixed solutions. This increases the exploration of the search space and therefore helps avoid falling into local optima. Figure 1 shows the distribution relationship between the beetles and the elite individuals when the beetles update their positions, and how the elite individuals are updated.

Figure 1. Relationship between beetles and elite individuals.

3.2. Search method based on Lévy flights

When solving complex optimization problems, the BAS algorithm has low convergence precision and easily falls into local optima. To solve this problem, the Lévy flights strategy is introduced into the LABAS algorithm. Lévy flights (Ali, Awad, Reynolds, & Suganthan, 2018; Heidari & Pahlavani, 2017; Ling, Zhou, & Luo, 2017) is a Markov process proposed by the French mathematician Lévy and later described in detail by Benoit Mandelbrot. It is a random walk in which the step size follows the Lévy stable distribution, which can be expressed by the following formula:

Lévy ∼ u = t^{−λ}, 1 < λ ≤ 3    (10)

It can be seen from the above equation that the distribution has an infinite mean and an infinite variance. Numerous studies have shown that many living things in nature exhibit the characteristics of Lévy flights (Edwards et al., 2007; Mårell, Ball, & Hofgaard, 2002; Reynolds & Frye, 2007; Reynolds, Smith, Reynolds, Carreck, & Osborne, 2007; Viswanathan et al., 1996), such as bees, fruit flies, albatrosses and reindeer. At the same time, Lévy flights can maximize the efficiency of resource searches in uncertain environments, and many optimization algorithms using Lévy flights show excellent performance. Lévy flights are characterized by frequent short-range local searches, with an occasional longer jump in which the direction of motion changes dramatically. These characteristics can, to a certain extent, prevent the algorithm from converging to a local optimum. Figure 2 shows a simulated trajectory of a two-dimensional Lévy flight.

Figure 2. Two-dimensional Lévy flights trajectory simulation.

A symmetric Lévy stable distribution is usually generated using the Mantegna algorithm (Jensi & Jiji, 2016), where symmetry means that the step can be positive or negative. The random step size Lévy(λ) is expressed as follows:

Lévy(λ) = μ / |v|^{1/β}    (11)

where λ = 1 + β, β ∈ (0, 2], and μ and v obey the following Gaussian distributions:

μ ∼ N(0, σ_μ²), v ∼ N(0, σ_v²)    (12)

where

σ_μ = { Γ(1 + β) · sin(π · β/2) / [Γ((1 + β)/2) · β · 2^{(β−1)/2}] }^{1/β}, σ_v = 1    (13)

where Γ(·) is the standard gamma function, and β = 1.5 is used in this paper. The symmetric Lévy stable distribution may produce a larger step after a series of small steps, and the direction will change several times.
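As an illustration, the Mantegna recipe of Equations (11)–(13) can be sketched in Python. This is a minimal sketch with β = 1.5 as in the paper, not the authors' code, and the function name `levy_step` is ours:

```python
import math
import random

def levy_step(beta: float = 1.5) -> float:
    """One symmetric Lévy-stable step via the Mantegna algorithm.

    Implements Equations (11)-(13): Lévy(λ) = μ / |v|^(1/β),
    with μ ~ N(0, σ_μ²) and v ~ N(0, σ_v²), σ_v = 1.
    """
    # Equation (13): scale of the numerator Gaussian
    sigma_mu = (
        math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
        / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
    ) ** (1 / beta)
    mu = random.gauss(0, sigma_mu)  # numerator of Equation (11)
    v = random.gauss(0, 1)          # denominator of Equation (11)
    return mu / abs(v) ** (1 / beta)
```

Most draws are small, but the heavy tail occasionally yields a long jump in either direction, which is exactly the mix of local probing and escape moves described above.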
follows: This allows beetle to speed up local searches to approach- t t step = 0.1 · rand(1) · d · Lévy(λ) (16) ing the optimal solution. In addition, a few long-distance i,j i,j jump-type walks are beneficial to expand the search where Lévy(λ) is symmetrical, that is, both positive and range of beetle, which makes it easier escape from the negative numbers can be generated, and its size is a ran- local optimum. Therefore, in this paper, Lévy flights is dom number obeying the Lévy distribution. Lévy(λ) is used to generate direction and step size, and is combined therefore used to generate the required direction and with the adaptive strategy in Section 3.3. Accordingly, the step size. The step size represented by Equation (16) specific implementation is described in Section 3.3. herein can be understood as the jth dimension of d · b in the Equation (2), including both the size and the direction. After the distances of all dimensions are calculated, 3.3. Adaptive step size strategy the antennae on the left and right sides of the beetle are BAS relies heavily on the setting of four parameters, used for detection. Here we change Equation (2) to the namely the initial antennae length and step size and their following expression: corresponding attenuation coefficients. The performance t t t of the algorithm is closely related to the parameters used. x = x + step i i il (17) Therefore, for different optimization problems, it is nec- t t t x = x − step ir i i essary to adjust the parameters reasonably to get a better solution. This undoubtedly adds inconvenience to the use where x represents the position of the ith beetle at time t t t t of the algorithm. In response to this problem, the pro- t, step = [step , step , ... 
, step ] indicates the step size i i,1 i,2 i,d posed LABAS adopts the adaptive step size strategy in or antennae length of the ith beetle at time t, x and il terms of parameters, and directly updates the current x respectively indicate the position of the left and right ir position of the beetle with the variables with better fit- antennae of the ith beetle at time t. Calculate the fit- t t ness value after detection. ness values f (x ) and f (x ) at the two antennae, and il ir First, the distance between the beetle and the elite then update the position of the beetle using the following individual needs to be calculated. The formula for each formula: dimension is as follows: t+1 t t f = min(f (x ), f (x )) x il ir (18) t+1 t+1 t t t x = arg min f d = e − x (14) i x i i i,j i,j i,j t+1 t t where f denotes the smaller one of f (x ) and f (x ), x ir i il where x represents the jth dimensional coordinate of the t+1 i,j and x is the one with better fitness value in x and il ith beetle at time t, e is the jth dimension of the ith opti- i,j x . The above formula indicates that the position of an ir mal value currently obtained at time t,and d represents i,j antenna with a better fitness value after the detection the distance of the ith beetle from the ith elite individual is directly used to update the current position of the in the jth dimension at time t. beetle, which avoids the extra need to calculate the δ Then, the distance d is multiplied by a random scaling i,j in Equation (5) to update the individual. And the step factor: size here is adaptively adjusted according to the distance between the beetle and the elite individual. rand(1) · d (15) i,j 3.4. Generalized opposition-based learning where rand(1) represents a random number in [0, 1]. 
This allows the beetle to move to any position between the The opposition-based learning (OBL) has been proposed corresponding elite individual when updating, which is in Tizhoosh (2005), which is a new technology in the more conducive to the algorithm’s search for the promis- field of computational intelligence. At present, the OBL ing optimal solution. At the same time, since d is auto- strategy has been applied to a variety of optimization i,j matically calculated and adjusted according to the dis- algorithms (Ahandani & Alavi-Rad, 2015; Dong, Kang, tance between the beetle and the elite individual. There- & Zhang, 2017; Ewees, Elaziz, & Houssein, 2018;Park fore it is an adaptive strategy. In order to allow beetle to & Lee, 2016; Rahnamayan, Tizhoosh, & Salama, 2008; balance local exploitation and global exploration while Zhou,Hao,&Duval, 2017), and has achieved satis- searching, the Lévy flights strategy introduced in Section factory optimization results. The main idea of OBL is 3.2 is added to Equation (15). At the same time, a con- to simultaneously evaluate the candidate solution and stant 0.1 is multiplied. Hence the step size is expressed as its opposite solution, and choose a better solution 40 X. XU ET AL. from them. However, the OBL uses fixed interval bound- From the above definitions, we can find that the inter- aries, which may result in loss of information of the val boundaries in GOBL are dynamically updated, and the currently converged region. Therefore, the generalized scope of the search space is small. Note that by using the opposition-based learning (GOBL) has been proposed in GOBL, the diversity of the population could be increased Wang, Wu, Rahnamayan, Liu, and Ventresca (2011), which and the convergence of the algorithm may be speeded replaces fixed boundaries with dynamically updated up. That’s why we introduce the GOBL into the LABAS interval boundaries (El-Abd, 2012;Feng,Liu,Sun,Yue, algorithm. 
It should be noted that the LABAS algorithm & Liu, 2018). This strategy is also more conducive to the proposed in this paper uses GOBL in two places. First, convergence of the algorithm. Some definitions of OBL the GOBL is performed on randomly generated candi- and GOBL are given below. date solutions at the time of population initialization. This makes the initial individuals have better fitness, so it pro- Definition (opposite number): Let x be a real number vides a good start for the algorithm. Second, the LABAS defined in [a, b], then the opposite number of x is defined algorithm performs GOBL for elite individuals. This makes as follows: the algorithm have more opportunities to explore better solutions, and also increases the diversity of the popu- ox = a + b − x (19) lation. Thereby the convergence rate of the algorithm is Similarly, it can be extended to high dimensional space. improved. Definition (opposite point): Let x = (x , x , ... , x ) be a 1 2 3.5. Algorithm flow point in the d-dimensional space, where x , x , ... , x ∈ R, 1 2 and x ∈ [a , b ]. Then the opposite point ox = (ox , ox , i i i 1 2 In order to make the interpretation of the LABAS ... , ox ) of x is defined as follows: algorithm proposed in this paper more clear, the basic steps are represented by the pseudo code Algorithm 1. ox = a + b − x (20) i i i i First, the algorithm randomly initializes the beetle pop- After the opposite point is defined, the opposition-based ulation and performs GOBL (Line 1). At the same time, the learning can be defined as follows. number of iterations is initialized to 1 (Line 2). Then cal- culate the fitness values of the initial population (Lines Definition (opposition-based learning): Let x=(x , x , 1 2 3–5). During each iteration, it is necessary to update the ... , x ) be a point of the d-dimensional space (i.e. a candi- elite individuals (Lines 7–11) and perform GOBL on the date solution). 
The opposite point ox=(ox , ox , ..., ox ) 1 2 d elite individuals (Line 12). Then, sort the elite individuals is calculated based on the definition of the opposite in increasing order of fitness values (Line 13). When each point. Let f (·) be a fitness function and be used to eval- beetle updates its position, first calculate its distance from uate the appropriateness of the solution. If f (ox) is better the corresponding elite individual (Line 15). The step size than f (x), then the point x is replaced by its opposite is then generated based on the Lévy flights and scaling point ox, otherwise x remains unchanged. Therefore, this factor (Lines 16–17), and the positions of the two anten- point and its opposite point are evaluated simultane- nae on the left and right sides of the beetle are calculated ously, and only the better one will continue to be used using this step size (Line 18). Then update the position for optimization. of the beetle according to the antennae (Line 19). The Definition (generalized opposition-based learning): number of iterations needs to be updated after all beetles t t t t Let x = (x , x , ... , x )∀i ∈ 1, 2, ... , n be the ith candi- finish updating their positions (Line 21). Finally, the rele- i i,1 i,2 i,d date solution at time t, where d is the dimension of the vant information of the optimal solution is returned (Line variables and n is the number of individuals in the pop- 23). ulation. Then the jth dimension of its generalized oppo- t t t t site point ox = (ox , ox , ... , ox ) can be defined as 4. Experimental design and results analysis i i,1 i,2 i,d follows: 4.1. Experimental design t t t t ox = k · (a + b ) − x (21) i,j j j i,j 4.1.1. Experimental operation platform t t a = min (x ) (22) j i,j The simulation environment of this paper is: CPU Intel 1≤i≤n Core i7-4710MQ, 2.50 GHz, 8 GB RAM, Windows 7 OS, Mat- t t b = max (x ) (23) j i,j lab R2013a. 1≤i≤n t t where a and b represent the minimum and maximum 4.1.2. 
Algorithm 1: LABAS algorithm

Input:
  f(x): the objective function
  d: the dimension of the variables
  ub: upper bounds of the variables, ub = [ub_1, ub_2, ..., ub_d]
  lb: lower bounds of the variables, lb = [lb_1, lb_2, ..., lb_d]
  n: population size of the beetles
  n_e: the number of elite individuals
  T: the maximum number of iterations
Output: x_best, f_best

1:  Initialize a random population and perform GOBL to get X = [x_1, x_2, ..., x_n]
2:  Initialize iteration number t = 1
3:  for i ← 1 to n do
4:    f_{x_i} ← f(x_i)
5:  end for
6:  while (t ≤ T) or (stop criterion) do
7:    if t == 1 then
8:      E ← Sort the initial population of beetles in increasing order of fitness values
9:    else
10:     E ← Sort([X; E]) in increasing order of fitness values
11:   end if
12:   E ← Conduct GOBL according to (21)–(23)
13:   E ← Sort E in increasing order of fitness values
14:   for i ← 1 to n do
15:     d ← Calculate distance according to (14)
16:     Lévy ← Generate Lévy flights according to (11)
17:     step ← Generate the step size according to (16)
18:     x_ir^t, x_il^t ← Calculate the antennae's positions according to (17)
19:     x_i^{t+1} ← Update the position of the beetle according to (18)
20:   end for
21:   t ← t + 1
22: end while
23: return f_best = f_{e_1}, x_best = e_1

4.1.2. Benchmark functions

In this paper, 10 benchmark functions from CEC2005 are selected for the simulation experiments. The expressions, dimensions, search ranges, and theoretical optimal values of the benchmark functions are shown in Table 1. Among them, f1–f5 are continuous unimodal functions, which are used to test the optimization precision, convergence rate and local exploitation ability of an algorithm. f6–f10 are continuous multimodal functions, whose local extreme points increase exponentially with the function's dimension. They are therefore often used to test the convergence rate, local optimum avoidance ability and global optimization ability of an algorithm.
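As an illustration, two of the benchmark functions in Table 1, the unimodal sphere function f1 and the multimodal Ackley function f7, can be written directly from their expressions (a Python sketch; the function names simply follow the table's numbering):

```python
import math

def f1(x):
    """Sphere function: unimodal, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def f7(x):
    """Ackley function: multimodal, global minimum 0 at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```

Both attain their theoretical optimum 0 at x = (0, ..., 0), which is what the "f min" column of Table 1 records.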
4.1.3. Experimental parameter setting

In order to compare and analyse the performance of the proposed LABAS algorithm, six other representative algorithms are selected for comparative experiments: the cuckoo search (CS) algorithm, particle swarm optimization (PSO) algorithm, differential evolution (DE) algorithm, artificial bee colony (ABC) algorithm, moth-flame optimization (MFO) algorithm and beetle antennae search (BAS) algorithm. These algorithms can be divided into four categories. The CS algorithm also uses Lévy flights, so it is used to compare against an algorithm that employs the same strategy. PSO, DE and ABC are representative heuristic optimization algorithms with good optimization capabilities. The MFO algorithm is a relatively new optimization algorithm, proposed in 2015. BAS is the algorithm improved in this paper, and it is used to test whether the improved LABAS is superior to the original algorithm. It should be noted that since the LABAS algorithm adopts an adaptive step size strategy, there is no need to tune its parameters.
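Both CS and LABAS draw their random steps from a Lévy distribution with β = 1.5 (see Table 2). The paper generates these steps according to its Eq. (11), which is not reproduced here; a common way to draw such heavy-tailed steps, used for example in cuckoo search, is Mantegna's algorithm. The sketch below is illustrative and may differ from the paper's exact scaling:

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Lévy-flight step via Mantegna's algorithm (a common
    implementation; the paper's exact Eq. (11) may scale differently)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = random.gauss(0, sigma_u)  # numerator: N(0, sigma_u^2)
    v = random.gauss(0, 1)        # denominator: N(0, 1)
    return u / abs(v) ** (1 / beta)
```

Most draws are small, but the heavy tail occasionally produces very large jumps, which is exactly the property that helps an optimizer escape local optima.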
Table 1. Benchmark functions.

Function | Dim | Range | f_min
f1(x) = Σ_{i=1}^{n} x_i^2 | 30 | [−100, 100] | 0
f2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i| | 30 | [−10, 10] | 0
f3(x) = max{|x_i|, 1 ≤ i ≤ n} | 30 | [−100, 100] | 0
f4(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | 30 | [−30, 30] | 0
f5(x) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)^2 | 30 | [−100, 100] | 0
f6(x) = −Σ_{i=1}^{n} x_i sin(√|x_i|) | 30 | [−500, 500] | −418.9829n
f7(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e | 30 | [−32, 32] | 0
f8(x) = (1/4000) Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/√i) + 1 | 30 | [−600, 600] | 0
f9(x) = (π/n){10 sin^2(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)^2 [1 + 10 sin^2(πy_{i+1})] + (y_n − 1)^2} + Σ_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a | 30 | [−50, 50] | 0
f10(x) = 0.1{sin^2(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)^2 [1 + sin^2(3πx_{i+1})] + (x_n − 1)^2 [1 + sin^2(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4) | 30 | [−50, 50] | 0

The dimensions of the variables for all benchmark functions in this experiment are set to 30. In order to compare the results more fairly, the population size and the number of iterations are set to the same values for every algorithm. The specific parameter settings of each algorithm are shown in Table 2.

Table 2. Algorithm parameter table.

Algorithm | Parameter settings
CS | Population size n = 30, iterations T = 500, discovery probability p = 0.25, Lévy flights parameter β = 1.5
PSO | Population size n = 30, iterations T = 500, inertia weight w = 0.7, learning factors c1 = c2 = 1
DE | Population size n = 30, iterations T = 500, scaling factor F = 0.6, crossover probability CR = 0.8
MFO | Population size n = 30, iterations T = 500, logarithmic spiral shape constant b = 1, path coefficient t is a random number in [r, 1], where r decreases linearly from −1 to −2 as the number of iterations increases
ABC | Population size n = 30, iterations T = 500, predetermined trial limit limit = 540
BAS | Iterations T = 500; the initial values of d, δ and the two decay rates r need to be adjusted according to the variables' ranges of the specific function
LABAS | Population size n = 30, iterations T = 500, Lévy flights parameter β = 1.5

4.2. Experimental results and analysis

In order to prevent errors caused by the contingency of a single run, the 7 algorithms are run independently 30 times on each benchmark function, and the minimum (Min), maximum (Max), mean (Mean) and standard deviation (Std) are recorded. These performance indicators are used to evaluate the optimization performance of the algorithms. The simulation results are shown in Tables 3 and 4. It should be noted that the bold values in the original tables indicate the best result among the algorithms under the corresponding evaluation index. The Mean reflects the average convergence accuracy that an algorithm can achieve for a given number of iterations, and the Std reflects its stability and robustness.

Table 3. Comparison of optimization results of seven algorithms for unimodal benchmark functions.

F | Index | CS | ABC | DE | MFO | PSO | BAS | LABAS
f1 | Min | 2.74E+00 | 3.46E+01 | 6.33E−01 | 1.70E−01 | 1.50E+01 | 4.34E+01 | 8.12E−22
f1 | Max | 1.44E+01 | 1.99E+02 | 4.91E+00 | 3.00E+04 | 1.33E+03 | 2.46E+03 | 7.31E−02
f1 | Mean | 8.36E+00 | 9.48E+01 | 1.77E+00 | 3.67E+03 | 3.77E+02 | 5.52E+02 | 8.36E−03
f1 | Std | 3.30E+00 | 4.82E+01 | 9.36E−01 | 7.52E+03 | 2.96E+02 | 5.85E+02 | 1.74E−02
f2 | Min | 6.02E+00 | 3.98E−01 | 4.55E−01 | 2.31E−02 | 9.38E+00 | 3.56E+01 | 1.35E−15
f2 | Max | 2.13E+01 | 4.04E+00 | 1.82E+00 | 1.00E+02 | 5.86E+01 | 3.17E+02 | 3.89E−02
f2 | Mean | 1.21E+01 | 1.25E+00 | 1.10E+00 | 4.17E+01 | 3.07E+01 | 9.31E+01 | 8.00E−03
f2 | Std | 4.23E+00 | 7.66E−01 | 3.45E−01 | 2.53E+01 | 1.32E+01 | 4.89E+01 | 1.17E−02
f3 | Min | 6.13E+00 | 3.63E+01 | 1.06E+01 | 4.89E+01 | 1.61E+01 | 4.53E+01 | 4.67E−01
f3 | Max | 1.79E+01 | 3.27E+02 | 3.35E+01 | 8.77E+01 | 3.43E+01 | 8.23E+01 | 1.23E+01
f3 | Mean | 1.04E+01 | 1.23E+02 | 2.15E+01 | 7.04E+01 | 2.61E+01 | 7.06E+01 | 5.46E+00
f3 | Std | 2.79E+00 | 6.65E+01 | 6.68E+00 | 9.85E+00 | 5.21E+00 | 7.75E+00 | 2.53E+00
f4 | Min | 2.41E+02 | 2.91E+00 | 1.45E+02 | 1.84E+02 | 2.43E+03 | 2.59E+03 | 2.83E+01
f4 | Max | 1.30E+03 | 3.15E+01 | 1.35E+03 | 9.19E+04 | 1.08E+05 | 1.60E+05 | 2.90E+01
f4 | Mean | 5.92E+02 | 1.23E+01 | 4.61E+02 | 8.20E+03 | 2.38E+04 | 3.43E+04 | 2.88E+01
f4 | Std | 2.10E+02 | 7.96E+00 | 2.97E+02 | 2.27E+04 | 2.37E+04 | 3.64E+04 | 1.47E−01
f5 | Min | 2.86E+00 | 3.06E+01 | 8.72E−01 | 1.68E−01 | 4.11E+01 | 3.99E+02 | 1.99E−01
f5 | Max | 1.88E+01 | 3.22E+02 | 3.97E+00 | 1.01E+04 | 1.04E+03 | 3.61E+03 | 5.94E−01
f5 | Mean | 8.86E+00 | 1.26E+02 | 1.98E+00 | 2.66E+03 | 3.24E+02 | 1.92E+03 | 3.61E−01
f5 | Std | 3.86E+00 | 7.39E+01 | 9.33E−01 | 4.40E+03 | 2.42E+02 | 8.78E+02 | 1.07E−01

First, consider the results for the unimodal functions. Taking function f1 in Table 3 as an example, the precision of the best value found by the LABAS algorithm reaches the order of 1E−22. Moreover, its accuracy in the worst case is better than the best accuracy that any of the other algorithms achieves. Its average solution accuracy is three orders of magnitude higher than that of the second-best DE algorithm, and five orders of magnitude higher than that of the original BAS algorithm. At the same time, its standard deviation is very small. The LABAS algorithm therefore has better robustness and stability.

It can be seen from the results in Table 3 that the LABAS algorithm performs best on most of the unimodal benchmark functions. Among them, the accuracy of the best solution found by LABAS on f1 and f2 is obviously higher than that of the other algorithms, and its average solution accuracy is also the best, which reflects the excellent optimization ability of the algorithm. When optimizing f3 and f5, the LABAS algorithm also achieves better average solution accuracy and standard deviation. On f4, the LABAS algorithm fails to achieve the best accuracy, but its average solution accuracy is second only to that of the ABC algorithm, which obtains the best solution; the LABAS algorithm is therefore still very competitive. On f5, the best solution accuracy of the MFO algorithm is higher than that of the LABAS algorithm, but its other indicators are worse than those of LABAS. Comparing LABAS with BAS, the accuracy and stability of LABAS on every function are better than those of BAS, which shows the effectiveness of the improvements. The above comparison shows that the solution accuracy of the LABAS algorithm is superior to that of the other algorithms in most cases, which also indicates that the algorithm has good local exploitation ability. In addition, the LABAS algorithm has good robustness.

From the comparison results in Table 4, the LABAS algorithm also performs well on the multimodal benchmark functions, which have an exponential number of local optima. Among them, LABAS obtains the global optimal solution when optimizing function f8, and its average solution accuracy on that function is also the best.

Table 4. Comparison of optimization results of seven algorithms for multimodal benchmark functions.
F | Index | CS | ABC | DE | MFO | PSO | BAS | LABAS
f6 | Min | −8.65E+03 | 8.14E+02 | −7.39E+03 | −1.06E+04 | −8.26E+03 | −6.75E+03 | −7.67E+03
f6 | Max | −7.53E+03 | 1.23E+04 | −5.21E+03 | −7.17E+03 | −5.89E+03 | −3.00E+03 | −4.96E+03
f6 | Mean | −8.09E+03 | 3.68E+03 | −5.87E+03 | −8.64E+03 | −7.30E+03 | −5.40E+03 | −6.33E+03
f6 | Std | 2.82E+02 | 2.68E+03 | 5.09E+02 | 9.89E+02 | 5.18E+02 | 8.82E+02 | 6.90E+02
f7 | Min | 3.58E+00 | 4.07E+00 | 2.80E−01 | 2.04E+00 | 7.55E+00 | 1.19E+01 | 8.88E−16
f7 | Max | 1.26E+01 | 2.59E+01 | 2.00E+01 | 2.00E+01 | 1.40E+01 | 1.96E+01 | 1.24E−01
f7 | Mean | 6.36E+00 | 1.10E+01 | 6.64E+00 | 1.68E+01 | 1.03E+01 | 1.71E+01 | 2.15E−02
f7 | Std | 1.76E+00 | 6.30E+00 | 8.18E+00 | 5.11E+00 | 1.47E+00 | 2.10E+00 | 3.09E−02
f8 | Min | 1.04E+00 | 9.02E+02 | 8.55E−01 | 2.22E−01 | 1.24E+00 | 2.32E+01 | 0.00E+00
f8 | Max | 1.15E+00 | 1.28E+04 | 1.05E+00 | 9.11E+01 | 9.25E+01 | 1.04E+02 | 2.59E−01
f8 | Mean | 1.07E+00 | 4.44E+03 | 9.81E−01 | 2.18E+01 | 9.52E+00 | 5.16E+01 | 3.15E−02
f8 | Std | 2.84E−02 | 2.68E+03 | 4.95E−02 | 3.81E+01 | 2.22E+01 | 1.99E+01 | 5.96E−02
f9 | Min | 1.78E+00 | 9.29E+00 | 1.25E−01 | 1.48E+00 | 4.75E+00 | 2.30E+01 | 7.56E−03
f9 | Max | 6.46E+00 | 7.66E+01 | 4.43E+00 | 2.56E+08 | 5.66E+01 | 1.02E+05 | 9.33E−02
f9 | Mean | 3.50E+00 | 2.39E+01 | 1.58E+00 | 8.53E+06 | 2.13E+01 | 4.55E+03 | 2.37E−02
f9 | Std | 1.10E+00 | 1.41E+01 | 1.16E+00 | 4.60E+07 | 1.10E+01 | 1.82E+04 | 1.92E−02
f10 | Min | 1.99E+00 | 7.78E+00 | 4.45E−01 | 6.64E+00 | 3.08E+01 | 1.39E+03 | 3.01E−01
f10 | Max | 1.42E+01 | 8.42E+01 | 2.64E+01 | 4.10E+08 | 1.99E+04 | 4.84E+05 | 1.40E+00
f10 | Mean | 7.29E+00 | 2.76E+01 | 4.22E+00 | 1.37E+07 | 1.50E+03 | 4.41E+04 | 7.41E−01
f10 | Std | 2.83E+00 | 1.94E+01 | 5.03E+00 | 7.36E+07 | 3.99E+03 | 8.82E+04 | 2.44E−01

For function f6, the average solution accuracy of LABAS is not the best, but it is the fourth best. In addition, for the functions f7, f9 and f10, the values achieved by the LABAS algorithm under every index are the best, which again shows its superior optimization ability. Comparing LABAS with BAS, LABAS also significantly outperforms BAS on the multimodal functions. The above comparison shows that the LABAS algorithm also achieves good solution accuracy on multimodal functions with many local minima, which indicates that the algorithm has a good ability to explore unknown regions and a strong ability to avoid local optima.

[Figures 3–12. Average convergence curve comparison charts of the seven algorithms for f1–f10.]

In order to reflect and compare the optimization precision and convergence rate of each algorithm more intuitively, the average convergence curves of the seven algorithms on each benchmark function are plotted, as shown in Figures 3–12. It should be noted that the average fitness values of the 7 algorithms are plotted as base-10 logarithms, except for f6. It can be seen from Figures 3 to 12 that the convergence rate of the LABAS algorithm is the fastest on each benchmark function, except for f4 and f6. In addition, the convergence curves show that the other algorithms easily fall into local optima; once this happens, an algorithm cannot find the theoretical optimal value even by performing more iterations. The LABAS algorithm, however, adopts Lévy flights, the adaptive step size strategy and GOBL, so it can effectively avoid converging to a local optimum.

5. Conclusion

The BAS algorithm has several shortcomings when solving complex optimization problems, such as low convergence precision, a tendency to fall into local optima, and excessive dependence on parameter settings. This paper therefore proposes the beetle antennae search algorithm based on Lévy flights and adaptive strategy (LABAS).
Firstly, the population used by the algorithm, together with the strategy of updating the population using elite individual information, enhances its optimization ability, stability and exploitation ability. Secondly, the Lévy flights and the scaling factor improve the algorithm's ability to explore the region of the global optimal solution, helping it avoid local optima and converge to the global optimum more quickly. Thirdly, the adaptive step size strategy avoids the difficulty of parameter setting, since the step size is adjusted automatically according to the type and size of the problem. Fourthly, GOBL enhances the diversity of the population and also gives the algorithm a better ability to find the optimal solution. These improvements balance the global exploration and local exploitation of the algorithm to some extent. Simulation experiments and comparative analysis show that the LABAS algorithm is superior to the BAS algorithm and the other comparison algorithms in terms of accuracy, convergence rate, stability, robustness and local optimum avoidance. In our future research work, the LABAS algorithm will be applied to optimization problems in textiles, carbon fibre production and other fields.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

This work was supported in part by the National Natural Science Foundation of China [grant numbers 61873059 and 61922024], the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning of China, and the Natural Science Foundation of Shanghai [grant number 18ZR1401500].

References

Abedinia, O., Amjady, N., & Ghasemi, A. (2016). A new metaheuristic algorithm based on shark smell optimization. Complexity, 21(5), 97–116.
Ahandani, M. A., & Alavi-Rad, H. (2015). Opposition-based learning in shuffled frog leaping: An application for parameter identification. Information Sciences, 291, 19–42.
Ali, M. Z., Awad, N. H., Reynolds, R. G., & Suganthan, P. N. (2018). A balanced fuzzy cultural algorithm with a modified Levy flight search for real parameter optimization. Information Sciences, 447, 12–35.
Chen, J., Wang, C., & Wang, S. (2018). Research on evaluation method of spatial straightness for variable step beetle antennae search algorithm. Tool Engineering, 8, 136–138.
Dong, W., Kang, L., & Zhang, W. (2017). Opposition-based particle swarm optimization with adaptive mutation strategy. Soft Computing, 21(17), 5081–5090.
Edwards, A. M., Phillips, R. A., Watkins, N. W., Freeman, M. P., Murphy, E. J., Afanasyev, V., ... Viswanathan, G. M. (2007). Revisiting Levy flight search patterns of wandering albatrosses, bumblebees and deer. Nature, 449(7165), 1044–1048.
El-Abd, M. (2012). Generalized opposition-based artificial bee colony algorithm. In 2012 IEEE congress on evolutionary computation, Brisbane, Australia (pp. 1–4).
Ewees, A. A., Elaziz, M. A., & Houssein, E. H. (2018). Improved grasshopper optimization algorithm using opposition-based learning. Expert Systems with Applications, 112, 156–172.
Faris, H., Aljarah, I., Al-Betar, M. A., & Mirjalili, S. (2018). Grey wolf optimizer: A review of recent variants and applications. Neural Computing and Applications, 30(2), 413–435.
Feng, X., Liu, A., Sun, W., Yue, X., & Liu, B. (2018). A dynamic generalized opposition-based learning fruit fly algorithm for function optimization. In 2018 IEEE congress on evolutionary computation (CEC), Rio de Janeiro, Brazil (pp. 1–7).
Heidari, A. A., & Pahlavani, P. (2017). An efficient modified grey wolf optimizer with Lévy flight for optimization tasks. Applied Soft Computing, 60, 115–134.
Jensi, R., & Jiji, G. W. (2016). An enhanced particle swarm optimization with levy flight for global optimization. Applied Soft Computing, 43, 248–261.
Jiang, X., & Li, S. (2017). Beetle antennae search without parameter tuning (BAS-WPT) for multi-objective optimization. arXiv:1711.02395v1 [cs.NE].
Jiang, X., & Li, S. (2018). BAS: Beetle antennae search algorithm for optimization problems. International Journal of Robotics and Control, 1(1), 1–5.
Li, X., Zhang, J., & Yin, M. (2014). Animal migration optimization: An optimization algorithm inspired by animal migration behavior. Neural Computing and Applications, 24(7–8), 1867–1877.
Ling, Y., Zhou, Y., & Luo, Q. (2017). Lévy flight trajectory-based whale optimization algorithm for global optimization. IEEE Access, 5, 6168–6186.
Marell, A., Ball, J. P., & Hofgaard, A. (2002). Foraging and movement paths of female reindeer: Insights from fractal analysis, correlated random walks, and Levy flights. Canadian Journal of Zoology, 80(5), 854–865.
Meng, X., Liu, Y., Gao, X., & Zhang, H. (2014). A new bio-inspired algorithm: Chicken swarm optimization. In Fifth international conference on swarm intelligence, Hefei, China (pp. 86–94).
Mirjalili, S. (2015). Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowledge-Based Systems, 89, 228–249.
Mirjalili, S., & Lewis, A. (2016). The whale optimization algorithm. Advances in Engineering Software, 95, 51–67.
Park, S., & Lee, J. (2016). Stochastic opposition-based learning using a beta distribution in differential evolution. IEEE Transactions on Cybernetics, 46(10), 2184–2194.
Passino, K. M. (2002). Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Systems Magazine, 22(3), 52–67.
Rahnamayan, S., Tizhoosh, H. R., & Salama, M. M. A. (2008). Opposition-based differential evolution. IEEE Transactions on Evolutionary Computation, 12(1), 64–79.
Reynolds, A. M., & Frye, M. A. (2007). Free-flight odor tracking in drosophila is consistent with an optimal intermittent scale-free search. PLoS One, 2(4), 1–9.
Reynolds, A. M., Smith, A. D., Reynolds, D. R., Carreck, N. L., & Osborne, J. L. (2007). Honeybees perform optimal scale-free searching flights when attempting to locate a food source. Journal of Experimental Biology, 210(21), 3763–3770.
Salcedo-Sanz, S., Pastor-Sanchez, A., Gallo-Marazuela, D., & Portilla-Figueras, A. (2013). A novel coral reefs optimization algorithm for multi-objective problems. In 14th international conference on intelligent data engineering and automated learning (IDEAL), Hefei, China (pp. 326–333).
Savsani, V., & Tawhid, M. A. (2017). Non-dominated sorting moth flame optimization (NS-MFO) for multi-objective problems. Engineering Applications of Artificial Intelligence, 63, 20–32.
Song, D. (2018). Application of particle swarm optimization based on beetle antennae search strategy in wireless sensor network coverage. In International conference on network, communication, computer engineering (NCCE 2018), Chongqing, China (pp. 1051–1054).
Tan, Y., & Zhu, Y. (2010). Fireworks algorithm for optimization. In 1st international conference on swarm intelligence, Beijing, China (pp. 355–364).
Tizhoosh, H. R. (2005). Opposition-based learning: A new scheme for machine intelligence. In International conference on computational intelligence for modelling, control and automation / international conference on intelligent agents, web technologies and international commerce, Vienna, Austria (pp. 695–701).
Vikas, & Nanda, S. J. (2016). Multi-objective moth flame optimization. In 2016 international conference on advances in computing, communications and informatics (ICACCI), Jaipur, India (pp. 2470–2476).
Viswanathan, G. M., Afanasyev, V., Buldyrev, S. V., Murphy, E. J., Prince, P. A., & Stanley, H. E. (1996). Levy flight search patterns of wandering albatrosses. Nature, 381(6581), 413–415.
Wang, H., Wu, Z., Rahnamayan, S., Liu, Y., & Ventresca, M. (2011). Enhancing particle swarm optimization using generalized opposition-based learning. Information Sciences, 181(20), 4699–4714.
Wang, T., & Liu, Q. (2018). The assessment of storm surge disaster loss based on BAS-BP model. Marine Environmental Science, 37(3), 457–463.
Wu, Q., Ma, Z., Xu, G., Li, S., & Chen, D. (2019). A novel neural network classifier using beetle antennae search algorithm for pattern classification. IEEE Access, 7, 64686–64696.
Yang, X. (2008). Nature-inspired metaheuristic algorithms. Frome: Luniver Press.
Yang, X., & Deb, S. (2009). Cuckoo search via Lévy flights. In World congress on nature and biologically inspired computing, Coimbatore, India (pp. 210–214).
Zeng, N., Zhang, H., Liu, W., Liang, J., & Alsaadi, F. E. (2017). A switching delayed PSO optimized extreme learning machine for short-term load forecasting. Neurocomputing, 240, 175–182.
Zeng, N., Qiu, H., Wang, Z., Liu, W., Zhang, H., & Li, Y. (2018). A new switching-delayed-PSO-based optimized SVM algorithm for diagnosis of Alzheimer's disease. Neurocomputing, 320, 195–202.
Zeng, N., Wang, Z., Zhang, H., Kim, K., Li, Y., & Liu, X. (2019). An improved particle filter with a novel hybrid proposal distribution for quantitative analysis of gold immunochromatographic strips. IEEE Transactions on Nanotechnology, 18, 819–829.
Zheng, Y. (2015). Water wave optimization: A new nature-inspired metaheuristic. Computers and Operations Research, 55, 1–11.
Zhou, Y., Hao, J., & Duval, B. (2017). Opposition-based memetic search for the maximum diversity problem. IEEE Transactions on Evolutionary Computation, 21(5), 731–745.
Zhu, Z., Zhang, Z., Man, W., Tong, X., Qiu, J., & Li, F. (2018). A new beetle antennae search algorithm for multi-objective energy management in microgrid. In 13th IEEE conference on industrial electronics and applications (ICIEA), Wuhan, China (pp. 1599–1603).
Systems Science & Control Engineering – Taylor & Francis
Published: Jan 1, 2020
Keywords: Beetle antennae search algorithm; elite individuals; Lévy flights; adaptive strategy; generalized opposition-based learning