
Improved Salp Swarm Algorithm for Solving Single-objective Continuous Optimization Problems


Abed-alguni, B., Paul, D., Hammad, R. “Improved Salp Swarm Algorithm for Solving Single-objective Continuous Optimization Problems”, Applied Intelligence, 2022.

The Salp Swarm Algorithm (SSA) is an effective single-objective optimization algorithm inspired by the navigating and foraging behaviours of salps in their natural habitats. Although SSA has been successfully tailored and applied to various types of optimization problems, it often suffers from premature convergence and typically does not perform well on high-dimensional problems. This paper introduces an Improved SSA (ISSA) algorithm to enhance the performance of SSA in solving single-objective continuous optimization problems. ISSA has four characteristics. First, it employs Gaussian perturbation to improve the diversity of the initial population. Second, it uses highly disruptive polynomial mutation (HDPM) to update the leader salp in the salp chain. Third, it uses the Laplace crossover operator to improve its exploration ability. Fourth, it uses a new opposition-based learning method called Mixed Opposition-based Learning (MOBL) to improve its convergence rate and exploration ability.

A set of 14 standard benchmark functions was used to evaluate the performance of ISSA and compare it to three variations of SSA: SSA itself, Hybrid SSA with Particle Swarm Optimization (HSSAPSO) Singh et al. (2020), and Enhanced SSA (ESSA) Zhang et al. (2020). The overall experimental and statistical results indicate that ISSA is a better optimization algorithm than the other SSA variations. Further, the single-objective IEEE CEC 2014 (IEEE Congress on Evolutionary Computation 2014) functions were used to evaluate and compare the performance of ISSA to 18 well-known and state-of-the-art optimization algorithms: Exploratory Cuckoo Search (ECS) Abed-alguni (2021), Grey Wolf Optimizer (GWO) Mirjalili et al. (Advances in Engineering Software, 69, 46–61, 2014), Distributed Grey Wolf Optimizer (DGWO) Abed-alguni and Barhoush (2018), Cuckoo Search (CS) Yang and Deb (2009), success-history based adaptive differential evolution with linear population size reduction (L-SHADE) Tanabe and Fukunaga (2014), Memory-based Hybrid Dragonfly Algorithm (MHDA) KS and Murugan (Expert Syst Appl, 83, 63–78, 2017), Fireworks Algorithm with Differential Mutation (FWA-DM) Yu et al. (2014), Differential Evolution-based Salp Swarm Algorithm (DESSA) Dhabal et al. (Soft Comput, 25(3), 1941–1961, 2021), LSHADE with Fitness and Diversity Ranking-Based Mutation Operator (FD-LSHADE) Cheng et al. (Swarm and Evolutionary Computation, 61, 100816, 2021), Distance-based SHADE (Db-SHADE) Viktorin et al. (Swarm and Evolutionary Computation, 50, 100462, 2019) and Zeng et al. (Knowl-Based Syst, 226, 107150, 2021), Mean–Variance Mapping Optimization (MVMO) Iacca et al. (Expert Syst Appl, 165, 113902, 2021), Time-varying strategy-based Differential Evolution (TVDE) Sun et al. (Soft Comput, 24(4), 2727–2747, 2020), Butterfly Optimization Algorithm with an adaptive gbest-guided search strategy and Pinhole-Imaging-based Learning (PIL-BOA) Long et al. (Appl Soft Comput, 103, 107146, 2021), Memory Guided Sine Cosine Algorithm (MG-SCA) Gupta et al. (Eng Appl Artif Intell, 93, 103718, 2020), Lévy flight Jaya Algorithm (LJA) Iacca et al. (2021), Sine Cosine Algorithm (SCA) Dhabal et al. (2021), Covariance Matrix Adaptation Evolution Strategy (CMA-ES) Hansen et al. (Evolutionary Computation, 11(1), 1–18, 2003), and Coyote Optimization Algorithm (COA) Pierezan and Coelho (2018). The results indicate that ISSA performs better than the tested optimization algorithms.
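To make the salp-chain search that ISSA builds on concrete, the Python sketch below shows the standard SSA leader/follower position updates (following the original SSA formulation by Mirjalili et al., 2017) together with a Gaussian-perturbed initial population, which corresponds to the first ISSA component described above. The objective function (sphere), the perturbation scale, and the function name issa_sketch are illustrative assumptions rather than details from the paper, and the HDPM, Laplace crossover, and MOBL steps of ISSA are only marked as placeholder comments.

import numpy as np

def sphere(x):
    """Example objective: the Sphere benchmark function."""
    return np.sum(x ** 2)

def issa_sketch(obj, dim=30, pop_size=30, iters=500, lb=-100.0, ub=100.0, seed=0):
    """Illustrative salp-chain search loop (standard SSA updates).

    The Gaussian perturbation of the initial population mirrors the first
    ISSA component in the abstract; HDPM, Laplace crossover and MOBL are
    indicated only as placeholders.
    """
    rng = np.random.default_rng(seed)

    # Random initial population, then a Gaussian perturbation to spread it out
    # (the scale 0.1 * (ub - lb) is chosen purely for illustration).
    salps = rng.uniform(lb, ub, size=(pop_size, dim))
    salps += rng.normal(0.0, 0.1 * (ub - lb), size=salps.shape)
    salps = np.clip(salps, lb, ub)

    fitness = np.array([obj(s) for s in salps])
    best_idx = np.argmin(fitness)
    food, food_fit = salps[best_idx].copy(), fitness[best_idx]

    for t in range(1, iters + 1):
        # Coefficient balancing exploration and exploitation (SSA paper).
        c1 = 2.0 * np.exp(-((4.0 * t / iters) ** 2))

        for i in range(pop_size):
            if i == 0:
                # Leader salp moves around the food source (best solution so far).
                c2 = rng.random(dim)
                c3 = rng.random(dim)
                step = c1 * ((ub - lb) * c2 + lb)
                salps[i] = np.where(c3 >= 0.5, food + step, food - step)
                # ISSA additionally applies highly disruptive polynomial
                # mutation (HDPM) to the leader here (omitted).
            else:
                # Follower salps: average of the current and preceding salp.
                salps[i] = 0.5 * (salps[i] + salps[i - 1])

        salps = np.clip(salps, lb, ub)
        # ISSA also applies the Laplace crossover and mixed opposition-based
        # learning (MOBL) to the population at this point (omitted).

        fitness = np.array([obj(s) for s in salps])
        best_idx = np.argmin(fitness)
        if fitness[best_idx] < food_fit:
            food, food_fit = salps[best_idx].copy(), fitness[best_idx]

    return food, food_fit

if __name__ == "__main__":
    best, best_fit = issa_sketch(sphere)
    print("best fitness:", best_fit)

Running the sketch on the sphere function simply demonstrates how the leader tracks the best solution while followers form a chain behind it; the ISSA operators named in the abstract would be inserted at the two placeholder points to improve diversity and convergence.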
