A hybrid gray wolf optimization algorithm based on teaching-learning-based optimization
Author:
Affiliation:

Engineering Research Center of Internet of Things Technology Applications, Ministry of Education, Jiangnan University

Author biography:

Corresponding author:

CLC number:

TP18

Fund Project:

National Natural Science Foundation of China (61573167, 61572237)



Abstract:

To address the gray wolf optimizer's (GWO) low convergence accuracy and its tendency to become trapped in local optima, this paper proposes a hybrid gray wolf optimization algorithm based on teaching-learning-based optimization (HGWO). First, good-point-set theory is used to initialize the population, improving its ergodicity. Second, a nonlinear control-parameter strategy is proposed: it strengthens global exploration in the early iterations to keep the algorithm from falling into local optima, and strengthens local exploitation in the later iterations to improve convergence accuracy. Finally, ideas from teaching-learning-based optimization (TLBO) and particle swarm optimization (PSO) are combined to modify the original position-update formula and refine the algorithm's search behavior, thereby enhancing its convergence performance. To verify the effectiveness of HGWO, it is compared with the classical GWO, other swarm intelligence optimization algorithms, and other improved GWO variants on nine well-known benchmark functions. The results show that HGWO outperforms the classical GWO and the other swarm intelligence algorithms, and holds an advantage among the improved GWO variants.
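To make the mechanism concrete, the following is a minimal sketch of the standard GWO position update driven by a nonlinear control parameter. The paper's exact nonlinear schedule, good-point-set initializer, and TLBO/PSO-modified update formula are not reproduced here; a cosine decay for the parameter `a`, uniform initialization, and the classical alpha/beta/delta update are used purely for illustration.

```python
import numpy as np

def sphere(x):
    # Benchmark objective: f(x) = sum(x_i^2), global minimum 0 at the origin.
    return float(np.sum(x ** 2))

def gwo(obj, dim=10, n_wolves=20, max_iter=200, lb=-10.0, ub=10.0, seed=0):
    """Classical GWO with an illustrative nonlinear control parameter.

    Assumptions (not from the paper): cosine decay of `a` from 2 to 0,
    uniform random initialization instead of a good-point set, and the
    unmodified alpha/beta/delta position-update rule.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(max_iter):
        fitness = np.array([obj(x) for x in X])
        order = np.argsort(fitness)
        # The three best wolves guide the rest of the pack.
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        # Nonlinear schedule: a stays large early (global exploration)
        # and shrinks quickly late (local exploitation).
        a = 2.0 * np.cos(np.pi * t / (2.0 * max_iter))
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a        # coefficient vector A
                C = 2.0 * r2                # coefficient vector C
                D = np.abs(C * leader - X[i])
                new_pos += leader - A * D
            X[i] = np.clip(new_pos / 3.0, lb, ub)  # average of the three pulls
    fitness = np.array([obj(x) for x in X])
    best = int(np.argmin(fitness))
    return X[best], float(fitness[best])

best_x, best_f = gwo(sphere)
```

On the sphere function this sketch converges close to the origin; the paper's contribution is to replace the initialization, the schedule of `a`, and the inner update with their good-point-set, nonlinear-parameter, and TLBO/PSO-based counterparts.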

History
  • Received: 2021-06-02
  • Revised: 2022-02-07
  • Accepted: 2021-08-18
  • Published online: 2021-09-01
  • Publication date: