Reinforcement Learning-Based Online Resource Allocation for Edge Computing Network
Author:
Affiliation:

Zhejiang University of Technology

Author Biography:

Corresponding Author:

CLC Number:

TP393.2

Fund Project:

The National Natural Science Foundation of China (General Program, Key Program, Major Research Plan)

    Abstract:

    To meet the real-time requirements of edge computing applications, software-defined networking (SDN) and network function virtualization (NFV) are introduced to restructure the edge computing network. On this basis, we consider the online allocation of computing and communication resources with the goal of maximizing the long-term average success rate of processing real-time tasks. By formulating the problem as a Markov decision process, an online resource allocation method based on Q-learning is proposed. However, Q-learning requires large amounts of memory when the state-action space is large and suffers from the curse of dimensionality, so a DQN-based online resource allocation method is further proposed. Simulation results show that both proposed algorithms converge quickly, and that the DQN algorithm achieves a higher success rate of processing real-time tasks than Q-learning and the other baseline methods.
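
    To make the Q-learning step of the abstract concrete, the sketch below shows a minimal tabular Q-learning loop for a toy resource-allocation MDP. The environment (`EdgeEnv`), its state encoding (server queue lengths), its reward (1 when an arriving real-time task can be served before its deadline), and all hyperparameters are illustrative assumptions, not the paper's actual system model; the paper's DQN variant would replace the Q-table below with a neural-network approximator when the state-action space grows large.

```python
import random
from collections import defaultdict


class EdgeEnv:
    """Toy environment: at each step a real-time task arrives and the agent
    assigns it to one of n_servers edge servers; the reward is 1 if the chosen
    server's queue is short enough to meet the deadline, otherwise 0."""

    def __init__(self, n_servers=3, max_queue=4):
        self.n_servers = n_servers
        self.max_queue = max_queue
        self.queues = [0] * n_servers

    def reset(self):
        self.queues = [0] * self.n_servers
        return tuple(self.queues)

    def step(self, action):
        # The task meets its deadline only if the selected queue is not full.
        reward = 1.0 if self.queues[action] < self.max_queue else 0.0
        if reward:
            self.queues[action] += 1
        # Each server finishes at most one queued task per time slot.
        self.queues = [max(q - 1, 0) for q in self.queues]
        return tuple(self.queues), reward


def q_learning(env, episodes=2000, steps=50, alpha=0.1, gamma=0.9, eps=0.1):
    q = defaultdict(float)  # Q[(state, action)] -> estimated long-term return
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            # Epsilon-greedy action selection over the discrete action set.
            if random.random() < eps:
                a = random.randrange(env.n_servers)
            else:
                a = max(range(env.n_servers), key=lambda x: q[(s, x)])
            s2, r = env.step(a)
            # Standard Q-learning temporal-difference update.
            best_next = max(q[(s2, x)] for x in range(env.n_servers))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q


if __name__ == "__main__":
    learned_q = q_learning(EdgeEnv())
    print("learned", len(learned_q), "state-action values")
```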

History
  • Received: 2021-04-05
  • Revised: 2022-05-04
  • Accepted: 2021-07-19
  • Published online: 2021-08-01
  • Publication date: