Environment-Aware Reinforcement Learning based Energy Consumption Prediction Model for Electric Vehicles
DOI:
CSTR:
Author:
Affiliation:

College of Transportation and Electrical Engineering, Hunan University of Technology

Author biography:

Corresponding author:

CLC number:

TM 715

Fund project:



    Abstract:

    To address the limitations of existing real-time energy consumption prediction models for electric vehicles, particularly in environmental perception and dynamic calibration, this paper proposes an energy consumption prediction model that integrates environmental perception with reinforcement learning. First, to enhance the model's ability to perceive complex driving conditions, a road condition perception algorithm is designed based on contrastive learning and coupled reinforcement learning, incorporating a multi-scale image feature fusion mechanism; this enables effective extraction of environmental features strongly correlated with vehicle energy efficiency and improves perception accuracy. Second, a Markov real-time energy efficiency estimation model is constructed and mapped into a reinforcement learning framework; through data-driven updates of the Q-function and cumulative reward, the prediction model achieves active evolution and adaptive optimization. Meanwhile, a scene-aware prioritized experience replay mechanism emphasizes key conditions such as sudden slope changes and aggressive acceleration and deceleration, further improving learning efficiency and generalization in complex environments. Finally, a scene-aware prioritized sampling mechanism improves training sample quality, boosting the convergence and training efficiency of the reinforcement learning process. Experimental results show that the proposed method is robust and stable across diverse operating conditions and vehicle types, achieving an MAE below 0.2%, an RMSE below 0.3%, and an R² above 99.5%, significantly outperforming existing Transformer, Informer, Mamba, and LSTM models.

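The abstract describes mapping the Markov energy-efficiency estimation model into a reinforcement learning framework with data-driven updates of the Q-function. A minimal sketch of such an update is shown below; the state/action discretization, reward definition, and hyperparameters are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

# Hypothetical discretization: the paper's actual state/action spaces are not given.
N_STATES = 10   # e.g. binned (speed, slope) driving conditions
N_ACTIONS = 4   # e.g. candidate calibration adjustments to the energy model

ALPHA, GAMMA = 0.1, 0.95  # assumed learning rate and discount factor

def q_update(Q, s, a, r, s_next):
    """Standard data-driven Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + GAMMA * Q[s_next].max()
    Q[s, a] += ALPHA * (td_target - Q[s, a])
    return Q

Q = np.zeros((N_STATES, N_ACTIONS))
# One illustrative transition: the reward is the negative absolute prediction error.
Q = q_update(Q, s=2, a=1, r=-0.05, s_next=3)
```

In this sketch, repeatedly applying `q_update` over logged driving transitions would drive the Q-values toward calibration choices that minimize prediction error, which is one plausible reading of the "active evolution" described in the abstract.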
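The scene-aware prioritized experience replay can be sketched as follows. The priority rule used here (TD-error magnitude raised to an exponent and boosted by a fixed weight for critical scenes such as sudden slope changes or hard acceleration/braking) is an illustrative assumption; the paper's exact scheme is not given in the abstract:

```python
import random

class ScenePrioritizedReplay:
    """Replay buffer that samples transitions with probability proportional
    to |TD error|, boosted by a scene weight for critical driving conditions."""

    def __init__(self, capacity=1000, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # priority exponent (assumed value)
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error, critical_scene=False):
        # Assumed rule: critical scenes (slope jump, hard accel/brake) get 2x priority.
        scene_weight = 2.0 if critical_scene else 1.0
        p = (abs(td_error) + 1e-6) ** self.alpha * scene_weight
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(p)

    def sample(self, k):
        # Draw k transitions; higher-priority (critical) ones appear more often.
        return random.choices(self.buffer, weights=self.priorities, k=k)

buf = ScenePrioritizedReplay()
buf.add(("s0", "a0", -0.01, "s1"), td_error=0.01, critical_scene=False)
buf.add(("s1", "a1", -0.30, "s2"), td_error=0.30, critical_scene=True)
batch = buf.sample(4)
```

Because the critical-scene transition carries both a larger TD error and the scene boost, it dominates the sampled batches, which matches the abstract's goal of focusing learning on key operating conditions.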
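The reported evaluation metrics (MAE, RMSE, R²) follow their standard definitions, which can be computed as below; the sample values are illustrative only and are not data from the paper:

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Illustrative energy-consumption values (e.g. kWh/100 km), not from the paper.
y_true = [15.0, 15.5, 16.2, 14.8]
y_pred = [15.02, 15.48, 16.23, 14.79]
```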

History
  • Received: 2025-10-11
  • Revised: 2026-02-12
  • Accepted: 2026-02-13
  • Published online:
  • Publication date: