An infrared and visible image fusion method based on deep convolutional feature extraction
Authors: Pang Zhongxiang, Liu Guihua, Chen Chunmei, et al.
Affiliations:

1. College of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China; 2. Robot Technology Used for Special Environment Key Laboratory of Sichuan Province, Mianyang 621010, China

Corresponding author E-mail: liuguihua@swust.edu.cn

CLC number: TP391

Funding:

National Natural Science Foundation of China (11602292); Science and Technology Support Program of Sichuan Province (2021YFG0380); Sichuan Provincial Health and Family Planning Scientific Research Project (17PJ207); project commissioned by Guangzhou Power Supply Bureau of Guangdong Power Grid Co., Ltd. (080044KK52190002).


Abstract:

Most current infrared and visible image fusion algorithms decompose the source images during fusion, which easily leads to blurred details and loss of salient targets in the fused image. To address this problem, an infrared and visible image fusion method based on deep convolutional feature extraction is proposed. First, the feature extraction performance of EfficientNet is analysed using transfer learning theory, and seven feature extraction modules are selected. The source images are then fed directly into these modules to extract salient features. Next, channel normalization and an averaging operator are applied to obtain activity level maps, and a fusion rule combining Softmax and up-sampling produces the fusion weights, which are convolved with the source images to generate seven candidate fused images. Finally, the pixel-wise maximum of the candidate fused images is taken as the final reconstructed fused image. All experiments are conducted on public datasets, and the method is compared with classical traditional and deep learning approaches. Both subjective and objective results show that the proposed method effectively fuses the important information in the infrared and visible images, highlights detail textures in the fused image, and achieves better visual quality with fewer image artefacts and less artificial noise.
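
A minimal PyTorch sketch of the pipeline outlined in the abstract is given below. It assumes a torchvision EfficientNet-B0 backbone, an L1-style channel normalization, and element-wise weighting of the sources by the up-sampled weight maps; the function names, the module split, and these modelling choices are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code) of the fusion pipeline described
# in the abstract: EfficientNet feature extraction -> activity level maps ->
# Softmax + up-sampling weights -> candidate fused images -> pixel-wise maximum.
import torch
import torch.nn.functional as F
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights  # assumed backbone variant


def extract_modules(num_blocks: int = 7):
    """Split a pre-trained EfficientNet into 7 nested feature-extraction modules."""
    backbone = efficientnet_b0(weights=EfficientNet_B0_Weights.DEFAULT).features.eval()
    return [backbone[: i + 1] for i in range(num_blocks)]


def activity_level_map(feat: torch.Tensor) -> torch.Tensor:
    """Channel normalization followed by an averaging operator (one plausible form)."""
    norm = feat / (feat.abs().sum(dim=1, keepdim=True) + 1e-8)  # channel normalization
    return norm.mean(dim=1, keepdim=True)                       # average over channels


def fuse(ir: torch.Tensor, vis: torch.Tensor, modules) -> torch.Tensor:
    """ir, vis: registered single-channel images shaped (1, 1, H, W) with values in [0, 1]."""
    h, w = ir.shape[-2:]
    candidates = []
    for module in modules:
        with torch.no_grad():
            # EfficientNet expects 3-channel input, so the grey image is repeated.
            f_ir = module(ir.repeat(1, 3, 1, 1))
            f_vis = module(vis.repeat(1, 3, 1, 1))
        a_ir, a_vis = activity_level_map(f_ir), activity_level_map(f_vis)
        # Softmax across the two sources yields per-pixel weights that sum to one.
        w_ir, w_vis = torch.softmax(torch.cat([a_ir, a_vis], dim=1), dim=1).split(1, dim=1)
        # Up-sample the coarse weight maps back to the source resolution.
        w_ir = F.interpolate(w_ir, size=(h, w), mode="bilinear", align_corners=False)
        w_vis = F.interpolate(w_vis, size=(h, w), mode="bilinear", align_corners=False)
        candidates.append(w_ir * ir + w_vis * vis)  # one candidate fused image per module
    # Final reconstruction: pixel-wise maximum over the candidate fused images.
    return torch.stack(candidates, dim=0).max(dim=0).values
```

Because the Softmax is taken across the two sources, the infrared and visible weights sum to one at every pixel, so each of the seven candidates is a convex combination of the inputs before the final pixel-wise maximum selects the stronger response.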

Cite this article:

Pang Z X, Liu G H, Chen C M, et al. An infrared and visible image fusion method based on deep convolutional feature extraction[J]. Control and Decision, 2024, 39(3): 910-918.

History
  • Online publication date: 2024-02-25
  • Publication date: 2024-03-20