An Infrared and Visible Image Fusion Method Based on Deep Convolutional Feature Extraction

Author:

Affiliation:

1. School of Information Engineering, Southwest University of Science and Technology; 2. Robot Technology Used for Special Environment Key Laboratory of Sichuan Province

Author biography:

Corresponding author:

CLC number: TP391

Fund projects:

National Natural Science Foundation of China: research on online 3D ice-shape measurement during the icing growth process (No. 11602292); Sichuan Science and Technology Support Program: SLAM mapping with autonomous navigation and obstacle avoidance for intelligent AGVs (No. 2021YFG0380); Sichuan Health and Family Planning research project (No. 17PJ207); project commissioned by the Guangzhou Power Supply Bureau of Guangdong Power Grid Co., Ltd. (No. 080044KK52190002).



Abstract:

Most existing infrared and visible image fusion algorithms decompose the source images during fusion, which often blurs fine details and loses salient targets in the fused image. To address this problem, this paper proposes an infrared and visible image fusion method based on deep convolutional feature extraction. First, the feature extraction performance of EfficientNet is analysed using transfer learning, and seven feature extraction modules are selected. Second, the source images are fed directly into these modules to extract salient features. Channel normalization and an averaging operator are then applied to obtain the activity level (saliency) maps, and a fusion rule combining Softmax and up-sampling yields the fusion weights, which are convolved with the source images to generate seven candidate fused images. Finally, the pixel-wise maximum of the candidate fused images is taken as the reconstructed fusion result. All experiments are conducted on public datasets, and the method is compared with classical traditional and deep-learning approaches; both subjective and objective results show that the proposed method effectively fuses the important information of infrared and visible images, highlights detail textures, and delivers better visual quality with fewer image artefacts and less artificial noise.
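To make the pipeline in the abstract concrete, here is a minimal sketch in PyTorch. It assumes torchvision's pretrained EfficientNet-B0 as the backbone; the block indices used for the seven feature-extraction modules, the L1 channel normalization, the bilinear up-sampling, and the element-wise weighting that stands in for the abstract's "convolution" of the weights with the source images are all illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the fusion pipeline described in the abstract (assumptions,
# not the authors' exact implementation). Requires torch and torchvision >= 0.13.
import torch
import torch.nn.functional as F
from torchvision import models


def fusion_weights(feat_ir, feat_vis, out_size):
    """Channel-normalise each feature map, average over channels to form a
    saliency (activity level) map, apply Softmax across the two maps, and
    up-sample the weights to the source-image resolution."""
    maps = []
    for feat in (feat_ir, feat_vis):
        feat = F.normalize(feat, p=1, dim=1)             # channel normalization (L1, assumed)
        maps.append(feat.mean(dim=1, keepdim=True))      # averaging operator -> saliency map
    weights = torch.softmax(torch.cat(maps, dim=1), dim=1)       # (B, 2, h, w)
    return F.interpolate(weights, size=out_size,
                         mode="bilinear", align_corners=False)   # up-sampling


def fuse(ir, vis, n_blocks=7):
    """ir, vis: (B, 3, H, W) tensors in [0, 1]; single-channel images are
    assumed to be replicated to three channels before calling."""
    backbone = models.efficientnet_b0(
        weights=models.EfficientNet_B0_Weights.DEFAULT).features.eval()
    candidates = []
    with torch.no_grad():
        f_ir, f_vis = ir, vis
        for block in backbone[:n_blocks]:    # seven feature-extraction modules (assumed indices)
            f_ir, f_vis = block(f_ir), block(f_vis)
            w = fusion_weights(f_ir, f_vis, ir.shape[-2:])
            # Element-wise weighting stands in for the abstract's "convolution"
            # of the fusion weights with the source images.
            candidates.append(w[:, 0:1] * ir + w[:, 1:2] * vis)
    # Pixel-wise maximum over the seven candidate fused images.
    return torch.amax(torch.stack(candidates, dim=0), dim=0)
```

For a pre-registered image pair replicated to three channels, e.g. `fuse(ir, vis)` with two `(1, 3, 256, 256)` tensors, the function returns one fused image of the same spatial size.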

History
  • Received: 2022-04-26
  • Last revised: 2023-05-04
  • Accepted: 2022-11-02
  • Published online: 2022-11-25
  • Publication date: