A deep learning based approach for image recovery in harsh battlefield environments

Authors: Sun C M, Chen J X, Pei D X, et al.

Affiliation: 1. State Key Laboratory of Dynamic Measurement Technology, North University of China, Taiyuan 030051, China; 2. School of Electrical and Control Engineering, North University of China, Taiyuan 030051, China

Corresponding author e-mail: sun_c_m@163.com

CLC number: TP183

Fund projects: Young Scientists Project of the National Key R&D Program of China (2022YFC2905700); National Key R&D Program of China (2022YFB3205800); Scientific and Technological Innovation Project of Higher Education Institutions in Shanxi Province (2020L0294); General Project of the Fundamental Research Program of Shanxi Province (202203021221106).




    Abstract:

To achieve effective recovery of degraded images in harsh battlefield environments and to reduce the interference of environmental factors with battlefield situational awareness, a new end-to-end image recovery method, the gated sampling network (GSNet), is constructed. The network adopts an encoder block-decoder block structure as its basic architecture, uses CNNs and gated convolutions as the encoding and decoding mechanisms, uses a squeeze-and-excitation network to connect the encoder and decoder blocks, recalibrates the importance of higher-order information to separate target features from background features, and adopts a channel granularity factor compression method as its lightweighting strategy, so as to achieve rapid recovery of images degraded by harsh battlefield environments. Experimental results show that the GSNet model achieves a PSNR of 19.35 dB and an SSIM of 0.724, outperforming the compared mainstream image recovery algorithms in both objective metrics and subjective visual quality. The lightweight GSNet model, while still yielding slight improvements in PSNR and SSIM, reduces the number of parameters, FLOPs and single-image processing time by 56.6%, 54.6% and 55.56%, respectively.
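
The components named above are standard building blocks. Below is a minimal PyTorch sketch, for illustration only, of how a gated convolution, a squeeze-and-excitation (SE) connection and a channel granularity (width) factor could be assembled into a small encoder-decoder restoration network; the layer sizes, block counts and the name GSNetSketch are assumptions and do not reproduce the authors' exact GSNet.

    # Illustrative sketch of the blocks named in the abstract (gated convolution,
    # squeeze-and-excitation connection, channel granularity/width factor).
    # All sizes and the class names are assumptions, not the published GSNet.
    import torch
    import torch.nn as nn


    class GatedConv(nn.Module):
        """Gated convolution: a feature branch modulated by a learned soft mask."""
        def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
            super().__init__()
            self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
            self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)

        def forward(self, x):
            return torch.relu(self.feature(x)) * torch.sigmoid(self.gate(x))


    class SEBlock(nn.Module):
        """Squeeze-and-excitation: re-weights channels by their global importance."""
        def __init__(self, ch, reduction=8):
            super().__init__()
            self.fc = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),            # squeeze: global average pooling
                nn.Conv2d(ch, ch // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch // reduction, ch, 1),
                nn.Sigmoid(),                        # excitation: per-channel weights
            )

        def forward(self, x):
            return x * self.fc(x)


    class GSNetSketch(nn.Module):
        """Encoder (plain CNN) -> SE connection -> decoder (gated convolution)."""
        def __init__(self, base_ch=64, width_factor=1.0):
            super().__init__()
            ch = max(8, int(base_ch * width_factor))   # channel granularity factor
            self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 3, 1, 1), nn.ReLU(inplace=True))
            self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, 2, 1), nn.ReLU(inplace=True))
            self.se = SEBlock(ch * 2)                  # connection between blocks
            self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
            self.dec1 = GatedConv(ch * 2, ch)
            self.dec2 = nn.Conv2d(ch, 3, 3, 1, 1)      # restored image

        def forward(self, x):
            f1 = self.enc1(x)
            f2 = self.enc2(f1)
            f2 = self.se(f2)                           # recalibrate channel importance
            out = self.dec1(self.up(f2)) + f1          # skip connection from encoder
            return torch.clamp(self.dec2(out) + x, 0.0, 1.0)  # residual restoration


    if __name__ == "__main__":
        model = GSNetSketch(width_factor=0.5)          # compressed (lightweight) variant
        restored = model(torch.rand(1, 3, 256, 256))   # degraded image in [0, 1]
        print(restored.shape)                          # torch.Size([1, 3, 256, 256])

Shrinking width_factor mirrors the channel-compression idea used for the lightweight variant: fewer channels per block directly reduce parameters, FLOPs and per-image latency.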

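The objective metrics quoted above, PSNR (in dB) and SSIM, can be reproduced for a restored/reference image pair with scikit-image, as in the short sketch below; the file names are placeholders.

    # Compute PSNR (dB) and SSIM between a restored image and its clean reference.
    # File names are placeholders for illustration.
    import numpy as np
    from skimage import io
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    reference = io.imread("clean.png").astype(np.float64) / 255.0    # ground truth
    restored = io.imread("restored.png").astype(np.float64) / 255.0  # network output

    psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
    ssim = structural_similarity(reference, restored, data_range=1.0, channel_axis=-1)
    print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")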
Cite this article:

Sun C M, Chen J X, Pei D X, et al. A deep learning based approach for image recovery in harsh battlefield environments[J]. Control and Decision, 2024, 39(4): 1297-1304.

History:
  • Online publication date: 2024-03-15
  • Publication date: 2024-04-20