Multi-domain feature fusion and motor imagery decoding based on dynamic convolution and attention mechanism
CLC number: TN911.7; TP202+.7; R318

Fund project: National Natural Science Foundation of China (62173010).




    Abstract:

    Motor imagery (MI) decoding based on convolutional neural networks (CNNs) is a research hotspot in intelligent rehabilitation. However, current decoding methods struggle to dynamically and deeply mine the temporal-spectral-spatial features of electroencephalogram (EEG) signals, which vary across subjects, and this limits decoding performance. We therefore propose a CNN model based on dynamic convolution and an attention mechanism (DCAMNet). First, a filter bank divides the raw EEG signal of each channel into multiple frequency bands, which are fed into the feature extraction module in parallel. Then, a dynamic convolution block computes attention weights on the fly to capture valuable temporal-spectral information, which subsequently passes through a spatial convolution block and a temporal attention block to learn spatial information and temporal correlations, achieving subject-specific temporal-spectral-spatial feature extraction and fusion. Finally, a classification module completes MI decoding. In four-class ten-fold cross-validation experiments on nine subjects from the public BCI Competition IV Dataset 2a, the model achieves an average accuracy of 79.17% and an $F_1$ score of 0.788. The results show that DCAMNet can adaptively attend to and enhance subject-specific features, realizes multi-domain feature extraction and fusion, and offers advantages in decoding accuracy and generalization over current popular methods.
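The first two stages of the pipeline described above (filter-bank band division, then input-conditioned dynamic convolution) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the band edges, kernel count, and the fixed random projection that produces the attention logits are all placeholder assumptions standing in for parameters that DCAMNet learns end to end.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(eeg, fs=250.0, bands=((4, 8), (8, 13), (13, 32))):
    """Split each raw EEG channel into sub-bands with zero-phase
    Butterworth band-pass filters. `fs` and `bands` are illustrative."""
    sub = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        sub.append(filtfilt(b, a, eeg, axis=-1))
    return np.stack(sub)  # (n_bands, n_channels, n_samples)

def dynamic_conv1d(x, kernels, proj, temperature=4.0):
    """Dynamic convolution over time: a softmax attention, conditioned on
    the input, weights K candidate kernels; the kernels are aggregated
    into one kernel, which is then convolved with every channel."""
    pooled = x.mean(axis=-1)            # (n_channels,) global summary of the input
    logits = proj @ pooled              # (K,) one score per candidate kernel
    w = np.exp((logits - logits.max()) / temperature)
    w /= w.sum()                        # attention weights over the K kernels
    agg = w @ kernels                   # (kernel_len,) input-dependent kernel
    return np.stack([np.convolve(ch, agg, mode="same") for ch in x])
```

In the real model, `proj` and `kernels` are trainable and the temperature is typically annealed during training; the point of the sketch is only that the effective kernel changes per input, which is how the network adapts to inter-subject variability.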

Cite this article:

Zhang M C, Li M A. Multi-domain feature fusion and motor imagery decoding based on dynamic convolution and attention mechanism[J]. Control and Decision, 2025, 40(6): 1873-1882.

History
  • Received: 2024-08-08
  • Online: 2025-04-30
  • Published: 2025-06-20