Multi-object Tracking Based on Intra-frame Relationship Modeling and Self-attention Fusion Mechanism

Author:

Affiliation: Nanjing University of Science and Technology

Author biography:

Corresponding author:

CLC number: TP391

Fund project: The National Natural Science Foundation of China (General Program, Key Program, Major Research Plan)

Abstract:

Multi-object tracking has important application value in video surveillance. With the development of convolutional neural networks (CNN) and especially graph neural networks (GNN), research on multi-object tracking has made great progress, and GNN-based methods show more stable tracking performance because they model the relationships between targets and trajectories. However, existing GNN-based multi-object tracking methods only build a global relationship model between two consecutive frames; they ignore the interactions between a target and the other targets around it in the same frame and do not establish a suitable local relationship model within the frame. To address this problem, we propose a multi-object tracking method based on intra-frame relationship modeling and self-attention fusion (INAF-GNN). It builds an intra-frame graph between each target and its neighboring targets, an inter-frame graph between targets and trajectories, and a feature fusion module based on an attention mechanism that integrates the intra-frame and inter-frame features. To validate the effectiveness of the model, we conduct extensive experiments on the MotChallenge benchmark datasets. Compared with several GNN-based methods, our approach improves MOTA by 1.9% and IDF1 by 3.6%, demonstrating its effectiveness.
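The abstract outlines three components: an intra-frame graph linking each detection to its neighboring detections, an inter-frame graph linking detections to trajectories, and an attention-based module that fuses the two feature streams. The sketch below illustrates the general idea in PyTorch; the helper names (`intra_frame_knn_graph`, `AttentionFeatureFusion`), the k-nearest-neighbor graph construction, the feature dimension, and the stack-then-attend fusion layout are assumptions for illustration only, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


def intra_frame_knn_graph(boxes: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Connect each detection to its k nearest neighbours within the same frame,
    measured by bounding-box centre distance. Returns an edge list of shape (2, N*k).
    Both k and the distance measure are illustrative assumptions."""
    centers = torch.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                           (boxes[:, 1] + boxes[:, 3]) / 2], dim=1)  # (N, 2) box centres
    dist = torch.cdist(centers, centers)                             # (N, N) pairwise distances
    dist.fill_diagonal_(float("inf"))                                # exclude self-loops
    nbrs = dist.topk(k, largest=False).indices                       # (N, k) nearest neighbours
    src = torch.arange(boxes.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.flatten()])                        # (2, N*k) edge index


class AttentionFeatureFusion(nn.Module):
    """Fuse intra-frame and inter-frame node embeddings with multi-head
    self-attention over the two feature sources of each target."""

    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)  # merge the two attended tokens

    def forward(self, intra_feat: torch.Tensor, inter_feat: torch.Tensor) -> torch.Tensor:
        # intra_feat, inter_feat: (N, dim) embeddings from the intra-frame and
        # inter-frame graph branches for the same N targets.
        tokens = torch.stack([intra_feat, inter_feat], dim=1)  # (N, 2, dim)
        attended, _ = self.attn(tokens, tokens, tokens)        # self-attention across the pair
        return self.proj(attended.flatten(start_dim=1))        # (N, dim) fused feature


if __name__ == "__main__":
    xy = torch.rand(5, 2) * 100
    wh = torch.rand(5, 2) * 20 + 5
    boxes = torch.cat([xy, xy + wh], dim=1)           # 5 detections as (x1, y1, x2, y2)
    edges = intra_frame_knn_graph(boxes, k=3)         # intra-frame neighbour edges
    fusion = AttentionFeatureFusion(dim=128)
    fused = fusion(torch.randn(5, 128), torch.randn(5, 128))
    print(edges.shape, fused.shape)                   # torch.Size([2, 15]) torch.Size([5, 128])
```

In a complete tracker, the intra-frame and inter-frame embeddings fed to the fusion module would come from separate GNN message-passing branches built on these two graphs; that pipeline is only implied by the abstract and is not spelled out here.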

History
  • Received: 2021-07-07
  • Last revised: 2022-06-27
  • Accepted: 2021-11-26
  • Published online: 2022-01-02
  • Publication date: