An Abstractive Text Summarization Method Combining BERT and Convolutional Gating Unit
Author:
Affiliation:

1. Chongqing University of Posts and Telecommunications; 2. Deng WB: Unit 78111, People's Liberation Army of China

Author Biography:

Corresponding Author:

CLC Number:

TP391

Fund Project:

National Key Research and Development Program of China (Program No. 2018YFC0832100, Project No. 2018YFC0832102); the Key Program of the National Natural Science Foundation of China (61936001); the National Natural Science Foundation of China (61876027); and the Innovation Research Group Project of the Natural Science Foundation of Chongqing (cstc2019jcyj-cxttX0002).

    Abstract:

    The recurrent neural network (RNN) model combined with an attention mechanism is currently the mainstream approach to abstractive text summarization; it adopts a sequence-to-sequence framework based on deep learning. However, RNN-based abstractive summarization models suffer from limited parallelism, low efficiency, and difficulty modeling long-term dependencies, and the summaries they generate exhibit low accuracy and a high repetition rate. To overcome these problems, this paper proposes an abstractive summarization method that combines the BERT pre-trained model with a convolutional gating unit. The method is based on an improved Transformer model. In the encoder stage, it makes full use of the large-scale corpus on which BERT is pre-trained, replacing the RNN for extracting contextual representations of the text, and then applies a convolutional gating unit to filter the encoder output and select the key content of the source text. In the decoder stage, three different Transformer variants are designed to explore more effective ways of fusing the BERT pre-trained model with the convolutional gating unit and thereby improve summarization performance. ROUGE is used as the evaluation metric, and the proposed method is compared with current mainstream abstractive summarization methods on the LCSTS Chinese dataset and the CNN/Daily Mail English dataset. The results show that the proposed method improves the accuracy and readability of the generated summaries.
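
To make the filtering step concrete, the sketch below shows one way a convolutional gating unit could be applied to encoder outputs, assuming a GLU-style gated convolution (a content branch multiplied element-wise by a sigmoid gate) with a residual connection. The class name ConvGatingUnit, the kernel size, and the normalization choice are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn


class ConvGatingUnit(nn.Module):
    """GLU-style gated 1-D convolution over encoder outputs (illustrative sketch)."""

    def __init__(self, d_model: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2  # "same" padding keeps the sequence length unchanged
        self.conv_content = nn.Conv1d(d_model, d_model, kernel_size, padding=padding)
        self.conv_gate = nn.Conv1d(d_model, d_model, kernel_size, padding=padding)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); Conv1d expects (batch, channels, seq_len)
        h = x.transpose(1, 2)
        # Content branch scaled by a sigmoid gate in (0, 1): low gate values
        # suppress positions judged unimportant, acting as an information filter.
        gated = self.conv_content(h) * torch.sigmoid(self.conv_gate(h))
        gated = gated.transpose(1, 2)
        # Residual connection plus layer norm so the original signal is preserved.
        return self.norm(x + gated)
```

Because the gate takes values in (0, 1), positions the gate scores low contribute little to the filtered representation, which is one plausible reading of "filter the encoder output and select the key content of the source text".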

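The next sketch shows one plausible way to wire together the components the abstract describes: a pre-trained BERT encoder, the ConvGatingUnit from the sketch above, and a standard Transformer decoder. It relies on the Hugging Face transformers package for BERT and torch.nn for the decoder; the class name BertConvGateSummarizer, the layer counts, the masking details, and the omission of decoder positional encodings are simplifying assumptions, and the paper's three decoder variants (which differ in how BERT and the gating unit are fused with the decoder) are not distinguished here.

```python
import torch
import torch.nn as nn
from transformers import BertModel


class BertConvGateSummarizer(nn.Module):
    """Hypothetical composition: BERT encoder -> ConvGatingUnit -> Transformer decoder."""

    def __init__(self, bert_name: str = "bert-base-chinese",
                 num_decoder_layers: int = 6, nhead: int = 8, kernel_size: int = 3):
        super().__init__()
        self.encoder = BertModel.from_pretrained(bert_name)
        d_model = self.encoder.config.hidden_size
        self.gate = ConvGatingUnit(d_model, kernel_size)  # from the sketch above
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_decoder_layers)
        # Decoder-side token embedding; positional encodings are omitted for brevity.
        self.embed = nn.Embedding(self.encoder.config.vocab_size, d_model)
        self.out_proj = nn.Linear(d_model, self.encoder.config.vocab_size)

    def forward(self, src_ids, src_mask, tgt_ids):
        # Encode the source with BERT, then filter the representations with the gate.
        memory = self.encoder(input_ids=src_ids, attention_mask=src_mask).last_hidden_state
        memory = self.gate(memory)
        # Causal mask: each summary position may only attend to earlier positions.
        tgt_len = tgt_ids.size(1)
        causal = torch.triu(torch.ones(tgt_len, tgt_len, dtype=torch.bool,
                                       device=tgt_ids.device), diagonal=1)
        dec = self.decoder(self.embed(tgt_ids), memory,
                           tgt_mask=causal,
                           memory_key_padding_mask=(src_mask == 0))
        return self.out_proj(dec)  # per-token vocabulary logits
```

During training, tgt_ids would be the right-shifted reference summary and the output logits would feed a cross-entropy loss; at inference time, greedy or beam-search decoding would generate the summary token by token.
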
History
  • Received: 2021-03-25
  • Revised: 2022-04-21
  • Accepted: 2021-09-22
  • Published online: 2021-10-01
  • Publication date: