Quantitative evaluation method for interpretability of XAI based on surrogate model

Author:

Affiliation:

1. Beijing University of Posts and Telecommunications; 2. Beijing Aerospace Institute for Metrology and Measurement Technology; 3. Key Laboratory of Artificial Intelligence Measurement and Standards for State Market Regulation

CLC number: TP18

Fund project:

Abstract:

Explainable artificial intelligence (XAI) has developed rapidly in recent years, and many interpretation techniques for artificial intelligence models have emerged, but quantitative methods for evaluating the interpretability of XAI are still lacking. Most existing evaluation methods rely on user experiments, which are time-consuming and costly. In this paper, we propose a quantitative interpretability evaluation approach for surrogate-model-based XAI. First, we design indices for this kind of XAI and give their computation methods, constructing an index system of 10 quantitative indices that evaluates XAI interpretability along five dimensions: consistency, user comprehension, causality, effectiveness, and stability. For a dimension with multiple indices, a comprehensive evaluation model combining the entropy weight method with TOPSIS is established to evaluate interpretability on that dimension. The proposed approach is applied to evaluate the interpretability of six XAI methods based on rule surrogate models. Experimental results show that the approach can reveal each XAI method's interpretability level on different dimensions, so that users can choose a suitable XAI according to their needs.
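The per-dimension comprehensive evaluation combines the entropy weight method (objective weights from index dispersion) with TOPSIS (ranking by closeness to an ideal solution). A minimal sketch of that standard combination follows; the index values are hypothetical, not taken from the paper's experiments:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: indices with more dispersion get larger weights.
    X: (m alternatives) x (n indices), positive benefit-type values."""
    m, _ = X.shape
    P = X / X.sum(axis=0)                       # column-wise proportions
    logP = np.where(P > 0, np.log(P), 0.0)      # convention: 0 * log(0) = 0
    e = -(P * logP).sum(axis=0) / np.log(m)     # entropy of each index
    d = 1.0 - e                                 # degree of diversification
    return d / d.sum()                          # weights sum to 1

def topsis(X, w):
    """TOPSIS closeness scores in [0, 1]; higher = closer to the ideal."""
    V = w * X / np.sqrt((X ** 2).sum(axis=0))   # weighted vector normalization
    best, worst = V.max(axis=0), V.min(axis=0)  # positive / negative ideals
    d_plus = np.linalg.norm(V - best, axis=1)
    d_minus = np.linalg.norm(V - worst, axis=1)
    return d_minus / (d_plus + d_minus)

# Hypothetical scores of 3 XAI methods on 2 indices of one dimension
X = np.array([[0.9, 0.7],
              [0.6, 0.8],
              [0.3, 0.4]])
w = entropy_weights(X)
scores = topsis(X, w)
print("weights:", w)
print("closeness:", scores)
```

A method that is worst on every index (the third row) coincides with the negative ideal, so its closeness score is exactly 0; the final per-dimension ranking follows the closeness scores.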

History
  • Received: 2022-04-12
  • Revised: 2023-03-27
  • Accepted: 2022-10-10
  • Online: 2022-11-09
  • Published: