Perturbation Effect: A Metric to Counter Misleading Validation of Feature Attribution

Ilija Simic, Vedran Sabol, Eduardo Enrique Veas

Research output: Chapter in Book/Report/Conference proceeding, Conference paper, peer-reviewed

Abstract

This paper provides evidence indicating that the most commonly used metric for validating feature attribution methods in eXplainable AI (XAI) is misleading when applied to time series data. To evaluate whether an XAI method attributes importance to relevant features, those features are systematically perturbed while measuring the impact on the performance of the classifier. The assumption is that a drastic performance reduction with increasing perturbation of relevant features indicates that these are indeed relevant. We demonstrate empirically that this assumption is incomplete unless the metrics used also account for low-relevance features. We introduce a novel metric, the Perturbation Effect Size, and demonstrate how it complements existing metrics to offer a more faithful assessment of importance attribution. Finally, we contribute a comprehensive evaluation of attribution methods on time series data, considering the influence of perturbation methods and region size selection.
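The evaluation scheme the abstract describes — perturbing features in order of attributed relevance and measuring the classifier's performance drop — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name `perturbation_check`, the zero-value perturbation, and the fraction parameter are assumptions, and the returned pair of accuracy drops is a simpler summary than the paper's Perturbation Effect Size metric. The key idea it mirrors is that perturbing low-relevance features must be checked alongside high-relevance ones.

```python
import numpy as np

def perturbation_check(model, X, y, attributions, frac=0.1):
    """Perturb the most- and least-relevant time steps according to the
    given attribution scores and compare the resulting accuracy drops.

    model        : callable mapping an (n, t) array to predicted labels (n,)
    X            : (n, t) array of univariate time series
    y            : (n,) true labels
    attributions : (n, t) per-time-step relevance scores
    frac         : fraction of time steps to perturb per sample
    """
    n_steps = X.shape[1]
    k = max(1, int(frac * n_steps))

    def perturb(data, idx):
        # Zero out the selected time steps per sample; replacing with the
        # series mean or noise are common alternative perturbation methods.
        out = data.copy()
        for i in range(data.shape[0]):
            out[i, idx[i]] = 0.0
        return out

    order = np.argsort(attributions, axis=1)       # ascending relevance
    low_idx, high_idx = order[:, :k], order[:, -k:]

    def acc(data):
        return float(np.mean(model(data) == y))

    base = acc(X)
    drop_high = base - acc(perturb(X, high_idx))   # large drop expected if attributions are faithful
    drop_low = base - acc(perturb(X, low_idx))     # should stay small for faithful attributions
    return drop_high, drop_low
```

A large `drop_high` alone is exactly the signal the paper argues can mislead; comparing it against `drop_low` is the kind of complementary check the Perturbation Effect Size formalizes.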
Original language: English
Title of host publication: CIKM '22: Proceedings of the 31st ACM International Conference on Information & Knowledge Management
Place of Publication: New York, NY
Publisher: Association for Computing Machinery
Pages: 1798-1807
ISBN (Electronic): 978-1-4503-9236-5
DOIs
Publication status: Published - 17 Oct 2022
Event: 31st ACM International Conference on Information and Knowledge Management, CIKM 2022 - Atlanta, United States
Duration: 17 Oct 2022 - 21 Oct 2022

Conference

Conference: 31st ACM International Conference on Information and Knowledge Management
Abbreviated title: CIKM '22
Country/Territory: United States
City: Atlanta
Period: 17/10/22 - 21/10/22

Keywords

  • deep learning
  • explainable AI
  • trustworthy AI
  • feature attribution
