State-of-the-Art Explainability Methods with Focus on Visual Analytics Showcased by Glioma Classification

Milot Gashi, Matej Vukovic, Nikolina Jekic, Stefan Thalmann, Andreas Holzinger, Claire Jean-Quartier*, Fleur Jeanquartier*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This study reflects on a list of libraries that provide decision support for AI models. The goal is to assist practitioners in finding suitable libraries that support visual explainability and interpretability of their AI model's output. Especially in sensitive application areas such as medicine, this is crucial for understanding the decision-making process and for safe application. We therefore use a glioma classification model's reasoning as the underlying case. We present a comparison of 11 identified Python libraries that complement the better-known SHAP and LIME libraries for visualizing explainability. The libraries were selected based on attributes such as being implemented in Python, supporting visual analysis, thorough documentation, and active maintenance. We showcase and compare four libraries for global interpretations (ELI5, Dalex, InterpretML, and SHAP) and three libraries for local interpretations (Lime, Dalex, and InterpretML). As a use case, we process a combination of openly available glioma data sets, comprising 1276 samples and 252 attributes, to study feature importance when classifying the grade II, III, and IV brain tumor subtypes glioblastoma multiforme (GBM), anaplastic astrocytoma (AASTR), and oligodendroglioma (ODG). The exemplified model confirms known variations, and studying local explainability contributes to revealing less known variations as putative biomarkers. The full comparison spreadsheet and implementation examples can be found in the appendix.
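To illustrate the kind of global feature-importance analysis the abstract describes, here is a minimal, hypothetical sketch using scikit-learn's permutation importance on synthetic data. It is a stand-in for the paper's actual workflow (which uses SHAP, ELI5, Dalex, and InterpretML on 1276 glioma samples with 252 attributes); the data, model, and feature indices below are illustrative assumptions, not the study's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a genomic attribute matrix (the paper uses
# 1276 samples x 252 attributes; we use 300 x 10 for brevity).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
# Binary stand-in label driven by features 0 and 3 (e.g. GBM vs. non-GBM).
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: the drop in held-out accuracy when one
# feature column is shuffled, averaged over repeats.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print("Features ranked by importance:", ranking)
```

Libraries such as SHAP then go beyond this single global ranking by attributing each individual prediction to its features, which is the local-explanation view the study compares across Lime, Dalex, and InterpretML.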
Original language: English
Pages (from-to): 139-158
Journal: BioMedInformatics
Volume: 2
Issue number: 1
DOIs
Publication status: Published - 19 Jan 2022

Keywords

  • Explainable artificial intelligence
  • Visualisation
  • SHAP
  • Feature importance
  • Python
  • Glioma
