Interpretable brain disease classification and relevance-guided deep learning

Christian Tinauer, Stefan Heber, Lukas Pirpamer, Anna Damulina, Reinhold Schmidt, Rudolf Stollberger, Stefan Ropele, Christian Langkammer

Research output: Contribution to journal › Article › peer-review


Deep neural networks are increasingly used for neurological disease classification by MRI, but the networks' decisions are not easily interpretable by humans. Heat mapping by deep Taylor decomposition revealed that (potentially misleading) image features even outside of the brain tissue are crucial for the classifier's decision. We propose a regularization technique to train convolutional neural network (CNN) classifiers utilizing relevance-guided heat maps calculated online during training. The method was applied using T1-weighted MR images from 128 subjects with Alzheimer's disease (mean age = 71.9 ± 8.5 years) and 290 control subjects (mean age = 71.3 ± 6.4 years). The developed relevance-guided framework achieves higher classification accuracies than conventional CNNs, but more importantly, it relies on fewer, yet more relevant and physiologically plausible, voxels within brain tissue. Additionally, preprocessing effects from skull stripping and registration are mitigated. By making the decision mechanisms underlying CNNs interpretable, these results challenge the notion that unprocessed T1-weighted brain MR images in standard CNNs yield higher classification accuracy in Alzheimer's disease than atrophy alone would explain.
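The core idea — adding a penalty on relevance that falls outside brain tissue to the classification loss — can be sketched minimally. The following is an illustrative toy, not the paper's method: a logistic-regression stand-in replaces the CNN, relevance is approximated by gradient × input rather than deep Taylor decomposition, and the brain mask and weighting factor `lam` are assumptions for the example.

```python
import numpy as np

def relevance_guided_loss(x, w, y, brain_mask, lam=0.1):
    """Toy sketch of relevance-guided regularization.

    Classifier: logistic regression with score s = w . x (stand-in
    for a CNN). Relevance: |gradient x input|, a crude surrogate for
    the deep Taylor decomposition heat map. The regularizer penalizes
    relevance assigned to voxels outside the brain mask, steering the
    model toward brain tissue.
    """
    s = float(w @ x)
    p = 1.0 / (1.0 + np.exp(-s))                     # sigmoid output
    ce = -(y * np.log(p) + (1 - y) * np.log(1 - p))  # cross-entropy term
    relevance = np.abs(w * x)                        # per-voxel relevance
    penalty = relevance[~brain_mask].sum()           # relevance off-brain
    return ce + lam * penalty
```

Two weight vectors that produce the same prediction then incur different losses: one concentrating relevance inside the mask is preferred over one relying on off-brain voxels, which is the behavior the online heat-map regularization is meant to enforce.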

Original language: English
Pages (from-to): 20254
Journal: Scientific Reports
Issue number: 1
Publication status: Published - 24 Nov 2022


  • Humans
  • Middle Aged
  • Aged
  • Aged, 80 and over
  • Alzheimer Disease/diagnostic imaging
  • Deep Learning
  • Head
  • Brain/diagnostic imaging
  • Atrophy


